• Type: Bug Report
    • Resolution: Won't Fix
    • Priority: L3 - Default
    • Component: spring-boot

      I'm using camunda.spring.boot.starter.version 3.3.6.

      Camunda creates the tables and initializes the data in the BPM DB.

      I tried to get a process definition by key and got an exception saying two were found.

      I checked the DB, and there are duplicate process names in it.
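
      For reference, a lookup along these lines reproduces the symptom. This is a minimal sketch, not taken from the reporter's application; the class name and process key are placeholders, and the RepositoryService bean is assumed to come from the Camunda Spring Boot starter auto-configuration.

          import org.camunda.bpm.engine.RepositoryService;
          import org.camunda.bpm.engine.repository.ProcessDefinition;
          import org.springframework.stereotype.Component;

          @Component
          public class ProcessDefinitionLookup {

              private final RepositoryService repositoryService;

              public ProcessDefinitionLookup(RepositoryService repositoryService) {
                  this.repositoryService = repositoryService;
              }

              public ProcessDefinition findByKey(String key) {
                  // singleResult() throws a ProcessEngineException as soon as more than
                  // one definition matches the key; adding .latestVersion() is the usual
                  // way to narrow the query down to a single row per key.
                  return repositoryService.createProcessDefinitionQuery()
                          .processDefinitionKey(key)
                          .singleResult();
              }
          }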


            [CAM-11246] Duplicate process name found in DB

            Tobias Metzke-Bernstein added a comment -

            Hi Lee,

            thanks for approaching us with this.
            At first glance, this rather looks like a configuration problem in your application, since all "duplicate" entries are bound to different deployments.
            Since this looks like a usage/configuration issue, I'd like to point you to the forum, which is usually a better place for posting issues like yours.

            I will close this as WON'T FIX for now.
            If this still turns out to be a bug in the engine, feel free to reopen and explain the bug in more detail, especially with regard to your application configuration.

            Note that if you are an enterprise customer, you also have the option of opening a SUPPORT ticket.

            Best,
            Tobias


            Arsene added a comment - edited

            Hi @Tobias Metzke

            I respect the Camunda team's decision. However, I think it would be a good idea to fix this issue. We are using Camunda in a Kubernetes (K8s) environment, so we have a couple of nodes running Spring Boot embedded Camunda while sharing one database. My guess is that when multiple nodes start up and try to initialize Camunda at the same time, we might see this issue.

            From my point of view, this should be an easy fix in the DB schema design. All the Camunda team needs to do is add a unique constraint to the database table schema. This fix can prevent future issues and is also consistent with Camunda's business rule: the Camunda code expects the process definition name to be unique. The fix would ensure that Camunda can safely be used in a K8s-like environment.

             


            Roman Smirnov added a comment -

            Hi Arsene,

            Thanks for your update.

            The engine has a mechanism to avoid duplicate deployment of a process definition. To do this, an engine acquires a pessimistic lock in the database so that only one engine can deploy at a given time. For further details, please have a look at our user guide: https://docs.camunda.org/manual/7.12/user-guide/process-engine/deployments/#deployments-in-a-clustered-scenario
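
            As an illustration of that duplicate check, a deployment issued through the public DeploymentBuilder API could look like the sketch below (the deployment and resource names are placeholders, not taken from this ticket):

                // The engine only creates a new deployment if the resources differ from
                // what is already stored. The check and the version assignment are guarded
                // by an exclusive lock on a row in ACT_GE_PROPERTY, which only helps while
                // the surrounding transaction stays open.
                repositoryService.createDeployment()
                        .name("invoice-deployment")
                        .addClasspathResource("bpmn/invoice.bpmn")
                        .enableDuplicateFiltering(true) // true = redeploy only changed resources
                        .deploy();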

            From the described setup (Spring Boot and multiple nodes), this seems to be a problem with the configured data source and its integration with a transaction manager, not a bug in the Camunda engine.

            You have to make sure that the Camunda data source is managed by a dedicated transaction manager. This transaction manager needs to be configured with the Camunda engine.

            Please check the data source configuration and its transaction manager integration.

            Best,
            Roman
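
            For illustration, a configuration along the lines Roman describes could look like the following sketch. It uses the plain camunda-engine-spring integration rather than the starter's auto-configuration, and the bean and qualifier names (camundaDataSource, camundaTransactionManager) are placeholders for whatever the application actually defines.

                import javax.sql.DataSource;
                import org.camunda.bpm.engine.spring.ProcessEngineFactoryBean;
                import org.camunda.bpm.engine.spring.SpringProcessEngineConfiguration;
                import org.springframework.beans.factory.annotation.Qualifier;
                import org.springframework.context.annotation.Bean;
                import org.springframework.context.annotation.Configuration;
                import org.springframework.jdbc.datasource.DataSourceTransactionManager;
                import org.springframework.transaction.PlatformTransactionManager;

                @Configuration
                public class CamundaEngineConfig {

                    // Dedicated transaction manager bound to the Camunda data source.
                    @Bean
                    public PlatformTransactionManager camundaTransactionManager(
                            @Qualifier("camundaDataSource") DataSource camundaDataSource) {
                        return new DataSourceTransactionManager(camundaDataSource);
                    }

                    // Engine configuration that uses exactly this data source and transaction
                    // manager, so the deployment lock is held until the transaction commits.
                    @Bean
                    public SpringProcessEngineConfiguration processEngineConfiguration(
                            @Qualifier("camundaDataSource") DataSource camundaDataSource,
                            PlatformTransactionManager camundaTransactionManager) {
                        SpringProcessEngineConfiguration config = new SpringProcessEngineConfiguration();
                        config.setDataSource(camundaDataSource);
                        config.setTransactionManager(camundaTransactionManager);
                        config.setDatabaseSchemaUpdate("true");
                        return config;
                    }

                    @Bean
                    public ProcessEngineFactoryBean processEngine(SpringProcessEngineConfiguration configuration) {
                        ProcessEngineFactoryBean factoryBean = new ProcessEngineFactoryBean();
                        factoryBean.setProcessEngineConfiguration(configuration);
                        return factoryBean;
                    }
                }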


            Arsene added a comment -

            Hi Smirnov Roman,

            Thank you for the response.

            There is no dedicated transaction manager configured in my Spring Boot application. We are using the default Spring transaction manager and expecting transactions to be handled at the database level. Is this assumption incorrect?


            Roman Smirnov added a comment -

            A transaction is started by the application and gets committed (or rolled back) by the application, not by the database.

            Have you configured multiple data sources in your application?


            Arsene added a comment -

            Smirnov,

            I do have multiple data sources. However, only one is used to connect to the Camunda database. I understand that the transaction is started by the application and gets committed (or rolled back) by the application; however, the lock is a database-level lock. If the lock is a database lock, then why do we need a dedicated transaction manager? When Camunda tries to acquire an exclusive lock on a row in the ACT_GE_PROPERTY table, this should work without a dedicated transaction manager and prevent the issue.


            Roman Smirnov added a comment - edited

            Because when a data source is not managed by a transaction manager, each submitted SQL statement gets auto-committed. So once the lock is acquired, it is released immediately, because the respective statement is committed directly after submission.

            In addition, you might end up in an inconsistent state if the data source is not managed correctly.

            Again, please make sure that the Camunda data source has a dedicated transaction manager which is used by the Camunda engine. This is a crucial configuration; otherwise, the database will end up in an inconsistent state.

            Best,
            Roman
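
            To make the auto-commit point concrete, the difference can be sketched roughly as follows. This is a hypothetical illustration using Spring's JdbcTemplate and TransactionTemplate; the 'deployment.lock' row and the FOR UPDATE syntax are assumptions about the engine schema and the database dialect, not details confirmed in this ticket.

                import javax.sql.DataSource;
                import org.springframework.jdbc.core.JdbcTemplate;
                import org.springframework.jdbc.datasource.DataSourceTransactionManager;
                import org.springframework.transaction.support.TransactionTemplate;

                public class DeploymentLockDemo {

                    public void demo(DataSource camundaDataSource) {
                        JdbcTemplate jdbc = new JdbcTemplate(camundaDataSource);
                        String lockQuery =
                                "SELECT VALUE_ FROM ACT_GE_PROPERTY WHERE NAME_ = 'deployment.lock' FOR UPDATE";

                        // Outside a managed transaction the statement runs in auto-commit mode:
                        // the row lock is released as soon as the statement completes, so it
                        // cannot serialize concurrent deployments from several nodes.
                        jdbc.queryForObject(lockQuery, String.class);

                        // With a dedicated transaction manager the lock is held until the whole
                        // callback commits, which is what the engine relies on during deployment.
                        TransactionTemplate tx = new TransactionTemplate(
                                new DataSourceTransactionManager(camundaDataSource));
                        tx.execute(status -> {
                            jdbc.queryForObject(lockQuery, String.class);
                            // ... deployment work would happen here; the lock is released on commit.
                            return null;
                        });
                    }
                }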


            Arsene added a comment -

            Smirnov Roman, thank you for the info. 


              Assignee: Unassigned
              Reporter: Lee Arsene
              Votes: 0
              Watchers: 2