[OPT-6592] Object Variable import can cause OOM with big variable values
Type: Bug Report
Resolution: Unresolved
Severity: L3 - Default
Brief summary of the bug. What is it? Where is it?
When importing object variables, Optimize by default flattens these variables into usable subvariables, then stores the original variable string as a separate variable. When customers use large variable values, importing these variables can cause Optimize to block the import or run out of memory (OOM).
It is important to note that the flattening library we use has no configurable maximum depth, so the recursive flattening can produce an unbounded number of variables.
We also save the original variable as a separate variable, which may not be necessary.
Deep flattening also considerably increases the chance that we hit the maximum nested document limit of 10,000. Configuring a smaller depth, combined with no longer saving the original value, would help minimise the risk of this happening.
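For illustration, the behaviour described matches a flattener along the lines of the wnameless json-flattener (assumed here for the example; the ticket does not name the library), and the 10,000 figure presumably corresponds to Elasticsearch's default index.mapping.nested_objects.limit. A minimal sketch of full-depth flattening:

{code:java}
import com.github.wnameless.json.flattener.JsonFlattener;
import java.util.Map;

public class FlattenDemo {
  public static void main(String[] args) {
    // A small nested object variable; real customer payloads can nest far deeper.
    String json = "{\"order\":{\"items\":[{\"sku\":\"A1\",\"dims\":{\"w\":10,\"h\":20}}]}}";

    // flattenAsMap walks the full depth of the document; there is no
    // max-depth parameter, so every leaf becomes its own subvariable.
    Map<String, Object> flat = JsonFlattener.flattenAsMap(json);
    flat.forEach((k, v) -> System.out.println(k + " = " + v));
    // order.items[0].sku = A1
    // order.items[0].dims.w = 10
    // order.items[0].dims.h = 20
  }
}
{code}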
Steps to reproduce:
Start process instances with large, deeply nested object variables
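A reproduction sketch against the Camunda 7 engine Java API (the process key, variable name, and depth are made up for illustration; this assumes an engine that Optimize imports from):

{code:java}
import org.camunda.bpm.engine.RuntimeService;
import org.camunda.bpm.engine.variable.Variables;

public class DeepVariableRepro {

  // Builds a JSON string nested `depth` levels deep: {"n":{"n":{...}}}
  static String deeplyNested(int depth) {
    StringBuilder sb = new StringBuilder();
    for (int i = 0; i < depth; i++) sb.append("{\"n\":");
    sb.append("\"leaf\"");
    for (int i = 0; i < depth; i++) sb.append('}');
    return sb.toString();
  }

  static void start(RuntimeService runtimeService) {
    runtimeService.startProcessInstanceByKey(
        "someProcess", // hypothetical process key
        Variables.createVariables().putValue(
            "payload",
            Variables.serializedObjectValue(deeplyNested(5_000))
                .serializationDataFormat("application/json")
                .objectTypeName("java.util.HashMap")
                .create()));
  }
}
{code}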
Actual result:
If Optimize doesn't have enough resources, it will crash
Expected result:
Optimize can handle the import of large object variables
Notes:
Some solution ideas:
- Find an alternative flattening library, or write a custom solution, that allows a maximum depth for variable flattening
- Explore whether we can stop importing the original large variable value; the data it contains is duplicated across the flattened variables
Solution Proposals:
- Low-hanging fruit: stop saving the raw value of the variable in addition to the flattened variables
- More complex solution: allow admins to configure how deep a variable import processes the JSON. With the current library, flattening traverses the entire depth of the nested variables, which causes an exponential increase in the number of variables as the depth increases. The suggestion is to add a configurable parameter for the maximum nested variable depth, settable on a per-cluster basis via the public API, similar to enabling/disabling sharing. Since no existing library seems to support a depth limit, this would require writing our own logic for unpacking the JSON (see the sketch after this list)
- Alternatively: Ditch variable flattening altogether
- Additionally: Make sure we don't hit the maximum nested documents limit when importing the variables
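A minimal sketch of what such hand-rolled, depth-limited unpacking could look like, using Jackson (class and method names here are hypothetical, not Optimize code):

{code:java}
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import java.util.LinkedHashMap;
import java.util.Map;

public class DepthLimitedFlattener {

  /** Flattens json into dotted paths, but stops descending at maxDepth. */
  public static Map<String, Object> flatten(String json, int maxDepth) throws Exception {
    Map<String, Object> out = new LinkedHashMap<>();
    walk(new ObjectMapper().readTree(json), "", 0, maxDepth, out);
    return out;
  }

  private static void walk(JsonNode node, String path, int depth, int maxDepth,
                           Map<String, Object> out) {
    if (node.isValueNode() || depth >= maxDepth) {
      // At the depth limit, keep the remaining subtree as a single raw-JSON value
      // instead of exploding it into further subvariables.
      out.put(path, node.isValueNode() ? node.asText() : node.toString());
    } else if (node.isArray()) {
      for (int i = 0; i < node.size(); i++) {
        walk(node.get(i), path + "[" + i + "]", depth + 1, maxDepth, out);
      }
    } else { // object node
      node.fields().forEachRemaining(e ->
          walk(e.getValue(), path.isEmpty() ? e.getKey() : path + "." + e.getKey(),
               depth + 1, maxDepth, out));
    }
  }
}
{code}

Capping the depth this way bounds the number of generated subvariables and, in turn, the number of nested documents per instance, which also addresses the 10,000-limit concern above.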
Description | Updated: added the paragraph on the risk of hitting the 10,000 max nested document limit.
Assignee | New: Giuliano Rodrigues Lima [ giuliano.rodrigues-lima ] |
Status | Original: Triage [ 10612 ] | New: In Development [ 10312 ] |
Link | New: This issue is related to SUPPORT-15168 [ SUPPORT-15168 ] |
Description | Updated: added the Solution Proposals section.
Status | Original: In Development [ 10312 ] | New: Triage [ 10612 ] |
Assignee | Original: Giuliano Rodrigues Lima [ giuliano.rodrigues-lima ] | New: Joshua Windels [ joshua.windels ] |
Link | New: This issue is related to SUPPORT-15214 [ SUPPORT-15214 ] |
Status | Original: Triage [ 10612 ] | New: Open [ 1 ] |
Assignee | Original: Joshua Windels [ joshua.windels ] |