Correct handling of null max aggregation values in SearchResponse #1292
Merged
Conversation
Previously, when the max aggregation in SearchResponse returned a null value, it was converted to a negative number (-9223372036854775808), leading to erroneous timestamps.

This fix adds a check to ensure that only valid positive aggregation values are considered, preventing null or invalid values from being misinterpreted as negative numbers. The change applies only to timestamps, which are always positive epoch milliseconds, so filtering out non-positive values is always correct.

Testing done:
1. Added unit tests to cover scenarios where the max aggregation returns null or negative values, ensuring the method returns Optional.empty() when the value is invalid.
2. Added unit tests to verify correct handling of valid positive max aggregation values.
3. Ran e2e manual testing.

Previously:

```
POST /_plugins/_anomaly_detection/detectors/_validate/model
{
  "name": "test-detector-13",
  "description": "Test detector",
  "time_field": "customer_birth_date",
  "indices": ["opensearch_dashboards_sample_data_ecommerce"],
  "feature_attributes": [
    {
      "feature_name": "cpu",
      "feature_enabled": true,
      "aggregation_query": {
        "total_revenue_usd-field": {
          "max": { "field": "day_of_week_i" }
        }
      }
    }
  ],
  "detection_interval": {
    "period": { "interval": 1, "unit": "Minutes" }
  },
  "window_delay": {
    "period": { "interval": 1, "unit": "Minutes" }
  },
  "category_field": ["currency"]
}
```

returned:

```
nested: OpenSearchParseException[failed to parse date field [-9223372036854775808] with format [strict_date_optional_time||epoch_millis]: [failed to parse date field [-9223372036854775808] with format [strict_date_optional_time||epoch_millis]]];
nested: IllegalArgumentException[failed to parse date field [-9223372036854775808] with format [strict_date_optional_time||epoch_millis]];
nested: DateTimeParseException[Failed to parse with all enclosed parsers]; }
opensearch-ccs-node1 | [2024-09-03T15:05:45,776][ERROR][o.o.t.r.h.ModelValidationActionHandler] [2a2cd14da04d] Failed to create search request for last data point
opensearch-ccs-node1 | org.opensearch.action.search.SearchPhaseExecutionException: all shards failed
opensearch-ccs-node1 |     at org.opensearch.action.search.AbstractSearchAsyncAction.onPhaseFailure(AbstractSearchAsyncAction.java:770) [opensearch-3.0.0.jar:3.0.0]
```

Now the call returns:

```
{
  "model": {
    "time_field": {
      "message": "There isn't enough historical data found with current timefield selected."
    }
  }
}
```

Signed-off-by: Kaituo Li <[email protected]>
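The check described above can be sketched as follows. This is a hypothetical illustration, not the plugin's actual method or class names: a missing "max" aggregation value surfaces as negative infinity, and the narrowing double-to-long conversion turns that into Long.MIN_VALUE (-9223372036854775808), the exact number in the error log. Because epoch-millisecond timestamps are always positive, rejecting non-positive values filters out the sentinel.

```java
import java.util.Optional;

// Illustrative sketch only: names are assumptions, not the plugin's real API.
public final class MaxAggValueSketch {

    /**
     * Interprets a max-aggregation value as an epoch-millis timestamp.
     * Returns Optional.empty() for null, -Infinity, or any non-positive value.
     */
    static Optional<Long> extractLatestTimestamp(Double maxValue) {
        if (maxValue == null) {
            return Optional.empty();
        }
        // (long) Double.NEGATIVE_INFINITY == Long.MIN_VALUE, i.e. the
        // -9223372036854775808 seen in the failed date-parse error.
        long asMillis = maxValue.longValue();
        if (asMillis <= 0) {
            return Optional.empty();
        }
        return Optional.of(asMillis);
    }

    public static void main(String[] args) {
        System.out.println(extractLatestTimestamp(Double.NEGATIVE_INFINITY)); // Optional.empty
        System.out.println(extractLatestTimestamp(null));                     // Optional.empty
        System.out.println(extractLatestTimestamp(1725375945000.0));          // Optional[1725375945000]
    }
}
```

With this guard, callers that receive `Optional.empty()` can report a user-facing validation message (as in the "There isn't enough historical data" response above) instead of building a range query from an impossible timestamp.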
kaituo requested review from jmazanec15, jngz-es, saratvemulapalli, ohltyler, vamshin, VijayanB, ylwu-amzn, amitgalitz, jackiehanyang, sean-zheng-amazon, dbwiddis, owaiskazi19 and joshpalis as code owners on September 3, 2024 19:04.
Codecov Report (Attention: patch coverage figure truncated in source)

Additional details and impacted files:

```
@@             Coverage Diff              @@
##               main    #1292      +/-  ##
============================================
+ Coverage     71.83%   77.48%    +5.65%
- Complexity     4898     5422      +524
============================================
  Files           518      532       +14
  Lines         22879    23264      +385
  Branches       2245     2305       +60
============================================
+ Hits          16434    18025     +1591
+ Misses         5410     4180     -1230
- Partials       1035     1059       +24
```
amitgalitz approved these changes on Sep 3, 2024.
opensearch-trigger-bot pushed a commit that referenced this pull request on Sep 3, 2024, with the same commit message as this pull request. (cherry picked from commit e47e3c3) Signed-off-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
amitgalitz pushed a commit (#1293) that referenced this pull request on Sep 3, 2024, with the same commit message as this pull request. (cherry picked from commit e47e3c3) Signed-off-by: Kaituo Li <[email protected]> Signed-off-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com> Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Check List
- Commits are signed per the DCO using `--signoff`.

By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
For more information on following Developer Certificate of Origin and signing off your commits, please check here.