Test the changes on staging before pushing to production #103
@iantei can you please address this? |
Yes, I will fix this. |
@iantei Please also test against staging and the USAID Laos project to see if there are any other showstoppers before we push out to production in general. |
Ah the "All Data" has now been updated 🎉 However, @iantei unlike the previous implementation, the colors are not consistent between graphs, which makes it a bit confusing while comparing - i.e. "Dark Blue" = Motorcycle in trip count versus "Dark Blue" = Airplane in trip distance. If you have the stacked bar chart implementation ready, and it fixes this, I am OK with fixing it then. But if you don't have that yet, we may need to fix it sooner. Comments? |
Completed the changes for the STUDY_CONFIG=smart-commute-ebike. Now it'll reflect the energy impact calculation option as a metric dropdown list.
I am facing an issue while trying to push the changes [cloned the repo again, created a branch, tried to push the changes],
I am trying to fix this. I regenerated the Personal Access Token and tried to use it with the command below, but I am still getting the same error. Meanwhile, I will do some further testing with staging and the USAID Laos project too. |
The stacked bar chart implementation in |
we use fork + pull request. you don't have access to the upstream fork |
I observe different behaviors when comparing the staging environment and the development environment:
Process followed to generate the development mode STUDY_CONFIG=usaid-laos-ev:
Execution steps:
Process to Launch:
Results:
I am not sure, though, what could be causing this issue. |
@iantei please put the PR into "Ready for review" wrt differences between
the As I said in #103 (comment)
The Laos production environment is at: https://usaid-laos-ev-openpath.nrel.gov/public/ If we had more time before it was pushed to production, we would have been able to take better before and after snapshots, but the code was not ready until the last minute. You can use the snapshots above (for the trip count before and after the change) for the comparison. |
I have moved the PR into "Ready for Review" stage. #104
Things look good at the Laos production environment. |
Laos Production Charts Test [Aggregate Data]
Launched the following site: https://usaid-laos-ev-openpath.nrel.gov/public/. For the aggregated data, I have listed all the charts.
Laos Production Charts Test [11/2023]
Launched the following site: https://usaid-laos-ev-openpath.nrel.gov/public/. For the 11/2023 selection, I have listed all the charts.
Common observations in both the cases above:
displays the distance/weight in the metric system (as the Laos config has a dynamic config), while the following other charts, like:
Could this be a cause of concern or confusion to the end user? |
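For reference, one way to avoid such unit mismatches is to derive the unit label once from the downloaded deployment config and reuse it in every chart. In the sketch below, the `display_config`/`use_imperial` key names are an assumption about the config schema, not verified here; the config URL is the one shown in the execution logs later in this thread.

```python
# Sketch: pick the distance unit once from the dynamic config and reuse it in
# every chart title/axis. The "display_config"/"use_imperial" keys are an
# assumption about the config schema, not verified here.
import requests

CONFIG_URL = ("https://raw.githubusercontent.com/e-mission/"
              "nrel-openpath-deploy-configs/main/configs/usaid-laos-ev.nrel-op.json")

def get_distance_unit(config_url=CONFIG_URL):
    config = requests.get(config_url, timeout=30).json()
    use_imperial = config.get("display_config", {}).get("use_imperial", False)
    return "miles" if use_imperial else "km"

if __name__ == "__main__":
    unit = get_distance_unit()
    print(f"Distance of trips ({unit})")  # same unit string reused by all charts
```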
Interesting. Is this also true on staging (with the default
Correct, we need to fix this, either immediately, or (if the stacked bar charts are imminent), then while implementing them.
Yes. There are many aspects of the public dashboard that need to respect the dynamic config. The first and most important one was the Per the comment, have you already filed an issue? If so, we should track the |
Executed in staging mode (in the development setting) for `study_config` set to `smart-commute-ebike`
Code changes:
Steps followed:
3. Execution for different notebooks
Results:
Observations: There is a slight issue with the arrangement of some charts, which overlap. Illustration of the overlapping charts issue: Overlapping_Charts.mov
There is a workaround for this issue, though: close the overlapping charts and re-launch them. Possible workaround solution:
We can observe no charts are available for |
Execution of Smart Commute E-bike: Using dataset: Some observation:
None of the individual month selections was able to generate charts, while the aggregate selection rendered the charts properly. Isn't it required that at least one of the individual charts should have enough data to generate a chart alongside the aggregate one? I observed the same for the metrics below:
Available dates: 3/2023 - 12/2023 |
@iantei when you are checking the frontend and not the graphs, is there a reason you are not testing on staging directly? |
The tests below were conducted after launching: In reference to this comment's status about the chart representation in
It's not possible to compare these two charts directly because the total number of testers has changed, from 6 to 10 and from 19 to 20 respectively, for One of the observations in the chart below is: [Observed on 6 Dec 2023]
Even though the distribution of |
According to the meeting discussion we had yesterday, I understood that I do not have access to staging, or to the dataset or configuration changes related to staging. |
@iantei given that we did discuss it yesterday and it wasn't clear, can you please add in what your understanding and proposal is, and I can respond in writing? That might make it easier to have a clear plan going forward. |
Sure.
This will load the corresponding charts file in the My understanding is that, since I do not have access to staging environment configuration changes, I would try to emulate it by spinning up the
My understanding is that I do not have access to the staging configuration changes or the dataset used in the staging environment. I tried the following:
For this part - the
Here's my understanding about this.
I have been using the When you asked me to test against staging, I wasn't sure how to proceed with testing in the staging environment, and felt that emulating the development testing scenario but with Could you please let me know the appropriate way to "test against staging"? |
@iantei there are three aspects that come together in the public dashboard:
(1) is public for all studies and all programs. You need to think through what you are testing, and how you can assemble the pieces to check that it works. As a concrete example and an explanation for "test against staging", consider testing the frontend. For the frontend, you want to check that the cloud deployment is correct since that is the big difference. So you should test using https://openpath-stage.nrel.gov/public which you can access (as can anybody else in the world). When running the viz_scripts, the difference between dev and production is in other areas, and you should focus on testing those. Hope that helps! I would suggest thinking through what those differences are, and coming up with a plan to test them. This may involve additional support from me. |
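As a concrete illustration (a smoke check only, not a substitute for looking at the page), the staging frontend is publicly reachable, so a short script can at least confirm that it responds for each study_config of interest:

```python
# Smoke check only: confirm the publicly reachable staging frontend responds
# for each study_config of interest. It does not validate the rendered charts.
import requests

STAGING_BASE = "https://openpath-stage.nrel.gov/public/"

def check_frontend(study_config):
    resp = requests.get(STAGING_BASE, params={"study_config": study_config},
                        timeout=30)
    print(f"{study_config}: HTTP {resp.status_code}")
    return resp.ok

if __name__ == "__main__":
    for cfg in ("smart-commute-ebike", "usaid-laos-ev"):
        check_frontend(cfg)
```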
The https://openpath-stage.nrel.gov/public/ takes the For the development testing in localhost, I change the environment in the
Re-generate all the charts by getting inside the For the staging testing, is there any specific container (other than docker-compose.dev.yml or docker-compose.yml) which I can modify to integrate the config change, so it gets reflected on https://openpath-stage.nrel.gov/public/?study_config=smart-commute-ebike? |
Your comment in #103 (comment) is focused on testing viz_scripts. My comment in #103 (comment) talked about the frontend and not the graphs.
I repeated this in #103 (comment)
You need to test both the frontend and the viz_scripts, and you need to test them in different ways because the meaningful differences between dev and staging are different for the two components.
No. How can there be? Other comments:
This is incorrect. This takes
Not sure what you mean here - these are the same file. |
Yes, that's right. Updated the above comment - I used the wrong text representation for the right hyperlink.
Corrected the above comment: I meant, |
I have tested the viz_scripts execution in |
Based on the above discussion, there are two aspects we need to check:
Test both aspects above with the combination below: this way, we're validating all the defaults and the list of metrics to be displayed for both program/study, with/without dynamic labels, in the We are already validating the charts generated in the prod environment through localhost. Note: The |
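As a rough illustration of driving that matrix from a script: the sketch below assumes it runs from `saved-notebooks` inside the notebook container and that the plot generation picks up the deployment from the `STUDY_CONFIG` environment variable; the config names in the list are illustrative placeholders for the four cases.

```python
# Sketch: regenerate a couple of notebooks for each STUDY_CONFIG in the test
# matrix. Assumes this runs from saved-notebooks inside the notebook container
# and that the config is picked up from the STUDY_CONFIG environment variable;
# the config names below are illustrative placeholders for the four cases.
import os
import subprocess

STUDY_CONFIGS = [
    "usaid-laos-ev",         # deployment with dynamic labels
    "smart-commute-ebike",   # deployment without dynamic labels
    "dev-emulator-program",  # default program
    "dev-emulator-study",    # default study (placeholder name)
]
NOTEBOOKS = ["generic_metrics.ipynb", "generic_timeseries.ipynb"]

for config in STUDY_CONFIGS:
    env = dict(os.environ, STUDY_CONFIG=config, PYTHONPATH="..")
    for nb in NOTEBOOKS:
        print(f"=== {config} / {nb} ===")
        subprocess.run(["python", "bin/generate_plots.py", nb, "default"],
                       env=env, check=False)
```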
I have tested the With the I didn't understand the idea of testing the publicly available interfaces from the locally run container.
|
Testing with different variations of the dataset in the prod container. Testing with a blank database:
Execution For STUDY_CONFIG = `washingtoncommons` Runs successfully for all notebooks
For the below ones:
Execution For STUDY_CONFIG = `ebikegj` Runs successfully for all notebooks
Execution For STUDY_CONFIG = `dev-emulator-program` Runs successfully for all notebooks
Testing with fc_* dataset:
Changes:
Execution For STUDY_CONFIG = `dev-emulator-program` Runs successfully for all notebooks EXCEPT `generic_metrics_sensed.ipynb` notebook
|
@shankari Issue observed while testing with the dataset for Procedure followed:
generic_metrics and generic_timeseries
mode_specific_metrics, mode_specific_timeseries and energy_calculations with the following error:
There seems to be a difference in data between these charts, and therefore all the other charts are also different. Am I doing something wrong in the above process? |
As you can see from the exception, this will happen for all studies. Did you not see this in WashingtonCommons?
This is not what the graph is showing. The one on the left does not have ~ 20k participants.
Looks like it has both stage data and laos data loaded. Are you sure you are reading the correct one?
|
Yes, this is expected with all studies. I did observe this with other studies too; I just wanted to make a note of it here.
Left Chart: The chart represents total confirmed trips: 2782 + 2052 + 1500 + 1065 + 457 = 7856 (Total Confirmed Trips)
Yes, I removed the existing dataset, and loaded the dataset which you've shared with me. I am placing the
|
I was referring to "Participants: 19954". I am not sure what the rest of your comment refers to, but I am glad that you see that
You do not need to place the file in the em-public-dashboard folder. You can specify the fully qualified name instead.
What is the reason? You should not have the Stage_database loaded at the same time. I bet the issue is that you are not specifying the alternate DB name. If you are using a database that is not called |
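A quick way to sanity-check which databases are present and which one is actually being read is sketched below (pymongo; the restored database name is a placeholder, and the collection names are the usual e-mission ones):

```python
# Sketch: verify which databases exist on this mongod and which one you are
# reading, so a leftover Stage_database is not silently used instead of the
# restored dump. "openpath_prod_usaid_laos_ev" is a placeholder name.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017/")
print("Databases on this mongod:", client.list_database_names())

db = client["openpath_prod_usaid_laos_ev"]  # placeholder for the restored DB name
for coll in ("Stage_timeseries", "Stage_analysis_timeseries", "Stage_uuids"):
    print(coll, db[coll].count_documents({}))
```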
Similar to @achasmita 's changes - Changes in
Changes in
This yields a JSON loading error. Detailed error log:
The dataset is available as seen with the |
There were a few changes required in the above approach: NOTE: Code changes:
For docker-compose.yml:
For start_notebook.sh:
Database changes:
Execution Process:
Make sure the changes in Execute Jupyter notebooks: 1. Generic Metrics: Executed properlyashrest2-35384s:em-public-dashboard ashrest2$ docker exec -it em-public-dashboard-notebook-server-1 /bin/bash root@fe3d5eb64501:/usr/src/app# source setup/activate.sh (emission) root@fe3d5eb64501:/usr/src/app# cd saved-notebooks (emission) root@fe3d5eb64501:/usr/src/app/saved-notebooks# PYTHONPATH=.. python bin/update_mappings.py mapping_dictionaries.ipynb (emission) root@fe3d5eb64501:/usr/src/app/saved-notebooks# PYTHONPATH=.. python bin/generate_plots.py generic_metrics.ipynb default /usr/src/app/saved-notebooks/bin/generate_plots.py:30: SyntaxWarning: "is not" with a literal. Did you mean "!="? if r.status_code is not 200: About to download config from https://raw.githubusercontent.com/e-mission/nrel-openpath-deploy-configs/main/configs/usaid-laos-ev.nrel-op.json Successfully downloaded config with version 1 for USAID-NREL Support for Electric Vehicle Readiness and data collection URL https://USAID-laos-EV-openpath.nrel.gov/api/ Dynamic labels download was successful for nrel-openpath-deploy-configs: usaid-laos-ev Running at 2024-01-05T23:10:55.325373+00:00 with args Namespace(plot_notebook='generic_metrics.ipynb', program='default', date=None) for range (, ) Running at 2024-01-05T23:10:55.367695+00:00 with params [Parameter('year', int), Parameter('month', int), Parameter('program', str, value='default'), Parameter('study_type', str, value='study'), Parameter('include_test_users', bool, value=True), Parameter('dynamic_labels', dict, value={'MODE': [{'value': 'walk', 'baseMode': 'WALKING', 'met_equivalent': 'WALKING', 'kgCo2PerKm': 0}, {'value': 'e-auto_rickshaw', 'baseMode': 'MOPED', 'met_equivalent': 'IN_VEHICLE', 'kgCo2PerKm': 0.085416859}, {'value': 'auto_rickshaw', 'baseMode': 'MOPED', 'met_equivalent': 'IN_VEHICLE', 'kgCo2PerKm': 0.231943784}, {'value': 'motorcycle', 'baseMode': 'MOPED', 'met_equivalent': 'IN_VEHICLE', 'kgCo2PerKm': 0.113143309}, {'value': 'e-bike', 'baseMode': 'E_BIKE', 'met': {'ALL': {'range': [0, -1], 'mets': 4.9}}, 'kgCo2PerKm': 0.00728}, {'value': 'bike', 'baseMode': 'BICYCLING', 'met_equivalent': 'BICYCLING', 'kgCo2PerKm': 0}, {'value': 'drove_alone', 'baseMode': 'CAR', 'met_equivalent': 'IN_VEHICLE', 'kgCo2PerKm': 0.22031}, {'value': 'shared_ride', 'baseMode': 'CAR', 'met_equivalent': 'IN_VEHICLE', 'kgCo2PerKm': 0.11015}, {'value': 'e_car_drove_alone', 'baseMode': 'E_CAR', 'met_equivalent': 'IN_VEHICLE', 'kgCo2PerKm': 0.08216}, {'value': 'e_car_shared_ride', 'baseMode': 'E_CAR', 'met_equivalent': 'IN_VEHICLE', 'kgCo2PerKm': 0.04108}, {'value': 'taxi', 'baseMode': 'TAXI', 'met_equivalent': 'IN_VEHICLE', 'kgCo2PerKm': 0.30741}, {'value': 'bus', 'baseMode': 'BUS', 'met_equivalent': 'IN_VEHICLE', 'kgCo2PerKm': 0.20727}, {'value': 'train', 'baseMode': 'TRAIN', 'met_equivalent': 'IN_VEHICLE', 'kgCo2PerKm': 0.12256}, {'value': 'free_shuttle', 'baseMode': 'BUS', 'met_equivalent': 'IN_VEHICLE', 'kgCo2PerKm': 0.20727}, {'value': 'air', 'baseMode': 'AIR', 'met_equivalent': 'IN_VEHICLE', 'kgCo2PerKm': 0.09975}, {'value': 'not_a_trip', 'baseMode': 'UNKNOWN', 'met_equivalent': 'UNKNOWN', 'kgCo2PerKm': 0}, {'value': 'other', 'baseMode': 'OTHER', 'met_equivalent': 'UNKNOWN', 'kgCo2PerKm': 0}], 'PURPOSE': [{'value': 'home'}, {'value': 'work'}, {'value': 'at_work'}, {'value': 'school'}, {'value': 'transit_transfer'}, {'value': 'shopping'}, {'value': 'meal'}, {'value': 'pick_drop_person'}, {'value': 'pick_drop_item'}, {'value': 'personal_med'}, {'value': 'access_recreation'}, {'value': 
'exercise'}, {'value': 'entertainment'}, {'value': 'religious'}, {'value': 'other'}], 'translations': {'en': {'walk': 'Walk', 'e-auto_rickshaw': 'E-tuk tuk', 'auto_rickshaw': 'Tuk Tuk', 'motorcycle': 'Motorcycle', 'e-bike': 'E-bike', 'bike': 'Bicycle', 'drove_alone': 'Car Drove Alone', 'shared_ride': 'Car Shared Ride', 'e_car_drove_alone': 'E-Car Drove Alone', 'e_car_shared_ride': 'E-Car Shared Ride', 'taxi': 'Taxi/Loca/inDrive', 'bus': 'Bus', 'train': 'Train', 'free_shuttle': 'Free Shuttle', 'air': 'Airplane', 'not_a_trip': 'Not a trip', 'home': 'Home', 'work': 'To Work', 'at_work': 'At Work', 'school': 'School', 'transit_transfer': 'Transit transfer', 'shopping': 'Shopping', 'meal': 'Meal', 'pick_drop_person': 'Pick-up/ Drop off Person', 'pick_drop_item': 'Pick-up/ Drop off Item', 'personal_med': 'Personal/ Medical', 'access_recreation': 'Access Recreation', 'exercise': 'Recreation/ Exercise', 'entertainment': 'Entertainment/ Social', 'religious': 'Religious', 'other': 'Other'}, 'lo': {'walk': 'ດ້ວຍການຍ່າງ', 'e-auto_rickshaw': 'ລົດ 3 ລໍ້ໄຟຟ້າ ຫລື ຕຸກຕຸກໄຟຟ້າ', 'auto_rickshaw': 'ເດີນທາດ້ວຍ ລົດຕຸກຕຸກ ຫລື ລົດສາມລໍ້', 'motorcycle': 'ລົດຈັກ', 'e-bike': 'ວຍລົດຈັກໄຟຟ້າ', 'bike': 'ລົດຖີບ', 'drove_alone': 'ເດີນທາງ ດ້ວຍລົດໃຫ່ຍ ເຊີ່ງເປັນລົດທີ່ຂັບເອງ', 'shared_ride': 'ເດີນທາງດ້ວຍລົດໃຫ່ຍ ຮ່ວມກັບລົດຄົນອຶ່ນ', 'e_car_drove_alone': 'ດ້ວຍການຂັບລົດໄຟຟ້າໄປເອງ', 'e_car_shared_ride': 'ດ້ວຍການຈ້າງລົດໄຟຟ້າໄປ', 'taxi': 'ແທັກຊີ', 'bus': 'ລົດເມ', 'train': 'ລົດໄຟ', 'free_shuttle': 'ລົດຮັບສົ່ງຟຣີ', 'air': 'ຍົນ', 'not_a_trip': 'ບໍ່ແມ່ນການເດີນທາງ', 'home': 'ບ້ານ', 'work': 'ໄປເຮັດວຽກ', 'at_work': 'ຢູ່ບ່ອນເຮັດວຽກ', 'school': 'ໄປໂຮງຮຽນ', 'transit_transfer': 'ການຖ່າຍໂອນການເດີນທາງ', 'shopping': 'ຊອບປິ້ງ', 'meal': 'ອາຫານ', 'pick_drop_person': 'ໄປຮັບ ຫລື ສົນ ຄົນ', 'pick_drop_item': 'ໄປຮັບ ຫລື ສົ່ງສິນຄ້າ', 'personal_med': 'ໄປຫາໝໍ', 'access_recreation': 'ເຂົ້າເຖິງການພັກຜ່ອນ', 'exercise': 'ພັກຜ່ອນ/ອອກກຳລັງກາຍ', 'entertainment': 'ບັນເທີງ/ສັງຄົມ', 'religious': 'ຈຸດປະສົງທາງສາດສະໜາ', 'other': 'ອື່ນໆ'}}})] 2. Generic Metrics Sensed: Executed properly(emission) root@fe3d5eb64501:/usr/src/app/saved-notebooks# PYTHONPATH=.. python bin/generate_plots.py generic_metrics_sensed.ipynb default /usr/src/app/saved-notebooks/bin/generate_plots.py:30: SyntaxWarning: "is not" with a literal. Did you mean "!="? 
if r.status_code is not 200: About to download config from https://raw.githubusercontent.com/e-mission/nrel-openpath-deploy-configs/main/configs/usaid-laos-ev.nrel-op.json Successfully downloaded config with version 1 for USAID-NREL Support for Electric Vehicle Readiness and data collection URL https://USAID-laos-EV-openpath.nrel.gov/api/ Dynamic labels download was successful for nrel-openpath-deploy-configs: usaid-laos-ev Running at 2024-01-05T23:12:47.282348+00:00 with args Namespace(plot_notebook='generic_metrics_sensed.ipynb', program='default', date=None) for range (, ) Running at 2024-01-05T23:12:47.323788+00:00 with params [Parameter('year', int), Parameter('month', int), Parameter('program', str, value='default'), Parameter('study_type', str, value='study'), Parameter('include_test_users', bool, value=True), Parameter('sensed_algo_prefix', str, value='cleaned')] Running at 2024-01-05T23:12:53.781223+00:00 with params [Parameter('year', int, value=2023), Parameter('month', int, value=5), Parameter('program', str, value='default'), Parameter('study_type', str, value='study'), Parameter('include_test_users', bool, value=True), Parameter('sensed_algo_prefix', str, value='cleaned')] Running at 2024-01-05T23:12:59.575014+00:00 with params [Parameter('year', int, value=2023), Parameter('month', int, value=6), Parameter('program', str, value='default'), Parameter('study_type', str, value='study'), Parameter('include_test_users', bool, value=True), Parameter('sensed_algo_prefix', str, value='cleaned')] Running at 2024-01-05T23:13:04.877180+00:00 with params [Parameter('year', int, value=2023), Parameter('month', int, value=7), Parameter('program', str, value='default'), Parameter('study_type', str, value='study'), Parameter('include_test_users', bool, value=True), Parameter('sensed_algo_prefix', str, value='cleaned')] Running at 2024-01-05T23:13:09.940155+00:00 with params [Parameter('year', int, value=2023), Parameter('month', int, value=8), Parameter('program', str, value='default'), Parameter('study_type', str, value='study'), Parameter('include_test_users', bool, value=True), Parameter('sensed_algo_prefix', str, value='cleaned')] Running at 2024-01-05T23:13:15.856148+00:00 with params [Parameter('year', int, value=2023), Parameter('month', int, value=9), Parameter('program', str, value='default'), Parameter('study_type', str, value='study'), Parameter('include_test_users', bool, value=True), Parameter('sensed_algo_prefix', str, value='cleaned')] Running at 2024-01-05T23:13:21.157176+00:00 with params [Parameter('year', int, value=2023), Parameter('month', int, value=10), Parameter('program', str, value='default'), Parameter('study_type', str, value='study'), Parameter('include_test_users', bool, value=True), Parameter('sensed_algo_prefix', str, value='cleaned')] Running at 2024-01-05T23:13:26.737892+00:00 with params [Parameter('year', int, value=2023), Parameter('month', int, value=11), Parameter('program', str, value='default'), Parameter('study_type', str, value='study'), Parameter('include_test_users', bool, value=True), Parameter('sensed_algo_prefix', str, value='cleaned')] Running at 2024-01-05T23:13:31.919421+00:00 with params [Parameter('year', int, value=2023), Parameter('month', int, value=12), Parameter('program', str, value='default'), Parameter('study_type', str, value='study'), Parameter('include_test_users', bool, value=True), Parameter('sensed_algo_prefix', str, value='cleaned')] Running at 2024-01-05T23:13:37.726137+00:00 with params [Parameter('year', int, 
value=2024), Parameter('month', int, value=1), Parameter('program', str, value='default'), Parameter('study_type', str, value='study'), Parameter('include_test_users', bool, value=True), Parameter('sensed_algo_prefix', str, value='cleaned')] 3. Generic Timeseries: Executed Properly
(emission) root@fe3d5eb64501:/usr/src/app/saved-notebooks# PYTHONPATH=.. python bin/generate_plots.py generic_timeseries.ipynb default
Results:
This comparison would not be a valid one, since there is new data every other day. So, comparing the 20th Dec 2023 dataset with the 5th Jan 2024 dataset would not yield a fair comparison. |
Comparing the charts generated from the 20th Dec 2023 dataset in both cases.
Update: Comparing individual charts:
There seems to be some discrepancy between these two charts. Possible cause:
Result: Different database sizes for the same dataset -
Update: In both of the last cases, where the db sizes were 0.758GB and 0.757GB - the |
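To narrow this down, comparing per-collection document counts may be more reliable than comparing on-disk sizes, since storage allocation can vary between restores of the same dump. A minimal pymongo sketch, assuming the default Stage_database name:

```python
# Sketch: dbStats sizes can differ slightly between restores of the same dump,
# so compare per-collection document counts instead. Assumes the default
# Stage_database name; adjust if the dump was restored under another name.
from pymongo import MongoClient

def snapshot_counts(uri="mongodb://localhost:27017/", db_name="Stage_database"):
    db = MongoClient(uri)[db_name]
    stats = db.command("dbStats")
    counts = {c: db[c].count_documents({}) for c in db.list_collection_names()}
    return stats["dataSize"], counts

if __name__ == "__main__":
    data_size, counts = snapshot_counts()
    print("dataSize (bytes):", data_size)
    for coll, n in sorted(counts.items()):
        print(f"  {coll}: {n}")
```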
Inside the
Deleted all data with a timestamp greater than 2023-12-20T03:27:00 (see the sketch after this comment).
Reloaded the Generic Metrics notebook charts. Details
(emission) root@d839eeb20ee0:/usr/src/app/saved-notebooks# PYTHONPATH=.. python bin/update_mappings.py mapping_dictionaries.ipynb
Cleared cache from the Firefox browser. Result |
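A minimal sketch of the deletion step referenced above, assuming the default Stage_database name and the e-mission convention that `metadata.write_ts` is a unix timestamp. As the later discussion shows, deleting entries alone is not sufficient; labels entered after the cutoff also need to be reset on their confirmed trips.

```python
# Sketch of "delete all data written after 2023-12-20T03:27:00 UTC".
# Assumes the default Stage_database name and that metadata.write_ts is a
# unix timestamp (the cutoff below corresponds to 1703042820.0).
from datetime import datetime, timezone
from pymongo import MongoClient

CUTOFF_TS = datetime(2023, 12, 20, 3, 27, tzinfo=timezone.utc).timestamp()

db = MongoClient("mongodb://localhost:27017/")["Stage_database"]
query = {"metadata.write_ts": {"$gt": CUTOFF_TS}}

for coll in ("Stage_timeseries", "Stage_analysis_timeseries"):
    result = db[coll].delete_many(query)
    print(f"Deleted {result.deleted_count} entries from {coll}")
```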
@shankari Here we are loading the dataset and validating that the changes reflect the snapshot on the production website. Steps involved: Loading the dataset After loading the
Executed the following command for Generic Metrics notebook:
Introduced a new function in scaffolding.py
This is called on MongoDB after the removal of the entries later than 20th Dec 2023 3:28:45 GMT and the removal of the extra confirmed trips.
Results: Overall comparison:
Individual comparison:
Thank you @Abby-Wheelis for the suggestions and support in completing this. |
For the record, the code used here is not what I suggested
This is not the matching algorithm. This will not work for labels on draft trips, for example.
@iantei can you please report the results after fixing the code? I anticipate they will be similar, but the goal is to check the calculation, so we want to make the data massaging as accurate as possible. |
@shankari is there a specific matching algorithm in the server code that you're thinking of? From Ok, wait, digging in a little deeper today I've found something that looks promising: Maybe that's a good place to start, I had skimmed over it yesterday because it looked like it dealt with place user inputs, but I think it's more general than I first thought. |
Inside Since we are not looking for specific So, in the function Even with the potential_candidates, we have a list, and I am not sure which item from the list to choose (see the sketch after this comment).
@shankari Could you please let me know if I am looking in the right direction? |
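For illustration only: one naive way to disambiguate a list of potential candidates is to pick the confirmed trip whose start time is closest to the user input's start time. This is not the server's actual selection logic (which is exposed via `esdt.get_confirmed_obj_for_user_input_obj`, discussed below); it just makes the ambiguity concrete.

```python
# Naive illustration only: pick the confirmed trip whose start_ts is closest
# to the user input's start_ts. The server's real matching (exposed via
# esdt.get_confirmed_obj_for_user_input_obj, discussed below) is richer and
# should be preferred.
def pick_closest_candidate(user_input_entry, potential_candidates):
    input_start = user_input_entry["data"]["start_ts"]
    return min(potential_candidates,
               key=lambda trip: abs(trip["data"]["start_ts"] - input_start))
```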
Got the working code: Updated filtering code to identify the right confirmed trip item.
Steps of execution:
Executed the following script for Generic Metrics:
Result:
|
This is not using the correct code to identify the right confirmed trip item. I don't see you using
I am not sure where you found that "the specified conditions" include matching by Also, is there a reason why you are copy-pasting code instead of just calling the server function? |
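For reference, calling the server function directly might look roughly like the sketch below. It assumes the e-mission server modules are importable in the notebook environment (exactly the obstacle described in the next comment); the module paths are the standard e-mission abbreviations and are an assumption here.

```python
# Hedged sketch: call the server's matching function instead of copy-pasting
# its logic. Assumes the e-mission server package is on the PYTHONPATH (the
# import problem discussed in the next comments). Module paths follow the
# standard e-mission abbreviations (ecwe, esdt, esta).
import emission.core.wrapper.entry as ecwe
import emission.storage.decorations.trip_queries as esdt
import emission.storage.timeseries.abstract_timeseries as esta

def find_confirmed_trip_for_input(manual_entry_doc):
    """manual_entry_doc: a manual/mode_confirm or manual/purpose_confirm document."""
    ts = esta.TimeSeries.get_time_series(manual_entry_doc["user_id"])
    return esdt.get_confirmed_obj_for_user_input_obj(ts, ecwe.Entry(manual_entry_doc))
```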
UPDATE: FIXED - I encountered a challenge while trying to call the server function in the following way:
This resulted in the following error:
Upon investigating a little more, I found the following. Currently, we have the following:
We still have the following:
We don’t have I will try to see if I can fix this. |
Adding the below code in the
Re-execution of the Generic Metrics notebook. There are some issues with the function Update: Added a case to handle when Getting an assertion error for:
I don't have much insight about |
I took the inference for the above condition from here, where there is a computation of
|
Debugging update: Getting this error when the
Re-execution of the above block leads to the same ASSERTION error again. Clear the database, load the dataset, and re-execute. |
Since,
Now, we will make a call to
Calling This results in getting all the
I am unsure whether using the unique corresponding timerange for a unique UUID is a good idea, but the other way around |
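A sketch of collecting the per-UUID time ranges in a single aggregation pass, assuming the default Stage_database and Stage_timeseries names, which avoids querying each UUID separately:

```python
# Sketch: list each distinct user_id in the timeseries together with the time
# range of its entries, in one aggregation pass. Default e-mission DB and
# collection names are assumed.
from pymongo import MongoClient

db = MongoClient("mongodb://localhost:27017/")["Stage_database"]

pipeline = [
    {"$group": {
        "_id": "$user_id",
        "first_write_ts": {"$min": "$metadata.write_ts"},
        "last_write_ts": {"$max": "$metadata.write_ts"},
        "n_entries": {"$sum": 1},
    }},
    {"$sort": {"n_entries": -1}},
]

for row in db["Stage_timeseries"].aggregate(pipeline):
    print(row["_id"], row["first_write_ts"], row["last_write_ts"], row["n_entries"])
```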
Summary of the approach attempted: The function inside Since the calculation of |
That finds the confirmed trip for an inferred trip, not for a user input. It is not enough to check the output, you also need to check the input. Why are you trying to call
Where does that have |
This is very simple pseudocode.
- delete all entries written after a particular timestamp
- reset all labels that were entered after the timestamp by finding the corresponding confirmed trip and removing the `user_input`

```
to_be_deleted_manual_entries = # entries whose metadata.write_ts > timestamp
for me in to_be_deleted_manual_entries:
    matching_confirmed_trip = ea...find_matching_confirmed_trip(me)
    del matching_confirmed_trip['data']['user_input']
edb.get_analysis_timeseries_db.delete_many({"metadata.write_ts" > timestamp )
```

This is essentially the same code as e-mission#103 (comment) but with the matching code changed from

```
confirmed_trip = edb.get_analysis_timeseries_db().find_one({"user_id": t["user_id"], "metadata.key": "analysis/confirmed_trip", "data.start_ts": t["data"]["start_ts"]}) #gets confirmed trip with matching user id & timestamp
```

to

```
confirmed_trip = esdt.get_confirmed_obj_for_user_input_obj(ts, ecwe.Entry(t))
```

to support the richer matching algorithm. And lots of improvements to the logging.

Testing done:

```
After parsing, the reset timestamp is 2023-12-20T03:27:00+00:00 -> 1703042820.0
Planning to delete 40502 records from the timeseries
Planning to delete 6477 records from the analysis timeseries
6477 number of manual records after cutoff 143
For input 2023-12-20T15:49:36.486127+07:00 of type manual/purpose_confirm, labeled at 2023-12-20T19:04:06.452000+07:00, found confirmed trip starting at 2023-12-20T14:22:50.209000+07:00 with no user input
For input 2023-12-19T06:28:58.685000+07:00 of type manual/purpose_confirm, labeled at 2023-12-20T19:05:11.080000+07:00, found confirmed trip starting at 2023-12-19T06:28:58.685000+07:00 with user input {'purpose_confirm': 'ໄປວຽກ', 'mode_confirm': 'motorcycle'}
Resetting input of type purpose_confirm
Update results = {'n': 1, 'nModified': 1, 'ok': 1.0, 'updatedExisting': True}
For input 2023-12-19T05:33:56.424000+07:00 of type manual/purpose_confirm, labeled at 2023-12-20T19:05:18.335000+07:00, found confirmed trip starting at 2023-12-19T05:33:56.424000+07:00 with user input {'purpose_confirm': 'ໄປວຽກ', 'mode_confirm': 'motorcycle'}
Resetting input of type purpose_confirm
Update results = {'n': 1, 'nModified': 1, 'ok': 1.0, 'updatedExisting': True}
For input 2023-12-18T20:12:15.920000+07:00 of type manual/purpose_confirm, labeled at 2023-12-20T19:05:28.385000+07:00, found confirmed trip starting at 2023-12-18T20:12:15.920000+07:00 with user input {'purpose_confirm': 'ໄປວຽກ', 'mode_confirm': 'motorcycle'}
Resetting input of type purpose_confirm
Update results = {'n': 1, 'nModified': 1, 'ok': 1.0, 'updatedExisting': True}
For input 2023-12-18T20:12:15.920000+07:00 of type manual/mode_confirm, labeled at 2023-12-20T19:06:27.949000+07:00, found confirmed trip starting at 2023-12-18T20:12:15.920000+07:00 with user input {'mode_confirm': 'motorcycle'}
Resetting input of type mode_confirm
Update results = {'n': 1, 'nModified': 1, 'ok': 1.0, 'updatedExisting': True}
...
For input 2023-12-20T19:31:21.577000+07:00 of type manual/purpose_confirm, labeled at 2023-12-20T20:14:33.454000+07:00, found confirmed trip starting at 2023-12-20T19:05:27.266615+07:00 with no user input
For input 2023-12-20T20:02:07.148000+07:00 of type manual/mode_confirm, labeled at 2023-12-20T20:15:57.305000+07:00, found confirmed trip starting at 2023-12-20T19:05:27.266615+07:00 with no user input
For input 2023-12-20T20:02:07.148000+07:00 of type manual/purpose_confirm, labeled at 2023-12-20T20:16:02.741000+07:00, found confirmed trip starting at 2023-12-20T19:05:27.266615+07:00 with no user input
delete all entries after timestamp 1703042820.0
deleting all timeseries entries after 1703042820.0, {'n': 40502, 'ok': 1.0}
deleting all analysis timeseries entries after 1703042820.0, {'n': 6477, 'ok': 1.0}
```
After reverting to a previous snapshot by using the script in #112, which uses the standard matching algorithm and incorporates multiple assertions to validate the reset, I still get the same values (#112 (comment)). Next steps:
|
Changes have been pushed to production, closing this now. |
Confirmed using https://openpath-stage.nrel.gov/public/?study_config=smart-commute-ebike
which does not have the energy impact calculation