This repository contains the benchmark results and visualization code of the plotly_resampler
paper and toolkit.
The benchmark process follows these steps for each toolkit-visualization configuration:
- Each configuration script is called 10 times to average out memory usage and runtime. Note that, because the script is re-called in separate runs, no caching or memory is shared among executions.
- Script execution (sketched in the example below this list):
  - Construct the synthetic visualization data
  - Start VizTracer logging
  - Construct the visualization according to the configuration
  - Wait until the graph is rendered in a Selenium-driven browser
  - Stop VizTracer logging
  - Write the VizTracer results to a JSON file
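The per-script flow could look roughly like the minimal sketch below. This is not the repository's exact code: the use of plotly_resampler's `FigureResampler` / `show_dash`, the local port, and the `js-plotly-plot` readiness check are assumptions made for illustration.

```python
"""Minimal sketch of a single benchmark script run (illustrative, not the actual code)."""
import numpy as np
import plotly.graph_objects as go
from plotly_resampler import FigureResampler
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from viztracer import VizTracer

# 1. Construct the synthetic visualization data (size depends on the configuration)
n = 1_000_000
x = np.arange(n)
y = np.random.randn(n).cumsum()

# 2. Start VizTracer logging
tracer = VizTracer(output_file="benchmark_run.json")
tracer.start()

# 3. Construct the visualization according to the configuration
fig = FigureResampler(go.Figure())
fig.add_trace(go.Scattergl(name="noise"), hf_x=x, hf_y=y)
# Assumed to return while the Dash app keeps serving in the background
fig.show_dash(mode="external", port=8050)

# 4. Wait until the graph is rendered in a Selenium-driven browser
driver = webdriver.Chrome()
driver.get("http://localhost:8050")
WebDriverWait(driver, 60).until(
    # "js-plotly-plot" is the CSS class Plotly assigns to rendered plot divs (assumption)
    EC.presence_of_element_located((By.CLASS_NAME, "js-plotly-plot"))
)

# 5. Stop VizTracer logging and 6. write the results to a JSON file
tracer.stop()
tracer.save()
driver.quit()
```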
The existing benchmark JSONs were collected on a desktop with an AMD Ryzen 5 2600X CPU @ 3.8 GHz and 48 GB of RAM, running Arch Linux. Other running processes were kept to a minimum.
More information about these results can be found in the reports README.
To install the required dependencies, run `poetry install`.
If you want to re-run the benchmarks, use the run_scripts notebook to generate new benchmark JSONs and then visualize them with the benchmark visualization notebook.
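As an illustration of the isolation between runs, a driver along the following lines could invoke every configuration script in its own interpreter process. The `benchmark_scripts` folder name is an assumption; the actual orchestration lives in the run_scripts notebook.

```python
"""Sketch of repeating each benchmark script in isolated processes (illustrative)."""
import subprocess
import sys
from pathlib import Path

N_RUNS = 10  # average memory usage and runtime over 10 independent runs

# Hypothetical location of the toolkit-visualization configuration scripts
scripts = sorted(Path("benchmark_scripts").glob("*.py"))

for script in scripts:
    for _ in range(N_RUNS):
        # A fresh interpreter per run, so no caching or memory is shared
        subprocess.run([sys.executable, str(script)], check=True)
```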
We are open to new benchmark use cases via pull requests!
Examples of other interesting benchmarks are:
- other data properties
- other eligible tools
- graph-interaction response time
👤 Jonas Van Der Donckt