I created a simple speed test comparing Python 3.11 with 3.10 (and 3.9 down to 3.5). The tests use Monte Carlo Pi estimation, which is probably not the best workload for a full Python stress test.
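The exact benchmark code lives in the repo; as a minimal sketch of the technique, Monte Carlo Pi estimation samples random points in the unit square and counts how many fall inside the quarter circle of radius 1 (the function name below is illustrative, not the repo's):

```python
import random

def estimate_pi(n: int) -> float:
    """Estimate Pi by random sampling.

    The fraction of points (x, y) in [0, 1)^2 with x^2 + y^2 <= 1
    approaches pi/4 as n grows, so multiplying by 4 estimates pi.
    """
    inside = 0
    for _ in range(n):
        x, y = random.random(), random.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / n

print(estimate_pi(1_000_000))  # roughly 3.14
```

Because the loop body is pure-Python arithmetic in a tight loop, it mostly exercises the interpreter's bytecode dispatch, which is why the same algorithm is such a stark comparison point across interpreters and compiled languages.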
This is the code accompanying a blog post, which you can find here.
- Rust contribution by Luander Ribeiro -> github.
- Julia contribution by Xiaoguang Pan (潘小光) -> github.
- Golang and automation contribution by Greg Hellings -> github.
- Cython and Numba contribution by Jev Kuznetsov -> github.
- C++ (improved) contribution by Juraj Szitas -> github.
You can see the latest results in GitHub Actions for the different languages as well as the different Python versions. The results are collected in a JSON artifact. Below is an example output:
```json
{
  "Julia": 0.0448,
  "Go": 0.0774,
  "Numba": 0.1626,
  "Rust": 0.1755,
  "Cpp": 0.2071,
  "Cython": 0.2703,
  "C": 0.4208,
  "Python_3.11": 12.1766,
  "Python_3.9": 14.1459,
  "Python_3.10": 14.3943,
  "Python_3.7": 16.0291,
  "Python_3.8": 18.6563
}
```
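To work with such an artifact, the JSON can be parsed and ranked fastest-first; this is a sketch using the example values above (the actual artifact file name produced by the workflow may differ):

```python
import json

# A trimmed copy of the example artifact above; in practice you would
# read the downloaded artifact file instead of this inline string.
artifact = '{"Julia": 0.0448, "Go": 0.0774, "C": 0.4208, "Python_3.11": 12.1766}'

results = json.loads(artifact)

# Sort by run time in seconds, fastest first.
for name, seconds in sorted(results.items(), key=lambda kv: kv[1]):
    print(f"{name:>12}: {seconds:.4f}s")
```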
- A Python environment to run the tester
- Docker to download each Python environment
```shell
cd pi_estimates/Python
python run_main_test.py
```
The new Python 3.11 took 6.4605 seconds per run.
Python 3.5 took 11.3014 seconds per run (Python 3.11 is 74.9% faster).
Python 3.6 took 11.4332 seconds per run (Python 3.11 is 77.0% faster).
Python 3.7 took 10.7465 seconds per run (Python 3.11 is 66.3% faster).
Python 3.8 took 10.6904 seconds per run (Python 3.11 is 65.5% faster).
Python 3.9 took 10.9537 seconds per run (Python 3.11 is 69.5% faster).
Python 3.10 took 8.8467 seconds per run (Python 3.11 is 36.9% faster).
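The "X% faster" figures above follow from taking the ratio of the older version's run time to Python 3.11's, minus one, as a percentage; a one-line sketch (the function name is illustrative):

```python
def percent_faster(old_seconds: float, new_seconds: float) -> float:
    """How much faster the new run is relative to the old one, in percent."""
    return (old_seconds / new_seconds - 1) * 100

# Python 3.5 (11.3014 s) vs Python 3.11 (6.4605 s) from the runs above
print(round(percent_faster(11.3014, 6.4605), 1))  # → 74.9
```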