Add benchmarks for ctypes function call overhead #197

Open · wants to merge 2 commits into main

Conversation

@mdboom (Contributor) commented on May 10, 2022

This additionally adds support for building C extensions as part of creating a benchmark's virtual environment, since that didn't seem to be possible prior to this change.

See faster-cpython/ideas#370 for some background discussion.
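
For illustration, here is a minimal sketch of the kind of setup.py a benchmark could ship so that its C extension gets built while the benchmark's virtual environment is created. The package name, module name, and C source file below are hypothetical placeholders, not the files added in this PR.

```python
# Hypothetical setup.py bundled with a benchmark; the venv-creation step
# would build and install this extension before the benchmark runs.
from setuptools import Extension, setup

setup(
    name="bm_ctypes_cext",  # hypothetical package name
    version="0.0.1",
    ext_modules=[
        Extension("cfuncs", sources=["cfuncs.c"]),  # hypothetical C source
    ],
)
```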

@ericsnowcurrently self-requested a review on May 16, 2022
@mdboom (Contributor, Author) commented on May 23, 2022

Wanted to add a note: This is definitely a "microbenchmark" that isn't a great example of a real-world workload.

@ericsnowcurrently (Member) left a comment

Mostly LGTM. There are some slight changes we should consider though, particularly using runner.bench_time_func() for micro benchmarks.
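
For reference, a minimal sketch of how pyperf's runner.bench_time_func() could be used for a micro-benchmark like this: pyperf passes the loop count to the timing function and expects the elapsed time back. The shared-library path, function name, and benchmark name below are assumptions for illustration, not the PR's actual code.

```python
import ctypes
import time

import pyperf


def time_void_foo_void(loops, func):
    # Time `loops` calls of the ctypes function and return the elapsed seconds.
    t0 = time.perf_counter()
    for _ in range(loops):
        func()
    return time.perf_counter() - t0


if __name__ == "__main__":
    lib = ctypes.CDLL("./cfuncs.so")  # hypothetical shared library
    runner = pyperf.Runner()
    # Extra positional args are forwarded to the timing function by pyperf.
    runner.bench_time_func("ctypes_void_foo_void", time_void_foo_void,
                           lib.void_foo_void)
```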

pyperformance/venv.py (outdated review thread, resolved)
Comment on lines +29 to +75
void_foo_void()
int_foo_int(1)
void_foo_int(1)
void_foo_int_int(1, 2)
void_foo_int_int_int(1, 2, 3)
void_foo_int_int_int_int(1, 2, 3, 4)
void_foo_constchar(b"bytes")
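
For context, a hedged sketch of how the C functions above might be declared through ctypes before the calls are timed; the shared-library name is a placeholder, since the PR builds the real extension inside the benchmark's venv.

```python
import ctypes

lib = ctypes.CDLL("./cfuncs.so")  # hypothetical library name

# Declare argument and return types so ctypes does the same conversion
# work that the benchmark is meant to measure.
lib.void_foo_void.argtypes = []
lib.void_foo_void.restype = None

lib.int_foo_int.argtypes = [ctypes.c_int]
lib.int_foo_int.restype = ctypes.c_int

lib.void_foo_int_int.argtypes = [ctypes.c_int, ctypes.c_int]
lib.void_foo_int_int.restype = None

lib.void_foo_constchar.argtypes = [ctypes.c_char_p]
lib.void_foo_constchar.restype = None
```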
@ericsnowcurrently (Member) commented:

The real benefit of micro-benchmarks is that they narrow down where performance regressions might be. With that in mind, would these different signatures have enough independent potential for regression that it would make sense to have a separate benchmark for each? Would it be worth bothering even if they did?

@ericsnowcurrently (Member) left a comment

Thanks for the changes. I just have a couple more suggestions.

pyperformance/venv.py (outdated review thread, resolved)
doc/custom_benchmarks.rst (outdated review thread, resolved)
pyperformance/_benchmark.py (outdated review thread, resolved)
@mdboom requested a review from ericsnowcurrently on May 27, 2022
@kumaraditya303 self-requested a review on June 7, 2022
doc/benchmarks.rst (outdated review thread, resolved)
doc/benchmarks.rst (outdated review thread, resolved)
@mdboom requested a review from kumaraditya303 on June 22, 2022
@kumaraditya303 (Contributor) commented:

LGTM, but I am a bit concerned that benchmarking all of the call argtypes in one iteration can hide regressions. I can't think of any way to fix that apart from splitting this benchmark further, into a separate benchmark for each set of argtypes.
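
A rough sketch of the split described here, registering one pyperf benchmark per call signature so a regression in one calling convention shows up in its own result instead of being averaged away. The benchmark names, the signature table, and the `lib` handle are illustrative assumptions, not the PR's code.

```python
import time

# Illustrative table: one benchmark per signature, mirroring the calls above.
SIGNATURES = [
    ("ctypes_void_foo_void", "void_foo_void", ()),
    ("ctypes_int_foo_int", "int_foo_int", (1,)),
    ("ctypes_void_foo_int_int", "void_foo_int_int", (1, 2)),
    ("ctypes_void_foo_constchar", "void_foo_constchar", (b"bytes",)),
]


def register_benchmarks(runner, lib):
    # `runner` is a pyperf.Runner; `lib` is the ctypes.CDLL handle.
    for bench_name, func_name, args in SIGNATURES:
        func = getattr(lib, func_name)

        def time_func(loops, func=func, args=args):
            t0 = time.perf_counter()
            for _ in range(loops):
                func(*args)
            return time.perf_counter() - t0

        runner.bench_time_func(bench_name, time_func)
```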

@ericsnowcurrently (Member) left a comment

Mostly LGTM. I've left comments on a few minor things and to get clarification in a couple of spots.

pyperformance/_benchmark.py (outdated review thread, resolved)
pyperformance/venv.py (outdated review thread, resolved)
@ericsnowcurrently (Member) left a comment

A quick follow-up suggestion...

pyperformance/venv.py (outdated review thread, resolved)
pyperformance/_pip.py (outdated review thread, resolved)
@ericsnowcurrently (Member) left a comment

LGTM

I'll leave it up to you whether to apply my recommended change.

@mdboom requested a review from ericsnowcurrently on July 27, 2022
@ericsnowcurrently (Member) left a comment

LGTM

There's only one small suggestion, which I'll leave to your discretion.

if os.path.isfile(req):
    name = os.path.basename(req)
    if name == "setup.py":
        req = os.path.dirname(req)
@ericsnowcurrently (Member) commented:

The reason we use the dirname isn't obvious, so it may be worth adding a comment here indicating pip's limitations.

Comment on lines +28 to +29
# pip doesn't support installing a setup.py,
# but it does support installing from the directory it is in.
@ericsnowcurrently (Member) commented:

This comment is what I was thinking of above. Consider moving it there.
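
Putting the two points together, a small sketch of what the helper in pyperformance/_pip.py might look like with the comment moved next to the dirname handling, as suggested. The function name normalize_requirement is a hypothetical stand-in for the actual code; the body mirrors the snippet quoted above.

```python
import os


def normalize_requirement(req):
    # Hypothetical helper name; mirrors the quoted snippet.
    if os.path.isfile(req):
        name = os.path.basename(req)
        if name == "setup.py":
            # pip doesn't support installing a setup.py,
            # but it does support installing from the directory it is in.
            req = os.path.dirname(req)
    return req
```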
