Fix: Avoid _like function in Chunking #340

Merged
1 commit merged on Apr 14, 2022

Commits on Apr 13, 2022

  1. Fix: Avoid _like function in Chunking

    When we prepare chunked reads, we assume a single chunk for all
    backends except ADIOS2. To prepare the returned data, we used
    `data = np.full_like(record_component, np.nan)`. It turns out
    that numpy triggers a `__getitem__` access or a full copy of our
    `record_component` at this point, which causes a severe slowdown.
    
    This was first seen for particles, but it affects every read where
    we do not slice a subset. (A sketch of the cheaper allocation
    pattern follows below.)
    
    Co-authored-by: AlexanderSinn <[email protected]>
    ax3l and AlexanderSinn committed Apr 13, 2022
    Commit fd41324
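
For readers of the commit message above, here is a minimal NumPy sketch of the two allocation patterns involved. It is illustrative only: `make_nan_buffer`, the default `float64` dtype, and the `record_component.shape` / `record_component.dtype` usage are assumptions for the sketch, not the actual patch in this PR.

```python
import numpy as np

def make_nan_buffer(shape, dtype=np.float64):
    """Allocate a NaN-filled result buffer from shape and dtype alone,
    so the (possibly lazily loaded) source array is never touched.
    Assumes a floating-point dtype, since NaN cannot fill integer arrays."""
    return np.full(shape, np.nan, dtype=dtype)

# Slow pattern described in the commit message: np.full_like() receives the
# array-like object itself and, while deducing its shape and dtype, can
# trigger a __getitem__ access or a full copy of record_component.
#   data = np.full_like(record_component, np.nan)

# Cheaper pattern: pass only metadata (shape, dtype) to numpy.
#   data = make_nan_buffer(record_component.shape, record_component.dtype)
```

The design point is that `np.full` needs only metadata, whereas `np.full_like` is handed the array-like object and may read or copy it before any data is actually requested.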