
remove vectorization #4987

Open · wants to merge 18 commits into main

Conversation

@AlexGuteniev (Contributor) commented Sep 28, 2024

📜 The algorithm

In-place remove algorithm that uses the bit mask of vector comparisons as a shuffle index to remove certain elements.
The destination is advanced by a size also taken from a lookup table, although popcnt could have been used. (A simplified sketch follows the list below.)

The details vary depending on element size:

  • The tables have up to 256 entries, to process up to 8 elements at once; bigger tables don't fit top-level caches well. Some tables are smaller, when fewer than 8 elements fit in the vector.
  • The 8-bit and 16-bit variants use SSE only: 8 elements of 8 or 16 bits fit an SSE register, and more elements would require a bigger table. Also, there's no cross-lane shuffle in SSE4.2 / AVX2 that works with elements of these sizes. They use the famous pshufb / _mm_shuffle_epi8 to remove elements.
  • The 32-bit and 64-bit variants use AVX2; the 64-bit variant uses a table of only 16 entries, as there are at most 4 elements. They use vpermd / _mm256_permutevar8x32_epi32, which is cross-lane, to remove elements. SSE fallbacks with smaller tables are provided, and are still, surprisingly, more efficient than scalar.
  • A contiguous mask with a single bit per element is needed, not several bits per comparison. The 8-bit variant uses pmovmskb. The 32-bit and 64-bit variants use vmovmskps / vmovmskpd; although these are for floating-point types, they fit well and avoid cross-lane swizzling to compress the mask. For 16-bit, packsswb is used, although pshufb could have been used as well.
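
For illustration, here is a simplified sketch of how the 32-bit variant fits together (placeholder names such as keep_table / keep_count, a plain scalar tail, and table construction at first use are simplifications for this description, not the actual library code):

```cpp
// Simplified, hypothetical sketch of the 32-bit AVX2 path; not the PR's actual code.
#include <immintrin.h>
#include <cstdint>

namespace {
    // keep_table[m] lists, for match mask m, the lane indices of elements that are
    // NOT removed, packed to the front; keep_count[m] is how many such lanes exist.
    uint8_t keep_table[256][8];
    uint8_t keep_count[256];

    void build_tables() {
        for (int m = 0; m < 256; ++m) {
            uint8_t n = 0;
            for (uint8_t i = 0; i < 8; ++i) {
                if (!(m & (1 << i))) {
                    keep_table[m][n++] = i; // lane i survives
                }
            }
            keep_count[m] = n;
            // remaining entries stay zero; they are don't-care lanes
        }
    }
}

uint32_t* remove_avx2_u32(uint32_t* first, uint32_t* last, const uint32_t val) {
    static const bool tables_ready = (build_tables(), true); // built once, for self-containment
    (void) tables_ready;

    uint32_t* dest = first;
    const __m256i match = _mm256_set1_epi32(static_cast<int>(val));

    for (; last - first >= 8; first += 8) {
        const __m256i data = _mm256_loadu_si256(reinterpret_cast<const __m256i*>(first));
        const __m256i eq   = _mm256_cmpeq_epi32(data, match);
        // One bit per 32-bit lane: the integer compare result viewed through vmovmskps.
        const unsigned mask =
            static_cast<unsigned>(_mm256_movemask_ps(_mm256_castsi256_ps(eq)));
        // The mask indexes a table of byte-sized permutation indices, widened to
        // the epi32 indices that vpermd expects.
        const __m256i perm   = _mm256_cvtepu8_epi32(_mm_loadu_si64(keep_table[mask]));
        const __m256i packed = _mm256_permutevar8x32_epi32(data, perm);
        // The whole vector is stored; only the first keep_count[mask] lanes matter
        // (these are the "superfluous writes" discussed below).
        _mm256_storeu_si256(reinterpret_cast<__m256i*>(dest), packed);
        dest += keep_count[mask];
    }

    for (; first != last; ++first) { // scalar tail
        if (*first != val) {
            *dest++ = *first;
        }
    }
    return dest;
}
```

The 8-bit and 16-bit variants follow the same structure, with pshufb in place of the cross-lane vpermd and no widening of the shuffle-table entries.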

🔍 Find first!

Before even starting, find is performed to locate the first element to be removed. This is done for correctness, and there are also performance reasons why it is good:

  • Correctness. [algorithms.requirements]/3 states: For purposes of determining the existence of data races, algorithms shall not modify objects referenced through an iterator argument unless the specification requires such modification. While [alg.remove] is vague on how the algorithm should work, I think we should only write to elements that have to be written to.
  • Vectorization. We can always use the full AVX2 vector size as the step, not only for 32-bit and 64-bit elements.
  • Memory bandwidth. The vectorized algorithm might be memory bound; saving writes may make it faster.
  • Number of operations. Merely testing the contents takes fewer ops.

The existing find implementation is called. Hypothetically, I could implement it inline and save some instructions in some cases, but such an optimization has a negligible effect on performance while increasing complexity noticeably. This might be revisited for a future remove_copy, if it and this were to share the implementation.
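
In outline, the structure is roughly as follows (simplified sketch reusing the hypothetical remove_avx2_u32 from above; std::find stands in for the internal vectorized find):

```cpp
// Simplified sketch of the driver: nothing before the first element to be removed
// is ever written, which gives both the correctness and the performance benefits
// listed above.
#include <algorithm>

uint32_t* remove_u32(uint32_t* first, uint32_t* last, const uint32_t val) {
    first = std::find(first, last, val); // locate the first element to remove
    if (first == last) {
        return last; // nothing to remove, nothing written at all
    }
    return remove_avx2_u32(first, last, val); // compaction starts at the first match
}
```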

⚠️ Correctness doubt - superfluous writes

The algorithm removes elements from the source vector (of 8 or fewer elements) with a shuffle operation, so that non-removed elements are placed contiguously in that vector. Then it writes the whole vector to the destination and advances the destination pointer by the number of non-removed elements.

As a result:

  • In the resulting range, some elements are overwritten with intermediate values before they are overwritten with the expected values.
  • In the removed range, some elements (up to 8 of them) are overwritten with the values of other elements, and never restored.

I have no doubts that overwriting elements in the resulting range with intermediate values before setting them to the expected values is correct. The write and the data race (in abstract machine terms) exist anyway, so the extra write is not observable.

I have concerns regarding damaging the removed range. Changing these values is observable.

I'd appeal to the fact that elements in the removed range stay in a valid-but-unspecified state, although I understand that the Standard says that to enable moving non-trivially-copyable elements, not to permit what I did.

Note that:

  • It is possible to avoid any superfluous writes, but doing so has some cost.
  • The cost of avoiding superfluous writes is small for 32-bit and 64-bit elements, and larger for 8-bit and 16-bit elements (see the sketch after this list).
  • A similarly vectorized remove_copy would have to avoid superfluous writes anyway.
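
For illustration, one way the 32-bit case could avoid the superfluous writes is a masked store in place of the full-vector store from the earlier sketch (hypothetical, not what this PR does; store_mask_table is an illustrative name):

```cpp
// Hypothetical alternative to the full-vector store: mask the store to the first
// keep_count[mask] lanes so no element outside the resulting range is written.
// store_mask_table[n] holds n leading all-ones 32-bit lanes followed by zeros.
alignas(32) static const uint32_t store_mask_table[9][8] = {
    {0, 0, 0, 0, 0, 0, 0, 0},
    {~0u, 0, 0, 0, 0, 0, 0, 0},
    {~0u, ~0u, 0, 0, 0, 0, 0, 0},
    {~0u, ~0u, ~0u, 0, 0, 0, 0, 0},
    {~0u, ~0u, ~0u, ~0u, 0, 0, 0, 0},
    {~0u, ~0u, ~0u, ~0u, ~0u, 0, 0, 0},
    {~0u, ~0u, ~0u, ~0u, ~0u, ~0u, 0, 0},
    {~0u, ~0u, ~0u, ~0u, ~0u, ~0u, ~0u, 0},
    {~0u, ~0u, ~0u, ~0u, ~0u, ~0u, ~0u, ~0u},
};

// ... inside the loop, replacing _mm256_storeu_si256:
const __m256i store_mask = _mm256_load_si256(
    reinterpret_cast<const __m256i*>(store_mask_table[keep_count[mask]]));
_mm256_maskstore_epi32(reinterpret_cast<int*>(dest), store_mask, packed);
dest += keep_count[mask];
```

This costs an extra table lookup and a masked store, which is typically slower than a plain store; the 8-bit and 16-bit cases have no such masked store available below AVX-512, which is why avoiding the writes is more expensive there.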

🗄️ Memory usage

Unlike most other vectorization algorithms, this one uses large lookup tables. The 8-bit and 32-bit variants use a 2 KiB table; the 16-bit variant uses a 4 KiB table.

This has different performance characteristics compared to purely computational optimizations. In particular, it tends to behave worse in programs whose critical path doesn't fit the cache well. That doesn't apply to benchmarks, but unfortunately it often applies to realistic programs, especially ones not written with performance in mind.

I believe the optimization is still good, or at least not bad, most of the time where it is needed.

⏱️ Benchmark results

Benchmark                            main       this
r<alg_type::std_fn, std::uint8_t>     944 ns    294 ns
r<alg_type::std_fn, std::uint16_t>   1470 ns    297 ns
r<alg_type::std_fn, std::uint32_t>   1059 ns    403 ns
r<alg_type::std_fn, std::uint64_t>   1498 ns    884 ns
r<alg_type::rng, std::uint8_t>       1208 ns    307 ns
r<alg_type::rng, std::uint16_t>      1386 ns    288 ns
r<alg_type::rng, std::uint32_t>      1218 ns    397 ns
r<alg_type::rng, std::uint64_t>      1411 ns    842 ns

@AlexGuteniev AlexGuteniev requested a review from a team as a code owner September 28, 2024 14:36
@CaseyCarter CaseyCarter added the performance Must go faster label Sep 28, 2024
@CaseyCarter (Member)

Irony: a PR that adds vectorization entitled "remove vectorization".

@StephanTLavavej StephanTLavavej self-assigned this Sep 28, 2024
@AlexGuteniev (Contributor, Author)

Modern AMD data would be interesting.
To avoid tricky swizzling, I added mixing of integers and floats in 39974d1. The overall results are better for me, but this mixing seems to incur more of a penalty on AMD than on Intel.
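
Concretely, the mixing amounts to viewing the integer comparison result as floats so that vmovmskps can produce one bit per 32-bit lane (illustrative fragment, using names from the sketch in the description above):

```cpp
// Integer compare, then a float-domain movemask: one bit per 32-bit lane with no
// extra swizzling. Crossing the int/float domains like this can incur a bypass
// delay on some microarchitectures, which is the penalty discussed above.
const __m256i  eq   = _mm256_cmpeq_epi32(data, match);
const unsigned mask = static_cast<unsigned>(_mm256_movemask_ps(_mm256_castsi256_ps(eq)));
```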

Comment on lines +3651 to +3652
uint8_t _Shuf[_Size_v][_Size_h];
uint8_t _Size[_Size_v];

@AlexGuteniev (Contributor, Author) commented Oct 5, 2024

I want to elaborate on the table element bit width decision, though it probably doesn't deserve a code comment.
_Shuf is explicitly widened for 32-bit and 64-bit cases using _mm256_cvtepu8_epi32(_mm_loadu_si64(...));
_Size is implicitly widened for all cases.

This widening causes some extra instructions for _Shuf. For _Size it is apparently free (movzx instead of mov).

I prefer having to widen over having larger tables. Larger tables are consuming more cache, which is hard to notice in a synthetic benchmark, but likely to be noticeable in a realistic program.

Currently the _Shuf table, which is the largest of the two, is 128 bytes for 64-bit elements, 2 KiB for 8 bit and 32 bit elements, 4 KiB for 16 bit elements. Typical L1 cache size is 64 KiB per core.
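
For reference, the widening looks roughly like this (illustrative fragment; _Data and _Mask stand for the loaded vector and the comparison mask in the surrounding loop):

```cpp
// _Shuf stores byte-sized lane indices; they are zero-extended to the 8 x 32-bit
// index vector that vpermd expects. The 64-bit load reads one 8-byte table row.
const __m128i _Narrow  = _mm_loadu_si64(_Shuf[_Mask]);  // 8 bytes of indices
const __m256i _Indices = _mm256_cvtepu8_epi32(_Narrow); // widen to epi32
const __m256i _Packed  = _mm256_permutevar8x32_epi32(_Data, _Indices);
// _Size[_Mask] is read with a zero-extending byte load (movzx), effectively free.
```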

@AlexGuteniev (Contributor, Author)

remove_copy, if it succeeds, is going to be very similar, but it would always use an AVX2 mask, and so would be available only for 32-bit and 64-bit elements. I can try this within the same PR, although it seems big already.

I'm now seeing multiple possible solutions for how to do remove_copy for 8-bit and 16-bit elements.
