Unexpectedly slow for hashing large data when compiling to WASM #303
Hmm, that's not expected, and I'm not sure what might cause it. Is there any chance that a memory leak in your alloc function is compounding over lots of iterations? Could you give me a complete benchmark script to try out on my end?
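For what it's worth, this is roughly the kind of compounding leak that question is about: a hand-rolled allocator export whose buffers are only reclaimed if the caller remembers to free them. This is a hypothetical sketch, not code from the linked repo; the names `alloc` and `dealloc` are assumptions.

```rust
// Hypothetical hand-rolled allocator exports, as a WASM wrapper lib might define them.
// If the JS side calls `alloc` on every benchmark iteration but never calls `dealloc`,
// the wasm linear memory keeps growing until allocation (or a later memory access) fails.
#[no_mangle]
pub extern "C" fn alloc(len: usize) -> *mut u8 {
    let mut buf = Vec::<u8>::with_capacity(len);
    let ptr = buf.as_mut_ptr();
    std::mem::forget(buf); // intentionally leaked here; must be reclaimed by `dealloc`
    ptr
}

#[no_mangle]
pub extern "C" fn dealloc(ptr: *mut u8, len: usize) {
    // Rebuild the Vec so it is dropped and the memory is returned to the allocator.
    unsafe {
        drop(Vec::from_raw_parts(ptr, 0, len));
    }
}
```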
https://github.com/dao-xyz/blake3-wasm
I created a repo with only the minimal things. Currently I cannot even run more than one benchmark with a minimal hash function like this:

```rust
#[wasm_bindgen]
pub fn hash(data: &[u8]) -> Vec<u8> {
    blake3::hash(data).as_bytes().to_vec()
}
```

I get the following error upon the next benchmark:
Previously with this repo, I also observed that when I run the benchmark multiple times with the same data size to hash, it becomes slower and slower and sometimes crashes with a "bus error". Now, however, I cannot reproduce that, since for some reason I can no longer do multiple iterations before I get the SIGBUS error. I am currently using Node 19.3.0, but get the same behaviour on 18.12.1.
So when I run the benchmark with asynchronous/concurrent hash calls, it fails with the SIGBUS error, i.e.

```ts
suite.add("hash wasm simple" + size / 1e3 + "kb", {
  defer: true,
  fn: (deferred: any) => {
    const rng = getSample(size);
    bhash(rng);
    deferred.resolve();
  },
});
```

Sequentially, however,

```ts
suite.add("hash wasm simple" + size / 1e3 + "kb", {
  fn: () => {
    const rng = getSample(size);
    bhash(rng);
  },
});
```

works and gives the following output:
Doing the hashing according to the Rust code provided at the beginning of the issue yields similar (or slightly worse) performance than this. In that case I do the alloc outside the benchmark, to mitigate any potential memory leak.
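It's not clear from the thread exactly what that allocation setup looks like, but one way to keep allocation out of the timed path is to write the digest into a caller-provided buffer that is allocated once, outside the benchmark loop. A minimal sketch, assuming the same wasm-bindgen wrapper; `hash_into` is a hypothetical name, not something the linked repo exports:

```rust
use wasm_bindgen::prelude::*;

// Writes the 32-byte BLAKE3 digest into `out`, which the caller allocates once
// and reuses across iterations, so no output Vec is created per call.
#[wasm_bindgen]
pub fn hash_into(data: &[u8], out: &mut [u8]) {
    let hash = blake3::hash(data);
    out[..32].copy_from_slice(hash.as_bytes());
}
```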
Regarding the performance when the benchmark runs to completion (non-deferred): I assume the compiled WASM does not use WASM SIMD, which makes it worse as the data size increases(?). Maybe related to #187. Still, given the benchmarking results provided in the README of this repo, I would assume the WASM version would still outperform, or be on par with, SHA-256, even though it is not as optimized.
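One way to sanity-check the SIMD guess is to expose whether the module was even compiled with the `simd128` target feature. A minimal sketch, assuming the same wasm-bindgen wrapper; `built_with_simd` is a hypothetical name, and this only tells you the feature was enabled for the build, not that the blake3 crate actually has wasm SIMD code paths to take advantage of it:

```rust
use wasm_bindgen::prelude::*;

// Compile-time check: true if the crate was built with the simd128 target feature,
// e.g. via RUSTFLAGS="-C target-feature=+simd128".
#[wasm_bindgen]
pub fn built_with_simd() -> bool {
    cfg!(target_feature = "simd128")
}
```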
So I have written a simple lib that wraps this project in WASM.
I am trying to understand why I am not getting good performance when hashing large data sizes.
It performs great, compared to Node's sha256 implementation, when I run it on small inputs (< 5 kb):
For 1 kb of data it is 5x faster.
For 10 kb it is a little slower.
For 1 mb of data it is 3x slower.
Are these results expected considering the runtime, perhaps its single-threaded nature?
I see the same performance with the JS lib blake3; it performs just as well/badly.
Is there anything I can do to improve this?
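For context on the single-threaded point: on native targets, blake3 gets its large-input speedups from SIMD and, optionally, rayon-based multithreading, neither of which a plain wasm32 build has. A hedged sketch of the native multithreaded path, assuming the crate's optional `rayon` feature is enabled (this is not available from the WASM wrapper):

```rust
// Native-only: requires blake3 = { version = "1", features = ["rayon"] } in Cargo.toml.
// Large inputs are split across threads; the single-threaded wasm32 build has no
// equivalent, which is part of why it looks comparatively slow at 1 mb.
fn hash_large(data: &[u8]) -> blake3::Hash {
    let mut hasher = blake3::Hasher::new();
    hasher.update_rayon(data);
    hasher.finalize()
}
```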