
test(VDF): add bench tests for VDF in rsa group #42

Open · wants to merge 17 commits into master
Conversation

0xmountaintop (Contributor):

This PR adds an example of benchmarking the VDF in rsa_group (using the RSA-2048 modulus).

environment

  • Ubuntu 20.04.1 LTS
  • linux 5.4.0-48-generic
  • cargo 1.44.1 (88ba85757 2020-06-11)
  • rustup 1.22.1 (b01adbbc3 2020-07-08)
  • rustc 1.44.1 (c7087fe00 2020-06-17)

how to test it

cargo bench --bench bench-vdf-rsa --jobs 1
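
For context, a minimal sketch of the shape such a Criterion bench file takes (run_eval here is a hypothetical stand-in for the crate's VDF evaluation, not the PR's actual code):

use criterion::{criterion_group, criterion_main, Criterion};

// Hypothetical stand-in for the crate's VDF evaluation at a given difficulty.
fn run_eval(difficulty: u64) {
    let _ = difficulty;
}

fn bench_eval(c: &mut Criterion) {
    // one named benchmark per difficulty, matching the result labels below
    for &difficulty in &[1000u64, 2000, 5000, 10000, 100000, 1000000] {
        c.bench_function(&format!("eval with difficulty {}", difficulty), |b| {
            b.iter(|| run_eval(difficulty))
        });
    }
}

criterion_group!(benches, bench_eval);
criterion_main!(benches);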

test result

Benchmarking eval with difficulty 1000: Warming up for 3.0000 s
Warning: Unable to complete 100 samples in 5.0s. You may wish to increase target time to 38.2s, or reduce sample count to 10.
eval with difficulty 1000                                                                            
                        time:   [378.84 ms 379.39 ms 380.08 ms]
                        change: [-0.9753% -0.6037% -0.2520%] (p = 0.00 < 0.05)
                        Change within noise threshold.
Found 7 outliers among 100 measurements (7.00%)
  2 (2.00%) high mild
  5 (5.00%) high severe

Benchmarking verify with difficulty 1000: Warming up for 3.0000 s
Warning: Unable to complete 100 samples in 5.0s. You may wish to increase target time to 55.9s, or reduce sample count to 10.
verify with difficulty 1000                                                                            
                        time:   [545.13 ms 545.60 ms 546.16 ms]
                        change: [-7.4577% -6.4547% -5.4789%] (p = 0.00 < 0.05)
                        Performance has improved.
Found 7 outliers among 100 measurements (7.00%)
  2 (2.00%) high mild
  5 (5.00%) high severe

Benchmarking eval with difficulty 2000: Warming up for 3.0000 s
Warning: Unable to complete 100 samples in 5.0s. You may wish to increase target time to 5.2s, or reduce sample count to 90.
eval with difficulty 2000                                                                            
                        time:   [51.903 ms 51.967 ms 52.035 ms]
Found 3 outliers among 100 measurements (3.00%)
  2 (2.00%) high mild
  1 (1.00%) high severe

Benchmarking verify with difficulty 2000: Warming up for 3.0000 s
Warning: Unable to complete 100 samples in 5.0s. You may wish to increase target time to 15.0s, or reduce sample count to 30.
verify with difficulty 2000                                                                            
                        time:   [141.99 ms 142.08 ms 142.18 ms]
Found 10 outliers among 100 measurements (10.00%)
  10 (10.00%) high mild

Benchmarking eval with difficulty 5000: Warming up for 3.0000 s
Warning: Unable to complete 100 samples in 5.0s. You may wish to increase target time to 13.2s, or reduce sample count to 30.
eval with difficulty 5000                                                                            
                        time:   [125.14 ms 125.30 ms 125.48 ms]
Found 21 outliers among 100 measurements (21.00%)
  11 (11.00%) high mild
  10 (10.00%) high severe

Benchmarking verify with difficulty 5000: Warming up for 3.0000 s
Warning: Unable to complete 100 samples in 5.0s. You may wish to increase target time to 52.3s, or reduce sample count to 10.
verify with difficulty 5000                                                                            
                        time:   [487.24 ms 487.65 ms 488.12 ms]
Found 9 outliers among 100 measurements (9.00%)
  2 (2.00%) high mild
  7 (7.00%) high severe

Benchmarking eval with difficulty 10000: Warming up for 3.0000 s
Warning: Unable to complete 100 samples in 5.0s. You may wish to increase target time to 25.5s, or reduce sample count to 10.
eval with difficulty 10000                                                                            
                        time:   [252.55 ms 252.86 ms 253.21 ms]
Found 15 outliers among 100 measurements (15.00%)
  6 (6.00%) high mild
  9 (9.00%) high severe

Benchmarking verify with difficulty 10000: Warming up for 3.0000 s
Warning: Unable to complete 100 samples in 5.0s. You may wish to increase target time to 31.9s, or reduce sample count to 10.
verify with difficulty 10000                                                                            
                        time:   [308.68 ms 309.44 ms 310.36 ms]
Found 20 outliers among 100 measurements (20.00%)
  16 (16.00%) high mild
  4 (4.00%) high severe

Benchmarking eval with difficulty 100000: Warming up for 3.0000 s
Warning: Unable to complete 100 samples in 5.0s. You may wish to increase target time to 83.7s, or reduce sample count to 10.
eval with difficulty 100000                                                                            
                        time:   [842.41 ms 847.44 ms 853.19 ms]
Found 12 outliers among 100 measurements (12.00%)
  4 (4.00%) high mild
  8 (8.00%) high severe

Benchmarking verify with difficulty 100000: Warming up for 3.0000 s
Warning: Unable to complete 100 samples in 5.0s. You may wish to increase target time to 96.0s, or reduce sample count to 10.
verify with difficulty 100000                                                                            
                        time:   [948.37 ms 953.38 ms 959.61 ms]
Found 7 outliers among 100 measurements (7.00%)
  3 (3.00%) high mild
  4 (4.00%) high severe

Benchmarking eval with difficulty 1000000: Warming up for 3.0000 s
Warning: Unable to complete 100 samples in 5.0s. You may wish to increase target time to 694.4s, or reduce sample count to 10.
eval with difficulty 1000000                                                                            
                        time:   [6.9406 s 6.9448 s 6.9492 s]
Found 7 outliers among 100 measurements (7.00%)
  7 (7.00%) high mild

Benchmarking verify with difficulty 1000000: Warming up for 3.0000 s
Warning: Unable to complete 100 samples in 5.0s. You may wish to increase target time to 16.1s, or reduce sample count to 30.
verify with difficulty 1000000                                                                            
                        time:   [158.90 ms 159.14 ms 159.43 ms]
Found 8 outliers among 100 measurements (8.00%)
  5 (5.00%) high mild
  3 (3.00%) high severe

@omershlo (Contributor):

is this PR ready to be merged?

@0xmountaintop (Contributor, Author):

> is this PR ready to be merged?

@omershlo Yes I think so!

Would you mind taking a look again?

for _ in 0..t {
r2 = r.clone() * two.clone();
b = r2.clone().div_rem_floor(l.clone()).0;
r = r2.clone().div_rem_floor(l.clone()).1;

Contributor:

You need to run r2.clone().div_rem_floor(l.clone()) only once (instead of twice).
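
A minimal sketch of that fix, assuming the same rug::Integer types as the snippet above (div_rem_floor returns the quotient and remainder as a pair, so a single call gives both):

for _ in 0..t {
    r2 = r.clone() * two.clone();
    // one call yields both quotient and remainder
    let (quot, rem) = r2.clone().div_rem_floor(l.clone());
    b = quot;
    r = rem;
    // ... rest of the loop body unchanged
}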

Comment on lines 78 to 82
// inverse, to get enough security bits
let inverse = match hashed.invert(&modulus.clone()) {
Ok(inverse) => inverse,
Err(unchanged) => unchanged,
};

Contributor:

This is actually not needed in this function

@0xmountaintop (Contributor, Author), Nov 21, 2020:

Do we need the same trick as #42 (comment) here?

hasher.update(seed.to_digits::<u8>(Order::Lsf));
let hashed = Integer::from_digits(&hasher.finalize(), Order::Lsf);

// inverse, to get enough security bits

Contributor:

I double-checked, and we actually cannot use the inverse trick, as it does not produce a random element of the group.
Instead you should concatenate the results of nine sha256 calls and take the result modulo N (sketched further down).

@0xmountaintop (Contributor, Author), Nov 21, 2020:

Should I run the recursive hashing only on the intermediate hash result itself, and then at the end use the parts to reconstruct the randomness?

Or should I hash->construct->hash->construct repeatedly?

The latter seems to offer more entropy?

i.e., should I

let mut part = h_g_inner(seed);
let mut result = part.clone();
let mut ent = HASH_ENT;
while ent < GROUP_ENT {
    part = h_g_inner(&part);
    result = (result << HASH_ENT) + part.clone();
    ent += HASH_ENT;
}
result.div_rem_floor(modulus.clone()).1

or

let mut result = h_g_inner(seed);
let mut ent = HASH_ENT;
while ent < GROUP_ENT {
    // hash the accumulated result so far, then append that hash to it
    let part = h_g_inner(&result);
    result = (result << HASH_ENT) + part;
    ent += HASH_ENT;
}
result.div_rem_floor(modulus.clone()).1

@0xmountaintop (Contributor, Author), Nov 21, 2020:

I also came up with another idea.

We can actually

sha256("part1"||seed) || sha256("part2"||seed) || ... || sha256("part8"||seed)

i.e., we keep using the same seed as part of the input, but introduce "partXXX" prefixes to provide different randomness.

We then concatenate them all; see the sketch below.
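
A minimal sketch of this construction, combined with the nine-SHA-256 suggestion above (h_g_sketch, the part counter used as the "partXXX" tag, and the byte-level details are illustrative assumptions, not the PR's code):

use rug::{integer::Order, Integer};
use sha2::{Digest, Sha256};

// Illustrative: derive a group element from nine domain-separated SHA-256
// digests (9 * 256 = 2304 bits, enough to cover the 2048-bit modulus),
// then reduce modulo N.
fn h_g_sketch(seed: &Integer, modulus: &Integer) -> Integer {
    let seed_bytes = seed.to_digits::<u8>(Order::Lsf);
    let mut concat = Vec::with_capacity(9 * 32);
    for i in 0u8..9 {
        let mut hasher = Sha256::new();
        hasher.update([i]); // plays the role of the "partXXX" prefix
        hasher.update(&seed_bytes);
        concat.extend_from_slice(hasher.finalize().as_slice());
    }
    Integer::from_digits(&concat, Order::Lsf)
        .div_rem_floor(modulus.clone())
        .1
}

Note this also does all hashing and concatenation in bytes, converting to Integer only once at the end, which matches the refactor idea mentioned below.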


Contributor:

I think all of the methods above work, and we should take the fastest.

0xmountaintop added a commit to 0xmountaintop/class-groups that referenced this pull request Nov 21, 2020

@0xmountaintop (Contributor, Author), Nov 21, 2020:

BTW, h_g_inner currently converts between bytes and Integer just to better demonstrate the logic. This is tedious and unnecessary: we could do the hashing and concatenation entirely in bytes, and convert to Integer once at the end.

I will refactor it that way once we decide which of the three hash-and-concat approaches above we want to use.

@omershlo (Contributor):

let me know when ready to merge

@0xmountaintop (Contributor, Author):

updated benchmark results:

$ cargo fmt && cargo bench --bench bench-vdf-rsa  --jobs 1
    Finished bench [optimized] target(s) in 0.07s
     Running target/release/deps/bench_vdf_rsa-6a178b6bb8445182
Gnuplot not found, using plotters backend
eval with difficulty 1000                                                                            
                        time:   [7.5328 ms 7.5852 ms 7.6428 ms]
                        change: [+2.8713% +3.8219% +4.7723%] (p = 0.00 < 0.05)
                        Performance has regressed.
Found 4 outliers among 100 measurements (4.00%)
  2 (2.00%) high mild
  2 (2.00%) high severe

verify with difficulty 1000                                                                            
                        time:   [944.50 us 948.74 us 953.69 us]
                        change: [-14.040% -10.687% -7.5197%] (p = 0.00 < 0.05)
                        Performance has improved.
Found 5 outliers among 100 measurements (5.00%)
  2 (2.00%) high mild
  3 (3.00%) high severe

eval with difficulty 2000                                                                            
                        time:   [13.640 ms 13.719 ms 13.801 ms]
                        change: [-14.981% -11.285% -7.6220%] (p = 0.00 < 0.05)
                        Performance has improved.
Found 4 outliers among 100 measurements (4.00%)
  4 (4.00%) high mild

Benchmarking verify with difficulty 2000: Warming up for 3.0000 s
Warning: Unable to complete 100 samples in 5.0s. You may wish to increase target time to 7.0s, enable flat sampling, or reduce sample count to 50.
verify with difficulty 2000                                                                             
                        time:   [1.3749 ms 1.3810 ms 1.3894 ms]
                        change: [+12.347% +13.774% +15.231%] (p = 0.00 < 0.05)
                        Performance has regressed.
Found 10 outliers among 100 measurements (10.00%)
  5 (5.00%) high mild
  5 (5.00%) high severe

eval with difficulty 5000                                                                            
                        time:   [34.430 ms 34.672 ms 34.962 ms]
                        change: [-3.8130% -2.1946% -0.5532%] (p = 0.01 < 0.05)
                        Change within noise threshold.
Found 9 outliers among 100 measurements (9.00%)
  5 (5.00%) high mild
  4 (4.00%) high severe

Benchmarking verify with difficulty 5000: Warming up for 3.0000 s
Warning: Unable to complete 100 samples in 5.0s. You may wish to increase target time to 6.6s, enable flat sampling, or reduce sample count to 60.
verify with difficulty 5000                                                                             
                        time:   [1.3095 ms 1.3201 ms 1.3346 ms]
                        change: [+20.283% +21.947% +23.644%] (p = 0.00 < 0.05)
                        Performance has regressed.
Found 8 outliers among 100 measurements (8.00%)
  6 (6.00%) high mild
  2 (2.00%) high severe

Benchmarking eval with difficulty 10000: Warming up for 3.0000 s
Warning: Unable to complete 100 samples in 5.0s. You may wish to increase target time to 6.9s, or reduce sample count to 70.
eval with difficulty 10000                                                                            
                        time:   [65.710 ms 65.900 ms 66.130 ms]
                        change: [-4.7590% -3.6740% -2.6779%] (p = 0.00 < 0.05)
                        Performance has improved.
Found 12 outliers among 100 measurements (12.00%)
  4 (4.00%) high mild
  8 (8.00%) high severe

Benchmarking verify with difficulty 10000: Warming up for 3.0000 s
Warning: Unable to complete 100 samples in 5.0s. You may wish to increase target time to 5.1s, enable flat sampling, or reduce sample count to 70.
verify with difficulty 10000                                                                             
                        time:   [1.0345 ms 1.0411 ms 1.0493 ms]
                        change: [-99.746% -99.742% -99.739%] (p = 0.00 < 0.05)
                        Performance has improved.
Found 6 outliers among 100 measurements (6.00%)
  5 (5.00%) high mild
  1 (1.00%) high severe

Benchmarking eval with difficulty 100000: Warming up for 3.0000 s
Warning: Unable to complete 100 samples in 5.0s. You may wish to increase target time to 69.8s, or reduce sample count to 10.
eval with difficulty 100000                                                                            
                        time:   [670.80 ms 674.53 ms 678.68 ms]
Found 2 outliers among 100 measurements (2.00%)
  1 (1.00%) high mild
  1 (1.00%) high severe

Benchmarking verify with difficulty 100000: Warming up for 3.0000 s
Warning: Unable to complete 100 samples in 5.0s. You may wish to increase target time to 6.3s, enable flat sampling, or reduce sample count to 60.
verify with difficulty 100000                                                                             
                        time:   [1.2220 ms 1.2251 ms 1.2302 ms]
Found 21 outliers among 100 measurements (21.00%)
  2 (2.00%) high mild
  19 (19.00%) high severe

Benchmarking eval with difficulty 1000000: Warming up for 3.0000 s
Warning: Unable to complete 100 samples in 5.0s. You may wish to increase target time to 685.1s, or reduce sample count to 10.
eval with difficulty 1000000                                                                            
                        time:   [6.6123 s 6.6311 s 6.6512 s]
Found 22 outliers among 100 measurements (22.00%)
  17 (17.00%) high mild
  5 (5.00%) high severe

verify with difficulty 1000000                                                                            
                        time:   [913.82 us 919.67 us 925.02 us]
Found 11 outliers among 100 measurements (11.00%)
  3 (3.00%) high mild
  8 (8.00%) high severe
