Misc updates #126

Merged
merged 8 commits into from
Dec 7, 2023
14 changes: 10 additions & 4 deletions README.md
Original file line number Diff line number Diff line change
@@ -43,7 +43,7 @@ The library has not been released as a crate yet (as of Nov 2023) but the API ha
- generate inclusion proofs from a list of entity IDs (tree required)
- verify an inclusion proof using a root hash (no tree required)

See the [examples](https://github.com/silversixpence-crypto/dapol/examples) directory for details on how to use the API.
See the [examples](https://github.com/silversixpence-crypto/dapol/examples) directory or [docs](https://docs.rs/dapol/latest/dapol/#rust-api) for details on how to use the API.

### CLI

@@ -81,12 +81,12 @@ Building a tree can be done:

Build a tree using config file (full log verbosity):
```bash
./target/release/dapol -vvv build-tree config-file ./tree_config_example.toml
./target/release/dapol -vvv build-tree config-file ./examples/tree_config_example.toml
```

Add serialization:
```bash
./target/release/dapol -vvv build-tree config-file ./tree_config_example.toml --serialize .
./target/release/dapol -vvv build-tree config-file ./examples/tree_config_example.toml --serialize .
```

Deserialize a tree from a file:
@@ -96,7 +96,13 @@ Deserialize a tree from a file:

Generate proofs (proofs will live in the `./inclusion_proofs/` directory):
```bash
./target/release/dapol -vvv build-tree config-file ./tree_config_example.toml --gen-proofs ./examples/entities_example.csv
./target/release/dapol -vvv build-tree config-file ./examples/tree_config_example.toml --gen-proofs ./examples/entities_example.csv
```

Build a tree using CLI args as opposed to a config file:
```bash
# this will generate random secrets & 1000 random entities
./target/release/dapol -vvv build-tree new --accumulator ndm-smt --height 16 --random-entities 1000
```
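For reference, a tree config file along the lines of `tree_config_example.toml` might look roughly like the sketch below. Only `accumulator_type = "ndm-smt"` is confirmed by the config docs in `src/accumulators/config.rs`; the other key names are illustrative guesses based on the CLI flags and `EntityConfig` fields, so check the real example file:

```toml
# Illustrative sketch only -- key names other than accumulator_type are guesses.
accumulator_type = "ndm-smt"
height = 16

[entities]
num_random_entities = 1000
```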

#### Proof generation
15 changes: 7 additions & 8 deletions benches/criterion_benches.rs
@@ -25,7 +25,7 @@ mod memory_usage_estimation;
use memory_usage_estimation::estimated_total_memory_usage_mb;

mod utils;
use utils::{abs_diff, bytes_as_string, system_total_memory_mb};
use utils::{abs_diff, bytes_to_string, system_total_memory_mb};

/// Determines how many runs are done for number of entities.
/// The higher this value the more runs that are done.
@@ -116,7 +116,7 @@ pub fn bench_build_tree<T: Measurement>(c: &mut Criterion<T>) {
format!(
"height_{}/max_thread_count_{}/num_entities_{}",
h.as_u32(),
t.get_value(),
t.as_u8(),
n
),
),
@@ -168,19 +168,18 @@ pub fn bench_build_tree<T: Measurement>(c: &mut Criterion<T>) {
let path = Accumulator::parse_accumulator_serialization_path(dir).unwrap();
let acc = Accumulator::NdmSmt(ndm_smt.expect("Tree should have been built"));

group.bench_with_input(
group.bench_function(
BenchmarkId::new(
"serialize_tree",
format!(
"height_{}/max_thread_count_{}/num_entities_{}",
h.as_u32(),
t.get_value(),
t.as_u8(),
n
),
),
&(h, t, n),
|bench, tup| {
bench.iter(|| acc.serialize(path.clone()));
|bench| {
bench.iter(|| acc.serialize(path.clone()).unwrap());
},
);

@@ -190,7 +189,7 @@ pub fn bench_build_tree<T: Measurement>(c: &mut Criterion<T>) {

println!(
"\nSerialized tree file size: {}\n",
bytes_as_string(file_size as usize)
bytes_to_string(file_size as usize)
);
}
}
2 changes: 1 addition & 1 deletion benches/inputs.rs
@@ -64,7 +64,7 @@ pub fn num_entities_greater_than(n: u64) -> Vec<u64> {
pub fn max_thread_counts() -> Vec<MaxThreadCount> {
let mut tc: Vec<u8> = Vec::new();

let max_thread_count: u8 = MaxThreadCount::default().get_value();
let max_thread_count: u8 = MaxThreadCount::default().as_u8();

let step = if max_thread_count < 8 {
1
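The `max_thread_counts` body above is truncated in the diff: it computes a step size from the default max thread count, but only the `< 8` branch is visible. A hypothetical completion of that input generator might look like this (the `/ 8` stride for larger machines is our guess, not the crate's actual code):

```rust
// Hypothetical completion of the benchmark input generator: small machines
// try every thread count, larger machines step in coarser strides.
fn thread_count_inputs(max_thread_count: u8) -> Vec<u8> {
    let step = if max_thread_count < 8 {
        1
    } else {
        max_thread_count / 8
    };
    (1..=max_thread_count).step_by(step as usize).collect()
}
```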
8 changes: 4 additions & 4 deletions benches/large_input_benches.rs
@@ -12,7 +12,7 @@ mod memory_usage_estimation;
use memory_usage_estimation::estimated_total_memory_usage_mb;

mod utils;
use utils::{bytes_as_string, system_total_memory_mb, abs_diff};
use utils::{bytes_to_string, system_total_memory_mb, abs_diff};

/// Determines how many runs are done for number of entities.
/// The higher this value the fewer runs that are done.
@@ -87,7 +87,7 @@ fn main() {
"\nRunning benchmark for input values \
(height {}, max_thread_count {}, num_entities {})",
h.as_u32(),
t.get_value(),
t.as_u8(),
n
);

@@ -137,9 +137,9 @@ fn main() {
Serialized tree file size: {}\n \
========================================================================",
tree_build_time,
bytes_as_string(mem_used_tree_build),
bytes_to_string(mem_used_tree_build),
serialization_time,
bytes_as_string(file_size as usize)
bytes_to_string(file_size as usize)
);
}
}
8 changes: 4 additions & 4 deletions benches/memory_measurement.rs
@@ -49,7 +49,7 @@ impl Measurement for Memory {
struct MemoryFormatter;
impl ValueFormatter for MemoryFormatter {
fn format_value(&self, value: f64) -> String {
bytes_as_string(value as usize)
bytes_to_string(value as usize)
}

fn format_throughput(&self, throughput: &Throughput, value: f64) -> String {
@@ -60,7 +60,7 @@ impl ValueFormatter for MemoryFormatter {
}
}

fn scale_values(&self, typical_value: f64, values: &mut [f64]) -> &'static str {
fn scale_values(&self, _typical_value: f64, values: &mut [f64]) -> &'static str {
for val in values {
*val = ((*val / 1024u64.pow(2) as f64) * 1000.0).round() / 1000.0;
}
@@ -69,7 +69,7 @@ impl ValueFormatter for MemoryFormatter {

fn scale_throughputs(
&self,
typical_value: f64,
_typical_value: f64,
throughput: &Throughput,
values: &mut [f64],
) -> &'static str {
@@ -105,7 +105,7 @@ impl ValueFormatter for MemoryFormatter {
}
}

fn bytes_as_string(num_bytes: usize) -> String {
fn bytes_to_string(num_bytes: usize) -> String {
if num_bytes < 1024 {
format!("{} bytes", num_bytes)
} else if num_bytes >= 1024 && num_bytes < 1024usize.pow(2) {
2 changes: 1 addition & 1 deletion benches/memory_usage_estimation.rs
@@ -1,4 +1,4 @@
use dapol::{Accumulator, EntityId, Height, InclusionProof, MaxThreadCount};
use dapol::{Height};

/// Estimated memory usage in MB.
/// The equation was calculated using the plane_of_best_fit.py script
7 changes: 4 additions & 3 deletions benches/utils.rs
@@ -19,7 +19,7 @@ pub fn abs_diff(x: usize, y: usize) -> usize {
}
}

pub fn bytes_as_string(num_bytes: usize) -> String {
pub fn bytes_to_string(num_bytes: usize) -> String {
if num_bytes < 1024 {
format!("{} bytes", num_bytes)
} else if num_bytes >= 1024 && num_bytes < 1024usize.pow(2) {
@@ -46,6 +46,7 @@ pub fn bytes_as_string(num_bytes: usize) -> String {
// -------------------------------------------------------------------------------------------------
// Testing jemalloc_ctl to make sure it gives expected memory readings.

#[allow(dead_code)]
pub fn bench_test_jemalloc_readings() {
use jemalloc_ctl::{epoch, stats};

@@ -65,8 +66,8 @@ pub fn bench_test_jemalloc_readings() {

println!(
"buf capacity: {:<6}",
bytes_as_string(buf.capacity())
bytes_to_string(buf.capacity())
);

println!("Memory usage: {} allocated", bytes_as_string(diff),);
println!("Memory usage: {} allocated", bytes_to_string(diff),);
}
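The renamed `bytes_to_string` helper is truncated in the diff above; a minimal standalone sketch of such a 1024-based formatter follows (the unit labels and two-decimal precision are our assumptions, not necessarily what the crate prints):

```rust
// Sketch of a 1024-based byte-count formatter, mirroring the truncated
// bytes_to_string shown in the diff. Unit labels are illustrative.
fn bytes_to_string(num_bytes: usize) -> String {
    if num_bytes < 1024 {
        format!("{} bytes", num_bytes)
    } else if num_bytes < 1024usize.pow(2) {
        format!("{:.2} kB", num_bytes as f64 / 1024.0)
    } else if num_bytes < 1024usize.pow(3) {
        format!("{:.2} MB", num_bytes as f64 / 1024f64.powi(2))
    } else {
        format!("{:.2} GB", num_bytes as f64 / 1024f64.powi(3))
    }
}
```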
2 changes: 1 addition & 1 deletion src/accumulators/config.rs
@@ -14,7 +14,7 @@ use super::{ndm_smt, Accumulator};
/// accumulator_type = "ndm-smt"
/// ```
///
/// The rest of the config details can be found in the submodules:
/// The rest of the config details can be found in the sub-modules:
/// - [crate][accumulators][NdmSmtConfig]
///
/// Config deserialization example:
18 changes: 16 additions & 2 deletions src/accumulators/ndm_smt.rs
@@ -3,7 +3,7 @@ use std::collections::HashMap;
use primitive_types::H256;
use serde::{Deserialize, Serialize};

use log::error;
use log::{error, info};
use logging_timer::{timer, Level};

use rayon::prelude::*;
@@ -89,6 +89,20 @@ impl NdmSmt {
let salt_b_bytes = secrets.salt_b.as_bytes();
let salt_s_bytes = secrets.salt_s.as_bytes();

info!(
"\nCreating NDM-SMT with the following configuration:\n \
- height: {}\n \
- number of entities: {}\n \
- master secret: 0x{}\n \
- salt b: 0x{}\n \
- salt s: 0x{}",
height.as_u32(),
entities.len(),
master_secret_bytes.iter().map(|b| format!("{:02x}", b)).collect::<String>(),
salt_b_bytes.iter().map(|b| format!("{:02x}", b)).collect::<String>(),
salt_s_bytes.iter().map(|b| format!("{:02x}", b)).collect::<String>(),
);

let (leaf_nodes, entity_coord_tuples) = {
// Map the entities to bottom-layer leaf nodes.

@@ -263,7 +277,7 @@ fn new_padding_node_content_closure(
move |coord: &Coordinate| {
// TODO unfortunately we copy data here, maybe there is a way to do without
// copying
let coord_bytes = coord.as_bytes();
let coord_bytes = coord.to_bytes();
// pad_secret is given as 'w' in the DAPOL+ paper
let pad_secret = generate_key(None, &master_secret_bytes, Some(&coord_bytes));
let pad_secret_bytes: [u8; 32] = pad_secret.into();
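The `info!` call added above hex-encodes each secret with a map-and-collect pass over the bytes. As a standalone illustration of that idiom (the helper name here is ours, not part of the crate's API):

```rust
// Hex-encode a byte slice the same way the info! call does:
// format each byte as two lowercase hex digits and concatenate.
fn to_hex(bytes: &[u8]) -> String {
    bytes.iter().map(|b| format!("{:02x}", b)).collect::<String>()
}
```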
13 changes: 11 additions & 2 deletions src/accumulators/ndm_smt/ndm_smt_config.rs
@@ -151,7 +151,11 @@ impl NdmSmtConfigBuilder {
pub fn build(&self) -> NdmSmtConfig {
let entities = EntityConfig {
file_path: self.entities.clone().and_then(|e| e.file_path).or(None),
num_random_entities: self.entities.clone().and_then(|e| e.num_random_entities).or(None),
num_random_entities: self
.entities
.clone()
.and_then(|e| e.num_random_entities)
.or(None),
};

NdmSmtConfig {
@@ -256,7 +260,12 @@ mod tests {
fn builder_without_any_values_fails() {
use crate::entity::EntitiesParserError;
let res = NdmSmtConfigBuilder::default().build().parse();
assert_err!(res, Err(NdmSmtConfigParserError::EntitiesError(EntitiesParserError::NumEntitiesNotSet)));
assert_err!(
res,
Err(NdmSmtConfigParserError::EntitiesError(
EntitiesParserError::NumEntitiesNotSet
))
);
}

#[test]
45 changes: 29 additions & 16 deletions src/accumulators/ndm_smt/x_coord_generator.rs
@@ -35,24 +35,37 @@ use std::collections::HashMap;
/// optimized by a HashMap. This algorithm wraps the `rng`, efficiently avoiding
/// collisions. Here is some pseudo code explaining how it works:
///
/// ```bash,ignore
/// if N > max_x_coord throw error
/// for i in range [0, N]:
/// - pick random k in range [i, max_x_coord]
/// - if k in map then set v = map[k]
/// - while map[v] exists: v = map[v]
/// - result = v
/// - else result = k
/// - set map[k] = i
/// Key:
/// - `n` is the number of users that need to be mapped to leaf nodes
/// - `x_coord` is the index of the leaf node (left-most x-coord is 0,
/// right-most x-coord is `max_x_coord`)
/// - `user_mapping` is the result of the algorithm, where each user is given a
/// leaf node index i.e. `user_mapping: users -> indices`
/// - `tracking_map` is used to determine which indices have been used
///
/// ```python,ignore
/// if n > max_x_coord throw error
///
/// user_mapping = new_empty_hash_map()
/// tracking_map = new_empty_hash_map()
///
/// for i in [0, n):
/// pick random k in range [i, max_x_coord]
/// if k in tracking_map then set v = tracking_map[k]
/// while tracking_map[v] exists: v = tracking_map[v]
/// set user_mapping[i] = v
/// else user_mapping[i] = k
/// set tracking_map[k] = i
/// ```
///
/// Assuming `rng` is constant-time the above algorithm has time complexity
/// `O(N)`. Note that the second loop (the while loop) will only execute a
/// total of `N` times throughout the entire loop cycle of the first loop.
/// This is because the second loop will only execute if a chain in the map
/// exists, and the worst case happens when there is 1 long chain containing
/// all the elements of the map; in this case the second loop will only execute
/// on 1 of the iterations of the first loop.
/// Assuming `rng` is constant-time and the HashMap is backed by a balanced
/// search tree, the above algorithm has time and memory complexity
/// `O(n log(n))` in the worst case. Note that the second loop (the while loop)
/// will only execute a total of `n` times across the entire cycle of the first
/// loop. This is because the second loop only executes if a chain exists in
/// the map, and the worst case occurs when there is one long chain containing
/// all the elements of the map; in that case the second loop executes on only
/// one of the iterations of the first loop.
pub struct RandomXCoordGenerator {
rng: ThreadRng,
used_x_coords: HashMap<u64, u64>,
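The pseudo code above can be written out as a standalone function. This is a sketch under our own naming and a half-open `[i, max_x_coord)` range convention, not the crate's actual `RandomXCoordGenerator` API; the rng is injected as a closure so the chain-following logic can be exercised deterministically:

```rust
use std::collections::HashMap;

// Map each of n users to a distinct random index in [0, max_x_coord),
// tracking swaps in a HashMap (a sparse Durstenfeld shuffle).
fn random_unique_indices(
    n: u64,
    max_x_coord: u64,
    mut rand_in_range: impl FnMut(u64, u64) -> u64, // picks k in [lo, hi)
) -> Vec<u64> {
    assert!(n <= max_x_coord, "more users than available leaf slots");
    let mut tracking_map: HashMap<u64, u64> = HashMap::new();
    let mut user_mapping = Vec::with_capacity(n as usize);
    for i in 0..n {
        let k = rand_in_range(i, max_x_coord);
        // Follow the chain of prior swaps to find the value living at slot k.
        let mut v = k;
        while let Some(&next) = tracking_map.get(&v) {
            v = next;
        }
        user_mapping.push(v);
        // Record that slot k was consumed at step i.
        tracking_map.insert(k, i);
    }
    user_mapping
}
```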
4 changes: 2 additions & 2 deletions src/binary_tree.rs
@@ -187,7 +187,7 @@ impl Coordinate {
/// the next 8 elements of the array, directly after the first element.
/// Both x- & y-coords are given in Little Endian byte order.
/// https://stackoverflow.com/questions/71788974/concatenating-two-u16s-to-a-single-array-u84
pub fn as_bytes(&self) -> [u8; 32] {
pub fn to_bytes(&self) -> [u8; 32] {
let mut c = [0u8; 32];
let (left, mid) = c.split_at_mut(1);
left.copy_from_slice(&self.y.to_le_bytes());
@@ -427,7 +427,7 @@ mod tests {
let x = 258;
let y = 12;
let coord = Coordinate { x, y };
let bytes = coord.as_bytes();
let bytes = coord.to_bytes();

assert_eq!(bytes.len(), 32, "Byte array should be 256 bits");

Expand Down