Architecture


Our benchmarking framework is designed to support many different types of ZKP frameworks. We specify a generic set of interfaces so that benchmarks can be invoked through a configuration file, config.json, and produce a standardized output for a given benchmarking scenario.

Overview

At a high level, zk-Harness takes a configuration file as input. The “Config Reader” reads the standardized config and invokes the ZKP framework specified in it. You can find a description of the configuration file below. Each integrated ZKP framework exposes a runner that takes the standardized configuration parameters as input and executes the corresponding benchmarks. Benchmarking a given ZKP framework produces a log file in CSV format with standardized metrics. The log file is read by the “Log Analyzer”, which compiles the logs into pandas dataframes that are used by the front-end and displayed on the public website. The standardized logging format is described below.
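The flow above can be sketched in a few lines of Python. Note that this is NOT the actual src.reader implementation; the function names and the runner table are illustrative only.

```python
# Hypothetical sketch of the pipeline: read config, dispatch to a runner.
# Names and the RUNNERS table are illustrative, not the real src.reader code.
import json


def read_config(path):
    """Load and return the benchmark configuration as a dict."""
    with open(path) as f:
        return json.load(f)


# Maps a project name to a function that turns the config into a
# framework-specific benchmark invocation (here just a command string).
RUNNERS = {
    "gnark": lambda cfg: f"./gnark {cfg['payload']['backend'][0]}",
}


def dispatch(cfg):
    """Invoke the runner registered for the configured project."""
    try:
        runner = RUNNERS[cfg["project"]]
    except KeyError:
        raise ValueError(f"no runner registered for {cfg['project']!r}")
    return runner(cfg)
```

Each runner is then responsible for writing the standardized CSV log that the “Log Analyzer” consumes.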

Config.json

The config.json file specifies which benchmarks to execute when invoking the zk-Harness. The following describes how to run benchmarks given a specific config file, then specifies each key in a common config file and its expected values.

Running benchmarks for a specific config file

Given a config file, run python3 -m src.reader --config path/to/config.json to execute the benchmarks it defines.

config.json key specification

project

The name of the project being benchmarked.

project_url

The URL(s) to the repository for the project.

category

The category in which the zk-Harness should benchmark a ZKP framework. Currently, only circuit is supported.

count

The number of runs over which a given computation specified by the config should be averaged. This option is only taken into account if the benchmarking is performed manually and not through a benchmarking framework such as criterion.

payload

payload specifies the exact algorithms to benchmark.

  • backend The backend algorithm(s) to use for proving the specified circuit(s).

  • curves The curve(s) for which the ZKP-framework should be benchmarked.

  • circuits The name of the circuit(s) to benchmark. Equivalent circuits across frameworks should use the same naming scheme for ease of comparison. If you add a new circuit that does not yet exist in any framework, create a new input specification at input/circuit/<circuit_name>/input_<input_related_identifier>.json.

  • algorithm The algorithm to execute. Note that not all algorithms exist for every framework. Valid algorithms to execute in a given framework are currently:

    • compile
    • setup
    • witness
    • prove
    • verify

    If a given algorithm is not specified for the configured framework, the execution will fail.
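Putting the keys together, a hypothetical config.json could look as follows. The values are illustrative only; consult the configs shipped in input/config/ for real, working examples.

```json
{
  "project": "gnark",
  "project_url": "https://github.com/ConsenSys/gnark",
  "category": "circuit",
  "count": 10,
  "payload": {
    "backend": ["groth16"],
    "curves": ["bn254"],
    "circuits": ["sha256"],
    "algorithm": ["compile", "setup", "witness", "prove", "verify"]
  }
}
```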

Logs

In the following sections, we describe the columns in the CSV file for each benchmark category. Note that for "Field Arithmetic" and "Elliptic Curve Group Operations", the logs are generated by the front-end and used only for displaying data in the UI.

Circuits

The following information is recorded for each circuit benchmark:

  • framework: the name of the framework (e.g., gnark)
  • category: the category of the benchmark (i.e., circuit)
  • backend: the backend used (e.g., groth16)
  • curve: the curve used (e.g., bn256)
  • circuit: the circuit being run
  • input: file path of the input used
  • operation: the step being measured -- compile, witness, setup, prove, verify
  • nbConstraints: the number of constraints in the circuit
  • nbSecret: number of secret inputs
  • nbPublic: number of public inputs
  • ram: memory consumed in bytes
  • time: elapsed time in milliseconds
  • proofSize: the size of the proof in bytes -- empty value when operation != prove
  • count: number of times that we run the benchmark
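A minimal sketch of how such a log can be parsed is shown below, using only the standard library. The column names follow the list above; the sample row values are illustrative, not real measurements.

```python
# Parse a circuit benchmark log with the columns listed above.
# The sample rows are fabricated for illustration only.
import csv
import io

SAMPLE_LOG = """framework,category,backend,curve,circuit,input,operation,nbConstraints,nbSecret,nbPublic,ram,time,proofSize,count
gnark,circuit,groth16,bn256,sha256,input_3.json,prove,25000,1,2,104857600,1500,192,10
gnark,circuit,groth16,bn256,sha256,input_3.json,verify,25000,1,2,1048576,3,,10
"""


def load_rows(text):
    """Read log rows; proofSize is empty whenever operation != prove."""
    rows = []
    for row in csv.DictReader(io.StringIO(text)):
        row["time"] = int(row["time"])  # milliseconds
        row["proofSize"] = int(row["proofSize"]) if row["proofSize"] else None
        rows.append(row)
    return rows
```

The same pattern applies to the field arithmetic and elliptic curve logs below, with their respective column sets.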

Field Arithmetic

The following information is recorded for each field arithmetic benchmark:

  • framework: the name of the framework (e.g., gnark)
  • category: the category of the benchmark (i.e., arithmetic)
  • curve: the curve whose field is benchmarked
  • field: the benchmarked field (base or scalar)
  • operation: the operation performed (add, sub, mul, inv, exp)
  • time: elapsed time in nanoseconds
  • count: number of times that we run the benchmark

Elliptic Curve Group Operations

The following information is recorded for each elliptic curve group operation benchmark:

  • framework: the name of the framework (e.g., gnark)
  • category: the category of the benchmark (i.e., ec)
  • curve: the benchmarked curve
  • operation: the operation performed -- g1-scalar-multiplication, g2-scalar-multiplication, g1-multi-scalar-multiplication, g2-multi-scalar-multiplication, pairing
  • input: the provided input
  • time: elapsed time in nanoseconds
  • count: number of times that we run the benchmark

Adding a new framework to the zk-Harness

To integrate a framework, follow these steps:

  1. Fork the zk-harness repository.
  2. Create a frameworks/<framework_name> folder in the root folder of the repository.
  3. Create a custom benchmarking script that (i) reads the standardized input from config.json and (ii) outputs the standardized logs.
    • For example, benchmarking for gnark is done through a custom CLI, based on cobra.
    • Your script should accept a variety of arguments as specified in config.json, so that benchmarks can be easily executed and extended. For example, a common command in the gnark integration is ./gnark groth16 --circuit=sha256 --input=_input/circuit/sha256/input_3.json --curve=bn254
  4. Modify the src/reader/process_circuit.py script to call your newly created script from step 3 whenever the project field of the respective config contains the <framework_name> of your newly added ZKP framework.
    • The src/reader/process_circuit.py processing script is invoked by __main__.py.
  5. Create documentation in frameworks/<framework_name>/README.md outlining how others can add new circuits / benchmarks for another circuit in the framework.
  6. Add config files for running the benchmarks in input/config/<framework_name> and add make rules for the new framework in the Makefile.
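The dispatch described in step 4 can be sketched as follows. This is NOT the actual src/reader/process_circuit.py; the command layout mirrors the gnark example above, and everything else is illustrative.

```python
# Hypothetical sketch of per-framework dispatch: build the CLI invocation
# for the configured project. Not the real process_circuit.py code.
def build_command(config, circuit, curve, input_path, backend):
    """Assemble the CLI invocation for a framework-specific runner."""
    if config["project"] == "gnark":
        return [
            "./gnark", backend,
            f"--circuit={circuit}",
            f"--input={input_path}",
            f"--curve={curve}",
        ]
    raise NotImplementedError(f"no runner for {config['project']!r}")
```

A new framework would add its own branch (or registry entry) here, mapping its <framework_name> to the corresponding command.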

If you follow the specified interfaces for configs and logs, your framework-specific benchmarks should integrate seamlessly into the zk-Harness frontend.

Once finished, please create a Pull Request and assign it to one of the maintainers for review.