
Commit 5d1e8e6
fix typos (#184)
buldazer23 authored Jan 18, 2024
1 parent a6da47a commit 5d1e8e6
Showing 4 changed files with 6 additions and 6 deletions.
docs/spec/components/indexer.md (4 changes: 2 additions & 2 deletions)
@@ -9,11 +9,11 @@ The indexer has the function of maintaining accumulators based on events.

An accumulator consists of an object, an event filter, and a mutator. Whenever an event matching the filter is received, the event is fed to the mutator, which updates the object (possibly making calls to the current smart contract state). We can configure how much history of the accumulator to store.

- An accumulator can optionally define a custom method for initializing the accumulator from state from an intermediate checkpoint. This includes methods such as pulling state from a smart contract or getting calldata associated with a particular transaction.
+ An accumulator can optionally define a custom method for initializing the accumulator from state at an intermediate checkpoint. This includes methods such as pulling state from a smart contract or getting calldata associated with a particular transaction.

The indexer is one of the only stateful components of the operator. To avoid reindexing on restarts, the state of the indexer is stored in a database. We will use a schemaless db to avoid migrations.
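To make the accumulator pattern concrete, here is a minimal Go sketch; every type, field, and method name below is hypothetical, not taken from the EigenDA operator's codebase:

```go
package indexer

// Event is an on-chain event delivered to the indexer.
type Event struct {
	Block   uint64
	Payload []byte
}

// Checkpoint is a stored historical state of the accumulated object.
type Checkpoint struct {
	Block uint64
	State interface{}
}

// Accumulator pairs an object with an event filter and a mutator, and
// keeps enough history to always reach back to a finalized state.
type Accumulator struct {
	State   interface{}
	Filter  func(Event) bool
	Mutate  func(state interface{}, e Event) (interface{}, error)
	History []Checkpoint
}

// Process feeds a matching event to the mutator, recording the prior
// state so it can be restored after a reorg.
func (a *Accumulator) Process(e Event) error {
	if !a.Filter(e) {
		return nil
	}
	next, err := a.Mutate(a.State, e)
	if err != nil {
		return err
	}
	a.History = append(a.History, Checkpoint{Block: e.Block, State: a.State})
	a.State = next
	return nil
}
```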

- The indexer must also support reorg resistance. We can achieve simple reorg resilience in the following way:
+ The indexer must also support reorg resistance. We can achieve simple reorg resilience in the following ways:
- For every accumulator, we make sure to store history long enough that we always have access to a finalized state.
- In the event reorg is detected, we can revert to the most recent finalized state, and then reindex to head.
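Continuing the hypothetical sketch above, the two bullets translate roughly into a revert-then-reindex routine (again, the names are illustrative; `reindex` is assumed to refetch events for the block range and feed them back through `Process`):

```go
// RevertTo restores the most recent checkpoint at or before the given
// finalized block and discards everything after it. If no such
// checkpoint exists, the state is left unchanged and the caller must
// reindex from scratch.
func (a *Accumulator) RevertTo(finalizedBlock uint64) {
	for i := len(a.History) - 1; i >= 0; i-- {
		if a.History[i].Block <= finalizedBlock {
			a.State = a.History[i].State
			a.History = a.History[:i]
			return
		}
	}
}

// HandleReorg reverts to the most recent finalized state, then replays
// events up to the current chain head.
func (a *Accumulator) HandleReorg(finalizedBlock, head uint64, reindex func(from, to uint64) error) error {
	a.RevertTo(finalizedBlock)
	return reindex(finalizedBlock+1, head)
}
```

Keying checkpoints by block number is what lets the revert step find a finalized state without replaying from genesis.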

docs/spec/protocol-modules/attestation/attestation.md (4 changes: 2 additions & 2 deletions)
@@ -41,10 +41,10 @@ These requirements result in the following design choices:
- If an attestation is reorged out and if the transaction containing the header of a batch is not present within `BLOCK_STALE_MEASURE` blocks since `referenceBlockNumber` and the block that is `BLOCK_STALE_MEASURE` blocks since `referenceBlockNumber` is finalized, then the disperser should again start a new dispersal with that blob of data. Otherwise, the disperser must not re-submit another transaction containing the header of a batch associated with the same blob of data.
- Payment payloads sent to DA nodes should only take into account finalized attestations.

- The first and second decision satisfies requirements 1 and 2. The three decisions together satisfy requirement 3.
+ The first and second decisions satisfy requirements 1 and 2. The three decisions together satisfy requirement 3.
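As a rough illustration of the resubmission rule in the first two bullets, a hedged Go sketch follows; the constant's value is a placeholder and the signature is invented for illustration, not the disperser's actual API:

```go
// BlockStaleMeasure is a placeholder value for the protocol's
// BLOCK_STALE_MEASURE parameter; the spec, not this sketch, sets it.
const BlockStaleMeasure = 100

// ShouldRedisperse returns true only when the attestation was reorged
// out, the batch header landed in no block within BLOCK_STALE_MEASURE
// blocks of the reference block, and the block at that boundary is
// already finalized, so the old attestation can never re-enter the
// canonical chain. Otherwise the disperser must not resubmit.
func ShouldRedisperse(reorgedOut, headerIncluded bool, referenceBlock, lastFinalizedBlock uint64) bool {
	boundary := referenceBlock + BlockStaleMeasure
	return reorgedOut && !headerIncluded && lastFinalizedBlock >= boundary
}
```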

Whenever the `confirmBatch` method of the [ServiceMananger.sol](../contracts-service-manager.md) is called, the following checks are used to ensure that only finalized registration state is utilized:
- - Stake staleness check. The `referenceBlockNumber` is verified to be within `BLOCK_STALE_MEASURE` blocks before the confirmation block.This is to make sure that batches using outdated stakes are not confirmed. It is assured that stakes from within `BLOCK_STALE_MEASURE` blocks before confirmation are valid by delaying removal of stake by `BLOCK_STALE_MEASURE + MAX_DURATION_BLOCKS`.
+ - Stake staleness check. The `referenceBlockNumber` is verified to be within `BLOCK_STALE_MEASURE` blocks before the confirmation block.This is to make sure that batches using outdated stakes are not confirmed. It is assured that stakes from within `BLOCK_STALE_MEASURE` blocks before confirmation are valid by delaying removal of stakes by `BLOCK_STALE_MEASURE + MAX_DURATION_BLOCKS`.
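A minimal sketch of the staleness check, written in Go for consistency with the other sketches even though the real check runs on-chain in the service manager contract; it reuses the placeholder `BlockStaleMeasure` constant from the previous sketch:

```go
// StakeIsFresh accepts a reference block only if it falls within
// BLOCK_STALE_MEASURE blocks before the confirmation block.
func StakeIsFresh(referenceBlock, confirmationBlock uint64) bool {
	return referenceBlock < confirmationBlock &&
		confirmationBlock-referenceBlock <= BlockStaleMeasure
}
```

Delaying stake removal by `BLOCK_STALE_MEASURE + MAX_DURATION_BLOCKS` is what guarantees that any reference block passing this check still reflects valid stakes.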

### Confirmer Permissioning

docs/spec/protocol-modules/overview.md (2 changes: 1 addition & 1 deletion)
@@ -7,7 +7,7 @@ The overall security guarantee provided by EigenDA is actually a composite of ma
The main guarantee supported by the attestation module concerns the on-chain conditions under which a batch is able to be confirmed by the EigenDA smart contracts. In particular, the attestation module is responsible for upholding the following guarantee:
- Sufficient stake checking: A blob is only accepted on-chain when signatures from operators having sufficient stake on each quorum are presented.

- The Attestation module is largely implemented by the EigenDA smart contracts via bookkeeping of stake and associated checks performed at the batch confirmation phase of the [Disperal Flow](../flows/dispersal.md). For more details, see the [Attestation module documentation](./attestation/attestation.md)
+ The Attestation module is largely implemented by the EigenDA smart contracts via bookkeeping of stakes and associated checks performed at the batch confirmation phase of the [Disperal Flow](../flows/dispersal.md). For more details, see the [Attestation module documentation](./attestation/attestation.md)
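As a hedged illustration of the sufficient-stake guarantee above, assuming hypothetical per-quorum fields (not the contracts' actual storage layout):

```go
// QuorumResult summarizes one quorum at batch confirmation time.
type QuorumResult struct {
	SignedStake         uint64 // stake backing the presented signatures
	SignedStakeTotal    uint64 // total registered stake in the quorum
	ThresholdPercentage uint64 // required percentage, 0-100
}

// SufficientStake checks SignedStake/SignedStakeTotal >= Threshold/100
// for every quorum, using integer cross-multiplication to avoid
// rounding; a batch is accepted on-chain only if this holds.
func SufficientStake(quorums []QuorumResult) bool {
	for _, q := range quorums {
		if q.SignedStake*100 < q.SignedStakeTotal*q.ThresholdPercentage {
			return false
		}
	}
	return true
}
```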

## Storage
The main guarantee supported by the storage module concerns the off-chain conditions which mirror the on-chain conditions of the storage module. In particular, the storage module is responsible for upholding the following guarantee:
docs/spec/protocol-modules/storage/assignment.md (2 changes: 1 addition & 1 deletion)
@@ -48,7 +48,7 @@ where $\gamma = \beta-\alpha$, with $\alpha$ and $\beta$ as defined in the [Stor

This means that as long as an operator has a stake share of at least $1/M_\text{max}$, then the encoded data that they will receive will be within a factor of 2 of their share of stake. Operators with less than $1/M_\text{max}$ of stake will receive no more than a $1/M_\text{max}$ of the encoded data. $M_\text{max}$ represents the maximum number of chunks that the disperser can be required to encode per blob. This limit is included because proving costs scale somewhat super-linearly with the number of chunks.

- In the future, additional constraints on chunk length may be added; for instance, the chunk length may be set in order to maintain a fixed number of chunks per blob across all system states. Currently, the protocol does not mandate a specific value for the chunk length, but will accept the range satisfying the above constraint. The `CalculateChunkLength` function is provided as a convenience function which can be used to find a chunk length satisfying the protocol requirements.
+ In the future, additional constraints on chunk length may be added; for instance, the chunk length may be set in order to maintain a fixed number of chunks per blob across all system states. Currently, the protocol does not mandate a specific value for the chunk length, but will accept the range satisfying the above constraint. The `CalculateChunkLength` function is provided as a convenience function that can be used to find a chunk length satisfying the protocol requirements.
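The spec names `CalculateChunkLength` but does not show it in this excerpt, so as a stand-in, here is a hedged Go predicate encoding one reading of the bound stated above; the function name, signature, and the factor-of-2 interpretation are all assumptions of this sketch:

```go
// WithinAssignmentBound checks the stated guarantee for a hypothetical
// assignment: an operator with stake share of at least 1/M_max should
// receive an encoded-data share within a factor of 2 of its stake
// share, while a smaller operator receives at most 1/M_max of the
// encoded data. This is not the protocol's CalculateChunkLength.
func WithinAssignmentBound(stakeShare, dataShare float64, mMax int) bool {
	floor := 1.0 / float64(mMax)
	if stakeShare >= floor {
		return dataShare >= stakeShare/2 && dataShare <= 2*stakeShare
	}
	return dataShare <= floor
}
```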



