
Conversation

@niran (Contributor) commented Jan 12, 2026

Summary

Cache trie nodes computed during flashblock state root calculations and reuse them across bundle metering simulations. This optimization ensures that each bundle simulation only pays the I/O cost for its own state changes, not for all previous flashblock state changes.

Problem

Without caching, every bundle simulation must:

  1. Compute the hashed post-state for all flashblock changes
  2. Read trie nodes from disk for all accounts/storage touched by flashblocks
  3. Then compute the bundle's incremental changes on top

With many bundles being simulated per flashblock, this redundant I/O becomes a bottleneck.

Solution

Introduce FlashblockTrieCache that:

  1. Caches TrieUpdates and HashedPostState from flashblock state root computation
  2. Uses arc_swap::ArcSwap for lock-free reads and atomic updates
  3. Prepends cached data to the bundle's TrieInput via prepend_cached() (see the sketch after this list)
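
A minimal sketch of what this cache could look like, assuming a single entry keyed by a `u64` flashblock index and using arc_swap's `ArcSwapOption` (a convenience variant of the `ArcSwap` type named above). The field names, the key type, and the exact shape of `FlashblockTrieData` are illustrative, not the PR's actual definitions:

```rust
use std::sync::Arc;

use arc_swap::ArcSwapOption;
use reth_trie_common::{updates::TrieUpdates, HashedPostState};

/// Illustrative stand-in for the data cached per flashblock.
pub struct FlashblockTrieData {
    pub trie_updates: TrieUpdates,
    pub hashed_state: HashedPostState,
}

/// Single-entry cache: the whole entry is swapped atomically, and reads are
/// lock-free, so concurrent bundle simulations can load it cheaply.
pub struct FlashblockTrieCache {
    entry: ArcSwapOption<(u64, FlashblockTrieData)>,
}

impl FlashblockTrieCache {
    pub fn new() -> Self {
        Self { entry: ArcSwapOption::empty() }
    }

    /// Publish trie data computed for the given flashblock, replacing any
    /// previously cached entry.
    pub fn store(&self, flashblock_index: u64, data: FlashblockTrieData) {
        self.entry.store(Some(Arc::new((flashblock_index, data))));
    }

    /// Lock-free read; returns the cached data only if it belongs to the
    /// requested flashblock.
    pub fn load(&self, flashblock_index: u64) -> Option<Arc<(u64, FlashblockTrieData)>> {
        self.entry
            .load_full()
            .filter(|entry| entry.0 == flashblock_index)
    }
}
```

Swapping the whole entry atomically keeps reads lock-free, and a stale entry left over from a previous flashblock is simply ignored by the index check.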

Changes

  • flashblock_trie_cache.rs: New module implementing FlashblockTrieCache with store() and load() methods.
  • meter.rs: Add FlashblockTrieData struct and a cached_flashblock_trie parameter to meter_bundle. Use TrieInput::prepend_cached() when cached data is available (see the sketch after this list).
  • rpc.rs: Integrate with FlashblockTrieCache to load cached trie data for bundle simulations.
  • lib.rs: Export new types.
  • Cargo.toml: Add arc-swap and reth-trie-common dependencies.
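
As a rough illustration of the meter.rs change, a hypothetical helper (reusing the `FlashblockTrieData` sketch above, and assuming `TrieInput::prepend_cached` accepts the cached `TrieUpdates` and `HashedPostState` by value) might fold the cached data into the bundle's `TrieInput` like this:

```rust
use reth_trie::TrieInput;

// Hypothetical helper (not the PR's actual meter.rs code) showing how cached
// flashblock trie data could be folded into a bundle's TrieInput before the
// state root computation.
fn apply_cached_flashblock_trie(
    trie_input: &mut TrieInput,
    cached_flashblock_trie: Option<&FlashblockTrieData>,
) {
    if let Some(cached) = cached_flashblock_trie {
        // With the flashblock's trie nodes and hashed post-state already in
        // place, the bundle's state root walk only has to fetch nodes for its
        // own incremental changes.
        trie_input.prepend_cached(cached.trie_updates.clone(), cached.hashed_state.clone());
    }
}
```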

Performance Impact

Before: Each bundle simulation reads trie nodes for all flashblock changes
After: Each bundle simulation only reads trie nodes for its own changes

The improvement scales with:

  • Number of accounts/storage slots modified by flashblocks
  • Number of bundles simulated per flashblock

Test Plan

  • Existing metering tests pass
  • State root calculation produces correct results with cached data
  • cargo clippy passes

@niran niran force-pushed the niran/meter-trie-cache branch from abeece0 to d96a3ce on January 12, 2026 at 20:03
@niran niran force-pushed the niran/meter-flashblocks-state-v2 branch from 994978c to 13ef63c on January 12, 2026 at 20:03
@danyalprout danyalprout added this to the v0.3.0 milestone Jan 12, 2026
@niran niran force-pushed the niran/meter-flashblocks-state-v2 branch from 13ef63c to 6817f24 on January 12, 2026 at 22:06
@niran niran force-pushed the niran/meter-trie-cache branch 2 times, most recently from d492e12 to afad0ab on January 13, 2026 at 00:58
@niran niran force-pushed the niran/meter-flashblocks-state-v2 branch 2 times, most recently from 8dadef0 to 724a0c9 on January 13, 2026 at 01:22
@niran niran force-pushed the niran/meter-trie-cache branch from afad0ab to 364f2ee on January 13, 2026 at 01:22
Add a single-entry cache for flashblock trie nodes to avoid redundant I/O
when metering multiple bundles against the same flashblock state.

Key changes:
- Add FlashblockTrieCache with ensure_cached() for lazy trie computation
- Add FlashblockTrieData containing TrieUpdates and HashedPostState
- Update meter_bundle to accept optional cached_flashblock_trie parameter
- Use TrieInput::prepend_cached() to prepend cached trie for state root
- Add arc-swap and reth-trie-common dependencies

When multiple bundles are metered against the same flashblock, the cache
ensures the flashblock's trie is computed only once. Each bundle's state
root calculation then measures only its own incremental I/O.
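
The commit message refers to ensure_cached() rather than the store()/load() pair in the PR description. A hedged sketch of how such a get-or-compute method could sit on top of the cache sketched earlier, with the closure standing in for the flashblock state root computation:

```rust
use std::sync::Arc;

impl FlashblockTrieCache {
    /// Hypothetical get-or-compute variant: returns the cached entry for the
    /// flashblock if present, otherwise runs `compute` once and publishes the
    /// result. If two callers race, both compute and the last store wins,
    /// which is harmless for a single-entry cache holding identical data.
    pub fn ensure_cached<F, E>(
        &self,
        flashblock_index: u64,
        compute: F,
    ) -> Result<Arc<(u64, FlashblockTrieData)>, E>
    where
        F: FnOnce() -> Result<FlashblockTrieData, E>,
    {
        // Fast path: lock-free load of an already-cached entry.
        if let Some(entry) = self.load(flashblock_index) {
            return Ok(entry);
        }
        // Slow path: compute the flashblock trie once and publish it atomically.
        let entry = Arc::new((flashblock_index, compute()?));
        self.entry.store(Some(entry.clone()));
        Ok(entry)
    }
}
```
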
@niran niran force-pushed the niran/meter-trie-cache branch from 364f2ee to c9a7614 on January 13, 2026 at 01:55
@niran niran force-pushed the niran/meter-flashblocks-state-v2 branch from 724a0c9 to be1c7a5 on January 13, 2026 at 01:55