
Conversation

@alpe
Contributor

@alpe alpe commented Nov 27, 2025

Overview

Resolves #2609

The basic idea is to store an additional DAHeightHint field within the p2p store.

Since SignedHeader and Data are used in other places too, I added a DAHeightHintContainer type to wrap the tuple for the store only.
The DA height hint is added by the da_submitter and read in the syncer to fetch the missing DA header/data for the most recent block as required.

Please note: this is a breaking change to the p2p network and store
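For illustration, here is a stripped-down sketch of the container idea (field and method names are illustrative, not the exact definitions in this PR):

```go
// DAHeightHintContainer wraps the entity that is gossiped and stored via p2p
// (a signed header or block data) together with the DA height at which it was
// included. Sketch only: the real type constrains the parameter to go-header's
// header.Header interface.
type DAHeightHintContainer[H any] struct {
	Entity       H      // wrapped SignedHeader or Data
	DAHeightHint uint64 // 0 means the DA height is not yet known
}

// WithDAHint returns a copy of the container carrying the given DA height.
func (c DAHeightHintContainer[H]) WithDAHint(daHeight uint64) DAHeightHintContainer[H] {
	c.DAHeightHint = daHeight
	return c
}
```

The da_submitter fills in the hint once the DA inclusion height is known; the syncer reads it to jump straight to the matching DA height instead of scanning.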

@claude
Contributor

claude bot commented Nov 27, 2025

Claude finished @alpe's task in 3m 53s


Comprehensive Code Review: DA Hints in P2P

I've completed a thorough review of this PR focusing on code quality, architecture, security, performance, testing, and the migration strategy. Overall, this is a well-designed feature with good test coverage (70.17%), but there are several areas that need attention before merging.


🔴 Critical Issues

1. Error Handling - Silent Failures in DA Hint Updates

Location: block/internal/submitting/da_submitter.go:213-216 and 258-261

The code silently ignores errors when updating DA height hints in the p2p store:

if err := s.headerDAHintAppender.AppendDAHint(ctx, res.Height, hashes...); err != nil {
    s.logger.Error().Err(err).Msg("failed to append da height hint in header p2p store")
    // ignoring error here, since we don't want to block the block submission
}

Problem: While the comment justifies not blocking submission, this creates a silent failure mode where:

  • DA hints never get stored in p2p
  • The optimization degrades without anyone noticing
  • Syncing nodes fall back to sequential DA scanning, reducing performance

Recommendation:

  1. Add metrics to track DA hint append failures: s.metrics.DAHintAppendFailures.Add(1) (see the sketch after this list)
  2. Consider surfacing persistent failures as health check warnings
  3. Document this failure mode in the PR description
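A minimal sketch of the counter from recommendation 1, assuming prometheus client_golang is already a dependency (the metric name and wiring are illustrative, not existing code):

```go
package submitting

import "github.com/prometheus/client_golang/prometheus"

// daHintAppendFailures counts failed attempts to record a DA height hint in
// the p2p store, so the silent-degradation mode becomes visible on dashboards.
var daHintAppendFailures = prometheus.NewCounter(prometheus.CounterOpts{
	Namespace: "evnode",
	Subsystem: "da_submitter",
	Name:      "da_hint_append_failures_total",
	Help:      "Failed attempts to append a DA height hint to the p2p store.",
})

func init() {
	prometheus.MustRegister(daHintAppendFailures)
}
```

Incrementing it (daHintAppendFailures.Inc()) next to the existing error log keeps the non-blocking behavior while making persistent failures observable.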

2. Concurrency - Race Condition in AsyncDARetriever

Location: block/internal/syncing/async_da_retriever.go:61-76

The inFlight map access has a potential race condition:

func (r *AsyncDARetriever) RequestRetrieval(height uint64) {
    r.mu.Lock()
    defer r.mu.Unlock()
    
    if _, exists := r.inFlight[height]; exists {
        return
    }
    
    select {
    case r.workCh <- height:
        r.inFlight[height] = struct{}{}  // ✅ Protected
        // ...
    default:
        // ❌ Height not marked in-flight, can be re-requested
    }
}

Problem: If the work channel is full (default case), the height is not marked as in-flight, allowing duplicate requests to queue up when capacity becomes available.

Recommendation:

select {
case r.workCh <- height:
    r.inFlight[height] = struct{}{}
    r.logger.Debug().Uint64("height", height).Msg("queued DA retrieval request")
default:
    r.inFlight[height] = struct{}{}  // Mark as "attempted" to prevent immediate retry
    r.logger.Warn().Uint64("height", height).Msg("DA retrieval worker pool full, height marked for later retry")
}

3. Context Handling - Missing Context in Stop

Location: block/internal/syncing/async_da_retriever.go:51-57

func (r *AsyncDARetriever) Stop() {
    if r.cancel != nil {
        r.cancel()
    }
    r.wg.Wait()  // ❌ No timeout, can hang indefinitely
    r.logger.Info().Msg("AsyncDARetriever stopped")
}

Problem: If workers are blocked on network I/O during shutdown, wg.Wait() can hang indefinitely.

Recommendation:

func (r *AsyncDARetriever) Stop() error {
    if r.cancel != nil {
        r.cancel()
    }
    
    done := make(chan struct{})
    go func() {
        r.wg.Wait()
        close(done)
    }()
    
    select {
    case <-done:
        r.logger.Info().Msg("AsyncDARetriever stopped cleanly")
        return nil
    case <-time.After(5 * time.Second):
        r.logger.Warn().Msg("AsyncDARetriever stop timed out, some workers may still be running")
        return fmt.Errorf("stop timed out")
    }
}

⚠️ High Priority Issues

4. Backward Compatibility - Network Protocol Breaking Change

Location: types/p2p_envelope.go and PR description

The PR introduces P2PSignedHeader and P2PData as replacements for the existing SignedHeader and Data types in P2P exchanges. While the proto types use optional uint64 da_height_hint, making them forward-compatible for deserialization, the network exchange is still breaking.

Current Mitigation: Binary compatibility tests exist (TestSignedHeaderBinaryCompatibility, TestDataBinaryCompatibility) showing old→new unmarshaling works.
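For context, the property those tests assert is roughly the following (a sketch, not the actual test code; the helper name is hypothetical and it assumes both types expose the standard MarshalBinary/UnmarshalBinary methods):

```go
package types_test

import (
	"encoding"
	"testing"

	"github.com/stretchr/testify/require"
)

// assertOldDecodesIntoNew captures the old→new compatibility property:
// bytes produced by the legacy type must be accepted by the new type, with
// the optional da_height_hint field simply defaulting to zero when absent.
func assertOldDecodesIntoNew(t *testing.T, oldMsg encoding.BinaryMarshaler, newMsg encoding.BinaryUnmarshaler) {
	t.Helper()
	bz, err := oldMsg.MarshalBinary()
	require.NoError(t, err)
	require.NoError(t, newMsg.UnmarshalBinary(bz))
}
```

Calling it with a populated SignedHeader and a fresh *P2PSignedHeader (and likewise for Data) is the shape of the forward-compatibility guarantee; what it does not cover is behavior on a live, mixed-version network.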

Remaining Gap: The review comment asks "was this tested on an existing network?" - this is crucial. The migration needs:

  1. Phased Rollout Plan:

    • Phase 1: Deploy nodes that can receive both old and new formats
    • Phase 2: Enable sending new format once >67% nodes upgraded
    • Phase 3: Deprecate old format
  2. Protocol Version Negotiation: Consider adding a protocol version to capability discovery in P2P handshakes.

  3. Monitoring: Add metrics to track:

    • p2p_protocol_version_mismatch_errors
    • p2p_da_hint_received_count vs p2p_da_hint_missing_count

Recommendation: Before merging, either:

  • Test on a live testnet with mixed versions, OR
  • Document a clear upgrade procedure requiring coordinated network upgrade

5. Performance - Arbitrary Channel Buffer in Syncer

Location: block/internal/syncing/syncer.go:180

heightInCh: make(chan common.DAHeightEvent, 100),

Problem: Buffer size of 100 is arbitrary. Under heavy sync load:

  • If producers (DA worker, P2P worker, pending worker) exceed this rate, they block
  • No backpressure mechanism or metrics to detect this condition

Recommendation:

  1. Make buffer size configurable: config.Sync.HeightEventChannelSize (default 100), see the sketch after this list
  2. Add metrics: height_event_channel_utilization gauge
  3. Add log warning when buffer >80% full
  4. Consider dynamic backpressure: when channel >90% full, slow down DA worker
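A sketch of recommendation 1; the config struct, field name, and tag are assumptions about the node's configuration layout, not existing code:

```go
// Sketch only: a sync config knob with a default, consulted where the syncer
// builds heightInCh.
const defaultHeightEventChannelSize = 100

type SyncConfig struct {
	// HeightEventChannelSize bounds the buffer between the DA/P2P/pending
	// workers and the processing loop; 0 means "use the default".
	HeightEventChannelSize int `yaml:"height_event_channel_size"`
}

func (c SyncConfig) heightEventChannelSize() int {
	if c.HeightEventChannelSize > 0 {
		return c.HeightEventChannelSize
	}
	return defaultHeightEventChannelSize
}
```

The syncer would then allocate the channel as make(chan common.DAHeightEvent, cfg.heightEventChannelSize()).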

6. Error Handling - Partial Failures in AppendDAHint

Location: pkg/sync/sync_service.go:319-333

func (s *SyncService[H]) AppendDAHint(ctx context.Context, daHeight uint64, hashes ...types.Hash) error {
    entries := make([]H, 0, len(hashes))
    for _, h := range hashes {
        v, err := s.store.Get(ctx, h)
        if err != nil {
            if errors.Is(err, header.ErrNotFound) {
                continue  // ✅ Expected, item not yet in store
            }
            return err  // ❌ But what happens to partial updates?
        }
        // ...
    }
    return s.store.Append(ctx, entries...)
}

Problem: If store.Get() fails for hash[2] out of [0,1,2,3], we:

  1. Return early without updating any hints
  2. Lose the hints for hashes [0,1] that were successfully retrieved
  3. No retry mechanism for transient failures

Recommendation:

// Collect both successes and failures
var entries []H
var failedHashes []types.Hash

for _, h := range hashes {
    v, err := s.store.Get(ctx, h)
    if err != nil {
        if errors.Is(err, header.ErrNotFound) {
            continue
        }
        s.logger.Warn().Err(err).Str("hash", h.String()).Msg("failed to get header for DA hint update")
        failedHashes = append(failedHashes, h)
        continue
    }
    v.SetDAHint(daHeight)
    entries = append(entries, v)
}

// Update what we can
if len(entries) > 0 {
    if err := s.store.Append(ctx, entries...); err != nil {
        return fmt.Errorf("failed to append DA hints for %d entries: %w", len(entries), err)
    }
}

// Report failures for observability
if len(failedHashes) > 0 {
    s.logger.Warn().Int("failed_count", len(failedHashes)).Msg("some DA hints could not be updated")
}

return nil

🟡 Medium Priority Issues

7. Code Quality - Magic Numbers

Location: Multiple locations

async_da_retriever.go:33:  workCh: make(chan uint64, 100),        // Why 100?
async_da_retriever.go:43:  for i := 0; i < 5; i++ {               // Why 5 workers?
syncer.go:180:             heightInCh: make(chan common.DAHeightEvent, 100),  // Why 100?
syncer.go:459:             ticker := time.NewTicker(10 * time.Millisecond)    // Why 10ms?

Recommendation: Extract to named constants or config parameters:

const (
    DefaultAsyncDAWorkerPoolSize    = 5
    DefaultAsyncDAWorkChannelBuffer = 100
    DefaultHeightEventChannelBuffer = 100
    DefaultPendingWorkerTickInterval = 10 * time.Millisecond
)

8. Observability - Missing Metrics

Location: block/internal/syncing/async_da_retriever.go

The new AsyncDARetriever has no metrics for:

  • Active workers (gauge)
  • Queue depth (gauge)
  • Request success/failure rates (counter)
  • Retrieval latency (histogram)

Recommendation:

type AsyncDARetrieverMetrics struct {
    ActiveRequests   prometheus.Gauge
    QueueDepth       prometheus.Gauge
    RequestsTotal    *prometheus.CounterVec  // labels: status=success|failure
    RetrievalLatency prometheus.Histogram
}

9. Testing - Missing Edge Cases

Test Coverage Gaps:

  1. async_da_retriever_test.go - Missing cases:

    • What happens when resultCh blocks? (line 106 in async_da_retriever.go)
    • Context cancellation during active retrieval
    • Error handling from RetrieveFromDA
  2. sync_service_test.go - Need to add:

    • Test AppendDAHint with partial failures
    • Test DA hint propagation through store→p2p→network
    • Test backward compatibility: old node receives new format, new node receives old format
  3. Integration test - Missing:

    • Multi-node test with mixed versions (critical for migration confidence)

Recommendation: Add integration test:

// Test DA hints end-to-end: submitter → store → p2p → syncer
func TestDAHintsIntegration(t *testing.T) {
    // 1. Node A submits block to DA at height 100
    // 2. Verify DA hint stored in A's p2p store
    // 3. Node B syncs from Node A via p2p
    // 4. Verify B receives DA hint
    // 5. Verify B uses hint to fetch from DA
    // 6. Verify B does NOT scan DA heights [genesis, 99]
}

10. Resource Management - Channel Leak Risk

Location: block/internal/syncing/syncer.go:273

close(s.heightInCh)

This is only closed in Stop(). If Stop() is never called (e.g., panic), the channel leaks.

Recommendation:

func (s *Syncer) Start(ctx context.Context) error {
    // ...
    s.heightInCh = make(chan common.DAHeightEvent, 100)
    
    // Ensure cleanup even on panic
    go func() {
        <-ctx.Done()
        // Note: Stop() handles the proper draining, but this ensures
        // cleanup if Stop() is never called
        if s.cancel == nil {
            close(s.heightInCh)
        }
    }()
}

🟢 Positive Observations

✅ Excellent Design Decisions

  1. Binary Compatibility: The use of optional proto fields and compatibility tests shows excellent foresight for migration.

  2. Idempotency: AsyncDARetriever.RequestRetrieval properly deduplicates requests using inFlight map (minor race condition noted above).

  3. Worker Pool Pattern: The 5-worker pool in AsyncDARetriever provides bounded concurrency.

  4. Graceful Degradation: DA hint failures don't block submission - the optimization gracefully degrades to sequential scanning.

  5. Comprehensive Logging: Good use of structured logging throughout with relevant context (height, DA height, errors).


📊 Test Coverage Analysis

Current Coverage: 70.17% (68 lines missing)

Breakdown by file:

  • types/p2p_envelope.go: 53.42% ⚠️ (29 missing, 5 partial)
  • block/internal/syncing/syncer.go: 58.62% ⚠️ (12 missing)
  • pkg/sync/sync_service.go: 69.44% ⚠️ (7 missing, 4 partial)
  • block/internal/syncing/async_da_retriever.go: 86.53% ✅ (6 missing, 1 partial)
  • block/internal/submitting/da_submitter.go: 80.95% ✅ (2 missing, 2 partial)

Recommendation: Prioritize coverage for p2p_envelope.go and syncer.go error paths.


🔒 Security Review

✅ No Security Issues Found

  1. No injection risks: DA hints are uint64, no user input parsing
  2. No privilege escalation: DA hints are metadata, don't affect validation
  3. DoS resistance: Worker pool bounds concurrency, channel buffers prevent unbounded memory growth
  4. No secret exposure: No credentials in DA hints

Minor concern: An attacker could send incorrect DA hints to trigger unnecessary DA fetches, but:

  • This wastes bandwidth, not a critical DoS vector
  • Incorrect hints are eventually corrected by sequential scanning
  • Cost to attacker (bandwidth) equals cost to victim

🏗️ Architecture Assessment

Design Quality: ⭐⭐⭐⭐☆ (4/5)

Strengths:

  • Clean separation: DA submitter → p2p store → syncer
  • Type-safe generics for P2PEnvelope[H]
  • Well-defined interfaces (DAHintAppender, EntityWithDAHint)

Improvement Needed:

  • Migration strategy needs documentation
  • Error handling consistency (silent failures vs. returned errors)

🚦 CI Status

Current CI Status: 1 failure, 4 pending, 3 passed

  • Docs Deploy Preview - Failed (unrelated to code changes)
  • ⏳ CI, Rust Tests, PR feat: DA hints in p2p #2891 - In progress
  • ✅ Docs Build, Rust Lint, Semantic PR - Passed

The failing "Docs Deploy Preview" appears to be an infrastructure issue, not related to this PR's changes.


📋 Summary & Recommendations

Before Merging:

  1. Critical: Address concurrency issues in AsyncDARetriever (#2, #3)
  2. Critical: Test on a live network with mixed versions OR document the upgrade procedure (#4)
  3. High: Add DA hint failure metrics (#1, #8)
  4. High: Fix error handling in AppendDAHint (#6)
  5. Medium: Add integration tests (#9)

Nice-to-Have:

Overall Assessment:

This is a well-designed feature that will significantly improve sync performance. The binary compatibility approach is smart, and the code quality is generally high. However, the migration strategy is the biggest risk - the PR should not merge without either live network testing OR clear upgrade documentation.

Recommendation: Request changes focusing on #1-6 above.


@github-actions
Contributor

github-actions bot commented Nov 27, 2025

The latest Buf updates on your PR. Results from workflow CI / buf-check (pull_request).

| Build | Format | Lint | Breaking | Updated (UTC) |
| --- | --- | --- | --- | --- |
| ✅ passed | ⏩ skipped | ✅ passed | ✅ passed | Jan 21, 2026, 11:11 AM |

@alpe alpe changed the title from Alex/hints to feat: DA hints in p2p Nov 28, 2025
@codecov

codecov bot commented Nov 28, 2025

Codecov Report

❌ Patch coverage is 66.30435% with 93 lines in your changes missing coverage. Please review.
✅ Project coverage is 58.95%. Comparing base (f70e6da) to head (66b6db8).

| Files with missing lines | Patch % | Lines |
| --- | --- | --- |
| pkg/sync/sync_service.go | 57.83% | 21 Missing and 14 partials ⚠️ |
| types/p2p_envelope.go | 53.42% | 29 Missing and 5 partials ⚠️ |
| block/internal/syncing/syncer.go | 58.62% | 12 Missing ⚠️ |
| block/internal/syncing/async_da_retriever.go | 86.53% | 6 Missing and 1 partial ⚠️ |
| block/internal/submitting/da_submitter.go | 80.95% | 2 Missing and 2 partials ⚠️ |
| pkg/store/store.go | 0.00% | 1 Missing ⚠️ |
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #2891      +/-   ##
==========================================
+ Coverage   58.58%   58.95%   +0.37%     
==========================================
  Files         110      112       +2     
  Lines       10396    10625     +229     
==========================================
+ Hits         6090     6264     +174     
- Misses       3662     3698      +36     
- Partials      644      663      +19     
| Flag | Coverage Δ |
| --- | --- |
| combined | 58.95% <66.30%> (+0.37%) ⬆️ |

Flags with carried forward coverage won't be shown.

☔ View full report in Codecov by Sentry.

alpe added 3 commits November 28, 2025 17:20
* main:
  refactor: omit unnecessary reassignment (#2892)
  build(deps): Bump the all-go group across 5 directories with 6 updates (#2881)
  chore: fix inconsistent method name in retryWithBackoffOnPayloadStatus comment (#2889)
  fix: ensure consistent network ID usage in P2P subscriber (#2884)
cache.SetHeaderDAIncluded(headerHash.String(), res.Height, header.Height())
hashes[i] = headerHash
}
if err := s.headerDAHintAppender.AppendDAHint(ctx, res.Height, hashes...); err != nil {
Contributor Author

This is where the DA height is passed to the sync service to update the p2p store

Msg("P2P event with DA height hint, triggering targeted DA retrieval")

// Trigger targeted DA retrieval in background via worker pool
s.asyncDARetriever.RequestRetrieval(daHeightHint)
Contributor Author

This is where the "fetch from DA" is triggered for the current block event height
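For readers of this thread, the consuming side of that call is a plain worker pool. A simplified sketch (based on the fields quoted in the review above; retrieve and the type name are stand-ins, not the PR's exact code):

```go
package syncing

import (
	"context"
	"sync"
)

// asyncRetriever is a simplified stand-in for AsyncDARetriever.
type asyncRetriever struct {
	workCh   chan uint64
	inFlight map[uint64]struct{}
	mu       sync.Mutex
	wg       sync.WaitGroup
	retrieve func(ctx context.Context, daHeight uint64) // stand-in for the real DA fetch
}

// start launches a bounded number of workers that drain workCh; each finished
// height is removed from inFlight so it can be requested again later.
func (r *asyncRetriever) start(ctx context.Context, workers int) {
	for i := 0; i < workers; i++ {
		r.wg.Add(1)
		go func() {
			defer r.wg.Done()
			for {
				select {
				case <-ctx.Done():
					return
				case height := <-r.workCh:
					r.retrieve(ctx, height)
					r.mu.Lock()
					delete(r.inFlight, height)
					r.mu.Unlock()
				}
			}
		}()
	}
}
```

RequestRetrieval (quoted in the review above) is the producer side of the same channel.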

type SignedHeaderWithDAHint = DAHeightHintContainer[*types.SignedHeader]
type DataWithDAHint = DAHeightHintContainer[*types.Data]

type DAHeightHintContainer[H header.Header[H]] struct {
Contributor Author

@alpe alpe Dec 1, 2025

This is a data container to persist the DA hint together with the block header or data.
types.SignedHeader and types.Data are used all over the place, so I did not modify them but introduced this type for the p2p store and transfer only.

It may make sense to make this a Proto type. WDYT?

return nil
}

func (s *SyncService[V]) AppendDAHint(ctx context.Context, daHeight uint64, hashes ...types.Hash) error {
Contributor Author

Stores the DA height hints

@alpe alpe marked this pull request as ready for review December 1, 2025 09:32
@tac0turtle
Contributor

if da hint is not in the proto how do other nodes get knowledge of the hint?

also how would an existing network handle using this feature? its breaking so is it safe to upgrade?

"github.com/evstack/ev-node/block/internal/cache"
"github.com/evstack/ev-node/block/internal/common"
"github.com/evstack/ev-node/block/internal/da"
coreda "github.com/evstack/ev-node/core/da"
Member

nit: gci linter

Member

@julienrbrt julienrbrt left a comment

Nice! It really makes sense.

However, I share the same concern as @tac0turtle about the upgrade strategy, given that it is p2p breaking.

julienrbrt previously approved these changes Dec 2, 2025
@alpe
Contributor Author

alpe commented Dec 2, 2025

if da hint is not in the proto how do other nodes get knowledge of the hint?

The sync_service wraps the header/data payload in a DAHeightHintContainer object that is passed upstream to the p2p layer. When the DA height is known, the store is updated.

also how would an existing network handle using this feature? its breaking so is it safe to upgrade?

It is a breaking change. Instead of signed header or data types, the p2p network exchanges DAHeightHintContainer. This would be incompatible. Also the existing p2p stores would need migration to work.

@julienrbrt
Member

julienrbrt commented Dec 4, 2025

Could we broadcast both until every network is updated? Then, for the final version, we can basically discard the previous one.

@alpe
Contributor Author

alpe commented Dec 5, 2025

fyi: This PR is missing a migration strategy for the p2p store (and ideally the network)

* main:
  refactor(sequencers): persist prepended batch (#2907)
  feat(evm): add force inclusion command (#2888)
  feat: DA client, remove interface part 1: copy subset of types needed for the client using blob rpc. (#2905)
  feat: forced inclusion (#2797)
  fix: fix and cleanup metrics (sequencers + block) (#2904)
  build(deps): Bump mdast-util-to-hast from 13.2.0 to 13.2.1 in /docs in the npm_and_yarn group across 1 directory (#2900)
  refactor(block): centralize timeout in client (#2903)
  build(deps): Bump the all-go group across 2 directories with 3 updates (#2898)
  chore: bump default timeout (#2902)
  fix: revert default db (#2897)
  refactor: remove obsolete // +build tag (#2899)
  fix:da visualiser namespace  (#2895)
alpe added 3 commits December 15, 2025 10:52
* main:
  chore: execute goimports to format the code (#2924)
  refactor(block)!: remove GetLastState from components (#2923)
  feat(syncing): add grace period for missing force txs inclusion (#2915)
  chore: minor improvement for docs (#2918)
  feat: DA Client remove interface part 2,  add client for celestia blob api   (#2909)
  chore: update rust deps (#2917)
  feat(sequencers/based): add based batch time (#2911)
  build(deps): Bump golangci/golangci-lint-action from 9.1.0 to 9.2.0 (#2914)
  refactor(sequencers): implement batch position persistance (#2908)
github-merge-queue bot pushed a commit that referenced this pull request Dec 15, 2025

## Overview

Temporary fix until #2891.
After #2891 the verification for p2p blocks will be done in the
background.

ref: #2906

@alpe
Contributor Author

alpe commented Dec 15, 2025

I have added two new types for the p2p store that are binary compatible with types.Data and SignedHeader. With this, we should be able to roll this out without breaking the in-flight p2p data and store.

@alpe alpe requested a review from julienrbrt December 15, 2025 15:00
julienrbrt previously approved these changes Dec 15, 2025
Member

@julienrbrt julienrbrt left a comment

lgtm! I can see how useful the async retriever will be for force inclusion verification as well. We should have @auricom verify if p2p still works with Eden.

Member

This is going to be really useful for force inclusion checks as well.

* main:
  build(deps): Bump actions/cache from 4 to 5 (#2934)
  build(deps): Bump actions/download-artifact from 6 to 7 (#2933)
  build(deps): Bump actions/upload-artifact from 5 to 6 (#2932)
  feat: DA Client remove interface part 3, replace types with new code (#2910)
  DA Client remove interface: Part 2.5, create e2e test to validate that a blob is posted in DA layer. (#2920)
julienrbrt previously approved these changes Dec 16, 2025
alpe added 3 commits December 19, 2025 17:00
* main:
  feat: use DA timestamp (#2939)
  chore: improve code comments clarity (#2943)
  build(deps): bump libp2p (#2937)
(cherry picked from commit ad3e21b)
julienrbrt previously approved these changes Dec 19, 2025
* main:
  fix: make evm_execution more robust (#2942)
  fix(sequencers/single): deterministic queue (#2938)
  fix(block): fix init logic sequencer for da epoch fetching (#2926)
github-merge-queue bot pushed a commit that referenced this pull request Jan 2, 2026
Introduce envelope for headers on DA to fail fast on unauthorized
content.
Similar approach as in #2891 with a binary compatible sibling type that
carries the additional information.
 
* Add DAHeaderEnvelope type to wrap signed headers on DA
  * Binary compatible with the `SignedHeader` proto type
  * Includes a signature of the plain content
* DARetriever checks for valid signature early in the process
* Supports `SignedHeader` for legacy nodes until the first signed envelope is read
alpe added 2 commits January 8, 2026 10:06
* main:
  chore: fix some minor issues in the comments (#2955)
  feat: make reaper poll duration configurable (#2951)
  chore!: move sequencers to pkg (#2931)
  feat: Ensure Header integrity on DA (#2948)
  feat(testda): add header support with GetHeaderByHeight method (#2946)
  chore: improve code comments clarity (#2947)
  chore(sequencers): optimize store check (#2945)
@tac0turtle
Contributor

CI seems to be having some issues, can these be fixed?

Also was this tested on an existing network? If not, please do that before merging

alpe added 8 commits January 19, 2026 09:46
* main:
  fix: inconsistent state detection and rollback (#2983)
  chore: improve graceful shutdown restarts (#2985)
  feat(submitting): add posting strategies (#2973)
  chore: adding syncing tracing (#2981)
  feat(tracing): adding block production tracing (#2980)
  feat(tracing): Add Store, P2P and Config tracing (#2972)
  chore: fix upgrade test (#2979)
  build(deps): Bump github.com/ethereum/go-ethereum from 1.16.7 to 1.16.8 in /execution/evm/test in the go_modules group across 1 directory (#2974)
  feat(tracing): adding tracing to DA client (#2968)
  chore: create onboarding skill  (#2971)
  test: add e2e tests for force inclusion (part 2) (#2970)
  feat(tracing): adding eth client tracing (#2960)
  test: add e2e tests for force inclusion (#2964)
  build(deps): Bump the all-go group across 4 directories with 10 updates (#2969)
  fix: Fail fast when executor ahead (#2966)
  feat(block): async epoch fetching (#2952)
  perf: tune badger defaults and add db bench (#2950)
  feat(tracing): add tracing to EngineClient (#2959)
  chore: inject W3C headers into engine client and eth client (#2958)
  feat: adding tracing for Executor and added initial configuration (#2957)
* main:
  feat(tracing): tracing part 9 sequencer (#2990)
  build(deps): use mainline go-header (#2988)
* main:
  chore: update calculator for strategies  (#2995)
  chore: adding tracing for da submitter (#2993)
  feat(tracing): part 10 da retriever tracing (#2991)
  chore: add da posting strategy to docs (#2992)
* main:
  build(deps): Bump the all-go group across 5 directories with 5 updates (#2999)
  feat(tracing): adding forced inclusion tracing (#2997)
* main:
  feat(tracing): add store tracing (#3001)
  feat: p2p exchange wrapper  (#2855)
