
feat(net): add P2P message deduplication and length validation #6712

Merged
lvs0075 merged 2 commits into tronprotocol:develop from xxo1shine:feature/p2p-msg-validation-1 on May 9, 2026

Conversation

Collaborator

@xxo1shine xxo1shine commented Apr 28, 2026

What does this PR do?

Add P2P message deduplication and length validation:

  • FetchInvDataMsgHandler: reject messages with duplicate hashes
  • TransactionsMsgHandler: reject messages with duplicate transactions
  • SyncBlockChainMsgHandler: reject blockIds list exceeding 30 entries
  • Add MAX_SYNC_CHAIN_IDS = 30 constant to NetConstants
  • Add unit tests covering duplicate rejection and boundary values

All violations throw P2pException(BAD_MESSAGE), triggering peer disconnect via existing P2pEventHandlerImpl error path.
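As a rough, self-contained sketch of the length check described above (the constant name mirrors the PR description; the real handler throws P2pException(TypeEnum.BAD_MESSAGE) to trigger a peer disconnect, modeled here with IllegalArgumentException, and the method name is mine):

```java
import java.util.Collections;
import java.util.List;

public class SyncChainLengthCheckDemo {
  // Mirrors the MAX_SYNC_CHAIN_IDS constant added to NetConstants in this PR.
  static final int MAX_SYNC_CHAIN_IDS = 30;

  // Hypothetical stand-in for SyncBlockChainMsgHandler's length validation;
  // the real code throws P2pException(BAD_MESSAGE) instead.
  static void validateBlockIdsLength(List<?> blockIds) {
    if (blockIds.size() > MAX_SYNC_CHAIN_IDS) {
      throw new IllegalArgumentException(
          "SyncBlockChain blockIds size " + blockIds.size()
              + " exceeds limit " + MAX_SYNC_CHAIN_IDS);
    }
  }

  public static void main(String[] args) {
    validateBlockIdsLength(Collections.nCopies(30, 0L)); // boundary value: accepted
    try {
      validateBlockIdsLength(Collections.nCopies(31, 0L)); // one over: rejected
    } catch (IllegalArgumentException expected) {
      System.out.println(expected.getMessage());
    }
  }
}
```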

Fixes #6667

Why are these changes required?

This PR has been tested by:

  • Unit Tests
  • Manual Testing

Follow up

Extra details

@github-actions github-actions Bot requested a review from 317787106 April 28, 2026 07:30
@halibobo1205 halibobo1205 added the topic:net p2p net work, synchronization label Apr 28, 2026
@halibobo1205 halibobo1205 added this to the GreatVoyage-v4.8.2 milestone Apr 28, 2026
public static final int MSG_CACHE_DURATION_IN_BLOCKS = 5;
public static final int MAX_BLOCK_FETCH_PER_PEER = 100;
public static final int MAX_TRX_FETCH_PER_PEER = 1000;
public static final int MAX_SYNC_CHAIN_IDS = 30;

Nice to have a named constant here instead of a magic number! 👍

Quick question — how was 30 chosen? Observed from real traffic, or estimated from getBlockChainSummary()'s log-step algorithm?

Collaborator Author


Based on the chain summary algorithm, which takes binary-search-style steps from the solidified block to the head block, 30 IDs can cover a range of up to 2^30 blocks. That is theoretically large enough, and a 30-entry list puts no pressure on the system.
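To make the 2^30 arithmetic concrete, here is a toy model (not java-tron source) of a log-step chain summary: IDs at distances 0, 1, 2, 4, ... from the head, plus the solidified block, so roughly k steps span 2^k blocks:

```java
import java.util.ArrayList;
import java.util.List;

public class ChainSummaryDemo {
  // Toy log-step summary: block numbers at exponentially growing distances
  // from the head, ending with the solidified block. A simplified model of
  // getBlockChainSummary(), not the actual implementation.
  static List<Long> summary(long solidified, long head) {
    List<Long> ids = new ArrayList<>();
    for (long dist = 0; head - dist > solidified; dist = (dist == 0 ? 1 : dist * 2)) {
      ids.add(head - dist);
    }
    ids.add(solidified);
    return ids;
  }

  public static void main(String[] args) {
    // A gap of 2^30 blocks between solidified and head needs only 32 IDs.
    System.out.println(summary(0, 1L << 30).size()); // prints 32
  }
}
```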

@lvs0075 lvs0075 requested a review from waynercheung April 30, 2026 04:22
Collaborator

@waynercheung waynercheung left a comment


LGTM

if (hashList.size() != new HashSet<>(hashList).size()) {
  throw new P2pException(TypeEnum.BAD_MESSAGE,
      "FetchInvData contains duplicate hashes, size: " + hashList.size());
}
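The size comparison above is an O(n) duplicate test: the HashSet collapses repeats, so any size mismatch means some hash appears more than once. A self-contained sketch of the same idea (the helper name is mine, not from the PR):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;

public class DedupCheckDemo {
  // Same technique as the handler's check: build a HashSet from the list
  // and compare sizes; a smaller set means at least one duplicate entry.
  static boolean hasDuplicates(List<String> hashList) {
    return hashList.size() != new HashSet<>(hashList).size();
  }

  public static void main(String[] args) {
    System.out.println(hasDuplicates(Arrays.asList("h1", "h2", "h1"))); // true
    System.out.println(hasDuplicates(Arrays.asList("h1", "h2", "h3"))); // false
  }
}
```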
Collaborator


Good check. Validation has already been added for FetchInvDataMessage, SyncBlockChainMessage, and TransactionsMessage. Why not also add deduplication checks for InventoryMessage and ChainInventoryMessage?

Collaborator Author


InventoryMessage handling is rate-limited and already deduplicates during processing, and it consumes little CPU, so a second check before processing is unnecessary. ChainInventoryMessage has checks stricter than deduplication, which also guarantee the list contains no duplicates.

Collaborator


ChainInventoryMessage's block numbers must be increasing, so we can ignore it. And although InventoryMessage handling doesn't consume much CPU, a deduplication check is still worthwhile, both to stay consistent with the other message handlers and to defend against malicious peers.

Collaborator Author


InventoryMessage cannot be used to mount an attack, and the missing duplicate check has no impact on the system.

@xxo1shine xxo1shine force-pushed the feature/p2p-msg-validation-1 branch 2 times, most recently from 5713d02 to e023c2e Compare May 8, 2026 12:59
- FetchInvDataMsgHandler: reject messages with duplicate hashes
- TransactionsMsgHandler: reject messages with duplicate transactions
- SyncBlockChainMsgHandler: reject blockIds list exceeding 30 entries
- Add MAX_SYNC_CHAIN_IDS = 30 constant to NetConstants
- Add unit tests covering duplicate rejection and boundary values

All violations throw P2pException(BAD_MESSAGE), triggering peer disconnect via existing P2pEventHandlerImpl error path.
@xxo1shine xxo1shine force-pushed the feature/p2p-msg-validation-1 branch from e023c2e to 805f5b5 Compare May 9, 2026 03:12
…validation-1

# Conflicts:
#	framework/src/test/java/org/tron/core/net/messagehandler/TransactionsMsgHandlerTest.java
@lvs0075 lvs0075 merged commit af8a84d into tronprotocol:develop May 9, 2026
12 checks passed
@github-project-automation github-project-automation Bot moved this to Done in java-tron May 9, 2026

Labels

topic:net p2p net work, synchronization

Projects

Status: Done

Development

Successfully merging this pull request may close these issues.

Add deduplication and length checks for p2p messages

7 participants