Conversation
Manual Deploy Available
You can trigger a manual deploy of this PR branch to testnet. (Comment updated automatically when the PR is synchronized.)
Note: Reviews paused
It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the CodeRabbit settings, and manage reviews with the usual commands or the quick-action checkboxes.
📝 Walkthrough
Adds a new crate.

Assessment against linked issues
Out-of-scope changes: no out-of-scope functional code changes identified.

Suggested reviewers
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 9
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
magicblock-chainlink/src/chainlink/fetch_cloner/tests.rs (1)
246-255: 🧹 Nitpick | 🔵 Trivial
Add at least one `FetchCloner::new(..., Some(risk_service))` test path.
These updates correctly match the new signature, but all touched call sites keep `risk_service` as `None`, so this file still doesn't exercise AML-gated behavior directly.
Also applies to: 1763-1772, 1837-1846, 1910-1919
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@magicblock-chainlink/src/chainlink/fetch_cloner/tests.rs` around lines 246 - 255, Add a test path that passes Some(risk_service) into FetchCloner::new so AML-gated behavior is exercised: create a lightweight mock or stub that implements the same trait/interface the production Risk service uses (e.g., a MockRiskService or TestRiskService), instantiate it and pass Some(mock_risk_service) into FetchCloner::new in at least one of the tests that currently pass None, then assert the expected AML-related behavior (e.g., transactions blocked/filtered or specific error/results) to prove the gate works; apply the same change to the other FetchCloner::new call sites in this test file that still use None.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@config.example.toml`:
- Line 272: Replace the misleading literal value for the config key api-key with
a clearly non-sensitive placeholder or comment it out so users won't mistakenly
paste real credentials; locate the api-key entry (symbol "api-key") in the
example config and either prefix it with a comment marker or change the value to
an obvious placeholder like "<REPLACE_WITH_YOUR_API_KEY>" or
"YOUR_API_KEY_HERE".
- Around line 259-261: The env var prefixes in the [chainlink.risk] section are
incorrect: replace occurrences of MBV_CHAINLINK__RANGE_RISK__* with
MBV_CHAINLINK__RISK__* (e.g., update the comment lines that currently reference
MBV_CHAINLINK__RANGE_RISK__ENABLED and the other entries in the same section) so
they match the TOML section name and the test expectations; apply the same
replacement to all other env var comments in that section (the subsequent
entries referenced in the comment block).
In `@magicblock-aml/src/lib.rs`:
- Around line 167-179: The is_high_risk logic in assessment_from_value is
inverted: change the comparison so addresses with risk_score >=
self.risk_score_threshold are marked high risk. Update the construction of
AddressRiskAssessment in assessment_from_value to set is_high_risk based on
self.risk_score_threshold <= score (i.e. score meets or exceeds the threshold)
so it matches the threshold semantics and the behavior used elsewhere (e.g., the
earlier cache/write logic).
- Around line 73-79: The CREATE TABLE for address_risk_cache is invalid because
the PRIMARY KEY references network which is not declared; fix by making the
schema and all SQL usages consistent: add a network TEXT NOT NULL column to the
address_risk_cache definition (alongside address, risk_score, fetched_at_unix_s)
and keep PRIMARY KEY (network, address), then update the related
INSERT/UPSERT/ON CONFLICT statements (the SQL at the locations around lines
127-130 and 155-161) to target the same conflict key (network, address);
alternatively, if you intend a Solana-only cache, remove network from the
PRIMARY KEY and change those ON CONFLICT targets to just address—ensure
address_risk_cache, the columns (address, network, risk_score,
fetched_at_unix_s), and the ON CONFLICT keys are all consistent.
- Around line 45-52: The SQLite operations in read_cache and write_cache are
blocking and currently run on the async executor inside check_address; change
those DB accesses to run on a blocking thread: make read_cache and write_cache
async helpers that call tokio::task::spawn_blocking (or
tokio::runtime::Handle::spawn_blocking) and perform all rusqlite queries inside
the spawned closure. Update RiskService so the Connection is not locked on the
async runtime (replace the parking_lot::Mutex<Connection> usage by either
storing a DB path/handle and opening/using the Connection inside spawn_blocking,
or keep the Connection behind a sync-only lock used only inside the blocking
closure), and call await on the spawn_blocking join handle from check_address;
ensure function signatures for read_cache and write_cache are updated
accordingly and reference these symbols (RiskService, check_address, read_cache,
write_cache).
- Around line 74-76: The CREATE TABLE schema for address_risk_cache uses
risk_score REAL but the code writes and reads u64, so update the SQL in the
create statement to declare risk_score INTEGER NOT NULL; ensure any
insert/update that uses params![address, assessment.risk_score, ...] keeps
passing an integer type and the retrieval that does let risk_score: u64 =
row.get(0)? will then match the INTEGER column type; adjust the CREATE TABLE
string surrounding address_risk_cache and verify any related queries reference
the same column type.
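The inverted-threshold fix suggested for `assessment_from_value` boils down to a single comparison. A dependency-free sketch, assuming the 0..=10 score scale used elsewhere in this review (the real `AddressRiskAssessment` has more fields; the names here are simplified stand-ins):

```rust
// Simplified stand-in for the crate's assessment type.
struct AddressRiskAssessment {
    risk_score: u64,
    is_high_risk: bool,
}

fn assessment_from_score(score: u64, risk_score_threshold: u64) -> AddressRiskAssessment {
    AddressRiskAssessment {
        risk_score: score,
        // An address is high risk when its score meets or exceeds the threshold,
        // matching the semantics of the earlier cache/write logic.
        is_high_risk: score >= risk_score_threshold,
    }
}

fn main() {
    assert!(assessment_from_score(8, 7).is_high_risk);
    assert!(assessment_from_score(7, 7).is_high_risk); // boundary: meets threshold
    assert!(!assessment_from_score(3, 7).is_high_risk);
}
```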
In `@magicblock-chainlink/src/chainlink/fetch_cloner/mod.rs`:
- Around line 530-545: The extraction of signer addresses from
delegation_actions is verbose; simplify the nested filter_map/if by using
filter_map with a conditional expression so it directly returns the pubkey when
meta.is_signer is true. Replace the inner closure in the flat_map over
delegation_actions.iter() (which currently uses
instruction.accounts.iter().filter_map(|meta| { if meta.is_signer {
Some(meta.pubkey) } else { None } })) with a concise form such as
instruction.accounts.iter().filter_map(|meta| meta.is_signer.then(||
meta.pubkey)) and keep collecting into the BTreeSet as before.
- Around line 522-559: The validation currently calls risk_service.check_address
sequentially inside validate_post_delegation_action_addresses which can be slow;
change it to run the checks concurrently by collecting the unique addresses
(from the addresses BTreeSet) into a Vec and then spawning async tasks (e.g.,
with futures::future::try_join_all or a bounded stream like
futures::stream::FuturesUnordered with a concurrency limit) to call
risk_service.check_address(&addr.to_string()).await for each address, then
inspect the returned assessments and return the same
ChainlinkError::RiskyDelegationActionAddress if any assessment.is_high_risk
(preserving address and risk_score) otherwise return Ok(()). Ensure you await
the combined future and propagate the first error or convert high-risk
assessments into an Err as before.
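The `bool::then` simplification proposed for the signer extraction can be sketched with stand-in types (the real code uses solana-sdk's `AccountMeta` and `Pubkey`; these minimal versions only illustrate the iterator shape):

```rust
use std::collections::BTreeSet;

// Hypothetical stand-ins for the solana-sdk types referenced by the review.
#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Debug)]
struct Pubkey(u8);

struct AccountMeta {
    pubkey: Pubkey,
    is_signer: bool,
}

struct Instruction {
    accounts: Vec<AccountMeta>,
}

// Collect the unique signer pubkeys across all instructions.
fn signer_set(instructions: &[Instruction]) -> BTreeSet<Pubkey> {
    instructions
        .iter()
        .flat_map(|ix| {
            ix.accounts
                .iter()
                // `bool::then` replaces the verbose
                // `if meta.is_signer { Some(meta.pubkey) } else { None }`.
                .filter_map(|meta| meta.is_signer.then(|| meta.pubkey))
        })
        .collect()
}

fn main() {
    let ixs = vec![Instruction {
        accounts: vec![
            AccountMeta { pubkey: Pubkey(1), is_signer: true },
            AccountMeta { pubkey: Pubkey(2), is_signer: false },
            AccountMeta { pubkey: Pubkey(1), is_signer: true },
        ],
    }];
    // The duplicate signer and the non-signer are both filtered out.
    assert_eq!(signer_set(&ixs).len(), 1);
}
```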
In `@magicblock-chainlink/src/chainlink/mod.rs`:
- Around line 142-146: The call to RiskService::try_from_config is passing
&ledger_path which produces a &&Path unnecessarily; change the invocation to
pass ledger_path directly (i.e.,
RiskService::try_from_config(&chainlink_config.risk, ledger_path)?) so you pass
a &Path instead of a double reference, then keep the subsequent .map(Arc::new)
unchanged; locate this in the block constructing risk_service that references
chainlink_config.risk, ledger_path, and RiskService::try_from_config.
---
Outside diff comments:
In `@magicblock-chainlink/src/chainlink/fetch_cloner/tests.rs`:
- Around line 246-255: Add a test path that passes Some(risk_service) into
FetchCloner::new so AML-gated behavior is exercised: create a lightweight mock
or stub that implements the same trait/interface the production Risk service
uses (e.g., a MockRiskService or TestRiskService), instantiate it and pass
Some(mock_risk_service) into FetchCloner::new in at least one of the tests that
currently pass None, then assert the expected AML-related behavior (e.g.,
transactions blocked/filtered or specific error/results) to prove the gate
works; apply the same change to the other FetchCloner::new call sites in this
test file that still use None.
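A minimal sketch of the suggested mock, assuming a synchronous trait for brevity (the real `RiskService` is async and its actual interface may differ; `RiskCheck`, `validate`, and the field names are hypothetical):

```rust
// Simplified stand-in for the crate's assessment type.
struct AddressRiskAssessment {
    risk_score: u64,
    is_high_risk: bool,
}

trait RiskCheck {
    fn check_address(&self, address: &str) -> AddressRiskAssessment;
}

/// Test double that flags a fixed set of addresses as high risk.
struct MockRiskService {
    high_risk: Vec<String>,
}

impl RiskCheck for MockRiskService {
    fn check_address(&self, address: &str) -> AddressRiskAssessment {
        let is_high_risk = self.high_risk.iter().any(|a| a == address);
        AddressRiskAssessment {
            risk_score: if is_high_risk { 10 } else { 0 },
            is_high_risk,
        }
    }
}

// The gate under test: Err for any high-risk address, Ok otherwise.
fn validate(addresses: &[&str], risk: Option<&dyn RiskCheck>) -> Result<(), String> {
    let Some(risk) = risk else { return Ok(()) };
    for addr in addresses {
        let a = risk.check_address(addr);
        if a.is_high_risk {
            return Err(format!("high-risk address {addr} (score {})", a.risk_score));
        }
    }
    Ok(())
}

fn main() {
    let mock = MockRiskService { high_risk: vec!["bad".to_string()] };
    assert!(validate(&["good"], Some(&mock)).is_ok());
    assert!(validate(&["good", "bad"], Some(&mock)).is_err());
    assert!(validate(&["bad"], None).is_ok()); // no service configured: gate disabled
}
```

Passing `Some(&mock)` exercises exactly the branch that stays dead when every call site passes `None`.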
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: d681ffca-6dbe-4551-950e-dcbac7e31975
⛔ Files ignored due to path filters (2)
Cargo.lock is excluded by !**/*.lock
test-integration/Cargo.lock is excluded by !**/*.lock
📒 Files selected for processing (17)
Cargo.toml
config.example.toml
magicblock-aml/Cargo.toml
magicblock-aml/src/lib.rs
magicblock-api/src/magic_validator.rs
magicblock-chainlink/Cargo.toml
magicblock-chainlink/src/chainlink/errors.rs
magicblock-chainlink/src/chainlink/fetch_cloner/mod.rs
magicblock-chainlink/src/chainlink/fetch_cloner/tests.rs
magicblock-chainlink/src/chainlink/mod.rs
magicblock-chainlink/tests/utils/test_context.rs
magicblock-config/src/config/chain.rs
magicblock-config/src/config/mod.rs
magicblock-config/src/consts.rs
magicblock-config/src/tests.rs
test-integration/test-chainlink/src/ixtest_context.rs
test-integration/test-chainlink/src/test_context.rs
Actionable comments posted: 4
♻️ Duplicate comments (2)
config.example.toml (1)
272-272: 🧹 Nitpick | 🔵 Trivial
API key placeholder in example config.
Including `api-key = "your-api-key"` could be confusing. Consider commenting it out or using a more obviously placeholder format like `# api-key = "YOUR_RANGE_API_KEY"`.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@config.example.toml` at line 272, Update the example config line that currently reads api-key = "your-api-key" to make it an unambiguous placeholder or commented out; replace it with a clearer placeholder like a commented entry (e.g., # api-key = "YOUR_RANGE_API_KEY") or change the value to an all-caps sentinel so users won't accidentally paste a real key, ensuring the unique config key api-key is the one modified.

magicblock-aml/src/lib.rs (1)
167-192: ⚠️ Potential issue | 🟠 Major
Blocking SQLite operations on async executor.
`read_cache` and `write_cache` perform blocking `rusqlite` I/O while holding a `tokio::sync::Mutex` lock on the async runtime. Under load, this can block Tokio worker threads and degrade throughput. Consider using `tokio::task::spawn_blocking` for the database operations.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@magicblock-aml/src/lib.rs` around lines 167 - 192, read_cache (and similarly write_cache) is performing blocking rusqlite I/O while holding an async tokio::sync::Mutex which can block the Tokio runtime; wrap the entire DB access and query logic inside tokio::task::spawn_blocking so the blocking work runs on a blocking thread, avoid holding the async Mutex across await points by either moving the DB connection out of the async mutex before spawn_blocking or by using a synchronous lock inside spawn_blocking, and return the query result via .await; update read_cache and write_cache to clone/own any parameters (like addresses) passed into spawn_blocking and propagate rusqlite errors back through the RiskResult.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@magicblock-aml/src/lib.rs`:
- Around line 174-191: The current query uses WHERE address IN (?1) with
addresses.join(",") which passes a single string and never matches individual
addresses, and the returned rows aren’t mapped back to the input order; change
the logic to build a query with the correct number of placeholders (e.g., "WHERE
address IN (?1,?2,?3...)" based on addresses.len()) or run per-address queries,
then collect the result rows as address → (risk_score, fetched_at_unix_s) map
and finally produce a Vec<Option<u64>> in the same order as the input addresses;
update the code that currently prepares stmt via conn.prepare and that consumes
addresses.join(",") to instead generate placeholders and bind each address (or
loop addresses with a single-address prepared statement) and use the
fetched_at_unix_s timestamp to decide None vs Some(risk_score) for each input
address.
- Around line 80-87: The CREATE TABLE SQL in conn.execute_batch for
address_risk_cache uses invalid PRIMARY KEY syntax; update the statement in the
conn.execute_batch call that creates the address_risk_cache table to use
parentheses around the primary key column name (i.e., change PRIMARY KEY address
to PRIMARY KEY (address)) so SQLite accepts the schema when initializing the
cache.
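The placeholder-count fix for the `IN` clause can be sketched dependency-free; `in_placeholders` is a hypothetical helper, and the commented `params_from_iter` call indicates how binding would look with rusqlite without pulling the crate into the sketch:

```rust
// Build "?1,?2,...,?N" for a parameterized `WHERE address IN (...)` clause,
// instead of joining the addresses into a single bound string (which can
// never match any individual address).
fn in_placeholders(n: usize) -> String {
    (1..=n).map(|i| format!("?{i}")).collect::<Vec<_>>().join(",")
}

fn main() {
    let addresses = ["addr1", "addr2", "addr3"];
    let sql = format!(
        "SELECT address, risk_score FROM address_risk_cache WHERE address IN ({})",
        in_placeholders(addresses.len())
    );
    assert!(sql.ends_with("IN (?1,?2,?3)"));
    // With rusqlite one would then bind positionally, e.g.
    // stmt.query(rusqlite::params_from_iter(addresses)), and map the returned
    // rows back into a map keyed by address so results can be emitted in
    // input order.
}
```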
In `@magicblock-chainlink/src/chainlink/errors.rs`:
- Around line 55-56: Remove the unused error variant
RiskyDelegationActionAddress from the ChainlinkError enum (it is never
constructed), or if you intended per-address errors, change
validate_post_delegation_action_signers to map RiskError::HighRiskAddresses
returned by risk_service.check_addresses() into RiskyDelegationActionAddress
instances instead of relying on the #[from] conversion to RangeRisk;
specifically either delete the RiskyDelegationActionAddress variant and any
references to it, or update the error conversion logic in
validate_post_delegation_action_signers to iterate the Vec<String> from
RiskError::HighRiskAddresses and return RiskyDelegationActionAddress { address:
parsed_pubkey, risk_score } for each offending address so the enum variant is
actually used.
In `@magicblock-chainlink/src/chainlink/fetch_cloner/mod.rs`:
- Around line 531-543: The current collection into signers uses Vec::dedup()
which only removes consecutive duplicates, so change the logic that builds
signers (the block referencing delegation_actions and signers.dedup()) to
produce a true set of unique signer strings; for example, collect the flat_map
results into a std::collections::HashSet<String> (or BTreeSet<String> if
deterministic ordering is needed) and then convert back to a Vec, or
alternatively sort the Vec and then call dedup() — update the code that builds
signers and remove the existing signers.dedup() accordingly.
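The `Vec::dedup` pitfall called out above is easy to demonstrate with plain std; `unique_signers` is a hypothetical helper showing the `BTreeSet` alternative:

```rust
use std::collections::BTreeSet;

// Deterministic deduplication: collect into an ordered set, then back to a Vec.
fn unique_signers(signers: Vec<String>) -> Vec<String> {
    let set: BTreeSet<String> = signers.into_iter().collect();
    set.into_iter().collect()
}

fn main() {
    // `Vec::dedup` only removes *consecutive* duplicates:
    let mut signers = vec!["a", "b", "a"];
    signers.dedup();
    assert_eq!(signers, vec!["a", "b", "a"]); // the second "a" survives

    // Sorting first makes dedup correct...
    let mut sorted = vec!["a", "b", "a"];
    sorted.sort();
    sorted.dedup();
    assert_eq!(sorted, vec!["a", "b"]);

    // ...as does collecting into a set.
    let unique = unique_signers(vec!["a".into(), "b".into(), "a".into()]);
    assert_eq!(unique, vec!["a".to_string(), "b".to_string()]);
}
```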
---
Duplicate comments:
In `@config.example.toml`:
- Line 272: Update the example config line that currently reads api-key =
"your-api-key" to make it an unambiguous placeholder or commented out; replace
it with a clearer placeholder like a commented entry (e.g., # api-key =
"YOUR_RANGE_API_KEY") or change the value to an all-caps sentinel so users won't
accidentally paste a real key, ensuring the unique config key api-key is the one
modified.
In `@magicblock-aml/src/lib.rs`:
- Around line 167-192: read_cache (and similarly write_cache) is performing
blocking rusqlite I/O while holding an async tokio::sync::Mutex which can block
the Tokio runtime; wrap the entire DB access and query logic inside
tokio::task::spawn_blocking so the blocking work runs on a blocking thread,
avoid holding the async Mutex across await points by either moving the DB
connection out of the async mutex before spawn_blocking or by using a
synchronous lock inside spawn_blocking, and return the query result via .await;
update read_cache and write_cache to clone/own any parameters (like addresses)
passed into spawn_blocking and propagate rusqlite errors back through the
RiskResult.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: ffc10579-4b80-4bab-ba4b-e89bc5b078a3
⛔ Files ignored due to path filters (2)
Cargo.lock is excluded by !**/*.lock
test-integration/Cargo.lock is excluded by !**/*.lock
📒 Files selected for processing (6)
config.example.toml
magicblock-aml/Cargo.toml
magicblock-aml/src/lib.rs
magicblock-chainlink/src/chainlink/errors.rs
magicblock-chainlink/src/chainlink/fetch_cloner/mod.rs
magicblock-chainlink/src/chainlink/mod.rs
Actionable comments posted: 3
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@magicblock-aml/src/lib.rs`:
- Around line 106-147: The cold-cache path in the AML check (read_cache ->
try_join_all -> write_cache) can cause request stampedes; implement per-address
in-flight deduplication and a bounded concurrency limit before firing RPCs.
Modify the logic around read_cache/uncached_addresses and the try_join_all call:
maintain an in-memory map (e.g., HashMap<String, JoinHandle/SharedFuture> or an
in_flight registry) keyed by address to return an existing pending fetch for the
same address instead of launching a new request, and use a bounded
semaphore/pool to limit concurrent client.get calls (the block that builds the
request to "{}/risk/address" and awaits .send()). Ensure resolved scores are
written via write_cache once fetched and that callers awaiting the same
in-flight future receive the same result.
- Around line 174-176: The mutex lock calls in the spawn_blocking closures
(e.g., the cache.lock().expect(...) at the shown block and the similar calls
around lines 212-213) must not panic; add a new RiskError variant (e.g.,
CachePoisoned with message "Sqlite cache mutex was poisoned") and replace the
.expect()/.unwrap() usages with handling that maps a poisoned lock into
Err(RiskError::CachePoisoned) so the task returns a Result error instead of
panicking; update the closure return types and any callers to propagate/handle
the Result<R, RiskError> accordingly.
- Around line 62-71: Reject invalid risk_score_threshold values in
try_from_config by validating RiskConfig.risk_score_threshold is within 0..=10
and returning a new RiskError variant when out of range; add an enum variant
like InvalidRiskScoreThreshold(u64) to RiskError, then in try_from_config (after
extracting api_key and before creating the Range/checking comparisons) check the
u64 threshold and return Err(RiskError::InvalidRiskScoreThreshold(value)) if
value > 10 so out-of-range thresholds no longer silently disable blocking.
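The threshold validation could look like the following sketch, assuming the 0..=10 score scale stated in the comment (`RiskError` here is a simplified stand-in for the crate's error type):

```rust
#[derive(Debug, PartialEq)]
enum RiskError {
    InvalidRiskScoreThreshold(u64),
}

// Reject out-of-range thresholds up front instead of silently never blocking:
// a threshold above 10 can never be met, so blocking would be disabled.
fn validate_threshold(threshold: u64) -> Result<u64, RiskError> {
    if threshold > 10 {
        return Err(RiskError::InvalidRiskScoreThreshold(threshold));
    }
    Ok(threshold)
}

fn main() {
    assert_eq!(validate_threshold(7), Ok(7));
    assert_eq!(
        validate_threshold(11),
        Err(RiskError::InvalidRiskScoreThreshold(11))
    );
}
```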
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: d42dac4d-3b48-4ab0-afcf-3f47bce17c9c
📒 Files selected for processing (4)
magicblock-aml/Cargo.toml
magicblock-aml/src/lib.rs
magicblock-chainlink/src/chainlink/errors.rs
magicblock-chainlink/src/chainlink/fetch_cloner/mod.rs
Actionable comments posted: 1
♻️ Duplicate comments (1)
magicblock-aml/src/lib.rs (1)
146-156: ⚠️ Potential issue | 🟠 Major
The miss path can still stampede Range.
`try_join_all()` still fires every unique cache miss immediately, and line 156 drops the in-flight entry before the later cache write. A caller that arrives in that gap can miss both the cache and the dedupe map and issue a second request for the same address. Please bound fetch concurrency and keep the in-flight entry until persistence is done; the current two-caller test will not catch this late-arrival window.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@magicblock-aml/src/lib.rs` around lines 146 - 156, The current logic launches all uncached fetches with try_join_all and calls remove_from_in_flight(&uncached_addresses) immediately, which allows a late-arriving caller to miss both cache and in-flight dedupe; bound the concurrent fetches (e.g., replace try_join_all with a stream/iter + buffer_unordered or FuturesUnordered with a max concurrency) so you don't start unlimited parallel requests for the same addresses, and only call remove_from_in_flight for an address after its fetch+cache persistence has fully completed (i.e., ensure the future returned by get_or_insert_in_flight/address fetch only resolves when the write/persist is done, and move the remove_from_in_flight call to after that completion for each address rather than removing all entries immediately).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@magicblock-aml/src/lib.rs`:
- Around line 157-172: The code currently calls self.write_cache(&scores).await?
before checking for high-risk addresses, which lets cache persistence errors
mask a confirmed risk verdict; change the flow in the function that handles
fetch_result and write_cache so you compute risky_addresses from scores (using
self.risk_score_threshold and producing RiskError::HighRiskAddresses when
non-empty) before propagating any error from write_cache, and make write_cache a
best-effort operation (catch/handle its Err and log/ignore it rather than
returning it) so cache failures do not override the threshold check in the
branch that returns HighRiskAddresses.
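The verdict-before-persistence ordering can be sketched as follows; `check` and its closure-based cache writer are hypothetical simplifications of the real async code:

```rust
use std::collections::HashMap;

// Compute the risk verdict first; treat the cache write as best-effort so a
// persistence failure can never mask a confirmed high-risk result.
fn check(
    scores: &HashMap<String, u64>,
    threshold: u64,
    write_cache: impl Fn(&HashMap<String, u64>) -> Result<(), String>,
) -> Result<(), Vec<String>> {
    let risky: Vec<String> = scores
        .iter()
        .filter(|(_, &s)| s >= threshold)
        .map(|(a, _)| a.clone())
        .collect();
    if let Err(e) = write_cache(scores) {
        // Logged, not propagated: caching is an optimization, not a gate.
        eprintln!("best-effort cache write failed: {e}");
    }
    if risky.is_empty() {
        Ok(())
    } else {
        Err(risky)
    }
}

fn main() {
    let mut scores = HashMap::new();
    scores.insert("bad".to_string(), 9);
    // Even though the cache write fails, the high-risk verdict is returned.
    let res = check(&scores, 7, |_| Err("disk full".into()));
    assert_eq!(res, Err(vec!["bad".to_string()]));
}
```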
---
Duplicate comments:
In `@magicblock-aml/src/lib.rs`:
- Around line 146-156: The current logic launches all uncached fetches with
try_join_all and calls remove_from_in_flight(&uncached_addresses) immediately,
which allows a late-arriving caller to miss both cache and in-flight dedupe;
bound the concurrent fetches (e.g., replace try_join_all with a stream/iter +
buffer_unordered or FuturesUnordered with a max concurrency) so you don't start
unlimited parallel requests for the same addresses, and only call
remove_from_in_flight for an address after its fetch+cache persistence has fully
completed (i.e., ensure the future returned by get_or_insert_in_flight/address
fetch only resolves when the write/persist is done, and move the
remove_from_in_flight call to after that completion for each address rather than
removing all entries immediately).
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: 8289da89-8462-4e5a-b352-c14a88bb61f5
📒 Files selected for processing (1)
magicblock-aml/src/lib.rs
# ------------------------------------------------------------------------------
# Optional: Range Risk API validation for post-delegation actions
# ------------------------------------------------------------------------------
# When enabled, all addresses referenced by post-delegation actions are checked
Nit: tighten this to say that screening only applies to signer accounts from post-delegation actions.
thlorenz left a comment:
I gave this a quick skim. The code changes look OK to me, but to be honest, I don't have enough context to verify the correctness of the implementation.
}
})?;
conn.execute_batch(
    "CREATE TABLE IF NOT EXISTS address_risk_cache (
Does this database have to survive across restarts and how do we guarantee we don't lose this information if so?
The database is only used to reduce API usage (about 20¢ per request); losing it is not a problem, we will just refetch.
return Ok(());
};

let mut signers = delegation_actions
Let's also have early-exit logic if DelegationActions is empty.
Otherwise we will still enter `risk_service.check_addresses`, which contains a bunch of logic whose cost is hard to evaluate.
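The suggested early exit can be sketched as follows (the types and signature are simplified stand-ins for the real validation function):

```rust
// Hypothetical stand-in; the real DelegationAction type is richer.
struct DelegationAction;

fn validate_post_delegation_action_signers(
    delegation_actions: &[DelegationAction],
    risk_enabled: bool,
) -> Result<(), String> {
    // Early exit: nothing to screen, so skip the risk service entirely.
    if delegation_actions.is_empty() || !risk_enabled {
        return Ok(());
    }
    // ... collect signer addresses and call risk_service.check_addresses(..) here ...
    Ok(())
}

fn main() {
    assert!(validate_post_delegation_action_signers(&[], true).is_ok());
    assert!(validate_post_delegation_action_signers(&[DelegationAction], false).is_ok());
}
```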
self.remove_from_in_flight(&uncached_addresses).await;
let scores = fetch_result?;

self.write_cache(&scores).await?;
Do we need to wait for write here? This is IO that will slow down cloning
Addressed in 3d0ffcc.
But I thought it would be clearer to use a channel between the service and a long-running cache-writer task.
Ok::<_, RiskError>((address.to_string(), score))
}))
.await;
self.remove_from_in_flight(&uncached_addresses).await;
In-flight entries shouldn't be removed until the scores are written, as there's a race condition:
- Task A calls `remove_from_in_flight` and those entries are removed.
- Task B enters and locks the DB faster than A; B calls `read_cache(addresses).await?` with the same set of addresses as A.
- Task B requests again.
- Task A writes to the DB.
- Task B writes duplicates.
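The fix, removing the in-flight entry only after the cache write completes, can be sketched synchronously with std primitives (the real code is async; `Service` and its fields here are stand-ins):

```rust
use std::collections::{HashMap, HashSet};
use std::sync::Mutex;

struct Service {
    in_flight: Mutex<HashSet<String>>,
    cache: Mutex<HashMap<String, u64>>,
}

impl Service {
    // Fetch-and-cache for one address: the in-flight entry is removed only
    // *after* the cache write, closing the window in which a second caller
    // could miss both the cache and the dedupe set.
    fn fetch_and_cache(&self, address: &str, score: u64) {
        self.in_flight.lock().unwrap().insert(address.to_string());
        // ... remote fetch would happen here ...
        self.cache.lock().unwrap().insert(address.to_string(), score);
        self.in_flight.lock().unwrap().remove(address); // removal happens last
    }

    fn is_pending_or_cached(&self, address: &str) -> bool {
        self.cache.lock().unwrap().contains_key(address)
            || self.in_flight.lock().unwrap().contains(address)
    }
}

// Demonstrates the invariant: once the fetch completes, the address is cached
// and the in-flight set is empty again.
fn removal_after_write_demo() -> bool {
    let svc = Service {
        in_flight: Mutex::new(HashSet::new()),
        cache: Mutex::new(HashMap::new()),
    };
    svc.fetch_and_cache("addr", 3);
    svc.is_pending_or_cached("addr") && svc.in_flight.lock().unwrap().is_empty()
}

fn main() {
    assert!(removal_after_write_demo());
}
```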
#[derive(Deserialize, Serialize, Debug, Clone)]
#[serde(default, rename_all = "kebab-case", deny_unknown_fields)]
pub struct RiskConfig {
    /// Enables post-delegation address risk checks.
Will this be used only for post-delegation actions?
Yes: sending transactions to the PER is already gated by the middleware, which requires passing AML checks, but delegation actions could execute transactions without passing this check
Summary
Compatibility
No breaking change, but new optional config fields:
Summary by CodeRabbit
New Features
Configuration
Tests
Chores