
Psy ZK Circuit Journey: Tracing Proofs and Assumptions

This document provides a detailed walkthrough of the zero-knowledge proof lifecycle in Psy, from a user's local actions to the final, globally verifiable block proof. We meticulously track what each circuit proves and, critically, the assumptions it makes and how those assumptions are discharged by subsequent circuits.

Goal: Illustrate the flow of cryptographic guarantees and the progressive reduction of trust assumptions, culminating in a block proof dependent only on the previous block's established state.

Key:

  • Circuit: The specific ZK circuit being executed.
  • High-Level Purpose: Why this circuit exists in the overall architecture.
  • Proves (Technical Detail): Specific cryptographic guarantees and state relations enforced by the circuit's constraints.
  • How: Key gadgets, hashing, and constraint mechanisms employed.
  • Assumes [A_]: Inputs or states treated as correct before this circuit's verification.
  • Discharges [R_]: Assumptions remaining from previous steps that are verified by this circuit.
  • Remaining [R_]: Assumptions still held true after this circuit's verification, passed to the next stage.

Phase 1: User Proving Session (UPS) - Local Execution

(User executes transactions and builds a local proof chain)

Step 1: Start Session

  • Circuit: UPSStartSessionCircuit
  • High-Level Purpose: To establish a cryptographically verified starting point for the user's transaction batch, ensuring it begins from a consistent and valid state relative to the last finalized global block. Prevents users from initiating proofs based on invalid or outdated personal states.
  • Proves:
    • The provided UserProvingSessionHeader witness (ups_header) contains an internally consistent session_start_context and current_state.
    • session_start_context.checkpoint_tree_root matches the root of the verified checkpoint_tree_proof witness.
    • session_start_context.checkpoint_leaf_hash matches the value of the checkpoint_tree_proof.
    • session_start_context.checkpoint_id matches the index of the checkpoint_tree_proof.
    • The hash of the provided checkpoint_leaf witness matches session_start_context.checkpoint_leaf_hash.
    • The hash of the provided state_roots witness (global_chain_root) matches checkpoint_leaf.global_chain_root.
    • state_roots.user_tree_root matches the root of the verified user_tree_proof witness.
    • session_start_context.start_session_user_leaf.user_id matches the index of the user_tree_proof.
    • The hash of session_start_context.start_session_user_leaf matches the value of the user_tree_proof.
    • current_state.user_leaf matches session_start_context.start_session_user_leaf except last_checkpoint_id is updated to session_start_context.checkpoint_id.
    • current_state.deferred_tx_debt_tree_root == EMPTY_TREE_ROOT.
    • current_state.inline_tx_debt_tree_root == EMPTY_TREE_ROOT.
    • current_state.tx_count == 0.
    • current_state.tx_hash_stack == ZERO_HASH.
  • How: UPSStartStepGadget uses MerkleProofGadgets to verify paths, Psy...Leaf/RootsGadgets to hash witnesses and check consistency, direct comparisons and constant checks. Public inputs calculated via compute_tree_aware_proof_public_inputs.
  • Assumes:
    • [A1.1] The root hash used in witness Merkle proofs (checkpoint_tree_root in ups_header.session_start_context) accurately reflects the globally finalized state of the previous block.
    • [A1.4] The constant empty_ups_proof_tree_root used for the tree-aware public inputs is correct for this session's start.
    • [A1.5] The constant ups_step_circuit_whitelist_root embedded in the output header is the correct root for allowed UPS circuits.
    • (Initial correctness of witness data like proofs and leaves is assumed, then verified internally).
  • Discharges: Internal consistency checks discharge assumptions about the relationships between the provided witness components (e.g., leaf data matches proof value).
  • Remaining:
    • [R1.1] = [A1.1] (Correctness of previous block's CHKP root).
    • [R1.4] = [A1.4] (Correctness of session's empty_ups_proof_tree_root).
    • [R1.5] = [A1.5] (Correctness of ups_step_circuit_whitelist_root).
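
The Merkle path checks above (checkpoint_tree_proof, user_tree_proof) all follow the same verify-leaf-against-root pattern. A minimal out-of-circuit sketch, using SHA-256 as a stand-in for the ZK-friendly hash the MerkleProofGadgets actually constrain:

```python
import hashlib

def h(*parts: bytes) -> bytes:
    """Stand-in 2-to-1 hash; the real circuits use a ZK-friendly hash."""
    return hashlib.sha256(b"".join(parts)).digest()

def verify_merkle_proof(leaf_value: bytes, index: int,
                        siblings: list[bytes], root: bytes) -> bool:
    """Walk from the leaf to the root; the low bit of `index` at each level
    decides whether the running hash is the left or right child."""
    node = leaf_value
    for sibling in siblings:
        node = h(sibling, node) if index & 1 else h(node, sibling)
        index >>= 1
    return node == root
```

In-circuit, this loop becomes a fixed set of constraints per level, and the root it yields is what gets compared against fields like session_start_context.checkpoint_tree_root.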

Step 1.5: Execute Contract Function Circuit (CFC)

  • Circuit: DapenContractFunctionCircuit (Specific instance per function)
  • High-Level Purpose: To execute the specific smart contract logic defined by the developer for a single transaction call, locally generating a proof that this execution instance faithfully followed the code, given its specific inputs and assumed context. This decouples logic execution from state transition verification.
  • Proves:
    • The sequence of internal operations matches the compiled DPNFunctionCircuitDefinition (fn_def).
    • Given the assumed tx_ctx_header witness (containing start states like start_contract_state_tree_root, start_deferred_tx_debt_tree_root, call arguments hash/length) and circuit inputs:
      • The simulated state commands (reads/writes to CST, debt tree interactions via StateReaderGadget) produce the end_contract_state_tree_root and end_deferred_tx_debt_tree_root recorded in tx_ctx_header.transaction_end_ctx.
      • The computed outputs_hash and outputs_length match those recorded in tx_ctx_header.transaction_end_ctx.
      • All assertions within the fn_def hold true.
    • The public inputs hash (combining session_proof_tree_root and tx_ctx_header hash) is correctly computed.
  • How: PsyContractFunctionBuilderGadget interprets fn_def, simulating execution using SimpleDPNBuilder and StateReaderGadget. Connects computed vs witnessed values in tx_ctx_header.
  • Assumes:
    • [A1.5.1] The DapenContractFunctionCircuitInput witness (esp. tx_input_ctx) accurately reflects the state before this CFC execution (derived from the previous UPS step's output) and the correct function inputs/outputs.
    • [A1.5.3] The session_proof_tree_root witness correctly represents the root of the user's recursive proof tree at this point.
  • Discharges: Internal consistency of the execution trace vs. the code definition (fn_def).
  • Remaining:
    • [R1.5.1] Correctness of the assumed tx_input_ctx (start state, inputs/outputs).
    • [R1.5.3] Correctness of the assumed session_proof_tree_root.
  • Output: A CFC Proof object.
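
The public inputs hash mentioned above binds the CFC proof to both its proof tree and its claimed execution context. A sketch, assuming a simple two-input combine (the real combiner and hash are circuit-specific):

```python
import hashlib

def h2(a: bytes, b: bytes) -> bytes:
    """Stand-in 2-to-1 hash for the circuit's hash function."""
    return hashlib.sha256(a + b).digest()

def cfc_public_inputs_hash(session_proof_tree_root: bytes,
                           tx_ctx_header_hash: bytes) -> bytes:
    """One public-inputs hash binding the proof to the user's proof tree
    and to the transaction context it claims to have executed under."""
    return h2(session_proof_tree_root, tx_ctx_header_hash)
```

Because both values are folded into a single hash, a later circuit that checks this one public input implicitly checks both commitments, which is exactly what Step 2 exploits when it connects the CFC proof to the state delta.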

Step 2: Verify CFC & Process UPS Delta

  • Circuit: UPSCFCStandardTransactionCircuit
  • High-Level Purpose: To integrate the result of a local CFC execution (Step 1.5) into the user's main proof chain. It verifies the CFC was executed correctly and that its claimed start/end states correctly link the previous UPS state to the next UPS state, while also proving the previous UPS step itself was valid.
  • Proves:
    • Previous Step Validity: Proof N-1 was valid & used a whitelisted UPS circuit ([R(N-1).5] discharged). Public inputs match header hash.
    • CFC Validity: CFC proof (from Step 1.5) is valid & exists in the same UPS proof tree as Proof N-1 ([R1.5.3] discharged). CFC function is registered globally (GCON/CFT check linked via [R(N-1).1]).
    • Context Link: The inner public inputs hash from the verified CFC proof matches the hash of the UPSCFCStandardStateDeltaInput witness used to calculate state changes ([R1.5.1] discharged). This is the critical link ensuring the state delta matches the verified computation.
    • State Delta Correctness: The UPS header transition from step N-1 to N is valid:
      • UCON root updated correctly based on user_contract_tree_update_proof.
      • Debt tree roots updated correctly based on deferred/inline_tx_debt_pivot_proofs (starting from prev step's end state).
      • tx_count incremented; tx_hash_stack updated correctly.
  • How: VerifyPreviousUPSStepProofInProofTreeGadget, UPSVerifyCFCStandardStepGadget connects cfc_inner_public_inputs_hash between UPSVerifyCFCProofExistsAndValidGadget and UPSCFCStandardStateDeltaGadget. Delta/pivot proofs verified.
  • Assumes:
    • [A2.1] Witness data for this step (attestations, state delta proofs) is correct initially.
    • [R(N-1).1] (Prev Step) Last block's CHKP root correctness (used for CFC inclusion context).
    • [R(N-1).4] (Prev Step) Session's empty_ups_proof_tree_root correctness (defines proof tree base).
  • Discharges: [R(N-1).5] (Prev UPS whitelist), [R1.5.1] (CFC Context), [R1.5.3] (CFC Proof Tree Root).
  • Remaining:
    • [RN.1] = [R(N-1).1] (Last block's CHKP root).
    • [RN.4] = [R(N-1).4] (Session's empty_ups_proof_tree_root).
    • [RN.5] (New) Current header's ups_step_circuit_whitelist_root correctness.
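
The tx_count / tx_hash_stack bookkeeping can be sketched as a hash chain. The exact folding rule is not spelled out here, so treat `push_tx` and the Hash(old_stack, tx_hash) rule as illustrative assumptions:

```python
import hashlib

ZERO_HASH = bytes(32)  # session starts with tx_count == 0, tx_hash_stack == ZERO_HASH

def h2(a: bytes, b: bytes) -> bytes:
    return hashlib.sha256(a + b).digest()

def push_tx(tx_count: int, tx_hash_stack: bytes, tx_hash: bytes) -> tuple[int, bytes]:
    """Fold the new transaction's hash into the running commitment
    and bump the counter, as the step-N circuit constrains."""
    return tx_count + 1, h2(tx_hash_stack, tx_hash)
```

Each step's circuit enforces this transition between headers N-1 and N, so the final tx_hash_stack commits to the entire ordered transaction sequence of the session.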

(Repeat Steps 1.5 and 2, or the deferred variant, for all user transactions)

Step 3: End Session

  • Circuit: UPSStandardEndCapCircuit
  • High-Level Purpose: To securely conclude the user's local proving session, producing a single proof that attests to the validity of the entire sequence of transactions, authorized by the user's signature, and ready for network submission. It ensures the session ends in a clean state (no outstanding debts).
  • Proves:
    • Last UPS step proof valid & used correct whitelisted circuit ([R_Last.5] discharged against constant known_ups_circuit_whitelist_root).
    • ZK Signature proof valid & in same proof tree ([R_Last.4] discharged).
    • Signature corresponds to user's key & authenticates PsyUserProvingSessionSignatureDataCompact derived from final UPS header state.
    • Nonce incremented correctly.
    • last_checkpoint_id updated correctly in final UserLeaf.
    • Final debt tree roots are empty.
    • (Optional) UPS proof tree aggregation used correct circuits (known_proof_tree_circuit_whitelist_root).
    • Public Outputs (end_cap_result_hash, guta_stats_hash) correctly computed.
  • How: UPSEndCapFromProofTreeGadget orchestrates verification of last step & signature, UPSEndCapCoreGadget enforces final constraints. Optional VerifyAggProofGadget.
  • Assumes:
    • [A3.1] Witness data correct initially.
    • [A3.2] Constant known_ups_circuit_whitelist_root.
    • [A3.3] Constant known_proof_tree_circuit_whitelist_root (if used).
    • [R_Last.1] (Last tx step) Correctness of CHKP root used as session basis.
  • Discharges: [R_Last.5], [R_Last.4]. Potentially UPS tree agg assumptions.
  • Remaining: [R3.1] = [R_Last.1] (Correctness of initial CHKP root).
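
The end-of-session constraints above amount to a handful of equality checks on the final state. A sketch, with hypothetical field names (the actual UserLeaf/header layout may differ):

```python
def check_end_cap(final_state: dict, start_state: dict,
                  empty_tree_root: bytes, checkpoint_id: int) -> bool:
    """Final-state constraints enforced by the end cap circuit:
    nonce bumped, checkpoint id recorded, and no outstanding debt."""
    return (
        final_state["nonce"] == start_state["nonce"] + 1
        and final_state["last_checkpoint_id"] == checkpoint_id
        and final_state["deferred_tx_debt_tree_root"] == empty_tree_root
        and final_state["inline_tx_debt_tree_root"] == empty_tree_root
    )
```

The debt-tree emptiness checks are what guarantee the session ends "clean": any deferred or inline obligations created mid-session must have been resolved before the end cap can be proven.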

Output of Phase 1: End Cap Proof + Public Inputs + State Deltas from the user


Phase 2: Network Aggregation - Parallel Execution (Realms & Coordinators)

(The Proving Network receives End Cap proofs and state deltas from users and aggregates them to prove the change in the blockchain state root)

Step 4: Process End Cap Proof(s) (GUTA Entry - Realm)

  • Circuit(s):
    • GUTAVerifySingleEndCapCircuit (Handles a single End Cap, e.g., an odd leaf in aggregation)
    • GUTAVerifyTwoEndCapCircuit (Handles pairs of End Cap proofs, typical base case)
  • High-Level Purpose: To securely ingest user End Cap proofs into the GUTA process, verify their validity against the protocol and historical state, and translate them into the standard GlobalUserTreeAggregatorHeader format needed for recursive aggregation.
  • Proves:
    • Input End Cap proof(s) (proof_target, child_a/b_proof) are valid ZK proofs.
    • The End Cap proof(s) were generated by the official End Cap circuit (fingerprint checked against known_end_cap_fingerprint).
    • The public inputs of the End Cap proof(s) correctly match the claimed result/stats (end_cap_result, guta_stats) provided as witness.
    • The checkpoint_tree_root claimed by the user(s) existed historically in the CHKP tree (verified via checkpoint_historical_merkle_proof).
    • (For TwoEndCap): The Nearest Common Ancestor (NCA) logic correctly combines the two individual user state transitions (start_leaf -> end_leaf at user indices) into a single state transition at their parent node in the GUSR tree. Statistics are correctly summed.
    • Outputs a GlobalUserTreeAggregatorHeader representing the state transition for the node processed (either a single user leaf or the NCA parent) and the combined stats.
  • How:
    • VerifyEndCapProofGadget: Used internally (once or twice) to perform core End Cap proof verification, fingerprint check, public input matching, and historical checkpoint validation.
    • TwoNCAStateTransitionGadget (in GUTAVerifyTwoEndCapCircuit): Combines the two GUSR leaf transitions (derived from End Cap results) using an NCA proof witness.
    • GUTAStatsGadget.combine_with: Sums stats (in GUTAVerifyTwoEndCapCircuit).
    • Constructs the output GlobalUserTreeAggregatorHeader.
  • Assumes:
    • [A4.1] Witness data (End Cap proof(s), results, stats, historical proofs, NCA proof if applicable) is correct initially.
    • [A4.2] Constant known_end_cap_fingerprint_hash is correct.
    • [A4.3] Public Input guta_circuit_whitelist_root_hash is correct.
    • [R3.1] (Implicit in End Cap) User session(s) based on valid past CHKP root.
  • Discharges:
    • [R3.1] (via VerifyEndCapProofGadget's historical proof check).
    • Validity of the input End Cap proof(s).
  • Remaining:
    • [R4.1] (New) Correctness of the current block's checkpoint_tree_root (established by checkpoint_historical_merkle_proof.current_root and passed consistently upwards).
    • [R4.3] = [A4.3] (Correctness of guta_circuit_whitelist root).
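
The sibling case of TwoNCAStateTransitionGadget and the stats summation can be sketched as follows (SHA-256 stands in for the circuit hash; the general NCA case, where the two nodes are not direct siblings, additionally lifts each transition along its sibling path before combining):

```python
import hashlib
from dataclasses import dataclass

def h2(a: bytes, b: bytes) -> bytes:
    return hashlib.sha256(a + b).digest()

@dataclass
class Transition:
    old: bytes  # node value before the block's transactions
    new: bytes  # node value after

def combine_siblings(left: Transition, right: Transition) -> Transition:
    """Parent transition from two child transitions: the parent's old value
    commits to both old children, its new value to both new children."""
    return Transition(h2(left.old, right.old), h2(left.new, right.new))

def combine_stats(a: dict, b: dict) -> dict:
    """Per-subtree statistics are summed field by field."""
    return {k: a[k] + b[k] for k in a}
```

The output is exactly the shape needed by the next aggregation level: a single (old, new) transition at the parent node plus combined stats, regardless of how many users sat beneath it.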

Step 5: Aggregate GUTA Proofs (Recursive - Realm/Coordinator)

  • Circuit(s):
    • GUTAVerifyTwoGUTACircuit (Aggregates two GUTA sub-proofs)
    • GUTAVerifyLeftGUTARightEndCapCircuit (Aggregates GUTA sub-proof and a new End Cap proof)
    • GUTAVerifyLeftEndCapRightGUTACircuit (Aggregates an End Cap proof and a GUTA sub-proof)
  • High-Level Purpose: To recursively combine verified state transitions within the GUTA hierarchy. These circuits take proofs representing changes in subtrees (either previous GUTA aggregations or newly processed End Caps) and merge them into a proof for the parent node, typically using NCA logic.
  • Proves:
    • Both input proofs (Type A and Type B, where Type can be GUTA or EndCap) are valid ZK proofs.
    • Input Proof A (if GUTA) used a whitelisted GUTA circuit ([R(A).3] discharged via VerifyGUTAProofGadget).
    • Input Proof A (if EndCap) used the whitelisted EndCap circuit ([A5.EndCapFingerprint] check via VerifyEndCapProofGadget).
    • Input Proof B processed similarly ([R(B).3] or [A5.EndCapFingerprint] discharged).
    • Both input proofs' headers/results reference the same checkpoint_tree_root ([R(A).1] and [R(B).1] verified to be equal, discharging one, remaining [RN.1]).
    • Both input proofs' headers reference the same guta_circuit_whitelist root ([R(A).3] and [R(B).3] verified to be equal, discharging one, remaining [RN.3]).
    • Public inputs of each input proof match their respective headers/results.
    • The NCA logic (TwoNCAStateTransitionGadget) correctly combines the state transitions (SubTreeNodeStateTransition from input GUTA headers or derived from EndCap results) based on the NCA proof witness.
    • Statistics are correctly summed (GUTAStatsGadget.combine_with).
    • Outputs a GlobalUserTreeAggregatorHeader for the parent NCA node.
  • How:
    • VerifyGUTAProofGadget (for GUTA inputs).
    • VerifyEndCapProofGadget (for EndCap inputs).
    • TwoNCAStateTransitionGadget: Core aggregation logic using NCA proof witness.
    • Connections ensuring consistency of checkpoint_tree_root and guta_circuit_whitelist between inputs.
  • Assumes:
    • [A5.1] Witness data (input proofs, headers/results, whitelist proofs, NCA proof) correct initially.
    • [A5.EndCapFingerprint] Constant known_end_cap_fingerprint_hash (if applicable).
    • [R(A).1], [R(B).1] (from inputs) CHKP root correctness.
    • [R(A).3], [R(B).3] (from inputs) GUTA whitelist correctness.
  • Discharges: Validity/consistency of input proofs, their whitelist usage ([R(A/B).3]), and consistency of their assumed CHKP roots ([R(A).1] confirmed equal to [R(B).1]).
  • Remaining:
    • [RN.1] = [R(A).1] (Common CHKP root correctness).
    • [RN.3] = [R(A).3] (Common GUTA whitelist correctness).

Step 5.5: Propagate GUTA Proof Upwards (Line Proof)

  • Circuit: GUTAVerifyGUTAToCapCircuit (May be used within Realm or by Coordinator)
  • High-Level Purpose: To efficiently propagate a verified state transition from a lower node up a direct path in the tree (where no merging is needed) to a higher level (e.g., the Realm root or the global root).
  • Proves:
    • The input GUTA proof is valid and used a whitelisted GUTA circuit ([R_In.3] discharged).
    • The input proof references the correct CHKP root ([R_In.1]).
    • The state transition is correctly recalculated from the input proof's node level up to the target level (e.g., level 0) using the provided top_line_siblings witness.
    • Outputs a GlobalUserTreeAggregatorHeader with the state transition reflecting the change at the target level.
  • How: VerifyGUTAProofToLineGadget (uses VerifyGUTAProofGadget and GUTAHeaderLineProofGadget which uses SubTreeNodeTopLineGadget).
  • Assumes:
    • [A5.5.1] Witness data (input proof, header, whitelist proof, top_line_siblings) correct initially.
    • [R_In.1], [R_In.3] (from input proof).
  • Discharges: [R_In.3] (GUTA whitelist). Validity of input proof relative to [R_In.1].
  • Remaining:
    • [R5.5.1] = [R_In.1] (Common CHKP root correctness).
    • [R5.5.3] = [R_In.3] (Common GUTA whitelist correctness).
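
The line-proof recomputation (SubTreeNodeTopLineGadget) hashes the same sibling values into both sides of the transition at every level. A sketch:

```python
import hashlib

def h2(a: bytes, b: bytes) -> bytes:
    return hashlib.sha256(a + b).digest()

def lift_transition(old: bytes, new: bytes, index: int,
                    siblings: list[bytes]) -> tuple[bytes, bytes]:
    """Recompute a node's (old, new) values at an ancestor level. Because
    only this node changed, each level reuses one unchanged sibling value
    on both the old side and the new side of the transition."""
    for sib in siblings:
        if index & 1:
            old, new = h2(sib, old), h2(sib, new)
        else:
            old, new = h2(old, sib), h2(new, sib)
        index >>= 1
    return old, new
```

This is cheaper than a merge step: no NCA logic or second proof is needed, just one sibling hash per level on each side of the transition.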

(Steps 5/5.5 repeat recursively until Realm roots are reached)

Step 5.N: Handle No GUTA Changes

  • Circuit: GUTANoChangeCircuit
  • High-Level Purpose: To allow the aggregation process to incorporate the latest checkpoint root even if no user activity (relevant to GUTA) occurred in a particular block or subtree. Maintains checkpoint consistency across the aggregation structure.
  • Proves:
    • A specific checkpoint_leaf exists in the checkpoint_tree at the previous checkpoint_id.
    • The GUSR root remained unchanged between the previous and current checkpoint (new_guta_header.state_transition shows old_node_value == new_node_value at level 0).
    • Statistics are zero.
    • Outputs a GlobalUserTreeAggregatorHeader referencing the current checkpoint_tree_root but indicating no GUSR change.
  • How: GUTANoChangeGadget (uses MerkleProofGadget for checkpoint proof, constructs no-op transition).
  • Assumes:
    • [A5.N.1] Witness data correct initially.
    • [A5.N.3] Public Input guta_circuit_whitelist root is correct.
  • Discharges: Internal consistency of checkpoint proof/leaf.
  • Remaining:
    • [R5.N.1] (New) Correctness of the current checkpoint_tree_root (from the witness proof).
    • [R5.N.3] = [A5.N.3] (GUTA whitelist correctness).

Output of Phase 2: Aggregated GUTA proof(s) from each active Realm (or a NoChange proof), valid relative to [R_Realm.1] and [R_Realm.3].


Phase 3: Coordinator Level Aggregation - Network Execution

(Coordinator combines proofs from Realms and global tree updates)

Step 6: Process User Registrations

  • Circuit: BatchAppendUserRegistrationTreeCircuit
  • Proves: Correct batch append to URT (output root valid given input root [A6.3]). Respects register_users_circuit_whitelist ([A6.2]).
  • Assumes: [A6.x] (Witness, whitelist, input URT root).
  • Discharges: Internal append proof consistency.
  • Remaining: [R6.2] (Whitelist), [R6.3] (Input URT root correctness).

Step 7: Process Contract Deployments

  • Circuit: BatchDeployContractsCircuit
  • Proves: Correct batch append to GCON (output root valid given input root [A7.3]). Witness leaves match hashes. Respects deploy_contract_circuit_whitelist ([A7.2]).
  • How: BatchDeployContractsGadget.
  • Assumes: [A7.x] (Witness, whitelist, input GCON root).
  • Discharges: Internal append/leaf consistency.
  • Remaining: [R7.2] (Whitelist), [R7.3] (Input GCON root correctness).

Step 8: Aggregate Part 1 (Combine UserReg + Deploy + GUTA)

  • Circuit: VerifyAggUserRegistartionDeployContractsGUTACircuit
  • Proves: Input proofs (Agg UserReg, Agg Deploy, Agg GUTA) valid & used respective whitelisted circuits ([R6.2], [R7.2], [R_GUTA.3] discharged). All inputs based on same CHKP root ([R_GUTA.1] verified across inputs). Output header correctly combines state transitions.
  • How: VerifyAggUserRegistartionDeployContractsGUTAGadget.
  • Assumes:
    • [A8.1] Witness data correct initially.
    • [R_GUTA.1] (Implicit common CHKP root from inputs).
    • [R6.3] (Input URT root correctness).
    • [R7.3] (Input GCON root correctness).
  • Discharges: Whitelists ([R6.2], [R7.2], [R_GUTA.3]). Input proof validity. Consistency of CHKP root [R_GUTA.1].
  • Remaining:
    • [R8.1] = [R_GUTA.1] (CHKP root correctness).
    • [R8.3] = [R6.3] (Input URT root correctness).
    • [R8.4] = [R7.3] (Input GCON root correctness).

Output of Phase 3: Single "Part 1" proof, valid relative to [R8.1], [R8.3], [R8.4].


Phase 4: Final Block Proof - Network Execution

Step 9: Final Block Transition

  • Circuit: PsyCheckpointStateTransitionCircuit
  • High-Level Purpose: To generate the definitive proof for the block, verifying all aggregated work and cryptographically linking the block to its predecessor, thereby discharging all temporary assumptions made during parallel processing.
  • Proves:
    • Part 1 Agg proof (Step 8) valid & used correct circuit.
    • New CHKP Leaf computed correctly from Part 1 outputs (new global roots for URT ([R8.3] discharged), GCON ([R8.4] discharged), GUSR), stats, time, randomness.
    • CHKP tree append operation correct, transitioning from previous_checkpoint_proof.root ([R8.1]) to new_checkpoint_tree_root.
    • Final Chain Link: previous_checkpoint_proof.root matches the Public Input previous_block_chkp_root.
  • How: CheckpointStateTransitionChildProofsGadget, CheckpointStateTransitionCoreGadget.
  • Assumes:
    • [A9.1] Witness data correct initially.
    • [A9.2] Public Input previous_block_chkp_root == previous block's finalized CHKP root.
  • Discharges: [R8.1] (CHKP root correctness discharged against public input [A9.2]). [R8.3] (URT root correctness), [R8.4] (GCON root correctness) implicitly discharged by relying on the verified Part 1 proof output.
  • Remaining Assumptions: None.

Output: Final Block Proof. Its verification confirms the entire block's state transition is valid, contingent only on the validity of the previous block's state hash ([A9.2]) and ZKP soundness.
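
The inductive chaining this enables can be sketched outside the circuits: given each block proof's public (previous, new) CHKP roots, a verifier only has to thread them together back to a trusted root (the ZK proof verification of each block is elided here):

```python
from dataclasses import dataclass

@dataclass
class BlockProof:
    previous_chkp_root: bytes  # public input: root the block builds on
    new_chkp_root: bytes       # public output: root after applying the block

def verify_chain(genesis_root: bytes, blocks: list[BlockProof]) -> bool:
    """Each block proof is trusted only relative to its predecessor's root;
    chaining the public inputs yields an inductive argument back to genesis."""
    expected = genesis_root
    for b in blocks:
        if b.previous_chkp_root != expected:
            return False
        expected = b.new_chkp_root
    return True
```

This is why the final circuit's "Remaining Assumptions: None" claim holds: every temporary assumption has been discharged internally, leaving only the previous root, which the chain check pins down.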

Psy: Horizontally Scalable Blockchain via PARTH and ZK Proofs

Introduction: The Serial Bottleneck

Traditional blockchains suffer from a serial execution bottleneck. They process transactions sequentially within a single state machine, meaning adding more nodes doesn't increase overall throughput (TPS). Parallel execution attempts often lead to race conditions and inconsistencies. Psy overcomes this fundamental limitation with its novel PARTH architecture and end-to-end Zero-Knowledge Proof (ZKP) system.

The PARTH Architecture: Foundation for Parallelism

PARTH (Parallelizable Account-based Recursive Transaction History) redefines blockchain state organization:

  1. Granular State: Instead of one global state tree, Psy maintains a hierarchy of Merkle trees, most notably:

    • Per-User, Per-Contract State (CSTATE): Each user has a separate Merkle tree (CSTATE) representing their specific state within each smart contract they interact with.
    • User Contract Tree (UCON): Aggregates all CSTATE roots for a single user, representing their overall state across all contracts.
    • Global User Tree (GUSR): Aggregates all UCON roots, representing the state of all users.
    • Global Contract Tree (GCON): Represents the global state related to contract code/definitions.
    • Checkpoint Tree (CHKP): The top-level tree whose root hash represents a verifiable snapshot of the entire blockchain state at a given block.
  2. Controlled Interaction Rules (Key to Scalability):

    • Write Locally: A transaction initiated by a user can only modify (write to) the state within that specific user's own trees (primarily their CSTATE and UCON trees).
    • Read Globally (Previous State): A transaction can read the state from any other user's trees (or global trees like GCON), but critically, it only sees the state as it was finalized at the end of the previous block.
  3. Enabling Parallelism: Because write operations are isolated to the sender's state trees, and read operations access the immutable, finalized state of the last block, transactions from different users within the same block operate independently. They cannot conflict or cause race conditions. This independence is the architectural breakthrough that allows for massive parallel processing.
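
A toy model of this hierarchy makes the write isolation concrete: changing one user's CSTATE root changes only that user's UCON root and the single GUSR path above it. SHA-256 and a balanced power-of-two tree are simplifying assumptions:

```python
import hashlib

def h(a: bytes, b: bytes) -> bytes:
    return hashlib.sha256(a + b).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    """Root of a balanced binary tree (leaf count assumed a power of two)."""
    level = leaves
    while len(level) > 1:
        level = [h(level[i], level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def gusr_root(all_users_cstate_roots: list[list[bytes]]) -> bytes:
    """GUSR root over per-user UCON roots, each of which is itself
    a tree over that user's per-contract CSTATE roots."""
    ucon_roots = [merkle_root(cstates) for cstates in all_users_cstate_roots]
    return merkle_root(ucon_roots)
```

A write by user 0 recomputes only ucon_roots[0] and the GUSR path above it; every other user's subtree is untouched, which is why their proofs can be produced and aggregated in parallel.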

The End-to-End ZK Proof Process: Securing Parallelism

Psy uses a sophisticated system of ZK circuits and recursive proofs to verify all state transitions securely, even when processed in parallel.

Step 1: Local Transaction Execution & Proving (User Level)

  • Action: A user interacts with a dApp, triggering one or more smart contract function calls.
  • Execution: The logic defined in the specific Contract Function Circuit (CFC) associated with the smart contract is executed locally (or by a delegated prover).
  • State Update: This execution modifies the user's CSTATE tree for that contract. The Merkle root of the user's UCON tree is also updated to reflect this change.
  • Proof Generation: The user generates ZK proofs using User Proving Session (UPS) circuits. These proofs attest that:
    • The contract logic (CFC) was executed correctly.
    • The user's CSTATE and UCON trees transitioned correctly from a known previous state root to a new state root.
    • Assumptions: These proofs rely on public inputs that declare assumptions about the state of the blockchain before the transaction (e.g., the user's previous UCON root, the relevant GCON root, and crucially, the CHKP root of the last finalized block which guarantees the validity of any global state read).
  • Recursion: If the user performs multiple actions, recursive proofs compress them into a single proof representing the net state change for that user within the block.

Step 2: Parallel Proof Aggregation (Decentralized Proving Network + Realms)

  • Submission: Users submit their final proofs and delta Merkle proofs (representing their state transitions for the block) to their corresponding realm node on the Psy network. (see handle_recv_end_cap_from_user)
  • Parallel Processing: Because the PARTH architecture guarantees user-transaction independence, the Decentralized Proving Network can take proofs from thousands or millions of users and verify and aggregate them in parallel. Realms are in charge of enqueueing the proofs for aggregation by the proof workers.
  • Realm Handoff: Once all the update proofs in a realm have been aggregated into a single proof, the realm sends that proof to the block coordinator, which aggregates all realm updates into a single proof at the end of the block.
  • Hierarchical Aggregation: Specialized Aggregation Circuits (e.g., for aggregating user contract trees, or global user trees) recursively verify batches of proofs from the layer below within the PARTH structure.
    • Each aggregation circuit checks the validity of the input proofs it receives.
    • It enforces the assumptions declared in the public inputs of the lower-level proofs. For example, a circuit aggregating user states ensures all input proofs correctly referenced the same previous global user state root.
    • It outputs a single, smaller proof representing the validity of the entire batch it processed.
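
The shape of this reduction, sketched with hashes standing in for recursive proofs; each combine step represents an aggregation circuit that verifies both children and checks they declared the same previous-block CHKP root:

```python
import hashlib

def h2(a: bytes, b: bytes) -> bytes:
    return hashlib.sha256(a + b).digest()

def aggregate(commitments: list[bytes]) -> bytes:
    """Pairwise-reduce a layer of proof commitments until one remains;
    an odd element at any layer propagates up unchanged."""
    layer = commitments
    while len(layer) > 1:
        nxt = [h2(layer[i], layer[i + 1]) for i in range(0, len(layer) - 1, 2)]
        if len(layer) % 2:
            nxt.append(layer[-1])
        layer = nxt
    return layer[0]
```

Because every layer halves the proof count, N user proofs are reduced in O(log N) sequential rounds, and each round's combines are independent and can run on separate proof workers.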

Step 3: Final Block Proof Generation (Consensus / Block Leader)

  • Final Aggregation: The parallel aggregation process continues up the hierarchy, culminating in the Checkpoint Tree "Block" Circuit. This final circuit aggregates proofs from the top-level trees (like GUSR, GCON).
  • Inductive Verification: Assumptions made by lower circuits are checked and discharged by higher circuits during the recursive process. By the time execution reaches the final "Block" circuit, the only remaining external assumption is the root hash of the previous block's Checkpoint Tree (CHKP root).
  • Trustless Link: This final circuit verifies this previous CHKP root against the public input provided (which is the known, finalized root from the preceding block). This check creates a cryptographic, trustless link between consecutive blocks.
  • Proof Singularity: The output is a single, succinct ZK proof for the entire block, proving the validity of all transactions and the overall state transition from the previous CHKP root to the new CHKP root, without needing to reveal individual transaction details or re-execute anything.

Step 4: Verification and Finalization

  • The final block proof is broadcast and verified by consensus nodes (or potentially bridge contracts on other L1s).
  • Verification is extremely fast as it only involves checking the single ZK proof and the link to the previous block's state.
  • Once verified, the new CHKP root becomes the finalized, canonical state snapshot for that block.

The Role of the Checkpoint Tree (CHKP) and Circuits

  • Global State Snapshot (CHKP): The CHKP root acts as a universal anchor. It captures a full snapshot of the entire chain and allows anyone to efficiently verify any piece of state information (user balance, contract variable, etc.) from any block by providing a Merkle proof linking that data back to the validated CHKP root of that block.
  • Circuit Enforcement: Each tree update within the PARTH structure is strictly controlled. Updates are only permitted if accompanied by a valid ZK proof generated by a pre-approved, whitelisted circuit designed for that specific tree transition. This constraint, enforced by the aggregation circuits above, ensures state changes adhere precisely to the defined rules (contract logic, protocol rules).

Conclusion: Achieving Horizontal Scalability

Psy achieves true horizontal scalability through the synergy of:

  1. PARTH Architecture: Its granular state and specific read/write rules eliminate conflicts between transactions from different users within a block.
  2. Parallel Proving: This independence allows the computationally intensive task of ZK proof generation and aggregation to be massively parallelized across the Decentralized Proving Network.
  3. Recursive ZK Proofs: Efficiently compress and verify vast amounts of computation and state changes into a single, easily verifiable proof for each block, secured by the inductive verification up to the final Block Circuit linking to the previous state.

By processing user transactions independently and in parallel, secured by end-to-end ZK verification, Psy breaks the serial bottleneck and offers a path to blockchain performance potentially orders of magnitude higher than traditional architectures, truly scaling horizontally as more proving resources are added.

Key terms:

  • DPN: Dapen, the code name for the Psy Rust smart contract language.
  • UPS: User Proving Session, the process by which users prove transactions locally and generate the delta Merkle proofs to be sent to the chain.
  • End Cap: The final recursive proof, which checks the signature and state deltas for a user.
  • GUTA: Global User Tree Aggregation.

The Psy User Proving Session (UPS) Proof Tree: Efficient Recursive Verification

1. Introduction: The Challenge of Recursive Verification Costs

The User Proving Session (UPS) relies on recursive ZK proofs, where each step cryptographically verifies the previous one. A naive approach might involve embedding the entire verification logic of the previous step's circuit inside the current step's circuit. However, this leads to several problems:

  1. Variable Circuit Complexity: Different UPS steps might verify different underlying circuits (standard CFC, deferred CFC, start session), each with varying verification costs. This makes creating fixed-size, efficient recursive circuits difficult.
  2. Computational Bloat: Repeatedly verifying complex proofs within recursion significantly increases the computational cost and proving time for each step.
  3. Limited Parallelism: Tightly coupling verification logic hinders the potential to parallelize the proving work, even if the witness generation is serial.

The UPS Proof Tree architecture elegantly solves these issues by deferring the direct verification cost and replacing it with cheap cryptographic commitments and existence checks.

2. The UPS Proof Tree: Commit, Don't Verify (Yet)

2.1 Core Concept: A Merkle Tree of Proof Commitments

Instead of fully verifying the previous proof within the current step's circuit, the UPS system commits each generated proof to a Merkle tree specific to the session.

  • Leaf Content: Each leaf i stores a hash committing to the proof's identity and context:

    • Standard Proofs (like ZK Signature): LeafValue_i = Hash(ProofFingerprint_i, PublicInputsHash_i)
    • Tree-Aware Proofs (UPS Steps): LeafValue_i = Hash(ProofFingerprint_i, TreeAwarePublicInputsHash_i)
      • Where TreeAwarePublicInputsHash_i = Hash(ProofTreeRoot_i-1, InnerPublicInputsHash_i)
  • Append-Only Construction: As each proof (CFC execution proof, UPS step proof, signature proof) is generated locally, its corresponding LeafValue is computed and appended to the tree. The tree root updates with each addition.
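The two leaf constructions can be sketched outside the circuit. This minimal Python model uses SHA-256 as a stand-in for the circuit's ZK-friendly hash; the function names are illustrative, not the real API:

```python
import hashlib

def h(*parts: bytes) -> bytes:
    # Stand-in for the circuit hash; the real system uses a ZK-friendly hash.
    return hashlib.sha256(b"".join(parts)).digest()

def standard_leaf(fingerprint: bytes, public_inputs_hash: bytes) -> bytes:
    # LeafValue_i = Hash(ProofFingerprint_i, PublicInputsHash_i)
    return h(fingerprint, public_inputs_hash)

def tree_aware_leaf(fingerprint: bytes, prev_tree_root: bytes,
                    inner_public_inputs_hash: bytes) -> bytes:
    # LeafValue_i = Hash(Fingerprint_i, Hash(ProofTreeRoot_{i-1}, InnerPublicInputsHash_i))
    return h(fingerprint, h(prev_tree_root, inner_public_inputs_hash))
```

A tree-aware leaf thus binds a proof not only to its own public inputs but to the exact tree state that preceded it.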

2.2 Cheap In-Circuit Checks: Attesting Existence and Linkage

The core innovation lies in using specialized gadgets instead of full verifiers within the recursive steps:

  • Gadget 1: AttestProofInTreeGadget (For non-tree-aware proofs like Signatures)

    • Function: Proves that a commitment Hash(Fingerprint, PublicInputsHash) exists as a leaf within a tree identified by AttestedProofTreeRoot.
    • Mechanism: Takes Fingerprint, PublicInputsHash, and a Merkle inclusion_proof witness. Computes the expected leaf hash and verifies the Merkle proof against it.
    • Cost: Primarily the cost of Merkle proof verification (logarithmic hashing) and a few hashes/comparisons – significantly cheaper than verifying the actual ZK proof.
    • Assumption: The ZK proof corresponding to the Fingerprint and PublicInputsHash is itself valid. This gadget only proves commitment existence.
  • Gadget 2: AttestTreeAwareProofInTreeGadget (For tree-aware UPS step proofs)

    • Function: Proves that a commitment for a tree-aware proof exists in the tree, linking it to the correct historical state of the tree.
    • Mechanism: Takes Fingerprint, InnerPublicInputsHash, an inclusion_proof (in the current tree), and a historical_root_proof (pivot proof showing Root_N-2 -> Root_N-1). It computes the expected tree-aware leaf hash Hash(Fingerprint, Hash(Root_N-1, InnerPublicInputsHash)) and verifies the inclusion_proof against it and the current root. It also verifies the historical_root_proof.
    • Cost: Cost of two Merkle proof verifications (inclusion and historical pivot) plus hashing/comparisons – still much cheaper than full ZK proof verification.
    • Assumption: The ZK proof corresponding to the Fingerprint and InnerPublicInputsHash (relative to Root_N-1) is itself valid.
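Outside the circuit, the existence check both gadgets perform is ordinary Merkle inclusion verification. A minimal sketch, with SHA-256 standing in for the circuit hash and illustrative helper names:

```python
import hashlib

def h(*parts: bytes) -> bytes:
    return hashlib.sha256(b"".join(parts)).digest()

def _pad(leaves):
    # Fixed-height trees pad to a power of two with zero leaves.
    level = list(leaves)
    while len(level) & (len(level) - 1):
        level.append(b"\x00" * 32)
    return level

def merkle_root(leaves) -> bytes:
    level = _pad(leaves)
    while len(level) > 1:
        level = [h(level[i], level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_path(leaves, index):
    # Sibling at each level, leaf to root.
    level, path = _pad(leaves), []
    while len(level) > 1:
        path.append(level[index ^ 1])
        level = [h(level[i], level[i + 1]) for i in range(0, len(level), 2)]
        index >>= 1
    return path

def verify_inclusion(leaf: bytes, index: int, path, root: bytes) -> bool:
    # The logarithmic-cost check the attestation gadgets perform in-circuit.
    node = leaf
    for sib in path:
        node = h(sib, node) if index & 1 else h(node, sib)
        index >>= 1
    return node == root
```

AttestTreeAwareProofInTreeGadget performs a second, historical-root proof on top of this, but the per-level cost is the same logarithmic hashing.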

2.3 How Recursion Works with the Tree: Deferral in Action

Consider UPS Step N verifying Step N-1:

  1. Step N-1 Runs: Generates its ZK proof (Proof_N-1) and computes its tree-aware public inputs hash TAPH_N-1 = Hash(Root_N-2, InnerPubHash_N-1). Its commitment LeafValue_N-1 = Hash(Fingerprint_N-1, TAPH_N-1) is added to the tree, updating the root to Root_N-1.
  2. Step N Circuit Runs:
    • Does NOT Verify Proof_N-1 directly.
    • Instead, it uses AttestTreeAwareProofInTreeGadget.
    • Assumes: The Proof_N-1 (whose commitment is being checked) is valid.
    • Takes Witness: Fingerprint_N-1, InnerPubHash_N-1, inclusion_proof (for Leaf N-1 in tree N-1), historical_root_proof (Root N-2 -> Root N-1).
    • Verifies: That the commitment to Proof_N-1 exists correctly linked to Root_N-2 within the tree state Root_N-1. This cryptographically confirms the sequential link.
    • Computes: Its own output header hash InnerPubHash_N.
    • Generates: Its own ZK proof (Proof_N) whose public inputs commit to Hash(Root_N-1, InnerPubHash_N).

Crucially, the circuit for Step N has a fixed, relatively low complexity, dominated by the Merkle proof gadgets, regardless of the complexity of the circuit used for Step N-1. It only deals with commitments to proofs, not the proofs themselves.
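The deferral can be traced concretely by modeling the append-only tree as a simple hash chain. This is a deliberate simplification (the real structure is a Merkle tree), but it makes the Root_N-2 -> Root_N-1 pivot check directly visible:

```python
import hashlib

def h(*parts: bytes) -> bytes:
    return hashlib.sha256(b"".join(parts)).digest()

def append(root: bytes, leaf: bytes) -> bytes:
    # Hash-chain model of appending a leaf to the proof tree.
    return h(root, leaf)

# Step N-1 commits its proof relative to Root_{N-2}.
root_n2 = append(b"\x00" * 32, h(b"earlier leaves"))
fp_n1 = h(b"circuit fingerprint N-1")
inner_n1 = h(b"inner public inputs N-1")
taph_n1 = h(root_n2, inner_n1)       # TAPH_{N-1} = Hash(Root_{N-2}, InnerPubHash_{N-1})
leaf_n1 = h(fp_n1, taph_n1)          # LeafValue_{N-1}
root_n1 = append(root_n2, leaf_n1)   # tree root after the append

# The cheap checks Step N performs (never verifying Proof_{N-1} itself):
assert h(fp_n1, h(root_n2, inner_n1)) == leaf_n1  # leaf reconstructs from witness
assert append(root_n2, leaf_n1) == root_n1        # link Root_{N-2} -> Root_{N-1}
```

Both assertions cost only a few hashes, regardless of how expensive Proof_N-1 itself was to generate or verify.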

3. Discharging the Assumptions: The End Cap and Beyond

Throughout the UPS, the core assumption accumulates: "All ZK proofs committed to the tree are valid."

  • The End Cap's Role: The UPSStandardEndCapCircuit is where this primary assumption begins to be addressed for the UPS internal proofs.

    • It verifies the last UPS step proof's commitment using AttestTreeAwareProofInTreeGadget.
    • It verifies the ZK Signature proof's commitment using AttestProofInTreeGadget.
    • It ensures both verifications reference the same final proof tree root, linking the signature to the end state of the transaction sequence.
    • Optional (but recommended): It often includes a VerifyAggProofGadget (or VerifyAggRootGadget). This gadget takes a ZK proof (generated outside the UPS circuit) that recursively verifies all the actual proofs committed to the UPS Proof Tree. This aggregation proof essentially proves the core assumption: "All proofs committed to the tree with root FinalRoot are valid". By verifying this single aggregation proof, the End Cap circuit discharges the validity assumption for all internal UPS steps.
  • Deferred Verification: The actual computationally expensive work of verifying all the individual UPS step proofs and the signature proof is bundled into generating the separate UPS Proof Tree aggregation proof. This aggregation can happen:

    • Locally: After witness generation but before End Cap proving.
    • Remotely/Parallel: In a future design, the witness data and tree commitments could be sent to parallel provers. They generate proofs for each step and the final aggregation proof. The End Cap circuit then only needs to verify the single aggregation proof.
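The deferred work amounts to a loop over the committed proofs: verify each one, and recompute the commitment accumulator to show it lands on the final root. A sketch in the same hash-chain model, where the verify callbacks are placeholders for real ZK verification:

```python
import hashlib

def h(*parts: bytes) -> bytes:
    return hashlib.sha256(b"".join(parts)).digest()

def aggregate(committed):
    """committed: [(fingerprint, public_inputs_hash, verify_fn)] in append order."""
    root = b"\x00" * 32
    for fp, pi, verify in committed:
        assert verify(fp, pi)       # the expensive per-proof ZK verification
        root = h(root, h(fp, pi))   # recompute the accumulator over commitments
    # Statement proved: every proof committed under `root` is valid.
    return root
```

The End Cap then needs only one cheap check: that the aggregation proof's public root equals the session's final proof-tree root.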

4. Benefits Summary

  1. Constant Recursive Step Cost: The cost of verifying the previous step inside the current step's circuit is fixed and low (Merkle checks), independent of the previous step's circuit complexity. This allows for efficient recursion with consistent circuit degrees.
  2. Deferral of Computation: The heavy lifting of verifying the actual ZK proofs is pushed out of the main recursive path and consolidated into a single (optional but recommended) aggregation proof verified at the end.
  3. Enables Parallel Proving: By using the tree commitments as interfaces, the proving of individual UPS steps (and the final tree aggregation) can be parallelized across multiple workers, even if witness generation is serial. Workers only need the relevant witnesses and the expected tree roots/proofs to verify their step's link, without needing to run the entire previous circuit's verification logic.
  4. Architectural Flexibility: The system clearly separates committing to a proof's existence/context from verifying the proof's internal validity, allowing different strategies for when and how the full verification occurs.

In essence, the UPS Proof Tree acts as a highly efficient cryptographic ledger within the proving session, allowing circuits to cheaply attest to the existence and sequential linkage of prior computational steps, while deferring the bulk verification cost to a final, potentially parallelizable, aggregation step.

Psy Architecture: Horizontally Scalable Blockchain via PARTH and ZK Proofs

1. Introduction: Beyond Sequential Limits

The evolution of blockchain technology has been marked by a persistent challenge: scalability. Traditional designs, processing transactions sequentially within a monolithic state machine, hit a throughput ceiling that cannot be overcome simply by adding more network nodes. Psy represents a fundamental leap forward, tackling this bottleneck through a revolutionary state architecture known as PARTH and a meticulously designed, end-to-end Zero-Knowledge Proof (ZKP) system. This architecture unlocks true horizontal scalability, enabling unprecedented transaction processing capacity while maintaining rigorous cryptographic security.

2. The PARTH Architecture: A Foundation for Parallelism

PARTH (Parallelizable Account-based Recursive Transaction History) dismantles the concept of a single, conflict-prone global state. Instead, it establishes a granular, hierarchical structure where state modifications are naturally isolated, paving the way for massive parallel processing.

2.1 The Hierarchical State Forest

Psy's state is not a single tree, but a "forest" of interconnected Merkle trees, each with a specific domain:

graph TD
    subgraph "Global State Snapshot (Block N)"
        CHKP(CHKP Root);
    end

    subgraph "Global Trees (Referenced by CHKP)"
      GUSR(GUSR Root);
      GCON(GCON Root);
      URT(URT Root);
      GDT(GDT Root);
      GWT(GWT Root);
      STATS(Block Stats Hash);
    end

    subgraph "User-Specific Trees (Leaf in GUSR)"
      ULEAF{User Leaf};
      UCON(UCON Root);
    end

    subgraph "Contract Definition (Leaf in GCON)"
       CLEAF{Contract Leaf};
       CFT(CFT Root);
    end

    subgraph "User+Contract Specific State (Leaf in UCON)"
      CST(CSTATE Root);
    end

    CHKP --> GUSR;
    CHKP --> GCON;
    CHKP --> URT;
    CHKP --> GDT;
    CHKP --> GWT;
    CHKP --> STATS;

    GUSR -- User ID --> ULEAF;
    ULEAF -- Contains --> UCON;
    ULEAF -- Contains --> UserMetadata[PK Hash, Bal, Nonce, etc.];

    GCON -- Contract ID --> CLEAF;
    CLEAF -- Contains --> CFT;
    CLEAF -- Contains --> ContractMetadata[Deployer Hash, CST Height];

    UCON -- Contract ID --> CST;
    CFT -- Function ID --> FuncFingerprint{Function Fingerprint};

    style CHKP fill:#f9f,stroke:#333,stroke-width:2px,font-weight:bold
    style GUSR fill:#ccf,stroke:#333,stroke-width:1px
    style GCON fill:#cfc,stroke:#333,stroke-width:1px
    style URT fill:#fec,stroke:#333,stroke-width:1px
    style ULEAF fill:#fcc,stroke:#333,stroke-width:1px,font-style:italic
    style CLEAF fill:#fcc,stroke:#333,stroke-width:1px,font-style:italic
    style UCON fill:#cff,stroke:#333,stroke-width:1px
    style CST fill:#ffc,stroke:#333,stroke-width:1px
    style CFT fill:#eef,stroke:#333,stroke-width:1px
  • CHKP (Checkpoint Tree): The ultimate source of truth for a given block. Its root immutably represents the entire state snapshot, committing to the roots of all major global trees and block statistics. Verifying the CHKP root transitively verifies the entire state.
  • GUSR (Global User Tree): Aggregates all registered users. Each leaf (ULEAF) corresponds to a user ID and contains their public key commitment, balance, nonce, last synchronized checkpoint ID, and, crucially, the root of their personal UCON tree.
  • UCON (User Contract Tree): A per-user tree mapping Contract IDs to the roots of the user's corresponding CSTATE trees. This tree represents the user's state footprint across all contracts they've interacted with.
  • CSTATE (Contract State Tree): The most granular level. This tree is specific to a single user AND a single contract. It holds the actual state variables (storage slots) pertinent to that user within that contract. This is where smart contract logic primarily operates.
  • GCON (Global Contract Tree): Stores global information about deployed contracts via CLEAF nodes (Contract Leaf).
  • CLEAF (Contract Leaf): Contains the deployer's identifier hash, the root of the contract's Function Tree (CFT), and the required height (size) for its associated CSTATE trees.
  • CFT (Contract Function Tree): Per-contract tree whitelisting executable functions. Maps Function IDs to the ZK circuit fingerprint of the corresponding DapenContractFunctionCircuit, ensuring only verified code can be invoked.
  • URT (User Registration Tree): Commits to user public keys during the registration process, ensuring uniqueness and linking registrations to cryptographic identities.
  • Other Trees: Dedicated global trees handle deposits (GDT), withdrawals (GWT), event data (EDATA - conceptually), etc.
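How a single storage-slot change propagates up the forest can be sketched as nested hash commitments. SHA-256 and the field layouts here are stand-ins; the real leaf encodings and hash are fixed by the circuits:

```python
import hashlib

def h(*parts: bytes) -> bytes:
    return hashlib.sha256(b"".join(parts)).digest()

def checkpoint_commitment(slot_value: bytes) -> bytes:
    cstate_root = h(b"slot0:", slot_value)         # CSTATE: per-user, per-contract storage
    ucon_root = h(b"contract:7", cstate_root)      # UCON maps contract ID -> CSTATE root
    uleaf = h(b"pk_hash", b"balance", b"nonce", ucon_root)  # ULEAF commits the UCON root
    gusr_root = h(b"sibling users", uleaf)         # GUSR commits every user leaf
    # CHKP leaf commits all global roots plus block stats.
    return h(gusr_root, b"gcon_root", b"urt_root", b"gdt_root", b"gwt_root", b"stats")
```

Changing one slot changes every root on the path up to CHKP, which is exactly what lets a Merkle proof against the CHKP root authenticate that slot.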

2.2 PARTH Interaction Rules: Enabling Concurrency

The genius of PARTH lies in its strict state access rules:

  1. Localized Writes: A transaction initiated by User A can only modify state within User A's own trees. This typically involves changes within one or more of User A's CSTATE trees, which then requires updating the corresponding leaves in User A's UCON tree, ultimately updating User A's ULEAF in the GUSR. Crucially, User A cannot directly alter User B's CSTATE, UCON, or ULEAF.
  2. Historical Global Reads: A transaction can read any state from the blockchain (e.g., User B's balance stored in their ULEAF, a variable in User B's CSTATE via User B's UCON root, or global contract data from GCON). However, these reads always access the state as it was finalized in the previous block's CHKP root. The current block's ongoing, parallel state changes are invisible to concurrent transactions.
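The two rules can be illustrated with a toy two-user block: both transactions read only the previous block's snapshot and write only their own partition, so execution order cannot matter (balances and transaction logic are invented for illustration):

```python
from copy import deepcopy

prev_block = {"A": {"bal": 10}, "B": {"bal": 20}}  # finalized state from Block N-1

def tx_a(snapshot, own_state):
    # User A reads B's *finalized* balance but writes only A's own partition.
    own_state["bal"] += snapshot["B"]["bal"]

def tx_b(snapshot, own_state):
    own_state["bal"] += snapshot["A"]["bal"]

new_block = deepcopy(prev_block)
tx_a(prev_block, new_block["A"])  # both read the immutable previous snapshot,
tx_b(prev_block, new_block["B"])  # so running them in any order gives the same result
# new_block == {"A": {"bal": 30}, "B": {"bal": 30}}
```

Because neither transaction can observe the other's in-flight writes, the two can be executed (and proven) fully in parallel.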

2.3 The Scalability Breakthrough: Conflict-Free Parallelism

These rules eliminate the core bottleneck of traditional blockchains:

  • No Write Conflicts: Since users only write to their isolated state partitions, transactions from different users within the same block cannot conflict.
  • No Read-Write Conflicts: Reading only the previous, immutable block state prevents race conditions where one transaction's read is invalidated by another's concurrent write.
  • Massively Parallel Execution & Proving: The PARTH architecture guarantees that the execution of transactions (CFCs) and the generation of their initial proofs (UPS) for different users are independent processes that can run entirely in parallel without requiring locks or complex synchronization.

3. End-to-End ZK Proof System: Securing Parallelism

Psy employs a multi-layered, recursive ZK proof system to cryptographically guarantee the integrity of every state transition, even those occurring concurrently.

3.1 Contract Function Circuits (CFCs) & Dapen (DPN)

  • Role: Encapsulate the verifiable logic of smart contracts. They define the allowed state transitions within a user's CSTATE for that contract.
  • Technology: Developed using high-level languages (TypeScript/JavaScript) and compiled into ZK circuits (DapenContractFunctionCircuit) via the Dapen (DPN) toolchain.
  • Execution: Runs locally during a User Proving Session (UPS). A ZK proof is generated for each CFC execution, attesting that the logic was followed correctly given the inputs and starting state provided in its context.

3.2 User Proving Session (UPS)

  • Role: Enables users (or their delegates) to locally process a sequence of their transactions for a block, generating a single, compact "End Cap" proof that summarizes and validates their entire session activity.
  • Process:
    1. Initialization (UPSStartSessionCircuit): Securely anchors the session's starting state to the last globally finalized CHKP root and the user's corresponding ULEAF.
    2. Transaction Steps (UPSCFCStandardTransactionCircuit, etc.): For each transaction:
      • Verifies the ZK proof of the locally executed CFC (from step 3.1).
      • Verifies the ZK proof of the previous UPS step (ensuring recursive integrity).
      • Proves that the UPS state delta (changes to UCON root, debt trees, tx count/stack) correctly reflects the verified CFC's outcomes.
    3. Finalization (UPSStandardEndCapCircuit): Verifies the last UPS step, verifies the user's ZK signature authorizing the session, checks final conditions (e.g., all debts cleared), and outputs the net state change (start_user_leaf_hash -> end_user_leaf_hash) and aggregated statistics (GUTAStats).
  • Scalability Impact: Drastically reduces the on-chain verification burden. Instead of verifying every transaction individually, the network only needs to verify one aggregate End Cap proof per active user per block.

3.3 Network Aggregation (Realms, Coordinators, GUTA)

The network takes potentially millions of End Cap proofs and efficiently aggregates them in parallel using a hierarchy of specialized ZK circuits.

  • Realms:

    • Role: Distributed ingestion and initial aggregation points for user state changes (GUSR), sharded by user ID ranges.
    • Function:
      1. Receive End Cap proofs from users within their range.
      2. Verify these proofs using circuits like GUTAVerifySingleEndCapCircuit (for individual proofs) or GUTAVerifyTwoEndCapCircuit (for pairs). These circuits use the VerifyEndCapProofGadget internally to check the End Cap proof validity, fingerprint, and historical checkpoint link, outputting a standardized GlobalUserTreeAggregatorHeader.
      3. Recursively aggregate the resulting GUTA headers using circuits like GUTAVerifyTwoGUTACircuit, GUTAVerifyLeftGUTARightEndCapCircuit, or GUTAVerifyLeftEndCapRightGUTACircuit. These employ VerifyGUTAProofGadget to check sub-proofs and TwoNCAStateTransitionGadget (or line proof logic) to combine state transitions.
      4. If necessary, use GUTAVerifyGUTAToCapCircuit (which uses VerifyGUTAProofToLineGadget) to bring a proof up to the Realm's root level.
      5. Handle periods of inactivity using GUTANoChangeCircuit.
      6. Submit the final aggregated GUTA proof for their user segment (representing the net change at the Realm's root node in GUSR) to the Coordinator layer.
    • Scalability Impact: Distributes the initial proof verification and GUSR aggregation load.
  • Coordinators:

    • Role: Higher-level aggregators combining proofs across Realms and across different global state trees.
    • Function:
      1. Verify aggregated GUTA proofs from multiple Realms (using GUTAVerifyTwoGUTACircuit or similar, employing VerifyGUTAProofGadget).
      2. Verify proofs for global operations:
        • User Registrations (BatchAppendUserRegistrationTreeCircuit).
        • Contract Deployments (BatchDeployContractsCircuit).
      3. Combine these different types of state transitions using aggregation circuits like VerifyAggUserRegistartionDeployContractsGUTACircuit, ensuring consistency relative to the same checkpoint.
      4. Prepare the final inputs for the block proof circuit (PsyCheckpointStateTransitionCircuit).
    • Scalability Impact: Manages the convergence of parallel proof streams from different state components and realms.
  • Proving Workers:

    • Role: The distributed computational workforce of the network. They execute the intensive ZK proof generation tasks requested by Realms and Coordinators.
    • Function: Stateless workers that fetch proving jobs (circuit type, input witness ID, dependency proof IDs) from a queue, retrieve necessary data from the Node State Store, generate the required ZK proof, and write the result back to the store.
    • Scalability Impact: The core engine of computational scalability. The network's proving capacity can be scaled horizontally simply by adding more (potentially permissionless) Proving Workers.
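The recursive pairing of headers can be pictured as a binary-tree fold over simplified GUTA headers. The field names here are abbreviated; real headers also carry the circuit whitelist root and a GUSR state transition:

```python
def combine(a: dict, b: dict) -> dict:
    # Pairwise aggregation, as in GUTAVerifyTwoGUTACircuit: both children must
    # be anchored to the same checkpoint; stats are summed additively.
    assert a["checkpoint_root"] == b["checkpoint_root"]
    return {"checkpoint_root": a["checkpoint_root"],
            "fees": a["fees"] + b["fees"],
            "txs": a["txs"] + b["txs"]}

def aggregate(headers: list) -> dict:
    # Binary-tree fold: each level's pairs are independent, hence parallelizable.
    while len(headers) > 1:
        headers = [combine(headers[i], headers[i + 1]) if i + 1 < len(headers)
                   else headers[i]
                   for i in range(0, len(headers), 2)]
    return headers[0]
```

An odd element at any level simply carries up unchanged, mirroring the left/right "cap" circuits that lift a lone proof toward the root.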

3.4 Final Block Proof Generation

  • Role: Creates the single, authoritative ZK proof for the entire block.
  • Circuit: PsyCheckpointStateTransitionCircuit.
  • Function: Takes the final aggregated state transition proofs from the Coordinator layer (representing net changes to GUSR, GCON, URT, etc.). Verifies these proofs. Computes the new global state roots and combines them with aggregated block statistics (PsyCheckpointLeafStats) to form the new PsyCheckpointLeaf. Proves the correct update of the CHKP tree by appending this new leaf hash. Critically, it verifies that the entire process correctly transitioned from the state defined by the previous block's finalized CHKP root (provided as a public input).
  • Output: A highly succinct ZK proof whose public inputs are the previous CHKP root and the new CHKP root.

4. Node State Architecture: Redis & KVQ Backend

Supporting this massive parallelism requires a high-performance, shared backend infrastructure.

  • Core Technology: Psy leverages Redis, an in-memory key-value store known for its speed, as the primary backend. Redis Cluster allows horizontal scaling of storage and throughput.
  • Abstraction Layer (KVQ): A custom Rust library providing traits and adapters (KVQSerializable, KVQStandardAdapter, model types like KVQFixedConfigMerkleTreeModel) for structured, type-safe interaction with Redis. It simplifies key generation, serialization, and potentially caching.
  • Logical Components:
    • Proof Store (ProofStoreFred, implements QProofStore... traits): Stores ZK proofs and input witnesses, keyed by QProvingJobDataID. Uses Redis Hashes (HSET, HGET) and potentially atomic counters (HINCRBY) for managing job dependencies.
    • State Store (Models implementing PsyCoordinatorStore..., PsyRealmStore... traits): Stores the canonical blockchain state, primarily Merkle tree nodes (KVQMerkleNodeKey) and leaf data (UserLeaf, ContractLeaf, etc.). Uses standard Redis keys managed via KVQ models.
    • Queues (CheckpointDrainQueue, CheckpointHistoryQueue, WorkerEventQueue traits): Implement messaging between components. Uses Redis Lists (LPUSH, LPOP/BLPOP, LRANGE) for job queues and potentially Pub/Sub or simple keys/sorted sets for history tracking and notifications. ProofStoreFred often implements these queue interaction traits.
    • Local Caching (PsyCmdStoreWithCache, used within PsyLocalProvingSessionStore): Provides an in-memory cache layer for frequently accessed state data (e.g., contract definitions, user leaves from the previous block) during local UPS execution or within Realm/Coordinator nodes, reducing load on the central Redis cluster.
  • Scalability:
    • Redis Performance: Provides low-latency access required for coordinating many workers.
    • Horizontal Scaling: Redis clusters can scale to handle increased load.
    • Concurrency: Redis handles concurrent connections from numerous DPN nodes.
    • Decoupling: Proving computation (Workers) is separated from state storage and coordination (Redis + Control Nodes), allowing independent scaling.
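The job-dependency idiom behind the HINCRBY usage can be mimicked with an in-memory stub (the class and key names are illustrative, not the real KVQ API): a job's dependency counter is decremented as each prerequisite proof lands, and the job is queued the moment it hits zero.

```python
from collections import defaultdict, deque

class StoreStub:
    """In-memory stand-in for the Redis commands named above (HSET/HGET, HINCRBY, LPUSH/RPOP)."""
    def __init__(self):
        self.hashes = defaultdict(dict)
        self.lists = defaultdict(deque)
    def hset(self, key, field, value): self.hashes[key][field] = value
    def hget(self, key, field): return self.hashes[key].get(field)
    def hincrby(self, key, field, n=1):
        self.hashes[key][field] = int(self.hashes[key].get(field, 0)) + n
        return self.hashes[key][field]
    def lpush(self, key, value): self.lists[key].appendleft(value)
    def rpop(self, key): return self.lists[key].pop() if self.lists[key] else None

# A proving job with two dependency proofs becomes ready only after both complete.
store = StoreStub()
store.hset("job:42", "deps", 2)
for done_dep in ("proof:a", "proof:b"):
    if store.hincrby("job:42", "deps", -1) == 0:
        store.lpush("queue:ready", "job:42")
```

In the real system HINCRBY is atomic, so many workers can report completed dependencies concurrently without racing on the counter.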
graph TB
    %% External Network
    Internet[🌐 Internet] --> IGW[Internet Gateway]
    
    %% VPC Container
    subgraph VPC["🏢 VPC (10.0.0.0/16)"]
        IGW --> ALB[🔀 Application Load Balancer<br/>Ports: 8545, 8546, 8547]
        
        %% Availability Zone A
        subgraph AZ1["🏛️ Availability Zone A"]
            subgraph PubSub1["🌍 Public Subnet 1<br/>(10.0.0.0/24)"]
                ALB
            end
            
            subgraph PrivSub1["🔒 Private Subnet 1<br/>(10.0.2.0/24)"]
                ECS_Coord[🐳 ECS Task<br/>Coordinator]
                ECS_R0[🐳 ECS Task<br/>Realm 0]
                ECS_R1[🐳 ECS Task<br/>Realm 1]
                
                Redis_Coord[🔴 Redis<br/>Coordinator]
                Redis_R0[🔴 Redis<br/>Realm 0]
                Redis_R1[🔴 Redis<br/>Realm 1]
                
                EFS_Mount1[📁 EFS Mount Target 1]
            end
        end
        
        %% Availability Zone B
        subgraph AZ2["🏛️ Availability Zone B"]
            subgraph PubSub2["🌍 Public Subnet 2<br/>(10.0.1.0/24)"]
                ALB_HA[🔀 ALB HA Deployment]
            end
            
            subgraph PrivSub2["🔒 Private Subnet 2<br/>(10.0.3.0/24)"]
                EFS_Mount2[📁 EFS Mount Target 2]
            end
        end
        
        ALB -.-> ALB_HA
    end
    
    %% External AWS Services
    subgraph AWS_Services["☁️ AWS Services"]
        ECR[📦 ECR Repository<br/>psy-protocol]
        S3[🪣 S3 Bucket<br/>Artifacts Storage]
        CloudWatch[📊 CloudWatch Logs<br/>/ecs/psy-protocol]
        EFS[💾 EFS FileSystem<br/>LMDBX Storage]
        ServiceDiscovery[🔍 Service Discovery<br/>psy.local]
    end
    
    %% ECS Cluster
    subgraph ECS_Cluster["🚢 ECS Cluster psy-cluster"]
        ECS_Coord
        ECS_R0
        ECS_R1
    end
    
    %% Connection Relationships - ALB to ECS
    ALB -->|Port 8545| ECS_Coord
    ALB -->|Port 8546| ECS_R0
    ALB -->|Port 8547| ECS_R1
    
    %% ECS to Redis Connections
    ECS_Coord <--> Redis_Coord
    ECS_R0 <--> Redis_R0
    ECS_R1 <--> Redis_R1
    
    %% EFS Connections
    EFS --> EFS_Mount1
    EFS --> EFS_Mount2
    ECS_Coord <--> EFS_Mount1
    ECS_R0 <--> EFS_Mount1
    ECS_R1 <--> EFS_Mount1
    
    %% ECS to External Services
    ECS_Coord <--> S3
    ECS_R0 <--> S3
    ECS_R1 <--> S3
    
    ECR --> ECS_Coord
    ECR --> ECS_R0
    ECR --> ECS_R1
    
    ECS_Coord --> CloudWatch
    ECS_R0 --> CloudWatch
    ECS_R1 --> CloudWatch
    
    ServiceDiscovery <--> ECS_Coord
    ServiceDiscovery <--> ECS_R0
    ServiceDiscovery <--> ECS_R1
    
    %% Style Definitions
    classDef vpc fill:#e1f5fe,stroke:#01579b,stroke-width:3px
    classDef publicSubnet fill:#e8f5e8,stroke:#2e7d32,stroke-width:2px
    classDef privateSubnet fill:#ffebee,stroke:#c62828,stroke-width:2px
    classDef ecs fill:#e3f2fd,stroke:#1565c0,stroke-width:2px
    classDef redis fill:#ffcdd2,stroke:#d32f2f,stroke-width:2px
    classDef storage fill:#f3e5f5,stroke:#7b1fa2,stroke-width:2px
    classDef loadbalancer fill:#fff3e0,stroke:#ef6c00,stroke-width:2px
    classDef external fill:#f5f5f5,stroke:#424242,stroke-width:2px
    
    class VPC vpc
    class PubSub1,PubSub2 publicSubnet
    class PrivSub1,PrivSub2 privateSubnet
    class ECS_Coord,ECS_R0,ECS_R1 ecs
    class Redis_Coord,Redis_R0,Redis_R1 redis
    class EFS,EFS_Mount1,EFS_Mount2,S3 storage
    class ALB,ALB_HA loadbalancer
    class ECR,CloudWatch,ServiceDiscovery external

5. Security Guarantees

Psy's security rests on multiple pillars:

  1. ZK Proof Soundness: Mathematical guarantee that invalid computations or state transitions cannot produce valid proofs.
  2. Circuit Whitelisting: State trees (GUSR, GCON, CFT, etc.) can only be modified by proofs generated from circuits whose fingerprints are present in designated whitelist Merkle trees. This prevents unauthorized code execution. Aggregation circuits enforce these checks recursively.
  3. Recursive Verification: Each layer of aggregation cryptographically verifies the proofs from the layer below.
  4. Checkpoint Anchoring: The final block circuit explicitly links the new state to the previous block's verified CHKP root, creating an unbroken chain of state validity.

6. Conclusion: A New Era of Blockchain Scalability

Psy's architecture is a fundamental departure from sequential blockchain designs. By leveraging the PARTH state model for conflict-free parallel execution and securing it with an end-to-end recursive ZKP system, Psy achieves true horizontal scalability. The intricate dance between local user proving (UPS/CFC), distributed network aggregation (Realms/Coordinators/GUTA), and a scalable backend (Redis/KVQ) allows the network's throughput to grow with the addition of computational resources (Proving Workers), paving the way for decentralized applications demanding high performance and robust security.


Realm & GUTA Gadgets

These gadgets are used within the circuits run by the network's Realm and Coordinator nodes for aggregating proofs.


GUTAStatsGadget

  • File: guta_stats_rs.txt
  • Purpose: Represents and aggregates key statistics during GUTA processing (fees, operations counts, slots modified).
  • Technical Function: Data structure holding targets for stats. Provides combine_with method for additive aggregation and to_hash for commitment.
  • Inputs/Witness: Targets for fees_collected, user_ops_processed, total_transactions, slots_modified.
  • Outputs/Computed: Combined stats (via combine_with), hash of stats (to_hash).
  • Constraints: combine_with uses addition constraints. to_hash uses packing/hashing.
  • Assumptions: Assumes input target values are correct.
  • Role: Tracks operational metrics through the aggregation tree.
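Outside the circuit, the combine_with semantics are plain component-wise addition. A hedged Python sketch of the same shape, with field names taken from the inputs listed above and to_hash simplified (the circuit packs fields and uses a ZK-friendly hash):

```python
import hashlib
from dataclasses import dataclass, astuple

@dataclass
class GutaStats:
    fees_collected: int = 0
    user_ops_processed: int = 0
    total_transactions: int = 0
    slots_modified: int = 0

    def combine_with(self, other: "GutaStats") -> "GutaStats":
        # Additive aggregation, mirroring the circuit's addition constraints.
        return GutaStats(*(a + b for a, b in zip(astuple(self), astuple(other))))

    def to_hash(self) -> bytes:
        # Commitment to the stats (SHA-256 stand-in for the circuit hash).
        return hashlib.sha256(repr(astuple(self)).encode()).digest()
```

Because combination is associative and commutative, stats can be merged in any aggregation order and still land on the same block totals.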

GlobalUserTreeAggregatorHeaderGadget

  • File: guta_header_rs.txt
  • Purpose: Defines the standard public input structure for all GUTA-related aggregation circuits. Encapsulates the result of an aggregation step.
  • Technical Function: Data structure holding guta_circuit_whitelist root, checkpoint_tree_root, the state_transition (SubTreeNodeStateTransitionGadget) for the GUSR tree segment covered, and aggregated stats (GUTAStatsGadget). Provides to_hash method.
  • Inputs/Witness: Component targets/gadgets.
  • Outputs/Computed: Hash of the header (to_hash).
  • Constraints: to_hash combines hashes of components.
  • Assumptions: Assumes input components are correctly formed/verified.
  • Role: Standardizes the interface between recursive GUTA circuits, ensuring consistent information propagation and verification.

VerifyEndCapProofGadget

  • File: verify_end_cap_rs.txt
  • Purpose: Verifies a user's submitted End Cap proof (output of UPS Phase 1) at the entry point of the GUTA aggregation (typically within a Realm node circuit).
  • Technical Function: Verifies the End Cap ZK proof, checks its fingerprint against the known constant, matches public inputs against witness data (result/stats), verifies the user's claimed checkpoint root against a historical checkpoint proof, and translates the result into a GlobalUserTreeAggregatorHeaderGadget.
  • Inputs/Witness:
    • end_cap_result_gadget, guta_stats: Witness for claimed outputs.
    • checkpoint_historical_merkle_proof: Witness proving user's checkpoint_tree_root_hash was valid historically.
    • verifier_data, proof_target: The End Cap proof itself and its verifier data.
    • known_end_cap_fingerprint_hash: Constant parameter.
  • Outputs/Computed: Implements ToGUTAHeader to output a GlobalUserTreeAggregatorHeaderGadget.
  • Constraints:
    • Verifies proof_target using verifier_data.
    • Computes fingerprint from verifier_data, connects to known_end_cap_fingerprint_hash.
    • Computes expected public inputs hash from end_cap_result_gadget and guta_stats, connects to proof_target.public_inputs.
    • Verifies checkpoint_historical_merkle_proof using HistoricalRootMerkleProofGadget.
    • Connects historical_proof.historical_root to end_cap_result.checkpoint_tree_root_hash.
    • Constructs output GUTA header using historical_proof.current_root as the checkpoint_tree_root, deriving the state transition from end_cap_result (leaf hashes and user ID), and using the verified guta_stats.
  • Assumptions: Assumes witness data is valid initially. Assumes known_end_cap_fingerprint_hash and input default_guta_circuit_whitelist are correct.
  • Role: Securely ingests a user's proven session result into the GUTA aggregation, validating it against global rules and historical state before converting it to the standard GUTA format.

VerifyGUTAProofGadget

  • File: verify_guta_proof_rs.txt
  • Purpose: Verifies a GUTA proof generated by a lower level in the aggregation hierarchy (e.g., verifying a Realm's proof at the Coordinator level, or verifying sub-realm proofs within a Realm).
  • Technical Function: Verifies the input GUTA ZK proof, checks its fingerprint against the GUTA circuit whitelist, and ensures its public inputs match the claimed GUTA header witness.
  • Inputs/Witness:
    • guta_proof_header_gadget: Witness for the claimed header of the proof being verified.
    • guta_whitelist_merkle_proof: Witness proving the sub-proof's circuit fingerprint is in the GUTA whitelist.
    • verifier_data, proof_target: The GUTA proof and its verifier data.
  • Outputs/Computed: The verified guta_proof_header_gadget.
  • Constraints:
    • Verifies proof_target using verifier_data.
    • Computes fingerprint from verifier_data.
    • Verifies guta_whitelist_merkle_proof.
    • Connects guta_proof_header.guta_circuit_whitelist to whitelist_proof.root.
    • Computes expected public inputs hash from guta_proof_header, connects to proof_target.public_inputs.
    • Connects whitelist_proof.value to computed fingerprint.
  • Assumptions: Assumes witness data is valid initially.
  • Role: The core recursive verification step for GUTA aggregation circuits. Ensures that only valid proofs generated by allowed GUTA circuits are incorporated into higher levels of aggregation.
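The whitelist check at the heart of this gadget is a standard Merkle membership proof over circuit fingerprints. A minimal sketch (again using `sha256` as a stand-in hash; the tree layout and names are illustrative, not the actual implementation):

```python
import hashlib

def h2(left: bytes, right: bytes) -> bytes:
    # Two-to-one hash standing in for the circuit's ZK-friendly hash.
    return hashlib.sha256(left + right).digest()

def verify_membership(root: bytes, leaf: bytes, index: int, siblings: list) -> bool:
    # Recompute the root from the leaf: bit i of `index` says whether the
    # running node is a right (1) or left (0) child at level i.
    node = leaf
    for i, sib in enumerate(siblings):
        node = h2(sib, node) if (index >> i) & 1 else h2(node, sib)
    return node == root

# Tiny height-2 whitelist of circuit fingerprints:
fps = [hashlib.sha256(n).digest() for n in (b"agg", b"endcap", b"register", b"nochange")]
l01, l23 = h2(fps[0], fps[1]), h2(fps[2], fps[3])
whitelist_root = h2(l01, l23)
```

In-circuit, the "leaf" is the fingerprint computed from the sub-proof's `verifier_data`, so a proof from a non-whitelisted circuit cannot satisfy the constraints.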

TwoNCAStateTransitionGadget

  • File: two_nca_state_transition_rs.txt
  • Purpose: Combines the GUSR state transitions from two independent GUTA proofs (A and B) that modify different branches of the tree, computing the resulting state transition at their Nearest Common Ancestor (NCA).
  • Technical Function: Uses UpdateNearestCommonAncestorProofOptGadget to verify the NCA proof witness against the state transitions provided by the input GUTA headers (a_header, b_header). Combines statistics and outputs a new GUTA header for the NCA node.
  • Inputs/Witness:
    • a_header, b_header: Verified GUTA headers from the two child proofs.
    • UpdateNearestCommonAncestorProof: Witness containing NCA proof data (partial or full).
  • Outputs/Computed: new_guta_header representing the transition at the NCA.
  • Constraints:
    • Connects a_header.checkpoint_tree_root == b_header.checkpoint_tree_root.
    • Connects a_header.guta_circuit_whitelist == b_header.guta_circuit_whitelist.
    • Connects a_header.state_transition fields (old/new value, index, level) to update_nca_proof_gadget.child_a fields.
    • Connects b_header.state_transition fields to update_nca_proof_gadget.child_b fields.
    • Computes new_stats = a_header.stats.combine_with(b_header.stats).
    • Constructs new_guta_header using whitelist/checkpoint from children, the NCA state transition details from update_nca_proof_gadget, and new_stats.
  • Assumptions: Assumes input headers a_header and b_header have already been verified. Assumes the NCA proof witness is valid initially.
  • Role: Enables efficient parallel aggregation by merging results from independent subtrees using cryptographic proofs of their combined effect at the parent node.
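The merge step can be sketched as follows (an out-of-circuit analogue under the simplifying assumption that child A updated the NCA's left subtree and child B its right subtree; the real gadget handles arbitrary levels and partial proofs, and all names are illustrative):

```python
import hashlib
from dataclasses import dataclass

def h2(left: bytes, right: bytes) -> bytes:
    # Stand-in for the circuit's two-to-one ZK-friendly hash.
    return hashlib.sha256(left + right).digest()

@dataclass
class Stats:
    tx_count: int
    def combine_with(self, other: "Stats") -> "Stats":
        # Statistics are additive across independent subtrees.
        return Stats(self.tx_count + other.tx_count)

def merge_at_nca(a_old, a_new, b_old, b_new, a_stats, b_stats):
    # The parent's old value hashes the children's old values, and its new
    # value hashes the children's new values: both transitions are applied.
    nca_old = h2(a_old, b_old)
    nca_new = h2(a_new, b_new)
    return nca_old, nca_new, a_stats.combine_with(b_stats)
```

Because the two children touch disjoint subtrees, their transitions commute and can be merged in either order, which is what makes the aggregation parallelizable.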

GUTAHeaderLineProofGadget

  • File: guta_line_rs.txt
  • Purpose: Propagates a GUTA state transition upwards along a direct path in the GUSR tree (when there's only one child updating that path segment).
  • Technical Function: Uses SubTreeNodeTopLineGadget to recompute the Merkle root hash from the child's transition level up to a specified higher level (e.g., Realm root or global root), using sibling hashes provided as witness.
  • Inputs/Witness:
    • child_proof_header: The verified GUTA header from the lower level.
    • siblings: Witness array of Merkle sibling hashes for the path.
    • Height parameters.
  • Outputs/Computed: new_guta_header with the state transition updated to reflect the higher level.
  • Constraints: Relies on SubTreeNodeTopLineGadget's internal Merkle hashing constraints.
  • Assumptions: Assumes child_proof_header is verified. Assumes siblings witness is correct.
  • Role: Efficiently moves a verified state transition up the tree hierarchy when merging (NCA) is not required.
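The key property of a line proof is that one set of sibling hashes lifts both the old and the new value, since only the updated branch changes. A minimal sketch (illustrative names, `sha256` standing in for the circuit hash):

```python
import hashlib

def h2(left: bytes, right: bytes) -> bytes:
    return hashlib.sha256(left + right).digest()

def lift_transition(old_value, new_value, index, siblings):
    # Climb both the old and the new value along the SAME sibling path:
    # siblings are untouched by the update, so they serve both climbs.
    def climb(node):
        for i, sib in enumerate(siblings):
            node = h2(sib, node) if (index >> i) & 1 else h2(node, sib)
        return node
    return climb(old_value), climb(new_value)
```

The pair returned is the state transition re-expressed at the higher level, ready to be placed in a new GUTA header.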

VerifyGUTAProofToLineGadget

  • File: verify_guta_proof_to_line_rs.txt
  • Purpose: Combines verifying a lower-level GUTA proof with immediately propagating its state transition upwards using a line proof.
  • Technical Function: Orchestrates VerifyGUTAProofGadget followed by GUTAHeaderLineProofGadget.
  • Inputs/Witness: Combines witnesses for both sub-gadgets (proof, header, whitelist proof, siblings).
  • Outputs/Computed: new_guta_header at the top of the line.
  • Constraints: Instantiates and connects the two sub-gadgets.
  • Assumptions: Relies on sub-gadget assumptions.
  • Role: A common pattern gadget simplifying the verification and upward propagation of a single GUTA proof branch.

(The Coordinator/GUTA gadgets related to user registration — GUTARegisterUserCoreGadget, GUTARegisterUserFullGadget, GUTARegisterUsersGadget, GUTAOnlyRegisterUsersGadget, and GUTARegisterUsersBatchGadget — follow the same structure as the gadgets above, focusing on GUSR tree updates and User Registration Tree checks.)


GUTANoChangeGadget

  • File: guta_no_change_gadget_rs.txt
  • Purpose: Creates a GUTA header signifying that the GUSR tree state did not change for this block/subtree, while still potentially updating the referenced checkpoint_tree_root.
  • Technical Function: Verifies a checkpoint proof to get the current checkpoint_tree_root and the corresponding user_tree_root from the checkpoint leaf. Constructs a GUTA header with a "no-op" state transition (old=new=user_tree_root at level 0) and zero stats.
  • Inputs/Witness:
    • guta_circuit_whitelist: Input constant/parameter.
    • checkpoint_tree_proof: Witness proving checkpoint_leaf existence.
    • checkpoint_leaf_gadget: Witness for the PsyCheckpointLeafCompactWithStateRoots.
  • Outputs/Computed: new_guta_header (indicating no GUSR change).
  • Constraints: Verifies checkpoint proof (MerkleProofGadget). Verifies consistency between proof value and leaf hash. Constructs header with no-op transition and zero stats.
  • Assumptions: Assumes witness proof/leaf data is valid initially. Assumes input guta_circuit_whitelist is correct.
  • Role: Allows the GUTA aggregation structure to remain consistent and synchronized with the main Checkpoint Tree advancement even during periods where no user state relevant to GUTA was modified.
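The "no-op" header this gadget constructs can be sketched as a plain data-building step (field names are illustrative, not the actual Rust identifiers, and the checkpoint Merkle verification that precedes it is elided):

```python
def no_change_header(whitelist_root: bytes, checkpoint_tree_root: bytes,
                     user_tree_root: bytes) -> dict:
    # old == new: the GUSR root is unchanged, but the header is anchored to
    # the (possibly advanced) checkpoint root proven from the leaf.
    return {
        "guta_circuit_whitelist": whitelist_root,
        "checkpoint_tree_root": checkpoint_tree_root,
        "state_transition": {
            "old_node_value": user_tree_root,
            "new_node_value": user_tree_root,
            "index": 0,
            "level": 0,
        },
        "stats": {"tx_count": 0},
    }
```

Such a header composes cleanly with the NCA and line gadgets above, which is why idle subtrees do not break the aggregation structure.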

Psy Protocol Wiki: Achieving Scalability with PARTH and ZK Proofs

1. Introduction: The Scalability Challenge and Psy's Solution

Traditional blockchains often face a serial execution bottleneck: transactions are processed one after another within a single state-machine context. Adding more validator nodes does not increase throughput (transactions per second, TPS), because every node works through the same sequential task list. Attempts at parallel execution often introduce complexity, race conditions, and potential state inconsistencies.

Psy tackles this fundamental limitation through two core innovations:

  1. PARTH (Parallelizable Account-based Recursive Transaction History) Architecture: A novel way of organizing blockchain state that inherently allows for parallel processing of transactions from different users within the same block.
  2. End-to-End Zero-Knowledge Proofs (ZKPs): A sophisticated system of ZK circuits that rigorously verify every state transition, ensuring the security and consistency of the parallel execution enabled by PARTH.

This wiki page details the journey of a transaction from user initiation to final block inclusion, focusing on the ZK circuits involved, the assumptions they make, what they prove, and how these assumptions are systematically verified and discharged throughout the process.

2. The PARTH Architecture: Foundation for Parallelism

PARTH reorganizes blockchain state into a hierarchy of Merkle trees, enabling fine-grained state access and modification control:

  • Per-User, Per-Contract State (CSTATE): Each user maintains a separate Merkle tree (CSTATE) for their specific state within each smart contract they interact with.
  • User Contract Tree (UCON): Aggregates all CSTATE roots for a single user, representing their state across all contracts.
  • Global User Tree (GUSR): Aggregates all UCON roots, representing the state of all users.
  • Global Contract Tree (GCON): Represents the global state related to contract code definitions and metadata.
  • User Registration Tree: Tracks registered users and their public keys (relevant for GUTARegisterUserCircuit).
  • Checkpoint Tree (CHKP): The top-level tree. Its root hash serves as a cryptographic snapshot (checkpoint) of the entire verifiable blockchain state at a specific block height.
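The commitment chain from a single user's contract state up to the checkpoint can be sketched bottom-up (a deliberately collapsed model: each intermediate tree is reduced to one hash, `sha256` stands in for the protocol's ZK-friendly hash, and the inputs are placeholders):

```python
import hashlib

def H(*parts: bytes) -> bytes:
    # Stand-in for the protocol's ZK-friendly hash.
    return hashlib.sha256(b"".join(parts)).digest()

# Bottom-up commitment chain for one user and one contract:
cstate_root = H(b"user A's state inside contract 1")       # CSTATE
ucon_root   = H(cstate_root)                               # UCON: A's CSTATE roots
gusr_root   = H(ucon_root, H(b"other users' UCON roots"))  # GUSR: all UCON roots
gcon_root   = H(b"contract code + metadata tree")          # GCON
chkp_leaf   = H(gusr_root, gcon_root)                      # CHKP leaf for this block
```

Changing any leaf value changes every hash above it up to the CHKP root, which is what makes the checkpoint a binding snapshot of the entire state.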

Key PARTH Rules Enabling Scalability:

  1. Write Locally: A transaction initiated by User A can only modify (write to) the state within User A's own trees (their various CSTATEs and subsequently their UCON root).
  2. Read Globally (Previous State): A transaction can read state from any tree (User A's, User B's, GCON, etc.), but it reads the state as it was finalized at the end of the previous block (anchored by the previous block's CHKP root).

Because write operations are isolated and reads access immutable past state, transactions from different users within the same block operate independently and cannot conflict. This architectural design is the cornerstone of Psy's horizontal scalability.
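The conflict-freedom argument can be demonstrated with a toy state machine (a sketch, not the actual execution model: users are dict keys and transactions are write functions over the previous block's snapshot):

```python
from copy import deepcopy

def apply_block(prev_state: dict, txs: list) -> dict:
    # PARTH rules: every tx reads the PREVIOUS block's snapshot and writes
    # only its own user's slot, so per-user updates never conflict.
    snapshot = deepcopy(prev_state)           # immutable reads
    new_state = deepcopy(prev_state)
    for user, write_fn in txs:
        new_state[user] = write_fn(snapshot)  # isolated write
    return new_state

prev = {"alice": 10, "bob": 20}
txs = [("alice", lambda s: s["alice"] + s["bob"]),  # reads bob's OLD balance
       ("bob",   lambda s: s["bob"] * 2)]
```

Because no transaction observes another's writes within the block, applying the transactions in any order (or fully in parallel) yields the same new state.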

3. The Big Picture: End-to-End ZK Proof Flow

The process of validating transactions and building a block involves three main phases, each relying on specific ZK circuits:

  1. Phase 1: User Proving Session (UPS) - Local Execution & Proving: The user (or a delegated prover) executes their transactions locally and generates a chain of recursive ZK proofs culminating in a single "End Cap" proof for their activity within the block.
  2. Phase 2: Global User Tree Aggregation (GUTA) - Parallel Network Execution: The Psy network's Decentralized Proving Network (DPN) takes End Cap proofs (and other GUTA-related proofs like registrations) from many users and aggregates them in parallel using specialized GUTA circuits.
  3. Phase 3: Final Block Proof Generation: A final aggregation step combines the top-level GUTA proof (representing all user state changes) with proofs for other global state changes (like contract deployments) into a single block proof anchored to the previous block's state.

4. Phase 1: User Proving Session (UPS) Circuits

This phase occurs locally, building a user-specific proof chain.

4.1. UPSStartSessionCircuit

  • Purpose: Initializes the proving session for a user, establishing a secure starting point based on the last globally finalized block state.
  • Core Gadget: UPSStartStepGadget
  • Input Data: UPSStartStepInput (contains the target starting UserProvingSessionHeader, the corresponding PsyCheckpointLeaf and PsyCheckpointGlobalStateRoots from the last block, and Merkle proofs (checkpoint_tree_proof, user_tree_proof) linking them together).
  • What it Proves:
    • Consistency of Initial State: The provided UserProvingSessionHeader.session_start_context is consistent with the provided PsyCheckpointLeaf, PsyCheckpointGlobalStateRoots, and the user's leaf (start_session_user_leaf) within the user_tree_root (part of PsyCheckpointGlobalStateRoots). It verifies that the user leaf, global roots, and checkpoint leaf all correctly correspond to the provided checkpoint_tree_root via the Merkle proofs.
    • Correct Initialization: The UserProvingSessionHeader.current_state is correctly initialized based on the session_start_context (e.g., user_leaf.last_checkpoint_id is updated, deferred_tx_debt_tree_root and inline_tx_debt_tree_root are empty hashes, tx_count is zero, tx_hash_stack is an empty hash).
    • Header Integrity: The hash of the entire ups_header is correctly computed.
    • Proof Tree Anchor: The final public output hash combines the ups_header hash with the empty_ups_proof_tree_root (a known constant representing the start of this session's proof tree), using compute_tree_aware_proof_public_inputs.
  • Assumptions Made:
    1. The witness data (UPSStartStepInput) provided by the user (fetched from querying the last finalized block state) is accurate at the start of verification.
    2. The globally known constant empty_ups_proof_tree_root (hash of an empty Merkle tree of height UPS_SESSION_PROOF_TREE_HEIGHT) is correct.
    3. The previous block's checkpoint_tree_root (implicitly contained within the input witness) is valid (this is the key assumption passed into the entire UPS).
  • How Assumptions are Discharged:
    • Assumption 1 is checked by the circuit's constraints, ensuring internal consistency between the header, leaf data, global roots, and Merkle proofs. If the witness data is inconsistent, proof generation fails.
    • Assumption 2 is a system parameter.
    • Assumption 3 is not discharged here; it's carried forward implicitly by using the state derived from that root as the starting point.
  • Contribution to Horizontal Scalability: Provides a secure, independent starting point for each user's session based on the common, finalized global state, allowing many users to start their sessions in parallel.
  • High-Level Functionality: Securely begins a user's batch of transactions for the current block context.
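The "Correct Initialization" rules above can be sketched as a small constructor (field names are illustrative stand-ins for the actual Rust identifiers, and `EMPTY_TREE_ROOT` is a placeholder for the real empty-tree root constants):

```python
EMPTY_TREE_ROOT = b"\x00" * 32  # placeholder for the empty-tree root constants

def init_session_state(start_context: dict) -> dict:
    # Initialization rules proven by UPSStartSessionCircuit: stamp the
    # checkpoint id into the user leaf and zero out all per-session state.
    user_leaf = dict(start_context["user_leaf"])
    user_leaf["last_checkpoint_id"] = start_context["checkpoint_id"]
    return {
        "user_leaf": user_leaf,
        "deferred_tx_debt_tree_root": EMPTY_TREE_ROOT,
        "inline_tx_debt_tree_root": EMPTY_TREE_ROOT,
        "tx_count": 0,
        "tx_hash_stack": EMPTY_TREE_ROOT,
    }
```

In the circuit these are constraints on witness values rather than imperative assignments, but the resulting state is the same.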

4.2. UPSCFCStandardTransactionCircuit

  • Purpose: Processes a standard Contract Function Call (CFC) transaction, extending the user's recursive proof chain. Executed potentially multiple times per session.
  • Core Gadgets: VerifyPreviousUPSStepProofInProofTreeGadget, UPSVerifyCFCStandardStepGadget
  • Input Data: UPSCFCStandardTransactionCircuitInput (contains info about the previous UPS step's proof and header (verify_previous_ups_step), and details for the current CFC step (standard_cfc_step)).
  • What it Proves:
    • Previous Step Validity: The ZK proof (previous_step_proof) provided for the previous UPS step (either the UPSStartSessionCircuit or another UPSCFC...Circuit) is valid.
    • Previous Step Whitelist: The previous step's proof was generated by a circuit whose fingerprint is present in the ups_circuit_whitelist_root specified in the previous_step_header.
    • Previous Step Linkage: The public inputs hash of the previous_step_proof correctly corresponds to the provided previous_step_header and the previous_proof_tree_root.
    • CFC Proof Validity & Inclusion: The ZK proof for the current CFC execution (verify_cfc_proof_input) is valid and is correctly included in the current current_proof_tree_root.
    • Contract Function Validity: The CFC proof corresponds to a function whose details (cfc_inclusion_proof) are correctly included in the Global Contract Tree (GCON) root referenced within the session's checkpoint context (checkpoint_state).
    • State Transition Correctness: Executing the CFC logically transitions the state represented by previous_step_header.current_state to the state represented by new_header_gadget.current_state. This includes updates to the user's user_leaf (specifically the user_contract_tree_root), potentially the deferred_tx_debt_tree_root or inline_tx_debt_tree_root, the tx_hash_stack, and incrementing the tx_count. The session_start_context and ups_step_circuit_whitelist_root remain unchanged.
    • Proof Tree Update: The output public inputs correctly combine the hash of the new_header_gadget with the current_proof_tree_root.
  • Assumptions Made:
    1. The witness data (UPSCFCStandardTransactionCircuitInput) for this specific step (including the previous step's proof/header, the CFC proof/details, state delta witnesses) is accurate initially.
    2. The current_proof_tree_root provided as input correctly represents the root of the session's proof tree after including the current CFC proof.
    3. The assumption about the original starting checkpoint_tree_root (from UPSStartSessionCircuit) is still carried forward implicitly.
  • How Assumptions are Discharged:
    • Assumption 1 is checked by verifying the previous step's proof, the CFC proof, the contract inclusion proof, and the state delta transition logic based on the witness.
    • Assumption 2 is not discharged here; it's passed implicitly to the next UPS step or the End Cap circuit, forming part of the recursive structure.
    • Assumption 3 remains implicitly carried forward.
  • Contribution to Horizontal Scalability: Allows users to process their transactions sequentially locally, independent of other users' activities within the same block. The proof chain maintains self-consistency.
  • High-Level Functionality: Securely executes and proves a single smart contract interaction within the user's session.
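The per-step state delta can be sketched as a pure function (illustrative field names; the hash-chained `tx_hash_stack` and the UCON root install are the essential parts, while the ZK proof verifications around them are elided):

```python
import hashlib

def h(data: bytes) -> bytes:
    # Stand-in for the circuit's ZK-friendly hash.
    return hashlib.sha256(data).digest()

def apply_cfc_step(state: dict, tx_hash: bytes, new_ucon_root: bytes) -> dict:
    # Per-step delta: push the tx hash onto the hash-chain "stack", bump
    # the count, and install the updated user contract tree root.
    new = dict(state)
    new["tx_hash_stack"] = h(state["tx_hash_stack"] + tx_hash)
    new["tx_count"] = state["tx_count"] + 1
    new["user_leaf"] = {**state["user_leaf"],
                        "user_contract_tree_root": new_ucon_root}
    return new
```

The hash-chained stack means the final session state commits to the exact ordered list of transactions, which the End Cap later binds into the signed payload.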

4.3. UPSCFCDeferredTransactionCircuit

  • Purpose: Processes a transaction that first settles a deferred debt item and then executes the main CFC logic.
  • Core Gadgets: VerifyPreviousUPSStepProofInProofTreeGadget, UPSVerifyPopDeferredTxStepGadget
  • Input Data: UPSCFCDeferredTransactionCircuitInput
  • What it Proves: Everything proven by UPSCFCStandardTransactionCircuit, plus:
    • Debt Removal: A specific deferred transaction was correctly removed from the deferred_tx_debt_tree (verified via ups_pop_deferred_tx_proof, a DeltaMerkleProofCore). The state transition reflects this removal before the main CFC logic is applied.
    • Debt-CFC Link: The data associated with the removed debt item matches the parameters required by the subsequent CFC execution.
  • Assumptions Made: Same as UPSCFCStandardTransactionCircuit, plus assumes the witness for the DeltaMerkleProofCore (debt removal) is accurate initially.
  • How Assumptions are Discharged: Same as standard, plus verifies the debt removal proof and its consistency with the CFC witness data. The starting state assumption and proof tree root assumption are carried forward.
  • Contribution to Horizontal Scalability: Local serial execution, same as standard.
  • High-Level Functionality: Enables settlement of asynchronous/deferred transaction calls within the user's proving flow.

4.4. UPSStandardEndCapCircuit

  • Purpose: Finalizes the user's entire proving session for the block, verifying the signature and packaging the net result.
  • Core Gadgets: UPSEndCapFromProofTreeGadget (which uses VerifyPreviousUPSStepProofInProofTreeGadget, AttestProofInTreeGadget, UPSEndCapCoreGadget, PsyUserProvingSessionSignatureDataCompactGadget)
  • Input Data: UPSEndCapFromProofTreeGadgetInput (contains info about the last UPS step, the ZK signature proof (verify_zk_signature_proof_input), signature parameters (user_public_key_param, nonce), and slots_modified).
  • What it Proves:
    • Last Step Validity: The proof for the last UPS transaction step is valid and used an allowed circuit (verify_previous_ups_step_gadget).
    • Signature Proof Validity & Inclusion: The ZK proof for the user's signature (verify_zk_signature_proof_gadget) is valid and is correctly included in the final UPS proof tree root (current_proof_tree_root).
    • Signature Authentication: The public key (user_public_key_param) used to verify the signature matches the public key stored in the previous_step_header.current_state.user_leaf.
    • Signature Payload Integrity: The ZK signature proof attests to a specific payload hash. This circuit proves that this payload hash correctly corresponds to a PsyUserProvingSessionSignatureDataCompact structure containing:
      • start_user_leaf_hash: Matches the start_session_user_leaf_hash from the previous_step_header.session_start_context.
      • end_user_leaf_hash: Matches the hash of the previous_step_header.current_state.user_leaf.
      • checkpoint_leaf_hash: Matches the checkpoint_leaf_hash from the previous_step_header.session_start_context.
      • tx_stack_hash: Matches the tx_hash_stack from the previous_step_header.current_state.
      • tx_count: Matches the tx_count from the previous_step_header.current_state.
    • Nonce Validity: The nonce used in the signature matches the nonce in the previous_step_header.current_state.user_leaf. (The state delta within the End Cap implicitly increments the nonce in the output UPSEndCapResultCompact).
    • Session Completion: The deferred_tx_debt_tree_root and inline_tx_debt_tree_root in the previous_step_header.current_state are both empty hashes (verified by end_cap_core_gadget.enforce_signature_constraints).
    • Checkpoint Consistency: The last_checkpoint_id in the previous_step_header.current_state.user_leaf matches the checkpoint_id from the session_start_context.
    • Final Output Structure: The public inputs of the End Cap proof hash a UPSEndCapResultCompact structure containing the start_user_leaf_hash, end_user_leaf_hash, checkpoint_tree_root_hash (all from the final previous_step_header), and the user_id.
  • Assumptions Made:
    1. The witness data (UPSEndCapFromProofTreeGadgetInput) provided is accurate initially.
    2. System constants like network_magic, DEFERRED_TRANSACTION_TREE_HEIGHT, INLINE_TRANSACTION_TREE_HEIGHT (for deriving empty roots) are correct.
    3. The assumption about the validity of the starting checkpoint_tree_root (from UPSStartSessionCircuit) is still carried forward implicitly into the UPSEndCapResultCompact output.
  • How Assumptions are Discharged:
    • Assumption 1 is checked by verifying the last step proof, the signature proof, and ensuring consistency between the signature payload data and the final UPS header state.
    • Assumption 2 relies on correct system setup.
    • Assumption 3 is not discharged. The End Cap proof essentially certifies: "Assuming the starting checkpoint X was valid, User Y correctly transitioned their state to Z and authorized it". All internal UPS assumptions (like correct proof tree construction, valid intermediate steps, correct CFC execution) have been verified and compressed into this final proof.
  • Contribution to Horizontal Scalability: Packages the user's entire block activity into a single, efficiently verifiable proof. This proof can be submitted to the network and processed by the GUTA layer in parallel with End Cap proofs from countless other users.
  • High-Level Functionality: Securely concludes and authorizes a user's transaction batch, preparing it for network-level aggregation.
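The signed payload's binding to the session can be sketched as a hash over the five session facts listed above (an illustrative serialization with `sha256` as stand-in hash, not the actual PsyUserProvingSessionSignatureDataCompact encoding):

```python
import hashlib

def signature_payload_hash(header: dict) -> bytes:
    # The five session facts the End Cap proves the user signed.
    payload = (header["start_user_leaf_hash"]
               + header["end_user_leaf_hash"]
               + header["checkpoint_leaf_hash"]
               + header["tx_hash_stack"]
               + header["tx_count"].to_bytes(8, "little"))
    return hashlib.sha256(payload).digest()
```

Because the payload commits to the start leaf, end leaf, checkpoint, and the full ordered transaction list (via the hash stack), a prover cannot substitute a different session under the same signature.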

5. Phase 2: Global User Tree Aggregation (GUTA) Circuits

The DPN receives End Cap proofs and potentially other state change proofs (like user registrations) and aggregates them in parallel using GUTA circuits. These circuits operate on GlobalUserTreeAggregatorHeader structures, which track state transitions within the Global User Tree (GUSR).

(Note: The exact circuit implementations are inferred from the gadgets. These circuits follow standard recursive aggregation patterns.)

5.1. GUTAProcessEndCapCircuit (Hypothetical)

  • Purpose: Integrates a validated user End Cap proof into the GUTA hierarchy.
  • Core Gadget: VerifyEndCapProofGadget
  • Input: An UPSStandardEndCapCircuit proof and its associated UPSEndCapResultCompact data. Also needs witness for checkpoint_historical_merkle_proof and guta_stats.
  • What it Proves:
    • End Cap Proof Validity: The provided End Cap proof is valid and was generated by the correct circuit (known_end_cap_fingerprint_hash).
    • End Cap Result Consistency: The public inputs hash of the End Cap proof matches the hash of the provided UPSEndCapResultCompact data.
    • Historical Checkpoint Validity: The checkpoint_tree_root_hash claimed in the UPSEndCapResultCompact existed as a historical root within the checkpoint_historical_merkle_proof.current_root (which represents the CHKP root being targeted by this GUTA aggregation step).
    • GUTA Header Output: Correctly constructs a GlobalUserTreeAggregatorHeader where:
      • guta_circuit_whitelist is set (likely from input/config).
      • checkpoint_tree_root matches checkpoint_historical_merkle_proof.current_root.
      • state_transition reflects the change from start_user_leaf_hash to end_user_leaf_hash at the correct user_id index (level GLOBAL_USER_TREE_HEIGHT).
      • stats are populated based on input witness.
  • Assumptions Made:
    1. Witness data (End Cap proof, result, stats, historical proof) is accurate initially.
    2. The known_end_cap_fingerprint_hash constant is correct.
    3. The guta_circuit_whitelist root provided (implicitly or explicitly) is correct for this GUTA context.
    4. The checkpoint_historical_merkle_proof.current_root provided accurately represents the target CHKP root for this aggregation level. (This implicitly carries forward the assumption about the original starting checkpoint root's validity).
  • How Assumptions are Discharged:
    • Assumption 1 is checked by verifying the End Cap proof and the historical checkpoint proof.
    • Assumption 2 is a system parameter.
    • Assumptions 3 and 4 are not discharged; they are bundled into the output GlobalUserTreeAggregatorHeader and passed upwards to the next GUTA aggregation level.
  • Contribution to Horizontal Scalability: Allows individual user session results (already proven locally) to be independently verified against historical state and prepared as standardized GUTA inputs for parallel aggregation.
  • High-Level Functionality: Validates and incorporates user end-of-session proofs into the global state aggregation process.

5.2. GUTARegisterUserCircuit (Hypothetical)

  • Purpose: Processes the registration of new users, updating the GUSR tree.
  • Core Gadgets: GUTAOnlyRegisterUsersGadget, GUTARegisterUsersGadget, GUTARegisterUserFullGadget, GUTARegisterUserCoreGadget
  • Input: Data for users being registered, including proofs linking their public keys to a user_registration_tree_root, and delta proofs for GUSR updates. Also needs the target guta_circuit_whitelist and checkpoint_tree_root.
  • What it Proves:
    • Valid Registration: Each user's provided public_key is present in the user_registration_tree_root.
    • Correct GUSR Insertion: For each user, the GUSR tree correctly transitioned at the user_id index from an empty hash to the new PsyUserLeaf hash (initialized with the verified public_key and default_user_state_tree_root). This is verified using delta proofs.
    • GUTA Header Output: Correctly constructs a GlobalUserTreeAggregatorHeader representing the combined state transition for all registered users. stats are typically zero for pure registrations. The header uses the input guta_circuit_whitelist and checkpoint_tree_root.
  • Assumptions Made:
    1. Witness data (registration proofs, user data, delta proofs) is accurate initially.
    2. The input guta_circuit_whitelist and checkpoint_tree_root are correct for this context.
    3. The default_user_state_tree_root constant is correct.
    4. The underlying assumption about the validity of the input checkpoint_tree_root is carried forward.
  • How Assumptions are Discharged:
    • Assumption 1 is checked by verifying registration proofs and GUSR delta proofs.
    • Assumption 2 & 4 are passed upwards via the output header.
    • Assumption 3 is a system parameter.
  • Contribution to Horizontal Scalability: Allows batches of user registrations to be processed, potentially in parallel branches of the GUTA aggregation tree.
  • High-Level Functionality: Securely adds new users to the global system state.

5.3. GUTAAggregationCircuit (Hypothetical - Multiple Variants Possible)

  • Purpose: The workhorse of parallel aggregation. Combines the results (headers) from two lower-level GUTA proofs (which could originate from End Caps, registrations, or previous aggregations).
  • Core Gadgets: VerifyGUTAProofGadget, TwoNCAStateTransitionGadget (for combining different branches), GUTAHeaderLineProofGadget (for propagating up a single branch), GUTAStatsGadget
  • Input: Two input GUTA proofs (ProofWithPublicInputs) and their corresponding GlobalUserTreeAggregatorHeader data. Witness for NCA proofs if needed.
  • What it Proves:
    • Input Proof Validity: Both input GUTA proofs are valid.
    • Input Proof Whitelist: Both input proofs were generated by circuits listed in the same guta_circuit_whitelist (taken from their headers).
    • Input Proof Checkpoint Consistency: Both input proofs reference the same checkpoint_tree_root (taken from their headers).
    • Input Header Consistency: The public inputs hash of each input proof matches the hash of its corresponding provided GlobalUserTreeAggregatorHeader.
    • State Transition Combination Logic: The state_transition in the output GUTA header correctly combines the state_transitions from the two input headers. This involves:
      • Verifying NCA proofs if combining transitions on different branches (TwoNCAStateTransitionGadget).
      • Ensuring direct linkage (old_root matches new_root) if combining transitions on the same path.
      • Correctly propagating the transition upwards if only one input represents a change (GUTAHeaderLineProofGadget).
    • Stats Combination: The stats in the output header are the correct sum of the stats from the input headers (GUTAStatsGadget.combine_with).
    • GUTA Header Output: Correctly constructs the output GlobalUserTreeAggregatorHeader using the combined state transition, combined stats, and the common guta_circuit_whitelist and checkpoint_tree_root from the inputs.
  • Assumptions Made:
    1. Witness data (input proofs, headers, NCA/sibling proofs) is accurate initially.
    2. The assumptions about the validity of the common guta_circuit_whitelist and checkpoint_tree_root carried by the input proofs are implicitly carried forward.
  • How Assumptions are Discharged:
    • Assumption 1 is checked by verifying the input proofs, their linkage to the input headers, and the logic for combining state transitions and stats.
    • Assumption 2 is not discharged; these common contextual assumptions are passed upwards in the output header.
  • Contribution to Horizontal Scalability: This is the core mechanism enabling parallel processing. Thousands of these circuits can run concurrently on the DPN, merging branches of the GUTA proof tree structure, dramatically speeding up aggregation compared to sequential processing.
  • High-Level Functionality: Securely and recursively combines verified state changes from multiple independent sources into larger, aggregated proofs.
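The parallel speedup comes from the shape of the reduction, which can be sketched independently of the cryptography (a toy pairwise reduction; `combine` stands in for one GUTA aggregation proof):

```python
def aggregate(headers: list, combine) -> tuple:
    # Pairwise binary-tree reduction: each level's merges are independent
    # and can run concurrently on the DPN, so n leaf proofs need only
    # ceil(log2(n)) sequential levels.
    level = list(headers)
    depth = 0
    while len(level) > 1:
        nxt = [combine(level[i], level[i + 1])
               for i in range(0, len(level) - 1, 2)]
        if len(level) % 2:
            nxt.append(level[-1])  # odd proof carried up unchanged
        level = nxt
        depth += 1
    return level[0], depth
```

With enough provers, wall-clock aggregation time grows with the depth (logarithmic in the number of user proofs) rather than with the total proof count.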

5.4. GUTANoChangeCircuit

  • Purpose: Handles intervals where no relevant user state changed (GUSR root remains the same) but the network needs to acknowledge the advancement of the checkpoint_tree_root.
  • Core Gadget: GUTANoChangeGadget
  • Input: Proof (checkpoint_tree_proof) that a specific checkpoint_leaf exists in the target checkpoint_tree_root. The target guta_circuit_whitelist root.
  • What it Proves:
    • Checkpoint Validity: The provided checkpoint_leaf hash matches the value proven to be in the checkpoint_tree_proof.
    • GUSR Consistency: The global_state_roots.user_tree_root within the checkpoint_leaf is used as both the old_node_value and new_node_value in the output header's state_transition.
    • GUTA Header Output: Correctly constructs a GlobalUserTreeAggregatorHeader with the input guta_circuit_whitelist, the input checkpoint_tree_root, a state transition showing no change in the GUSR root (at index 0, level 0), and zeroed stats.
  • Assumptions Made:
    1. Witness data (checkpoint proof, leaf) is accurate initially.
    2. The input guta_circuit_whitelist root is correct for this context.
    3. The assumption about the validity of the input checkpoint_tree_root is carried forward.
  • How Assumptions are Discharged:
    • Assumption 1 is checked by verifying the checkpoint proof.
    • Assumptions 2 and 3 are passed upwards via the output header.
  • Contribution to Horizontal Scalability: Ensures the GUTA aggregation process can produce valid proofs even for periods of inactivity in user state changes, keeping the aggregation synchronized with the progressing checkpoint tree.
  • High-Level Functionality: Allows the GUTA layer to represent a "no-op" state transition correctly anchored to an updated checkpoint.

6. Phase 3: Final Block Proof Generation

6.1. Checkpoint Tree "Block" Circuit (Top-Level Aggregation)

  • Purpose: The final circuit in the chain. It aggregates the ultimate proofs from the top-level state trees (the final GUTA proof for GUSR, proofs for GCON updates, etc.) and verifies the transition against the previous block's finalized state.
  • Core Logic: Likely uses variants of VerifyStateTransitionProofGadget or similar aggregation gadgets tailored for combining the top-level tree proofs. It takes the previous block's CHKP root as a crucial public input.
  • Input: The final aggregated proof from the GUTA layer (representing all GUSR changes), potentially proofs for GCON changes (e.g., new contract deployments via VerifyAggUserRegistrationDeployGuta logic), and the previous block's finalized CHKP root hash.
  • What it Proves:
    • Top-Level Proof Validity: The input proofs (e.g., final GUTA proof) are valid and adhere to their respective whitelists and checkpoint contexts.
    • State Root Consistency: The final roots of GUSR, GCON, etc., derived from the input proofs are correctly assembled into the PsyCheckpointGlobalStateRoots for the new block.
    • New Checkpoint Leaf Construction: The new PsyCheckpointLeaf is correctly constructed using the new global state roots and other block metadata (e.g., block number, timestamp).
    • New Checkpoint Root Computation: The new CHKP root hash is correctly computed based on the new leaf and its position in the Checkpoint Tree.
    • Final State Link: The critical verification: it proves that the state transitions represented by the input proofs (originating from potentially millions of user transactions) correctly and validly transform the state anchored by the input previous_block_chkp_root into the state represented by the computed new_chkp_root.
  • Assumptions Made:
    1. The only remaining significant external assumption is that the public input previous_block_chkp_root hash is indeed the valid, finalized root hash of the immediately preceding block.
  • How Assumptions are Discharged:
    • All assumptions related to internal consistency, circuit whitelists, correct state transitions, user signatures, etc., have been recursively verified and discharged by the preceding UPS and GUTA circuit layers.
    • Assumption 1 is discharged by the consensus mechanism or the verifier. They know the hash of the last finalized block and check if the proof's public input matches it. If it matches, the proof demonstrates a valid state transition between the two blocks.
  • Contribution to Horizontal Scalability: This final step takes the output of the massively parallel GUTA aggregation and produces a single, constant-size ZK proof for the entire block's validity. Verifying this proof is extremely fast, regardless of how many transactions were processed in parallel.
  • High-Level Functionality: Creates the final, succinct, and efficiently verifiable proof for the entire block, cryptographically linking it to the previous block and enabling trustless verification of the entire chain's state transition.
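
The verifier-side discharge of Assumption 1 can be sketched as follows. `BlockProof`, `verify_zk`, and `accept_block` are hypothetical stand-ins: real verification checks the ZK proof cryptographically, but the structure of the check, proof valid AND claimed previous root equals the locally known finalized root, is as described above.

```rust
// Illustrative verifier-side check: accept a block proof only if the
// proof verifies and its public input matches the known previous root.
pub struct BlockProof {
    pub previous_block_chkp_root: [u8; 32], // public input
    pub new_chkp_root: [u8; 32],            // public input
    pub zk_valid: bool, // stand-in for real cryptographic verification
}

fn verify_zk(proof: &BlockProof) -> bool {
    proof.zk_valid
}

/// Returns the newly finalized CHKP root, or None if the proof fails
/// or does not link to the last finalized block.
pub fn accept_block(proof: &BlockProof, known_prev_root: [u8; 32]) -> Option<[u8; 32]> {
    if verify_zk(proof) && proof.previous_block_chkp_root == known_prev_root {
        Some(proof.new_chkp_root)
    } else {
        None
    }
}
```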

7. Assumption Reduction Summary: The Journey to Trustlessness

The Psy circuit flow demonstrates a progressive reduction and discharge of assumptions:

  1. Start UPS: Assumes initial state fetched from the last block is correct locally and that the last block's CHKP root was valid. Verifies local consistency.
  2. UPS Transactions: Assumes previous step was valid, assumes current step's witness is correct. Verifies previous proof, verifies current CFC/debt logic, verifies state delta. Passes on proof tree root assumption and the original starting CHKP root assumption.
  3. UPS End Cap: Assumes last step was valid, assumes signature proof/witness correct. Verifies last step, verifies signature proof, verifies consistency between final UPS state and signature payload. Discharges all internal UPS assumptions (proof tree validity, intermediate step validity). Outputs a proof carrying only the assumption about the original starting CHKP root.
  4. GUTA (Process End Cap/Register User): Assumes End Cap/Registration proof is valid, assumes historical checkpoint/whitelist context. Verifies input proof against context. Outputs a GUTA header carrying the context assumptions upwards.
  5. GUTA (Aggregation): Assumes input GUTA proofs are valid and share context. Verifies input proofs, verifies combination logic. Outputs an aggregated GUTA header carrying the common context assumptions upwards.
  6. Final Block Circuit: Assumes top-level proofs (like final GUTA) are valid. Takes the previous block's CHKP root as the only major external assumption. Verifies input proofs, verifies construction of the new CHKP root based on inputs. Discharges all remaining contextual assumptions (whitelists, checkpoints derived from the input proofs). The final proof stands as: "IF the previous CHKP root was X, THEN the new CHKP root is validly Y".

This journey transforms broad initial assumptions about state correctness into a single, verifiable dependency on the previously accepted state, underpinned by the mathematical certainty of ZK proofs.
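
The single remaining dependency chains naturally across blocks: a verifier who trusts the genesis root can fold each block's "IF X THEN Y" implication into the next. A toy sketch of that chaining (types are illustrative, not the real verifier):

```rust
// Illustrative chaining of per-block implications: each proof states
// "IF prev_root THEN new_root"; walking the chain extends trust from
// the genesis root to the tip.
pub struct BlockProof {
    pub prev_root: u64,
    pub new_root: u64,
    pub valid: bool, // stand-in for real ZK verification
}

/// Returns the final root only if every proof verifies and links
/// to the root established by the previous proof.
pub fn verify_chain(genesis_root: u64, proofs: &[BlockProof]) -> Option<u64> {
    let mut root = genesis_root;
    for p in proofs {
        if !p.valid || p.prev_root != root {
            return None; // broken link: the implication chain fails
        }
        root = p.new_root;
    }
    Some(root)
}
```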

8. Conclusion: Scalability Through Parallelism and Proofs

Psy achieves true horizontal scalability by combining:

  1. PARTH Architecture: Isolates user state modifications, enabling conflict-free parallel transaction execution within a block.
  2. User Proving Sessions (UPS): Allows users to locally prove their own transaction sequences, offloading initial proving work.
  3. Parallel ZK Aggregation (GUTA & DPN): Enables the network to verify and combine proofs from millions of users concurrently, overcoming the limitations of sequential processing.
  4. Recursive Proofs: Compresses vast amounts of computation into succinct, fixed-size proofs, making final block verification extremely efficient.

This document describes the end-to-end flow of circuits involved in processing user transactions and aggregating them into a final block proof within the Psy system. It highlights the assumptions made at each stage and how they are progressively verified, ultimately enabling horizontal scalability.

Phase 1: User Proving Session (UPS) - Local Execution

This phase happens locally on the user's device (or via a delegated prover). The user builds a recursive chain of proofs for their transactions within a single block context.

1. UPSStartSessionCircuit

  • Purpose: Initializes the proving session for a user based on the last finalized blockchain state.
  • What it Proves:
    • The starting UserProvingSessionHeader is valid.
    • This header is correctly anchored to a specific checkpoint_leaf_hash which exists at checkpoint_id within the checkpoint_tree_root from the last finalized block.
    • The session_start_context within the header accurately reflects the user's state (start_session_user_leaf_hash, user_id, etc.) as found in the user_tree_root associated with the starting checkpoint.
    • The current_state within the starting header is correctly initialized (user leaf last_checkpoint_id updated, debt trees empty, tx count/stack zero).
  • Assumptions:
    • The witness data (UPSStartStepInput) provided by the user (fetching state from the last block) is correct initially. Constraints verify its consistency.
    • The constant empty_ups_proof_tree_root (representing the start of the recursive proof tree for this session) is correct.
  • How Assumptions are Discharged: Internal consistency checks verify the relationships between the provided header, checkpoint leaf, state roots, user leaf, and the Merkle proofs linking them. The assumption about the previous block's checkpoint_tree_root being correct is implicitly carried forward, as this circuit uses it as the basis for initialization.
  • Contribution to Horizontal Scalability: Establishes a user-specific, isolated starting point based on globally finalized state, allowing this session to proceed independently of other users' sessions within the same new block.
  • High-Level Functionality: Securely starts a user's transaction batch processing.
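
The Merkle consistency checks this circuit performs (checkpoint leaf under the checkpoint tree root, user leaf under the user tree root) all follow the same inclusion-proof pattern, sketched here with a toy hash. The real system uses a ZK-friendly hash; `hash_pair` below is purely illustrative.

```rust
// Toy mixing function standing in for the real ZK-friendly hash.
pub fn hash_pair(l: u64, r: u64) -> u64 {
    l.wrapping_mul(0x9E3779B97F4A7C15) ^ r.rotate_left(17)
}

/// Verify that `leaf` sits at `index` under `root`, given the sibling
/// hashes along the path, as done for the checkpoint leaf and the
/// user leaf at session start.
pub fn verify_merkle(root: u64, leaf: u64, index: u64, siblings: &[u64]) -> bool {
    let mut node = leaf;
    let mut idx = index;
    for &sib in siblings {
        // The index bit decides whether the current node is the
        // left or right child at this level.
        node = if idx & 1 == 0 { hash_pair(node, sib) } else { hash_pair(sib, node) };
        idx >>= 1;
    }
    node == root
}
```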

2. UPSCFCStandardTransactionCircuit (Executed potentially multiple times)

  • Purpose: Processes a single standard transaction (contract call) within the user's ongoing session, extending the recursive proof chain.
  • What it Proves:
    • The ZK proof for the previous UPS step is valid.
    • The previous step's proof was generated by a circuit listed in the ups_circuit_whitelist_root specified in the previous step's header.
    • The public inputs (header hash) of the previous step's proof match the provided previous_step_header.
    • The ZK proof for the current Contract Function Call (CFC) exists within the current current_proof_tree_root.
    • This CFC proof corresponds to a function registered in the GCON tree (via checkpoint context).
    • Executing this CFC correctly transitions the state from the previous_step_header to the new_header_gadget state (updating CSTATE->UCON root, debt trees, tx count/stack).
  • Assumptions:
    • The witness data (UPSCFCStandardTransactionCircuitInput) for this specific step (CFC proof, state delta witnesses, previous step proof info) is correct initially.
    • The current_proof_tree_root provided matches the actual root of the recursive proof tree being built.
  • How Assumptions are Discharged:
    • Verifies the previous step's proof using VerifyPreviousUPSStepProofInProofTreeGadget. This discharges the assumption about the previous step's validity and its public inputs.
    • Verifies the CFC proof and its link to the contract state using UPSVerifyCFCStandardStepGadget.
    • Verifies the state delta logic, ensuring the transition is correct based on witness data.
    • The assumption about the current_proof_tree_root is passed implicitly to the next step or the End Cap circuit.
  • Contribution to Horizontal Scalability: The user processes transactions locally and serially for themselves, maintaining self-consistency without interacting with other users' current-block activity.
  • High-Level Functionality: Securely executes and proves individual smart contract interactions locally.
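
The recursive-step checks on the previous proof can be sketched as below. The whitelist is modeled as a `HashSet` for brevity; in the actual circuit it is a Merkle root (`ups_circuit_whitelist_root`) checked via an inclusion proof, and proof verification is cryptographic rather than a boolean flag.

```rust
use std::collections::HashSet;

// Illustrative stand-in for the previous step's proof and its
// public data; field names are assumptions.
pub struct PrevStepProof {
    pub circuit_fingerprint: u64, // fingerprint of the circuit that made it
    pub public_header_hash: u64,  // public input committing to the header
    pub valid: bool,              // stand-in for real ZK verification
}

/// The three checks each UPS transaction step performs on its
/// predecessor: the proof verifies, it came from a whitelisted
/// circuit, and its public input matches the claimed previous header.
pub fn check_previous_step(
    proof: &PrevStepProof,
    whitelist: &HashSet<u64>, // stand-in for the Merkle whitelist
    claimed_header_hash: u64,
) -> bool {
    proof.valid
        && whitelist.contains(&proof.circuit_fingerprint)
        && proof.public_header_hash == claimed_header_hash
}
```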

3. UPSCFCDeferredTransactionCircuit (Executed if applicable)

  • Purpose: Processes a transaction that settles a deferred debt, then executes the main CFC logic.
  • What it Proves: Similar to the standard circuit, but additionally proves:
    • A specific deferred transaction item was removed from the deferred_tx_debt_tree.
    • The item removed corresponds exactly to the call data of the CFC being executed.
    • The subsequent CFC state transition starts from the state after the debt was removed.
  • Assumptions: Same as the standard circuit, plus assumes the witness for the deferred transaction removal proof (DeltaMerkleProofGadget) is correct initially.
  • How Assumptions are Discharged: Verifies previous step proof. Verifies the debt removal proof and its consistency with the CFC call data. Verifies the subsequent state delta.
  • Contribution to Horizontal Scalability: Same as standard transaction circuit (local serial execution).
  • High-Level Functionality: Enables settlement of asynchronous transaction debts within the local proving flow.

4. UPSStandardEndCapCircuit

  • Purpose: Finalizes the user's entire proving session for the block.
  • What it Proves:
    • The proof for the last UPS transaction step is valid and used an allowed circuit.
    • The ZK proof for the user's signature (authorizing the session) is valid and exists in the same UPS proof tree.
    • The signature corresponds to the user's registered public key (derived from signature proof parameters).
    • The signature payload (PsyUserProvingSessionSignatureDataCompact) correctly reflects the session's start/end user leaves, checkpoint, final tx stack, and tx count.
    • The nonce used in the signature is valid (incremented).
    • The final UPS state shows both deferred_tx_debt_tree_root and inline_tx_debt_tree_root are empty (all debts settled).
    • The last_checkpoint_id in the final user leaf matches the session's checkpoint_id and has progressed correctly.
    • (If aggregation proof verification included): The UPS proof tree itself was constructed using circuits from a known proof_tree_circuit_whitelist_root.
  • Assumptions:
    • Witness data (UPSEndCapFromProofTreeGadgetInput, potentially agg proof witness) is correct initially.
    • Known constants (network_magic, empty debt roots, known_ups_circuit_whitelist_root, known_proof_tree_circuit_whitelist_root) are correct.
  • How Assumptions are Discharged:
    • Verifies the last UPS step proof.
    • Verifies the ZK signature proof.
    • Connects signature data to the final UPS header state.
    • Checks nonce, checkpoint ID, empty debt trees.
    • Verifies proofs against whitelists using provided roots.
    • The output of this circuit (the End Cap proof) now implicitly carries the assumption that the starting checkpoint_tree_root (used in the UPSStartSessionCircuit) was correct. All internal UPS assumptions have been discharged.
  • Contribution to Horizontal Scalability: Creates a single, verifiable proof representing all of a user's activity for the block. This proof can now be processed in parallel with proofs from other users by the GUTA layer.
  • High-Level Functionality: Securely concludes a user's transaction batch, authorizes it, and packages it for network aggregation.
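
The End Cap's finalization checks on the last session state can be summarized in a few lines. Field names and the empty-root constant are illustrative; the real circuit compares against the known empty-tree root constants and verifies the nonce via the signature payload.

```rust
// Illustrative stand-in for the empty debt-tree root constant.
pub const EMPTY_DEBT_ROOT: u64 = 0;

// Final session state fields relevant to the End Cap checks
// (names are assumptions mirroring this document's prose).
pub struct FinalUpsState {
    pub deferred_tx_debt_tree_root: u64,
    pub inline_tx_debt_tree_root: u64,
    pub nonce: u64,
    pub last_checkpoint_id: u64,
}

/// Both debt trees must be empty (all debts settled), the nonce must
/// have incremented, and the user leaf's checkpoint id must match the
/// session's checkpoint.
pub fn end_cap_checks(state: &FinalUpsState, start_nonce: u64, session_checkpoint_id: u64) -> bool {
    state.deferred_tx_debt_tree_root == EMPTY_DEBT_ROOT
        && state.inline_tx_debt_tree_root == EMPTY_DEBT_ROOT
        && state.nonce == start_nonce + 1
        && state.last_checkpoint_id == session_checkpoint_id
}
```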

Phase 2: Global User Tree Aggregation (GUTA) - Parallel Network Execution

The Decentralized Proving Network (DPN) takes End Cap proofs (and potentially other GUTA proofs like user registrations) from many users and aggregates them in parallel. This involves specialized GUTA circuits.

(Note: The provided files focus heavily on UPS and GUTA gadgets. The exact structure of the GUTA circuits using these gadgets is inferred but follows standard recursive proof aggregation patterns.)

Example GUTA Circuits (Inferred):

5. GUTAProcessEndCapCircuit (Hypothetical)

  • Purpose: To take a user's validated UPSStandardEndCapCircuit proof and integrate its state change into the GUTA proof hierarchy.
  • Core Logic: Uses VerifyEndCapProofGadget.
  • What it Proves:
    • The End Cap proof is valid and used the correct circuit (known_end_cap_fingerprint_hash).
    • The checkpoint_tree_root claimed by the user in the End Cap result existed historically.
    • Outputs a standard GlobalUserTreeAggregatorHeader representing the user's GUSR tree state transition (start leaf hash -> end leaf hash at the user's ID index) and stats.
  • Assumptions:
    • Witness (End Cap proof, result, stats, historical proof) is correct initially.
    • The known_end_cap_fingerprint_hash constant is correct.
    • A default_guta_circuit_whitelist root is provided or known.
  • How Assumptions are Discharged: Verifies the End Cap proof and historical checkpoint proof. Packages the result into a standard GUTA header. The assumption about the default_guta_circuit_whitelist is passed upwards. The assumption about the current checkpoint_tree_root (from the historical proof) is passed upwards.
  • Contribution to Horizontal Scalability: Allows individual user session results to be verified independently and prepared for parallel aggregation.
  • High-Level Functionality: Validates and incorporates user end-of-session proofs into the global aggregation process.

6. GUTARegisterUserCircuit (Hypothetical)

  • Purpose: To process the registration of one or more new users.
  • Core Logic: Uses GUTAOnlyRegisterUsersGadget (which uses GUTARegisterUsersGadget, GUTARegisterUserFullGadget, GUTARegisterUserCoreGadget).
  • What it Proves:
    • For each registered user, their public_key was correctly inserted at their user_id index in the GUSR tree (transitioning from zero hash to the new user leaf hash).
    • The public_key used matches an entry in the user_registration_tree_root.
    • Outputs a GlobalUserTreeAggregatorHeader representing the aggregate GUSR state transition for all registered users, with zero stats.
  • Assumptions:
    • Witness (registration proofs, user count) is correct initially.
    • guta_circuit_whitelist and checkpoint_tree_root inputs are correct for this context.
    • default_user_state_tree_root constant is correct.
  • How Assumptions are Discharged: Verifies delta proofs for GUSR insertion and Merkle proofs against the registration tree. Outputs a standard GUTA header, passing assumptions about whitelist/checkpoint upwards.
  • Contribution to Horizontal Scalability: User registration can be batched and potentially processed in parallel branches of the GUTA tree.
  • High-Level Functionality: Securely adds new users to the system state.

7. GUTAAggregationCircuit (Hypothetical - Multiple Variants)

  • Purpose: To combine the results (headers) from two or more lower-level GUTA proofs (which could be End Cap results, registrations, or previous aggregations).
  • Core Logic:
    • Verifies each input GUTA proof using VerifyGUTAProofGadget.
    • Ensures all input proofs used circuits from the same guta_circuit_whitelist and reference the same checkpoint_tree_root.
    • Combines the state_transitions from the input proofs:
      • If transitions are on different branches, uses TwoNCAStateTransitionGadget with an NCA proof.
      • If transitions are on the same branch (e.g., one input is a line proof output), connects them directly (old_root of current matches new_root of previous).
      • If only one input, uses GUTAHeaderLineProofGadget to propagate upwards.
    • Combines the stats from input proofs using GUTAStatsGadget.combine_with.
    • Outputs a single GlobalUserTreeAggregatorHeader representing the combined state transition and stats.
  • What it Proves: That given valid input GUTA proofs operating under the same whitelist and checkpoint context, the combined state transition and stats represented by the output header are correct.
  • Assumptions:
    • Witness (input proofs, headers, NCA/sibling proofs) is correct initially.
  • How Assumptions are Discharged: Verifies input proofs and their headers. Verifies the logic of combining state transitions (NCA/Line/Direct). Passes the common whitelist/checkpoint root assumptions upwards.
  • Contribution to Horizontal Scalability: This is the core of parallel aggregation. Multiple instances of this circuit run concurrently across the DPN, merging proof branches in a tree structure (like MapReduce).
  • High-Level Functionality: Securely and recursively combines verified state changes from multiple sources into larger, aggregated proofs.
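
The direct (same-branch) combination mode and the stats combination can be sketched as follows. Types are illustrative; the NCA case additionally requires a Merkle proof of the nearest common ancestor and is omitted here for brevity.

```rust
// Illustrative state transition over a (sub)tree root.
#[derive(Clone, Copy)]
pub struct Transition {
    pub old_root: u64,
    pub new_root: u64,
}

#[derive(Clone, Copy)]
pub struct Stats {
    pub tx_count: u64,
}

impl Stats {
    /// Stats combine additively when merging two GUTA branches.
    pub fn combine_with(self, other: Stats) -> Stats {
        Stats { tx_count: self.tx_count + other.tx_count }
    }
}

/// Same-branch combination: valid only if the left transition's new
/// root is exactly the right transition's old root; the result spans
/// from the left's old root to the right's new root.
pub fn combine_same_branch(left: Transition, right: Transition) -> Option<Transition> {
    if left.new_root == right.old_root {
        Some(Transition { old_root: left.old_root, new_root: right.new_root })
    } else {
        None // the circuit constraint would be unsatisfiable
    }
}
```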

8. GUTANoChangeCircuit (Hypothetical)

  • Purpose: To handle cases where no user state changed but the checkpoint advanced.
  • Core Logic: Uses GUTANoChangeGadget.
  • What it Proves: That given a new checkpoint_leaf verified to be in the checkpoint_tree_proof, the GUSR tree root remains unchanged, and stats are zero. Outputs a GUTA header reflecting this.
  • Assumptions: Witness (checkpoint proof, leaf) is correct initially. Input guta_circuit_whitelist is correct.
  • How Assumptions are Discharged: Verifies checkpoint proof. Outputs a standard GUTA header passing assumptions upward.
  • Contribution to Horizontal Scalability: Allows the aggregation process to stay synchronized with the checkpoint tree even during periods of inactivity for certain state trees.
  • High-Level Functionality: Advances the aggregated checkpoint state reference.

Phase 3: Final Block Proof

9. Checkpoint Tree "Block" Circuit (Top-Level Aggregation)

  • Purpose: The final aggregation circuit that combines proofs from the roots of all major state trees (like GUSR via the top-level GUTA proof, GCON, etc.) for the block.
  • Core Logic:
    • Verifies the top-level GUTA proof (and proofs for other top-level trees if applicable).
    • Takes the previous block's finalized CHKP root as a public input.
    • Constructs the new CHKP leaf based on the newly computed roots of GUSR, GCON, etc., and other block metadata.
    • Computes the new CHKP root.
    • The only external assumption verified here is that the input previous_block_chkp_root matches the actual finalized root of the last block.
  • What it Proves: That the entire state transition for the block, represented by the change from the previous_block_chkp_root to the new_chkp_root, is valid, having recursively verified all constituent user transactions and aggregations according to protocol rules and circuit whitelists.
  • Assumptions: The only remaining input assumption is the hash of the previous block's CHKP root.
  • How Assumptions are Discharged: All assumptions from lower levels (circuit whitelists, internal state consistencies) have been verified recursively. The final link to the previous block state is checked against the public input.
  • Contribution to Horizontal Scalability: Represents the culmination of the massively parallel aggregation process, producing a single, succinct proof for the entire block's validity.
  • High-Level Functionality: Creates the final, verifiable proof of state transition for the entire block, linking it cryptographically to the previous block. This proof can be efficiently verified by any node or light client.

Coordinator Gadgets

These gadgets are components used within circuits run by the Coordinator nodes.

BatchAppendUserRegistrationTreeGadget

  • File: append_user_registration_tree.rs (Gadget definition)
  • Purpose: Aggregates multiple "Spiderman" append proofs sequentially for the User Registration Tree (URT). Handles padding for a fixed maximum number of sub-tree appends.
  • Key Inputs/Witness:
    • user_registration_tree_height, batch_sub_tree_height, max_sub_trees: Parameters.
    • SpidermanUpdateProof[]: Array witness containing the append proofs for each sub-tree batch being added.
  • Key Outputs/Computed Values:
    • old_root: The root of the URT before all appends in this gadget instance.
    • new_root: The root of the URT after all appends in this gadget instance.
  • Core Logic/Constraints:
    • Instantiates max_sub_trees instances of SpidermanAppendProofGadget.
    • Connects the new_root of one gadget to the old_root of the next in sequence.
    • Handles witness padding by setting dummy proofs for unused slots.
  • Assumptions: Assumes witness SpidermanUpdateProof array is valid initially (constraints verify internal consistency). Assumes old_root of the first gadget matches the tree state before this operation.
  • Role: Allows efficient batching of user registration appends into a single ZK proof step for the Coordinator.
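
The sequential chaining constraint, each append's `old_root` must equal the previous append's `new_root`, with dummy proofs padding unused slots, can be sketched like this (illustrative types, not the gadget's actual interface):

```rust
// Illustrative stand-in for one Spiderman append proof's roots.
#[derive(Clone, Copy)]
pub struct AppendProof {
    pub old_root: u64,
    pub new_root: u64,
}

/// A padded (dummy) proof leaves the root unchanged, filling
/// unused slots up to `max_sub_trees`.
pub fn dummy_proof(root: u64) -> AppendProof {
    AppendProof { old_root: root, new_root: root }
}

/// Returns the final root if every append chains correctly from
/// `start_root`; in-circuit, a broken chain is an unsatisfiable
/// constraint rather than a None.
pub fn chain_appends(start_root: u64, proofs: &[AppendProof]) -> Option<u64> {
    let mut root = start_root;
    for p in proofs {
        if p.old_root != root {
            return None;
        }
        root = p.new_root;
    }
    Some(root)
}
```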

BatchDeployContractsGadget

  • File: deploy_contract.rs (Gadget definition)
  • Purpose: Handles the proof logic for appending a batch of new contracts to the Global Contract Tree (GCON). Verifies one Spiderman append proof and ensures the provided contract leaf data matches the appended hashes.
  • Key Inputs/Witness:
    • contract_tree_height, batch_sub_tree_height: Parameters.
    • SpidermanUpdateProof: Witness for the batch append operation on GCON.
    • PsyContractLeaf[]: Array witness containing the data for each deployed contract leaf in the batch.
  • Key Outputs/Computed Values:
    • old_root, new_root: Start and end roots of the GCON tree for this batch append (from spiderman_gadget).
  • Core Logic/Constraints:
    • Instantiates SpidermanAppendProofGadget.
    • Instantiates PsyContractLeafGadget for each potential leaf slot in the batch.
    • For each leaf slot marked as added (is_added from Spiderman proof):
      • Computes the hash of the corresponding PsyContractLeafGadget witness.
      • Asserts this computed hash matches the new_leaves[i] value from the Spiderman proof.
    • Handles witness padding for unused leaf slots.
  • Assumptions: Assumes witness SpidermanUpdateProof and PsyContractLeaf array are valid initially. Assumes old_root of the Spiderman gadget matches the GCON state before this operation.
  • Role: Securely proves the batch addition of new contracts to the global contract tree, verifying consistency between the state update and the provided contract metadata.
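
The per-leaf consistency check, for every slot the Spiderman proof marks as added, the hash of the supplied contract leaf must equal the appended leaf hash, can be sketched as below. The hash and slot layout are toy stand-ins for the real leaf hashing.

```rust
// Toy leaf hash, purely illustrative of committing to leaf data.
pub fn toy_leaf_hash(leaf_data: u64) -> u64 {
    leaf_data.wrapping_mul(0x100000001B3) ^ 0xCBF29CE484222325
}

// One leaf slot of the batch: the Spiderman proof's is_added flag and
// appended hash, plus the witness contract leaf data (names assumed).
pub struct Slot {
    pub is_added: bool,
    pub new_leaf_hash: u64,
    pub contract_leaf_data: u64,
}

/// Padded (not-added) slots are skipped; added slots must have
/// matching leaf hashes.
pub fn check_deploy_batch(slots: &[Slot]) -> bool {
    slots
        .iter()
        .all(|s| !s.is_added || toy_leaf_hash(s.contract_leaf_data) == s.new_leaf_hash)
}
```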

VerifyAggUserRegistartionDeployContractsGUTAHeaderGadget

(Note: "Registartion" reflects the identifier's spelling in the source code.)

  • File: verify_agg_user_registration_deploy_guta.rs
  • Purpose: Represents the combined state transitions resulting from aggregating User Registrations, Contract Deployments, and GUTA proofs. Acts as the core data structure within the Part 1 Aggregation circuit.
  • Key Inputs/Witness: (Typically derived from verified sub-proofs)
    • user_registration_tree_delta: AggStateTransitionGadget for URT.
    • global_contract_tree_delta: AggStateTransitionGadget for GCON.
    • global_user_tree_delta: GlobalUserTreeAggregatorHeaderGadget for GUSR.
  • Key Outputs/Computed Values:
    • combined_hash: A single hash representing the start/end states of all three trees and the GUTA header.
  • Core Logic/Constraints: Primarily a data structure. get_combined_hash defines the specific hashing scheme to commit to all input state transitions.
  • Assumptions: Assumes the input transition/header gadgets are correctly derived from verified proofs.
  • Role: Standardizes the output structure of the Part 1 aggregation step, providing a single hash commitment for verification by the final block circuit.
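
The combined-hash commitment over the three state transitions can be sketched as follows. The mixing function and field layout are toy stand-ins; the real `get_combined_hash` uses the system's ZK-friendly hash over the full transition data.

```rust
// Toy accumulator standing in for the real hash.
fn mix(a: u64, b: u64) -> u64 {
    a.wrapping_mul(31).wrapping_add(b)
}

// Illustrative state transition (old/new root) for one tree.
pub struct Delta {
    pub old_root: u64,
    pub new_root: u64,
}

/// Commit to the URT, GCON, and GUSR transitions with one hash,
/// mirroring the single-hash commitment this header gadget produces.
pub fn combined_hash(urt: &Delta, gcon: &Delta, gusr: &Delta) -> u64 {
    let mut h = 0u64;
    for d in [urt, gcon, gusr] {
        h = mix(h, d.old_root);
        h = mix(h, d.new_root);
    }
    h
}
```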

VerifyAggUserRegistartionDeployContractsGUTAGadget

  • File: verify_agg_user_registration_deploy_guta.rs
  • Purpose: The core gadget within the Part 1 Aggregation circuit. Verifies the aggregated proofs for User Registrations, Contract Deployments, and GUTA, ensuring they are valid, used whitelisted circuits, and reference the same checkpoint state.
  • Key Inputs/Witness:
    • Parameters and configuration for verifying each of the three input proofs (common data, whitelist/fingerprint configs, GUTA params).
    • Proof objects and verifier data for each of the three input proofs.
    • Witnesses for state transitions/headers corresponding to each proof.
    • GUTA whitelist Merkle proof.
  • Key Outputs/Computed Values:
    • header: A VerifyAggUserRegistartionDeployContractsGUTAHeaderGadget containing the verified state transitions.
  • Core Logic/Constraints:
    • Instantiates VerifyStateTransitionProofGadget for User Registrations, verifying the proof against its config/fingerprint.
    • Instantiates VerifyStateTransitionProofGadget for Contract Deployments similarly.
    • Instantiates VerifyGUTAProofGadget for the GUTA proof, verifying it against its config/fingerprint and the GUTA whitelist root.
    • Does not directly connect the checkpoint_tree_root across the three input proofs; that consistency is handled by the job system, which ensures only proofs referencing the same checkpoint are aggregated together.
    • Constructs the output header from the verified transition gadgets.
  • Assumptions: Assumes witness proofs, headers, and verifier data are valid initially. Assumes input configuration (common data, fingerprint configs, whitelist root) is correct.
  • Role: Securely combines the results of the three major parallel state update processes (User Reg, Deploy Contract, GUTA) into a single verifiable unit, discharging assumptions about their individual validity and circuit usage.

PsyPart1StateDeltaResultGadget

  • File: checkpoint_state_transition_proofs.rs
  • Purpose: Takes the verified combined header from the Part 1 aggregation (VerifyAggUserRegistartionDeployContractsGUTAHeaderGadget) and combines it with previous block stats and new block info (time, randomness) to calculate the new Checkpoint Leaf state.
  • Key Inputs/Witness:
    • part_1_header: Output from the Part 1 aggregation gadget.
    • old_stats: PsyCheckpointLeafStatsGadget witness for the previous block's stats.
    • block_time: Target witness for the current block's timestamp.
    • final_random_seed_contribution: Hash witness for randomness.
  • Key Outputs/Computed Values:
    • old_state_roots, new_state_roots: Derived directly from part_1_header.
    • new_stats: Computed by combining GUTA stats with time, randomness, etc.
    • old_checkpoint_leaf: Constructed from old_state_roots and old_stats.
    • new_checkpoint_leaf: Constructed from new_state_roots and new_stats.
  • Core Logic/Constraints:
    • Constructs old_state_roots and new_state_roots gadgets.
    • Computes new_stats based on inputs (copying GUTA stats, hashing random seed, setting time, zeroing PM/DA stats for now).
    • Constructs old_checkpoint_leaf and new_checkpoint_leaf gadgets.
    • Asserts block_time > old_stats.block_time.
  • Assumptions: Assumes part_1_header is correctly verified. Assumes old_stats, block_time, final_random_seed_contribution witnesses are correct.
  • Role: Calculates the state transition specifically for the Checkpoint Leaf data based on the aggregated results from the rest of the block's activities.
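
The stats portion of this delta computation, copying GUTA stats, setting the new block time, and enforcing monotonic time, can be sketched as below (illustrative fields; the real gadget also folds in the random seed and zeroes PM/DA stats).

```rust
// Illustrative subset of the checkpoint leaf stats.
pub struct Stats {
    pub block_time: u64,
    pub tx_count: u64,
}

/// Compute the new block's stats; returns None if the time constraint
/// (block_time > old_stats.block_time) is violated, which in-circuit
/// would make the proof unsatisfiable.
pub fn new_checkpoint_stats(old: &Stats, guta_tx_count: u64, block_time: u64) -> Option<Stats> {
    if block_time <= old.block_time {
        return None;
    }
    Some(Stats { block_time, tx_count: guta_tx_count })
}
```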

CheckpointStateTransitionChildProofsGadget

  • File: checkpoint_state_transition_proofs.rs
  • Purpose: Verifies the "Part 1" aggregation proof within the final block circuit and instantiates the gadget (PsyPart1StateDeltaResultGadget) that calculates the Checkpoint Leaf transition.
  • Key Inputs/Witness:
    • Parameters for verifying the Part 1 proof (common data, cap height, known fingerprint).
    • Part 1 proof object and verifier data.
    • Witnesses needed by PsyPart1StateDeltaResultGadget (old_stats, block_time, random_seed).
  • Key Outputs/Computed Values:
    • state_delta_gadget: The instantiated PsyPart1StateDeltaResultGadget.
  • Core Logic/Constraints:
    • Verifies the part_1_proof_target against part_1_verifier_data.
    • Checks the fingerprint of part_1_verifier_data against known_part_1_fingerprint.
    • Instantiates PsyPart1StateDeltaResultGadget.
    • Computes the expected public inputs hash for the Part 1 proof using the state_delta_gadget.part_1_header.
    • Asserts this computed hash matches the actual public inputs from part_1_proof_target.
  • Assumptions: Assumes witness proofs, verifier data, and state delta inputs are correct initially. Assumes known_part_1_fingerprint constant is correct.
  • Role: Securely incorporates the aggregated result of UserReg/Deploy/GUTA processing (the Part 1 proof) into the final block transition calculation.

CheckpointStateTransitionCoreGadget

  • File: checkpoint_state_transition.rs
  • Purpose: Handles the core Merkle proof logic for updating the Checkpoint Tree (CHKP) itself. Verifies the append operation for the new checkpoint leaf and its consistency with the previous checkpoint leaf.
  • Key Inputs/Witness:
    • checkpoint_tree_height: Parameter.
    • append_checkpoint_tree_proof: DeltaMerkleProofCore witness for appending the new leaf.
    • previous_checkpoint_proof: MerkleProofCore witness proving the existence of the previous checkpoint leaf.
  • Key Outputs/Computed Values:
    • old_checkpoint_tree_root, new_checkpoint_tree_root: Roots before/after append.
    • old_checkpoint_leaf_hash, new_checkpoint_leaf_hash: Leaf hashes involved.
  • Core Logic/Constraints:
    • Instantiates DeltaMerkleProofGadget for the append proof and MerkleProofGadget for the previous proof.
    • Asserts append_checkpoint_tree_proof.old_value is zero hash (ensures append).
    • Asserts append_checkpoint_tree_proof.old_root matches previous_checkpoint_proof.root (ensures continuity).
    • Asserts append_checkpoint_tree_proof.index == previous_checkpoint_proof.index + 1.
  • Assumptions: Assumes witness Merkle proofs are valid initially.
  • Role: Enforces the append-only nature and sequential integrity of the main Checkpoint Tree, linking the current block's update directly to the previous block's verified state.
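
The three append constraints, previously empty slot, root continuity with the previous leaf proof, and strictly sequential index, can be sketched as follows (illustrative types; in the gadget these are circuit constraints over the two Merkle proof witnesses):

```rust
// Illustrative stand-in for the empty-leaf hash constant.
pub const ZERO_HASH: u64 = 0;

// Relevant fields of the delta (append) proof and the previous
// checkpoint's inclusion proof; names are assumptions.
pub struct AppendProof {
    pub old_value: u64,
    pub old_root: u64,
    pub index: u64,
}

pub struct PrevProof {
    pub root: u64,
    pub index: u64,
}

/// The append is valid only if the target slot was empty, the append
/// starts from the root the previous proof was verified against, and
/// the new index is exactly one past the previous checkpoint's index.
pub fn check_checkpoint_append(append: &AppendProof, prev: &PrevProof) -> bool {
    append.old_value == ZERO_HASH
        && append.old_root == prev.root
        && append.index == prev.index + 1
}
```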

User Proving Session (UPS) Gadgets

These gadgets are primarily used within the circuits executed locally by users (or their delegates) to prove their transaction sequences.


CorrectUPSHeaderHashesGadget

  • File: correct_header_hashes.rs
  • Purpose: A data structure gadget to hold potentially modified starting debt tree roots for a UPS step. Used when a transaction (like debt repayment) needs to alter the context before the main state delta logic of that same step runs.
  • Technical Function: Stores previous_step_deferred_tx_debt_tree_root and previous_step_inline_tx_debt_tree_root. These values can override the corresponding roots from the actual previous step's header when passed into gadgets like UPSCFCStandardStateDeltaGadget.
  • Inputs/Witness: Takes a reference to the previous step's UserProvingSessionHeaderGadget.
  • Outputs/Computed: The overridden hash targets.
  • Constraints: None internally; constraints are applied where its output values are consumed.
  • Assumptions: Assumes the input previous_step header is correctly formed (though its values might be overridden).
  • Role: Enables flexible ordering of operations within a single UPS step, specifically allowing debt tree modifications (like popping an item) to be accounted for before the main transaction logic uses those trees.
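The override pattern can be sketched as follows (hypothetical names, plain Rust rather than circuit targets): by default the corrections simply forward the previous header's debt roots, and a debt-settling step swaps in the post-pop root before the state delta logic reads it.

```rust
/// Debt-tree roots as carried in a UPS step header (illustrative u64 stand-ins).
pub struct HeaderDebtRoots {
    pub deferred_tx_debt_tree_root: u64,
    pub inline_tx_debt_tree_root: u64,
}

/// Holds the roots that downstream logic should treat as the
/// "previous step" starting point.
pub struct Corrections {
    pub previous_step_deferred_tx_debt_tree_root: u64,
    pub previous_step_inline_tx_debt_tree_root: u64,
}

impl Corrections {
    /// No override: start exactly where the previous step ended.
    pub fn identity(prev: &HeaderDebtRoots) -> Self {
        Corrections {
            previous_step_deferred_tx_debt_tree_root: prev.deferred_tx_debt_tree_root,
            previous_step_inline_tx_debt_tree_root: prev.inline_tx_debt_tree_root,
        }
    }

    /// Override the deferred root, e.g. after popping a settled debt item.
    pub fn with_deferred_root(prev: &HeaderDebtRoots, post_pop_root: u64) -> Self {
        Corrections {
            previous_step_deferred_tx_debt_tree_root: post_pop_root,
            previous_step_inline_tx_debt_tree_root: prev.inline_tx_debt_tree_root,
        }
    }
}
```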

UPSVerifyCFCProofExistsAndValidGadget

  • File: ups_cfc_verify_inclusion.rs
  • Purpose: Verifies two critical aspects of a Contract Function Call (CFC) within a UPS: (1) That the ZK proof for the CFC execution exists and is valid within the user's current UPS proof tree, and (2) That the specific function being called is officially registered within the contract's definition on the blockchain (via checkpoint context).
  • Technical Function: Combines proof attestation within the UPS tree with function inclusion verification against global contract state.
  • Inputs/Witness:
    • checkpoint_state_gadget: Witness for PsyCheckpointLeafCompactWithStateRoots providing context (global roots).
    • verify_cfc_proof_gadget: Witness (AttestTreeAwareProofInTreeInput) for the CFC proof itself, including its Merkle proof within the session_proof_tree.
    • cfc_inclusion_proof_gadget: Witness (PsyContractFunctionInclusionProof) proving the function's fingerprint exists in the contract's function tree (CFT).
    • ups_session_proof_tree_height: Parameter.
  • Outputs/Computed:
    • attested_proof_tree_root: Root of the UPS session proof tree containing the CFC proof.
    • checkpoint_leaf_hash: Hash of the contextual checkpoint leaf.
    • cfc_fingerprint: The verified fingerprint (verifier data hash) of the CFC circuit.
    • cfc_inner_public_inputs_hash: Hash of the CFC proof's original public inputs (before tree awareness wrapping).
    • cfc_contract_id, cfc_method_id, cfc_num_inputs, cfc_num_outputs: Metadata extracted from the function inclusion proof.
  • Constraints:
    • Verifies CFC proof validity and inclusion in the session tree via AttestTreeAwareProofInTreeGadget.
    • Verifies function inclusion in the contract's CFT via PsyContractFunctionInclusionProofGadget.
    • Connects cfc_inclusion_proof.contract_inclusion_proof.contract_tree_merkle_proof.root to checkpoint_state_gadget.global_state_roots.contract_tree_root (ensuring CFC inclusion is checked against the correct global contract state).
    • Connects cfc_inclusion_proof.function_verifier_fingerprint to verify_cfc_proof_gadget.fingerprint (ensuring the proven function matches the verified CFC circuit).
  • Assumptions: Assumes witness data (proofs, state, inclusion proofs) is valid initially. Assumes ups_session_proof_tree_height is correct.
  • Role: Acts as a crucial security gate within UPS, ensuring that the user is proving the execution of a legitimate, registered smart contract function and that this execution proof is part of their current, consistent session proof sequence.
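The two cross-checks that make this gadget a security gate can be sketched as plain equality constraints (illustrative names and `u64` stand-ins for hash targets; in the circuit these are `connect` constraints over values already bound by verified proofs):

```rust
/// Values exposed by the CFC proof attestation (illustrative).
pub struct CfcAttestation {
    pub fingerprint: u64,              // verifier-data hash of the CFC circuit
    pub attested_proof_tree_root: u64, // UPS session proof tree root
}

/// Values exposed by the function inclusion proof (illustrative).
pub struct FunctionInclusion {
    pub function_verifier_fingerprint: u64,
    pub contract_tree_root: u64, // root the CFT inclusion chains up to
}

/// Mirrors the gadget's linkage constraints: the proven circuit must be the
/// registered function, and registration must be checked against the global
/// contract tree committed in the checkpoint.
pub fn cfc_links_ok(
    att: &CfcAttestation,
    incl: &FunctionInclusion,
    checkpoint_contract_tree_root: u64,
) -> bool {
    att.fingerprint == incl.function_verifier_fingerprint
        && incl.contract_tree_root == checkpoint_contract_tree_root
}
```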

UPSCFCStandardStateDeltaGadget

  • File: ups_standard_cfc_state_delta.rs
  • Purpose: Calculates the precise changes to the user's state (UserProvingSessionHeader) resulting from a single, standard CFC transaction. It enforces the consistency between the transaction's claimed effects (witnessed context) and the cryptographic updates to the relevant Merkle trees.
  • Technical Function: Verifies delta/pivot proofs for state updates and computes the next header state.
  • Inputs/Witness:
    • previous_step_header_gadget: Header state before this transaction.
    • corrections: Optional CorrectUPSHeaderHashesGadget to override starting debt tree roots.
    • contract_state_tree_height: Target specifying the height of the specific CSTATE tree being modified.
    • UPSCFCStandardStateDeltaInput: Witness containing:
      • cfc_transaction_input_context: Start (transaction_call_start_ctx) and end (transaction_end_ctx) contexts claimed by the CFC execution.
      • user_contract_tree_update_proof: DeltaMerkleProofCore showing the change to the user's UCON tree (updating the root hash for the specific contract_id).
      • deferred_tx_debt_pivot_proof, inline_tx_debt_pivot_proof: MerkleProofCore showing the start and end roots of the debt trees remain consistent (pivot proofs).
  • Outputs/Computed:
    • (Self, UserProvingSessionHeaderGadget): Returns the gadget instance and the new header state after applying the delta.
    • cfc_inner_public_inputs_hash: Hash of the cfc_transaction_input_context (used for linking to verification).
    • cfc_contract_id, cfc_method_id, etc.: Metadata passed through.
  • Constraints:
    • CFC Context Hash: Computes cfc_inner_public_inputs_hash from witness.
    • UCON Update: Verifies user_contract_tree_update_proof:
      • old_root matches previous_step_header.current_state.user_leaf.user_state_tree_root.
      • index matches cfc_transaction_input_context...contract_id.
      • old_value consistency check: If zero, ensures start_ctx.start_contract_state_tree_root matches the default zero hash for the given contract_state_tree_height. If non-zero, ensures it matches start_ctx.start_contract_state_tree_root.
      • new_value matches end_ctx.end_contract_state_tree_root.
    • Debt Tree Pivots: Verifies deferred/inline_tx_debt_pivot_proof:
      • historical_root matches the corrected previous step debt root (from previous_step_header or corrections).
      • current_root matches the end_ctx.end_deferred/inline_tx_debt_tree_root.
    • Start State Consistency: Connects start_ctx fields (balance, event index, debt roots) to the corresponding fields in the (potentially corrected) previous_step_header.
    • Counters & Stack: Increments tx_count. Pushes hash of TransactionLogStackItemGadget onto tx_hash_stack using SimpleHashStackGadget.
    • Construct New Header: Creates the output UserProvingSessionHeaderGadget with updated roots, counts, stack, and user leaf values, with balance/event index nominally taken from end_ctx. Note: balance/event updates are currently disabled via constraints.
  • Assumptions: Assumes witness proofs and contexts are valid initially. Assumes previous_step_header and corrections are correctly provided.
  • Role: The core state transition engine for UPS. It cryptographically enforces that the claimed effects of a CFC execution (start/end states) are correctly reflected in the updates to the user's persistent state trees (UCON, debt trees) and session metadata (tx count/stack).
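The UCON old-value rule above is worth spelling out, since it is what handles first-touch initialization. A plain-Rust sketch (hypothetical names; `0` stands in for the zero hash):

```rust
/// If the user's UCON slot for this contract is empty (old_value == 0), the
/// transaction must claim it started from the empty CSTATE tree of the given
/// height; otherwise the stored root must equal the claimed starting root.
pub fn ucon_old_value_ok(
    old_value: u64,
    start_contract_state_tree_root: u64,
    empty_root_for_height: u64,
) -> bool {
    if old_value == 0 {
        start_contract_state_tree_root == empty_root_for_height
    } else {
        old_value == start_contract_state_tree_root
    }
}
```

In the circuit this branch is expressed as a selected equality constraint rather than control flow, but the enforced relation is the same.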

UPSVerifyCFCStandardStepGadget

  • File: ups_cfc_standard.rs
  • Purpose: Encapsulates a complete, standard transaction processing step within UPS. It combines the verification of the CFC proof's existence and validity with the calculation and verification of the resulting state delta.
  • Technical Function: Orchestrates UPSVerifyCFCProofExistsAndValidGadget and UPSCFCStandardStateDeltaGadget, connecting their inputs and outputs to ensure consistency.
  • Inputs/Witness:
    • previous_step_header_gadget: Header state before this step.
    • current_proof_tree_root: Root of the UPS proof tree this step belongs to.
    • ups_session_proof_tree_height: Parameter.
    • UPSVerifyCFCStandardStepInput: Witness containing inputs for both sub-gadgets.
  • Outputs/Computed:
    • new_header_gadget: The header state after this transaction step.
  • Constraints:
    • Instantiates the verification and state delta sub-gadgets.
    • verify_cfc_exists_and_valid_gadget.attested_proof_tree_root == current_proof_tree_root.
    • verify_cfc_exists_and_valid_gadget.checkpoint_leaf_hash == previous_step_header_gadget.session_start_context.checkpoint_leaf_hash.
    • Connects all key metadata (contract_id, method_id, num_inputs/outputs, inner_public_inputs_hash) between the verification and state delta gadgets, ensuring they operate on the exact same verified transaction.
  • Assumptions: Relies on sub-gadget assumptions. Assumes current_proof_tree_root is correct.
  • Role: Defines a standard, verifiable "block" within the user's local recursive proof chain, corresponding to processing one smart contract call.

UPSVerifyPopDeferredTxStepGadget

  • File: ups_cfc_standard_pop_deferred_tx.rs
  • Purpose: Handles transactions specifically designed to settle a previously incurred deferred transaction debt. It verifies the debt removal and then processes the corresponding CFC execution.
  • Technical Function: Verifies a delta proof for removing an item from the deferred debt tree, checks its consistency with the CFC being executed, and then uses UPSVerifyCFCStandardStepGadget with a corrected starting context.
  • Inputs/Witness:
    • previous_step_header_gadget, current_proof_tree_root, ups_session_proof_tree_height: Same as standard step.
    • UPSVerifyPopDeferredTxStepInput: Witness containing standard step inputs plus ups_pop_deferred_tx_proof (a DeltaMerkleProofCore for the deferred debt tree).
  • Outputs/Computed: Exposes outputs from the internal standard_cfc_verify_gadget, notably the new_header_gadget.
  • Constraints:
    • Verifies ups_pop_deferred_tx_proof using DeltaMerkleProofGadget.
    • ups_pop_deferred_tx_proof.old_root == previous_step_header.current_state.deferred_tx_debt_tree_root.
    • ups_pop_deferred_tx_proof.new_value == ZERO_HASH (proves removal).
    • Computes expected hash of the deferred item (DeferredTransactionStackItemGadget) based on the CFC's call_data.
    • ups_pop_deferred_tx_proof.old_value == computed deferred item hash (ensures correct debt removed).
    • Instantiates CorrectUPSHeaderHashesGadget, setting previous_step_deferred_tx_debt_tree_root = ups_pop_deferred_tx_proof.new_root.
    • Instantiates UPSVerifyCFCStandardStepGadget using the corrections.
  • Assumptions: Relies on sub-gadget assumptions. Assumes witness delta proof is valid initially.
  • Role: Enables verifiable settlement of asynchronous obligations (deferred calls) generated by previous transactions within the UPS flow.
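The debt-settlement checks reduce to three comparisons, sketched here in plain Rust (illustrative names; `0` stands in for ZERO_HASH, and `expected_item_hash` for the hash of the DeferredTransactionStackItem recomputed from the CFC's call_data):

```rust
/// Illustrative shape of the deferred-debt delta proof.
#[derive(Clone, Copy)]
pub struct PopProof {
    pub old_root: u64,
    pub new_root: u64,
    pub old_value: u64,
    pub new_value: u64,
}

/// Mirrors the gadget's assertions: the pop starts from the current deferred
/// debt root, clears the slot, and removes exactly the debt item implied by
/// the CFC being executed.
pub fn pop_deferred_ok(
    pop: &PopProof,
    current_deferred_debt_root: u64,
    expected_item_hash: u64,
) -> bool {
    pop.old_root == current_deferred_debt_root
        && pop.new_value == 0
        && pop.old_value == expected_item_hash
}
```

After these checks pass, `pop.new_root` becomes the corrected starting deferred debt root for the standard step that follows.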

PsyUserProvingSessionSignatureDataCompactGadget

  • File: ups_signature_data.rs
  • Purpose: Defines the precise data structure that is cryptographically signed by the user to authorize the submission of their completed User Proving Session.
  • Technical Function: Aggregates key state identifiers from the start and end of the UPS into a single, hashable structure, then combines it with context (network, user, nonce) for signing.
  • Inputs/Witness: start_user_leaf_hash, end_user_leaf_hash, checkpoint_leaf_hash, tx_stack_hash, tx_count.
  • Outputs/Computed: ups_end_cap_sighash (via get_sig_action_with_user_info).
  • Constraints: Internal hashing logic (to_hash method) combines inputs. get_sig_action_with_user_info uses compute_sig_action_hash_circuit to combine the data hash with network_magic, user_id, nonce, and the PSY_SIG_ACTION_SIGN_UPS_END_CAP constant.
  • Assumptions: Assumes input hash/target values correctly represent the final UPS state.
  • Role: Standardizes the payload for UPS authorization, ensuring all critical state transition elements are committed to before the user signs off.
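The shape of the sighash computation can be sketched with a toy two-to-one hash. Everything here is a stand-in: `h2` is not the circuit hash, the action-code constant is a placeholder, and the exact combination order inside the real to_hash/compute_sig_action_hash_circuit may differ. The point is that the signature pre-image commits to the full session summary plus network, user, nonce, and action context.

```rust
// Toy two-to-one hash standing in for the circuit hash; not cryptographic.
fn h2(a: u64, b: u64) -> u64 {
    a.wrapping_mul(0x9E37_79B9_7F4A_7C15).rotate_left(17)
        ^ b.wrapping_mul(0xC2B2_AE3D_27D4_EB4F)
}

// Placeholder for PSY_SIG_ACTION_SIGN_UPS_END_CAP; the real value differs.
const SIGN_UPS_END_CAP: u64 = 0xE2DC;

pub struct SigDataCompact {
    pub start_user_leaf_hash: u64,
    pub end_user_leaf_hash: u64,
    pub checkpoint_leaf_hash: u64,
    pub tx_stack_hash: u64,
    pub tx_count: u64,
}

/// Commit to the session summary, then bind it to network, user, nonce, and
/// the action code so the signature authorizes exactly this end cap.
pub fn end_cap_sighash(d: &SigDataCompact, network_magic: u64, user_id: u64, nonce: u64) -> u64 {
    let leaves = h2(d.start_user_leaf_hash, d.end_user_leaf_hash);
    let txs = h2(d.tx_stack_hash, d.tx_count);
    let data_hash = h2(h2(d.checkpoint_leaf_hash, leaves), txs);
    h2(h2(network_magic, user_id), h2(nonce, h2(SIGN_UPS_END_CAP, data_hash)))
}
```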

UPSEndCapResultCompactGadget

  • File: ups_end_cap_result.rs
  • Purpose: Defines the minimal, verifiable summary of a completed UPS, intended for submission to the GUTA layer.
  • Technical Function: A data structure containing the essential start/end state identifiers needed for aggregation.
  • Inputs/Witness: start_user_leaf_hash, end_user_leaf_hash, checkpoint_tree_root_hash, user_id.
  • Outputs/Computed: end_cap_result_hash (to_hash method).
  • Constraints: Hashing logic combines inputs with the GLOBAL_USER_TREE_HEIGHT constant.
  • Assumptions: Assumes input hash/target values correctly represent the final UPS state and context.
  • Role: Creates the standardized output data that represents the user's net state change for the block in a format suitable for GUTA circuits.

UPSEndCapCoreGadget

  • File: ups_end_cap.rs
  • Purpose: Enforces the final set of critical constraints required to validly conclude a User Proving Session, linking the final state to the user's signature authorization.
  • Technical Function: Verifies nonce progression, public key consistency, signature data correctness, checkpoint progression, empty debt trees, and computes final outputs (Result and Stats).
  • Inputs/Witness:
    • last_header_gadget: Final header state of the UPS.
    • sig_proof_public_inputs_hash: Public inputs hash from the user's signature proof.
    • sig_proof_fingerprint, sig_proof_param_hash: Signature circuit identifier hashes.
    • nonce, slots_modified: Witness targets.
    • network_magic, empty debt root constants: Parameters.
  • Outputs/Computed: sig_data_compact_gadget, end_cap_result_gadget, guta_stats.
  • Constraints:
    • Nonce Check: nonce > start_user_leaf.nonce. Updates final leaf nonce.
    • PK Check: Derives expected PK from sig proof params, ensures start/end user leaves have this same PK.
    • User ID Check: Start/End user IDs match.
    • Sig Data Check: Instantiates PsyUserProvingSessionSignatureDataCompactGadget, computes expected sig_proof_public_inputs_hash, connects it to the input hash from the sig proof.
    • Checkpoint Check: Ensures end_user_leaf.last_checkpoint_id equals the session's checkpoint_id, and that this checkpoint_id is strictly greater than start_user_leaf.last_checkpoint_id.
    • Debt Check: Connects last_header.current_state debt roots to empty root constants.
    • Output Generation: Instantiates UPSEndCapResultCompactGadget and GUTAStatsGadget.
  • Assumptions: Assumes input last_header_gadget is correct (verified by previous step). Assumes signature proof is valid (verified elsewhere). Assumes witness nonce/slots/params are correct.
  • Role: The final gatekeeper for UPS validity, ensuring all protocol rules for session finalization are met and linking the state transition to cryptographic authorization before generating the outputs for network submission.
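Apart from the signature and proof verification handled by surrounding gadgets, the finalization rules reduce to a handful of comparisons, sketched here in plain Rust with illustrative names:

```rust
/// Illustrative bundle of the values the end-cap checks compare.
#[derive(Clone, Copy)]
pub struct EndCapChecks {
    pub nonce: u64,
    pub start_nonce: u64,
    pub session_checkpoint_id: u64,
    pub start_last_checkpoint_id: u64,
    pub end_last_checkpoint_id: u64,
    pub deferred_debt_root: u64,
    pub inline_debt_root: u64,
}

/// Nonce must strictly advance, the session checkpoint must be newer than the
/// user's last-seen checkpoint and recorded in the final leaf, and both debt
/// trees must end empty.
pub fn end_cap_ok(c: &EndCapChecks, empty_deferred_root: u64, empty_inline_root: u64) -> bool {
    c.nonce > c.start_nonce
        && c.end_last_checkpoint_id == c.session_checkpoint_id
        && c.session_checkpoint_id > c.start_last_checkpoint_id
        && c.deferred_debt_root == empty_deferred_root
        && c.inline_debt_root == empty_inline_root
}
```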

VerifyPreviousUPSStepProofInProofTreeGadget

  • File: verify_previous_ups_step.rs
  • Purpose: Essential gadget for recursion within UPS. It verifies the ZK proof generated by the immediately preceding UPS step, ensuring the chain of proofs is unbroken and follows protocol rules.
  • Technical Function: Verifies a proof (AttestTreeAwareProofInTreeGadget), checks its circuit fingerprint against a whitelist (MerkleProofGadget), and confirms its public inputs match the expected previous header state hash.
  • Inputs/Witness:
    • VerifyPreviousUPSStepProofInProofTreeInput: Contains attestation witness, previous_step_header witness, and whitelist Merkle proof witness.
    • Tree height parameters.
  • Outputs/Computed: previous_step_header_gadget (representing verified public inputs), current_proof_tree_root, ups_step_circuit_whitelist_root.
  • Constraints:
    • Verifies proof attestation.
    • Verifies whitelist proof.
    • Connects attestation fingerprint to whitelist proof value.
    • Connects previous_step_header.ups_step_circuit_whitelist_root to whitelist proof root.
    • Computes hash of previous_step_header witness.
    • Connects computed hash to proof_attestation_gadget.inner_public_inputs_hash.
  • Assumptions: Assumes witness data (proofs, header, whitelist proof) is valid initially.
  • Role: Enforces the integrity of the recursive proof chain generated locally by the user, step by step. Ensures only allowed UPS circuits are used.
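The whitelist check is an ordinary Merkle membership proof over circuit fingerprints. A toy sketch (the `h2` combiner is a stand-in for the circuit hash; the low bit of the index at each level selects left/right child ordering):

```rust
// Toy two-to-one combiner standing in for the circuit hash; not cryptographic.
fn h2(a: u64, b: u64) -> u64 {
    a.wrapping_mul(0x9E37_79B9_7F4A_7C15).rotate_left(17)
        ^ b.wrapping_mul(0xC2B2_AE3D_27D4_EB4F)
}

/// Walks from the fingerprint leaf up to the claimed whitelist root; accepts
/// iff the recomputed root matches.
pub fn whitelist_contains(fingerprint: u64, index: u64, siblings: &[u64], root: u64) -> bool {
    let mut acc = fingerprint;
    let mut idx = index;
    for &sib in siblings {
        acc = if idx & 1 == 0 { h2(acc, sib) } else { h2(sib, acc) };
        idx >>= 1;
    }
    acc == root
}
```

In the gadget, the proven fingerprint is additionally connected to the attestation's fingerprint, and the whitelist root to previous_step_header.ups_step_circuit_whitelist_root, so only whitelisted step circuits can extend the chain.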

VerifyPreviousUPSStepProofInProofTreePartialFromCurrentGadget

  • File: verify_previous_ups_step_partial_from_current.rs
  • Purpose: An optimized version of the previous gadget, used when the session_start_context is constant within the verifying circuit (like the End Cap circuit). Reduces witness size.
  • Technical Function: Similar to the full gadget, but reconstructs the previous_step_header_gadget internally using the known session_start_context (from current_header input) and only requiring the previous_step_state portion as witness.
  • Inputs/Witness:
    • current_header: Input gadget (not witness).
    • VerifyPreviousUPSStepProofInProofTreePartialInput: Contains attestation witness, previous_step_state witness, whitelist proof witness.
  • Outputs/Computed: Same as the full gadget.
  • Constraints: Similar to full gadget, plus connects current_header.ups_step_circuit_whitelist_root to the whitelist proof root. Reconstructs previous header internally before hashing and connecting to attestation inner public inputs.
  • Assumptions: Assumes input current_header is correct. Assumes witness data is valid initially.
  • Role: Witness optimization for recursive proof verification in specific circuit contexts.

UPSEndCapFromProofTreeGadget

  • File: ups_end_cap_tree.rs
  • Purpose: Top-level gadget within the UPSStandardEndCapCircuit. Orchestrates the verification of the final UPS step, verification of the ZK signature, and enforcement of final session constraints.
  • Technical Function: Instantiates and connects VerifyPreviousUPSStepProofInProofTreeGadget (often partial), AttestProofInTreeGadget (for signature), and UPSEndCapCoreGadget.
  • Inputs/Witness:
    • UPSEndCapFromProofTreeGadgetInput: Contains witnesses for previous step verification, signature verification, user_public_key_param, nonce, slots_modified.
    • Tree height and network parameters.
  • Outputs/Computed: end_cap_core_gadget, current_proof_tree_root.
  • Constraints:
    • Instantiates sub-gadgets.
    • Connects verify_zk_signature_proof_gadget.attested_proof_tree_root to verify_previous_ups_step_gadget.current_proof_tree_root (ensures signature and last step are in the same proof tree).
    • Passes verified data (previous header, signature hashes) and witnesses (nonce, slots, params) into UPSEndCapCoreGadget for final checks.
  • Assumptions: Relies on sub-gadget assumptions. Assumes witness data is valid initially.
  • Role: Coordinates the final verification checks within the End Cap circuit, ensuring the entire session is valid, linked, and authorized.

UPSStartStepGadget

  • File: ups_start.rs
  • Purpose: Core logic for the UPSStartSessionCircuit. Verifies the user's provided initial state against the last finalized block's checkpoint data and initializes the session header.
  • Technical Function: Verifies Merkle proofs linking the checkpoint leaf to the checkpoint root and the user leaf to the user tree root within that checkpoint. Ensures header consistency and correct initialization of session state (debt trees, counters).
  • Inputs/Witness: UPSStartStepInput (header witness, checkpoint leaf/roots witness, checkpoint proof, user proof).
  • Outputs/Computed: header_gadget (the validated starting header).
  • Constraints:
    • Verifies consistency between checkpoint_tree_proof and header_gadget.session_start_context (root, leaf hash, ID).
    • Verifies consistency between checkpoint_leaf_gadget, state_roots_gadget, and header data.
    • Verifies user_tree_proof.root matches state_roots_gadget.user_tree_root.
    • Verifies user_tree_proof.value matches header_gadget.session_start_context.start_session_user_leaf_hash.
    • Verifies user_tree_proof.index matches user_id.
    • Verifies current_state initialization (updated last_checkpoint_id, empty debts, zero counts/stack).
  • Assumptions: Assumes witness data is valid initially. Assumes empty tree root constants are correct.
  • Role: Securely bootstraps the UPS, ensuring it starts from a globally valid and consistent state anchor.
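The user-anchoring checks amount to tying the witnessed user leaf to the checkpoint's user tree. An illustrative plain-Rust sketch (the real gadget additionally verifies the Merkle paths themselves, so `root` here is already bound to the proof):

```rust
/// Illustrative shape of the verified user tree membership proof.
pub struct UserTreeProof {
    pub index: u64,
    pub value: u64,
    pub root: u64,
}

/// The user proof must live in the checkpoint's user tree, its leaf must be
/// the claimed session-start leaf, and its position must be the user's ID.
pub fn start_anchor_ok(
    proof: &UserTreeProof,
    user_tree_root: u64,
    start_session_user_leaf_hash: u64,
    user_id: u64,
) -> bool {
    proof.root == user_tree_root
        && proof.value == start_session_user_leaf_hash
        && proof.index == user_id
}
```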

This document details the various gadgets used within the Psy ZK circuits, primarily focusing on the User Proving Session (UPS) and Global User Tree Aggregation (GUTA) systems. Gadgets are reusable circuit components that enforce specific constraints and logic.

UPS Gadgets (User Proving Session)

These gadgets are components used within the circuits that users run locally to prove their sequence of transactions.

CorrectUPSHeaderHashesGadget

  • File: correct_header_hashes.rs
  • Purpose: To hold potentially modified hash roots from a previous UPS step header. This is specifically used when a transaction needs to alter the starting state assumptions for debt trees (e.g., paying back a deferred transaction before processing the current transaction's main logic).
  • Key Inputs/Witness: Takes a UserProvingSessionHeaderGadget representing the actual previous step.
  • Key Outputs/Computed Values:
    • previous_step_deferred_tx_debt_tree_root: The deferred debt root to assume as the starting point for the next step's logic.
    • previous_step_inline_tx_debt_tree_root: The inline debt root to assume as the starting point for the next step's logic.
  • Core Logic/Constraints: Primarily a data structure. Its values are used by other gadgets (like UPSCFCStandardStateDeltaGadget) to override the default assumption that the starting debt roots are identical to the previous step's ending debt roots.
  • Assumptions: Assumes the input previous_step header gadget is correctly formed (though its values might be overridden).
  • Role: Enables flexible handling of transaction debt repayment within the UPS flow, allowing debts to be settled before the main state delta logic runs, without breaking the constraint flow.

UPSVerifyCFCProofExistsAndValidGadget

  • File: ups_cfc_verify_inclusion.rs
  • Purpose: To verify that a specific Contract Function Call (CFC) proof exists within the user's current UPS proof tree and that the function being called is validly registered within the global contract structure.
  • Key Inputs/Witness:
    • AttestTreeAwareProofInTreeInput: Witness for the CFC proof's existence and validity within the UPS proof tree.
    • PsyCheckpointLeafCompactWithStateRoots: Witness for the relevant checkpoint state.
    • PsyContractFunctionInclusionProof: Witness proving the function's fingerprint exists in the contract's function tree.
    • ups_session_proof_tree_height: Parameter.
  • Key Outputs/Computed Values:
    • attested_proof_tree_root: The root of the UPS proof tree the CFC proof is part of.
    • checkpoint_leaf_hash: Hash of the checkpoint leaf used for context.
    • cfc_fingerprint: The fingerprint (verifier data hash) of the CFC proof being verified.
    • cfc_inner_public_inputs_hash: The hash of the CFC proof's internal public inputs (before wrapping with tree awareness).
    • cfc_contract_id, cfc_method_id, cfc_num_inputs, cfc_num_outputs: Metadata about the contract function extracted from the inclusion proof.
  • Core Logic/Constraints:
    • Verifies the CFC proof using AttestTreeAwareProofInTreeGadget.
    • Verifies the function's inclusion in the contract tree using PsyContractFunctionInclusionProofGadget.
    • Connects the contract tree root from the inclusion proof to the one in the checkpoint state.
    • Connects the CFC fingerprint from the proof attestation to the one in the inclusion proof.
  • Assumptions: Assumes the witness data provided (proofs, checkpoint state) is valid for the constraints to pass.
  • Role: Ensures that the CFC proof being processed corresponds to a valid, registered function and exists within the user's current session proof tree. Links the specific CFC execution to the global contract state via the checkpoint.

UPSCFCStandardStateDeltaGadget

  • File: ups_standard_cfc_state_delta.rs
  • Purpose: Calculates the state changes to a user's UPS header based on executing a standard CFC transaction. It updates the user's contract state tree root, transaction debt trees, transaction count, and transaction hash stack.
  • Key Inputs/Witness:
    • previous_step_header_gadget: The header state before this transaction.
    • corrections: Optional CorrectUPSHeaderHashesGadget to override starting debt tree roots.
    • contract_state_tree_height: The height of the specific CSTATE tree being modified.
    • UPSCFCStandardStateDeltaInput: Witness containing the transaction input context (start/end states) and proofs for tree updates (user contract tree delta, debt tree pivots).
  • Key Outputs/Computed Values:
    • new_header_gadget: The UserProvingSessionHeaderGadget representing the state after this transaction.
    • cfc_inner_public_inputs_hash: Hash of the CFC transaction input context (used for connecting to verification gadgets).
    • cfc_contract_id, cfc_method_id, cfc_num_inputs, cfc_num_outputs: Metadata passed through.
  • Core Logic/Constraints:
    • Computes the expected cfc_inner_public_inputs_hash from the transaction context witness.
    • Verifies the user_contract_tree_update_proof (DeltaMerkleProofGadget):
      • Checks old_root matches the user_state_tree_root in the previous_step_header_gadget.
      • Checks index matches the tx_in_contract_id.
      • Checks old_value consistency (handles initialization from zero hash to default root based on height).
      • Checks new_value matches the tx_in_end_contract_state_tree_root.
    • Verifies deferred_tx_debt_pivot_proof (HistoricalRootMerkleProofGadget):
      • Checks historical_root matches the corrected previous_step_deferred_tx_debt_tree_root.
      • Checks current_root matches tx_in_end_deferred_tx_debt_tree_root.
    • Verifies inline_tx_debt_pivot_proof similarly.
    • Checks consistency between previous header state (balance, event index) and transaction start context.
    • Updates transaction count (new_tx_count = old_tx_count + 1).
    • Pushes the transaction log item onto the tx_hash_stack.
    • Constructs the new_header_gadget with updated values.
  • Assumptions: Assumes witness proofs and context are valid. Assumes the previous_step_header_gadget and corrections are correctly provided.
  • Role: The core engine for calculating user state updates resulting from a standard contract call within the UPS.
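The tx_hash_stack push follows the usual hash-chain pattern: each new stack hash commits to the previous stack hash and the new log item, so the final hash fixes the entire ordered transaction history. A toy sketch (`h2` is a stand-in for the circuit hash, and `push_tx_log` a hypothetical name for the stack gadget's push step):

```rust
// Toy two-to-one hash standing in for the circuit hash; not cryptographic.
fn h2(a: u64, b: u64) -> u64 {
    a.wrapping_mul(0x9E37_79B9_7F4A_7C15).rotate_left(17)
        ^ b.wrapping_mul(0xC2B2_AE3D_27D4_EB4F)
}

/// One push onto the transaction log stack: chain the item into the running
/// stack hash. Replaying the pushes in order reproduces the final hash.
pub fn push_tx_log(stack_hash: u64, item_hash: u64) -> u64 {
    h2(stack_hash, item_hash)
}
```

Because the chain is order-sensitive, a verifier holding the final stack hash and tx_count can check a claimed transaction log simply by replaying the pushes.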

UPSVerifyCFCStandardStepGadget

  • File: ups_cfc_standard.rs
  • Purpose: Combines the verification of a CFC proof's existence/validity (UPSVerifyCFCProofExistsAndValidGadget) with the calculation of its resulting state changes (UPSCFCStandardStateDeltaGadget). It acts as the main gadget for processing a standard transaction step within a UPS circuit.
  • Key Inputs/Witness:
    • previous_step_header_gadget: Header state before this step.
    • current_proof_tree_root: The root of the UPS proof tree this step belongs to.
    • ups_session_proof_tree_height: Parameter.
    • UPSVerifyCFCStandardStepInput: Witness containing inputs for both sub-gadgets.
  • Key Outputs/Computed Values:
    • new_header_gadget: The UserProvingSessionHeaderGadget representing the state after this transaction step.
    • (Internal gadgets expose their outputs too).
  • Core Logic/Constraints:
    • Instantiates UPSVerifyCFCProofExistsAndValidGadget and UPSCFCStandardStateDeltaGadget.
    • Connects current_proof_tree_root to the attested_proof_tree_root of the verification gadget.
    • Connects the checkpoint_leaf_hash between the verification gadget and the previous_step_header_gadget.
    • Connects key metadata (cfc_contract_id, cfc_method_id, cfc_num_inputs, cfc_num_outputs, cfc_inner_public_inputs_hash) between the verification gadget and the state delta gadget to ensure they operate on the same transaction.
  • Assumptions: Relies on the assumptions of its constituent gadgets. Assumes current_proof_tree_root is correctly provided.
  • Role: Represents a complete, verifiable step for processing one standard CFC transaction within the user's local proving session.

UPSVerifyPopDeferredTxStepGadget

  • File: ups_cfc_standard_pop_deferred_tx.rs
  • Purpose: Processes a transaction that pays back a previously incurred deferred transaction debt. It verifies the removal of the debt item from the deferred debt tree and then processes the CFC transaction itself (using the standard step gadget but with a corrected starting deferred debt state).
  • Key Inputs/Witness:
    • previous_step_header_gadget: Header state before this step.
    • current_proof_tree_root: The root of the UPS proof tree this step belongs to.
    • ups_session_proof_tree_height: Parameter.
    • UPSVerifyPopDeferredTxStepInput: Witness containing inputs for the standard CFC step and the DeltaMerkleProof for removing the deferred transaction.
  • Key Outputs/Computed Values:
    • (Exposes outputs from UPSVerifyCFCStandardStepGadget, notably the new_header_gadget).
  • Core Logic/Constraints:
    • Instantiates DeltaMerkleProofGadget (ups_pop_deferred_tx_proof) for the deferred debt tree.
    • Connects ups_pop_deferred_tx_proof.old_root to the deferred_tx_debt_tree_root in previous_step_header_gadget.
    • Asserts ups_pop_deferred_tx_proof.new_value is the zero hash (proving removal).
    • Computes the hash of the expected deferred transaction item based on the call data from the CFC being processed.
    • Asserts ups_pop_deferred_tx_proof.old_value (the item removed) matches the computed hash.
    • Creates a CorrectUPSHeaderHashesGadget, setting previous_step_deferred_tx_debt_tree_root to ups_pop_deferred_tx_proof.new_root.
    • Instantiates UPSVerifyCFCStandardStepGadget using the corrected header hashes.
  • Assumptions: Relies on assumptions of sub-gadgets. Assumes witness data (proofs, context) is valid.
  • Role: Enables verifiable settlement of deferred transaction debts within the UPS, ensuring the debt removed matches the transaction being executed.

PsyUserProvingSessionSignatureDataCompactGadget

  • File: ups_signature_data.rs
  • Purpose: Defines the data structure that gets signed by the user's private key (or equivalent signature scheme) to authorize the end of a User Proving Session.
  • Key Inputs/Witness:
    • start_user_leaf_hash: Hash of the user leaf at the start of the session.
    • end_user_leaf_hash: Hash of the user leaf at the end of the session.
    • checkpoint_leaf_hash: Hash of the checkpoint leaf the session was based on.
    • tx_stack_hash: Final hash of the transaction log stack.
    • tx_count: Total number of transactions processed.
  • Key Outputs/Computed Values:
    • ups_end_cap_sighash: The final hash (part of the signature pre-image) computed from the input data combined with network magic, user ID, and nonce.
  • Core Logic/Constraints:
    • Computes an intermediate hash combining start/end user leaves.
    • Computes an intermediate hash combining tx stack and count.
    • Computes an intermediate hash combining checkpoint leaf and user leaf combo.
    • Computes the final data hash by combining the state context and tx combo hashes.
    • Uses compute_sig_action_hash_circuit to combine this data hash with network magic, user ID, nonce, and the specific signature action code (PSY_SIG_ACTION_SIGN_UPS_END_CAP) to produce the final ups_end_cap_sighash.
  • Assumptions: Assumes input hashes and targets are correctly provided.
  • Role: Standardizes the data payload for UPS end cap signatures, ensuring all necessary state transition information is committed to before signing.

UPSEndCapResultCompactGadget

  • File: ups_end_cap_result.rs
  • Purpose: Defines the compact data structure representing the result of a completed UPS, which is submitted to the GUTA layer for aggregation.
  • Key Inputs/Witness:
    • start_user_leaf_hash: Hash of the user leaf at the start of the session.
    • end_user_leaf_hash: Hash of the user leaf at the end of the session.
    • checkpoint_tree_root_hash: Root of the checkpoint tree the session was based on.
    • user_id: The ID of the user.
  • Key Outputs/Computed Values:
    • end_cap_result_hash: The hash representing this result structure.
  • Core Logic/Constraints:
    • Computes an intermediate hash combining user ID, start/end user leaves, and the global user tree height constant.
    • Computes the final end_cap_result_hash by hashing the intermediate hash with the checkpoint_tree_root_hash.
  • Assumptions: Assumes input hashes and targets are correctly provided.
  • Role: Creates the standardized, hashable output data for a completed UPS, suitable for inclusion as a public input in the end cap proof and for verification by GUTA circuits.

UPSEndCapCoreGadget

  • File: ups_end_cap.rs
  • Purpose: Enforces the core constraints for finalizing a User Proving Session (End Cap). It connects the final UPS state to the signature proof and prepares the final output data (result and stats).
  • Key Inputs/Witness:
    • last_header_gadget: The header state at the end of the UPS (after the last transaction).
    • sig_proof_public_inputs_hash: Public inputs hash from the user's signature proof.
    • sig_proof_fingerprint, sig_proof_param_hash: Hashes related to the signature circuit's verifier data and parameters (used to derive the expected public key).
    • nonce: The nonce used in the signature, provided as witness.
    • slots_modified: Total storage slots modified, provided as witness.
    • network_magic, empty_deferred_tx_debt_tree_root, empty_inline_tx_debt_tree_root: Constants/parameters.
  • Key Outputs/Computed Values:
    • sig_data_compact_gadget: The computed signature data payload.
    • end_cap_result_gadget: The computed end cap result structure.
    • guta_stats: The computed GUTA statistics for this session.
  • Core Logic/Constraints:
    • Checks nonce progression: Ensures the final nonce is greater than the starting nonce and updates the user leaf nonce.
    • Verifies public key consistency:
      • Derives the expected_public_key from the signature proof fingerprint/params.
      • Asserts the start and end user leaves have the same public key.
      • Asserts this public key matches the expected_public_key.
    • Verifies user ID consistency.
    • Instantiates PsyUserProvingSessionSignatureDataCompactGadget using data from the last_header_gadget.
    • Computes the expected ups_end_cap_sighash using the signature data gadget, network magic, user ID, and nonce.
    • Computes the expected sig_proof_public_inputs_hash by hashing the ups_end_cap_sighash with the sig_proof_param_hash.
    • Asserts the computed sig_proof_public_inputs_hash matches the one provided as input.
    • Instantiates UPSEndCapResultCompactGadget with final state data.
    • Checks checkpoint ID progression: Ensures the final user leaf's last_checkpoint_id matches the session's checkpoint ID and is greater than the starting leaf's last_checkpoint_id.
    • Asserts final debt trees are empty by comparing their roots in last_header_gadget to the provided empty tree root constants.
    • Instantiates GUTAStatsGadget using the final transaction count and provided slots_modified.
  • Assumptions: Assumes input hashes, targets, and the last_header_gadget are correct. Assumes the signature proof itself is valid; verifying it is left to the wrapping gadget (e.g. UPSEndCapFromProofTreeGadget, which verifies it via AttestProofInTreeGadget).
  • Role: The central gadget for validating the conditions required to end a UPS, linking the final state to the signature authorization and generating the outputs needed for GUTA.
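
The consistency checks this gadget enforces can be modeled as plain runtime validation. This is a sketch only: in-circuit these are equality and comparison constraints on targets, and the leaf field names here are illustrative.

```rust
// Hypothetical user-leaf shape; the real PsyUserLeafGadget holds hash targets.
struct UserLeaf {
    user_id: u64,
    public_key: u64,
    nonce: u64,
    last_checkpoint_id: u64,
}

/// Models the End Cap core checks: nonce progression, public key and user ID
/// consistency, and checkpoint ID progression.
fn check_end_cap(
    start: &UserLeaf,
    end: &UserLeaf,
    expected_public_key: u64, // derived from the signature proof fingerprint/params
    session_checkpoint_id: u64,
) -> Result<(), &'static str> {
    // Nonce progression: final nonce strictly greater than starting nonce.
    if end.nonce <= start.nonce {
        return Err("nonce did not progress");
    }
    // Public key consistency across the session and with the signature circuit.
    if start.public_key != end.public_key || end.public_key != expected_public_key {
        return Err("public key mismatch");
    }
    // User ID consistency.
    if start.user_id != end.user_id {
        return Err("user id mismatch");
    }
    // Checkpoint ID progression: final leaf pinned to the session checkpoint,
    // strictly after the starting leaf's checkpoint.
    if end.last_checkpoint_id != session_checkpoint_id
        || end.last_checkpoint_id <= start.last_checkpoint_id
    {
        return Err("checkpoint id did not progress");
    }
    Ok(())
}
```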

VerifyPreviousUPSStepProofInProofTreeGadget

  • File: verify_previous_ups_step.rs
  • Purpose: Verifies a ZK proof corresponding to the previous step in the User Proving Session's recursive chain. It ensures the proof is valid, exists in the expected UPS proof tree, used an allowed UPS circuit, and matches the expected previous header state.
  • Key Inputs/Witness:
    • ups_session_proof_tree_height, ups_circuit_whitelist_tree_height: Parameters.
    • VerifyPreviousUPSStepProofInProofTreeInput: Witness containing:
      • AttestTreeAwareProofInTreeInput: Witness for the previous step's proof attestation.
      • UserProvingSessionHeader: Witness for the header state that should be the public input of the previous proof.
      • MerkleProof: Witness proving the previous proof's circuit fingerprint is in the UPS circuit whitelist.
  • Key Outputs/Computed Values:
    • proof_attestation_gadget: The underlying attestation gadget.
    • previous_step_header_gadget: The header gadget representing the public inputs of the verified proof.
    • ups_circuit_whitelist_merkle_proof: The whitelist proof gadget.
    • current_proof_tree_root: The root of the UPS proof tree identified by the attestation proof.
    • ups_step_circuit_whitelist_root: The root of the UPS circuit whitelist tree.
  • Core Logic/Constraints:
    • Instantiates AttestTreeAwareProofInTreeGadget to verify the previous proof's existence in the tree.
    • Instantiates UserProvingSessionHeaderGadget for the previous step's public inputs (header).
    • Instantiates MerkleProofGadget for the circuit whitelist check.
    • Connects the fingerprint from the proof attestation to the value in the whitelist proof.
    • Connects the ups_step_circuit_whitelist_root from the previous header gadget to the root of the whitelist proof gadget.
    • Computes the expected hash of the previous_step_header_gadget.
    • Connects this expected hash to the inner_public_inputs_hash from the proof attestation gadget.
  • Assumptions: Assumes witness data (proofs, headers) is valid for constraints to pass.
  • Role: Crucial for the recursive nature of UPS. Each step verifies the previous step's proof, ensuring the integrity of the entire transaction chain generated locally by the user. Links the execution to the allowed set of UPS circuits.
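
The whitelist check at the heart of this gadget is a standard Merkle-membership argument: the previous proof's circuit fingerprint, hashed up the path with the witnessed siblings, must reproduce the ups_step_circuit_whitelist_root committed in the header. A minimal out-of-circuit sketch (with `h2` standing in for the circuit hash) follows; in the real gadget this is MerkleProofGadget plus connect constraints, not runtime recomputation.

```rust
// Stand-in two-to-one hash for the illustration.
fn h2(a: u64, b: u64) -> u64 {
    a.wrapping_mul(0x100000001b3).rotate_left(13) ^ b
}

/// Recomputes a Merkle root from a leaf value, its index, and sibling hashes.
fn merkle_root(mut value: u64, mut index: u64, siblings: &[u64]) -> u64 {
    for &sib in siblings {
        value = if index & 1 == 0 { h2(value, sib) } else { h2(sib, value) };
        index >>= 1;
    }
    value
}

/// Models the whitelist constraint: fingerprint must sit in the whitelist tree.
fn fingerprint_whitelisted(
    fingerprint: u64,
    index: u64,
    siblings: &[u64],
    whitelist_root: u64,
) -> bool {
    merkle_root(fingerprint, index, siblings) == whitelist_root
}
```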

VerifyPreviousUPSStepProofInProofTreePartialFromCurrentGadget

  • File: verify_previous_ups_step_partial_from_current.rs
  • Purpose: Similar to the full verification gadget, but optimized for cases where the current header's session_start_context is already known and fixed within the circuit. It only needs the previous step's state (UserProvingSessionCurrentStateGadget) as witness, reconstructing the full previous header internally.
  • Key Inputs/Witness:
    • current_header: The current step's header gadget (provided as input, not witness).
    • ups_session_proof_tree_height, ups_circuit_whitelist_tree_height: Parameters.
    • VerifyPreviousUPSStepProofInProofTreePartialInput: Witness containing:
      • AttestTreeAwareProofInTreeInput: Witness for the previous proof attestation.
      • UserProvingSessionCurrentState: Witness for the state portion of the previous header.
      • MerkleProof: Witness for the UPS circuit whitelist proof.
  • Key Outputs/Computed Values: Same as VerifyPreviousUPSStepProofInProofTreeGadget.
  • Core Logic/Constraints:
    • Similar logic to the full version, but it constructs the previous_step_header_gadget by combining the known session_start_context from the current_header with the previous_step_state provided as witness.
    • Enforces the same connections for fingerprint, whitelist root, and inner public inputs hash.
    • Adds a constraint connecting the ups_step_circuit_whitelist_root from the current header to the whitelist proof root.
  • Assumptions: Assumes the provided current_header gadget is correct. Assumes witness data is valid.
  • Role: An optimization for verifying previous steps within circuits where the session start context is constant (like the End Cap circuit). Reduces witness size.

UPSEndCapFromProofTreeGadget

  • File: ups_end_cap_tree.rs
  • Purpose: Orchestrates the entire End Cap process within a ZK circuit. It verifies the last UPS step proof, verifies the user's ZK signature proof, and enforces the final End Cap constraints.
  • Key Inputs/Witness:
    • ups_session_proof_tree_height, ups_circuit_whitelist_tree_height: Parameters.
    • network_magic: Parameter.
    • UPSEndCapFromProofTreeGadgetInput: Witness containing:
      • VerifyPreviousUPSStepProofInProofTreeInput: Witness for verifying the last UPS step.
      • AttestProofInTreeInput: Witness for verifying the ZK signature proof.
      • user_public_key_param: Hash representing signature parameters.
      • nonce: The nonce used for signing.
      • slots_modified: Total storage slots modified.
  • Key Outputs/Computed Values:
    • end_cap_core_gadget: The core gadget performing final checks.
    • current_proof_tree_root: The root of the UPS proof tree.
  • Core Logic/Constraints:
    • Instantiates VerifyPreviousUPSStepProofInProofTreeGadget to verify the last UPS step.
    • Instantiates AttestProofInTreeGadget to verify the ZK signature proof.
    • Connects the attested_proof_tree_root from the signature proof verification to the current_proof_tree_root from the previous step verification (ensuring both proofs are in the same tree).
    • Defines empty debt tree root constants.
    • Instantiates UPSEndCapCoreGadget, passing outputs from the verification gadgets (previous_step_header_gadget, signature proof hashes) and witness values (nonce, slots_modified, user_public_key_param) along with constants.
  • Assumptions: Relies on assumptions of sub-gadgets. Assumes witness data is valid.
  • Role: The top-level gadget within the End Cap circuit, ensuring the final UPS state is valid, linked to the previous step, and properly authorized by a valid ZK signature, all within the same consistent UPS proof tree.

UPSStartStepGadget

  • File: ups_start.rs
  • Purpose: Initializes a User Proving Session. It takes the user's initial state (from the last finalized block's checkpoint) and sets up the starting header for the UPS circuit chain.
  • Key Inputs/Witness:
    • UPSStartStepInput: Witness containing:
      • UserProvingSessionHeader: The expected starting header.
      • PsyCheckpointLeaf: Witness for the checkpoint leaf data.
      • PsyCheckpointGlobalStateRoots: Witness for the global state roots within the checkpoint.
      • MerkleProof: Proof linking the checkpoint leaf to the checkpoint tree root.
      • MerkleProof: Proof linking the user's starting leaf hash to the global user tree root.
  • Key Outputs/Computed Values:
    • header_gadget: The validated starting header gadget.
  • Core Logic/Constraints:
    • Constraint Set 1 (Start Session Context):
      • Verifies checkpoint_tree_proof consistency with header_gadget.session_start_context (root, leaf hash, ID).
      • Verifies consistency between checkpoint_leaf_gadget, state_roots_gadget, and the header_gadget (checkpoint leaf hash, global chain root).
      • Verifies user_tree_proof root matches the user_tree_root in state_roots_gadget.
      • Verifies user_tree_proof value matches header_gadget.session_start_context.start_session_user_leaf_hash.
      • Verifies user_tree_proof index matches the user_id in the header's start leaf.
    • Constraint Set 2 (Current State Initialization):
      • Computes the hash of the expected current user leaf (start leaf with last_checkpoint_id updated to the session's checkpoint ID).
      • Asserts this computed hash matches the hash of the header_gadget.current_state.user_leaf.
      • Asserts the deferred_tx_debt_tree_root and inline_tx_debt_tree_root in header_gadget.current_state match the known empty tree root constants.
      • Asserts tx_hash_stack is zero hash and tx_count is zero.
  • Assumptions: Assumes the witness data (header, proofs, leaves, roots) is valid for constraints to pass. Assumes the empty tree root constants are correct.
  • Role: Securely bootstraps the User Proving Session, anchoring the initial state to a verified checkpoint and user leaf from the last finalized block and ensuring the session starts with clean debt trees and transaction counts.
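
Constraint Set 2 amounts to a "clean start" predicate over the session's current state. The sketch below models it at runtime; the empty-debt-tree constants are illustrative placeholders (in the circuit they are specific precomputed empty-tree roots, not zero), and field names are assumed.

```rust
// Illustrative placeholders; the real constants are precomputed empty-tree roots.
const EMPTY_DEFERRED_DEBT_ROOT: u64 = 0;
const EMPTY_INLINE_DEBT_ROOT: u64 = 0;

#[derive(Clone, Copy)]
struct CurrentState {
    user_leaf_checkpoint_id: u64,
    deferred_tx_debt_tree_root: u64,
    inline_tx_debt_tree_root: u64,
    tx_hash_stack: u64,
    tx_count: u64,
}

/// Models the start-state checks: leaf pinned to the session checkpoint,
/// empty debt trees, and an empty transaction log.
fn valid_start_state(state: &CurrentState, session_checkpoint_id: u64) -> bool {
    state.user_leaf_checkpoint_id == session_checkpoint_id
        && state.deferred_tx_debt_tree_root == EMPTY_DEFERRED_DEBT_ROOT
        && state.inline_tx_debt_tree_root == EMPTY_INLINE_DEBT_ROOT
        && state.tx_hash_stack == 0
        && state.tx_count == 0
}
```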

GUTA Gadgets (Global User Tree Aggregation)

These gadgets are components used within the circuits run by the decentralized proving network to aggregate proofs from multiple users into a single block proof.

GUTAStatsGadget

  • File: guta_stats.rs
  • Purpose: Represents and processes statistics aggregated during the GUTA process.
  • Key Inputs/Witness: fees_collected, user_ops_processed, total_transactions, slots_modified.
  • Key Outputs/Computed Values: Can compute a hash of the stats.
  • Core Logic/Constraints: Primarily a data structure. Its combine_with method sums the stats of two gadgets (used during aggregation), and its to_hash method packs the stats into a HashOutTarget.
  • Assumptions: Assumes input target values are correct.
  • Role: Tracks key metrics about the aggregated user operations within a GUTA proof branch.
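
An out-of-circuit sketch of the stats structure and its combine_with behavior, using the input fields listed above (in-circuit these are field-element additions over targets, and overflow handling is not modeled here):

```rust
#[derive(Clone, Copy, PartialEq, Debug)]
struct GutaStats {
    fees_collected: u64,
    user_ops_processed: u64,
    total_transactions: u64,
    slots_modified: u64,
}

impl GutaStats {
    /// Models combine_with: per-branch counters are summed during aggregation.
    fn combine_with(self, other: Self) -> Self {
        Self {
            fees_collected: self.fees_collected + other.fees_collected,
            user_ops_processed: self.user_ops_processed + other.user_ops_processed,
            total_transactions: self.total_transactions + other.total_transactions,
            slots_modified: self.slots_modified + other.slots_modified,
        }
    }
}
```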

GlobalUserTreeAggregatorHeaderGadget

  • File: guta_header.rs
  • Purpose: Represents the public inputs (header) of a GUTA proof. It encapsulates the essential information about the aggregation step.
  • Key Inputs/Witness:
    • guta_circuit_whitelist: Root hash of the allowed GUTA circuits.
    • checkpoint_tree_root: Root hash of the checkpoint tree relevant to this aggregation step.
    • state_transition: A SubTreeNodeStateTransitionGadget representing the change in the Global User Tree (GUSR) this proof covers.
    • stats: A GUTAStatsGadget containing aggregated statistics.
  • Key Outputs/Computed Values: Computes the hash of the entire header.
  • Core Logic/Constraints: Data structure. to_hash computes the final header hash by combining hashes of state_transition, stats, checkpoint_tree_root, and guta_circuit_whitelist.
  • Assumptions: Assumes input targets/gadgets are correctly formed.
  • Role: Defines the standard public interface for all GUTA circuits, ensuring consistent information is passed and verified during recursive aggregation.

VerifyEndCapProofGadget

  • File: verify_end_cap.rs
  • Purpose: Verifies a user's End Cap proof within the GUTA aggregation process. It checks the proof's validity, ensures it used the correct End Cap circuit, verifies the user's claimed checkpoint root is historical, and extracts the state transition and stats.
  • Key Inputs/Witness:
    • proof_common_data, verifier_data_cap_height: Parameters for proof verification.
    • known_end_cap_fingerprint_hash: Constant representing the hash of the official End Cap circuit's verifier data.
    • UPSEndCapResultCompact: Witness for the result claimed by the End Cap proof.
    • GUTAStats: Witness for the stats claimed by the End Cap proof.
    • MerkleProofCore: Witness for the historical checkpoint root proof.
    • ProofWithPublicInputs: The End Cap proof itself.
    • VerifierOnlyCircuitData: Verifier data matching the proof.
  • Key Outputs/Computed Values:
    • (Implements ToGUTAHeader) Outputs a GlobalUserTreeAggregatorHeaderGadget representing the state transition performed by this user.
  • Core Logic/Constraints:
    • Verifies the proof_target using the provided verifier_data.
    • Computes the proof_fingerprint from the verifier_data.
    • Asserts proof_fingerprint matches known_end_cap_fingerprint_hash.
    • Instantiates UPSEndCapResultCompactGadget and GUTAStatsGadget from witness.
    • Computes the expected public inputs hash by hashing the result and stats gadgets.
    • Asserts the computed hash matches the hash derived from proof_target.public_inputs.
    • Verifies the checkpoint_historical_merkle_proof, connecting its historical_root to the checkpoint_tree_root_hash from the end cap result gadget.
    • Constructs the output GlobalUserTreeAggregatorHeaderGadget:
      • Uses a default/input guta_circuit_whitelist.
      • Uses the current_root from the historical proof as the checkpoint_tree_root.
      • Creates a SubTreeNodeStateTransitionGadget using start/end user leaf hashes and user ID from the result gadget, marking the level as GLOBAL_USER_TREE_HEIGHT.
      • Includes the verified guta_stats.
  • Assumptions: Assumes witness data (proofs, results, stats) is valid. Assumes known_end_cap_fingerprint_hash is correct.
  • Role: The entry point for incorporating a user's completed UPS into the GUTA aggregation tree. Verifies the user's session proof and translates its result into the standard GUTA header format for further aggregation.

VerifyGUTAProofGadget

  • File: verify_guta_proof.rs
  • Purpose: Verifies a GUTA proof generated by a lower level in the aggregation hierarchy. Checks the proof validity, ensures it used an allowed GUTA circuit, and extracts its header.
  • Key Inputs/Witness:
    • proof_common_data, verifier_data_cap_height: Parameters.
    • GlobalUserTreeAggregatorHeader: Witness for the header claimed by the proof being verified.
    • MerkleProof: Witness proving the sub-proof's circuit fingerprint is in the GUTA circuit whitelist.
    • ProofWithPublicInputs: The GUTA proof itself.
    • VerifierOnlyCircuitData: Verifier data for the proof.
  • Key Outputs/Computed Values:
    • guta_proof_header_gadget: The verified header gadget of the sub-proof.
  • Core Logic/Constraints:
    • Verifies the proof_target using the provided verifier_data.
    • Computes the proof_fingerprint from the verifier_data.
    • Verifies the guta_whitelist_merkle_proof.
    • Connects the guta_proof_header_gadget.guta_circuit_whitelist to the guta_whitelist_merkle_proof.root.
    • Computes the expected public inputs hash from the guta_proof_header_gadget.
    • Asserts this matches the hash derived from proof_target.public_inputs.
    • Asserts the guta_whitelist_merkle_proof.value matches the computed proof_fingerprint.
  • Assumptions: Assumes witness data is valid.
  • Role: The core recursive verification step within GUTA. Allows aggregation circuits to securely incorporate results from lower-level GUTA proofs.

TwoNCAStateTransitionGadget

  • File: two_nca_state_transition.rs
  • Purpose: Combines the state transitions from two child GUTA proofs (a_header, b_header) that modify different parts of the Global User Tree. It uses a Nearest Common Ancestor (NCA) proof to compute the resulting state transition at their common parent node in the tree.
  • Key Inputs/Witness:
    • a_header, b_header: The headers of the two child GUTA proofs.
    • UpdateNearestCommonAncestorProof: Witness containing the NCA proof data.
  • Key Outputs/Computed Values:
    • new_guta_header: The combined GUTA header representing the transition at the NCA.
  • Core Logic/Constraints:
    • Instantiates UpdateNearestCommonAncestorProofOptGadget.
    • Connects a_header.checkpoint_tree_root to b_header.checkpoint_tree_root.
    • Connects a_header.guta_circuit_whitelist to b_header.guta_circuit_whitelist.
    • Connects a_header.state_transition fields (old/new value, index, level) to the child_a fields in the NCA proof gadget.
    • Connects b_header.state_transition fields similarly to child_b.
    • Combines stats: new_stats = a_header.stats.combine_with(b_header.stats).
    • Constructs new_guta_header:
      • Uses whitelist/checkpoint root from children (they must match).
      • Creates state_transition using the old/new_nearest_common_ancestor_value, index, and level from the NCA proof gadget.
      • Includes the new_stats.
  • Assumptions: Assumes input headers are valid (verified previously). Assumes the NCA proof witness is valid. Assumes children operate on the same checkpoint and whitelist.
  • Role: A fundamental building block for parallel aggregation. Allows merging results from independent branches of the GUTA proof tree efficiently using NCA proofs.
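
The merge step can be sketched as follows. This is an out-of-circuit model: the NCA transition is passed in as a pre-computed value, whereas in the circuit it is produced and constrained by UpdateNearestCommonAncestorProofOptGadget, and the header/stats shapes here are simplified stand-ins.

```rust
#[derive(Clone, Copy, PartialEq, Debug)]
struct StateTransition {
    old_value: u64,
    new_value: u64,
    index: u64,
    level: u64,
}

#[derive(Clone, Copy, PartialEq, Debug)]
struct GutaHeader {
    whitelist: u64,
    checkpoint_root: u64,
    transition: StateTransition,
    total_transactions: u64, // stands in for the full stats struct
}

/// Models merging two child headers at their nearest common ancestor.
fn merge_at_nca(
    a: &GutaHeader,
    b: &GutaHeader,
    nca: StateTransition, // transition at the NCA, from the NCA proof
) -> Result<GutaHeader, &'static str> {
    // Children must operate on the same checkpoint and circuit whitelist.
    if a.checkpoint_root != b.checkpoint_root {
        return Err("checkpoint mismatch");
    }
    if a.whitelist != b.whitelist {
        return Err("whitelist mismatch");
    }
    Ok(GutaHeader {
        whitelist: a.whitelist,
        checkpoint_root: a.checkpoint_root,
        // The combined header carries the NCA-level transition.
        transition: nca,
        // Stats from both branches are summed.
        total_transactions: a.total_transactions + b.total_transactions,
    })
}
```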

GUTAHeaderLineProofGadget

  • File: guta_line.rs
  • Purpose: Aggregates a GUTA proof state transition upwards along a direct line towards the root of the Global User Tree realm. Used when a node only has one child contributing to the update in that part of the tree.
  • Key Inputs/Witness:
    • global_user_tree_realm_height, global_user_tree_height: Parameters.
    • child_proof_header: The header of the single child GUTA proof.
    • siblings: Witness containing the Merkle sibling hashes needed to recompute the root along the path.
  • Key Outputs/Computed Values:
    • new_guta_header: The GUTA header representing the state transition at the top of the line (realm root).
  • Core Logic/Constraints:
    • Instantiates SubTreeNodeTopLineGadget, providing the child's state transition. This gadget internally performs the Merkle path hashing using the provided siblings.
    • Constructs new_guta_header:
      • Copies whitelist/checkpoint root/stats from the child.
      • Uses the new_state_transition computed by the SubTreeNodeTopLineGadget.
  • Assumptions: Assumes child_proof_header is valid. Assumes sibling hashes in the witness are correct.
  • Role: Efficiently propagates a state change up the GUTA tree when no merging (NCA) is required. Used to bring proofs to a common level before NCA or to reach the final realm root.
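
The key property of a line proof is that one sibling list lifts both the old and new node values along the same Merkle path, yielding the state transition at the top of the line. A sketch, with `h2` standing in for the circuit hash and the path logic mirroring what SubTreeNodeTopLineGadget does in-circuit:

```rust
// Stand-in two-to-one hash for the illustration.
fn h2(a: u64, b: u64) -> u64 {
    a.wrapping_mul(0x100000001b3).rotate_left(13) ^ b
}

/// Lifts an (old, new) node pair up `siblings.len()` levels along one path.
/// Returns (old_top, new_top, top_index).
fn lift_transition(
    mut old_value: u64,
    mut new_value: u64,
    mut index: u64,
    siblings: &[u64],
) -> (u64, u64, u64) {
    for &sib in siblings {
        // The same sibling hashes apply to both the old and new values,
        // so a single witness covers the whole transition.
        (old_value, new_value) = if index & 1 == 0 {
            (h2(old_value, sib), h2(new_value, sib))
        } else {
            (h2(sib, old_value), h2(sib, new_value))
        };
        index >>= 1;
    }
    (old_value, new_value, index)
}
```

If the old and new values are equal at the bottom, they stay equal at the top: a no-op transition remains a no-op after lifting.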

VerifyGUTAProofToLineGadget

  • File: verify_guta_proof_to_line.rs
  • Purpose: Combines verifying a lower-level GUTA proof with aggregating its state transition upwards using a line proof.
  • Key Inputs/Witness:
    • proof_common_data, verifier_data_cap_height: Parameters.
    • global_user_tree_realm_height, global_user_tree_height: Parameters.
    • MerkleProofCore: GUTA whitelist proof witness.
    • GlobalUserTreeAggregatorHeader: Child proof header witness.
    • ProofWithPublicInputs, VerifierOnlyCircuitData: Child GUTA proof and verifier data witness.
    • top_line_siblings: Sibling hashes for the line proof witness.
  • Key Outputs/Computed Values:
    • new_guta_header: The final GUTA header at the top of the line.
  • Core Logic/Constraints:
    • Instantiates VerifyGUTAProofGadget to verify the child proof.
    • Instantiates GUTAHeaderLineProofGadget, using the verified header from the first gadget as input.
  • Assumptions: Relies on assumptions of sub-gadgets. Assumes witness data is valid.
  • Role: A common pattern in GUTA aggregation: verify a child proof and immediately propagate its result upwards via a line proof.

GUTARegisterUserCoreGadget

  • File: guta_register_user_core.rs
  • Purpose: Handles the core logic for registering a single new user in the Global User Tree (GUSR). It verifies the update proof that inserts the new user leaf.
  • Key Inputs/Witness:
    • global_user_tree_realm_height, global_user_tree_height: Parameters.
    • default_user_state_tree_root: Constant.
    • input_height_target: Optional target for variable height proof.
    • public_key: The public key hash for the new user (can be witness or input).
    • DeltaMerkleProofCore: Witness for the GUSR tree update.
  • Key Outputs/Computed Values:
    • user_id: The ID (index) of the newly registered user.
    • user_leaf_hash: The hash of the newly created user leaf.
    • state_transition: Represents the GUTA state transition for this single registration.
  • Core Logic/Constraints:
    • Instantiates VariableHeightDeltaMerkleProofOptGadget for the GUSR update.
    • Asserts the old_value in the proof is the zero hash (ensuring it's an insertion into an empty slot).
    • Creates the default PsyUserLeafGadget using the proof's index (user ID), the public_key, and default_user_state_tree_root.
    • Computes the user_leaf_hash.
    • Asserts the new_value in the proof matches the computed user_leaf_hash.
    • Calculates the state_transition based on the delta proof's old/new roots, height, and computed parent index.
  • Assumptions: Assumes witness proof and public key (if witness) are valid. Assumes default_user_state_tree_root is correct.
  • Role: The lowest-level gadget for handling user registration state changes in GUTA.
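
The registration constraints reduce to an insert-into-empty-slot check: the delta proof's old value must be the zero hash, and its new value must be the hash of a default user leaf built from the user ID, public key, and default state-tree root. A sketch (both `h2` and the leaf layout are assumed stand-ins, not the real Psy leaf encoding):

```rust
// Stand-in two-to-one hash for the illustration.
fn h2(a: u64, b: u64) -> u64 {
    a.wrapping_mul(0x9e3779b97f4a7c15).rotate_left(11) ^ b
}

/// Models the core registration checks over a delta Merkle proof's values.
fn check_registration(
    old_value: u64, // leaf value before the GUSR update
    new_value: u64, // leaf value after the GUSR update
    user_id: u64,   // index of the delta proof
    public_key: u64,
    default_user_state_tree_root: u64,
) -> Result<(), &'static str> {
    // Insertion must target an empty slot.
    if old_value != 0 {
        return Err("slot not empty");
    }
    // The new value must be the default user leaf for this ID and key
    // (hypothetical leaf layout).
    let leaf_hash = h2(h2(user_id, public_key), default_user_state_tree_root);
    if new_value != leaf_hash {
        return Err("new value is not the default user leaf");
    }
    Ok(())
}
```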

GUTARegisterUserFullGadget

  • File: guta_register_user_full.rs
  • Purpose: Extends the core registration by adding verification against a user registration tree. This tree (presumably managed off-chain or via a separate mechanism) maps user IDs to public keys. This gadget ensures the public key used for registration matches the one committed to in the registration tree.
  • Key Inputs/Witness:
    • (Inherits inputs from Core gadget).
    • MerkleProofCore: Witness proving the public_key exists at the correct index (user ID) in the user_registration_tree.
  • Key Outputs/Computed Values:
    • (Inherits outputs from Core gadget).
    • user_registration_tree_root: The root of the user registration tree.
  • Core Logic/Constraints:
    • Instantiates MerkleProofGadget for the user registration tree.
    • Maps the proof's index bits to an expected user ID.
    • Asserts the value (public key) from the registration tree proof is non-zero.
    • Instantiates GUTARegisterUserCoreGadget, passing the value from the registration proof as the public_key.
    • Asserts the user_id from the Core gadget matches the expected_user_id derived from the registration proof index.
  • Assumptions: Relies on Core gadget assumptions. Assumes the user registration tree proof witness is valid.
  • Role: Adds a layer of validation, ensuring user registrations correspond to pre-committed public keys in a dedicated registration structure.

GUTARegisterUsersGadget

  • File: guta_register_users.rs
  • Purpose: Aggregates multiple user registration operations (using GUTARegisterUserFullGadget) sequentially within a single circuit. Handles padding/disabling for a fixed maximum number of users.
  • Key Inputs/Witness:
    • (Inherits inputs from Full gadget).
    • max_users: Parameter.
    • GUTARegisterUserFullInput[]: Array witness for each potential user registration (proofs).
    • register_user_count: Witness target indicating the actual number of users being registered (<= max_users).
  • Key Outputs/Computed Values:
    • state_transition: The aggregate state transition covering all registered users.
    • user_registration_tree_root: Root of the registration tree (taken from the first user, checked for consistency).
  • Core Logic/Constraints:
    • Instantiates max_users instances of GUTARegisterUserFullGadget.
    • Asserts register_user_count is non-zero.
    • Iterates from the second user onwards:
      • Compares loop index i with register_user_count to determine if the current user slot is_disabled.
      • If not disabled:
        • Connects the current user's old_global_user_tree_root to the previous user's new_global_user_tree_root.
        • Connects the current user's user_registration_tree_root to the root from the first user (ensuring consistency).
        • Connects proof heights.
        • Updates the aggregate state_transition.new_node_value to the current user's new_global_user_tree_root.
      • Selects the final new_node_value based on the last enabled user's output.
  • Assumptions: Relies on Full gadget assumptions. Assumes witness array and count are valid. Assumes dummy inputs are used correctly for padding.
  • Role: Allows batching multiple user registrations into a single GUTA proof step, improving aggregation efficiency.
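
The sequential chaining with padding can be sketched as below: only the first register_user_count slots are enabled, each enabled slot must continue from the previous slot's new root, and the aggregate transition runs from the first old root to the last enabled new root. In-circuit this is done with select/connect constraints over a fixed max_users array rather than early returns; the slot shape here is illustrative.

```rust
struct RegistrationSlot {
    old_global_user_tree_root: u64,
    new_global_user_tree_root: u64,
}

/// Models the batch chaining; returns the aggregate (old_root, new_root).
fn aggregate_registrations(
    slots: &[RegistrationSlot], // fixed-size array of max_users entries
    register_user_count: usize,
) -> Result<(u64, u64), &'static str> {
    // The gadget asserts the count is non-zero and it cannot exceed max_users.
    if register_user_count == 0 || register_user_count > slots.len() {
        return Err("invalid register_user_count");
    }
    // From the second user onwards, each enabled registration must start
    // from the previous registration's resulting root; padding slots past
    // the count are simply ignored.
    for i in 1..register_user_count {
        if slots[i].old_global_user_tree_root != slots[i - 1].new_global_user_tree_root {
            return Err("registration chain broken");
        }
    }
    Ok((
        slots[0].old_global_user_tree_root,
        slots[register_user_count - 1].new_global_user_tree_root,
    ))
}
```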

GUTAOnlyRegisterUsersGadget

  • File: guta_only_register_users_gadget.rs
  • Purpose: A specialized GUTA gadget that only performs user registration (using GUTARegisterUsersGadget) and assumes no other state changes (zero stats).
  • Key Inputs/Witness:
    • guta_circuit_whitelist, checkpoint_tree_root: Inputs (likely from a previous step or constant).
    • (Inherits inputs for GUTARegisterUsersGadget).
  • Key Outputs/Computed Values:
    • new_guta_header: The GUTA header representing only the registration state change.
  • Core Logic/Constraints:
    • Instantiates GUTARegisterUsersGadget.
    • Constructs new_guta_header:
      • Uses input guta_circuit_whitelist and checkpoint_tree_root.
      • Uses the state_transition from the GUTARegisterUsersGadget.
      • Creates a zeroed GUTAStatsGadget.
  • Assumptions: Relies on GUTARegisterUsersGadget assumptions. Assumes the provided whitelist/checkpoint roots are correct for this context.
  • Role: Provides a dedicated gadget for GUTA steps that solely involve registering new users.

GUTARegisterUsersBatchGadget

  • File: guta_register_users_batch.rs
  • Purpose: Combines verification of a previous GUTA proof (brought up to a certain tree level via a line proof) with a subsequent batch registration of new users.
  • Key Inputs/Witness:
    • (Inherits inputs for VerifyGUTAProofToLineGadget).
    • (Inherits inputs for GUTARegisterUsersGadget).
  • Key Outputs/Computed Values:
    • new_guta_header: The combined GUTA header.
  • Core Logic/Constraints:
    • Instantiates VerifyGUTAProofToLineGadget (verify_to_line_gadget).
    • Instantiates GUTARegisterUsersGadget (register_users_gadget).
    • Connects the state transitions:
      • line_state_transition.node_index == register_users_state_transition.node_index.
      • line_state_transition.node_level == register_users_state_transition.node_level.
      • line_state_transition.new_node_value == register_users_state_transition.old_node_value (ensures the registration starts from the state reached by the verified line proof).
    • Constructs new_guta_header:
      • Uses whitelist/checkpoint root/stats from the line proof header.
      • Creates combined state_transition: old_node_value from line, new_node_value from registration, index/level from line (must match registration).
  • Assumptions: Relies on assumptions of sub-gadgets. Assumes witness data is valid.
  • Role: Handles a common GUTA pattern: verifying a previous aggregation step and then applying a batch of user registrations originating from the state achieved by that previous step.

GUTANoChangeGadget

  • File: guta_no_change_gadget.rs
  • Purpose: Represents a GUTA step where the Global User Tree (GUSR) does not change. It primarily serves to advance the checkpoint_tree_root based on a new checkpoint.
  • Key Inputs/Witness:
    • guta_circuit_whitelist: Input constant/parameter.
    • checkpoint_tree_height: Parameter.
    • MerkleProofCore: Witness proving a checkpoint_leaf exists in the checkpoint_tree.
    • PsyCheckpointLeafCompactWithStateRoots: Witness for the checkpoint leaf data.
  • Key Outputs/Computed Values:
    • new_guta_header: GUTA header indicating no user tree change but potentially a new checkpoint root.
  • Core Logic/Constraints:
    • Verifies the checkpoint_tree_proof (append-only).
    • Verifies the hash of checkpoint_leaf_gadget matches the value in the proof.
    • Constructs new_guta_header:
      • Uses input guta_circuit_whitelist.
      • Uses the checkpoint_tree_proof.root as the checkpoint_tree_root.
      • Creates a "no-op" state_transition: old_node_value and new_node_value are both set to the user_tree_root from the checkpoint_leaf_gadget, with index/level zero.
      • Creates a zeroed GUTAStatsGadget.
  • Assumptions: Assumes witness proof and leaf data are valid. Assumes guta_circuit_whitelist is correct.
  • Role: Allows the GUTA aggregation to incorporate new checkpoints even if no user state changed during that period, keeping the aggregated checkpoint root up-to-date.

Circuit Definition Files (Higher Level)

These files define complete ZK circuits, orchestrating various gadgets to perform end-to-end tasks like starting a session, processing transactions, or finalizing a session.

UPSStartSessionCircuit

  • File: ups_start.rs (Circuit definition wrapping UPSStartStepGadget)
  • Purpose: The circuit executed to begin a User Proving Session.
  • Core Logic: Primarily uses the UPSStartStepGadget to verify the initial state against a checkpoint and set up the starting header. Computes the final public inputs hash by wrapping the inner header hash (start_step_gadget.header_gadget.to_hash()) with tree awareness information (using compute_tree_aware_proof_public_inputs), assuming the proof tree starts empty (empty_ups_proof_tree_root_target).
  • What it Proves: That a valid starting UserProvingSessionHeader has been constructed, correctly anchored to a specific, verified checkpoint and user leaf state from the last finalized block, and that the session starts with empty debt trees and zero transaction count.
  • Assumptions: Assumes the witness (UPSStartStepInput) containing the starting header, proofs, and state roots is valid before constraints are applied. Assumes the provided empty_ups_proof_tree_root constant is correct.
  • Role: Securely initializes the recursive proof chain for a user's local transactions.

UPSCFCStandardTransactionCircuit

  • File: ups_cfc_standard.rs (Circuit definition wrapping VerifyPreviousUPSStepProofInProofTreeGadget and UPSVerifyCFCStandardStepGadget)
  • Purpose: Processes a single, standard Contract Function Call (CFC) transaction within an ongoing User Proving Session.
  • Core Logic:
    • Uses VerifyPreviousUPSStepProofInProofTreeGadget to verify the proof of the previous UPS step.
    • Uses UPSVerifyCFCStandardStepGadget (which internally uses verification and state delta gadgets) to:
      • Verify the CFC proof exists in the current proof tree.
      • Verify the CFC function is valid.
      • Calculate the state changes based on the CFC execution.
    • Connects the current_proof_tree_root from the previous step verification to the standard step gadget.
    • Computes the final public inputs hash by wrapping the new header hash (standard_cfc_step_gadget.new_header_gadget.to_hash()) with tree awareness information (current_proof_tree_root).
  • What it Proves: That given a valid previous UPS step proof and header, executing the specified CFC transaction (verified to exist and be valid) correctly transitions the UPS state to the new header state.
  • Assumptions: Assumes the witness (UPSCFCStandardTransactionCircuitInput) containing the previous step proof details, the current transaction details (proofs, context), and circuit whitelist proofs is valid. Relies on the validity of the previous UPS step proof (which is verified internally).
  • Role: The workhorse circuit for processing most user transactions locally within the recursive UPS chain.

UPSCFCDeferredTransactionCircuit

  • File: ups_cfc_deferred_tx.rs (Circuit definition wrapping VerifyPreviousUPSStepProofInProofTreeGadget and UPSVerifyPopDeferredTxStepGadget)
  • Purpose: Processes a transaction that pays back a deferred transaction debt within an ongoing User Proving Session.
  • Core Logic:
    • Uses VerifyPreviousUPSStepProofInProofTreeGadget to verify the previous UPS step proof.
    • Uses UPSVerifyPopDeferredTxStepGadget to:
      • Verify the removal of the correct deferred transaction item from the debt tree.
      • Process the CFC itself using corrected starting state assumptions.
    • Connects the current_proof_tree_root appropriately.
    • Computes the final public inputs hash by wrapping the new header hash (deferred_tx_cfc_step_gadget...new_header_gadget.to_hash()) with tree awareness information.
  • What it Proves: That given a valid previous UPS step, the specified deferred transaction debt was correctly removed, and executing the corresponding CFC transaction correctly transitions the UPS state to the new header state.
  • Assumptions: Assumes the witness (UPSCFCDeferredTransactionCircuitInput) is valid. Relies on the validity of the previous UPS step proof.
  • Role: Enables the settlement of deferred transaction debts within the user's local proof chain.

UPSStandardEndCapCircuit

  • File: end_cap.rs (Circuit definition wrapping UPSEndCapFromProofTreeGadget and proof verification gadgets)
  • Purpose: The final circuit in a User Proving Session. It verifies the last UPS step, verifies the user's ZK signature proof, enforces final state conditions (e.g., empty debt trees), and generates the final public outputs (End Cap Result hash and GUTA Stats hash).
  • Core Logic:
    • Uses UPSEndCapFromProofTreeGadget which orchestrates:
      • Verification of the last UPS step proof (VerifyPreviousUPSStepProofInProofTreeGadget).
      • Verification of the ZK signature proof (AttestProofInTreeGadget).
      • Enforcement of final conditions via UPSEndCapCoreGadget.
    • end_cap.rs also includes VerifyAggProofGadget: verifies a proof about the UPS proof tree itself (e.g., ensuring the tree was built using allowed aggregation circuits). This connects the user's session to the rules of the global proof aggregation infrastructure.
    • Connects the proof tree root from the UPS gadget to the state transition end of the verified aggregation proof.
    • Connects the UPS circuit whitelist root from the UPS gadget to a known constant or input, ensuring the UPS steps used allowed circuits.
    • Connects the proof tree aggregation circuit whitelist root to a known constant (ensuring the tree proof used allowed circuits).
    • Computes the final public inputs hash by hashing the end_cap_result_gadget hash and the guta_stats hash.
  • What it Proves: That the entire User Proving Session was valid (by verifying the last step), the final state is consistent (empty debts, correct nonce/checkpoint progression), the session is authorized by a valid ZK signature corresponding to the user's key, the session used allowed UPS circuits, and the session's proof tree was built correctly using allowed aggregation circuits. It outputs the final state transition hash and stats hash.
  • Assumptions: Assumes the witness (UPSEndCapFromProofTreeGadgetInput, aggregation proof witnesses) is valid. Assumes the known whitelist root constants are correct.
  • Role: Securely concludes the user's local proving session, producing a verifiable proof and result ready for submission to the GUTA layer. Links the user's activity to global rules via whitelist checks.
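The final public-inputs computation described above (hashing the end-cap result hash with the GUTA stats hash) can be sketched as below. The two-to-one hash here is a stand-in using Rust's std hasher; the real circuits use a ZK-friendly hash over field elements, and the function names are illustrative.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Stand-in two-to-one hash; NOT the circuit's actual hash function.
fn h2(a: u64, b: u64) -> u64 {
    let mut s = DefaultHasher::new();
    (a, b).hash(&mut s);
    s.finish()
}

// Final public inputs hash = hash(end_cap_result_hash, guta_stats_hash).
fn end_cap_public_inputs_hash(end_cap_result_hash: u64, guta_stats_hash: u64) -> u64 {
    h2(end_cap_result_hash, guta_stats_hash)
}

fn main() {
    // Hypothetical digests of the end-cap result and GUTA stats gadgets.
    let pi = end_cap_public_inputs_hash(h2(1, 2), h2(3, 4));
    println!("{pi:x}");
}
```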


Circuits.md

This document describes the end-to-end flow of circuits involved in processing user transactions and aggregating them into a final block proof within the Psy system. It highlights the assumptions made at each stage and how they are progressively verified, ultimately enabling horizontal scalability.

Phase 1: User Proving Session (UPS) - Local Execution

This phase happens locally on the user's device (or via a delegated prover). The user builds a recursive chain of proofs for their transactions within a single block context.

1. UPSStartSessionCircuit

  • Purpose: Initializes the proving session for a user based on the last finalized blockchain state.
  • What it Proves:
    • The starting UserProvingSessionHeader is valid.
    • This header is correctly anchored to a specific checkpoint_leaf_hash which exists at checkpoint_id within the checkpoint_tree_root from the last finalized block.
    • The session_start_context within the header accurately reflects the user's state (start_session_user_leaf_hash, user_id, etc.) as found in the user_tree_root associated with the starting checkpoint.
    • The current_state within the starting header is correctly initialized (user leaf last_checkpoint_id updated, debt trees empty, tx count/stack zero).
  • Assumptions:
    • The witness data (UPSStartStepInput) provided by the user (fetching state from the last block) is correct initially. Constraints verify its consistency.
    • The constant empty_ups_proof_tree_root (representing the start of the recursive proof tree for this session) is correct.
  • How Assumptions are Discharged: Internal consistency checks verify the relationships between the provided header, checkpoint leaf, state roots, user leaf, and the Merkle proofs linking them. The assumption about the previous block's checkpoint_tree_root being correct is implicitly carried forward, as this circuit uses it as the basis for initialization.
  • Contribution to Horizontal Scalability: Establishes a user-specific, isolated starting point based on globally finalized state, allowing this session to proceed independently of other users' sessions within the same new block.
  • High-Level Functionality: Securely starts a user's transaction batch processing.
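The Merkle consistency checks this circuit relies on can be sketched generically: fold sibling hashes from the leaf up to the root and compare against the claimed root. This is a minimal illustration with a stand-in hash and `u64` values, not the circuit's actual proof gadget.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Stand-in two-to-one hash (the circuits use a ZK-friendly hash).
fn h2(a: u64, b: u64) -> u64 {
    let mut s = DefaultHasher::new();
    (a, b).hash(&mut s);
    s.finish()
}

// Verify a Merkle inclusion proof: each bit of `index` selects whether the
// current node is the left or right child at that level of the tree.
fn verify_merkle(leaf: u64, index: u64, siblings: &[u64], root: u64) -> bool {
    let mut cur = leaf;
    let mut idx = index;
    for &sib in siblings {
        cur = if idx & 1 == 0 { h2(cur, sib) } else { h2(sib, cur) };
        idx >>= 1;
    }
    cur == root
}

fn main() {
    // Tiny two-leaf tree: our leaf sits at index 1 (right child).
    let (leaf, sib) = (10, 20);
    let root = h2(sib, leaf);
    assert!(verify_merkle(leaf, 1, &[sib], root));
    assert!(!verify_merkle(leaf, 0, &[sib], root)); // wrong index fails
}
```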

2. UPSCFCStandardTransactionCircuit (Executed potentially multiple times)

  • Purpose: Processes a single standard transaction (contract call) within the user's ongoing session, extending the recursive proof chain.
  • What it Proves:
    • The ZK proof for the previous UPS step is valid.
    • The previous step's proof was generated by a circuit listed in the ups_circuit_whitelist_root specified in the previous step's header.
    • The public inputs (header hash) of the previous step's proof match the provided previous_step_header.
    • The ZK proof for the current Contract Function Call (CFC) exists within the current current_proof_tree_root.
    • This CFC proof corresponds to a function registered in the GCON tree (via checkpoint context).
    • Executing this CFC correctly transitions the state from the previous_step_header to the new_header_gadget state (updating CSTATE->UCON root, debt trees, tx count/stack).
  • Assumptions:
    • The witness data (UPSCFCStandardTransactionCircuitInput) for this specific step (CFC proof, state delta witnesses, previous step proof info) is correct initially.
    • The current_proof_tree_root provided matches the actual root of the recursive proof tree being built.
  • How Assumptions are Discharged:
    • Verifies the previous step's proof using VerifyPreviousUPSStepProofInProofTreeGadget. This discharges the assumption about the previous step's validity and its public inputs.
    • Verifies the CFC proof and its link to the contract state using UPSVerifyCFCStandardStepGadget.
    • Verifies the state delta logic, ensuring the transition is correct based on witness data.
    • The assumption about the current_proof_tree_root is passed implicitly to the next step or the End Cap circuit.
  • Contribution to Horizontal Scalability: The user processes their own transactions locally and serially, maintaining self-consistency without interacting with other users' activity in the current block.
  • High-Level Functionality: Securely executes and proves individual smart contract interactions locally.

3. UPSCFCDeferredTransactionCircuit (Executed if applicable)

  • Purpose: Processes a transaction that settles a deferred debt, then executes the main CFC logic.
  • What it Proves: Similar to the standard circuit, but additionally proves:
    • A specific deferred transaction item was removed from the deferred_tx_debt_tree.
    • The item removed corresponds exactly to the call data of the CFC being executed.
    • The subsequent CFC state transition starts from the state after the debt was removed.
  • Assumptions: Same as the standard circuit, plus assumes the witness for the deferred transaction removal proof (DeltaMerkleProofGadget) is correct initially.
  • How Assumptions are Discharged: Verifies previous step proof. Verifies the debt removal proof and its consistency with the CFC call data. Verifies the subsequent state delta.
  • Contribution to Horizontal Scalability: Same as standard transaction circuit (local serial execution).
  • High-Level Functionality: Enables settlement of asynchronous transaction debts within the local proving flow.

4. UPSStandardEndCapCircuit

  • Purpose: Finalizes the user's entire proving session for the block.
  • What it Proves:
    • The proof for the last UPS transaction step is valid and used an allowed circuit.
    • The ZK proof for the user's signature (authorizing the session) is valid and exists in the same UPS proof tree.
    • The signature corresponds to the user's registered public key (derived from signature proof parameters).
    • The signature payload (PsyUserProvingSessionSignatureDataCompact) correctly reflects the session's start/end user leaves, checkpoint, final tx stack, and tx count.
    • The nonce used in the signature is valid (incremented).
    • The final UPS state shows both deferred_tx_debt_tree_root and inline_tx_debt_tree_root are empty (all debts settled).
    • The last_checkpoint_id in the final user leaf matches the session's checkpoint_id and has progressed correctly.
    • (If aggregation proof verification included): The UPS proof tree itself was constructed using circuits from a known proof_tree_circuit_whitelist_root.
  • Assumptions:
    • Witness data (UPSEndCapFromProofTreeGadgetInput, potentially agg proof witness) is correct initially.
    • Known constants (network_magic, empty debt roots, known_ups_circuit_whitelist_root, known_proof_tree_circuit_whitelist_root) are correct.
  • How Assumptions are Discharged:
    • Verifies the last UPS step proof.
    • Verifies the ZK signature proof.
    • Connects signature data to the final UPS header state.
    • Checks nonce, checkpoint ID, empty debt trees.
    • Verifies proofs against whitelists using provided roots.
    • The output of this circuit (the End Cap proof) now implicitly carries the assumption that the starting checkpoint_tree_root (used in the UPSStartSessionCircuit) was correct. All internal UPS assumptions have been discharged.
  • Contribution to Horizontal Scalability: Creates a single, verifiable proof representing all of a user's activity for the block. This proof can now be processed in parallel with proofs from other users by the GUTA layer.
  • High-Level Functionality: Securely concludes a user's transaction batch, authorizes it, and packages it for network aggregation.

Phase 2: Global User Tree Aggregation (GUTA) - Parallel Network Execution

The Decentralized Proving Network (DPN) takes End Cap proofs (and potentially other GUTA proofs like user registrations) from many users and aggregates them in parallel. This involves specialized GUTA circuits.

(Note: The provided files focus heavily on UPS and GUTA gadgets. The exact structure of the GUTA circuits using these gadgets is inferred but follows standard recursive proof aggregation patterns.)

Example GUTA Circuits (Inferred):

5. GUTAProcessEndCapCircuit (Hypothetical)

  • Purpose: To take a user's validated UPSStandardEndCapCircuit proof and integrate its state change into the GUTA proof hierarchy.
  • Core Logic: Uses VerifyEndCapProofGadget.
  • What it Proves:
    • The End Cap proof is valid and used the correct circuit (known_end_cap_fingerprint_hash).
    • The checkpoint_tree_root claimed by the user in the End Cap result existed historically.
    • Outputs a standard GlobalUserTreeAggregatorHeader representing the user's GUSR tree state transition (start leaf hash -> end leaf hash at the user's ID index) and stats.
  • Assumptions:
    • Witness (End Cap proof, result, stats, historical proof) is correct initially.
    • The known_end_cap_fingerprint_hash constant is correct.
    • A default_guta_circuit_whitelist root is provided or known.
  • How Assumptions are Discharged: Verifies the End Cap proof and historical checkpoint proof. Packages the result into a standard GUTA header. The assumption about the default_guta_circuit_whitelist is passed upwards. The assumption about the current checkpoint_tree_root (from the historical proof) is passed upwards.
  • Contribution to Horizontal Scalability: Allows individual user session results to be verified independently and prepared for parallel aggregation.
  • High-Level Functionality: Validates and incorporates user end-of-session proofs into the global aggregation process.

6. GUTARegisterUserCircuit (Hypothetical)

  • Purpose: To process the registration of one or more new users.
  • Core Logic: Uses GUTAOnlyRegisterUsersGadget (which uses GUTARegisterUsersGadget, GUTARegisterUserFullGadget, GUTARegisterUserCoreGadget).
  • What it Proves:
    • For each registered user, their public_key was correctly inserted at their user_id index in the GUSR tree (transitioning from zero hash to the new user leaf hash).
    • The public_key used matches an entry in the user_registration_tree_root.
    • Outputs a GlobalUserTreeAggregatorHeader representing the aggregate GUSR state transition for all registered users, with zero stats.
  • Assumptions:
    • Witness (registration proofs, user count) is correct initially.
    • guta_circuit_whitelist and checkpoint_tree_root inputs are correct for this context.
    • default_user_state_tree_root constant is correct.
  • How Assumptions are Discharged: Verifies delta proofs for GUSR insertion and Merkle proofs against the registration tree. Outputs a standard GUTA header, passing assumptions about whitelist/checkpoint upwards.
  • Contribution to Horizontal Scalability: User registration can be batched and potentially processed in parallel branches of the GUTA tree.
  • High-Level Functionality: Securely adds new users to the system state.

7. GUTAAggregationCircuit (Hypothetical - Multiple Variants)

  • Purpose: To combine the results (headers) from two or more lower-level GUTA proofs (which could be End Cap results, registrations, or previous aggregations).
  • Core Logic:
    • Verifies each input GUTA proof using VerifyGUTAProofGadget.
    • Ensures all input proofs used circuits from the same guta_circuit_whitelist and reference the same checkpoint_tree_root.
    • Combines the state_transitions from the input proofs:
      • If transitions are on different branches, uses TwoNCAStateTransitionGadget with an NCA proof.
      • If transitions are on the same branch (e.g., one input is a line proof output), connects them directly (old_root of current matches new_root of previous).
      • If only one input, uses GUTAHeaderLineProofGadget to propagate upwards.
    • Combines the stats from input proofs using GUTAStatsGadget.combine_with.
    • Outputs a single GlobalUserTreeAggregatorHeader representing the combined state transition and stats.
  • What it Proves: That given valid input GUTA proofs operating under the same whitelist and checkpoint context, the combined state transition and stats represented by the output header are correct.
  • Assumptions:
    • Witness (input proofs, headers, NCA/sibling proofs) is correct initially.
  • How Assumptions are Discharged: Verifies input proofs and their headers. Verifies the logic of combining state transitions (NCA/Line/Direct). Passes the common whitelist/checkpoint root assumptions upwards.
  • Contribution to Horizontal Scalability: This is the core of parallel aggregation. Multiple instances of this circuit run concurrently across the DPN, merging proof branches in a tree structure (like MapReduce).
  • High-Level Functionality: Securely and recursively combines verified state changes from multiple sources into larger, aggregated proofs.
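The same-branch combination rule above (the right transition's old root must match the left transition's new root, with stats summed) can be sketched as follows. The types and field names are simplified stand-ins, not the actual GlobalUserTreeAggregatorHeader or GUTAStatsGadget definitions.

```rust
// Illustrative stand-ins; real headers carry field-element hashes and
// richer stats than a single counter.
#[derive(Clone, Copy, Debug, PartialEq)]
struct Stats { txs: u64 }

#[derive(Clone, Copy, Debug, PartialEq)]
struct GutaHeader { old_root: u64, new_root: u64, stats: Stats }

// Same-branch combination: the right transition must begin exactly where
// the left one ended; stats are summed (a simplified combine_with).
fn combine_same_branch(left: GutaHeader, right: GutaHeader) -> Option<GutaHeader> {
    if left.new_root != right.old_root {
        return None; // the two transitions do not chain
    }
    Some(GutaHeader {
        old_root: left.old_root,
        new_root: right.new_root,
        stats: Stats { txs: left.stats.txs + right.stats.txs },
    })
}

fn main() {
    let a = GutaHeader { old_root: 1, new_root: 2, stats: Stats { txs: 3 } };
    let b = GutaHeader { old_root: 2, new_root: 5, stats: Stats { txs: 4 } };
    let c = combine_same_branch(a, b).unwrap();
    assert_eq!((c.old_root, c.new_root, c.stats.txs), (1, 5, 7));
}
```

Transitions on different branches require the NCA proof path instead, which is not modeled here.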

8. GUTANoChangeCircuit (Hypothetical)

  • Purpose: To handle cases where no user state changed but the checkpoint advanced.
  • Core Logic: Uses GUTANoChangeGadget.
  • What it Proves: That given a new checkpoint_leaf verified to be in the checkpoint_tree_proof, the GUSR tree root remains unchanged, and stats are zero. Outputs a GUTA header reflecting this.
  • Assumptions: Witness (checkpoint proof, leaf) is correct initially. Input guta_circuit_whitelist is correct.
  • How Assumptions are Discharged: Verifies checkpoint proof. Outputs a standard GUTA header passing assumptions upward.
  • Contribution to Horizontal Scalability: Allows the aggregation process to stay synchronized with the checkpoint tree even during periods of inactivity for certain state trees.
  • High-Level Functionality: Advances the aggregated checkpoint state reference.

Phase 3: Final Block Proof

9. Checkpoint Tree "Block" Circuit (Top-Level Aggregation)

  • Purpose: The final aggregation circuit that combines proofs from the roots of all major state trees (like GUSR via the top-level GUTA proof, GCON, etc.) for the block.
  • Core Logic:
    • Verifies the top-level GUTA proof (and proofs for other top-level trees if applicable).
    • Takes the previous block's finalized CHKP root as a public input.
    • Constructs the new CHKP leaf based on the newly computed roots of GUSR, GCON, etc., and other block metadata.
    • Computes the new CHKP root.
    • The only external assumption verified here is that the input previous_block_chkp_root matches the actual finalized root of the last block.
  • What it Proves: That the entire state transition for the block, represented by the change from the previous_block_chkp_root to the new_chkp_root, is valid, having recursively verified all constituent user transactions and aggregations according to protocol rules and circuit whitelists.
  • Assumptions: The only remaining input assumption is the hash of the previous block's CHKP root.
  • How Assumptions are Discharged: All assumptions from lower levels (circuit whitelists, internal state consistencies) have been verified recursively. The final link to the previous block state is checked against the public input.
  • Contribution to Horizontal Scalability: Represents the culmination of the massively parallel aggregation process, producing a single, succinct proof for the entire block's validity.
  • High-Level Functionality: Creates the final, verifiable proof of state transition for the entire block, linking it cryptographically to the previous block. This proof can be efficiently verified by any node or light client.
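Since each block proof's only external assumption is the previous block's CHKP root, a verifier can check a chain of block proofs by linking each proof's claimed previous root to the root produced before it. A minimal sketch, with hypothetical names and `u64` stand-ins for the root hashes:

```rust
// Each block proof exposes the previous CHKP root it assumed and the new
// root it produced (names illustrative, roots modeled as u64).
struct BlockProofOutputs {
    prev_chkp_root: u64,
    new_chkp_root: u64,
}

// Chained verification: every proof must link to the root of the prior block.
fn verify_chain(genesis_root: u64, blocks: &[BlockProofOutputs]) -> Option<u64> {
    let mut cur = genesis_root;
    for b in blocks {
        if b.prev_chkp_root != cur {
            return None; // broken link to the prior block's state
        }
        cur = b.new_chkp_root;
    }
    Some(cur) // latest finalized CHKP root
}

fn main() {
    let chain = [
        BlockProofOutputs { prev_chkp_root: 0, new_chkp_root: 11 },
        BlockProofOutputs { prev_chkp_root: 11, new_chkp_root: 22 },
    ];
    assert_eq!(verify_chain(0, &chain), Some(22));
}
```

This omits the actual ZK proof verification at each link; it only illustrates how the recursive structure reduces per-block trust to a single prior root.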

Proving Jobs Architecture

Overview

This document describes the proving jobs architecture for both Realm and Coordinator processors, including the tree structure of different proof types and their public inputs layout.

Public Inputs Layout Standard

IMPORTANT: Different circuit types have different public inputs layouts!

Coordinator Main Circuits (19 inputs)

  • [0..4]: commitment
  • [4..8]: worker_public_key
  • [8..11]: pm_jobs_completed_stats (deploy_contracts_completed, register_users_completed, gutas_completed)
  • [11..15]: circuit_whitelist_root
  • [15..19]: state_transition_hash

GUTA Circuits (15 inputs)

  • [0..4]: commitment
  • [4..8]: worker_public_key
  • [8..11]: pm_jobs_completed_stats
  • [11..15]: guta_header_hash

Special: AggUserRegistrationDeployContractsGUTA (19 inputs)

  • [0..4]: state_transition_hash (NOT commitment!)
  • [4..8]: hash(user_registration_commitment, user_registration_worker_pk)
  • [8..12]: hash(deploy_contracts_commitment, deploy_contracts_worker_pk)
  • [12..16]: hash(guta_commitment, guta_worker_pk)
  • [16..19]: additional data

Realm Proving Jobs

User Operations Tree

graph TB
    subgraph "User Operations Leaves"
        UO1[UserOp 1<br/>Circuit: ProcessUserOp]
        UO2[UserOp 2<br/>Circuit: ProcessUserOp]
        UO3[UserOp 3<br/>Circuit: ProcessUserOp]
        UON[UserOp N<br/>Circuit: ProcessUserOp]
    end

    subgraph "Aggregation Layer"
        AGG1[Aggregate UserOps<br/>Circuit: AggregateUserOps]
        AGG2[Aggregate UserOps<br/>Circuit: AggregateUserOps]
    end

    subgraph "Root"
        ROOT[Realm State Transition<br/>Circuit: RealmStateTransition]
    end

    UO1 --> AGG1
    UO2 --> AGG1
    UO3 --> AGG2
    UON --> AGG2
    AGG1 --> ROOT
    AGG2 --> ROOT

Realm Circuit Details

  • ProcessUserOp (Leaf):
    • Public inputs: [0..4] commitment, [4..8] worker_public_key, [8..11] pm_jobs_completed_stats, [11..15] user_op_hash
    • Commitment: commitment = worker_public_key
  • AggregateUserOps (Intermediate):
    • Public inputs: [0..4] commitment, [4..8] worker_public_key, [8..11] pm_jobs_completed_stats, [11..15] agg_hash
    • Commitment: commitment = hash(hash(left.commitment, right.commitment), worker_public_key)
  • RealmStateTransition (Root):
    • Public inputs: [0..4] commitment, [4..8] worker_public_key, [8..11] pm_jobs_completed_stats, [11..15] state_transition_hash
    • Commitment: commitment = hash(hash(children), worker_public_key)

Coordinator Proving Jobs

Three Main Trees + Final Aggregation

graph TB
    subgraph "GUTA Tree"
        subgraph "GUTA Leaves"
            GUTA1[Realm GUTA 1]
            GUTA2[Realm GUTA 2]
            GUTAN[Realm GUTA N]
        end

        subgraph "GUTA Aggregation"
            GUTA_AGG1[GUTATwoGUTA]
            GUTA_AGG2[GUTATwoGUTA]
            GUTA_CAP[GUTAVerifyToCap<br/>Optional]
        end

        GUTA1 --> GUTA_AGG1
        GUTA2 --> GUTA_AGG1
        GUTAN --> GUTA_AGG2
        GUTA_AGG1 --> GUTA_CAP
        GUTA_AGG2 --> GUTA_CAP
    end

    subgraph "Register Users Tree"
        subgraph "Register Users Leaves"
            RU1[Batch 1<br/>Circuit: BatchAppendUserRegistrationTree]
            RU2[Batch 2<br/>Circuit: BatchAppendUserRegistrationTree]
            RUN[Batch N<br/>Circuit: BatchAppendUserRegistrationTree]
        end

        subgraph "Register Users Aggregation"
            RU_AGG1[Circuit: AggStateTransition]
            RU_AGG2[Circuit: AggStateTransition]
            RU_ROOT[Root Aggregation<br/>Circuit: AggStateTransition]
        end

        RU1 --> RU_AGG1
        RU2 --> RU_AGG1
        RUN --> RU_AGG2
        RU_AGG1 --> RU_ROOT
        RU_AGG2 --> RU_ROOT
    end

    subgraph "Deploy Contracts Tree"
        subgraph "Deploy Contracts Leaves"
            DC1[Batch 1<br/>Circuit: BatchDeployContracts]
            DC2[Batch 2<br/>Circuit: BatchDeployContracts]
            DCN[Batch N<br/>Circuit: BatchDeployContracts]
        end

        subgraph "Deploy Contracts Aggregation"
            DC_AGG1[Circuit: AggStateTransition]
            DC_AGG2[Circuit: AggStateTransition]
            DC_ROOT[Root Aggregation<br/>Circuit: AggStateTransition]
        end

        DC1 --> DC_AGG1
        DC2 --> DC_AGG1
        DCN --> DC_AGG2
        DC_AGG1 --> DC_ROOT
        DC_AGG2 --> DC_ROOT
    end

    subgraph "Final Aggregation"
        STATE_PART_1[State Part 1<br/>Circuit: AggUserRegistrationDeployContractsGUTA]
        CHECKPOINT[Checkpoint State Transition<br/>Circuit: CheckpointStateTransition]
    end

    GUTA_CAP --> STATE_PART_1
    RU_ROOT --> STATE_PART_1
    DC_ROOT --> STATE_PART_1
    STATE_PART_1 --> CHECKPOINT

GUTA Circuit Variants

The GUTA (Global User Tree Aggregator) has multiple circuit variants to handle different scenarios:

GUTA Circuit Types and Usage

graph LR
    subgraph "Leaf Circuits (No Child Proofs)"
        GNC[GUTANoChange<br/>No state changes]
        GSE[GUTASingleEndCap<br/>Single realm update]
        GOR[GUTAOnlyRegisterUsers<br/>Only user registrations]
        GRU[GUTARegisterUsers<br/>With user ops]
    end

    subgraph "Two Children Aggregation"
        GTG[GUTATwoGUTA<br/>Two GUTA proofs]
        GTE[GUTATwoEndCap<br/>Two EndCap proofs]
        GLR[GUTALeftGUTARightEndCap<br/>GUTA + EndCap]
        GLE[GUTALeftEndCapRightGUTA<br/>EndCap + GUTA]
    end

    subgraph "Special Purpose"
        GVC[GUTAVerifyToCap<br/>Verify to tree cap]
    end

GUTA Circuit Details

Additional GUTA Circuits

Other GUTA circuits (also 15 inputs, same layout):

  • GUTANoChange: No state changes
  • GUTATwoEndCap: Aggregate two EndCap proofs
  • GUTAVerifyToCap: Verify GUTA to tree cap
  • GUTATwoGUTAWithCheckpointUpgrade: Two GUTA with checkpoint upgrade
  • GUTAVerifyToCapWithCheckpointUpgrade: Verify to cap with checkpoint upgrade

All follow the same commitment calculation rules based on their dependency count.
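The dependency-count dispatch can be sketched as below, following the per-count rules given in the Commitment Calculation Rules section. The two-to-one hash is a stand-in using Rust's std hasher, not the circuits' actual hash function.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Stand-in two-to-one hash; NOT the circuits' actual hash.
fn h2(a: u64, b: u64) -> u64 {
    let mut s = DefaultHasher::new();
    (a, b).hash(&mut s);
    s.finish()
}

// Commitment by dependency count, per the rules in this document:
// zero children, one child, or two children.
fn commitment(children: &[u64], worker_public_key: u64) -> u64 {
    match children {
        [] => worker_public_key,
        [child] => h2(*child, worker_public_key),
        [left, right] => h2(h2(*left, *right), worker_public_key),
        _ => unreachable!("circuits have at most two child proofs"),
    }
}

fn main() {
    assert_eq!(commitment(&[], 9), 9);
    assert_eq!(commitment(&[1, 2], 9), h2(h2(1, 2), 9));
}
```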

State Part 1 (AggUserRegistrationDeployContractsGUTA)

This circuit aggregates the three main trees:

Inputs

  • Register Users proof (from aggregation root)
  • Deploy Contracts proof (from aggregation root)
  • GUTA proof (from aggregation root or GUTAVerifyToCap)

Public Inputs Layout (19 total) - SPECIAL LAYOUT!

WARNING: This circuit has a unique layout different from other circuits!

  • [0..4]: state_transition_hash (NOT commitment!)
  • [4..8]: hash(register_users_commitment, register_users_worker_pk)
  • [8..12]: hash(deploy_contracts_commitment, deploy_contracts_worker_pk)
  • [12..16]: hash(guta_commitment, guta_worker_pk)
  • [16..19]: additional data

How Child Proofs Are Processed

// Extract from each child proof:
let user_registration_commitment = child_proof.public_inputs[0..4];
let user_registration_worker_pk = child_proof.public_inputs[4..8];
let user_registration_final = hash(user_registration_commitment, user_registration_worker_pk);

// This final hash goes into the parent's public_inputs[4..8]

PM Rewards Commitment

The PM (Prover/Miner) Rewards Commitment is calculated from these three roots:

PMRewardCommitment {
    register_users_root,
    deploy_contracts_root,
    gutas_root,
}

Checkpoint State Transition

The final circuit that creates the checkpoint proof:

Inputs

  • State Part 1 proof
  • Previous checkpoint proof
  • Checkpoint tree merkle proof
  • Various metadata (block time, random seed, etc.)

Public Inputs Layout (19 inputs total)

  • [0..4]: commitment
  • [4..8]: worker_public_key
  • [8..11]: pm_jobs_completed_stats (from State Part 1 proof)
  • [11..15]: old_checkpoint_tree_root
  • [15..19]: new_checkpoint_tree_root
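A reader of this proof can slice the 19 public inputs into the fields above. The helper below is illustrative (the splitter function and `u64` element type are assumptions, not the actual API); only the index ranges come from the layout in this document.

```rust
// Split a 19-element public-inputs array into the checkpoint circuit's
// fields, per the layout above.
fn split_checkpoint_inputs(
    pi: &[u64; 19],
) -> (&[u64], &[u64], &[u64], &[u64], &[u64]) {
    (
        &pi[0..4],   // commitment
        &pi[4..8],   // worker_public_key
        &pi[8..11],  // pm_jobs_completed_stats
        &pi[11..15], // old_checkpoint_tree_root
        &pi[15..19], // new_checkpoint_tree_root
    )
}

fn main() {
    let pi = [0u64; 19];
    let (c, wpk, stats, old_root, new_root) = split_checkpoint_inputs(&pi);
    assert_eq!(
        (c.len(), wpk.len(), stats.len(), old_root.len(), new_root.len()),
        (4, 4, 3, 4, 4)
    );
}
```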

Job Dependencies and Task Graph

graph LR
    subgraph "Parallel Execution"
        RU[Register Users Jobs<br/>PM Stats: (0, N, 0)]
        DC[Deploy Contracts Jobs<br/>PM Stats: (M, 0, 0)]
        GUTA[GUTA Jobs<br/>PM Stats: (0, 0, K)]
    end

    subgraph "Sequential Dependencies"
        SP1[State Part 1<br/>PM Stats: (M, N, K)]
        CST[Checkpoint State Transition<br/>PM Stats: (M, N, K)]
        NOTIFY[Notify Block Complete]
    end

    RU --> SP1
    DC --> SP1
    GUTA --> SP1
    SP1 --> CST
    CST --> NOTIFY

The dependency graph shows how PM stats flow through the system:

  1. Parallel Trees: Each tree type accumulates its specific job counts
  2. State Part 1: Combines PM stats from all three trees
  3. Checkpoint: Preserves the combined PM stats for final reward calculation
  4. Block Completion: Uses PM stats to calculate and distribute rewards
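The stats flow above can be sketched numerically: the three parallel trees each contribute one non-zero component, and State Part 1 sums them componentwise. The triples and the `combine3` helper are illustrative stand-ins for the PM stats structure.

```rust
// PM stats modeled as (deploy_contracts, register_users, gutas) triples.
fn combine3(
    a: (u64, u64, u64),
    b: (u64, u64, u64),
    c: (u64, u64, u64),
) -> (u64, u64, u64) {
    (a.0 + b.0 + c.0, a.1 + b.1 + c.1, a.2 + b.2 + c.2)
}

fn main() {
    // Stats from the three parallel trees, as in the diagram above.
    let deploy = (7, 0, 0);   // M deploy-contract jobs
    let register = (0, 5, 0); // N user-registration jobs
    let guta = (0, 0, 9);     // K GUTA jobs
    // State Part 1 combines them; the checkpoint preserves the result.
    assert_eq!(combine3(deploy, register, guta), (7, 5, 9));
}
```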

Commitment Calculation Rules

The commitment calculation follows a consistent pattern across all circuits:

1. Leaf Circuits (No Dependencies)

commitment = worker_public_key

Examples: GUTANoChange, GUTAOnlyRegisterUsers, BatchDeployContracts, AppendUserRegistrationTree

2. Single Dependency Circuits (One Child Proof)

commitment = hash(child.commitment, worker_public_key)

Examples: GUTASingleEndCap, GUTARegisterUsers, GUTAVerifyToCap, GUTAVerifyToCapWithCheckpointUpgrade

3. Two Dependencies Circuits (Two Child Proofs)

commitment = hash(hash(left.commitment, right.commitment), worker_public_key)

Examples: GUTATwoGUTA, GUTATwoGUTAWithCheckpointUpgrade, GUTATwoEndCap, GUTALeftGUTARightEndCap, AggStateTransition

Core Design Principles

  1. Commitment Chain: Forms a tree structure, but is not used for reward distribution.

    • ALL leaf circuits: commitment = hash(0, 0) (constant value!)
    • Aggregation circuits: commitment = hash(hash(child1.commit, child1.worker), hash(child2.commit, child2.worker))
  2. Job Categories: Three parallel proving trees

    • User Registration: batch_append → state_transition → final aggregation
    • Contract Deployment: batch_deploy → state_transition → final aggregation
    • GUTA Tree: Various GUTA circuits → final aggregation
  3. Special Cases:

    • Dummy circuits: Used when no real work is available
    • AggUserRegistration: Unique layout combining all three trees
    • Checkpoint: Final proof creating the rollup state transition

Core Proving Circuits

Coordinator Main Circuits (19 inputs)

  • BatchAppendUserRegistrationTree (Leaf, no dependencies): commitment = hash(0, 0)
  • BatchDeployContracts (Leaf, no dependencies): commitment = hash(0, 0)
  • AggStateTransition (Aggregation, 2 proofs): commitment = hash(hash(left.commit, left.worker), hash(right.commit, right.worker))
  • DummyAggStateTransition (Dummy, no dependencies): commitment = hash(0, 0)

Public Inputs Layout (19 total):

  • [0..4]: commitment
  • [4..8]: worker_public_key
  • [8..11]: pm_jobs_completed_stats
  • [11..15]: circuit_whitelist
  • [15..19]: state_transition_hash

GUTA Core Circuits (15 inputs)

  • GUTAOnlyRegisterUsers (Leaf, no dependencies): commitment = hash(0, 0)
  • GUTASingleEndCap (Leaf, 1 EndCap): commitment = hash(0, 0)
  • GUTARegisterUsers (Leaf, 1 GUTA): commitment = hash(0, 0)
  • GUTATwoGUTA (Aggregation, 2 GUTA): commitment = hash(hash(left.commit, left.worker), hash(right.commit, right.worker))
  • GUTALeftGUTARightEndCap (Mixed, 1 GUTA + 1 EndCap): commitment = hash(hash(left.commit, left.worker), hash(right.commit, right.worker))
  • GUTALeftEndCapRightGUTA (Mixed, 1 EndCap + 1 GUTA): commitment = hash(hash(left.commit, left.worker), hash(right.commit, right.worker))

Public Inputs Layout (15 total):

  • [0..4]: commitment
  • [4..8]: worker_public_key
  • [8..11]: pm_jobs_completed_stats
  • [11..15]: guta_header_hash

Final Aggregation Circuits

Circuit | Dependencies | Special Notes
VerifyAggUserRegistrationDeployContractsGUTA | 3 proofs (user_reg + deploy + guta) | UNIQUE LAYOUT: [0..4] = state_transition_hash (NOT commitment!)
PsyCheckpointStateTransition | 1 proof (state_part_1) | Standard 19-input layout

PM Jobs Completed Stats Tracking

The PM (Proof Miner) jobs completed stats track the number of different types of jobs completed throughout the circuit hierarchy. These stats flow upward through the trees and are combined at aggregation points.

PM Stats Components

  • deploy_contracts_completed: Number of deploy contract jobs completed in this subtree
  • register_users_completed: Number of user registration jobs completed in this subtree
  • gutas_completed: Number of GUTA jobs completed in this subtree

How Stats Flow Through the Hierarchy

Leaf Circuits

Leaf circuits initialize their PM stats based on the work they perform:

  • Deploy Contract leaves (BatchDeployContracts): pm_stats = (batch_size, 0, 0)
  • Register Users leaves (AppendUserRegistrationTree): pm_stats = (0, batch_size, 0)
  • GUTA leaves (GUTANoChange, GUTASingleEndCap, etc.): pm_stats = (0, 0, 0) initially
  • Dummy circuits (AggStateTransitionDummy): pm_stats = (0, 0, 0) (all zeros)

Aggregation Circuits

Aggregation circuits combine PM stats from their children:

#![allow(unused)]
fn main() {
// Two children aggregation (AggStateTransition, GUTATwoGUTA)
final_pm_stats = PMJobsCompletedStats {
    deploy_contracts_completed: left.pm_stats[0] + right.pm_stats[0],
    register_users_completed: left.pm_stats[1] + right.pm_stats[1],
    gutas_completed: left.pm_stats[2] + right.pm_stats[2],
}
}

GUTA Circuits Special Handling

GUTA circuits add 1 to their gutas_completed count:

#![allow(unused)]
fn main() {
// Single child GUTA aggregation (GUTAVerifyToCap)
final_pm_stats = PMJobsCompletedStats {
    deploy_contracts_completed: child.pm_stats[0],
    register_users_completed: child.pm_stats[1],
    gutas_completed: child.pm_stats[2] + 1, // Add 1 GUTA completion
}
}

Final Aggregation

At the State Part 1 level (AggUserRegistrationDeployContractsGUTA), the PM stats from all three trees are combined:

#![allow(unused)]
fn main() {
final_pm_stats = register_users_proof.pm_stats +
                 deploy_contracts_proof.pm_stats +
                 guta_proof.pm_stats
}

This provides a complete count of all work performed in the current checkpoint.
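The stat flow described above is just componentwise addition at every aggregation point. A hedged sketch (the struct mirrors the three counters listed earlier; the names `PmJobsCompletedStats`, `zero`, and `add` are illustrative):

```rust
// Hypothetical representation of the three PM job counters.
#[derive(Clone, Copy, Debug, PartialEq)]
struct PmJobsCompletedStats {
    deploy_contracts_completed: u64,
    register_users_completed: u64,
    gutas_completed: u64,
}

impl PmJobsCompletedStats {
    // Dummy circuits start from all-zero stats.
    fn zero() -> Self {
        Self {
            deploy_contracts_completed: 0,
            register_users_completed: 0,
            gutas_completed: 0,
        }
    }

    // Componentwise addition, as performed at every two-child
    // aggregation node and at the final State Part 1 combination.
    fn add(self, other: Self) -> Self {
        Self {
            deploy_contracts_completed: self.deploy_contracts_completed
                + other.deploy_contracts_completed,
            register_users_completed: self.register_users_completed
                + other.register_users_completed,
            gutas_completed: self.gutas_completed + other.gutas_completed,
        }
    }
}
```

Because addition is associative, the final totals are independent of the shape of the proving tree, so workers can be scheduled freely without affecting the checkpoint's work count.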

Key Design Principles

  1. Consistent Public Inputs: All circuits follow the same [commitment, worker_public_key, pm_jobs_completed_stats, data_hash] layout, with the single exception of the final AggUserRegistration aggregation, which places state_transition_hash first
  2. Tree Aggregation: Each category (GUTA, Register Users, Deploy Contracts) forms its own tree
  3. Parallel Processing: The three trees can be processed in parallel
  4. Commitment Chain: Commitments flow up from leaves to root, binding each proof to the worker that produced it
  5. Flexibility: GUTA circuits handle various scenarios (no changes, single realm, multiple realms)
  6. Worker Tracking: Every circuit includes the public key of the worker who computed the proof
  7. PM Stats Tracking: Job completion counts flow upward through the tree hierarchy for reward calculation

Introduction

This is the documentation for Psy Smart Contract Language, a new programming language designed for safe and efficient smart contract development. This book serves as a comprehensive guide for developers interested in learning the language and building applications with it.

Psy Smart Contract Language is in active development and a work in progress. If you have feedback or suggestions, feel free to contribute via the GitHub repository.

This book covers the core features of Psy Smart Contract Language, including its syntax, module system, structs, traits, and storage capabilities.

Design Philosophy

Psy Smart Contract Language embodies a unique design philosophy specifically crafted for zero-knowledge proof systems. While drawing inspiration from modern languages like Jakt, Noir, Sway, and Cairo, Psy takes a fundamentally different approach optimized for circuit generation.

Core Design Principles

ZK-Native Architecture

Unlike traditional virtual machines that operate on stacks and registers, Psy compiles to DPN opcodes optimized for zkVM execution. Our primary goal is to generate efficient opcodes that produce minimal circuits during proof generation, which directly translates to:

  • Faster proof generation
  • Lower computational overhead
  • Reduced memory requirements
  • Improved scalability

Symbolic Execution Model

At the heart of Psy's design lies symbolic execution rather than traditional runtime execution:

#![allow(unused)]
fn main() {
// Traditional VM approach - Simple addition
// Requires multiple stack/register operations
fn add_numbers(a: Felt, b: Felt) -> Felt {
    a + b
}
// VM execution steps:
// 1. PUSH a               (push 'a' onto stack)
// 2. PUSH b               (push 'b' onto stack)  
// 3. POP R2               (pop 'b' into register R2)
// 4. POP R1               (pop 'a' into register R1)
// 5. ADD R3, R1, R2       (add R1 + R2, store in R3)
// 6. PUSH R3              (push result back to stack)
// 7. POP result           (pop final result)
// Total: 7 VM operations + stack/register management overhead

// Psy's symbolic execution approach  
// Converts directly to a single ZK constraint
fn add_numbers(a: Felt, b: Felt) -> Felt {
    a + b  // Single arithmetic gate: result = a + b
}
// Circuit: 1 addition constraint, no VM overhead
}

Function-to-Symbol Transformation

Every function in a Psy smart contract is transformed into a mathematical symbol or constraint system:

  1. Variables become circuit wires
  2. Operations become arithmetic gates
  3. Control flow is flattened into conditional execution using boolean arithmetic
  4. State changes become constraint updates on state variables

Control Flow Flattening Example

#![allow(unused)]
fn main() {
// Psy source code with branching
fn min(a: Felt, b: Felt) -> Felt {
    if a < b {
        a
    } else {
        b
    }
}

// Compiler flattens to conditional arithmetic (no dynamic jumps)
// Both paths computed, result selected using boolean arithmetic:
fn min_flattened(a: Felt, b: Felt) -> Felt {
    let condition = (a < b) as Felt;  // 1 if true, 0 if false
    let true_path = a;
    let false_path = b;
    // Arithmetic selection instead of branching:
    condition * true_path + (1 - condition) * false_path
}
}

This transforms if (a > 1) { execute_branch() } into:

condition = (a > 1) as Felt;  // 0 or 1
execution_weight = condition;
no_execution_weight = 1 - condition;
// Both branches computed, weighted by condition
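One can check that this arithmetic selection agrees with ordinary branching. A small sketch in Rust, using i64 in place of field elements (function names are illustrative):

```rust
// Branching version, as a developer would write it.
fn min_branching(a: i64, b: i64) -> i64 {
    if a < b { a } else { b }
}

// Flattened version: both paths are computed, and the result is
// selected with boolean arithmetic, mirroring what the compiler emits.
fn min_flattened(a: i64, b: i64) -> i64 {
    let condition = (a < b) as i64; // 1 if true, 0 if false
    condition * a + (1 - condition) * b
}
```

Since `condition` is always exactly 0 or 1, exactly one of the two weighted terms survives, so the two functions agree on every input.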

Architectural Differences from Traditional Languages

Control Flow Flattening

Traditional programming languages use dynamic control flow with jumps and branches. Psy flattens all control structures at compile time:

If-Else Flattening

#![allow(unused)]
fn main() {
// Source code
fn conditional_logic(condition: bool, a: Felt, b: Felt) -> Felt {
    if condition {
        a + b
    } else {
        a * b
    }
}

// Flattened to circuit constraints
fn conditional_logic(condition: bool, a: Felt, b: Felt) -> Felt {
    let condition_felt = condition as Felt;
    let sum = a + b;
    let product = a * b;
    // Select result based on condition using arithmetic
    sum * condition_felt + product * (1 - condition_felt)
}
}

Loop Unrolling

#![allow(unused)]
fn main() {
// Source code with bounded loop (simplified fibonacci)
fn fibonacci_partial() -> Felt {
    let mut a: Felt = 2;
    let mut b: Felt = 3;
    let mut i: Felt = 2;
    while i <= 5 {  // Fixed bound of 4 iterations (i = 2, 3, 4, 5)
        let next = a + b;
        a = b;
        b = next;
        i += 1;
    }
    b
}

// Completely unrolled at compile time (4 iterations)
fn fibonacci_partial_unrolled() -> Felt {
    // Initial state
    let mut a_0: Felt = 2;
    let mut b_0: Felt = 3;
    let mut i_0: Felt = 2;
    
    // Iteration 1: i = 2, condition (2 <= 5) = true
    let next_1 = a_0 + b_0;  // 5
    let a_1 = b_0;           // 3
    let b_1 = next_1;        // 5
    let i_1 = i_0 + 1;       // 3
    
    // Iteration 2: i = 3, condition (3 <= 5) = true  
    let next_2 = a_1 + b_1;  // 8
    let a_2 = b_1;           // 5
    let b_2 = next_2;        // 8
    let i_2 = i_1 + 1;       // 4
    
    // Iteration 3: i = 4, condition (4 <= 5) = true
    let next_3 = a_2 + b_2;  // 13
    let a_3 = b_2;           // 8  
    let b_3 = next_3;        // 13
    let i_3 = i_2 + 1;       // 5
    
    // Iteration 4: i = 5, condition (5 <= 5) = true
    let next_4 = a_3 + b_3;  // 21
    let a_4 = b_3;           // 13
    let b_4 = next_4;        // 21
    let i_4 = i_3 + 1;       // 6
    
    // Iteration 5: i = 6, condition (6 <= 5) = false - exit
    b_4  // Return 21
}
}

No Runtime Stack or Registers

Traditional VMs maintain:

  • Call stack for function invocation
  • Registers for temporary values
  • Memory management with allocation/deallocation

Psy optimizes this by:

  • Static analysis of all possible execution paths
  • Compile-time memory layout determination
  • Compilation to DPN opcodes, which are executed by the zkVM to generate contract call proofs

Inline Function Calls by Default

Unlike traditional VMs that use call stacks, Psy inlines all function calls by default. Each function is compiled to its own set of DPN opcodes:

// Source code with nested function calls (from actual test)
fn add(a: Felt, b: Felt) -> Felt {
    let mut c: Felt = a + b;
    return c;
}

fn mul(a: Felt, b: Felt) -> Felt {
    let mut c: Felt = a * b;
    return c;
}

fn main() {
    // Complex nested function calls
    let mut f: Felt = add(add(1, 2), 2);        // add() called twice, nested
    let mut g: Felt = add(1 + 3, mul(2, 3));    // add() and mul() calls
}

// Each function generates its own DPN opcode sequence:
// add() -> [Constant(a), Constant(b), Add, InputTarget] opcodes
// mul() -> [Constant(x), Constant(y), Mul, InputTarget] opcodes  

// At compilation, all calls are inlined:
fn main_inlined() {
    // add(add(1, 2), 2) becomes completely flattened:
    // Inner add(1, 2):
    // - Constant(1), Constant(2), Add -> temp1 = 3
    // Outer add(temp1, 2): 
    // - Constant(3), Constant(2), Add -> f = 5
    
    // add(1 + 3, mul(2, 3)) becomes:
    // mul(2, 3):
    // - Constant(2), Constant(3), Mul -> temp2 = 6
    // add(4, temp2):
    // - Constant(4), Constant(6), Add -> g = 10
}

Benefits of Default Inlining:

  • No call overhead - eliminates stack push/pop operations
  • Better optimization - enables cross-function optimizations
  • Predictable circuit size - function calls don't add dynamic overhead
  • Simplified analysis - all execution paths are statically visible

Circuit-First Design Benefits

Predictable Circuit Size

Since all control flow is flattened at compile time, developers can predict exact circuit sizes:

#![allow(unused)]
fn main() {
// Circuit size is deterministic
fn deterministic_function(x: Felt, y: Felt) -> Felt {
    // Always exactly 3 arithmetic gates
    let intermediate = x + y;      // 1 gate
    let result = intermediate * 2;  // 1 gate  
    result + 1                     // 1 gate
}
}

Optimized Constraint Systems

The compiler can perform aggressive optimizations specific to arithmetic circuits:

  • Gate merging - Combine multiple operations
  • Constant propagation - Eliminate unnecessary constraints
  • Dead code elimination - Remove unused circuit paths
  • Algebraic simplification - Reduce constraint complexity
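As an illustration of constant propagation and algebraic simplification, a toy folding pass over a tiny expression tree might look like the following. This is a hedged sketch of the general technique, not the actual compiler pass; the `Expr` type and helper names are hypothetical.

```rust
// Tiny expression language for illustrating constant folding.
enum Expr {
    Const(i64),
    Var(&'static str),
    Add(Box<Expr>, Box<Expr>),
    Mul(Box<Expr>, Box<Expr>),
}

// Fold constants and apply simple identities (x + 0, x * 1, x * 0).
fn fold(e: Expr) -> Expr {
    use Expr::*;
    match e {
        Add(a, b) => match (fold(*a), fold(*b)) {
            (Const(x), Const(y)) => Const(x + y),
            (Const(0), x) | (x, Const(0)) => x,
            (x, y) => Add(Box::new(x), Box::new(y)),
        },
        Mul(a, b) => match (fold(*a), fold(*b)) {
            (Const(x), Const(y)) => Const(x * y),
            (Const(0), _) | (_, Const(0)) => Const(0),
            (Const(1), x) | (x, Const(1)) => x,
            (x, y) => Mul(Box::new(x), Box::new(y)),
        },
        other => other,
    }
}

// Count remaining arithmetic gates (Add/Mul nodes) after folding.
fn gate_count(e: &Expr) -> usize {
    use Expr::*;
    match e {
        Const(_) | Var(_) => 0,
        Add(a, b) | Mul(a, b) => 1 + gate_count(a) + gate_count(b),
    }
}
```

In a circuit setting, every eliminated node is one fewer constraint in the final proof, which is why these passes matter more here than in a conventional compiler.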

Zero-Knowledge Friendly Primitives

Psy provides built-in primitives that map directly to efficient ZK operations:

#![allow(unused)]
fn main() {
// Poseidon hash - ZK-friendly cryptographic hash
fn hash_example() -> Hash {
    let data: Hash = [1, 2, 3, 4];
    hash(data)  // Built-in Poseidon hash function
}

// ECDSA signature verification using secp256k1
fn signature_verification() -> bool {
    let pub_key = [4203227662u32, 540940946u32, 962567723u32, 1830567167u32, 
                   3450763808u32, 3950740017u32, 3026903052u32, 3029228469u32, 
                   1837759160u32, 825683440u32, 3630293783u32, 436568768u32, 
                   3543321651u32, 1044682747u32, 168350425u32, 936127172u32];
    
    let sig = [201339544u32, 2533129003u32, 3911198242u32, 2163032835u32, 
               2488559593u32, 2971164201u32, 3572923983u32, 3650316646u32, 
               3964687905u32, 1624041662u32, 2373224611u32, 3243422930u32, 
               1353934640u32, 2321957132u32, 2691932396u32, 1560388502u32];
    
    let msg = [6716978020874491267, 18326158388222717469, 
               7113070761591959818, 9714795267687279217];
    
    let signature_is_valid = __secp256k1_verify(pub_key, msg, sig);
    assert(signature_is_valid, "signature is not valid");
    signature_is_valid
}

// Example: Combining both primitives for authenticated operations
fn authenticated_hash(input_data: Hash, pub_key: [u32; 16], msg: [u64; 4], sig: [u32; 16]) -> Hash {
    // Verify signature first
    let signature_is_valid = __secp256k1_verify(pub_key, msg, sig);
    assert(signature_is_valid, "Invalid signature");
    
    // Then hash the authenticated data
    hash(input_data)
}
}

Language Design Trade-offs

What We Gain

  • Minimal circuit overhead
  • Predictable performance
  • Formal verification friendly
  • Optimal proof generation

What We Accept

  • Static loop bounds (no dynamic iteration)
  • Compile-time complexity for circuit optimization
  • Limited recursion (must be bounded and unrollable)
  • Circuit-aware programming model

Language Syntax Design

Psy adopts a Rust-like syntax that provides familiarity for developers while being optimized for ZK circuit compilation:

#![allow(unused)]
fn main() {
// Familiar Rust-style struct definitions
pub struct Person {
    pub age: Felt,
    male: bool,
}

// Rust-style implementations
impl Person {
    pub fn get_age(self: Person) -> Felt {
        return self.age;
    }
}

// Rust-style function definitions with type annotations
fn add_numbers(a: Felt, b: Felt) -> Felt {
    a + b
}

// Rust-style control flow
fn conditional_min(a: Felt, b: Felt) -> Felt {
    if a < b {
        a
    } else {
        b
    }
}
}

This familiar syntax reduces the learning curve while the compiler performs ZK-specific optimizations behind the scenes.

The Psy Innovation

Psy's unique contribution is ZK-optimized compilation through symbolic execution. By treating smart contract functions as mathematical transformations and compiling to DPN opcodes, we achieve:

  1. Optimized DPN opcode generation for zkVM execution
  2. Efficient proof generation for contract calls
  3. Predictable circuit characteristics
  4. Streamlined ZK proving pipeline

This design philosophy makes Psy particularly suitable for applications where proof generation speed and circuit size are critical, such as high-frequency DeFi operations, privacy-preserving computations, and scalable rollup systems.

Looking Forward

As the ZK ecosystem evolves, Psy's circuit-first design positions it to take advantage of new developments in:

  • Advanced circuit optimizations
  • Hardware acceleration
  • Recursive proof systems
  • Cross-chain interoperability

The symbolic execution model ensures that improvements in the underlying proof systems automatically benefit all Psy programs without requiring language-level changes.

Language Features

Psy Smart Contract Language provides a comprehensive set of features designed for zero-knowledge circuit development. This page outlines all major language features with examples and explanations.

Basic Types

Psy supports a range of primitive and composite types optimized for ZK circuit generation.

Primitive Types

Felt

The fundamental numeric type representing a field element in the Goldilocks field.

#![allow(unused)]
fn main() {
let value: Felt = 42;
let negative: Felt = -10;
let arithmetic: Felt = value + negative * 2;
}

Boolean

Boolean values for logical operations.

#![allow(unused)]
fn main() {
let flag: bool = true;
let condition: bool = false;
let result: bool = flag && !condition;
}

u32

32-bit unsigned integer type for efficient arithmetic operations.

#![allow(unused)]
fn main() {
let count: u32 = 100u32;
let mask: u32 = 0xFFFFu32;
let shifted: u32 = count << 2;
}

Composite Types

Arrays

Fixed-size arrays with compile-time known lengths.

#![allow(unused)]
fn main() {
let numbers: [Felt; 4] = [1, 2, 3, 4];
let matrix: [[Felt; 2]; 2] = [[1, 2], [3, 4]];
let element: Felt = numbers[0];
}

Tuples

Heterogeneous collections of values with full support for nesting, mutation, and complex access patterns.

#![allow(unused)]
fn main() {
// Basic tuple creation and access
let pair: (Felt, bool) = (42, true);
let triple: (Felt, Felt, bool) = (1, 2, false);
let first: Felt = pair.0;
let flag: bool = pair.1;

// Mutable tuples
let mut coordinates: (Felt, Felt) = (10, 20);
coordinates.0 = 15;  // Modify x coordinate
coordinates.1 = 25;  // Modify y coordinate
coordinates = (30, 40);  // Replace entire tuple

// Nested tuples
let nested: ((Felt, Felt), (Felt, Felt)) = ((1, 2), (3, 4));
let deeply_nested: (((Felt, Felt), Felt), Felt) = (((5, 6), 7), 8);

// Complex nested tuple manipulation
let mut complex_tuple: ((Felt, Felt), (Felt, Felt)) = ((9, 10), (11, 12));
complex_tuple.0.1 = 42;     // Modify first tuple's second element
complex_tuple.1 = (20, 21); // Replace entire second tuple

// Tuples in structs
struct TupleHolder {
    pub pair: (Felt, Felt),
    pub nested: ((Felt, Felt), Felt),
}

let mut holder: TupleHolder = new TupleHolder {
    pair: (22, 23),
    nested: ((24, 25), 26),
};
holder.pair.1 = 99;          // Modify tuple field in struct
holder.nested.0.0 = 88;      // Modify nested tuple in struct

// Functions returning tuples
fn create_tuple(x: Felt, y: Felt) -> (Felt, Felt) {
    (x + 1, y + 1)
}

// Functions taking tuples as parameters
fn sum_tuple(input: (Felt, Felt)) -> Felt {
    input.0 + input.1
}

let result_tuple: (Felt, Felt) = create_tuple(30, 31);
let sum_result: Felt = sum_tuple((40, 50));

// Tuples with different types
let mixed: (Felt, bool, u32) = (100, true, 42u32);
let complex_mixed: ((Felt, bool), (u32, Felt)) = ((1, false), (10u32, 2));
}

Tuple Features:

  • Type Safety: Each tuple element has a specific type
  • Indexed Access: Use .0, .1, .2, etc. to access elements
  • Mutation Support: Both individual elements and entire tuples can be modified
  • Arbitrary Nesting: Tuples can contain other tuples to any depth
  • Function Integration: Can be passed to and returned from functions
  • Struct Fields: Tuples can be used as struct field types

Note: Tuple destructuring (pattern matching like let (x, y) = tuple) is not currently supported, but individual element access provides full functionality.

Structs

User-defined composite types with named fields.

#![allow(unused)]
fn main() {
pub struct Person {
    pub age: Felt,
    male: bool,
}

let person: Person = new Person {
    age: 25,
    male: true,
};
}

Hash Type

Special 4-element array type for cryptographic operations.

#![allow(unused)]
fn main() {
let data: Hash = [1, 2, 3, 4];
let result: Hash = hash(data);
}

Functions

Functions are first-class citizens with support for parameters, return types, and inlining.

Basic Functions

#![allow(unused)]
fn main() {
fn add(a: Felt, b: Felt) -> Felt {
    a + b
}

fn greet() {
    // No return value
}
}

Function Calls and Inlining

All function calls are inlined by default for optimal circuit generation.

#![allow(unused)]
fn main() {
fn multiply(x: Felt, y: Felt) -> Felt {
    x * y
}

fn complex_calculation(a: Felt, b: Felt, c: Felt) -> Felt {
    // These calls get inlined at compile time
    let product = multiply(a, b);
    add(product, c)
}
}

Closures

Anonymous functions that can capture variables from their environment.

fn main() -> Felt {
    let multiplier: Felt = 3;
    
    let triple = |x: Felt| -> Felt {
        x * multiplier  // Captures 'multiplier' from environment
    };
    
    triple(5)  // Returns 15
}

Constants

Compile-time constant values for improved optimization.

#![allow(unused)]
fn main() {
const MAX_USERS: Felt = 1000;
const PI_APPROXIMATION: Felt = 3141;
const DEFAULT_AMOUNT: Felt = 100;

fn validate_user_count(count: Felt) -> bool {
    count <= MAX_USERS
}

fn calculate_with_constant() -> Felt {
    DEFAULT_AMOUNT + PI_APPROXIMATION
}
}

Comptime

Compile-time evaluation for constant folding and optimization.

#![allow(unused)]
fn main() {
// Complex constant expressions are evaluated at compile time
const COMPLEX_CALC: Felt = ((100 + 200) * 3 - 50) / 2 + (10 * 10);

fn use_constants() -> Felt {
    // All constant arithmetic is folded during compilation
    COMPLEX_CALC + 42
}
}

Control Flow

If Expressions

Conditional expressions that are flattened into arithmetic selection during compilation.

#![allow(unused)]
fn main() {
fn min(a: Felt, b: Felt) -> Felt {
    if a < b {
        a
    } else {
        b
    }
}

// Supports complex nested conditions
fn classify_number(x: Felt) -> Felt {
    if x > 100 {
        if x > 1000 {
            3  // Very large
        } else {
            2  // Large
        }
    } else {
        1  // Small
    }
}
}

Block Expressions

Scoped code blocks that return values.

#![allow(unused)]
fn main() {
fn calculate() -> Felt {
    let result = {
        let temp1 = 10;
        let temp2 = 20;
        temp1 + temp2  // Block returns this value
    };
    result * 2
}
}

Match Expressions

Pattern matching for control flow.

#![allow(unused)]
fn main() {
fn process_status(status: Felt) -> Felt {
    match status {
        0 => 10,        // Pending
        1 => 20,        // Processing  
        2 => 30,        // Complete
        _ => 0,         // Unknown
    }
}
}

Loops

Bounded loops that are completely unrolled at compile time.

#![allow(unused)]
fn main() {
fn sum_range() -> Felt {
    let mut total: Felt = 0;
    let mut i: Felt = 1;
    
    while i <= 5 {  // Fixed bound - unrolled at compile time
        total += i;
        i += 1;
    }
    
    total  // Returns 15
}
}

Modules

Code organization and namespace management.

Module Definition

#![allow(unused)]
fn main() {
mod math {
    pub fn add(a: Felt, b: Felt) -> Felt {
        a + b
    }
    
    fn private_helper() -> Felt {
        42
    }
}
}

Module Usage

use math::*;

fn main() -> Felt {
    add(10, 20)  // Using imported function
}

Nested Modules

#![allow(unused)]
fn main() {
mod crypto {
    pub mod hash {
        pub fn poseidon(input: Hash) -> Hash {
            hash(input)
        }
    }
    
    pub mod signature {
        pub fn verify(msg: [u64; 4], sig: [u32; 16], pubkey: [u32; 16]) -> bool {
            __secp256k1_verify(pubkey, msg, sig)
        }
    }
}
}

Traits

Interface definitions for shared behavior across types.

Basic Traits

#![allow(unused)]
fn main() {
pub trait Arithmetic {
    fn add(self, other: Self) -> Self;
    fn multiply(self, other: Self) -> Self;
}

pub trait Display {
    fn show(self) -> Felt;
}
}

Trait Implementation

#![allow(unused)]
fn main() {
struct Point {
    x: Felt,
    y: Felt,
}

impl Arithmetic for Point {
    fn add(self, other: Point) -> Point {
        new Point {
            x: self.x + other.x,
            y: self.y + other.y,
        }
    }
    
    fn multiply(self, other: Point) -> Point {
        new Point {
            x: self.x * other.x,
            y: self.y * other.y,
        }
    }
}
}

Generics

Type parameters for code reuse and type safety.

Generic Functions

#![allow(unused)]
fn main() {
fn identity<T>(value: T) -> T {
    value
}

fn pair<T, U>(first: T, second: U) -> (T, U) {
    (first, second)
}
}

Generic Structs

#![allow(unused)]
fn main() {
struct Container<T> {
    value: T,
}

impl<T> Container<T> {
    pub fn new(value: T) -> Container<T> {
        new Container { value }
    }
    
    pub fn get(self) -> T {
        self.value
    }
}
}

Generic Traits

#![allow(unused)]
fn main() {
trait Convert<T> {
    fn convert(self) -> T;
}

impl Convert<Felt> for u32 {
    fn convert(self) -> Felt {
        self as Felt
    }
}
}

Type System Features

Type Checking

Static type checking ensures type safety at compile time.

#![allow(unused)]
fn main() {
fn type_safe_function(x: Felt, y: bool) -> (Felt, bool) {
    // Compiler verifies all types match
    let result_x: Felt = x + 1;      // ✓ Felt arithmetic
    let result_y: bool = y && true;  // ✓ Boolean logic
    (result_x, result_y)
}
}

Type Hints

Explicit type annotations for clarity and optimization.

#![allow(unused)]
fn main() {
fn with_type_hints() {
    let inferred = 42;              // Type inferred as Felt
    let explicit: Felt = 42;        // Explicit type annotation
    let tuple: (Felt, bool) = (1, true);  // Tuple type hint
}
}

Trait Constraints

Generic type constraints using trait bounds.

#![allow(unused)]
fn main() {
fn generic_add<T: Arithmetic>(a: T, b: T) -> T {
    a.add(b)  // T must implement Arithmetic trait
}

fn display_value<T: Display + Clone>(value: T) -> Felt {
    value.show()  // T must implement both Display and Clone
}
}

Storage and Smart Contracts

Persistent state management for blockchain applications.

Storage Structs

#![allow(unused)]
fn main() {
#[storage]
struct TokenContract {
    // Contract state fields
}

impl TokenContract {
    pub fn mint(amount: Felt) -> Felt {
        let user_id = get_user_id();
        let current_state = get_state_hash_at(user_id);
        let current_balance = current_state[0];
        
        let new_balance = current_balance + amount;
        cset_state_hash_at(user_id, [new_balance, current_state[1], current_state[2], current_state[3]]);
        
        new_balance
    }
}
}

Built-in Functions

ZK-optimized cryptographic and system functions.

Cryptographic Functions

#![allow(unused)]
fn main() {
// Poseidon hash
let data: Hash = [1, 2, 3, 4];
let hash_result: Hash = hash(data);

// ECDSA signature verification
let pubkey = [/* 16 u32 values */];
let message = [/* 4 u64 values */];  
let signature = [/* 16 u32 values */];
let is_valid: bool = __secp256k1_verify(pubkey, message, signature);
}

System Functions

#![allow(unused)]
fn main() {
// State access functions
let user_id: Felt = get_user_id();
let contract_id: Felt = get_contract_id();
let checkpoint: Felt = get_checkpoint_id();
let user_state: Hash = get_state_hash_at(user_id);
}

Development Tools

Dargo Package Manager

Complete project lifecycle management.

# Initialize new project
dargo init

# Create new project
dargo new my_project

# Compile contract
dargo compile --contract-name MyContract --method-names deploy transfer

# Execute with parameters  
dargo execute --contract-name MyContract --method-names mint --parameters 100

# Run tests
dargo test

# Format code
dargo fmt src/main.psy

Language Server Protocol (LSP)

IDE integration for enhanced development experience.

Supported Features:

  • Hover - Type information and documentation
  • Go to Definition - Navigate to symbol definitions
  • Find References - Locate all symbol usages
  • Code Formatting - Automatic code formatting
  • Error Diagnostics - Real-time error reporting

IDE Support:

  • Visual Studio Code
  • Neovim
  • RustRover/IntelliJ

Error Reporting

Precise error diagnostics with line and column information.

#![allow(unused)]
fn main() {
// Error example
fn invalid_function() {
    let x: Felt = true;  // Type mismatch error
    //    ^^^^     ^^^^ 
    //    |        |
    //    |        Expected Felt, found bool
    //    |
    //    Variable declared as Felt
}
}

Error Message:

error[E0308]: mismatched types
 --> src/main.psy:2:19
  |
2 |     let x: Felt = true;
  |            ----   ^^^^ expected `Felt`, found `bool`
  |            |
  |            expected due to this type annotation

Testing Framework

Built-in testing support with the #[test] attribute.

#![allow(unused)]
fn main() {
#[test]
fn test_arithmetic() {
    let result = add(2, 3);
    assert_eq(result, 5, "2 + 3 should equal 5");
}

#[test]
fn test_contract_mint() {
    // Test contract functionality
    let initial_balance = 100;
    let mint_amount = 50;
    let expected = initial_balance + mint_amount;
    
    let result = mint(mint_amount);
    assert(result > initial_balance, "Balance should increase after minting");
}
}

Package System (Crates)

Modular code organization and dependency management.

Dargo.toml Configuration

[package]
name = "my_contract"
version = "0.1.0"
edition = "2024"

[dependencies]
std = "0.1.0"
crypto = "0.2.0"

[lib]
name = "my_contract"
path = "src/lib.psy"

Library Structure

my_project/
├── Dargo.toml
├── src/
│   ├── main.psy
│   ├── lib.psy
│   └── utils/
│       ├── mod.psy
│       ├── math.psy
│       └── crypto.psy
└── tests/
    └── integration_test.psy

Zero-Knowledge Optimizations

All language features are designed with ZK circuit efficiency in mind:

  • Control flow flattening - Branches become arithmetic selections
  • Loop unrolling - Bounded loops are completely expanded
  • Function inlining - Eliminates call overhead
  • Constant folding - Compile-time evaluation
  • Dead code elimination - Unused code paths removed
  • Arithmetic optimization - Efficient field operations

This comprehensive feature set makes Psy a powerful language for developing efficient zero-knowledge smart contracts while maintaining familiar programming patterns and strong type safety.

Before We Begin

Psy Smart Contract Language requires a development environment to write and run programs. This chapter covers the prerequisites: setting up your IDE, installing the compiler, and understanding the basic tools.

If you already have the compiler installed (via git or another method), you can skip to the next chapter.

Installing the Compiler

The Psy compiler (dargo) is available from the official repository at https://github.com/PsyProtocol/psy-compiler.

Installation via Cargo

Currently, the only supported installation method is via Cargo (Rust package manager):

Install the Psy compiler:

cargo install --git https://github.com/PsyProtocol/psy-compiler dargo

Install the Language Server (for IDE support):

cargo install --git https://github.com/PsyProtocol/psy-compiler psy-lsp-server

Prerequisites:

  • Rust toolchain installed (visit rustup.rs to install)
  • Git for cloning the repository

Note: Package managers like Homebrew (macOS) and Chocolatey (Windows) are not currently supported, but may be available in future releases.

Verifying Installation

After installation, verify that both tools are available in your PATH:

# Verify the compiler
dargo --version

# Verify the language server
psy-lsp-server --version

You should see version information for both the Psy compiler and language server.

Environment Setup

Set the DARGO_STD_PATH environment variable to point to the standard library:

# For bash/zsh
export DARGO_STD_PATH="$HOME/Projects/psy-compiler/psy-std/std.psy"

# For fish shell
set -gx DARGO_STD_PATH "$HOME/Projects/psy-compiler/psy-std/std.psy"

Add this to your shell configuration file (.bashrc, .zshrc, or ~/.config/fish/config.fish) to make it persistent.

Setting Up Your IDE

Psy provides Language Server Protocol (LSP) support for enhanced development experience. Currently supported IDEs:

  • Visual Studio Code - Full LSP support with extension
  • Neovim - LSP configuration guide

See the Set up your IDE section for detailed configuration instructions.

LSP Features:

  • Hover for type information
  • Go to definition
  • Find references
  • Code formatting
  • Real-time error diagnostics

Getting Started

Once you have dargo installed and your IDE configured, you're ready to start writing Psy smart contracts! Continue to the Hello, World! chapter to create your first program.

Psy LSP Developer Tutorial

Psy is a custom language with a dedicated Language Server Protocol (LSP) service, providing basic features such as hover, goto definition, find references, and formatting.

This document introduces how to use the Psy language server psy-lsp-server for a better development experience in VSCode and Neovim. Other LSP-capable IDEs (such as RustRover) can be configured through their generic LSP clients; see the Additional IDE Support section below.

🛠️ Preparation

  1. Clone the repository:
  git clone https://github.com/PsyProtocol/psy-compiler.git
  cd psy-compiler
  2. Compile psy-lsp-server:
  cd psy-lsp-server
  cargo build --release

⚠️ Note: Regardless of which IDE you are using, the psy-lsp-server binary is required for the language features to work properly.
Please make sure you have built it and remember its path.

💻 VSCode Usage Tutorial

Developer debugging mode (recommended for developers)

  1. Start VSCode:
  cd psy-lsp-server/psy-lsp-vscode
  code .
  2. Press F5 to enter plugin debugging mode. VSCode will start a new VSCode window and load the local plugin.
  3. In the new window, open a Psy project containing Dargo.toml to enable plugin features, such as:
    • Mouse hover → Show type information
    • Right click → Goto Definition / Find References / Format

💡 Note: In the file psy-lsp-vscode/src/extension.ts, the path to the psy-lsp-server binary is currently hardcoded:

const serverExecutable = path.join(
    // Warning: this path is hardcoded and may not be portable across systems.
    context.extensionPath, '..', '..', 'target', 'release', 'psy-lsp-server'
);

This assumes you've built the LSP server in the psy-compiler directory. If you need to change the path (for example, to use a different build directory or binary location), please modify this line accordingly and then rebuild the extension by running:

  npm run build

🧑‍💻 Neovim Configuration for Psy

This guide shows how to configure Neovim for Psy development using the Language Server Protocol.

⚠️ Prerequisites: Ensure psy-lsp-server is installed and available in your PATH.


1️⃣ File Type Detection

Add this to your Neovim configuration to recognize .psy and .qed files:

-- File type detection for Psy files
vim.api.nvim_create_augroup("FiletypeConfig", { clear = true })

vim.api.nvim_create_autocmd({ "BufNewFile", "BufReadPost" }, {
    pattern = "*.psy",
    group = "FiletypeConfig",
    callback = function()
        vim.bo.filetype = "psy"
    end,
})

vim.api.nvim_create_autocmd({ "BufNewFile", "BufReadPost" }, {
    pattern = "*.qed",
    group = "FiletypeConfig",
    callback = function()
        vim.bo.filetype = "qed"
    end,
})

2️⃣ LSP Configuration

Configure the Psy language server:

-- Configure Psy LSP (vim.lsp.config/vim.lsp.enable require Neovim 0.11+)
vim.lsp.config('psy_lsp', {
    cmd = { "psy-lsp-server" },
    filetypes = { "psy", "qed" },
    root_markers = { "Dargo.toml" },
    settings = {},
})

-- Enable Psy LSP
vim.lsp.enable('psy_lsp')

3️⃣ Syntax Highlighting

Configure Tree-sitter to use Rust highlighting for Psy files:

-- Reuse Rust syntax highlighting for Psy files
vim.treesitter.language.register("rust", "psy")
vim.treesitter.language.register("rust", "qed")

4️⃣ LSP Key Mappings

Set up key bindings for LSP functionality:

-- LSP key mappings
vim.api.nvim_create_autocmd("LspAttach", {
    desc = "LSP actions",
    callback = function(event)
        local bufmap = function(mode, lhs, rhs)
            local opts = { buffer = true }
            vim.keymap.set(mode, lhs, rhs, opts)
        end

        -- Core LSP navigation
        bufmap("n", "gd", "<cmd>lua vim.lsp.buf.definition()<cr>")
        bufmap("n", "gr", "<cmd>lua vim.lsp.buf.references()<cr>")
        bufmap("n", "gi", "<cmd>lua vim.lsp.buf.implementation()<cr>")
        bufmap("n", "gy", "<cmd>lua vim.lsp.buf.type_definition()<cr>")

        -- Code actions and formatting
        bufmap("n", "<leader>f", "<cmd>lua vim.lsp.buf.format()<cr>")
        bufmap("n", "<leader>rn", "<cmd>lua vim.lsp.buf.rename()<cr>")
        bufmap("n", "<leader>ca", "<cmd>lua vim.lsp.buf.code_action()<cr>")
    end,
})

5️⃣ Comment Support

Configure comment strings for Psy files:

-- Comment configuration for Psy
vim.api.nvim_create_autocmd("FileType", {
    pattern = { "psy", "qed" },
    callback = function()
        vim.bo.commentstring = "//%s"
    end,
})

6️⃣ Complete Configuration Example

Here's a complete minimal configuration for Psy development:

-- File type detection
vim.api.nvim_create_augroup("FiletypeConfig", { clear = true })

local filetypes = {
    psy = "*.psy",
    qed = "*.qed",
}

for filetype, pattern in pairs(filetypes) do
    vim.api.nvim_create_autocmd({ "BufNewFile", "BufReadPost" }, {
        pattern = pattern,
        group = "FiletypeConfig",
        callback = function()
            vim.bo.filetype = filetype
        end,
    })
end

-- LSP configuration
vim.lsp.config('psy_lsp', {
    cmd = { "psy-lsp-server" },
    filetypes = { "psy", "qed" },
    root_markers = { "Dargo.toml" },
    settings = {},
})

vim.lsp.enable('psy_lsp')

-- Syntax highlighting
vim.treesitter.language.register("rust", "psy")
vim.treesitter.language.register("rust", "qed")

-- Comment support
vim.api.nvim_create_autocmd("FileType", {
    pattern = { "psy", "qed" },
    callback = function()
        vim.bo.commentstring = "//%s"
    end,
})

-- LSP key mappings
vim.api.nvim_create_autocmd("LspAttach", {
    desc = "LSP actions",
    callback = function(event)
        local bufmap = function(mode, lhs, rhs)
            local opts = { buffer = true }
            vim.keymap.set(mode, lhs, rhs, opts)
        end

        bufmap("n", "gd", "<cmd>lua vim.lsp.buf.definition()<cr>")
        bufmap("n", "gr", "<cmd>lua vim.lsp.buf.references()<cr>")
        bufmap("n", "<leader>f", "<cmd>lua vim.lsp.buf.format()<cr>")
        bufmap("n", "<leader>rn", "<cmd>lua vim.lsp.buf.rename()<cr>")
        bufmap("n", "<leader>ca", "<cmd>lua vim.lsp.buf.code_action()<cr>")
    end,
})

📚 Key Bindings Summary

  • gd: Go to definition
  • gr: Find references
  • gi: Go to implementation
  • gy: Go to type definition
  • <leader>f: Format document
  • <leader>rn: Rename symbol
  • <leader>ca: Code actions

🔧 Additional IDE Support

While VSCode and Neovim have official configuration guides, other IDEs may be supported through generic LSP clients. If you're using a different IDE that supports LSP, you can configure it to use psy-lsp-server as the language server for .psy files.

General LSP Configuration:

  • Server Command: psy-lsp-server
  • File Extensions: *.psy
  • Language ID: psy
  • Root Pattern: Dargo.toml

For specific IDE setup instructions, please refer to your IDE's LSP configuration documentation.
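
As one illustration, the general settings above map onto a Helix languages.toml entry roughly like the sketch below. This is a hypothetical configuration, not an officially supported one; field names follow Helix's current schema and may need adjusting for your Helix version.

```toml
# Hypothetical entry for ~/.config/helix/languages.toml
[language-server.psy-lsp-server]
command = "psy-lsp-server"

[[language]]
name = "psy"
scope = "source.psy"
file-types = ["psy", "qed"]
roots = ["Dargo.toml"]
comment-token = "//"
language-servers = ["psy-lsp-server"]
```

The same four pieces of information (server command, file extensions, language ID, root pattern) should translate similarly to any other generic LSP client.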

Setting up Shell Completions

Shell completions allow you to use the Tab key to autocomplete commands, options, and arguments when working with the Psy CLI tools. This makes development faster and more convenient.

This guide will help you set up shell completions for the Dargo CLI.

Zsh Completions

You can add the completions script to your Zsh completions directory:

# Create the completions directory if it doesn't exist
mkdir -p ~/.zsh/completions

# Generate and save the completion script
dargo complete zsh > ~/.zsh/completions/_dargo

# Add the directory to your fpath in ~/.zshrc if you haven't already
echo 'fpath=(~/.zsh/completions $fpath)' >> ~/.zshrc
echo 'autoload -U compinit && compinit' >> ~/.zshrc

# Reload your shell
source ~/.zshrc

Bash Completions

You can add the completions script to your Bash environment in a few ways:

If you have bash-completion installed:

# Generate and save the completion script to the bash-completion directory
# (the directory varies by system: /usr/local/etc/bash_completion.d with
# Homebrew on macOS, /etc/bash_completion.d on many Linux distributions)
dargo complete bash > /usr/local/etc/bash_completion.d/dargo

Without bash-completion, you can add it to your profile:

# Create a completions directory if you don't have one
mkdir -p ~/.bash_completions

# Generate and save the completion script
dargo complete bash > ~/.bash_completions/dargo.bash

# Add the following line to your ~/.bashrc (or ~/.bash_profile on macOS)
echo 'source ~/.bash_completions/dargo.bash' >> ~/.bashrc

# Reload your shell
source ~/.bashrc

Fish Completions

For Fish shell users, you can easily set up completions:

# Generate and save the completion script to the Fish completions directory
dargo complete fish > ~/.config/fish/completions/dargo.fish

No additional steps are needed as Fish automatically loads completions from this directory.

Verifying Completions

To verify that completions are working correctly, type dargo followed by a space and press Tab. You should see a list of available commands.

Try:

dargo <TAB>

You should see a list of commands like new, build, compile, etc.

Troubleshooting

If completions aren't working:

  1. Make sure you've restarted your terminal or sourced your shell configuration file (e.g. .zshrc or .bashrc)
  2. Check that the dargo complete <shell> command works and produces output for your shell
  3. Verify that the path to the completion script is correct
  4. Ensure that completions are enabled in your shell configuration (e.g. compinit for Zsh)

Hello, World!

This chapter walks you through creating your first Psy Smart Contract Language program.

Creating a New Program

Create a new project:

dargo new hello_world

This creates a new directory with a basic project structure. Navigate to the project and edit the src/main.psy file:

// input: 2,3
// output: 5
fn main(a: Felt, b: Felt) -> Felt {
    assert(a < b, "a should be less than b");
    assert_eq(b - a, 1, "b - a should equal 1");
    return a + b;
}

This simple program:

  • Takes two Felt parameters (a and b)
  • Asserts that a is less than b
  • Asserts that the difference is exactly 1
  • Returns the sum of a and b

Compiling

Compile the program using the compiler:

dargo compile

You will see a target directory created with the compiled output. The Psy compiler generates DPN opcodes and circuit data for each function.

[
    {
        "name": "main",
        "method_id": 680917438,
        "circuit_inputs": [
            0,
            1
        ],
        "circuit_outputs": [],
        "state_commands": [],
        "state_command_resolution_indices": [],
        "assertions": [
            {
                "left": 4294967296,
                "right": 4294967297,
                "message": "x != y"
            }
        ],
        "definitions": [
            {
                "data_type": 0,
                "index": 0,
                "op_type": 0,
                "inputs": [
                    0
                ]
            },
            {
                "data_type": 0,
                "index": 1,
                "op_type": 0,
                "inputs": [
                    1
                ]
            },
            {
                "data_type": 1,
                "index": 0,
                "op_type": 13,
                "inputs": [
                    0,
                    1
                ]
            },
            {
                "data_type": 1,
                "index": 1,
                "op_type": 2,
                "inputs": [
                    1
                ]
            }
        ]
    }
]

Running

Execute the program with test inputs:

dargo execute --program-dir . --entry-path src/main.psy --parameters 2,3

This runs the main function with the parameters 2 and 3. You should see result_vm: [5] in the output.

Basic Syntax

Psy Smart Contract Language uses a syntax inspired by Rust. Here are the basics:

Variables

Variables are declared with let and can be mutable with mut:

Basic Type Assignment

#![allow(unused)]
fn main() {
// Felt type assignment
let a: Felt = 1;
let b = 42; // Type inference
let mut c: Felt = 0;
c = 100;

// Boolean assignment
let flag: bool = true;
let mut status = false;
status = true;

// u32 assignment  
let count: u32 = 42u32;
let mut index = 0u32;
index = 5u32;
}

Array Assignment

#![allow(unused)]
fn main() {
// Fixed-size array assignment
let numbers: [Felt; 3] = [1, 2, 3];
let mut values: [u32; 5] = [0u32, 1u32, 2u32, 3u32, 4u32];

// Individual element assignment
values[0] = 10u32;
values[1] = 20u32;

// Nested array assignment
let mut matrix: [[Felt; 2]; 3] = [[1, 2], [3, 4], [5, 6]];
matrix[0][1] = 99;
}

Tuple Assignment

#![allow(unused)]
fn main() {
// Tuple assignment
let point: (Felt, Felt) = (10, 20);
let mixed: (Felt, bool, u32) = (42, true, 100u32);

// Individual element assignment
let mut coordinates: (Felt, Felt) = (0, 0);
coordinates.0 = 5;
coordinates.1 = 15;

// Nested tuple assignment
let mut complex: ((Felt, Felt), (Felt, Felt)) = ((1, 2), (3, 4));
complex.0.1 = 42; // Assign to nested element
}

Struct Assignment

#![allow(unused)]
fn main() {
struct Point {
    pub x: Felt,
    pub y: Felt,
}

// Struct creation assignment
let origin: Point = new Point { x: 0, y: 0 };

// Field assignment
let mut position: Point = new Point { x: 10, y: 20 };
position.x = 15;
position.y = 25;

// Struct with complex types
struct Container {
    pub data: [Felt; 3],
    pub coords: (Felt, Felt),
}

let mut container: Container = new Container {
    data: [1, 2, 3],
    coords: (10, 20),
};
container.data[0] = 99;
container.coords.1 = 30;
}

Comments

Psy supports both single-line and multi-line comments:

#![allow(unused)]
fn main() {
// Single-line comment
let a: Felt = 1;

/*
 * Multi-line comment
 * across multiple lines
 */
let b: Felt = 2;

/* Single-line block comment */
let c: Felt = 3;
}

Placement

Comments can be placed in these locations:

#![allow(unused)]
fn main() {
// Comment before struct
struct Point {
    pub x: Felt,
    // Comment between fields
    pub y: Felt,
}

// Comment before function
fn calculate(a: Felt, b: Felt) -> Felt {
    // Comment inside function
    let result = a + b;
    return result;
}

#[test]
fn test_function() {
    // Comment in test
    assert_eq(calculate(1, 2), 3, "should be 3");
}
}

Limitations

Comments cannot be placed in these locations:

#![allow(unused)]
fn main() {
// ❌ Invalid: After attributes
#[test] // This causes an error
fn test_func() { }

// ❌ Invalid: Inside parameter lists
fn invalid_func(
    // This causes an error
    a: Felt,
    b: Felt
) -> Felt { a + b }

// ❌ Invalid: In middle of declarations  
struct InvalidStruct {
    pub /* error here */ x: Felt,
}
}

Operators

This chapter covers all operators available in Psy, including arithmetic, logical, comparison, bitwise, and special operators.

Arithmetic Operators

Basic Arithmetic

Psy supports standard arithmetic operations for both Felt and u32 types:

fn main() {
    let a = 10;
    let b = 3;
    
    // Addition
    let sum = a + b;        // 13
    
    // Subtraction
    let diff = a - b;       // 7
    
    // Multiplication
    let product = a * b;    // 30
    
    // Division
    let quotient = a / b;   // 3 (integer division)
    
    // Modulo
    let remainder = a % b;  // 1
}

Exponentiation

Psy provides the exponentiation operator **:

fn main() {
    let base = 2;
    let exponent = 8;
    let result = base ** exponent;  // 256
    
    // u32 exponentiation
    let u32_base = 2u32;
    let u32_exp = 5u32;
    let u32_result = u32_base ** u32_exp;  // 32u32
    
    // Large numbers: Felt arithmetic is modular, so 2 ** 64 wraps around
    let large = 2 ** 64;  // 4294967295 (2^64 mod p = 2^32 - 1)
}

Unary Operators

fn main() {
    let positive = 5;
    let negative = -positive;  // -5
    
    // Negation of zero
    let zero = 0;
    let neg_zero = -zero;  // Still 0
}

Comparison Operators

All comparison operators return bool values:

fn main() {
    let a = 10;
    let b = 5;
    let c = 10;
    
    // Equality
    let equal = a == c;        // true
    let not_equal = a != b;    // true
    
    // Ordering
    let less = b < a;          // true
    let less_equal = b <= a;   // true
    let greater = a > b;       // true
    let greater_equal = a >= c; // true
}

Type-specific Comparisons

fn main() {
    // u32 comparisons
    let u32_a = 100u32;
    let u32_b = 50u32;
    let u32_less = u32_b < u32_a;  // true
    
    // bool comparisons
    let bool_a = true;
    let bool_b = false;
    let bool_equal = bool_a == bool_b;  // false
}

Logical Operators

Boolean Logic

Logical operators work with bool types:

fn main() {
    let true_val = true;
    let false_val = false;
    
    // Logical AND
    let and_result = true_val && false_val;   // false
    
    // Logical OR
    let or_result = true_val || false_val;    // true
    
    // Logical XOR
    let xor_result = true_val ^ false_val;    // true
    
    // Logical NOT
    let not_result = !false_val;              // true
}

Felt Logic

For Felt values, logical NOT treats 0 as false and 1 as true:

fn main() {
    let zero = 0;
    let one = 1;
    
    let not_zero = !zero;  // 1 (true)
    let not_one = !one;    // 0 (false)
}

Bitwise Operators (u32 only)

Bitwise operations are only available for u32 type:

fn main() {
    let a = 0b11110000u32;  // 240
    let b = 0b00111100u32;  // 60
    
    // Bitwise AND
    let and_bits = a & b;   // 0b00110000 = 48
    
    // Bitwise OR
    let or_bits = a | b;    // 0b11111100 = 252
    
    // Bitwise XOR
    let xor_bits = a ^ b;   // 0b11001100 = 204
}

Bit Shifting

fn main() {
    let value = 0b11111111u32;  // 255
    
    // Left shift
    let left_shift = value << 1u32;   // 0b111111110 = 510
    let left_shift_8 = value << 8u32; // 65280
    
    // Right shift
    let right_shift = value >> 1u32;  // 0b01111111 = 127
    let right_shift_4 = value >> 4u32; // 15
    
    // Shift by 32 or more results in zero
    let zero_result = value << 32u32;  // 0
}

Advanced Bitwise Examples

fn main() {
    let max_u32 = 4294967295u32;  // 0xFFFFFFFF
    let high_bit = 2147483648u32;  // 0x80000000
    
    // Extract specific bits
    let masked = max_u32 & high_bit;  // 2147483648u32
    
    // Set all bits except high bit
    let almost_max = max_u32 ^ high_bit;  // 2147483647u32
    
    // Combine values
    let combined = high_bit | 42u32;  // 2147483690u32
}

Special Operators

Bit Manipulation Functions

Psy provides special functions for bit manipulation:

fn main() {
    let value = 12345;
    
    // Split a Felt into individual bits
    let bits = __split_bits(value, 32);  // Returns [Felt; 32]
    
    // Reconstruct from bits
    let reconstructed = __sum_bits(bits);
    // reconstructed equals original value
    
    // Working with specific bit positions
    let bit_0 = bits[0];   // Least significant bit
    let bit_15 = bits[15]; // 16th bit from right
}
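
For intuition, the same little-endian bit decomposition can be sketched in plain Python (not Psy); the function names `split_bits` and `sum_bits` here are illustrative stand-ins for the built-ins:

```python
# Plain-Python analogue of __split_bits / __sum_bits, shown only to
# illustrate the little-endian bit decomposition and reconstruction.
def split_bits(value: int, n: int) -> list[int]:
    """Return the n least significant bits of value, bit 0 first."""
    return [(value >> i) & 1 for i in range(n)]

def sum_bits(bits: list[int]) -> int:
    """Reconstruct the integer from its little-endian bit list."""
    return sum(b << i for i, b in enumerate(bits))

bits = split_bits(12345, 32)
print(bits[0])          # 1 (12345 is odd, so the least significant bit is set)
print(sum_bits(bits))   # 12345 (round-trips back to the original value)
```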

Advanced Bit Operations

fn main() {
    let large_number = 2 ** 32;  // 4294967296
    
    // Split into 33 bits (2^32 needs 33 bits)
    let bits_33 = __split_bits(large_number, 33);
    
    // Verify specific bits
    let bit_31 = bits_33[31];  // 0
    let bit_32 = bits_33[32];  // 1 (the 2^32 bit)
    
    // Reconstruct
    let sum = __sum_bits(bits_33);  // Equals large_number
}

Type Casting

Casting Between Types

fn main() {
    // bool to other types
    let bool_val = true;
    let bool_as_u32 = bool_val as u32;   // 1u32
    let bool_as_felt = bool_val as Felt;  // 1
    
    // u32 to other types
    let u32_val = 42u32;
    let u32_as_felt = u32_val as Felt;   // 42
    let u32_as_bool = 1u32 as bool;      // true (only 0 and 1 are valid)
    
    // Felt to other types
    let felt_val = 123;
    let felt_as_u32 = felt_val as u32;   // 123u32 (if in valid range)
    let felt_as_bool = 0 as bool;        // false
}

Casting Constraints

fn main() {
    // Valid bool casts
    let zero_bool = 0 as bool;      // false
    let one_bool = 1 as bool;       // true
    
    // Invalid bool cast (would panic)
    // let invalid_bool = 2 as bool;  // Error: Invalid bool value
    
    // Valid u32 range
    let max_u32_felt = 4294967295;
    let valid_u32 = max_u32_felt as u32;  // 4294967295u32
    
    // Invalid u32 cast (would panic)
    // let invalid_u32 = 4294967296 as u32;  // Error: Invalid u32 value
}

Field Arithmetic (Felt)

Psy operates over the Goldilocks field with prime p = 18446744069414584321:

fn main() {
    let zero = 0;
    let one = 1;
    let two = 2;
    
    // Field arithmetic wraps around at the prime
    let p_minus_one = zero - one;  // 18446744069414584320 (p-1)
    
    // Division in field arithmetic is multiplicative inverse
    // NOT integer division - computes x such that (divisor * x) ≡ dividend (mod p)
    let half = p_minus_one / two;  // (p-1)/2 exactly, since p-1 is even
    let inv_two = one / two;       // 1/2 in field arithmetic = (p+1)/2
    
    // Verify: division result times divisor equals dividend
    let verify = inv_two * two;    // Should equal 1
    
    // Multiplication
    let result = p_minus_one * p_minus_one;  // 1 (since (-1)² = 1)
}
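
To build intuition for what field division returns, here is a quick check in ordinary Python (not Psy) over the same Goldilocks prime:

```python
# Goldilocks prime used by Psy's Felt type
p = 18446744069414584321  # 2**64 - 2**32 + 1

# Field division a / b computes a * inverse(b) mod p;
# Python's three-argument pow gives the modular inverse.
inv_two = pow(2, -1, p)
print(inv_two)                # 9223372034707292161, i.e. (p + 1) // 2
print(inv_two * 2 % p)        # 1: dividing by 2 then multiplying by 2 round-trips

# (-1)^2 == 1 in the field, matching the (p-1) * (p-1) result
print((p - 1) * (p - 1) % p)  # 1
```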

Large Number Examples

fn main() {
    // Working with large field elements
    let large = 2 ** 63;  // 9223372036854775808
    let modulo_result = large % (3 ** 35);  // 17567738638829720
    
    // Field inverse behavior
    let p = 18446744069414584321;  // Field prime
    let one = 1;
    let two = 2;
    let inv_two = one / two;  // Multiplicative inverse of 2
}

Operator Precedence

Operators follow standard mathematical precedence:

  1. Unary operators: -, !
  2. Exponentiation: ** (right-associative)
  3. Multiplicative: *, /, %
  4. Additive: +, -
  5. Shift: <<, >>
  6. Bitwise AND: &
  7. Bitwise XOR: ^
  8. Bitwise OR: |
  9. Comparison: <, <=, >, >=, ==, !=
  10. Logical AND: &&
  11. Logical OR: ||
fn main() {
    let result = 2 + 3 * 4 ** 2;  // 2 + (3 * (4 ** 2)) = 2 + 48 = 50
    let complex = 8 / 2 + 3 * 2;  // (8 / 2) + (3 * 2) = 4 + 6 = 10
    
    // Use parentheses for clarity
    let explicit = (2 + 3) * (4 ** 2);  // 5 * 16 = 80
}

Type Compatibility

Compatible Operations

fn main() {
    // Same-type operations
    let felt_result = 10 + 20;          // Felt + Felt
    let u32_result = 10u32 + 20u32;     // u32 + u32
    let bool_result = true && false;    // bool && bool
    
    // Mixed with literals
    let mixed1 = 10u32 + 5u32;          // u32 + u32 literal
    let mixed2 = 10 + 5;                // Felt + Felt literal
}

Type Errors

fn main() {
    // These would cause compilation errors:
    // let invalid1 = 10 + 10u32;       // Error: Felt + u32
    // let invalid2 = true + false;     // Error: bool + bool
    // let invalid3 = 10u32 && 20u32;   // Error: u32 && u32
    
    // Use explicit casting instead:
    let valid1 = 10 + (10u32 as Felt);  // Cast u32 to Felt
    let valid2 = (10 as u32) + 10u32;   // Cast Felt to u32
}

Performance Considerations

ZK-Circuit Friendly Operations

Some operations are more efficient in zero-knowledge circuits:

fn main() {
    let a = 3;
    let b = 4;
    let c = 5;
    let a_u32 = 12u32;
    let b_u32 = 10u32;
    
    // Efficient: Basic arithmetic
    let efficient = a + b * c;
    
    // Less efficient: Division (requires field inversion)
    let less_efficient = a / b;
    
    // Moderately efficient: Comparisons
    let comparison = a < b;
    
    // Bit operations are expanded to constraints
    let bitwise = a_u32 & b_u32;  // Generates multiple constraints
}

Key Points

  1. Arithmetic: Standard +, -, *, /, %, ** operators
  2. Comparison: All comparison operators return bool
  3. Logical: &&, ||, ^, ! for boolean logic
  4. Bitwise: &, |, ^, <<, >> for u32 only
  5. Casting: Explicit type conversion with as keyword
  6. Field Arithmetic: Operations over Goldilocks field for Felt
  7. Type Safety: Operations must be between compatible types
  8. Bit Functions: __split_bits and __sum_bits for bit manipulation
  9. Precedence: Standard mathematical operator precedence
  10. ZK Optimization: Some operations are more circuit-friendly than others

Structs and Implementations

Structs define custom data types that group related data together, and impl blocks add methods to operate on that data. This chapter covers struct definition, implementation patterns, and method types.

Defining Structs

Basic Struct Definition

#![allow(unused)]
fn main() {
// Public struct with mixed field visibility
pub struct Person {
    pub age: Felt,        // Public field - accessible from anywhere
    pub name_hash: Felt,  // Public field
    is_active: bool,      // Private field - only accessible within this module
}

// Private struct - only accessible within this module
struct InternalData {
    value: Felt,          // Private field in private struct
}

pub struct Point {
    pub x: Felt,          // Public field
    pub y: Felt,          // Public field
}

pub struct Rectangle {
    pub top_left: Point,  // Public field
    pub width: Felt,      // Public field
    pub height: Felt,     // Public field
}
}

Visibility Rules

Struct Visibility:

  • struct Name - Private struct, only accessible within the same module
  • pub struct Name - Public struct, accessible from other modules

Field Visibility:

  • field: Type - Private field, only accessible within the struct's own methods
  • pub field: Type - Public field, accessible wherever the struct is accessible
#![allow(unused)]
fn main() {
// Examples of visibility combinations
pub struct PublicStruct {
    pub public_field: Felt,    // ✅ Accessible wherever struct is accessible
    private_field: Felt,       // ❌ Only accessible within struct's own methods
}

struct PrivateStruct {
    pub public_field: Felt,    // ❌ Still not accessible outside module (struct is private)
    private_field: Felt,       // ❌ Only accessible within struct's own methods
}

// Demonstrating private field access
pub struct BankAccount {
    pub account_number: Felt,
    balance: Felt,  // Private - can only be accessed via methods
}

impl BankAccount {
    pub fn new(account_number: Felt, initial_balance: Felt) -> BankAccount {
        return new BankAccount {
            account_number: account_number,
            balance: initial_balance,  // ✅ Can access private field in constructor
        };
    }
    
    pub fn get_balance(self) -> Felt {
        return self.balance;  // ✅ Can access private field in method
    }
    
    pub fn deposit(mut self, amount: Felt) -> BankAccount {
        self.balance = self.balance + amount;  // ✅ Can modify private field in method
        return self;
    }
}
}

Structs with Arrays and Tuples

#![allow(unused)]
fn main() {
struct Vector3D {
    pub components: [Felt; 3],
    pub magnitude: Felt,
}

struct BoundingBox {
    pub corners: (Point, Point), // (min_corner, max_corner)
    pub is_valid: bool,
}

struct Matrix2x2 {
    pub data: [[Felt; 2]; 2],
    pub determinant: Felt,
}
}

Implementation Blocks

Instance Methods

Instance methods operate on a specific instance of the struct. In Psy, the self parameter follows the same passing rules as function parameters:

self Parameter Passing:

  • Structs with only value types (Felt, u32, bool) - Passed by copy
  • Structs with reference types (arrays, other structs) - Passed by move/reference
  • Mixed structs - Follow the most restrictive member (move/reference if any member requires it)

Important: Psy only supports self and mut self syntax, never &self or &mut self.

Method Visibility:

  • fn method_name - Private method, only callable within the same module
  • pub fn method_name - Public method, callable wherever the struct is accessible
#![allow(unused)]
fn main() {
impl Point {
    // Public instance method - accessible from anywhere
    pub fn distance_from_origin(self) -> Felt {
        // Simplified distance calculation
        return self.x + self.y;
    }
    
    // Public instance method that modifies the point
    pub fn translate(mut self, dx: Felt, dy: Felt) -> Point {
        self.x = self.x + dx;
        self.y = self.y + dy;
        return self;
    }
    
    // Public instance method that uses other structs
    pub fn distance_to(self, other: Point) -> Felt {
        return self.calculate_manhattan_distance(other);
    }
    
    // Private helper method - only usable within this module
    fn calculate_manhattan_distance(self, other: Point) -> Felt {
        let dx = if self.x > other.x { self.x - other.x } else { other.x - self.x };
        let dy = if self.y > other.y { self.y - other.y } else { other.y - self.y };
        return dx + dy;
    }
}
}

Static Methods (Associated Functions)

Static methods don't take self and are called on the type itself, not an instance.

#![allow(unused)]
fn main() {
impl Point {
    // Static method - constructor
    pub fn new(x: Felt, y: Felt) -> Point {
        return new Point { x: x, y: y };
    }
    
    // Static method - create special points
    pub fn origin() -> Point {
        return new Point { x: 0, y: 0 };
    }
    
    // Static method - utility functions
    // Note: Felt division is field division, so this yields the integer
    // midpoint only when the coordinate sums are even
    pub fn midpoint(p1: Point, p2: Point) -> Point {
        let mid_x = (p1.x + p2.x) / 2;
        let mid_y = (p1.y + p2.y) / 2;
        return new Point { x: mid_x, y: mid_y };
    }
}
}

The self Parameter

Important: Psy only supports self syntax, not &self or &mut self:

#![allow(unused)]
fn main() {
impl Rectangle {
    // ✅ Valid: self moves the struct into the method
    pub fn area(self) -> Felt {
        return self.width * self.height;
    }
    
    // ✅ Valid: mut self allows modification
    pub fn scale(mut self, factor: Felt) -> Rectangle {
        self.width = self.width * factor;
        self.height = self.height * factor;
        return self;
    }
    
    // ✅ Valid: self is alias for self: Self
    pub fn perimeter(self) -> Felt {
        return 2 * (self.width + self.height);
    }
    
    // ❌ Invalid: Psy doesn't support reference syntax
    // pub fn invalid_method(&self) -> Felt { ... }
    // pub fn invalid_mut(&mut self) { ... }
}
}

Complex Examples

Vector3D Implementation

#![allow(unused)]
fn main() {
impl Vector3D {
    pub fn new(x: Felt, y: Felt, z: Felt) -> Vector3D {
        let components = [x, y, z];
        let magnitude = x * x + y * y + z * z; // Simplified magnitude
        return new Vector3D {
            components: components,
            magnitude: magnitude,
        };
    }
    
    pub fn dot_product(self, other: Vector3D) -> Felt {
        return self.components[0] * other.components[0] +
               self.components[1] * other.components[1] +
               self.components[2] * other.components[2];
    }
    
    pub fn scale(mut self, factor: Felt) -> Vector3D {
        self.components[0] = self.components[0] * factor;
        self.components[1] = self.components[1] * factor;
        self.components[2] = self.components[2] * factor;
        self.magnitude = self.magnitude * (factor * factor);
        return self;
    }
    
    pub fn get_x(self) -> Felt {
        return self.components[0];
    }
    
    pub fn get_y(self) -> Felt {
        return self.components[1];
    }
    
    pub fn get_z(self) -> Felt {
        return self.components[2];
    }
}
}

Matrix2x2 Implementation

#![allow(unused)]
fn main() {
impl Matrix2x2 {
    pub fn new(a: Felt, b: Felt, c: Felt, d: Felt) -> Matrix2x2 {
        let data = [[a, b], [c, d]];
        let determinant = a * d - b * c;
        return new Matrix2x2 {
            data: data,
            determinant: determinant,
        };
    }
    
    pub fn identity() -> Matrix2x2 {
        return Matrix2x2::new(1, 0, 0, 1);
    }
    
    pub fn multiply(self, other: Matrix2x2) -> Matrix2x2 {
        let a = self.data[0][0] * other.data[0][0] + self.data[0][1] * other.data[1][0];
        let b = self.data[0][0] * other.data[0][1] + self.data[0][1] * other.data[1][1];
        let c = self.data[1][0] * other.data[0][0] + self.data[1][1] * other.data[1][0];
        let d = self.data[1][0] * other.data[0][1] + self.data[1][1] * other.data[1][1];
        
        return Matrix2x2::new(a, b, c, d);
    }
    
    pub fn apply_to_point(self, point: Point) -> Point {
        let new_x = self.data[0][0] * point.x + self.data[0][1] * point.y;
        let new_y = self.data[1][0] * point.x + self.data[1][1] * point.y;
        return new Point { x: new_x, y: new_y };
    }
}
}

Usage Examples

#![allow(unused)]
fn main() {
#[test]
fn test_point_operations() {
    // Using static methods
    let origin = Point::origin();
    let p1 = Point::new(3, 4);
    let p2 = Point::new(6, 8);
    
    // Using instance methods
    let distance = p1.distance_from_origin();
    let moved = p1.translate(2, 1);
    let mid = Point::midpoint(p1, p2);
    
    assert_eq(distance, 7, "distance should be 3 + 4 = 7");
    assert_eq(moved.x, 5, "moved x should be 3 + 2 = 5");
    assert_eq(mid.x, 4, "midpoint x should be (3 + 6) / 2 = 4");
}

#[test]
fn test_vector_operations() {
    let v1 = Vector3D::new(1, 2, 3);
    let v2 = Vector3D::new(4, 5, 6);
    
    let dot = v1.dot_product(v2);
    let scaled = v1.scale(2);
    
    assert_eq(dot, 32, "dot product should be 1*4 + 2*5 + 3*6 = 32");
    assert_eq(scaled.get_x(), 2, "scaled x should be 1 * 2 = 2");
}

#[test]
fn test_matrix_operations() {
    let m1 = Matrix2x2::new(1, 2, 3, 4);
    let identity = Matrix2x2::identity();
    let point = Point::new(5, 7);
    
    let result = m1.multiply(identity);
    let transformed = m1.apply_to_point(point);
    
    assert_eq(result.data[0][0], 1, "identity multiplication preserves values");
    assert_eq(transformed.x, 19, "transformed x should be 1*5 + 2*7 = 19");
}
}

Method Call Syntax

#![allow(unused)]
fn main() {
// Both syntaxes are equivalent
let p = Point::new(3, 4);

// Method syntax (recommended)
let distance1 = p.distance_from_origin();

// Function syntax (also valid)
let distance2 = Point::distance_from_origin(p);

// Static methods can only be called with :: syntax
let origin = Point::origin();
let p2 = Point::new(6, 8);
let mid = Point::midpoint(p, p2);
}

Arrays and Tuples

This chapter covers arrays and tuples in Psy, two important data structures for grouping values together.

Arrays

Arrays in Psy are fixed-size sequences of elements of the same type. They are defined with the syntax [T; N] where T is the element type and N is the compile-time known size.

Basic Array Usage

fn main() {
    // Array of 5 Felt values
    let numbers = [1, 2, 3, 4, 5];
    
    // Array with explicit type annotation
    let coordinates: [Felt; 3] = [10, 20, 30];
    
    // Array initialized with same value
    let zeros = [0; 4]; // [0, 0, 0, 0]
    
    // Accessing elements
    let first = numbers[0];
    let third = coordinates[2];
}

Arrays in Structs

struct HW {
    pub height: Felt,
    pub weight: Felt,
}

struct Person {
    pub age: Felt,
    pub hw: [HW; 2],
}

fn main() {
    let hw1 = new HW { height: 180, weight: 140 };
    let hw2 = new HW { height: 175, weight: 110 };
    
    let person = new Person {
        age: 25,
        hw: [hw1, hw2],
    };
    
    // Access nested array elements
    let first_height = person.hw[0].height;
    let second_weight = person.hw[1].weight;
}

Nested Arrays

fn main() {
    // 2D array (matrix)
    let matrix: [[Felt; 3]; 2] = [
        [1, 2, 3],
        [4, 5, 6]
    ];
    
    // Accessing nested elements
    let element = matrix[1][2]; // Gets 6
    
    // 3D array
    let cube: [[[Felt; 2]; 2]; 2] = [
        [[1, 2], [3, 4]],
        [[5, 6], [7, 8]]
    ];
    
    let deep_value = cube[1][0][1]; // Gets 6
}

Mutable Array Operations

fn main() {
    let hw1 = new HW { height: 180, weight: 140 };
    let hw2 = new HW { height: 175, weight: 110 };
    
    let person1 = new Person { age: 8, hw: [hw1, hw1] };
    let person2 = new Person { age: 18, hw: [hw2, hw2] };
    
    let mut people: [Person; 2] = [person1, person2];
    
    // Modify array elements
    people[0].hw[1] = new HW { height: 160, weight: 110 };
    
    // Access modified values
    let total_height = people[0].hw[0].height + people[0].hw[1].height;
}

Array Methods with Generics

Arrays support methods through generic implementations. However, specialized implementations for specific array types are not currently supported:

impl<T, N: u32> [T; N] {
    pub const fn len() -> u32 {
        return N;
    }
}

fn main() {
    // Call len() method on array type
    let length = <[Felt; 5]>::len(); // Returns 5
}

Working with Arrays Using Functions

Since specialized array methods for specific element types are not currently supported, use regular functions to operate on fixed-size arrays:

fn array_sum(arr: [Felt; 3]) -> Felt {
    return arr[0] + arr[1] + arr[2];
}

fn array_dot_product(a: [Felt; 3], b: [Felt; 3]) -> Felt {
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
}

fn array_max(arr: [Felt; 4]) -> Felt {
    let mut max_val = arr[0];
    if arr[1] > max_val { max_val = arr[1]; };
    if arr[2] > max_val { max_val = arr[2]; };
    if arr[3] > max_val { max_val = arr[3]; };
    return max_val;
}

fn main() {
    let v1 = [1, 2, 3];
    let v2 = [4, 5, 6];
    
    let sum = array_sum(v1); // Returns 6
    let dot = array_dot_product(v1, v2); // Returns 32
    
    let grades = [85, 92, 78, 95];
    let highest = array_max(grades); // Returns 95
}

Tuples

Tuples are fixed-size sequences that can contain elements of different types. They are defined with parentheses and comma-separated values.

Basic Tuple Usage

fn main() {
    // Basic tuple assignment
    let a: Felt = 10;
    let b: Felt = 11;
    let c: (Felt, Felt) = (12, 13);
    let d: (Felt, Felt) = (a, b);
    
    // Mutable tuple
    let mut e: (Felt, Felt) = (14, 15);
    
    // Change tuple elements
    e.0 = 16;
    e.1 = 17;
    e = (18, 19);
    
    // Accessing tuple elements
    let first = c.0;  // Gets 12
    let second = c.1; // Gets 13
}

Nested Tuples

fn main() {
    // Nested tuple
    let nested_tuple: ((Felt, Felt), (Felt, Felt)) = ((1, 2), (3, 4));
    
    // Deeply nested tuple
    let deeply_nested: (((Felt, Felt), Felt), Felt) = (((5, 6), 7), 8);
    
    // Modify nested tuples
    let mut complex_tuple: ((Felt, Felt), (Felt, Felt)) = ((9, 10), (11, 12));
    complex_tuple.0.1 = 42;  // Change nested element
    complex_tuple.1 = (20, 21); // Replace entire sub-tuple
    
    // Access nested elements
    let value = nested_tuple.0.1;  // Gets 2
    let deep_value = deeply_nested.0.0.1; // Gets 6
}

Tuples in Structs

struct TupleHolder {
    pub pair: (Felt, Felt),
    pub nested: ((Felt, Felt), Felt),
}

fn main() {
    let mut struct_with_tuple = new TupleHolder {
        pair: (22, 23),
        nested: ((24, 25), 26),
    };
    
    // Modify tuple fields in struct
    struct_with_tuple.pair.1 = 99;
    struct_with_tuple.nested.0.0 = 88;
    
    // Access tuple elements in struct
    let pair_first = struct_with_tuple.pair.0;
    let nested_value = struct_with_tuple.nested.1;
}

Functions with Tuples

fn create_tuple(x: Felt, y: Felt) -> (Felt, Felt) {
    return (x + 1, y + 1);
}

fn sum_tuple(input: (Felt, Felt)) -> Felt {
    return input.0 + input.1;
}

fn swap_tuple(input: (Felt, Felt)) -> (Felt, Felt) {
    return (input.1, input.0);
}

fn process_nested(input: ((Felt, Felt), Felt)) -> Felt {
    return input.0.0 + input.0.1 + input.1;
}

fn main() {
    // Function returning tuple
    let returned_tuple = create_tuple(30, 31);
    // returned_tuple is (31, 32)
    
    // Function taking tuple as parameter
    let result = sum_tuple((40, 50));
    // result is 90
    
    // More complex operations
    let swapped = swap_tuple((10, 20));
    // swapped is (20, 10)
    
    let nested_result = process_nested(((1, 2), 3));
    // nested_result is 6
}

Tuple Parameter Passing

Tuples follow mixed semantics based on their member types:

fn main() {
    // Tuple with only value types - passed by copy
    let simple_tuple = (1, 2, true);
    process_simple(simple_tuple);
    
    // Tuple with reference types - passed by move/reference
    let array_tuple = ([1, 2, 3], 42);
    process_complex(array_tuple);
}

fn process_simple(t: (Felt, Felt, bool)) {
    let sum = t.0 + t.1;
    let flag = t.2;
}

fn process_complex(t: ([Felt; 3], Felt)) {
    let array_sum = t.0[0] + t.0[1] + t.0[2];
    let extra = t.1;
}

Current Limitations

Tuple Destructuring

Note: Tuple destructuring is not currently supported:

fn main() {
    // ❌ Not supported yet
    // let (x, y) = (50, 60);
    // let ((a, b), c) = ((1, 2), 3);
    
    // ✅ Use explicit access instead
    let pair = (50, 60);
    let x = pair.0;
    let y = pair.1;
}

Arrays vs Tuples

| Feature           | Arrays                   | Tuples                   |
|-------------------|--------------------------|--------------------------|
| Element types     | Same type only           | Different types allowed  |
| Size              | Fixed at compile time    | Fixed at compile time    |
| Access            | Index notation arr[0]    | Dot notation tuple.0     |
| Methods           | Generic methods only     | No custom methods        |
| Parameter passing | Always by reference      | Mixed, based on members  |
| Destructuring     | Not supported            | Not supported (yet)      |
| Mutability        | Elements can be modified | Elements can be modified |

Complex Examples

Game Inventory System

struct Item {
    pub id: Felt,
    pub quantity: Felt,
}

struct Player {
    pub health: Felt,
    pub position: (Felt, Felt),
    pub inventory: [Item; 5],
    pub stats: (Felt, Felt, Felt), // (strength, defense, speed)
}

fn main() {
    let sword = new Item { id: 1, quantity: 1 };
    let potion = new Item { id: 2, quantity: 3 };
    let empty_slot = new Item { id: 0, quantity: 0 };
    
    let mut player = new Player {
        health: 100,
        position: (10, 20),
        inventory: [sword, potion, empty_slot, empty_slot, empty_slot],
        stats: (15, 10, 12)
    };
    
    // Move player
    player.position.0 = player.position.0 + 5;
    player.position.1 = player.position.1 - 2;
    
    // Add item to inventory
    player.inventory[2] = new Item { id: 3, quantity: 2 };
    
    // Increase stats
    player.stats.0 = player.stats.0 + 1; // strength + 1
    
    let total_strength = player.stats.0;
    let current_x = player.position.0;
}

Matrix Operations

fn matrix_multiply(a: [[Felt; 2]; 2], b: [[Felt; 2]; 2]) -> [[Felt; 2]; 2] {
    let row1_col1 = a[0][0] * b[0][0] + a[0][1] * b[1][0];
    let row1_col2 = a[0][0] * b[0][1] + a[0][1] * b[1][1];
    let row2_col1 = a[1][0] * b[0][0] + a[1][1] * b[1][0];
    let row2_col2 = a[1][0] * b[0][1] + a[1][1] * b[1][1];
    
    return [[row1_col1, row1_col2], [row2_col1, row2_col2]];
}

fn main() {
    let matrix_a: [[Felt; 2]; 2] = [[1, 2], [3, 4]];
    let matrix_b: [[Felt; 2]; 2] = [[5, 6], [7, 8]];
    
    let result = matrix_multiply(matrix_a, matrix_b);
    
    // result is [[19, 22], [43, 50]]
    let top_left = result[0][0]; // 19
    let bottom_right = result[1][1]; // 50
}

Database Record Simulation

type UserRecord = (Felt, (Felt, Felt, Felt), [Felt; 3], bool);

fn create_user_record(
    user_id: Felt, 
    birth_year: Felt, 
    birth_month: Felt, 
    birth_day: Felt,
    scores: [Felt; 3], 
    is_active: bool
) -> UserRecord {
    return (user_id, (birth_year, birth_month, birth_day), scores, is_active);
}

fn get_user_age(record: UserRecord, current_year: Felt) -> Felt {
    let birth_date = record.1;
    return current_year - birth_date.0;
}

fn calculate_average_score(record: UserRecord) -> Felt {
    let scores = record.2;
    return (scores[0] + scores[1] + scores[2]) / 3;
}

fn main() {
    let user = create_user_record(12345, 1990, 5, 15, [85, 92, 78], true);
    
    let age = get_user_age(user, 2023); // 33
    let avg_score = calculate_average_score(user); // 85
    
    let is_active = user.3;
    let birth_month = user.1.1;
}

Key Points

  1. Arrays: Fixed-size, same type, indexable with [], support custom methods through generics
  2. Tuples: Fixed-size, mixed types, accessible with .N, elements can be modified
  3. Parameter Passing:
    • Arrays are always passed by reference
    • Tuples follow mixed semantics based on member types
  4. Generic Methods: Arrays support generic methods but not specialized implementations
  5. Access Patterns: Arrays use index notation, tuples use field notation
  6. Type Safety: Both are statically typed with compile-time size checking
  7. Mutability: Both arrays and tuples support mutable operations on their elements
  8. Nested Structures: Both support arbitrary nesting levels
  9. Current Limitations: Tuple destructuring is not yet implemented
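The parameter-passing contrast in point 3 can be sketched as follows. This is an illustrative sketch based on the semantics described in this chapter (the function names are hypothetical, not part of any standard library):

```
// Sketch: contrasting array and value-tuple parameter semantics
fn zero_first_of_array(mut arr: [Felt; 2]) {
    arr[0] = 0;  // Arrays are passed by reference: the write affects the caller's array
}

fn zero_first_of_tuple(mut pair: (Felt, Felt)) {
    pair.0 = 0;  // A tuple of value types is copied: only the local copy changes
}

fn main() {
    let mut arr = [1, 2];
    let mut pair: (Felt, Felt) = (1, 2);

    zero_first_of_array(arr);   // The underlying array is modified
    zero_first_of_tuple(pair);  // pair is unaffected: the function received a copy
}
```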

Conditional Statements

This chapter covers conditional control flow in Psy: if, else if, else, and match expressions.

If Statements

Basic If-Else

The most basic form of conditional control flow is the if expression:

fn min(a: Felt, b: Felt) -> Felt {
    if a < b {
        a
    } else {
        b
    }
}

fn main() {
    let x = 5;
    let y = 10;
    let result = min(x, y);
    // result is 5
}

If as Expression

In Psy, if is an expression that returns a value:

fn main() {
    let a = 10;
    let b = 11;
    let c = 12;
    let d = 13;
    
    // If expression assigned to variable
    let basic = if a > b {
        c
    } else {
        d
    };
    
    // basic will be 13 since 10 > 11 is false
}

If-Else If-Else Chains

You can chain multiple conditions using else if:

fn main() {
    let a = 10;
    let b = 11;
    let c = 12;
    let d = 13;
    let e = 14;
    
    let result = if a > b {
        c
    } else if b > a {
        d
    } else {
        e
    };
    
    // result will be 13 since b > a (11 > 10) is true
}

Nested If Statements

If statements can be nested within other if statements:

fn middle(a: Felt, b: Felt, c: Felt) -> Felt {
    if a > b {
        if b > c {
            b
        } else if a > c {
            c
        } else {
            a
        }
    } else {
        if a > c {
            a
        } else if b > c {
            c
        } else {
            b
        }
    }
}

fn main() {
    let result = middle(5, 3, 7); // Returns 5
    let result2 = middle(1, 8, 4); // Returns 4
}

Complex Nested Examples

fn main() {
    let a = 10;
    let b = 11;
    let c = 12;
    let d = 13;
    let e = 14;
    
    // Nested if with multiple else if branches
    let embedded = if a > b {
        if a > b {
            c
        } else if b > a {
            d
        } else {
            e
        }
    } else if c > d {
        if a > b {
            c
        } else if b > a {
            d
        } else {
            e
        }
    } else {
        if a > b {
            c
        } else if b > a {
            d
        } else {
            e
        }
    };
    // embedded will be 13: the final else branch runs, and inside it b > a selects d
}

If with Local Variables

You can declare variables inside if blocks:

fn main() {
    let a = 1;
    let b = 2;
    let c = 3;
    let d = 4;
    
    let result = if a > b {
        let tmp0 = 1;
        let tmp1 = 2;
        if tmp0 > tmp1 {
            tmp0
        } else {
            tmp1
        }
    } else if c > d {
        let tmp2 = 3;
        let tmp3 = 4;
        if tmp2 > tmp3 {
            tmp2
        } else {
            tmp3
        }
    } else {
        let tmp4 = 5;
        let tmp5 = 6;
        if tmp4 > tmp5 {
            tmp4
        } else {
            tmp5
        }
    };
}

If in Function Calls

If expressions can be used as arguments to function calls:

fn process(x: Felt, y: Felt) {
    // Function body
}

fn main() {
    let n1 = 1;
    let n2 = 2;
    
    process(if n1 > n2 {
        n1
    } else {
        n2
    }, n2);
}

Match Expressions

Match expressions provide a powerful way to handle multiple possible values:

Basic Match

fn match_test_case(input: Felt) -> Felt {
    match input {
        0 => 10,
        1 => 20,
        2 => 30,
        _ => 40,  // Default case
    }
}

fn main() {
    let result1 = match_test_case(0); // Returns 10
    let result2 = match_test_case(1); // Returns 20
    let result3 = match_test_case(5); // Returns 40 (default case)
}

Match with Blocks

Match arms can contain blocks for more complex logic:

#![allow(unused)]
fn main() {
fn match_with_blocks(input: Felt) -> Felt {
    let mut result = 0;
    match input {
        0 => {
            result += 10;
        },
        1 => {
            result += 20;
        },
        2 => {
            result += 30;
        },
        3 => {
            result += 40;
        },
        4 => {
            result += 40;
        },
        _ => {
            result += 50;
        },
    };
    result
}
}

Match as Expression

Match can be used as an expression to assign values:

fn main() {
    let input = 2;
    
    let base_value = match input {
        0 => 100,
        1 => 200,
        2 => 300,
        _ => 400,
    };
    
    // base_value will be 300
}

Match with Different Types

Match works with various types:

// Match with bool
fn match_bool_test(input: bool) -> Felt {
    match input {
        true => 100,
        false => 200,
    }
}

// Match with u32
fn match_u32_test(input: u32) -> Felt {
    let mut result = 0;
    match input {
        0u32 => {
            result += 5;
        },
        1u32 => {
            result += 15;
        },
        2u32 => {
            result += 25;
        },
        3u32 => {
            result += 35;
        },
        4u32 => {
            result += 45;
        },
        _ => {
            result += 55;
        },
    };
    
    let extra = match input {
        0u32 => 50,
        1u32 => 150,
        2u32 => 250,
        3u32 => 350,
        _ => 450,
    };
    
    result + extra
}

fn main() {
    let bool_result = match_bool_test(true);  // Returns 100
    let u32_result = match_u32_test(2u32);   // Returns 275 (25 + 250)
}

Complex Match Example

#![allow(unused)]
fn main() {
fn complex_match_example(input: Felt) -> Felt {
    let base = match input {
        0 => {
            let temp = 10;
            temp * 2
        },
        1 => {
            let temp = 20;
            temp + 5
        },
        2 => {
            let temp = 30;
            temp - 5
        },
        _ => {
            let temp = 40;
            temp + 10
        },
    };
    
    let multiplier = match input {
        0 => 2,
        1 => 2,
        2 => 3,
        _ => 1,
    };
    
    base * multiplier
}
}

Current Limitations

Multiple Patterns: Multiple patterns in match expressions (e.g., 0 | 1 => value) are not currently supported. Use separate match arms for each pattern:

#![allow(unused)]
fn main() {
// ❌ Not supported
match value {
    0 | 1 => result1,
    2 | 3 => result2,
    _ => default,
}

// ✅ Use this instead
match value {
    0 => result1,
    1 => result1,
    2 => result2,
    3 => result2,
    _ => default,
}
}

Important Notes

No Early Returns in Conditionals

Important: Psy does not support early returns within if statements:

#![allow(unused)]
fn main() {
fn example(x: Felt) -> Felt {
    // ❌ This will cause a compilation error
    // if x > 10 {
    //     return x * 2;
    // }
    
    // ✅ Use expression-based returns instead
    if x > 10 {
        x * 2
    } else {
        x
    }
}
}

Expression-Based Design

Both if and match are expressions in Psy, meaning they return values that can be assigned to variables or used in other expressions:

fn main() {
    let x = 5;
    
    // If as expression
    let result1 = if x > 0 { x } else { -x };
    
    // Match as expression  
    let result2 = match x {
        0 => 0,
        n => n * 2,
    };
    
    // Can be used in function calls
    process_value(if x > 0 { x } else { 0 });
}

fn process_value(value: Felt) {
    // Function implementation
}

Key Points

  1. If statements are expressions that return values
  2. Else if allows chaining multiple conditions
  3. Nested if statements are fully supported
  4. Match expressions provide pattern matching capabilities
  5. No early returns are allowed in conditional blocks
  6. All branches of an if expression must return the same type
  7. Default case in match expressions uses _ wildcard
  8. Both if and match can be used anywhere expressions are expected
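Putting several of these points together, here is a sketch that nests if expressions inside match arms; the scenario and names are illustrative:

```
// Sketch: if and match composed as expressions (illustrative names)
fn shipping_cost(zone: Felt, weight: Felt) -> Felt {
    match zone {
        0 => 0,                                 // Local zone: free
        1 => if weight > 10 { 8 } else { 5 },   // Regional: weight-based
        _ => if weight > 10 { 20 } else { 12 }, // Everything else: weight-based
    }
}

fn main() {
    // All arms (and both if branches) return Felt, satisfying the
    // same-type rule for expression branches
    let cost = shipping_cost(1, 12);
}
```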

Loops

This chapter covers loop constructs in Psy: while loops and for loops.

While Loops

While loops execute a block of code repeatedly as long as a condition remains true.

Basic While Loop

fn main() -> Felt {
    let mut res = 0;
    let mut i = 0;
    while i < 10 {
        res += i;
        i += 1;
    }
    res  // Returns 45 (0+1+2+...+9)
}

Fibonacci Sequence with While

fn fibonacci(a: Felt, b: Felt) -> Felt {
    let mut res = 0;
    let mut a = a;
    let mut b = b;
    let mut i = 2;
    let mut c = 0;
    
    while i <= 10 {
        let d = a + b;
        c = d;
        a = b;
        b = c;
        i += 1;
    }
    
    res = b;
    return res;
}

fn main() {
    let result = fibonacci(2, 3); // Returns 233
}

While with Complex State

fn main() {
    let mut sum = 0;
    let mut count = 0;
    let mut current = 1;
    
    while current <= 100 {
        if current % 2 == 0 {  // Even numbers only
            sum += current;
            count += 1;
        } else {
            // Do nothing for odd numbers  
        };
        current += 1;
    }
    
    // sum contains sum of all even numbers from 1 to 100
    // count contains how many even numbers were found
}

While with Arrays

#![allow(unused)]
fn main() {
struct Data {
    pub values: [Felt; 5],
    pub count: Felt,
}

fn process_data() -> Data {
    let mut data = new Data {
        values: [0; 5],
        count: 0,
    };
    
    let mut i = 0;
    while i < 5 {
        data.values[i] = i * i;  // Square of index
        data.count += 1;
        i += 1;
    }
    
    return data;
}
}

For Loops

For loops provide a way to iterate over ranges with automatic index management.

Basic For Loop

fn main() -> Felt {
    let mut sum = 0;
    for n in 0u32..100u32 {
        sum += (n as Felt);
    }
    sum  // Returns 4950 (sum of 0 to 99)
}

For Loop with Arrays

struct TestSibling {
    pub value: [Felt; 4],
}

fn main() {
    let mut test_array: [TestSibling; 3] = [
        new TestSibling { value: [1, 2, 3, 4] },
        new TestSibling { value: [5, 6, 7, 8] },
        new TestSibling { value: [9, 10, 11, 12] }
    ];

    for i in 0u32..3u32 {
        let level_index = i as Felt;
        let sibling = test_array[level_index];
        let value = sibling.value;

        // Process each element
        let sum = value[0] + value[1] + value[2] + value[3];
        
        // Modify array element
        test_array[level_index] = new TestSibling {
            value: [sum, sum, sum, sum]
        };
    }
}

For Loop with Conditionals

fn main() {
    let mut test_array: [TestSibling; 3] = [
        new TestSibling { value: [1, 2, 3, 4] },
        new TestSibling { value: [0, 0, 0, 0] },
        new TestSibling { value: [0, 0, 0, 0] }
    ];

    for i in 0u32..3u32 {
        let level_index = i as Felt;
        let sibling = test_array[level_index];
        let value = sibling.value;

        if level_index < 1 {
            // First element has specific values
            // value[0] == 1, value[1] == 2, etc.
        } else {
            // Other elements are zeros
            // value[0] == 0, value[1] == 0, etc.
        }
    }
}

For Loop with Complex Processing

#![allow(unused)]
fn main() {
fn matrix_multiplication() {
    let matrix_a: [[Felt; 3]; 3] = [
        [1, 2, 3],
        [4, 5, 6],
        [7, 8, 9]
    ];
    
    let matrix_b: [[Felt; 3]; 3] = [
        [9, 8, 7],
        [6, 5, 4],
        [3, 2, 1]
    ];
    
    let mut result: [[Felt; 3]; 3] = [[0; 3]; 3];
    
    for i in 0u32..3u32 {
        for j in 0u32..3u32 {
            let ii = i as Felt;
            let jj = j as Felt;
            let mut sum = 0;
            for k in 0u32..3u32 {
                let kk = k as Felt;
                sum += matrix_a[ii][kk] * matrix_b[kk][jj];
            }
            result[ii][jj] = sum;
        }
    }
}
}

For Loop with Accumulation

#![allow(unused)]
fn main() {
fn calculate_factorials() -> [Felt; 10] {
    let mut factorials = [0; 10];
    
    for i in 0u32..10u32 {
        let mut factorial = 1;
        let num = (i + 1u32) as Felt; // Calculate factorial of (i+1)
        
        for j in 1u32..(i + 2u32) {
            factorial *= j as Felt;
        }
        
        factorials[i as Felt] = factorial;
    }
    
    return factorials;
}
}

Loop Patterns

Counting Patterns

#![allow(unused)]
fn main() {
// Count up
fn count_up() {
    for i in 1u32..11u32 {  // 1 to 10
        let value = i as Felt;
        // Process value
    }
}

// Count with step
fn count_with_step() {
    for i in 0u32..10u32 {
        let value = (i * 2u32) as Felt;  // 0, 2, 4, 6, 8, ...
        // Process even numbers
    }
}

// Count down (using while)
fn count_down() {
    let mut i = 10;
    while i > 0 {
        // Process i
        i -= 1;
    }
}
}

Array Processing Patterns

#![allow(unused)]
fn main() {
// Initialize array
fn initialize_array() -> [Felt; 5] {
    let mut arr = [0; 5];
    for i in 0u32..5u32 {
        arr[i as Felt] = (i + 1u32) as Felt;
    }
    return arr; // [1, 2, 3, 4, 5]
}

// Sum array elements
fn sum_array(arr: [Felt; 5]) -> Felt {
    let mut sum = 0;
    for i in 0u32..5u32 {
        sum += arr[i as Felt];
    }
    return sum;
}

// Find maximum
fn find_max(arr: [Felt; 5]) -> Felt {
    let mut max = arr[0];
    for i in 1u32..5u32 {
        let current = arr[i as Felt];
        if current > max {
            max = current;
        }
    }
    return max;
}
}

Nested Loop Patterns

#![allow(unused)]
fn main() {
// Create multiplication table
fn multiplication_table() -> [[Felt; 10]; 10] {
    let mut table = [[0; 10]; 10];
    
    for i in 0u32..10u32 {
        for j in 0u32..10u32 {
            let ii = i as Felt;
            let jj = j as Felt;
            table[ii][jj] = (ii + 1) * (jj + 1);
        }
    }
    
    return table;
}

// Process 2D grid
fn process_grid() {
    let mut grid: [[Felt; 5]; 5] = [[0; 5]; 5];
    
    for row in 0u32..5u32 {
        for col in 0u32..5u32 {
            let r = row as Felt;
            let c = col as Felt;
            
            // Set diagonal elements to 1
            if r == c {
                grid[r][c] = 1;
            } else {
                grid[r][c] = r + c;
            }
        }
    }
}
}

Search Patterns

#![allow(unused)]
fn main() {
// Linear search (no early returns: track state instead, as described below)
fn linear_search(arr: [Felt; 10], target: Felt) -> Felt {
    let mut found_index = 10;  // 10 is an invalid index, meaning "not found"
    let mut searching = true;
    let mut i = 0;
    while i < 10 && searching {
        if arr[i] == target {
            found_index = i;
            searching = false;  // Equivalent to break
        } else {
            // Keep searching
        };
        i += 1;
    }
    return found_index;
}

// Search with condition
fn find_first_even(arr: [Felt; 10]) -> Felt {
    let mut first_even = 0;  // 0 means "no even number found"
    let mut searching = true;
    let mut i = 0;
    while i < 10 && searching {
        let value = arr[i];
        if value % 2 == 0 {
            first_even = value;
            searching = false;  // Equivalent to break
        } else {
            // Keep searching
        };
        i += 1;
    }
    return first_even;
}
}

Loop Control Notes

Range Syntax

For loops in Psy use range syntax:

#![allow(unused)]
fn main() {
// Half-open ranges: the upper bound is excluded
for i in 0u32..5u32 {  // i goes from 0 to 4 (5 is excluded)
    // Loop body
}

// Must specify u32 type for ranges
for i in 1u32..10u32 {  // i goes from 1 to 9
    let value = i as Felt;  // Convert to Felt if needed
    // Process value
}
}

Type Conversions in Loops

fn main() {
    for i in 0u32..10u32 {
        let index = i as Felt;        // Convert u32 to Felt for array indexing
        let squared = index * index;   // Felt arithmetic
        
        // Use index for array operations
        let mut arr = [0; 10];
        arr[index] = squared;
    }
}

Loop Variable Scope

fn main() {
    // Loop variable 'i' is only available inside the loop
    for i in 0u32..5u32 {
        let doubled = (i * 2u32) as Felt;
        // 'i' and 'doubled' are available here
    }
    // 'i' is not available here
    
    let mut counter = 0;
    while counter < 5 {
        let temp = counter * 2;
        // 'counter' and 'temp' are available here
        counter += 1;
    }
    // 'counter' is still available here (declared outside loop)
    // 'temp' is not available here
}

Important Considerations

Psy vs Rust Syntax Differences

Psy has some important syntax differences from Rust:

#![allow(unused)]
fn main() {
// ❌ Psy requires explicit u32 type in arithmetic
for i in 0u32..10u32 {
    let value = (i + 1) as Felt;     // Error: type mismatch
}

// ✅ Correct: use explicit u32 literals
for i in 0u32..10u32 {
    let value = (i + 1u32) as Felt;  // Correct
}

// ❌ If statements as statements need semicolons
if condition {
    // do something
} else {
    // do something else
}
next_statement();  // Error: missing semicolon

// ✅ Correct: add semicolon after if-else statement
if condition {
    // do something  
} else {
    // do something else
};  // Required semicolon
next_statement();
}

No Break or Continue

Psy loops do not support break or continue statements. Loop control must be managed through conditions:

#![allow(unused)]
fn main() {
// ❌ Not supported
// for i in 0u32..10u32 {
//     if condition {
//         break;
//     }
// }

// ✅ Use conditional logic instead
fn controlled_loop() {
    let mut should_continue = true;
    let mut i = 0;
    
    while i < 10 && should_continue {
        // Process logic; in this sketch the exit condition is reaching 5
        let some_condition = i == 5;
        if some_condition {
            should_continue = false;  // Equivalent to break
        } else {
            // Process normally
        }
        i += 1;
    }
}
}

Loop Unrolling for ZK

Since Psy compiles to zero-knowledge circuits, loops are unrolled at compile time. This means:

  1. Fixed iterations: Loop bounds must be known at compile time
  2. No dynamic bounds: Cannot loop based on runtime values
  3. Resource usage: Each loop iteration becomes part of the circuit

#![allow(unused)]
fn main() {
// ✅ Valid: Fixed bounds known at compile time
for i in 0u32..100u32 {
    // Loop body
}

// ❌ Invalid: Runtime-dependent bounds
fn invalid_loop(n: Felt) {
    // This would not work as n is not known at compile time
    // for i in 0u32..n {  
    //     // Loop body
    // }
}
}

Key Points

  1. While loops execute while a condition is true
  2. For loops iterate over fixed ranges with u32 indices
  3. No break/continue - use conditional logic instead
  4. Fixed bounds - loop iterations must be compile-time constant
  5. Type conversion - convert u32 loop indices to Felt for array access
  6. Nested loops are fully supported
  7. Loop unrolling - all loops are unrolled in the final circuit
  8. Scope rules - loop variables have block scope
  9. Syntax differences from Rust:
    • u32 arithmetic requires explicit u32 literals (e.g., i + 1u32)
    • If-else statements used as statements need trailing semicolons
    • Empty else blocks should contain at least one statement
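As a final sketch combining these rules (u32 bounds, Felt conversion for indexing, and flag-based control instead of break), here is a hypothetical helper that counts array elements up to a sentinel value:

```
// Sketch: count elements before a sentinel value without using break
// (illustrative example; names are not from any standard library)
fn count_before_sentinel(arr: [Felt; 5], sentinel: Felt) -> Felt {
    let mut count = 0;
    let mut counting = true;
    for i in 0u32..5u32 {
        let value = arr[i as Felt];  // u32 loop index converted to Felt
        if value == sentinel {
            counting = false;         // Equivalent to break
        } else {
            // Not the sentinel
        };
        if counting {
            count += 1;
        } else {
            // Past the sentinel: count nothing further
        };
    }
    return count;
}
```

Because the loop is unrolled in the circuit, all 5 iterations are always evaluated; the flag only controls which iterations contribute to the count.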

Functions

Functions are the building blocks of Psy programs. This chapter covers how to define functions and call them with various parameter types.

Function Definition

Functions are defined using the fn keyword:

#![allow(unused)]
fn main() {
// Basic function definition
fn add(a: Felt, b: Felt) -> Felt {
    return a + b;
}

// Function without explicit return value (returns unit type)
fn validate_inputs(a: Felt, b: Felt) {
    assert(a >= 0, "a must be non-negative");
    assert(b >= 0, "b must be non-negative");
    // Implicit return of unit type
}

// Function with multiple parameters of different types
fn calculate(x: Felt, y: Felt, flag: bool, count: u32) -> Felt {
    if flag {
        x + y + (count as Felt)
    } else {
        x - y
    }
}
}

Function Calls

Parameter Passing

In Psy, function parameters are passed using different mechanisms depending on the data type:

Value Types (Copy Semantics):

  • Felt, u32, bool - These are copied when passed to functions
  • The original variable is not affected when the parameter is modified inside the function

Reference Types (Move/Borrow Semantics):

  • Arrays [T; N] - Passed by reference/move
  • Structs - Passed by reference/move

Tuples - Mixed Semantics:

  • Tuples pass each member according to its own type
  • (Felt, Felt) - Both members copied (value types)
  • (Felt, [Felt; 3]) - First member copied, second member moved
  • ([Felt; 2], [Felt; 3]) - Both members moved (reference types)
#![allow(unused)]
fn main() {
fn test_parameter_passing() {
    // Value types - copied
    let x: Felt = 10;
    let flag: bool = true;
    let count: u32 = 5u32;
    
    modify_values(x, flag, count);
    // x, flag, count remain unchanged
    
    // Reference types - moved/borrowed
    let mut arr = [1, 2, 3];
    let mut point = new Point { x: 1, y: 2 };
    
    modify_references(arr, point);
    // arr and point are moved into the function
}

fn modify_values(mut a: Felt, mut b: bool, mut c: u32) {
    a = 99;    // Only affects the local copy
    b = false; // Only affects the local copy
    c = 0u32;  // Only affects the local copy
}

fn modify_references(mut arr: [Felt; 3], mut point: Point) {
    arr[0] = 99;  // Modifies the moved-in array
    point.x = 99; // Modifies the moved-in struct
}

// Examples of tuple parameter semantics
fn process_value_tuple(tuple: (Felt, Felt)) -> Felt {
    // Both members copied - original tuple unaffected
    return tuple.0 + tuple.1;
}

fn process_mixed_tuple(tuple: (Felt, [Felt; 2])) -> Felt {
    // First member copied, array moved
    return tuple.0 + tuple.1[0];
}

fn process_ref_tuple(tuple: ([Felt; 2], [Felt; 2])) -> Felt {
    // Both arrays moved
    return tuple.0[0] + tuple.1[1];
}
}

Syntax Limitations

Important: Psy does not support early returns within control flow statements:

#![allow(unused)]
fn main() {
// ❌ Invalid: Early return in if statement
fn invalid_early_return(x: Felt) -> Felt {
    if x > 10 {
        return x * 2; // This causes a compilation error
    };
    return x;
}

// ✅ Valid: Use expression-based returns
fn valid_conditional_return(x: Felt) -> Felt {
    if x > 10 {
        x * 2
    } else {
        x
    }
}

// ✅ Valid: Set variable then return
fn valid_variable_return(x: Felt) -> Felt {
    let mut result = x;
    if x > 10 {
        result = x * 2;
    };
    return result;
}
}

Basic Function Calls

#![allow(unused)]
fn main() {
fn add(a: Felt, b: Felt) -> Felt {
    return a + b;
}

fn subtract(a: Felt, b: Felt) -> Felt {
    return a - b;
}

#[test]
fn test_basic_calls() {
    // Simple function calls
    let sum = add(5, 3);
    let diff = subtract(10, 4);
    
    assert_eq(sum, 8, "5 + 3 should be 8");
    assert_eq(diff, 6, "10 - 4 should be 6");
}
}

Function Calls with Different Parameter Types

#![allow(unused)]
fn main() {
fn process_data(value: Felt, enabled: bool, iterations: u32) -> Felt {
    let mut result = value;
    if enabled {
        for i in 0u32..iterations {
            result = result + 1;
        }
    } else {
        result = 0;
    };
    return result;
}

#[test]
fn test_mixed_parameters() {
    // Calling function with mixed parameter types
    let result1 = process_data(10, true, 3u32);
    let result2 = process_data(10, false, 3u32);
    
    assert_eq(result1, 13, "10 + 3 iterations should be 13");
    assert_eq(result2, 0, "disabled should return 0");
}
}

Function Calls with Arrays

#![allow(unused)]
fn main() {
fn sum_array(arr: [Felt; 3]) -> Felt {
    return arr[0] + arr[1] + arr[2];
}

fn modify_array(mut arr: [Felt; 3]) -> [Felt; 3] {
    arr[0] = arr[0] * 2;
    arr[1] = arr[1] * 2;
    arr[2] = arr[2] * 2;
    return arr;
}

#[test]
fn test_array_parameters() {
    let numbers: [Felt; 3] = [1, 2, 3];
    
    // Pass array to function
    let total = sum_array(numbers);
    let doubled = modify_array(numbers);
    
    assert_eq(total, 6, "1 + 2 + 3 should be 6");
    assert_eq(doubled[0], 2, "first element doubled should be 2");
    assert_eq(doubled[1], 4, "second element doubled should be 4");
}
}

Function Calls with Tuples

#![allow(unused)]
fn main() {
fn distance(point1: (Felt, Felt), point2: (Felt, Felt)) -> Felt {
    let dx = point2.0 - point1.0;
    let dy = point2.1 - point1.1;
    // Simplified distance calculation (not actual Euclidean distance)
    return dx + dy;
}

fn create_point(x: Felt, y: Felt) -> (Felt, Felt) {
    return (x, y);
}

#[test]
fn test_tuple_parameters() {
    let p1: (Felt, Felt) = (0, 0);
    let p2 = create_point(3, 4);
    
    let dist = distance(p1, p2);
    
    assert_eq(p2.0, 3, "x coordinate should be 3");
    assert_eq(p2.1, 4, "y coordinate should be 4");
    assert_eq(dist, 7, "distance should be 3 + 4 = 7");
}
}

Function Calls with Structs

#![allow(unused)]
fn main() {
struct Point {
    pub x: Felt,
    pub y: Felt,
}

fn create_point_struct(x: Felt, y: Felt) -> Point {
    return new Point { x: x, y: y };
}

fn move_point(mut point: Point, dx: Felt, dy: Felt) -> Point {
    point.x = point.x + dx;
    point.y = point.y + dy;
    return point;
}

fn get_x_coordinate(point: Point) -> Felt {
    return point.x;
}

#[test]
fn test_struct_parameters() {
    let origin = create_point_struct(0, 0);
    let moved = move_point(origin, 5, 3);
    let x_coord = get_x_coordinate(moved);
    
    assert_eq(x_coord, 5, "x coordinate should be 5 after moving");
    assert_eq(moved.y, 3, "y coordinate should be 3 after moving");
}
}

Nested Function Calls

Functions can call other functions, creating nested calls:

#![allow(unused)]
fn main() {
fn double(x: Felt) -> Felt {
    return x * 2;
}

fn square(x: Felt) -> Felt {
    return x * x;
}

fn complex_calculation(x: Felt) -> Felt {
    // Nested function calls
    let doubled = double(x);
    let squared = square(doubled);
    return squared + double(x);
}

#[test]
fn test_nested_calls() {
    let result = complex_calculation(3);
    // double(3) = 6, square(6) = 36, double(3) = 6
    // result = 36 + 6 = 42
    assert_eq(result, 42, "complex calculation should be 42");
}
}

Function Inlining

Important: In Psy, functions are inlined by default during compilation. This means:

  • Function calls are replaced with the function body during compilation
  • Each function generates its own DPN opcodes
  • There is no runtime function call overhead
  • Recursive functions have limitations due to inlining
#![allow(unused)]
fn main() {
fn inline_example(x: Felt) -> Felt {
    return x + 1;
}

#[test]
fn test_inlining() {
    // This call will be inlined during compilation
    let result = inline_example(5);
    assert_eq(result, 6, "5 + 1 should be 6");
}
}
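
To make the effect of inlining concrete, the call above compiles roughly as if the function body had been substituted at the call site (an illustrative expansion, not actual compiler output):

#![allow(unused)]
fn main() {
#[test]
fn test_inlining_expanded() {
    // Conceptual result of inlining `inline_example(5)`:
    let x: Felt = 5;    // parameter bound to the argument
    let result = x + 1; // function body substituted at the call site
    assert_eq(result, 6, "5 + 1 should be 6");
}
}

Because every call site receives its own copy of the body, deep call chains and recursion grow the circuit rather than reuse code, which is why recursive functions are limited.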

Storage and Contracts

This chapter covers Psy's storage system and contract development, including automatic storage generation, storage references, and contract architecture.

Storage Architecture Overview

Psy provides a sophisticated storage system with the following characteristics:

  • Slot-based Storage: Maximum 2^32 slots per contract, each slot is a Hash type (4 Felt values)
  • User Isolation: Each user has completely separate storage space within each contract
  • Sequential Layout: Data is laid out in slots according to struct field order
  • No Dynamic Types: No dynamic arrays or mappings - all storage is statically sized
  • Automatic Code Generation: Use #[derive(Storage, StorageRef)] for automated storage management

On-Chain Storage Architecture

Psy's on-chain storage follows a hierarchical tree structure that enables scalable and isolated user state management:

Global User Tree

At the top level, there is a Global User Tree that stores information for all users in the system. Each user has a User Leaf in this tree:

#![allow(unused)]
fn main() {
pub struct PsyUserLeaf {
    pub public_key: Hash,              // User's public key for authentication
    pub user_state_tree_root: Hash,    // Root of this user's state tree
    pub balance: Felt,                 // User's native token balance
    pub nonce: Felt,                   // Transaction nonce for replay protection
    pub last_checkpoint_id: Felt,      // Last checkpoint this user participated in
    pub event_index: Felt,             // Index for event ordering
    pub user_id: Felt,                 // Unique identifier for this user
}
}

User Contract State Tree Structure

Each user has their own User Contract State Tree with the following hierarchy:

Global User Tree
├── User 1 Leaf
│   ├── public_key
│   ├── user_state_tree_root ──┐
│   ├── balance               │
│   ├── nonce                 │
│   ├── last_checkpoint_id    │
│   ├── event_index           │
│   └── user_id               │
├── User 2 Leaf               │
└── ...                       │
                              │
User Contract State Tree (per user) ──┘
├── Contract 1 State (2^32 slots max)
│   ├── Slot 0: [Felt; 4]
│   ├── Slot 1: [Felt; 4]
│   └── ...
├── Contract 2 State (2^32 slots max)
│   ├── Slot 0: [Felt; 4]
│   ├── Slot 1: [Felt; 4]
│   └── ...
└── ...

Storage Isolation and Access

This architecture provides several key benefits:

#![allow(unused)]
fn main() {
// Example: Understanding storage isolation
fn storage_isolation_example() {
    // Each user has completely isolated storage
    let user_a_id = 123;
    let user_b_id = 456;
    let contract_id = 789;
    
    // User A and User B each have their own user leaf in the Global User Tree
    // The user leaf contains user_state_tree_root pointing to their contract state tree
    
    // Even in the same contract, users have separate storage
    let user_a_balance = get_other_user_contract_state_hash_at(
        32,           // tree height
        user_a_id,    // User A
        contract_id,  // same contract
        0             // balance slot
    )[0];
    
    let user_b_balance = get_other_user_contract_state_hash_at(
        32,           // tree height  
        user_b_id,    // User B
        contract_id,  // same contract
        0             // balance slot - completely separate from User A
    )[0];
    
    // user_a_balance and user_b_balance are independent
}
}

Storage Access Permissions

Read Access: You can read from any user's storage in any contract
Write Access: You can only write to your own storage

#![allow(unused)]
fn main() {
fn storage_access_permissions() {
    // Get current user's information
    let my_user_id = get_user_id();
    let my_public_key = get_user_public_key_hash();
    let my_nonce = get_last_nonce();
    
    // ✅ READ: Can read from any user's storage
    let other_user_id = 456;
    let contract_id = get_contract_id();
    let slot_index = 0;
    
    // Read from any user's contract storage
    let other_user_slot_data = get_other_user_contract_state_hash_at(
        32,              // contract_state_tree_height
        other_user_id,   // any user
        contract_id,     // any contract  
        slot_index       // any slot
    );
    
    // Read from current user's storage
    let my_slot_data = get_state_hash_at(slot_index);
    
    // ✅ WRITE: Can only write to own storage
    let new_data = [1, 2, 3, 4];
    cset_state_hash_at(slot_index, new_data); // Only writes to current user's storage
    
    // ❌ CANNOT: Write to other users' storage
    // There is no function like cset_other_user_state_hash_at()
    // The architecture prevents writing to other users' storage
}
}

Contract Storage within User Contract State Tree

Each user's contract state tree contains their storage for all contracts:

#![allow(unused)]
fn main() {
#[contract]
#[derive(Storage, StorageRef)]
pub struct TokenContract {
    pub balance: Felt,           // Slot 0 in this user's contract state
    pub allowances: [Felt; 100], // Slots 1-100 in this user's contract state
}

fn contract_storage_in_user_tree() {
    let contract = TokenContractRef::new(ContractMetadata::current());
    
    // This accesses:
    // Global User Tree -> Current User Leaf -> User Contract State Tree Root
    // -> This Contract's State -> Specific Slots
    contract.balance.set(100);
    
    // Reading from another user requires traversing their tree:
    // Global User Tree -> Other User Leaf -> Their User Contract State Tree Root  
    // -> This Contract's State in Their Tree -> Specific Slots
    let other_contract = TokenContractRef::new(ContractMetadata::new(
        get_contract_id(),
        456  // other user's ID
    ));
    let other_balance = other_contract.balance.get();
}
}

Storage Tree Limits

Each level has specific capacity limits:

#![allow(unused)]
fn main() {
// System limits, stated as comments: `^` denotes XOR in code, not exponentiation,
// and 2^64 itself would overflow u64
// Global User Tree capacity:          2^64 users
// User Contract State Tree capacity:  2^32 contracts per user
// Contract Storage capacity:          2^32 slots per contract

fn storage_limits_example() {
    // Each user can have up to 2^32 contracts
    // Each contract can have up to 2^32 slots
    // Each slot stores Hash = [Felt; 4]
    
    // Total storage per user: 2^32 contracts × 2^32 slots × 4 Felt = 2^66 Felt values
    // Global capacity: 2^64 users × 2^66 Felt per user = 2^130 total Felt values
}
}

Contract Interaction Pattern: "Read Others, Write Self"

Psy contracts follow a "read others, write self" interaction pattern. Users can read from other users' storage but can only write to their own storage. This enables secure cross-user interactions:

#![allow(unused)]
fn main() {
// Hypothetical storage layout for this example (array sizes chosen for illustration)
#[contract]
#[derive(Storage, StorageRef)]
pub struct TokenContract {
    pub balance: Felt,
    pub transfers_sent: [Felt; 1000000],    // amounts sent, keyed by recipient ID
    pub transfers_claimed: [Felt; 1000000], // amounts claimed, keyed by sender ID
}

impl TokenContract {
    // Transfer tokens: "read others, write self" pattern
    pub fn transfer_tokens(recipient: Felt, amount: Felt) {
        let sender_id = get_user_id();
        
        // Write to sender's own storage (current user)
        let sender_contract = TokenContractRef::new(ContractMetadata::current());
        let sender_balance = sender_contract.balance.get();
        assert(sender_balance >= amount, "insufficient balance");
        sender_contract.balance.set(sender_balance - amount);
        
        // Write transfer record to sender's storage using recipient as key
        sender_contract.transfers_sent.index(recipient).set(
            sender_contract.transfers_sent.index(recipient).get() + amount
        );
    }
    
    pub fn claim_tokens(sender: Felt) {
        let recipient_id = get_user_id();
        
        // Read from sender's storage (read others)
        let sender_contract = TokenContractRef::new(
            ContractMetadata::new(get_contract_id(), sender)
        );
        let amount_sent = sender_contract.transfers_sent.index(recipient_id).get();
        
        // Write to recipient's own storage (write self)
        let recipient_contract = TokenContractRef::new(ContractMetadata::current());
        let amount_claimed = recipient_contract.transfers_claimed.index(sender).get();
        
        let claimable = amount_sent - amount_claimed;
        assert(claimable > 0, "no tokens to claim");
        
        // Update recipient's own storage
        recipient_contract.transfers_claimed.index(sender).set(amount_sent);
        recipient_contract.balance.set(
            recipient_contract.balance.get() + claimable
        );
    }
}
}

Cross-User Interaction Examples

#![allow(unused)]
fn main() {
#[contract]
#[derive(Storage, StorageRef)]
pub struct MessageContract {
    pub messages_sent: [Hash; 1000000],      // Messages sent to others
    pub messages_received: [Hash; 1000000],  // Messages received from others
}

impl MessageContract {
    // Send message: write to own storage with recipient as index
    pub fn send_message(recipient: Felt, message_hash: Hash) {
        let sender = MessageContractRef::new(ContractMetadata::current());
        sender.messages_sent.index(recipient).set(message_hash);
    }
    
    // Read message: read from sender's storage, write to own
    pub fn receive_message(sender: Felt) -> Hash {
        // Read from sender's storage
        let sender_contract = MessageContractRef::new(
            ContractMetadata::new(get_contract_id(), sender)
        );
        let message = sender_contract.messages_sent.index(get_user_id()).get();
        
        // Write to own storage to mark as received
        let recipient = MessageContractRef::new(ContractMetadata::current());
        recipient.messages_received.index(sender).set(message);
        
        message
    }
}
}

Interaction Security Model

This pattern provides several security benefits:

  1. Write Isolation: Users can only modify their own storage
  2. Read Transparency: Users can read from any user's storage
  3. Consent-based Transfers: Recipients must actively claim transfers
  4. Audit Trail: All interactions are recorded in sender's storage
#![allow(unused)]
fn main() {
impl SecureContract {
    pub fn secure_interaction_example() {
        // ✅ Allowed: Read from any user's storage
        let other_user_data = OtherContractRef::new(
            ContractMetadata::new(get_contract_id(), 456)
        ).any_field.get();
        
        // ✅ Allowed: Read from any user's any contract
        let external_data = ExternalContractRef::new(
            ContractMetadata::new(789, 456) // different contract, different user
        ).some_field.get();
        
        // ✅ Allowed: Write to own storage only
        let my_contract = MyContractRef::new(ContractMetadata::current());
        my_contract.my_data.set(42);
        
        // ❌ NOT Possible: Cannot write to another user's storage
        // The system architecture prevents this:
        // - No cset_other_user_state_hash_at() function exists
        // - Storage references only write to current user's storage
        // - Cross-user writes are architecturally impossible
    }
}
}

State Tree Operations

Understanding the tree structure helps optimize storage operations:

#![allow(unused)]
fn main() {
impl OptimizedContract {
    // Batch operations within same user are efficient
    pub fn batch_user_operations() {
        let contract = OptimizedContractRef::new(ContractMetadata::current());
        
        // All these operations work within the same user contract state tree
        for i in 0u32..10u32 {
            contract.data.index(i as Felt).set(i as Felt);
        }
        // Efficient: single user contract tree, multiple contract slots
    }
    
    // Cross-user operations require multiple tree accesses
    pub fn cross_user_operation(other_user: Felt) {
        let my_contract = OptimizedContractRef::new(ContractMetadata::current());
        let other_contract = OptimizedContractRef::new(
            ContractMetadata::new(get_contract_id(), other_user)
        );
        
        // Less efficient: requires accessing two different user contract state trees
        let my_value = my_contract.data.index(0).get();
        let other_value = other_contract.data.index(0).get();
    }
}
}

Storage Capacity

#![allow(unused)]
fn main() {
// Each contract supports up to 2^32 storage slots
// Each slot is a Hash = [Felt; 4] (4 Felt values)
const MAX_SLOT_INDEX: u32 = 4294967295u32; // 2^32 - 1 (2^32 itself overflows u32)
type StorageSlot = Hash; // [Felt; 4]
}

Storage Layout

Storage fields are arranged sequentially in the order they appear in the struct:

#![allow(unused)]
fn main() {
#[derive(Storage)]
struct ExampleContract {
    pub field_a: Felt,        // Slot 0
    pub field_b: [Felt; 3],   // Slots 1-3
    pub field_c: CustomType,  // Slots 4-6 (assuming CustomType::size() == 3)
    pub field_d: bool,        // Slot 7
}

// Total storage: 8 slots (0-7)
}
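
The slot numbers in the comments above are cumulative sums of the preceding field sizes; a small sketch (hypothetical helper) makes the rule explicit:

#![allow(unused)]
fn main() {
fn example_contract_offsets() {
    // Each field starts at the sum of the sizes of all earlier fields
    let offset_a: Felt = 0;      // field_a: Felt, size 1 -> slot 0
    let offset_b = offset_a + 1; // field_b starts at slot 1
    let offset_c = offset_b + 3; // [Felt; 3] occupies slots 1-3 -> field_c at slot 4
    let offset_d = offset_c + 3; // CustomType::size() == 3 -> field_d at slot 7
    assert_eq(offset_d, 7, "field_d should start at slot 7");
}
}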

Manual Storage Operations

Developers can manually manage storage slots using low-level functions:

Direct Slot Access

#![allow(unused)]
fn main() {
#[contract]
struct TokenContract {
}

impl TokenContract {
    pub fn manual_storage_example() {
        let user_id = get_user_id();
        
        // Read this user's data from the slot indexed by user_id
        // Each slot is a Hash containing [balance, reserved1, reserved2, reserved3]
        let user_leaf: Hash = get_state_hash_at(user_id);
        let current_balance: Felt = user_leaf[0];
        
        // Update balance while preserving other fields
        let new_balance = current_balance + 100;
        cset_state_hash_at(user_id, [
            new_balance,    // New balance
            user_leaf[1],   // Preserve reserved1
            user_leaf[2],   // Preserve reserved2  
            user_leaf[3],   // Preserve reserved3
        ]);
        
        // Access another user's storage
        let other_user = 456;
        let other_leaf: Hash = get_state_hash_at(other_user);
        let other_balance: Felt = other_leaf[0];
    }
    
    pub fn cross_user_access() {
        let sender = 123;
        let recipient = get_user_id();
        let contract_id = get_contract_id();
        
        // Read sender's data from their storage in this contract
        let sender_data: Hash = get_other_user_contract_state_hash_at(
            32,           // contract_state_tree_height
            sender,       // user_id of sender
            contract_id,  // current contract
            recipient     // slot index (using recipient as slot key)
        );
        
        // sender_data[2] might contain amount sent to recipient
        let amount_sent = sender_data[2];
    }
}
}

Manual Storage Layout Design

When using manual storage, developers design their own slot layout:

#![allow(unused)]
fn main() {
// Example: Token contract with manual slot management
// Slot layout per user:
// [balance, total_sent, total_received, reserved]

impl TokenContract {
    pub fn transfer(recipient: Felt, amount: Felt) -> Felt {
        let sender_id = get_user_id();
        
        // Read sender's state
        let sender_leaf: Hash = get_state_hash_at(sender_id);
        let sender_balance = sender_leaf[0];
        let sender_total_sent = sender_leaf[1];
        
        assert(amount <= sender_balance, "insufficient balance");
        
        // Update sender's state
        cset_state_hash_at(sender_id, [
            sender_balance - amount,      // New balance
            sender_total_sent + amount,   // Updated total sent
            sender_leaf[2],               // Preserve total received
            sender_leaf[3],               // Preserve reserved
        ]);
        
        // Read recipient's transfer tracking (from sender's perspective)
        let transfer_leaf: Hash = get_state_hash_at(recipient);
        let previous_sent_to_recipient = transfer_leaf[2];
        
        // Update transfer tracking
        cset_state_hash_at(recipient, [
            transfer_leaf[0],                         // Preserve balance
            transfer_leaf[1],                         // Preserve total sent  
            previous_sent_to_recipient + amount,      // Update sent to this recipient
            transfer_leaf[3],                         // Preserve reserved
        ]);
        
        sender_balance - amount
    }
}
}

Automatic Storage Generation

Use #[derive(Storage, StorageRef)] to automatically generate storage management code:

Storage Derive

The #[derive(Storage)] attribute automatically implements the Storage trait:

#![allow(unused)]
fn main() {
#[derive(Storage)]
pub struct Person {
    pub age: Felt,           // Size: 1 slot
    pub height: Felt,        // Size: 1 slot  
    pub birth_year: Felt,    // Size: 1 slot
}
// Total size: 3 slots

#[derive(Storage)]
pub struct PersonArray {
    pub people: [Person; 10], // Size: 30 slots (10 * 3)
    pub count: Felt,          // Size: 1 slot
}
// Total size: 31 slots
}

Generated methods by #[derive(Storage)]:

  • size() -> Felt - Returns number of slots occupied
  • read(height: Felt, user_id: Felt, contract_id: Felt, offset: Felt) -> Self
  • write(offset: Felt, value: Self)

The #[derive(Storage)] attribute automatically implements the Storage trait by:

  1. Calculating size(): Sums up sizes of all fields
  2. Generating read(): Aggregates individual field reads from their slot offsets
  3. Generating write(): Aggregates individual field writes to their slot offsets
  4. Computing offsets: Each field gets an offset based on its position in the layout
#![allow(unused)]
fn main() {
// Example of what #[derive(Storage)] generates internally
#[derive(Storage)]
pub struct Person {
    pub age: Felt,           // Offset 0, Size 1
    pub height: Felt,        // Offset 1, Size 1  
    pub birth_year: Felt,    // Offset 2, Size 1
}
// Total size: 3 slots

// Generated implementation (simplified):
impl Storage for Person {
    pub fn size() -> Felt {
        3  // age(1) + height(1) + birth_year(1)
    }
    
    pub fn read(height: Felt, user_id: Felt, contract_id: Felt, offset: Felt) -> Self {
        // Read from slot offsets: offset+0, offset+1, offset+2
        let age = Felt::read(height, user_id, contract_id, offset + 0);
        let height_val = Felt::read(height, user_id, contract_id, offset + 1); 
        let birth_year = Felt::read(height, user_id, contract_id, offset + 2);
        
        new Person { age, height: height_val, birth_year }
    }
    
    pub fn write(offset: Felt, value: Self) {
        // Write to slot offsets: offset+0, offset+1, offset+2  
        Felt::write(offset + 0, value.age);
        Felt::write(offset + 1, value.height);
        Felt::write(offset + 2, value.birth_year);
    }
}
}

StorageRef Derive

The #[derive(StorageRef)] attribute generates xxxRef types that provide get and set helper methods:

#![allow(unused)]
fn main() {
#[derive(Storage, StorageRef)]
pub struct TokenData {
    pub balance: Felt,          // Offset 0, Size 1
    pub locked_amount: Felt,    // Offset 1, Size 1
}
// Total size: 2 slots

#[contract]
#[derive(Storage, StorageRef)]  
pub struct TokenContract {
    pub total_supply: Felt,                 // Offset 0, Size 1
    pub user_data: [TokenData; 1000000],    // Offset 1, Size 2000000 (1M * 2)
    pub admin: Felt,                        // Offset 2000001, Size 1
}
// Total size: 2000002 slots

// Automatically generates TokenContractRef struct:
pub struct TokenContractRef {
    pub total_supply: StorageRef<Felt, 1u32>,
    pub user_data: StorageRef<[TokenData; 1000000], 1u32>,
    pub admin: StorageRef<Felt, 1u32>,
}

impl TokenContractRef {
    // Generated constructor
    pub fn new(metadata: ContractMetadata) -> Self {
        new TokenContractRef {
            total_supply: StorageRef::<Felt, 1u32>::new(0, metadata),          // Offset 0
            user_data: StorageRef::<[TokenData; 1000000], 1u32>::new(1, metadata), // Offset 1  
            admin: StorageRef::<Felt, 1u32>::new(2000001, metadata),           // Offset 2000001
        }
    }
}
}

Generated xxxRef struct features:

  • Virtual pointer: No data loaded until get() is called
  • Automatic offsets: Each field reference points to correct slot offset
  • Field access: Direct access to nested structure fields
  • Array indexing: index(i) method for array elements
  • get()/set() helper methods: Read and write individual values

Using Storage References

#![allow(unused)]
fn main() {
impl TokenContractRef {
    pub fn mint(user_id: Felt, amount: Felt) {
        // Create storage reference for current contract
        let contract = TokenContractRef::new(ContractMetadata::current());
        
        // Access and modify total supply
        let current_supply = contract.total_supply.get();
        contract.total_supply.set(current_supply + amount);
        
        // Access specific user's data through array indexing
        let mut user_data = contract.user_data.index(user_id).get();
        contract.user_data.index(user_id).set(new TokenData {
            balance: user_data.balance + amount,
            locked_amount: user_data.locked_amount
        });
    }
    
    pub fn transfer(from: Felt, to: Felt, amount: Felt) {
        let contract = TokenContractRef::new(ContractMetadata::current());
        
        // Access sender's data
        let mut sender_data = contract.user_data.index(from).get();
        assert(sender_data.balance >= amount, "insufficient balance");
        
        // Update sender's balance
        contract.user_data.index(from).set(new TokenData {
            balance: sender_data.balance - amount,
            locked_amount: sender_data.locked_amount
        });
        
        // Update recipient's balance
        let mut recipient_data = contract.user_data.index(to).get();
        contract.user_data.index(to).set(new TokenData {
            balance: recipient_data.balance + amount,
            locked_amount: recipient_data.locked_amount
        });
    }
}
}

Cross-User Storage Access

Storage references can access other users' storage:

#![allow(unused)]
fn main() {
pub fn claim_tokens(sender: Felt) {
    let current_user = get_user_id();
    let contract_id = get_contract_id();
    
    // Create reference to current user's contract storage
    let my_contract = ContractRef::new(ContractMetadata::current());
    
    // Create reference to sender's contract storage  
    let sender_metadata = ContractMetadata::new(contract_id, sender);
    let sender_contract = ContractRef::new(sender_metadata);
    
    // Read how much sender sent to current user  
    let amount_sent = sender_contract.user_data.index(current_user).get().balance;
    
    // Read how much current user has already claimed from sender
    let amount_claimed = my_contract.user_data.index(sender).get().locked_amount;
    
    let claimable = amount_sent - amount_claimed;
    assert(claimable > 0, "nothing to claim");
    
    // Update claimed amount and balance
    let mut my_data = my_contract.user_data.index(sender).get();
    my_contract.user_data.index(sender).set(new TokenData {
        balance: my_data.balance,
        locked_amount: amount_sent  // Update claimed amount
    });
    my_contract.balance.set(my_contract.balance.get() + claimable);
}
}

Creating Storage Pointers

Storage references are virtual pointers that can be created without loading actual data:

#![allow(unused)]
fn main() {
fn storage_pointer_example() {
    // Create storage pointer for current user's contract
    let contract = ContractRef::new(ContractMetadata::current());
    
    // Create pointer for different user's storage
    let other_user_contract = ContractRef::new(ContractMetadata::new(
        get_contract_id(), // same contract
        456                // different user
    ));
    
    // Create pointer for different contract and user  
    let external_contract = ContractRef::new(ContractMetadata::new(
        789, // different contract ID
        123  // different user ID
    ));
    
    // All pointers are lightweight - no data loaded until get() is called
    let my_balance = contract.balance.get();              // Load from current user
    let other_balance = other_user_contract.balance.get(); // Load from user 456
    let external_balance = external_contract.balance.get(); // Load from user 123 in contract 789
}
}

Contract Definition

Contract Attributes

#![allow(unused)]
fn main() {
// Basic contract struct
#[contract]
#[derive(Storage, StorageRef)]
pub struct MyContract {
    pub state_var1: Felt,
    pub state_var2: [Felt; 100],
}

// Alternative: storage-only struct (no automatic contract generation)
#[storage] 
#[derive(Storage)]
pub struct DataStorage {
    pub data: [Felt; 1000],
}
}

Contract Implementation

#![allow(unused)]
fn main() {
impl MyContract {
    // Constructor pattern
    pub fn new() -> Self {
        new MyContract {
            state_var1: 0,
            state_var2: [0; 100],
        }
    }
    
    // Generated getter/setter methods (when using #[derive(Storage)])
    pub fn get_state_var1(height: Felt, user_id: Felt, contract_id: Felt) -> Felt {
        // Auto-generated
    }
    
    pub fn set_state_var1(value: Felt) {
        // Auto-generated - writes to current user's storage
    }
}

impl MyContractRef {
    // Storage reference methods
    pub fn initialize() {
        let contract = MyContractRef::new(ContractMetadata::current());
        contract.state_var1.set(42);
        
        // Initialize array elements
        for i in 0u32..100u32 {
            contract.state_var2.index(i as Felt).set(i as Felt);
        }
    }
}
}

Nested Structures and References

Nested Structure Access

#![allow(unused)]
fn main() {
#[derive(Storage, StorageRef)]
pub struct UserProfile {
    pub name_hash: Hash,
    pub age: Felt,
    pub balance: Felt,
}

#[derive(Storage, StorageRef)]
pub struct GameData {
    pub level: Felt,
    pub score: Felt,
}

#[contract]
#[derive(Storage, StorageRef)]
pub struct UserContract {
    pub profile: UserProfile,        // Slots 0-5 (Hash=4 + Felt + Felt)
    pub game: GameData,              // Slots 6-7
    pub friends: [Felt; 10],         // Slots 8-17
}

impl UserContractRef {
    pub fn update_profile_age(new_age: Felt) {
        let user = UserContractRef::new(ContractMetadata::current());
        
        // Update profile age
        let mut profile = user.profile.get();
        profile.age = new_age;
        user.profile.set(profile);
        
        // Access nested fields  
        let mut current_profile = user.profile.get();
        current_profile.balance = current_profile.balance + 10;
        user.profile.set(current_profile);
        
        // Array access
        user.friends.index(0).set(123); // Set first friend ID
    }
}
}

Ref Attribute for Nested Data Access

Use #[ref] to create references for nested data:

#![allow(unused)]
fn main() {
#[derive(Storage, StorageRef)]
pub struct NestedData {
    pub counter: Felt,
    pub last_update: Felt,
}

#[contract]
#[derive(Storage, StorageRef)]
pub struct MyContract {
    pub basic_data: Felt,
    #[ref]
    pub nested_data: NestedData,
    pub array_data: [Felt; 1000],
}

impl MyContractRef {
    pub fn increment_counter() {
        let contract = MyContractRef::new(ContractMetadata::current());
        
        // Access nested data through #[ref]
        let mut data = contract.nested_data.get();
        data.counter = data.counter + 1;
        data.last_update = get_checkpoint_id();
        contract.nested_data.set(data);
    }
}
}

Storage Size Calculation

#![allow(unused)]
fn main() {
#[test]
fn test_storage_sizes() {
    // Primitive types
    assert_eq(Felt::size(), 1, "Felt occupies 1 slot");
    assert_eq(bool::size(), 1, "bool occupies 1 slot");
    assert_eq(Hash::size(), 4, "Hash occupies 4 slots");
    
    // Arrays
    assert_eq(<[Felt; 10]>::size(), 10, "Array size = element_count");
    assert_eq(<[Hash; 5]>::size(), 20, "Hash array: 5 * 4 = 20 slots");
    
    // Custom structures
    // Person has 3 Felt fields = 3 slots
    assert_eq(Person::size(), 3, "Person occupies 3 slots");
    
    // Contract with mixed types
    assert_eq(UserContract::size(), 18, "Total contract storage");
}
}

Storage Best Practices

Layout Organization

#![allow(unused)]
fn main() {
// Good: Related data together
#[derive(Storage, StorageRef)]
pub struct WellDesignedContract {
    pub balance: Felt,
    pub last_active: Felt,
    pub user_level: Felt,
    pub experience: Felt,
    pub historical_data: [Felt; 1000],
}
}

Choosing Manual vs Automatic Storage

Use Manual Storage When:

  • Need precise control over slot layout
  • Implementing complex storage patterns
  • Working with legacy storage layouts
  • Optimizing for specific access patterns
#![allow(unused)]
fn main() {
// Manual storage for precise control
impl AdvancedTokenContract {
    // Custom layout: [balance, allowance, metadata, reserved]
    pub fn get_user_balance(user_id: Felt) -> Felt {
        let user_leaf = get_state_hash_at(user_id);
        user_leaf[0] // Balance is always first element
    }
}
}

Use Automatic Storage When:

  • Developing new contracts
  • Need clean, maintainable code
  • Working with structured data
  • Want type safety and error prevention
#![allow(unused)]
fn main() {
// Automatic storage for clean code
#[derive(Storage, StorageRef)]
pub struct CleanTokenContract {
    pub balances: [Felt; 1000000],
    pub allowances: [Hash; 1000000], // [owner, spender, amount, expiry]
    pub metadata: TokenMetadata,
}
}

Key Points

  1. Hierarchical Storage: Global User Tree → User Contract State Tree → Contract Storage (2^32 slots)
  2. User Leaf Structure: Each user has a leaf containing public key, contract state tree root, balance, nonce, etc.
  3. Complete Isolation: Each user has separate contract state trees - no cross-contamination
  4. Storage Capacity: 2^64 users × 2^32 contracts × 2^32 slots × 4 Felt per slot
  5. Access Permissions: Read any user's storage, write only to own storage
  6. Interaction Pattern: "Read Others, Write Self" - enables secure cross-user interactions
  7. Security Model: Write isolation ensures users cannot modify others' storage directly
  8. Consent-based Transfers: Recipients must actively claim transfers from senders
  9. Two Approaches: Manual slot management vs automatic Storage/StorageRef derives
  10. Storage References: Virtual pointers enable efficient access without full data loading
  11. Cross-User Access: Create ContractMetadata with different user_id for cross-user reads
  12. Type Safety: Automatic derives provide compile-time layout validation
  13. No Dynamic Types: All storage must be statically sized at compile time
  14. Storage Pointers: Use ContractRef::new(ContractMetadata) to create lightweight virtual pointers
  15. Generated Helper Methods: #[derive(StorageRef)] generates xxxRef types with get and set methods
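The layout rules summarized above (Felt = 1 slot, Hash = 4 slots, arrays = element count × element size, structs = sum of field sizes) can be sanity-checked with a small calculator. This is an illustrative Python sketch, not Psy code; it mirrors the UserProfile/GameData/UserContract example from this chapter.

```python
# Illustrative slot-size calculator mirroring Psy's Storage layout rules
# (Felt = 1 slot, Hash = 4 slots, arrays = count * element size,
#  structs = sum of field sizes). Not Psy code -- a sanity check only.

SIZES = {"Felt": 1, "bool": 1, "Hash": 4}

def array_size(elem: str, count: int) -> int:
    return SIZES[elem] * count

def struct_size(field_sizes) -> int:
    """field_sizes: list of per-field slot sizes, in declaration order."""
    return sum(field_sizes)

# UserProfile { name_hash: Hash, age: Felt, balance: Felt } -> slots 0-5
user_profile = struct_size([SIZES["Hash"], SIZES["Felt"], SIZES["Felt"]])

# GameData { level: Felt, score: Felt } -> slots 6-7
game_data = struct_size([SIZES["Felt"], SIZES["Felt"]])

# UserContract { profile, game, friends: [Felt; 10] } -> 18 slots total
user_contract = struct_size([user_profile, game_data, array_size("Felt", 10)])

print(user_profile, game_data, user_contract)  # 6 2 18
```

The total of 18 slots matches the `UserContract::size()` assertion shown earlier in the storage-size test.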

Built-in Functions and Standard Library

This chapter covers Psy's built-in functions and standard library, including context functions, storage operations, memory utilities, and low-level operations.

Standard Library Overview

Psy provides a comprehensive standard library (psy-std) that includes:

  • Context Functions: Access blockchain and execution context information
  • Storage Operations: Read and write persistent contract storage
  • Memory Utilities: Low-level memory operations and type utilities
  • Default Trait: Provide default values for primitive types

All standard library functions are available in every module automatically; the prelude is imported implicitly, so no manual import statements are required.

Context Functions

Context functions provide access to execution environment information, including user data, contract metadata, and checkpoint information.

User and Contract Context

fn main() {
    // Get current user ID
    let user_id = get_user_id();
    
    // Get current contract ID
    let contract_id = get_contract_id();
    
    // Get the caller contract ID (for cross-contract calls)
    let caller_id = get_caller_contract_id();
    
    // Get user's public key hash
    let user_public_key = get_user_public_key_hash();
    
    // Get last nonce used by the user
    let nonce = get_last_nonce();
}

Contract Deployment Information

#![allow(unused)]
fn main() {
fn check_contract_deployer() {
    let contract_id = get_contract_id();
    
    // Get the deployer hash of a contract
    let deployer_hash = get_contract_deployer(contract_id);
    
    // Hash is [Felt; 4] representing the deployer's public key hash
}
}

Checkpoint and State Information

Checkpoint functions provide access to blockchain state at specific checkpoints:

fn main() {
    // Get current checkpoint ID
    let checkpoint_id = get_checkpoint_id();
    
    // Get various checkpoint data
    let users_root = get_register_users_root(checkpoint_id);
    let gutas_root = get_gutas_root(checkpoint_id);
    let contracts_root = get_deploy_contracts_root(checkpoint_id);
    
    // Get checkpoint statistics
    let fees_collected = get_fees_collected(checkpoint_id);
    let ops_processed = get_user_ops_processed(checkpoint_id);
    let total_txs = get_total_transactions(checkpoint_id);
    let slots_modified = get_slots_modified(checkpoint_id);
    
    // Get completion counts
    let contracts_completed = get_deploy_contracts_completed(checkpoint_id);
    let users_completed = get_register_users_completed(checkpoint_id);
    let gutas_completed = get_gutas_completed(checkpoint_id);
}

State Access Functions

Important: All state in Psy is user-separated. Each user has their own isolated state tree for each contract, ensuring complete data isolation between users.

#![allow(unused)]
fn main() {
fn access_state() {
    // Read state hash from CURRENT USER's storage slot in current contract
    let slot_hash = get_state_hash_at(0); // Slot 0 for current user
    
    // Read state from another contract (still current user's state in that contract)
    let contract_height = 32;
    let other_contract_id = 123;
    let other_state = get_other_contract_state_hash_at(
        contract_height,
        other_contract_id,
        0  // slot index - still current user's state
    );
    
    // Read state from ANOTHER USER's contract instance
    // This requires explicit user_id parameter for cross-user access
    let other_user_id = 456;
    let user_contract_state = get_other_user_contract_state_hash_at(
        contract_height,
        other_user_id,      // Explicit user whose state to read
        other_contract_id,  // Contract to read from
        0                   // Slot index in that user's state
    );
    
    // Set state hash in CURRENT USER's storage (returns previous value)
    let new_hash = [1, 2, 3, 4]; // Hash type is [Felt; 4]
    let old_hash = cset_state_hash_at(0, new_hash); // Updates current user's state
}

fn user_isolation_example() {
    // Each user has completely separate state trees
    let user_a_state = get_state_hash_at(0);  // User A's slot 0
    let user_b_state = get_other_user_contract_state_hash_at(
        32,
        456,  // User B's ID
        get_contract_id(),
        0     // Same slot 0, but from User B's state tree
    );
    
    // user_a_state and user_b_state are completely independent
    // even though they're from the same contract and same slot
}
}

Storage Operations

The storage system provides persistent data storage for contracts with built-in serialization for primitive types and arrays.

Key Principle: All storage is user-isolated. Each user has their own separate storage space within each contract, ensuring complete data privacy and isolation between users.

Storage Trait

All storage-compatible types implement the Storage trait:

#![allow(unused)]
fn main() {
// Storage trait provides size and read/write operations
pub trait Storage {
    pub fn size() -> Felt;
    pub fn read(
        contract_state_tree_height: Felt,  // Tree height (usually 32)
        user_id: Felt,                     // Target user's ID
        contract_id: Felt,                 // Target contract's ID
        offset: Felt                       // Storage slot index
    ) -> Self;
    pub fn write(
        offset: Felt,                      // Storage slot index
        value: Self                        // Value to write
    );
}
}

Parameter Explanations

For read() function:

  • contract_state_tree_height: Merkle tree height for the contract state (typically 32)
  • user_id: The ID of the user whose storage to read from
  • contract_id: The ID of the contract whose storage to access
  • offset: The storage slot index (starting from 0)

For write() function:

  • offset: The storage slot index where to write the value
  • value: The data to store in that slot

Important: write() always writes to the current user's storage in the current contract.

Basic Storage Operations

#![allow(unused)]
fn main() {
// Storage is automatically available for primitive types
fn storage_example() {
    let metadata = ContractMetadata::current();
    
    // Write a Felt to CURRENT USER's storage slot 0
    let value: Felt = 42;
    Felt::write(
        0,      // offset: storage slot index 
        value   // value: data to store
    );
    
    // Read from CURRENT USER's storage slot 0
    let read_value = Felt::read(
        metadata.contract_state_tree_height,  // 32 (tree height)
        metadata.user_id,                     // current user's ID
        metadata.contract_id,                 // current contract ID
        0                                     // offset: slot index to read
    );
    
    // Read from ANOTHER USER's storage
    let other_user_id = 456;
    let other_value = Felt::read(
        32,                   // contract_state_tree_height
        other_user_id,        // different user's ID
        metadata.contract_id, // same contract
        0                     // same slot, but from other user's storage
    );
    
    // Each user has their own isolated storage
    // User A writing to slot 0 does not affect User B's slot 0
    
    // Works with other primitive types (still user-isolated)
    let bool_val = true;
    bool::write(1, bool_val);  // offset=1, value=true
    
    let u32_val = 123u32;
    u32::write(2, u32_val);    // offset=2, value=123u32
}
}

Array Storage

Arrays automatically implement storage with proper serialization:

#![allow(unused)]
fn main() {
fn array_storage_example() {
    let metadata = ContractMetadata::current();
    
    // Store an array - occupies multiple consecutive slots
    let arr: [Felt; 5] = [1, 2, 3, 4, 5];
    <[Felt; 5]>::write(
        10,  // offset: starting slot (will use slots 10-14)
        arr  // value: array data
    );
    
    // Read the entire array back
    let read_arr = <[Felt; 5]>::read(
        32,                       // contract_state_tree_height
        metadata.user_id,         // user_id: current user
        metadata.contract_id,     // contract_id: current contract
        10                        // offset: starting slot (reads slots 10-14)
    );
    
    // Arrays have a size() method - returns the number of slots occupied
    let array_size = <[Felt; 5]>::size(); // Returns 5 (slots)
    
    // Reading from another user's array storage
    let other_user_array = <[Felt; 5]>::read(
        32,              // contract_state_tree_height
        456,             // user_id: different user
        metadata.contract_id,  // same contract
        10               // offset: same slot range, different user's data
    );
}
}

Storage References

StorageRef provides convenient access to storage with automatic metadata handling:

#![allow(unused)]
fn main() {
fn storage_ref_example() {
    let metadata = ContractMetadata::current();
    
    // Create a storage reference for a single Felt at slot 0
    let storage_ref = StorageRef::<Felt, 1u32>::new(
        0,        // offset: storage slot index
        metadata  // metadata: contains user_id, contract_id, tree_height
    );
    
    // Set value through reference (writes to current user's storage)
    storage_ref.set(42);
    
    // Get value through reference (reads from metadata-specified location)
    let value = storage_ref.get();
    
    // Array storage references support indexing
    let array_ref = StorageRef::<[Felt; 10], 1u32>::new(
        20,       // offset: starting slot for array (slots 20-29)
        metadata  // metadata: specifies which user/contract
    );
    
    // Access individual array elements
    let element_ref = array_ref.index(3); // Access element at index 3 (slot 23)
    element_ref.set(99);                  // Write to that specific element
    let element_value = element_ref.get(); // Read from that specific element
    
    // Create reference for different user's storage
    let other_metadata = ContractMetadata::new(
        metadata.contract_id, // same contract
        456                   // different user_id
    );
    let other_user_ref = StorageRef::<Felt, 1u32>::new(0, other_metadata);
    let other_value = other_user_ref.get(); // Reads from user 456's slot 0
}
}

Contract Metadata

#![allow(unused)]
fn main() {
fn metadata_example() {
    // Get current execution context metadata
    let current_metadata = ContractMetadata::current();
    // Contains: contract_state_tree_height=32, current contract_id, current user_id
    
    // Create metadata for specific contract/user combination
    let custom_metadata = ContractMetadata::new(
        123, // contract_id: target contract
        456  // user_id: target user
    );
    // Sets: contract_state_tree_height=32, contract_id=123, user_id=456
    
    // Access metadata fields
    let contract_id = current_metadata.get_contract_id(); // Returns current contract ID
    let user_id = current_metadata.get_user_id();         // Returns current user ID
    
    // Metadata is used in storage operations to specify:
    // - Which user's storage space to access
    // - Which contract's storage to read/write
    // - The merkle tree height for state verification
}
}

Memory Utilities

Low-level memory and type utilities for advanced operations.

Type Transmutation

#![allow(unused)]
fn main() {
fn transmute_example() {
    // Convert between compatible types
    let felt_array: [Felt; 4] = [1, 2, 3, 4];
    
    // Transmute to different representation (Hash = [Felt; 4])
    let as_hash: Hash = transmute#<Hash>(felt_array);
    
    // Note: transmute is unsafe and requires exact size matching
}
}

Size Information

#![allow(unused)]
fn main() {
fn size_example() {
    // Get size of types in Felt units
    let felt_size = size_of#<Felt>();     // 1
    let bool_size = size_of#<bool>();     // 1
    let u32_size = size_of#<u32>();       // 1
    let array_size = size_of#<[Felt; 10]>(); // 10
}
}

Cross-Contract Invocation

Deferred Calls

Psy currently supports deferred contract calls. Deferred calls execute after the current function call completes.

#![allow(unused)]
fn main() {
fn deferred_call_example() {
    let target_contract = 123;
    let method_id = 456;
    let inputs = (42, true);
    
    // Deferred call - executes after current function completes
    invoke_deferred#<(Felt, bool)>(
        target_contract,
        method_id,
        inputs
    );
}
}

Note: Synchronous calls (invoke_sync) are not currently supported.

Default Trait

Provides default values for primitive types:

#![allow(unused)]
fn main() {
fn default_example() {
    // Get default values
    let default_felt = Felt::default();   // 0
    let default_bool = bool::default();   // false
    let default_u32 = u32::default();     // 0u32
    
    // Useful for initialization
    let mut storage_value = Felt::default();
}
}

Low-Level Built-in Functions

These functions are prefixed with __ and provide direct access to ZK circuit operations:

Bit Manipulation

#![allow(unused)]
fn main() {
fn bit_operations() {
    let value = 255;
    
    // Split a Felt into individual bits
    let bits = __split_bits(value, 8);  // Split into 8 bits
    
    // Reconstruct from bits
    let reconstructed = __sum_bits(bits);
    
    // bits is [Felt; 8] where each element is 0 or 1
}
}

Context Access (Internal)

#![allow(unused)]
fn main() {
// These are internal functions wrapped by the context module
fn internal_context() {
    let user_id = __ctx_get_user_id();
    let contract_id = __ctx_get_contract_id();
    // ... other __ctx_* functions
}
}

Storage Access (Internal)

#![allow(unused)]
fn main() {
// Internal storage functions
fn internal_storage() {
    // Single slot read/write
    let value = __storage_read(32, 1, 2, 0); // height, user, contract, slot
    __storage_write(0, 42);
    
    // Range operations for arrays
    let range_data = __storage_read_range(32, 1, 2, 0, 5); // Read 5 slots
    __storage_write_range(0, [1, 2, 3, 4, 5]);
}
}

Memory Operations (Internal)

#![allow(unused)]
fn main() {
// Internal memory functions
fn internal_memory() {
    let size = __mem_size_of::<Felt>();
    let converted = __mem_transmute::<Hash>([1, 2, 3, 4]);
}
}

Utility Functions

State Management

#![allow(unused)]
fn main() {
fn state_management() {
    // Clear the entire cached modifications (dangerous operation)
    clear_entire_tree();
    
    // This function removes all stored data - use with extreme caution
}
}

Type Aliases

Common type aliases used throughout the standard library:

#![allow(unused)]
fn main() {
// Hash represents a 4-Felt hash value (equivalent to 4 u64 values)
// This is the fundamental storage slot type - each storage slot is a Hash
pub type Hash = [Felt; 4];
}

Storage Slot Structure

Each storage slot in Psy is a Hash type:

#![allow(unused)]
fn main() {
fn slot_example() {
    // Each slot can hold 4 Felt values (4 u64s)
    let slot_data: Hash = [1, 2, 3, 4];
    
    // When you read/write storage, you're working with Hash-sized slots
    let slot_hash = get_state_hash_at(0);  // Returns Hash = [Felt; 4]
    
    // Individual Felt values are packed into these 4-element slots
}
}

Important Notes

Storage Considerations

  1. User Isolation: Each user has completely separate storage - User A's slot 0 is independent of User B's slot 0
  2. Slot Structure: Each storage slot is a Hash type ([Felt; 4]) that can contain 4 u64 values
  3. Slot Indexing: Storage slots are indexed by Felt values starting from 0 within each user's storage space
  4. Automatic Layout: Arrays and complex types are automatically laid out in sequential slots in user's storage
  5. Size Calculation: Use the size() method to determine how many slots a type occupies
  6. Cross-User Access: Reading another user's storage requires explicit user_id and proper permissions
  7. Cross-Contract Access: Reading from other contracts accesses the current user's state in that contract

Performance Implications

  1. Storage Operations: Reading/writing storage generates ZK constraints
  2. Cross-Contract Calls: Deferred invocations can be expensive in terms of circuit size
  3. Memory Operations: Transmute operations should be used sparingly
  4. Bit Operations: Bit manipulation functions expand to multiple constraints

Security Considerations

  1. Access Control: Context functions return current execution context - ensure proper authorization
  2. State Isolation: Each contract has isolated storage unless explicitly accessed
  3. Transmute Safety: Type transmutation bypasses type safety - use only when necessary
  4. Clear Operations: clear_entire_tree() is irreversible

Key Points

  1. Standard Library: Comprehensive built-in functions for blockchain operations
  2. Context Access: Rich execution environment information available
  3. Storage System: Automatic serialization for primitive types and arrays
  4. Storage References: Convenient high-level interface to storage operations
  5. Cross-Contract Calls: Deferred invocation pattern (synchronous calls are not currently supported)
  6. Memory Utilities: Low-level operations for advanced use cases
  7. Default Values: Consistent initialization patterns for all primitive types
  8. Internal Functions: Direct access to ZK circuit operations when needed

Contract Deployment Architecture

This document explains how Psy smart contract functions are compiled, deployed, and organized in the blockchain's state trees.

Function Compilation Pipeline

When a Psy smart contract is compiled, each function goes through the following process:

  1. Source Code → DPN Opcodes → ZK Circuit → Verifier Data
  2. The resulting verifier data and function metadata are stored on-chain in a hierarchical tree structure

Contract Function Tree

Each deployed contract maintains a Contract Function Tree that stores information about all its public functions.

Function Storage Layout

Each function occupies two leaves in the Contract Function Tree:

Leaf 1: Function Signature

#![allow(unused)]
fn main() {
// Function metadata including name, parameters, and return types
let function_signature: FunctionSignature = FunctionSignature {
    name: "transfer",
    parameters: vec![("recipient", "Felt"), ("amount", "Felt")],
    return_type: "Felt",
    visibility: "pub",
};
}

Leaf 2: Verifier Data Hash

#![allow(unused)]
fn main() {
// Hash of the ZK circuit verifier data generated from DPN opcodes
let verifier_hash: QHashOut<F> = hash(circuit_verifier_data);
}

Function Tree Organization

Contract Function Tree
├── Function 0
│   ├── Leaf 0: Function Signature
│   └── Leaf 1: Verifier Data Hash
├── Function 1  
│   ├── Leaf 2: Function Signature
│   └── Leaf 3: Verifier Data Hash
├── Function 2
│   ├── Leaf 4: Function Signature
│   └── Leaf 5: Verifier Data Hash
└── ...
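The leaf indices in the diagram follow a simple pattern: function i occupies leaves 2i (signature) and 2i + 1 (verifier data hash). A small illustrative helper (Python, not part of any Psy API) makes the mapping explicit:

```python
# Hypothetical helper: map a function's index in the Contract Function Tree
# to its two leaf indices (signature leaf, verifier-data-hash leaf).
def function_leaves(function_index: int) -> tuple:
    signature_leaf = 2 * function_index
    verifier_hash_leaf = 2 * function_index + 1
    return (signature_leaf, verifier_hash_leaf)

print(function_leaves(0))  # (0, 1)
print(function_leaves(2))  # (4, 5)
```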

Contract Tree Structure

The Contract Function Tree root is then stored as part of a Contract Leaf in the higher-level Contract Tree.

PsyContractLeaf Structure

#![allow(unused)]
fn main() {
pub struct PsyContractLeaf<F: RichField> {
    pub deployer: QHashOut<F>,
    pub function_tree_root: QHashOut<F>,
    pub state_tree_height: F,
}
}

Field Explanations

deployer: QHashOut<F>
  • Purpose: Identifies who deployed this contract
  • Content: The deployer's public key hash
  • Usage:
    • Access control and permissions
    • Contract ownership verification
    • Audit trails for contract deployment
function_tree_root: QHashOut<F>
  • Purpose: Root hash of the Contract Function Tree
  • Content: Merkle root containing all function verifier data and signatures
  • Usage:
    • Efficient verification of function existence
    • Proof generation for function calls
    • Contract integrity validation
state_tree_height: F
  • Purpose: Defines the maximum depth/capacity of the contract's state tree
  • Content: Height parameter determining how many state slots the contract can use
  • Usage:
    • State tree initialization and validation
    • Memory allocation for contract state
    • Gas/resource calculation for state operations
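Since a tree of height h has 2^h leaves and each storage slot holds one Hash ([Felt; 4]), a contract's state capacity follows directly from state_tree_height. A short Python sketch of the arithmetic (the 2^height formula is as described here; the helper name is ours):

```python
# Capacity implied by state_tree_height: a tree of height h has 2^h leaves,
# and each storage slot holds one Hash = [Felt; 4].
def state_capacity(state_tree_height: int) -> tuple:
    slots = 2 ** state_tree_height
    felts = slots * 4  # each slot packs 4 Felt values
    return (slots, felts)

print(state_capacity(8))   # (256, 1024)
print(state_capacity(32))  # a full-height contract state tree
```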

Complete Storage Hierarchy

Global Contract Tree
├── Contract 0
│   ├── deployer: QHashOut<F>
│   ├── function_tree_root: QHashOut<F>  ──┐
│   └── state_tree_height: F              │
├── Contract 1                            │
│   ├── deployer: QHashOut<F>             │
│   ├── function_tree_root: QHashOut<F>   │
│   └── state_tree_height: F              │
└── ...                                   │
                                          │
            ┌─────────────────────────────┘
            ▼
    Contract Function Tree (for Contract 0)
    ├── mint()
    │   ├── Function Signature
    │   └── Verifier Data Hash
    ├── transfer()  
    │   ├── Function Signature
    │   └── Verifier Data Hash
    ├── burn()
    │   ├── Function Signature
    │   └── Verifier Data Hash
    └── ...

Function Call Verification Process

When a function is called on-chain:

  1. Locate Contract: Find the contract in Global Contract Tree using contract ID
  2. Verify Contract Link: Ensure the contract is linked to the checkpoint tree root
  3. Match Function Signature: Verify the call data matches a function signature in the Contract Function Tree
  4. Validate Circuit: Confirm the function's compiled circuit corresponds to the verifier data hash stored in the Contract Function Tree
  5. Execute: Verify witness satisfies circuit constraints and process the function call (handled by Psy zkVM)
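The steps above amount to a lookup-and-check routine. The following is an illustrative Python model only: in-memory dicts stand in for the Global Contract Tree and Contract Function Trees, and all names are hypothetical, not the actual node implementation (real verification checks Merkle proofs against the checkpoint root).

```python
# Hypothetical model of function-call verification. Dicts stand in for the
# Merkle trees; a real node verifies membership proofs against the
# checkpoint tree root instead of doing plain dict lookups.
def verify_call(contract_tree, contract_id, signature, verifier_hash):
    contract = contract_tree.get(contract_id)   # 1. Locate contract
    if contract is None:
        return False
    function_tree = contract["function_tree"]   # 3. Match function signature
    stored_hash = function_tree.get(signature)
    if stored_hash is None:
        return False
    return stored_hash == verifier_hash         # 4. Validate circuit

contracts = {7: {"function_tree": {"transfer(Felt,Felt)": "0xabc"}}}
assert verify_call(contracts, 7, "transfer(Felt,Felt)", "0xabc")
assert not verify_call(contracts, 7, "transfer(Felt,Felt)", "0xdef")
```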

Example: Token Contract Storage

#![allow(unused)]
fn main() {
// Example token contract with 3 functions
contract Token {
    pub fn mint(amount: Felt) -> Felt { ... }
    pub fn transfer(to: Felt, amount: Felt) -> Felt { ... }  
    pub fn burn(amount: Felt) -> Felt { ... }
}
}

Compiled Storage Structure:

Contract Leaf:
├── deployer: deployer_public_key
├── function_tree_root: merkle_root([
│   │   mint_signature,
│   │   mint_verifier_hash,
│   │   transfer_signature, 
│   │   transfer_verifier_hash,
│   │   burn_signature,
│   │   burn_verifier_hash
│   ])
└── state_tree_height: 8  // Supports 2^8 = 256 state slots

This architecture enables Psy to efficiently store, lookup, and verify smart contract functions while maintaining the security properties required for a trustless blockchain system.

Functions and Closures

This chapter covers closures in Psy. For basic function documentation, see the Functions chapter.

Closures

Closures in Psy are anonymous functions that can capture variables from their surrounding scope. They use the |parameters| -> return_type { body } syntax.

Basic Closure Syntax

fn main() {
    // Simple closure that adds two numbers
    let add = |a: Felt, b: Felt| -> Felt {
        a + b
    };
    
    let result = add(5, 3);
    assert_eq(result, 8, "5 + 3 should equal 8");
}

Closure Type Inference

Closures can often infer parameter and return types:

fn main() {
    // Explicit types
    let multiply = |x: Felt, y: Felt| -> Felt { x * y };
    
    // Type inference (when context is clear)
    let numbers = [1, 2, 3, 4, 5];
    let sum = 0;
    
    // The closure type is inferred from usage
    let add_to_sum = |n| { sum + n };
    
    assert_eq(multiply(3, 4), 12, "3 * 4 should equal 12");
}

Variable Capture

Closures can capture variables from their surrounding scope:

fn main() {
    let multiplier = 10;
    let base_value = 5;
    
    // Closure captures multiplier and base_value from outer scope
    let calculate = |input: Felt| -> Felt {
        (base_value + input) * multiplier
    };
    
    let result = calculate(3);
    assert_eq(result, 80, "(5 + 3) * 10 should equal 80");
}

Closures with Conditional Logic

fn main() {
    // Closure that finds maximum of two values
    let max = |a: Felt, b: Felt| -> Felt {
        if a > b {
            a
        } else {
            b
        }
    };
    
    assert_eq(max(10, 7), 10, "max of 10 and 7 should be 10");
    assert_eq(max(3, 9), 9, "max of 3 and 9 should be 9");
}

Using Closures with Arrays

fn main() {
    let numbers = [1, 2, 3, 4, 5];
    let multiplier = 3;
    
    // Closure to transform array elements
    let transform = |x: Felt| -> Felt { x * multiplier };
    
    // Manual application (Psy doesn't have built-in map)
    let mut transformed = [0; 5];
    for i in 0u32..5u32 {
        transformed[i as usize] = transform(numbers[i as usize]);
    }
    
    assert_eq(transformed[0], 3, "1 * 3 should equal 3");
    assert_eq(transformed[4], 15, "5 * 3 should equal 15");
}

Closures as Function Parameters

You can pass closures to functions:

// Function that takes a closure as parameter
fn apply_operation(a: Felt, b: Felt, operation: fn(Felt, Felt) -> Felt) -> Felt {
    operation(a, b)
}

fn main() {
    let add = |x: Felt, y: Felt| -> Felt { x + y };
    let multiply = |x: Felt, y: Felt| -> Felt { x * y };
    
    let sum_result = apply_operation(5, 3, add);
    let product_result = apply_operation(5, 3, multiply);
    
    assert_eq(sum_result, 8, "5 + 3 should equal 8");
    assert_eq(product_result, 15, "5 * 3 should equal 15");
}

Complex Closure Examples

Mathematical Operations

fn main() {
    let base = 2;
    
    // Closure for power calculation
    let power = |exponent: Felt| -> Felt {
        let mut result = 1;
        for i in 0u32..(exponent as u32) {
            result = result * base;
        }
        result
    };
    
    assert_eq(power(3), 8, "2^3 should equal 8");
    assert_eq(power(4), 16, "2^4 should equal 16");
}

Validation Logic

fn main() {
    let min_value = 10;
    let max_value = 100;
    
    // Closure for range validation
    let is_in_range = |value: Felt| -> bool {
        value >= min_value && value <= max_value
    };
    
    assert(is_in_range(50), "50 should be in range");
    assert(!is_in_range(5), "5 should not be in range");
    assert(!is_in_range(150), "150 should not be in range");
}

Closure Limitations

Closures in Psy have some limitations compared to other languages:

  1. No Mutable Captures: Closures cannot mutate captured variables
  2. No Move Semantics: Variables are captured by value, not moved
  3. Simple Type System: Complex generic closures are not supported
fn main() {
    let mut counter = 0;
    
    // This would not work - cannot mutate captured variables:
    // let increment = || { counter = counter + 1; };
    
    // Instead, return new values:
    let next_value = |current: Felt| -> Felt { current + 1 };
    
    counter = next_value(counter);
    assert_eq(counter, 1, "Counter should be 1");
}

Practical Closure Use Cases

Configuration-based Operations

fn main() {
    let config_multiplier = 5;
    let config_offset = 10;
    
    // Closure encapsulates configuration
    let transform_value = |input: Felt| -> Felt {
        (input * config_multiplier) + config_offset
    };
    
    let result1 = transform_value(3);  // (3 * 5) + 10 = 25
    let result2 = transform_value(7);  // (7 * 5) + 10 = 45
    
    assert_eq(result1, 25, "Transform of 3 should be 25");
    assert_eq(result2, 45, "Transform of 7 should be 45");
}

Conditional Processing

fn main() {
    let threshold = 50;
    let bonus_rate = 2;
    
    // Closure for bonus calculation
    let calculate_bonus = |base_amount: Felt| -> Felt {
        if base_amount > threshold {
            base_amount * bonus_rate
        } else {
            base_amount
        }
    };
    
    assert_eq(calculate_bonus(30), 30, "No bonus for 30");
    assert_eq(calculate_bonus(60), 120, "Double bonus for 60");
}

Best Practices

  1. Keep closures simple - Complex logic should be in named functions
  2. Use descriptive variable names - Even in short closures
  3. Prefer explicit types - When the closure interface is important
  4. Consider function alternatives - For reusable logic
// Good: Simple, clear closure
fn main() {
    let double = |x: Felt| -> Felt { x * 2 };
    
    // Good: Explicit types for important interfaces
    let validator = |amount: Felt| -> bool { amount > 0 };
    
    // Consider: Named function for complex logic
    fn complex_calculation(a: Felt, b: Felt, c: Felt) -> Felt {
        if a > b {
            a * c + b
        } else {
            b * c + a
        }
    }
}

Summary

Closures in Psy provide a way to create anonymous functions that can capture variables from their environment. They are useful for:

  • Simple transformations and calculations
  • Configuration-based operations that capture settings
  • Callback-style patterns when passing to other functions
  • Encapsulating logic with captured context

While more limited than their counterparts in some other languages, Psy closures are sufficient for most functional programming patterns needed in smart contract development.
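The callback-style pattern mentioned above can be sketched as follows. Note this is a hypothetical example: the `|Felt| -> Felt` parameter type syntax for function-typed arguments is an assumption, as this chapter does not show the exact signature form Psy uses for closure parameters.

```
// Hypothetical: apply_twice takes a closure as a parameter (syntax assumed)
fn apply_twice(f: |Felt| -> Felt, value: Felt) -> Felt {
    f(f(value))
}

fn main() {
    let add_three = |x: Felt| -> Felt { x + 3 };
    
    // (10 + 3) + 3 = 16
    let result = apply_twice(add_three, 10);
    assert_eq(result, 16, "Applying add_three twice should give 16");
}
```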

Modules and Visibility

Modules in Psy organize code into logical units and control visibility of functions, structs, and other items. They help create clean, maintainable codebases for smart contracts.

Module Basics

Defining Modules

#![allow(unused)]
fn main() {
// Define a module for mathematical operations
mod math {
    pub fn max(a: Felt, b: Felt) -> Felt {
        if a > b {
            a
        } else {
            b
        }
    }
    
    pub fn min(a: Felt, b: Felt) -> Felt {
        if a < b {
            a
        } else {
            b
        }
    }
    
    // Private function - not accessible outside module
    fn internal_calculation(x: Felt) -> Felt {
        x * x + 1
    }
}
}

Using Modules

// Import all public items from math module
use math::*;

fn main() {
    let maximum = max(10, 5);
    let minimum = min(10, 5);
    
    assert_eq(maximum, 10, "Max of 10 and 5 should be 10");
    assert_eq(minimum, 5, "Min of 10 and 5 should be 5");
}

Selective Imports

mod utils {
    pub fn add(a: Felt, b: Felt) -> Felt {
        a + b
    }
    
    pub fn multiply(a: Felt, b: Felt) -> Felt {
        a * b
    }
    
    pub fn subtract(a: Felt, b: Felt) -> Felt {
        a - b
    }
}

// Import specific functions
use utils::add;
use utils::multiply;

fn main() {
    let sum = add(5, 3);
    let product = multiply(4, 7);
    
    // subtract is not imported, so this would not work:
    // let difference = subtract(10, 3);
    
    assert_eq(sum, 8, "5 + 3 should equal 8");
    assert_eq(product, 28, "4 * 7 should equal 28");
}

Visibility Control

Public vs Private Items

mod validation {
    // Public function - accessible outside module
    pub fn is_valid_amount(amount: Felt) -> bool {
        amount > 0 && amount <= MAX_AMOUNT
    }
    
    // Public function
    pub fn validate_user_id(user_id: Felt) -> bool {
        user_id > 0 && check_user_exists(user_id)
    }
    
    // Private constant - only accessible within module
    const MAX_AMOUNT: Felt = 1000000;
    
    // Private function - only accessible within module
    fn check_user_exists(user_id: Felt) -> bool {
        // Implementation details hidden
        user_id < 100000
    }
}

fn main() {
    use validation::*;
    
    // Can use public functions
    assert(is_valid_amount(500), "500 should be valid");
    assert(validate_user_id(123), "User 123 should be valid");
    
    // Cannot access private items:
    // let max = MAX_AMOUNT;  // Error: private constant
    // let exists = check_user_exists(123);  // Error: private function
}

Struct and Trait Visibility

mod data {
    // Public struct with public fields
    pub struct PublicData {
        pub value: Felt,
        pub timestamp: Felt,
    }
    
    // Public struct with private fields
    pub struct PrivateData {
        value: Felt,  // Private field
        pub metadata: Felt,  // Public field
    }
    
    impl PrivateData {
        // Public constructor
        pub fn new(value: Felt, metadata: Felt) -> Self {
            new PrivateData { value, metadata }
        }
        
        // Public getter for private field
        pub fn get_value(self) -> Felt {
            self.value
        }
        
        // Private helper function
        fn validate_value(value: Felt) -> bool {
            value > 0
        }
    }
    
    // Public trait
    pub trait Processable {
        pub fn process() -> Felt;
    }
    
    // Private struct - not accessible outside module
    struct InternalData {
        secret: Felt,
    }
}

fn main() {
    use data::*;
    
    // Can create public struct with public fields
    let public_data = new PublicData {
        value: 100,
        timestamp: 12345
    };
    
    // Can access public fields
    let value = public_data.value;
    
    // Must use constructor for struct with private fields
    let private_data = PrivateData::new(50, 999);
    
    // Can access public field
    let metadata = private_data.metadata;
    
    // Must use getter for private field
    let hidden_value = private_data.get_value();
    
    // Cannot access private field directly:
    // let direct_value = private_data.value;  // Error: private field
    
    // Cannot create private struct:
    // let internal = new InternalData { secret: 123 };  // Error: private struct
}

Nested Modules

mod contracts {
    pub mod token {
        pub fn mint(amount: Felt) -> Felt {
            amount
        }
        
        pub fn burn(amount: Felt) -> Felt {
            amount
        }
        
        mod internal {
            pub fn calculate_fee(amount: Felt) -> Felt {
                amount / 100
            }
        }
        
        // Can use nested private module within parent
        pub fn mint_with_fee(amount: Felt) -> Felt {
            use internal::calculate_fee;
            amount - calculate_fee(amount)
        }
    }
    
    pub mod nft {
        pub fn create_nft(metadata: Hash) -> Felt {
            // Implementation
            1
        }
        
        pub fn transfer_nft(token_id: Felt, to: Felt) -> bool {
            // Implementation
            true
        }
    }
}

fn main() {
    // Access nested module functions
    use contracts::token::*;
    use contracts::nft::*;
    
    let minted = mint(100);
    let minted_with_fee = mint_with_fee(100);
    let nft_id = create_nft([1, 2, 3, 4]);
    
    assert_eq(minted, 100, "Basic mint should return amount");
    assert_eq(minted_with_fee, 99, "Mint with fee should deduct 1%");
    assert_eq(nft_id, 1, "NFT creation should return token ID");
}

Module Organization Patterns

Separation by Functionality

#![allow(unused)]
fn main() {
// Authentication module
mod auth {
    pub fn verify_signature(signature: Hash, message: Hash, public_key: Hash) -> bool {
        // Signature verification logic
        true
    }
    
    pub fn hash_password(password: [u8; 32]) -> Hash {
        // Password hashing logic
        [0, 0, 0, 0]
    }
}

// Storage operations module
mod storage {
    pub fn store_user_data(user_id: Felt, data: Hash) {
        // Storage implementation
    }
    
    pub fn retrieve_user_data(user_id: Felt) -> Hash {
        // Retrieval implementation
        [0, 0, 0, 0]
    }
}

// Business logic module
mod logic {
    use auth::*;
    use storage::*;
    
    pub fn register_user(user_id: Felt, public_key: Hash, initial_data: Hash) -> bool {
        // Combine auth and storage operations
        store_user_data(user_id, initial_data);
        true
    }
    
    pub fn update_user_data(user_id: Felt, new_data: Hash, signature: Hash) -> bool {
        // Simplified: the stored data slot doubles as the user's public key here
        let stored_key = retrieve_user_data(user_id);
        if verify_signature(signature, new_data, stored_key) {
            store_user_data(user_id, new_data);
            true
        } else {
            false
        }
    }
}
}

Contract-Specific Modules

#![allow(unused)]
fn main() {
mod token_contract {
    pub struct TokenState {
        pub total_supply: Felt,
        pub balances: [Felt; 1000000],
    }
    
    pub fn transfer(from: Felt, to: Felt, amount: Felt, state: TokenState) -> TokenState {
        // Transfer logic
        state
    }
    
    pub fn mint(to: Felt, amount: Felt, state: TokenState) -> TokenState {
        // Mint logic
        state
    }
}

mod governance_contract {
    pub struct Proposal {
        pub id: Felt,
        pub description_hash: Hash,
        pub votes_for: Felt,
        pub votes_against: Felt,
    }
    
    pub fn create_proposal(description_hash: Hash) -> Proposal {
        new Proposal {
            id: 1,
            description_hash,
            votes_for: 0,
            votes_against: 0
        }
    }
    
    pub fn vote(proposal: Proposal, vote_for: bool, weight: Felt) -> Proposal {
        if vote_for {
            new Proposal {
                id: proposal.id,
                description_hash: proposal.description_hash,
                votes_for: proposal.votes_for + weight,
                votes_against: proposal.votes_against
            }
        } else {
            new Proposal {
                id: proposal.id,
                description_hash: proposal.description_hash,
                votes_for: proposal.votes_for,
                votes_against: proposal.votes_against + weight
            }
        }
    }
}
}

Module Constants and Types

mod constants {
    // Public constants
    pub const MAX_SUPPLY: Felt = 1000000;
    pub const MIN_TRANSFER: Felt = 1;
    pub const FEE_RATE: Felt = 100; // 1%
    
    // Private constants
    const INTERNAL_MULTIPLIER: Felt = 1337;
    
    pub fn get_adjusted_amount(amount: Felt) -> Felt {
        amount * INTERNAL_MULTIPLIER
    }
}

mod types {
    // Public type aliases
    pub type UserId = Felt;
    pub type TokenId = Felt;
    pub type Amount = Felt;
    
    // Public struct
    pub struct UserData {
        pub id: UserId,
        pub balance: Amount,
        pub last_activity: Felt,
    }
    
    // Public enum-like pattern
    pub struct TransferType {
        pub code: Felt,
    }
    
    impl TransferType {
        pub fn standard() -> Self {
            new TransferType { code: 0 }
        }
        
        pub fn fee_exempt() -> Self {
            new TransferType { code: 1 }
        }
    }
}

fn main() {
    use constants::*;
    use types::*;
    
    let user_data = new UserData {
        id: 123,
        balance: MAX_SUPPLY / 10,
        last_activity: 12345
    };
    
    let transfer_type = TransferType::standard();
    let adjusted = get_adjusted_amount(MIN_TRANSFER);
    
    assert_eq(user_data.balance, 100000, "User balance should be 10% of max supply");
}

Module Re-exports

mod internal {
    pub mod crypto {
        // Takes a 4-byte input to match the call sites below (simplified)
        pub fn hash(data: [u8; 4]) -> Hash {
            [0, 0, 0, 0]  // Simplified
        }
        
        pub fn verify(hash: Hash, signature: Hash) -> bool {
            true  // Simplified
        }
    }
    
    pub mod math {
        pub fn abs(x: Felt) -> Felt {
            if x < 0 { 0 - x } else { x }
        }
        
        pub fn sqrt(x: Felt) -> Felt {
            // Simplified square root
            x / 2
        }
    }
}

// Re-export selected functions for easier access
mod utils {
    // Re-export crypto functions
    pub use internal::crypto::hash;
    pub use internal::crypto::verify;
    
    // Re-export math functions with different names
    pub use internal::math::abs as absolute_value;
    pub use internal::math::sqrt as square_root;
    
    // Additional utility function
    pub fn combine_hash(a: Hash, b: Hash) -> Hash {
        let combined = [a[0] + b[0], a[1] + b[1], a[2] + b[2], a[3] + b[3]];
        hash([combined[0] as u8, combined[1] as u8, combined[2] as u8, combined[3] as u8])
    }
}

fn main() {
    use utils::*;
    
    let data = [1u8, 2u8, 3u8, 4u8];
    let hash_result = hash(data);
    let is_valid = verify(hash_result, [0, 0, 0, 0]);
    let abs_result = absolute_value(0 - 5);
    
    assert(is_valid, "Hash verification should succeed");
    assert_eq(abs_result, 5, "Absolute value of -5 should be 5");
}

Best Practices

1. Organize by Domain

#![allow(unused)]
fn main() {
// Good: Clear domain separation
mod user_management {
    pub fn create_user() -> Felt { 1 }
    pub fn delete_user() -> bool { true }
}

mod token_operations {
    pub fn transfer_token() -> bool { true }
    pub fn mint_token() -> Felt { 1 }
}

// Avoid: Mixed responsibilities
mod helpers {
    pub fn create_user() -> Felt { 1 }
    pub fn transfer_token() -> bool { true }
    pub fn random_utility() -> Felt { 42 }
}
}

2. Use Clear Visibility

#![allow(unused)]
fn main() {
mod contract {
    // Public interface
    pub fn public_transfer(from: Felt, to: Felt, amount: Felt) -> bool {
        validate_transfer(from, to, amount)
    }
    
    // Private implementation
    fn validate_transfer(from: Felt, to: Felt, amount: Felt) -> bool {
        from != to && amount > 0
    }
}
}

3. Minimize Public Surface

#![allow(unused)]
fn main() {
mod api {
    // Expose only what's necessary
    pub fn process_transaction(tx_data: Hash) -> bool {
        let validated = validate_transaction(tx_data);
        if validated {
            execute_transaction(tx_data)
        } else {
            false
        }
    }
    
    // Keep implementation details private
    fn validate_transaction(tx_data: Hash) -> bool {
        tx_data[0] != 0
    }
    
    fn execute_transaction(tx_data: Hash) -> bool {
        true
    }
}
}

Module Limitations

Current limitations in Psy modules:

  1. No Conditional Compilation: Cannot conditionally include modules
  2. Static Structure: Module structure must be defined at compile time
  3. No Dynamic Loading: Cannot load modules at runtime
  4. Simple Namespace: No complex namespace operations
#![allow(unused)]
fn main() {
// This works
mod simple_module {
    pub fn function() -> Felt { 42 }
}

// This doesn't work - conditional compilation not supported
// #[cfg(feature = "advanced")]
// mod advanced_module {
//     pub fn advanced_function() -> Felt { 42 }
// }
}

Summary

Modules in Psy provide:

  • Code Organization through logical grouping
  • Visibility Control with pub/private access levels
  • Namespace Management to avoid naming conflicts
  • Encapsulation of implementation details
  • Reusability through selective imports

Use modules to create clean, maintainable smart contract architectures that separate concerns and provide clear interfaces between different parts of your application.
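The namespace-management point can be illustrated with fully qualified calls. This sketch assumes that calling a function through its qualified path without a `use` import works the same way as the qualified `use` paths shown earlier in this chapter:

```
mod fees {
    pub fn calculate(amount: Felt) -> Felt {
        amount / 100
    }
}

mod rewards {
    pub fn calculate(amount: Felt) -> Felt {
        amount * 2
    }
}

fn main() {
    // Fully qualified paths keep the two `calculate` functions distinct
    let fee = fees::calculate(500);
    let reward = rewards::calculate(500);
    
    assert_eq(fee, 5, "1% fee of 500 should be 5");
    assert_eq(reward, 1000, "Reward for 500 should be 1000");
}
```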

Traits and Generics

This chapter covers traits (defining shared behavior) and generics (type flexibility) in Psy, including generic constraints and advanced patterns.

Traits

Traits define shared behavior that can be implemented by different types. They are similar to interfaces in other languages.

Basic Trait Definition

#![allow(unused)]
fn main() {
// Define a trait with required methods
pub trait Calculable {
    pub fn calculate() -> Felt;
    pub fn multiply(factor: Felt) -> Felt;
}
}

Implementing Traits

pub trait Value {
    pub fn value() -> Felt;
}

// Implement the trait for a struct
struct Two {}
impl Value for Two {
    pub fn value() -> Felt {
        2
    }
}

struct Ten {}
impl Value for Ten {
    pub fn value() -> Felt {
        10
    }
}

fn main() {
    assert_eq(Two::value(), 2, "Two should return 2");
    assert_eq(Ten::value(), 10, "Ten should return 10");
}

Traits with Parameters

pub trait Arithmetic {
    pub fn add(a: Felt, b: Felt) -> Felt;
    pub fn subtract(a: Felt, b: Felt) -> Felt;
    pub fn multiply(a: Felt, b: Felt) -> Felt;
}

struct Calculator {}

impl Arithmetic for Calculator {
    pub fn add(a: Felt, b: Felt) -> Felt {
        a + b
    }
    
    pub fn subtract(a: Felt, b: Felt) -> Felt {
        a - b
    }
    
    pub fn multiply(a: Felt, b: Felt) -> Felt {
        a * b
    }
}

fn main() {
    let sum = Calculator::add(5, 3);
    let product = Calculator::multiply(4, 7);
    
    assert_eq(sum, 8, "5 + 3 should equal 8");
    assert_eq(product, 28, "4 * 7 should equal 28");
}

Traits with Self Parameter

pub trait Comparable {
    pub fn is_greater_than(self, other: Self) -> bool;
    pub fn is_equal_to(self, other: Self) -> bool;
}

struct Number {
    pub value: Felt,
}

impl Comparable for Number {
    pub fn is_greater_than(self, other: Self) -> bool {
        self.value > other.value
    }
    
    pub fn is_equal_to(self, other: Self) -> bool {
        self.value == other.value
    }
}

fn main() {
    let num1 = new Number { value: 10 };
    let num2 = new Number { value: 5 };
    
    assert(num1.is_greater_than(num2), "10 should be greater than 5");
    assert(!num1.is_equal_to(num2), "10 should not equal 5");
}

Generics

Generics allow you to write code that works with multiple types while maintaining type safety.

Generic Functions

// Generic function with type parameter T
fn identity<T>(value: T) -> T {
    value
}

fn main() {
    let felt_value = identity(42);
    let bool_value = identity(true);
    
    assert_eq(felt_value, 42, "Identity should return the same Felt value");
    assert(bool_value, "Identity should return the same bool value");
}

Generic Structs

// Generic struct that can hold any type
struct Container<T> {
    pub item: T,
}

impl<T> Container<T> {
    pub fn new(item: T) -> Self {
        new Container { item }
    }
    
    pub fn get(self) -> T {
        self.item
    }
}

fn main() {
    let felt_container = Container::new(100);
    let bool_container = Container::new(false);
    
    assert_eq(felt_container.get(), 100, "Container should hold Felt value");
    assert(!bool_container.get(), "Container should hold bool value");
}

Multiple Generic Parameters

struct Pair<T, U> {
    pub first: T,
    pub second: U,
}

impl<T, U> Pair<T, U> {
    pub fn new(first: T, second: U) -> Self {
        new Pair { first, second }
    }
    
    pub fn get_first(self) -> T {
        self.first
    }
    
    pub fn get_second(self) -> U {
        self.second
    }
}

fn main() {
    let number_pair = Pair::new(42, 84);
    let mixed_pair = Pair::new(10, true);
    
    assert_eq(number_pair.get_first(), 42, "First should be 42");
    assert_eq(number_pair.get_second(), 84, "Second should be 84");
    assert_eq(mixed_pair.get_first(), 10, "First should be 10");
    assert(mixed_pair.get_second(), "Second should be true");
}

Generic Constraints (Trait Bounds)

Generic constraints allow you to specify that generic types must implement certain traits.

Simple Trait Bounds

pub trait Valuable {
    pub fn get_value() -> Felt;
}

struct Gold {}
impl Valuable for Gold {
    pub fn get_value() -> Felt {
        1000
    }
}

struct Silver {}
impl Valuable for Silver {
    pub fn get_value() -> Felt {
        100
    }
}

// Generic function constrained to types that implement Valuable.
// get_value() is a static method, so each of the three items contributes
// the same per-type value; the `items` argument only fixes the type T.
fn calculate_total_value<T: Valuable>(items: [T; 3]) -> Felt {
    T::get_value() + T::get_value() + T::get_value()
}

fn main() {
    let gold_items = [new Gold {}, new Gold {}, new Gold {}];
    let total_gold_value = calculate_total_value(gold_items);
    
    assert_eq(total_gold_value, 3000, "Three gold items should be worth 3000");
}

Multiple Trait Bounds

pub trait Addable {
    pub fn add(self, other: Self) -> Self;
}

pub trait Comparable {
    pub fn is_greater(self, other: Self) -> bool;
}

struct Number {
    pub value: Felt,
}

impl Addable for Number {
    pub fn add(self, other: Self) -> Self {
        new Number { value: self.value + other.value }
    }
}

impl Comparable for Number {
    pub fn is_greater(self, other: Self) -> bool {
        self.value > other.value
    }
}

// Function with multiple trait bounds
fn process_numbers<T: Addable + Comparable>(a: T, b: T) -> T {
    let sum = a.add(b);
    if sum.is_greater(a) {
        sum
    } else {
        a
    }
}

fn main() {
    let num1 = new Number { value: 10 };
    let num2 = new Number { value: 5 };
    let result = process_numbers(num1, num2);
    
    assert_eq(result.value, 15, "Result should be the sum: 15");
}

Where Clauses

For complex constraints, you can use where clauses for better readability:

pub trait Convertible<T> {
    pub fn convert(self) -> T;
}

pub trait Validatable {
    pub fn is_valid(self) -> bool;
}

struct Source {
    pub data: Felt,
}

struct Target {
    pub result: Felt,
}

impl Convertible<Target> for Source {
    pub fn convert(self) -> Target {
        new Target { result: self.data * 2 }
    }
}

impl Validatable for Target {
    pub fn is_valid(self) -> bool {
        self.result > 0
    }
}

// Complex generic function with where clause
fn transform_and_validate<T, U>(source: T) -> U 
where 
    T: Convertible<U>,
    U: Validatable,
{
    let target = source.convert();
    if target.is_valid() {
        target
    } else {
        panic("Invalid conversion result")
    }
}

fn main() {
    let source = new Source { data: 10 };
    let target = transform_and_validate(source);
    
    assert_eq(target.result, 20, "Converted value should be 20");
}

Generic Constraints in Struct Definitions

pub trait Numeric {
    pub fn zero() -> Self;
    pub fn add(self, other: Self) -> Self;
}

struct Counter<T: Numeric> {
    pub value: T,
}

impl<T: Numeric> Counter<T> {
    pub fn new() -> Self {
        new Counter { value: T::zero() }
    }
    
    pub fn increment(self, amount: T) -> Self {
        new Counter { value: self.value.add(amount) }
    }
}

// Implement Numeric for the built-in Felt type
impl Numeric for Felt {
    pub fn zero() -> Self {
        0
    }
    
    pub fn add(self, other: Self) -> Self {
        self + other
    }
}

fn main() {
    let counter = Counter::<Felt>::new();
    let incremented = counter.increment(5);
    
    assert_eq(incremented.value, 5, "Counter should be incremented to 5");
}

Advanced Generic Patterns

Associated Types in Traits

pub trait Iterator {
    type Item;
    
    pub fn next(self) -> Self::Item;
    pub fn has_next(self) -> bool;
}

struct NumberIterator {
    pub current: Felt,
    pub max: Felt,
}

impl Iterator for NumberIterator {
    type Item = Felt;
    
    pub fn next(self) -> Self::Item {
        let current = self.current;
        self.current = self.current + 1;
        current
    }
    
    pub fn has_next(self) -> bool {
        self.current < self.max
    }
}

fn collect_items<I: Iterator>(mut iterator: I) -> [I::Item; 3] {
    let item1 = iterator.next();
    let item2 = iterator.next();
    let item3 = iterator.next();
    [item1, item2, item3]
}

fn main() {
    let iterator = new NumberIterator { current: 1, max: 10 };
    let items = collect_items(iterator);
    
    assert_eq(items[0], 1, "First item should be 1");
    assert_eq(items[1], 2, "Second item should be 2");
    assert_eq(items[2], 3, "Third item should be 3");
}

Generic Trait Implementations

pub trait Serializable {
    pub fn serialize(self) -> [Felt; 4];
}

struct Data<T> {
    pub value: T,
}

// Implementation for Data<Felt>
impl Serializable for Data<Felt> {
    pub fn serialize(self) -> [Felt; 4] {
        [self.value, 0, 0, 0]
    }
}

// Specialized implementation for Data<bool>
impl Serializable for Data<bool> {
    pub fn serialize(self) -> [Felt; 4] {
        let bool_value = if self.value { 1 } else { 0 };
        [bool_value, 1, 0, 0]  // 1 indicates boolean type
    }
}

fn main() {
    let felt_data = new Data { value: 42 };
    let bool_data = new Data { value: true };
    
    let felt_serialized = felt_data.serialize();
    let bool_serialized = bool_data.serialize();
    
    assert_eq(felt_serialized[0], 42, "Felt data should serialize correctly");
    assert_eq(bool_serialized[0], 1, "Bool data should serialize correctly");
    assert_eq(bool_serialized[1], 1, "Bool type indicator should be 1");
}

Default Implementations

pub trait Describable {
    pub fn name() -> [u8; 32];
    
    // Default implementation
    pub fn description() -> [u8; 64] {
        let name_bytes = Self::name();
        // Simple default description
        let mut desc = [0u8; 64];
        // Copy name to description (simplified)
        desc[0] = name_bytes[0];
        desc[1] = name_bytes[1];
        desc
    }
}

struct Product {
    pub id: Felt,
}

impl Describable for Product {
    pub fn name() -> [u8; 32] {
        // Simplified name representation
        [80u8, 114u8, 111u8, 100u8, 117u8, 99u8, 116u8, 0u8, 0u8, 0u8, 0u8, 0u8, 0u8, 0u8, 0u8, 0u8, 0u8, 0u8, 0u8, 0u8, 0u8, 0u8, 0u8, 0u8, 0u8, 0u8, 0u8, 0u8, 0u8, 0u8, 0u8, 0u8] // "Product"
    }
    
    // description() is inherited from default implementation
}

fn main() {
    let name = Product::name();
    let desc = Product::description();
    
    assert_eq(name[0], 80u8, "Name should start with 'P'");
    assert_eq(desc[0], 80u8, "Description should start with name");
}

Best Practices

1. Use Descriptive Trait Names

#![allow(unused)]
fn main() {
// Good: Clear intent
pub trait Validatable {
    pub fn is_valid(self) -> bool;
}

pub trait Convertible<T> {
    pub fn convert(self) -> T;
}

// Avoid: Unclear purpose
pub trait Helper {
    pub fn help();
}
}

2. Keep Trait Interfaces Small

#![allow(unused)]
fn main() {
// Good: Single responsibility
pub trait Readable {
    pub fn read() -> [u8; 32];
}

pub trait Writable {
    pub fn write(data: [u8; 32]);
}

// Better than: Large interface
pub trait FileHandler {
    pub fn read() -> [u8; 32];
    pub fn write(data: [u8; 32]);
    pub fn delete();
    pub fn copy();
    // ... too many responsibilities
}
}

3. Use Generic Constraints Wisely

#![allow(unused)]
fn main() {
// Good: Specific constraints
fn process_numeric<T: Addable + Comparable>(data: T) -> T {
    // Implementation
    data
}

// Avoid: Over-constraining
fn simple_function<T: TraitA + TraitB + TraitC + TraitD>(data: T) -> T {
    // Only uses TraitA functionality
    data
}
}

4. Prefer Associated Types for Single Relationships

#![allow(unused)]
fn main() {
// Good: One-to-one relationship
pub trait Iterator {
    type Item;
    pub fn next() -> Self::Item;
}

// Less ideal: Generic parameter when association is clear
pub trait Iterator2<Item> {
    pub fn next() -> Item;
}
}

Limitations

Current limitations of traits and generics in Psy:

  1. No Higher-Kinded Types: Cannot abstract over type constructors
  2. Limited Type Inference: Explicit type annotations often required
  3. No Conditional Compilation: Cannot conditionally implement traits
  4. Simple Constraint Syntax: More complex constraint expressions not supported
#![allow(unused)]
fn main() {
// This works
fn simple_generic<T: SomeTrait>(value: T) -> T {
    value
}

// This doesn't work - complex constraints not supported
// fn complex_generic<T>(value: T) -> T 
// where 
//     T: TraitA,
//     T::Associated: TraitB,
//     for<'a> &'a T: TraitC,
// {
//     value
// }
}

Summary

Traits and generics in Psy provide:

  • Code reuse through shared behavior definitions
  • Type safety with compile-time type checking
  • Flexibility through generic programming
  • Constraints to ensure types meet requirements
  • Default implementations for common functionality

These features enable building robust, reusable smart contract components while maintaining the security and predictability required for blockchain applications.

Testing

Testing is an essential part of developing reliable smart contracts. Psy provides built-in support for writing and running tests using the #[test] attribute.

Writing Tests

Tests in Psy are functions marked with the #[test] attribute. Create a test file (e.g., test_math.psy):

#![allow(unused)]
fn main() {
fn add_numbers(a: Felt, b: Felt) -> Felt {
    return a + b;
}

fn multiply_numbers(a: Felt, b: Felt) -> Felt {
    return a * b;
}

#[test]
fn test_addition() {
    let result = add_numbers(2, 3);
    assert_eq(result, 5, "2 + 3 should equal 5");
}

#[test]
fn test_multiplication() {
    let result = multiply_numbers(4, 5);
    assert_eq(result, 20, "4 * 5 should equal 20");
}

#[test]
fn test_zero_multiplication() {
    let result = multiply_numbers(0, 10);
    assert_eq(result, 0, "0 * 10 should equal 0");
}
}

Running Tests

To run tests, use the dargo test command:

dargo test --file test_math.psy

This will execute all functions marked with #[test] and report the results.

Test Assertions

Psy provides several assertion functions for testing:

  • assert(condition, "message") - Assert that a condition is true
  • assert_eq(left, right, "message") - Assert that two values are equal
  • assert_ne(left, right, "message") - Assert that two values are not equal

Example:

#![allow(unused)]
fn main() {
#[test]
fn test_assertions() {
    let x = 10;
    let y = 5;
    
    assert(x > y, "x should be greater than y");
    assert_eq(x - y, 5, "difference should be 5");
    assert_ne(x, y, "x and y should not be equal");
}
}

Current Limitations

Important: dargo test currently has the following limitations:

  • Single-file testing only: Each test file must be run individually
  • No module-level testing: You cannot test across multiple modules in a single command
  • No test discovery: You must specify the exact file path

These limitations are being addressed in future releases.
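Until multi-file discovery lands, a small wrapper script can run each test file in turn. The file names below are hypothetical, and `echo` is prepended so this sketch only prints the commands it would run; drop the `echo` to actually invoke `dargo`:

```shell
# Hypothetical test files; replace with your project's own.
# `echo` is a stand-in so the sketch prints commands instead of running them.
for f in test_math.psy test_token.psy; do
  echo dargo test --file "$f"
done
```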

Best Practices

  1. Separate test files: Keep tests in dedicated .psy files separate from your main code
  2. Descriptive names: Use clear, descriptive names for test functions
  3. Clear messages: Provide helpful assertion messages that explain what went wrong
  4. Test edge cases: Include tests for boundary conditions and error cases

Example test structure:

#![allow(unused)]
fn main() {
// Functions to test
fn fibonacci(n: Felt) -> Felt {
    if n <= 1 {
        return n;
    };
    return fibonacci(n - 1) + fibonacci(n - 2);
}

// Test cases
#[test]
fn test_fibonacci_base_cases() {
    assert_eq(fibonacci(0), 0, "fib(0) should be 0");
    assert_eq(fibonacci(1), 1, "fib(1) should be 1");
}

#[test]
fn test_fibonacci_sequence() {
    assert_eq(fibonacci(2), 1, "fib(2) should be 1");
    assert_eq(fibonacci(3), 2, "fib(3) should be 2");
    assert_eq(fibonacci(4), 3, "fib(4) should be 3");
    assert_eq(fibonacci(5), 5, "fib(5) should be 5");
}
}

Real-World Applications

This chapter demonstrates practical applications of Psy language features through real-world smart contract examples. We'll examine actual contract implementations to show how the language concepts work together in production scenarios.

Token Contract - Complete Implementation

Let's examine a complete token contract that demonstrates storage, user interaction patterns, and the "read others, write self" security model.

Contract Structure

#[derive(Storage, StorageRef)]
struct OtherUserInfo {
    pub amount_sent: Felt,
    pub amount_claimed: Felt,
}

#[contract]
#[derive(Storage, StorageRef)]
struct Contract {
    pub balance: Felt,
    pub other_user_info: [OtherUserInfo; 16777216],
}

Key Design Decisions:

  1. Storage Organization: The contract uses a simple two-level structure

    • balance - Current user's token balance
    • other_user_info - Array tracking interactions with other users
  2. User Isolation: Each user has their own instance of this contract storage

    • User A's balance is completely separate from User B's balance
    • Cross-user interactions are tracked in the other_user_info array
  3. Large Array Size: 16777216 (2^24) slots allow tracking interactions with many users

    • Each slot stores amount_sent and amount_claimed for one other user
    • Uses user ID as array index for O(1) access
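
The isolation model described above can be sketched in Python (an illustrative model, not the Psy runtime; the dictionary-based storage and helper names are invented for this example):

```python
# Model: each (contract_id, user_id) pair gets its own independent storage,
# so one user's writes can never touch another user's balance.
storage = {}

def contract_for(user_id, contract_id=0):
    key = (contract_id, user_id)
    if key not in storage:
        storage[key] = {"balance": 0, "other_user_info": {}}
    return storage[key]

def simple_mint(user_id, amount):
    c = contract_for(user_id)  # resolves to the caller's own storage slice
    c["balance"] += amount

simple_mint(1, 100)
simple_mint(2, 30)
print(contract_for(1)["balance"], contract_for(2)["balance"])  # 100 30
```

Minting for user 1 leaves user 2's storage untouched, mirroring how each user holds an independent instance of the contract state.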

Token Minting

impl ContractRef {
    pub fn simple_mint(amount: Felt) {
        let c = ContractRef::new(ContractMetadata::current());
        c.balance.set(c.balance.get() + amount);
    }
}

Implementation Analysis:

  • Storage Access: Uses ContractMetadata::current() to access current user's storage
  • Atomic Operation: Reads current balance, adds amount, writes back
  • Simplicity: No authorization checks - any user can mint for themselves
  • Security: Cannot mint for other users due to storage isolation

Usage Pattern:

ContractRef::simple_mint(100);  // Mint 100 tokens to current user

Token Burning with Validation

pub fn simple_burn(amount: Felt) {
    let c = ContractRef::new(ContractMetadata::current());
    let current_balance = c.balance.get();
    assert(current_balance >= amount, "insufficient balance");
    c.balance.set(current_balance - amount);
}

Key Features:

  1. Validation: Checks sufficient balance before burning
  2. Error Handling: Uses descriptive assertion message
  3. Safe Arithmetic: No underflow risk due to validation
  4. Gas Efficiency: Single read, validate, single write pattern

Security Considerations:

  • A user can only burn their own tokens (storage isolation)
  • Prevents negative balances through validation
  • Clear error messages for debugging

Token Transfer - Cross-User Pattern

pub fn simple_transfer(recipient: Felt, amount: Felt) {
    let c = ContractRef::new(ContractMetadata::current());
    let current_balance = c.balance.get();
    assert(current_balance >= amount, "insufficient balance");

    let mut recipient_info = c.other_user_info.index(recipient).get();
    assert(recipient_info.amount_sent + amount > recipient_info.amount_sent, "amount sent overflow");

    c.other_user_info.index(recipient).set(new OtherUserInfo {
        amount_sent: recipient_info.amount_sent + amount,
        amount_claimed: recipient_info.amount_claimed
    });

    c.balance.set(current_balance - amount);
}

Transfer Mechanics:

  1. Sender Validation: Check sender has sufficient balance
  2. Overflow Protection: Ensure amount_sent doesn't overflow
  3. Record Keeping: Update sender's record of tokens sent to recipient
  4. Balance Update: Deduct from sender's balance

Critical Insight - "Write Self" Pattern:

  • Sender can only update their own storage
  • Transfer is recorded in sender's other_user_info[recipient]
  • Recipient must actively claim tokens (pull pattern)

Why This Design:

  • Security: Prevents unauthorized balance modifications
  • User Consent: Recipients choose when to claim tokens
  • Audit Trail: Complete history of all transfers in sender's storage

Token Claiming - Cross-User Reading

pub fn simple_claim(sender: Felt) {
    let self_user_id = get_user_id();
    assert(sender != self_user_id, "you cannot claim from yourself");

    let c = ContractRef::new(ContractMetadata::current());

    let sender_user_contract = ContractRef::new(ContractMetadata::new(get_contract_id(), sender));
    let sender_total_sent = sender_user_contract.other_user_info.index(self_user_id).get().amount_sent;

    let mut sender_info = c.other_user_info.index(sender).get();
    assert(sender_info.amount_claimed < sender_total_sent, "no tokens to claim from this sender");
    let claimed = sender_total_sent - sender_info.amount_claimed;

    c.other_user_info.index(sender).set(new OtherUserInfo {
        amount_sent: sender_info.amount_sent,
        amount_claimed: sender_total_sent
    });

    c.balance.set(c.balance.get() + claimed);
}

Claiming Process:

  1. Identity Validation: Prevent self-claiming
  2. Read Others: Access sender's storage to see amount sent
  3. Check Claimable: Compare sent vs already claimed amounts
  4. Write Self: Update recipient's tracking and balance

Cross-User Storage Access:

// Read from sender's storage
let sender_user_contract = ContractRef::new(ContractMetadata::new(get_contract_id(), sender));
let sender_total_sent = sender_user_contract.other_user_info.index(self_user_id).get().amount_sent;

Security Properties:

  • Read Transparency: Can read any user's storage
  • Write Isolation: Can only write to own storage
  • Double-Spend Prevention: Tracks amount_claimed to prevent re-claiming
  • User Control: Recipients decide when to claim
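
The full transfer-and-claim flow can be modeled end-to-end in Python (a self-contained illustration; the function names mirror the Psy code above, but plain dictionaries stand in for merkle-tree storage):

```python
# Two users, each with an isolated balance and a per-counterparty info record.
users = {uid: {"balance": 0, "info": {}} for uid in (1, 2)}

def info(uid, other):
    return users[uid]["info"].setdefault(other, {"sent": 0, "claimed": 0})

def transfer(sender, recipient, amount):
    assert users[sender]["balance"] >= amount, "insufficient balance"
    info(sender, recipient)["sent"] += amount  # write self: record in sender's storage
    users[sender]["balance"] -= amount

def claim(recipient, sender):
    total_sent = info(sender, recipient)["sent"]   # read others: sender's record
    rec = info(recipient, sender)
    assert rec["claimed"] < total_sent, "no tokens to claim from this sender"
    users[recipient]["balance"] += total_sent - rec["claimed"]
    rec["claimed"] = total_sent                    # write self: prevents re-claiming

users[1]["balance"] = 100
transfer(1, 2, 50)
claim(2, 1)
print(users[1]["balance"], users[2]["balance"])  # 50 50
```

A second `claim(2, 1)` fails, because `amount_claimed` has caught up with `amount_sent` — the double-spend prevention noted above.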

Complete Usage Example

fn main() {
    let c = ContractRef::new(ContractMetadata::current());
    
    // Mint initial tokens
    ContractRef::simple_mint(100);
    assert_eq(c.balance.get(), 100, "c.balance == 100");

    // Transfer to another user
    ContractRef::simple_transfer(10, 50);  // Send 50 tokens to user 10
    assert_eq(c.balance.get(), 50, "c.balance == 50");
    
    // Verify transfer was recorded
    assert_eq(c.other_user_info.index(10).get().amount_sent, 50, "recorded 50 tokens sent to user 10");
}

Advanced Patterns

Multi-Token Contract with Modules

mod token_types {
    pub const GOLD: Felt = 1;
    pub const SILVER: Felt = 2;
    pub const BRONZE: Felt = 3;
}

mod validation {
    use token_types::*;
    
    pub fn is_valid_token_type(token_type: Felt) -> bool {
        token_type == GOLD || token_type == SILVER || token_type == BRONZE
    }
    
    pub fn get_exchange_rate(from_type: Felt, to_type: Felt) -> Felt {
        if from_type == GOLD && to_type == SILVER {
            10  // 1 Gold = 10 Silver
        } else if from_type == SILVER && to_type == BRONZE {
            5   // 1 Silver = 5 Bronze
        } else {
            1   // 1:1 for same type
        }
    }
}

#[derive(Storage, StorageRef)]
struct TokenBalance {
    pub gold: Felt,
    pub silver: Felt,
    pub bronze: Felt,
}

#[contract]
#[derive(Storage, StorageRef)]
struct MultiTokenContract {
    pub balances: TokenBalance,
    pub exchange_history: [Felt; 1000],
}

impl MultiTokenContractRef {
    pub fn mint_token(token_type: Felt, amount: Felt) {
        use validation::*;
        use token_types::*;
        
        assert(is_valid_token_type(token_type), "invalid token type");
        
        let contract = MultiTokenContractRef::new(ContractMetadata::current());
        let mut balances = contract.balances.get();
        
        if token_type == GOLD {
            balances.gold = balances.gold + amount;
        } else if token_type == SILVER {
            balances.silver = balances.silver + amount;
        } else if token_type == BRONZE {
            balances.bronze = balances.bronze + amount;
        }
        
        contract.balances.set(balances);
    }
    
    pub fn exchange_tokens(from_type: Felt, to_type: Felt, amount: Felt) {
        use validation::*;
        
        assert(is_valid_token_type(from_type) && is_valid_token_type(to_type), "invalid token types");
        
        let rate = get_exchange_rate(from_type, to_type);
        let converted_amount = amount * rate;
        
        // Implementation would update balances accordingly
        // This demonstrates how modules organize related functionality
    }
}

Module Benefits:

  • Constants Organization: token_types module centralizes token definitions
  • Validation Logic: validation module provides reusable checks
  • Clean Implementation: Contract logic focuses on business rules, not validation details
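
The constants-plus-validation split can be mirrored in Python to check the exchange arithmetic (an illustration only; the values come from the modules above):

```python
# Python mirror of the token_types and validation modules.
GOLD, SILVER, BRONZE = 1, 2, 3

def is_valid_token_type(t):
    return t in (GOLD, SILVER, BRONZE)

def get_exchange_rate(from_type, to_type):
    if from_type == GOLD and to_type == SILVER:
        return 10   # 1 Gold = 10 Silver
    if from_type == SILVER and to_type == BRONZE:
        return 5    # 1 Silver = 5 Bronze
    return 1        # 1:1 otherwise

assert is_valid_token_type(GOLD)
print(get_exchange_rate(GOLD, SILVER) * 7)  # exchanging 7 Gold -> 70 Silver
```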

NFT-Style Contract with Traits

pub trait Transferable {
    pub fn can_transfer(token_id: Felt, from: Felt, to: Felt) -> bool;
    pub fn transfer(token_id: Felt, to: Felt);
}

pub trait Metadata {
    pub fn get_name(token_id: Felt) -> Hash;
    pub fn get_description(token_id: Felt) -> Hash;
}

#[derive(Storage, StorageRef)]
struct TokenInfo {
    pub owner: Felt,
    pub name_hash: Hash,
    pub description_hash: Hash,
    pub transferable: bool,
}

#[contract]
#[derive(Storage, StorageRef)]
struct NFTContract {
    pub tokens: [TokenInfo; 1000000],
    pub next_token_id: Felt,
}

impl Transferable for NFTContractRef {
    pub fn can_transfer(token_id: Felt, from: Felt, to: Felt) -> bool {
        let contract = NFTContractRef::new(ContractMetadata::current());
        let token = contract.tokens.index(token_id).get();
        token.owner == from && token.transferable
    }
    
    pub fn transfer(token_id: Felt, to: Felt) {
        let from = get_user_id();
        assert(Self::can_transfer(token_id, from, to), "transfer not allowed");
        
        let contract = NFTContractRef::new(ContractMetadata::current());
        let mut token = contract.tokens.index(token_id).get();
        token.owner = to;
        contract.tokens.index(token_id).set(token);
    }
}

impl Metadata for NFTContractRef {
    pub fn get_name(token_id: Felt) -> Hash {
        let contract = NFTContractRef::new(ContractMetadata::current());
        contract.tokens.index(token_id).get().name_hash
    }
    
    pub fn get_description(token_id: Felt) -> Hash {
        let contract = NFTContractRef::new(ContractMetadata::current());
        contract.tokens.index(token_id).get().description_hash
    }
}

Trait Benefits:

  • Interface Separation: Transferable and Metadata define clear contracts
  • Implementation Flexibility: Different NFT contracts can implement these traits differently
  • Code Reusability: Other contracts can implement the same traits

Governance Contract with Generics

pub trait Votable {
    pub fn get_voting_power(user_id: Felt) -> Felt;
}

struct ProposalData<T> {
    pub id: Felt,
    pub description_hash: Hash,
    pub votes_for: Felt,
    pub votes_against: Felt,
    pub metadata: T,
}

#[contract]
#[derive(Storage, StorageRef)]
struct GovernanceContract {
    pub proposals: [ProposalData<Hash>; 10000],
    pub user_votes: [Felt; 1000000],  // Track user voting history
}

impl GovernanceContractRef {
    pub fn create_proposal(description_hash: Hash, metadata: Hash) -> Felt {
        let contract = GovernanceContractRef::new(ContractMetadata::current());
        let proposal_id = get_next_proposal_id();
        
        contract.proposals.index(proposal_id).set(new ProposalData {
            id: proposal_id,
            description_hash,
            votes_for: 0,
            votes_against: 0,
            metadata
        });
        
        proposal_id
    }
    
    pub fn vote<V: Votable>(proposal_id: Felt, vote_for: bool) {
        let user_id = get_user_id();
        let voting_power = V::get_voting_power(user_id);
        
        assert(voting_power > 0, "no voting power");
        
        let contract = GovernanceContractRef::new(ContractMetadata::current());
        let mut proposal = contract.proposals.index(proposal_id).get();
        
        if vote_for {
            proposal.votes_for = proposal.votes_for + voting_power;
        } else {
            proposal.votes_against = proposal.votes_against + voting_power;
        }
        
        contract.proposals.index(proposal_id).set(proposal);
        
        // Record user's vote to prevent double-voting
        contract.user_votes.index(user_id).set(proposal_id);
    }
}

fn get_next_proposal_id() -> Felt {
    // Implementation would increment and return next available ID
    1
}

Generic Benefits:

  • Type Flexibility: ProposalData<T> can hold different metadata types
  • Constraint Power: V: Votable ensures voting power can be calculated
  • Extensibility: New voting systems can be plugged in via traits
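
The `V: Votable` constraint can be approximated in Python by passing the voting-power source in as a function (a hypothetical sketch; `token_weighted` is an invented stand-in for one Votable implementation):

```python
# The voting-power calculation is pluggable, mirroring the generic bound.
def vote(proposal, user_id, vote_for, get_voting_power):
    power = get_voting_power(user_id)
    assert power > 0, "no voting power"
    key = "votes_for" if vote_for else "votes_against"
    proposal[key] += power

proposal = {"id": 1, "votes_for": 0, "votes_against": 0}
token_weighted = lambda uid: {1: 10, 2: 3}.get(uid, 0)  # invented Votable stand-in
vote(proposal, 1, True, token_weighted)
vote(proposal, 2, False, token_weighted)
print(proposal["votes_for"], proposal["votes_against"])  # 10 3
```

Swapping `token_weighted` for a different function changes the voting system without touching the tally logic, which is the point of the trait bound.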

Design Patterns Summary

1. Storage Patterns

Single-User Data:

pub struct UserData {
    pub balance: Felt,
    pub last_activity: Felt,
}

Cross-User Tracking:

pub struct UserInteractions {
    pub sent: [Felt; MAX_USERS],
    pub received: [Felt; MAX_USERS],
}

Hierarchical Data:

pub struct ComplexData {
    pub metadata: TokenMetadata,
    pub balances: TokenBalances,
    pub history: [Transaction; 1000],
}

2. Access Patterns

Current User Access:

let contract = ContractRef::new(ContractMetadata::current());
let value = contract.field.get();

Cross-User Reading:

let other_contract = ContractRef::new(ContractMetadata::new(contract_id, other_user_id));
let other_value = other_contract.field.get();

Safe Writing:

// Only writes to current user's storage
let my_contract = ContractRef::new(ContractMetadata::current());
my_contract.field.set(new_value);

3. Security Patterns

Input Validation:

assert(amount > 0, "amount must be positive");
assert(user_id != get_user_id(), "cannot target self");

Overflow Protection:

assert(balance + amount > balance, "addition overflow");
assert(total_supply + minted > total_supply, "supply overflow");
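
The wrap-around check works because field addition reduces modulo the prime, so an overflowed sum lands below the original operand. A Python sketch (the Goldilocks prime here is an assumed modulus chosen for illustration, not taken from this document):

```python
# Assumed field modulus for illustration (the Goldilocks prime).
P = 2**64 - 2**32 + 1

def checked_add(balance, amount):
    total = (balance + amount) % P  # field addition wraps modulo P
    assert total > balance, "addition overflow"
    return total

print(checked_add(100, 50))    # 150
try:
    checked_add(P - 1, 2)      # wraps past the modulus, total becomes 1
except AssertionError as e:
    print(e)                   # addition overflow
```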

State Consistency:

let old_state = contract.state.get();
let new_state = update_state(old_state);
contract.state.set(new_state);

These patterns demonstrate how Psy's language features combine to create secure, efficient smart contracts that take advantage of zero-knowledge proofs while maintaining clear, auditable code.

Dargo - Psy Language Package Manager and Build Tool

Dargo is the official package manager and build system for the Psy Smart Contract Language. It handles project creation, dependency management, compilation, testing, and execution of Psy smart contracts.

Project Management

dargo new

Creates a new Psy language project with a standard directory structure.

dargo new my_project

This creates:

my_project/
├── Dargo.toml          # Project configuration
├── src/
│   ├── main.psy        # Main source file
│   └── lib.psy         # Library source file
└── target/             # Build output directory

Project Structure:

  • Dargo.toml - Project metadata and dependencies
  • src/main.psy - Entry point for binary projects
  • src/lib.psy - Library code (if applicable)
  • target/ - Compiled artifacts and intermediate files

dargo init

Initializes a Psy project in the current directory.

cd existing_directory
dargo init

Use this when you want to add Psy project structure to an existing directory.

Compilation

dargo compile

Compiles your Psy project to zero-knowledge proof (ZKP) circuits.

Basic Compilation

# Compile main function
dargo compile

# Compile specific contract methods
dargo compile --contract-name MyContract --method-names transfer mint

# Compile multiple methods
dargo compile -c TokenContract -m transfer burn mint approve

Contract Compilation Examples

Simple Contract:

#[contract]
#[derive(Storage, StorageRef)]
pub struct TokenContract {
    pub total_supply: Felt,
    pub balances: [Felt; 1000000],
}

impl TokenContractRef {
    pub fn mint(to: Felt, amount: Felt) {
        let contract = TokenContractRef::new(ContractMetadata::current());
        let current_supply = contract.total_supply.get();
        contract.total_supply.set(current_supply + amount);
    }
    
    pub fn transfer(from: Felt, to: Felt, amount: Felt) {
        // Transfer implementation
    }
}

fn main() {
    TokenContractRef::mint(123, 100);
}

Compilation Commands:

# Compile just the main function
dargo compile

# Compile specific contract method
dargo compile --contract-name TokenContract --method-names mint

# Compile multiple contract methods
dargo compile -c TokenContract -m mint transfer

# This will generate ZK circuits for the specified methods

Compilation Output

The compile command generates:

  • Formatted Code: Shows the processed and formatted version of your code
  • Circuit Files: ZK circuit representations in target/ directory
  • Constraint System: Mathematical constraints for zero-knowledge proofs

Execution and Testing

dargo execute

Compiles and executes Psy programs with specified parameters.

# Execute main function
dargo execute

# Execute with parameters
dargo execute --parameters 100 200

# Execute contract method with parameters
dargo execute --contract-name TokenContract --method-names mint --parameters 123 50

# Execute multiple methods
dargo execute -c TokenContract -m transfer burn -p 123 456 100

Execution Examples

Basic Function Execution:

fn add(a: Felt, b: Felt) -> Felt {
    a + b
}

fn main(x: Felt, y: Felt) -> Felt {
    add(x, y)
}

# Execute with parameters x=10, y=20
dargo execute --parameters 10 20
# Returns: 30

Contract Method Execution:

#[contract] 
#[derive(Storage, StorageRef)]
pub struct Calculator {
    pub result: Felt,
}

impl CalculatorRef {
    pub fn multiply(a: Felt, b: Felt) -> Felt {
        a * b
    }
    
    pub fn divide(a: Felt, b: Felt) -> Felt {
        a / b  // Modular inverse in finite field
    }
}

# Execute multiply method
dargo execute -c Calculator -m multiply -p 6 7
# Returns: 42

# Execute divide method  
dargo execute -c Calculator -m divide -p 15 3
# Returns: 5 (field division: 15 times the modular inverse of 3)

dargo test

Runs tests for your Psy project.

# Test specific file
dargo test --file src/main.psy

# Test with short flag
dargo test -f tests/token_test.psy

Test File Example

#[test]
fn test_addition() {
    let result = 2 + 3;
    assert_eq(result, 5, "2 + 3 should equal 5");
}

#[test]
fn test_contract_mint() {
    TokenContractRef::mint(123, 100);
    let contract = TokenContractRef::new(ContractMetadata::current());
    let supply = contract.total_supply.get();
    assert_eq(supply, 100, "Total supply should be 100 after minting");
}

fn main() {
    // Main function for regular execution
}

ABI Generation

dargo generate-abi

Generates ABI (Application Binary Interface) files for contracts, which define the contract's interface for external interaction.

# Generate ABI for specific contract
dargo generate-abi --contract-name TokenContract

# Generate with short flag
dargo generate-abi -c TokenContract

# Specify output directory
dargo generate-abi -c TokenContract --output-dir ./abi

# Pretty print the ABI JSON
dargo generate-abi -c TokenContract --pretty

ABI Structure

The generated ABI contains:

Contract Definition:

{
  "version": "1.0.0",
  "structs": [
    {
      "name": "TokenContract",
      "is_contract": true,
      "fields": [
        {
          "name": "balance",
          "type": "Felt"
        }
      ],
      "functions": [
        {
          "name": "transfer",
          "params": [
            {
              "name": "recipient",
              "type": "Felt"
            },
            {
              "name": "amount", 
              "type": "Felt"
            }
          ],
          "return": []
        }
      ]
    }
  ]
}

ABI Components:

  • structs: Contract and data structure definitions
  • is_contract: Indicates if struct is a contract
  • fields: Contract storage fields and their types
  • functions: Public methods with parameters and return types
  • params: Function parameter names and types
  • return: Function return types (empty array for void)

Type Mapping

Psy types map to ABI types as follows:

// Psy Code
#[contract]
#[derive(Storage, StorageRef)]
pub struct TokenContract {
    pub balance: Felt,
    pub users: [UserInfo; 1000],
}

impl TokenContractRef {
    pub fn transfer(to: Felt, amount: Felt) {
        // Implementation
    }
    
    pub fn get_balance() -> Felt {
        // Implementation
    }
}

// Generated ABI
{
  "structs": [
    {
      "name": "TokenContract",
      "is_contract": true,
      "fields": [
        {
          "name": "balance",
          "type": "Felt"
        },
        {
          "name": "users",
          "type": {
            "type": "Array",
            "inner_type": "UserInfo",
            "length": 1000
          }
        }
      ],
      "functions": [
        {
          "name": "transfer",
          "params": [
            {"name": "to", "type": "Felt"},
            {"name": "amount", "type": "Felt"}
          ],
          "return": []
        },
        {
          "name": "get_balance",
          "params": [],
          "return": ["Felt"]
        }
      ]
    }
  ]
}
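
The mapping can be mimicked with a small Python helper (hypothetical; the real generator ships with dargo, and this only reproduces the JSON shapes shown above):

```python
import json

def abi_type(t):
    if isinstance(t, tuple):  # (inner_type, length) models a Psy array [T; N]
        inner, length = t
        return {"type": "Array", "inner_type": inner, "length": length}
    return t  # scalar types map straight to their name, e.g. "Felt"

fields = [
    {"name": "balance", "type": abi_type("Felt")},
    {"name": "users", "type": abi_type(("UserInfo", 1000))},
]
print(json.dumps(fields, indent=2))
```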

Usage with TypeScript SDK

The generated ABI is used by the TypeScript SDK to create typed contract interfaces:

# 1. Generate ABI
dargo generate-abi -c TokenContract -o ./target

# 2. Copy to TypeScript SDK
cp target/TokenContract.abi.json psy_sdk/psy-ts-sdk/packages/contract-sdk/abi/

# 3. Generate TypeScript bindings
cd psy_sdk/psy-ts-sdk/packages/contract-sdk
pnpm generate

Code Formatting

dargo fmt

Formats Psy source code according to standard style guidelines.

# Format specific file
dargo fmt src/main.psy

# Format all files in src directory
dargo fmt src/*.psy

Before formatting:

fn messy_function(a:Felt,b:Felt)->Felt{
if a>b{a}else{b}
}

After formatting:

fn messy_function(a: Felt, b: Felt) -> Felt {
    if a > b {
        a
    } else {
        b
    }
}

Advanced Usage

Contract-Specific Compilation

When working with multiple contracts, you can compile specific contracts and methods:

// File: src/contracts.psy

#[contract]
#[derive(Storage, StorageRef)]
pub struct TokenContract {
    pub supply: Felt,
}

#[contract]  
#[derive(Storage, StorageRef)]
pub struct GovernanceContract {
    pub proposals: [Felt; 1000],
}

impl TokenContractRef {
    pub fn mint(amount: Felt) { /* implementation */ }
    pub fn burn(amount: Felt) { /* implementation */ }
}

impl GovernanceContractRef {
    pub fn propose(proposal_id: Felt) { /* implementation */ }
    pub fn vote(proposal_id: Felt, vote: bool) { /* implementation */ }
}

Compilation Commands:

# Compile only token contract methods
dargo compile -c TokenContract -m mint burn

# Compile only governance contract methods  
dargo compile -c GovernanceContract -m propose vote

# Compile specific method from specific contract
dargo compile -c TokenContract -m mint

Parameter Types and Formats

When using --parameters, different types are supported:

# Felt parameters
dargo execute -p 123 456 789

# Boolean parameters (represented as 0/1)
dargo execute -p 1 0  # true false

# Array parameters (space-separated elements)
dargo execute -m process_array -p 1 2 3 4 5

# Mixed parameter types
dargo execute -m complex_function -p 100 1 42
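
The encoding conventions above (booleans as 0/1, arrays flattened element by element) can be captured in a small Python helper; the `encode_params` function is hypothetical, and only the flag semantics come from this chapter:

```python
# Turn Python values into the space-separated form `--parameters` expects.
def encode_params(*values):
    out = []
    for v in values:
        if isinstance(v, bool):            # check bool before int: bool is an int subclass
            out.append("1" if v else "0")
        elif isinstance(v, (list, tuple)): # arrays become consecutive elements
            out.extend(str(x) for x in v)
        else:
            out.append(str(v))
    return out

print(encode_params(100, True, [1, 2, 3]))  # ['100', '1', '1', '2', '3']
```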

Environment Variables

Dargo respects certain environment variables:

# Set default file for test command
export FILE=tests/my_test.psy
dargo test

# Build optimization level
export DARGO_OPTIMIZATION=release
dargo compile

Project Configuration

Dargo.toml

The project configuration file supports various settings:

[package]
name = "my_contract"
type = "bin"  # or "lib"
authors = ["Your Name <[email protected]>"]

[dependencies]
# Future: dependency management

[build]
# Future: build configuration

Common Workflows

Development Workflow

  1. Create Project:
dargo new token_contract
cd token_contract
  2. Write Code:
// src/main.psy
#[contract]
#[derive(Storage, StorageRef)]
pub struct Token {
    pub balances: [Felt; 1000000],
}

impl TokenRef {
    pub fn transfer(to: Felt, amount: Felt) {
        // Implementation
    }
}
  3. Compile and Test:
# Check compilation
dargo compile

# Test specific methods
dargo compile -c Token -m transfer

# Execute with parameters
dargo execute -c Token -m transfer -p 123 50

# Run tests
dargo test -f src/main.psy
  4. Format Code:
dargo fmt src/main.psy

Debugging Workflow

  1. Check Compilation Errors:
dargo compile
# Review any compilation errors in output
  2. Test Individual Methods:
# Test one method at a time
dargo execute -c MyContract -m simple_method -p 10

# Add debug prints in your code and recompile
  3. Verify Circuit Generation:
# Ensure circuits are generated correctly
dargo compile -c MyContract -m target_method
# Check target/ directory for output files

Error Handling

Common Errors and Solutions

Error: "contract not found"

# Make sure contract name matches exactly
dargo compile -c TokenContract  # Case-sensitive!

Error: "method not found"

# Verify method exists in the contract implementation
# Check for typos in method name
dargo compile -c Token -m transfer  # Check spelling

Error: "parameter mismatch"

# Check parameter count and types
# Method expects: fn transfer(to: Felt, amount: Felt)
dargo execute -c Token -m transfer -p 123 50  # Correct: 2 parameters

Compilation Errors:

  • Read the formatted output to see processed code
  • Check for syntax errors in the formatted version
  • Verify all imports and dependencies are available

Performance Tips

  1. Incremental Compilation:

    • Dargo compiles only changed files when possible
    • Use specific method compilation for faster iteration
  2. Method-Specific Testing:

    • Test individual methods instead of entire contracts
    • Use --method-names to focus on specific functionality
  3. Circuit Optimization:

    • Simpler methods generate more efficient circuits
    • Avoid complex control flow when possible

Best Practices

  1. Project Organization:

    my_project/
    ├── src/
    │   ├── main.psy       # Main contract
    │   ├── utils.psy      # Utility functions
    │   └── tests.psy      # Test functions
    └── examples/
        └── usage.psy      # Usage examples
    
  2. Testing Strategy:

    • Write tests for each public contract method
    • Test edge cases and error conditions
    • Use descriptive test function names
  3. Development Process:

    • Compile frequently during development
    • Test methods individually before integration
    • Format code regularly with dargo fmt

Future Features

Planned features for future Dargo versions:

  • Dependency Management: External package dependencies
  • Build Profiles: Debug/release configurations
  • Package Registry: Shared package repository
  • IDE Integration: Enhanced editor support
  • Profiling Tools: Circuit complexity analysis

Summary

Dargo provides a complete development environment for Psy smart contracts:

  • Project Management: Create and organize Psy projects
  • Compilation: Convert Psy code to ZK circuits
  • Execution: Run and test contract methods
  • Testing: Automated test execution
  • Formatting: Consistent code style

Use Dargo for efficient development, testing, and deployment of zero-knowledge smart contracts in the Psy language.

Psy SDK Overview

The Psy SDK provides both Rust command-line tools and TypeScript libraries for interacting with the Psy network. This enables developers to build applications, deploy contracts, and interact with the blockchain from various environments.

Available SDKs

  • Rust CLI: Command-line tools for contract deployment, user registration, and network interaction
  • TypeScript SDK: JavaScript/TypeScript package for web applications and Node.js environments

Common Use Cases

  • User Management: Register users and manage wallet operations
  • Contract Development: Deploy and interact with smart contracts
  • Network Interaction: Query blockchain state and submit transactions
  • Proof Generation: Generate ZK proofs for transactions and network operations

Getting Started

  1. Choose your preferred SDK based on your development environment
  2. Follow the installation instructions for your chosen SDK
  3. Configure network endpoints in your application
  4. Begin building with the provided APIs and examples

Rust SDK

The Psy Rust SDK provides programmatic access to the Psy network through the psy-rust-sdk crate. It includes the RpcProvider for network communication and data types for blockchain interaction.

Installation

Add to your Cargo.toml:

[dependencies]
psy-rust-sdk = { path = "path/to/psy_sdk/psy-rust-sdk" }

Or include the underlying components:

[dependencies]
psy_provider = { path = "path/to/psy_provider" }
psy_data = { path = "path/to/psy_core/psy_data" }
psy_common = { path = "path/to/psy_core/psy_common" }
psy_config = { path = "path/to/psy_config" }

Core Components

Re-exports

The psy-rust-sdk crate re-exports essential components:

use psy_rust_sdk::{
    psy_common,
    psy_config::network_constants,
    psy_crypto,
    provider::{RpcProvider, ProveProxyRpcProvider},
    request,
    session,
    wallet,
};

Configuration

Create a config.json file with network endpoints:

{
  "networks": {
    "localhost": {
      "coordinator_configs": [
        {"id": 0, "rpc_url": ["http://127.0.0.1:8545"]}
      ],
      "realm_configs": [
        {"id": 0, "rpc_url": ["http://127.0.0.1:8546"]},
        {"id": 1, "rpc_url": ["http://127.0.0.1:8547"]}
      ],
      "prove_proxy_url": ["http://127.0.0.1:9999"],
      "fees": {
        "guta_fee": 5000000000
      }
    }
  }
}

RpcProvider

The RpcProvider is the core component for programmatic interaction with the Psy network:

use psy_provider::provider::RpcProvider;

// Create from config file
let rpc_provider = RpcProvider::new_with_config_path("config.json")?;

// Or create from network config
let rpc_provider = RpcProvider::new_with_config(&network_config)?;

// Set user context
rpc_provider.set_user_id(user_id);

// Register user
let register_request = QRegisterUserRPCRequest { /* ... */ };
let user_uuid = rpc_provider.register_user(register_request).await?;

// Deploy contract
let deploy_request = QDeployContractRPCRequest { /* ... */ };
let contract_uuid = rpc_provider.deploy_contract(deploy_request).await?;

// Submit end cap (contract call result)
let end_cap_request = QSubmitEndCapRPCRequest { /* ... */ };
let end_cap_uuid = rpc_provider.submit_end_cap_proof(end_cap_request).await?;

// Get block state
let block_state = rpc_provider.get_realm_latest_block_state().await?;

ProveProxyRpcProvider

For proof generation operations:

use psy_provider::provider::ProveProxyRpcProvider;

let prove_provider = ProveProxyRpcProvider::new_with_config(proof_proxy_url).await?;

// Register contract circuits
prove_provider.register_contract_circuits(contract_id, &contract_code).await?;

// Prove contract call
let proof = prove_provider.prove_contract_call(contract_id, fn_id, &input).await?;

// Prove UPS operations
let ups_proof = prove_provider.prove_ups_start(&ups_input).await?;

Data Types

Core data structures in psy_data:

use psy_rust_sdk::psy_data::qdata::{
    checkpoint::PsyCheckpointLeaf,
    contract::{PsyContractLeaf, ContractCodeDefinition},
    user::PsyUserLeaf,
    user_public_key::PsyUserPublicKeyRecord,
};

PsyUserLeaf

User account state in the merkle tree:

  • public_key: User's public key hash
  • user_state_tree_root: Root of user's state tree
  • balance: User's token balance
  • nonce: Transaction sequence number
  • user_id: Unique user identifier

PsyContractLeaf

Contract state in the merkle tree:

  • deployer: Contract deployer's public key hash
  • function_tree_root: Root of contract function tree
  • state_tree_height: Height of contract state tree

ContractCodeDefinition

Contract bytecode and metadata:

  • state_tree_height: Contract state tree configuration
  • functions: Array of contract function definitions

PsyCheckpointLeaf

Checkpoint state in the merkle tree:

  • global_chain_root: Root hash of the global chain state
  • stats: Checkpoint statistics and metadata

PsyUserPublicKeyRecord

Public key information for users:

  • Maps user IDs to their public key data

TypeScript SDK

The Psy TypeScript SDK provides JavaScript/TypeScript interfaces for interacting with contracts and the Psy network from web applications and Node.js environments.

Installation

Follow the Node Installation guide to install the required components and build the TypeScript SDK.

Setup Contract SDK

Generate TypeScript bindings for your contracts:

  1. Place your contract.abi.json in psy_sdk/psy-ts-sdk/packages/contract-sdk/abi/
  2. Run the generator:
cd psy_sdk/psy-ts-sdk/packages/contract-sdk
pnpm install
pnpm generate

The generator creates TypeScript bindings based on your contract ABI, which are then used to create typed contract instances.

Core Components

RpcProvider

The RpcProvider handles network communication with Psy nodes:

import { RpcProvider } from "@psy/psy-sdk/rpc-provider/provider.js";

// Initialize provider
const rpcProvider = new RpcProvider(
  coordinator_configs,
  realm_configs,
  users_per_realm
);

// Singleton pattern
let rpcProvider: null | RpcProvider = null;

export function getRpcProvider() {
  if (!rpcProvider) {
    rpcProvider = new RpcProvider(
      rpcConfig.coordinator_configs,
      rpcConfig.realm_configs,
      rpcConfig.users_per_realm
    );
  }
  return rpcProvider;
}

Configuration

Create configuration for network endpoints:

const rpcConfig = {
  coordinator_configs: [
    { id: 0, rpc_url: ["http://127.0.0.1:8545"] }
  ],
  realm_configs: [
    { id: 0, rpc_url: ["http://127.0.0.1:8546"] },
    { id: 1, rpc_url: ["http://127.0.0.1:8547"] }
  ],
  users_per_realm: 1000000
};
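The users_per_realm value determines how user IDs are partitioned across realms. A minimal sketch of the routing this implies, assuming simple range partitioning (the SDK's own mapping lives behind realmEdgeRpcProvider.getRpcProviderByUserId; the function name here is illustrative):

```typescript
// Hypothetical range-partition routing: realm i owns user IDs
// [i * usersPerRealm, (i + 1) * usersPerRealm).
function realmIdForUser(userId: number, usersPerRealm: number): number {
  return Math.floor(userId / usersPerRealm);
}

// With users_per_realm = 1_000_000 and realms 0 and 1:
realmIdForUser(999_999, 1_000_000);   // → 0
realmIdForUser(1_000_000, 1_000_000); // → 1
```

Keeping users_per_realm identical across all clients matters: two clients with different values would route the same user to different realms.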

Memory Wallet Provider

Create a provider for on-chain data and transactions:

import { createMemoryWalletProvider } from "@psy/psy-sdk";

const walletProvider = createMemoryWalletProvider(privateKey);

Contract Interaction

Basic Usage

import { getRpcProvider } from "./rpcProvider";
// Import generated contract class based on your ABI
import { YourContractSDK } from "./generated/contract-bindings";

// Initialize contract with generated bindings
const contract = new YourContractSDK({
  rpcProvider: getRpcProvider(),
  walletProvider,
  contractId,
  checkpointId,
  userId
});

// Call contract method (typed based on ABI)
const result = await contract.methodName(param1, param2);

Contract Object Management

The contract object automatically handles:

  • checkpointId association
  • userId mapping
  • contractId tracking

User Operations

User Registration

import { 
  PsyUserWalletProvider,
  SignType,
  createMemoryWalletProvider 
} from "@psy/psy-sdk";

// Create wallet provider
const provider = new PsyUserWalletProvider(networkConfig);

// Register user with private key
async function registerUser(privateKeyHex: string, signType: SignType) {
  try {
    await provider.signerProvider.registerUser(privateKeyHex, signType);
    console.log("User registered successfully");
  } catch (error) {
    console.error("Error registering user:", error);
  }
}

// Get user ID from public key
async function getUserId(publicKeyHex: string): Promise<number> {
  const userId = await provider.coordinatorEdgeRpcProvider.getUserId(publicKeyHex);
  return userId;
}

Wallet Management

import { 
  PsyUserWallet,
  PsyUserWalletProvider,
  SignType 
} from "@psy/psy-sdk";

// Create wallet from private key
async function createWallet(
  provider: PsyUserWalletProvider,
  privateKeyHex: string,
  signType: SignType
) {
  // Import private key and get signer
  const signer = await provider.signerProvider.importPrivateKey!(
    privateKeyHex,
    signType,
    ""
  );

  // Get public key
  const publicKeyHex = await signer.getPublicKeyHex();
  
  // Get user ID
  const userId = await provider.coordinatorEdgeRpcProvider.getUserId(publicKeyHex);
  
  // Create wallet instance
  const wallet = new PsyUserWallet(
    provider.networkId,
    signer,
    provider.coordinatorEdgeRpcProvider,
    provider.realmEdgeRpcProvider.getRpcProviderByUserId(userId),
    userId,
    publicKeyHex,
    true
  );

  return wallet;
}

Data Fetching Operations

import { 
  Felt, 
  PsyUserWalletProvider,
  IRealmEdgeRpcProvider 
} from "@psy/psy-sdk";

// Fetch latest checkpoint/block number
async function fetchBlockNumber(
  walletProvider: PsyUserWalletProvider
): Promise<number> {
  const checkpointResponse = 
    await walletProvider.coordinatorEdgeRpcProvider.getLatestCheckpoint();
  
  return checkpointResponse ? Number(checkpointResponse.checkpoint_id) : 0;
}

// Fetch user balance from contract state
async function fetchUserBalance(
  walletProvider: PsyUserWalletProvider,
  checkpointId: Felt,
  userId: Felt,
  userContractId: Felt
): Promise<number> {
  const merkleProof = await walletProvider.realmEdgeRpcProvider
    .getRpcProviderByUserId(userId)
    .getUserContractStateTreeMerkleProof(
      checkpointId,
      userId,
      userContractId,
      32,
      0
    );

  if (merkleProof && merkleProof.value.length === 64) {
    return parseInt(merkleProof.value.substring(48, 64), 16);
  }
  
  return 0;
}

// Get user ID from public key
async function fetchUserId(
  walletProvider: PsyUserWalletProvider,
  publicKeyHex: string
): Promise<number> {
  return await walletProvider.coordinatorEdgeRpcProvider.getUserId(publicKeyHex);
}
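The balance parsing in fetchUserBalance above assumes the leaf value is a 64-character hex string whose last 16 hex characters (8 bytes) encode the balance. That decoding can be isolated and tested on its own:

```typescript
// Decode a balance from a 64-char hex leaf value. The last 16 hex
// characters (8 bytes) are assumed to hold the balance, matching the
// substring(48, 64) call in fetchUserBalance above.
function parseBalanceFromLeaf(value: string): number {
  if (value.length !== 64) return 0;
  return parseInt(value.substring(48, 64), 16);
}

const leaf = "0".repeat(48) + "00000000000003e8";
parseBalanceFromLeaf(leaf); // → 1000
```

Note that parseInt on 8 bytes is only safe while balances stay below Number.MAX_SAFE_INTEGER; larger values would need BigInt.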

Transaction Operations

import { ContractCallArgs, Felt, PsyJSON } from "@psy/psy-sdk";

// Execute contract call using wallet
async function execContractCall(
  wallet: PsyUserWallet,
  address: string,
  args: ContractCallArgs | ContractCallArgs[]
) {
  try {
    const result = await wallet.execContractCall(address, args);
    return result;
  } catch (error) {
    console.error("Contract call failed:", error);
    throw error;
  }
}

// Transfer tokens example
async function transferTokens(
  wallet: PsyUserWallet,
  walletAddress: string,
  recipient: Felt,
  amount: Felt
) {
  const contractCallArgs: ContractCallArgs = {
    contract_id: "token-contract-id",
    method_name: "transfer",
    inputs: [recipient, amount]
  };
  
  return await execContractCall(wallet, walletAddress, contractCallArgs);
}

// Claim rewards from another user
async function claimTokens(
  wallet: PsyUserWallet,
  walletAddress: string,
  senderUserId: Felt
) {
  const contractCallArgs: ContractCallArgs = {
    contract_id: "token-contract-id",
    method_name: "simple_claim",
    inputs: [senderUserId]
  };
  
  return await execContractCall(wallet, walletAddress, contractCallArgs);
}

Examples

Complete Application Setup

import { 
  PsyUserWalletProvider,
  SignType,
  createMemoryWalletProvider 
} from "@psy/psy-sdk";

// 1. Setup configuration
const networkConfig = {
  coordinator_configs: [
    { id: 0, rpc_url: ["http://127.0.0.1:8545"] }
  ],
  realm_configs: [
    { id: 0, rpc_url: ["http://127.0.0.1:8546"] }
  ],
  users_per_realm: 1000000
};

// 2. Initialize wallet provider
const provider = new PsyUserWalletProvider(networkConfig);

// 3. Register and create wallet
const privateKey = "your-private-key";
await provider.signerProvider.registerUser(privateKey, SignType.SECP256K1);

const wallet = await createWallet(provider, privateKey, SignType.SECP256K1);

// 4. Get user info
const userInfo = await wallet.getUserInfo();
console.log("User info:", userInfo);

// 5. Execute transactions
await transferTokens(wallet, walletAddress, recipientId, amount);

Demo Examples

Run the provided demo examples:

cd psy_sdk/psy-ts-sdk/packages/contract-sdk/demo

pnpm install
pnpm example:basic

Important Notes

  1. Ensure config.json is correctly configured with network details
  2. The contract object handles checkpointId, userId, and contractId association automatically
  3. Use createMemoryWalletProvider for wallet operations
  4. Generated contract bindings provide type-safe interfaces for your contracts

SDKeys Overview

Software-Defined Keys (SDKeys) is Psy's innovative signature system that allows users to define custom cryptographic circuits as their signing scheme. Unlike traditional fixed signature schemes, SDKeys enables programmable cryptography where users can implement their own zero-knowledge proof circuits for authentication.

Key Concepts

Software-Defined Signatures

In traditional blockchain systems, signature schemes are hardcoded (e.g., ECDSA, EdDSA). Psy introduces Software-Defined Keys where users can:

  • Define Custom Circuits: Write zero-knowledge circuits that serve as signature schemes
  • Programmable Authentication: Create complex authentication logic using ZK circuits
  • Flexible Security Models: Choose between different trade-offs of security, performance, and complexity

How SDKeys Work

  1. Circuit Definition: Define a ZK circuit that implements custom authorization logic
  2. Proof Generation: Generate zero-knowledge proofs based on the circuit constraints
  3. Verification: The network verifies proofs against the circuit's public parameters
  4. Authentication: Valid proofs authorize transactions without requiring traditional private keys
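The shape of steps 3-4 can be illustrated with a plain hash commitment. This toy is not zero-knowledge — it reveals the witness, which is exactly what a real SDKeys proof avoids — but it shows the commit/verify relationship between public parameters and the secret:

```typescript
import { createHash } from "crypto";

const sha256 = (s: string) => createHash("sha256").update(s).digest("hex");

// "Registration": the public parameter commits to the secret.
const secret = "my-witness";
const publicParams = sha256(secret);

// "Verification": a real verifier checks a ZK proof against
// publicParams; this toy simply recomputes the commitment from the
// (revealed) witness.
function verifies(witness: string, params: string): boolean {
  return sha256(witness) === params;
}

verifies(secret, publicParams);          // → true
verifies("wrong-witness", publicParams); // → false
```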

Revolutionary Features

Beyond Private Keys: SDKeys enable a new paradigm of public, constraint-based accounts:

  • Anyone Can Deploy: Any developer can write a circuit and deploy it as a new "user" on the network
  • Public Execution: Once deployed, anyone can call these circuit-based accounts
  • Constraint-Based Authorization: The circuit defines what calls are allowed, not who can make them
  • Programmable Logic: Complex business logic encoded directly in the authorization layer

Key Innovation:

  • From Identity to Logic: Authentication shifts from "who you are" (private key) to "what you can prove" (circuit satisfaction)
  • Public Smart Accounts: Create accounts that anyone can use but with built-in constraints on how they operate
  • Decentralized Services: Deploy autonomous services that operate according to mathematical rules rather than trust

Benefits

  • Programmable Authorization: Define complex authorization logic in circuits
  • Trustless Automation: Bots and agents without trusted operators
  • Quantum Resistance: ZK-based authentication designed to resist quantum attacks
  • Privacy Enhancement: Hide authorization logic while proving compliance
  • Flexible Security Models: Choose between private keys, public constraints, or hybrid approaches

Built-in Signature Schemes

Psy provides two built-in signature schemes for immediate use, plus support for custom circuits:

  • ZK Key: Optimized zero-knowledge signature scheme (recommended)
  • SECP256K1: ECDSA-compatible scheme for legacy integration
  • Custom Circuits: User-defined authorization logic for advanced use cases

For detailed technical specifications and performance comparisons, see Signature Schemes.

Use Cases

Traditional Use Cases (With Private Keys)

  • Personal Wallets: Fast ZK-based transaction signing
  • Enhanced Privacy: Hide transaction patterns and amounts
  • Multi-Factor Auth: Combine multiple secrets or conditions

Revolutionary Use Cases (Public Circuit Deployment)

Deployable Public Services

Precise Parameter Control: SDKeys allow you to constrain not just which contracts can be called, but the exact parameters:

// Example: Public Liquidation Service
#[software_defined_signature]
pub fn liquidation_bot_auth(
    contract_id: Felt,
    inputs: &[Felt],
    position_health: PositionHealthProof,
) -> bool {
    // Only allow calling liquidation contract
    if contract_id != LENDING_CONTRACT_ID {
        return false;
    }
    
    // Verify position is actually unhealthy
    if !verify_position_health_proof(position_health) {
        return false;
    }
    
    // Constrain liquidation parameters
    let liquidation_amount = inputs[1];
    let collateral_ratio = inputs[2];
    
    liquidation_amount <= MAX_LIQUIDATION_AMOUNT &&
    collateral_ratio >= MIN_COLLATERAL_RATIO &&
    position_health.account_id == inputs[0]  // ensure correct account
}

// Example: Public Treasury Management  
#[software_defined_signature]
pub fn treasury_bot_auth(
    contract_id: Felt,
    method_name: &str,
    inputs: &[Felt],
    allocation_data: AllocationProof,
) -> bool {
    if contract_id != TREASURY_CONTRACT_ID {
        return false;
    }
    
    match method_name {
        "stake" => {
            inputs[0] <= MAX_STAKE_PER_TX &&  // limit stake amount
            is_approved_validator(inputs[1])  // only approved validators  
        },
        "rebalance" => {
            // Only allow rebalancing when allocation drifts > 5%
            let drift = abs_diff(allocation_data.current, allocation_data.target);
            drift > 0.05 && inputs[0] == allocation_data.optimal_allocation
        },
        _ => false
    }
}

Real-World Applications:

  • DeFi Protocol Automation: Anyone can trigger protocol operations (liquidations, rebalancing) but only when mathematically justified
  • DAO Treasury Management: Public execution of treasury decisions with built-in governance constraints
  • Cross-Chain Bridge Operations: Automated bridge operations with safety constraints

Example: Public Trading Bot

Using Psy Language (DPN Software Defined):

// Define a software-defined signature circuit in Psy language
#[software_defined_signature]
pub fn trading_bot_auth(
    contract_id: Felt,
    method_name: &str, 
    inputs: &[Felt],
    market_data: MarketData,
) -> bool {
    // Constrain which contract can be called
    if contract_id != TRADING_CONTRACT_ID {
        return false;
    }
    
    // Constrain which methods are allowed
    match method_name {
        "buy" => {
            // Constrain buy order parameters
            inputs[0] < MAX_BUY_AMOUNT &&    // amount constraint
            inputs[1] > MIN_PRICE &&         // minimum price  
            market_data.balance > inputs[0]  // sufficient balance
        },
        "sell" => {
            // Constrain sell order parameters
            inputs[0] < market_data.holdings &&  // can't sell more than owned
            inputs[1] < MAX_SLIPPAGE             // slippage protection
        },
        _ => false  // reject other methods
    }
}
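Because the circuit body is ordinary predicate logic, it can be sanity-checked off-chain before registering it. A TypeScript mirror of the trading-bot constraints above (the constants and the MarketData shape are placeholders, not SDK exports):

```typescript
// Placeholder constants mirroring the circuit above.
const TRADING_CONTRACT_ID = 7;
const MAX_BUY_AMOUNT = 10_000;
const MIN_PRICE = 1;
const MAX_SLIPPAGE = 500;

interface MarketData { balance: number; holdings: number; }

function tradingBotAuth(
  contractId: number,
  methodName: string,
  inputs: number[],
  market: MarketData
): boolean {
  if (contractId !== TRADING_CONTRACT_ID) return false;
  switch (methodName) {
    case "buy":
      return inputs[0] < MAX_BUY_AMOUNT &&   // amount constraint
             inputs[1] > MIN_PRICE &&        // minimum price
             market.balance > inputs[0];     // sufficient balance
    case "sell":
      return inputs[0] < market.holdings &&  // can't sell more than owned
             inputs[1] < MAX_SLIPPAGE;       // slippage protection
    default:
      return false; // reject other methods, as in the circuit
  }
}

const market = { balance: 5_000, holdings: 200 };
tradingBotAuth(TRADING_CONTRACT_ID, "buy", [1_000, 95], market); // → true
tradingBotAuth(TRADING_CONTRACT_ID, "burn", [1], market);        // → false
```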

Using Plonky2 Circuits (Low-level):

// Register circuit using Plonky2 directly
let fingerprint = circuit_manager
    .register_plonky2_software_defined_circuit(32, 4)
    .await?;

// Deploy DPNSoftwareDefinedCallData
let call_data = DPNSoftwareDefinedCallData {
    contract_id: TRADING_CONTRACT_ID,
    inputs: vec![amount, price, slippage_limit, market_condition],
};

After deployment, anyone can trigger trades:

# Anyone can call this software-defined account
psy_user_cli call \
  --software-defined-call \
  --contract-id 0 \
  --inputs "[1000, 95000, 500, 1]" \
  --fingerprint <trading_bot_fingerprint>

AI Agents and Autonomous Systems

  • Portfolio Rebalancing: Deploy rebalancing logic that anyone can trigger when portfolios drift from targets
  • Yield Optimization: Create yield farming bots with constraints ensuring optimal allocation
  • Risk Management: Deploy automatic position sizing based on volatility metrics

Conditional and Time-Based Operations

  • Scheduled Payments: Transactions authorized only at specific times
  • Oracle-Based Logic: Authorization based on external data feeds
  • State-Dependent Actions: Transactions conditional on blockchain state

Collaborative Systems

  • Shared Treasuries: Group fund management with programmable spending rules
  • DAO Automation: Governance decisions executed automatically when conditions are met
  • Multi-Party Computation: Complex operations requiring multiple participants

Getting Started

  1. Choose a Signature Scheme: Select between built-in options
  2. Create a Wallet: Generate keys using your chosen scheme
  3. Register User: Register your public key with the network
  4. Sign Transactions: Use your keys to authenticate operations

Security Considerations

  • Key Storage: Securely store private keys and circuit parameters
  • Circuit Auditing: Ensure custom circuits are properly audited
  • Backup Strategy: Maintain secure backups of key material
  • Upgrade Planning: Plan for signature scheme upgrades when needed

Built-in Signature Schemes

Psy provides two built-in signature schemes that serve different use cases and performance requirements.

ZK Key Signature

The ZK signature scheme is Psy's optimized zero-knowledge signature system.

Characteristics

  • Signature Type: zk
  • Proof Generation Time: 2-5 seconds
  • Circuit Optimization: Highly optimized for fast proving
  • Security Model: Zero-knowledge proof of secret knowledge
  • Quantum Resistance: Designed with post-quantum considerations

When to Use ZK Keys

Recommended for:

  • General-purpose transaction signing
  • High-frequency trading applications
  • Real-time user interactions
  • Mobile and web applications requiring responsive UX
  • Applications prioritizing performance

Technical Details

QED ZK Signature Scheme:

public_key_params = hash(private_key, private_key_constants)
fingerprint = hash(verifier_data)
public_key = hash(public_key_params, fingerprint)
sig_action_hash = hash(data, network_magic, nonce)

circuit = private_inputs.private_key.get_public_key() == public_inputs_preimage[0..4]
public_inputs = hash(public_key_params, sig_action_hash)
private_inputs = private_key
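The key-derivation chain above is a sequence of hash compositions; it can be sketched with a stand-in hash (the real scheme uses the circuit's native field hash, and all inputs below are placeholder strings, not field elements):

```typescript
import { createHash } from "crypto";

// Stand-in for the circuit's hash; the real scheme hashes field elements.
const hash = (...parts: string[]) =>
  createHash("sha256").update(parts.join("|")).digest("hex");

// Placeholder inputs.
const privateKey = "pk-secret";
const privateKeyConstants = "pk-constants";
const verifierData = "verifier-data";
const data = "tx-data", networkMagic = "psy-mainnet", nonce = "42";

// Derivation chain from the scheme above.
const publicKeyParams = hash(privateKey, privateKeyConstants);
const fingerprint = hash(verifierData);
const publicKey = hash(publicKeyParams, fingerprint);
const sigActionHash = hash(data, networkMagic, nonce);
const publicInputs = hash(publicKeyParams, sigActionHash);
```

The important property is that public_key binds both the key material (public_key_params) and the circuit (fingerprint): changing the signing circuit changes the public key.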

Key Features:

  • Custom Signature Logic: Supports transaction introspection and custom constraints
  • ECDSA Compatibility: Can integrate with existing ECDSA infrastructure
  • Quantum Resistance: Designed to be resistant to quantum computing attacks
  • Optimized Circuit: Minimal constraint count for fast proving (~2-5 seconds)

SECP256K1 Signature

The SECP256K1 scheme provides compatibility with existing elliptic curve tooling through zero-knowledge proofs.

Characteristics

  • Signature Type: secp256k1
  • Proof Generation Time: 10-20 seconds
  • Circuit Complexity: Higher constraint count
  • Security Model: Elliptic curve discrete logarithm
  • Compatibility: Works with existing ECDSA tooling

Technical Details

QED Software Defined Signature (SECP256K1):

public_key_hash = hash(secp256k1_public_key)
public_key_params = public_key_hash
fingerprint = hash(verifier_data)
public_key = hash(public_key_params, fingerprint)
sig_action_hash = hash(data, network_magic, nonce)

circuit = {
  hash(private_inputs.secp256k1_public_key) == public_inputs[0..4]
  secp256k1_verify(private_inputs.secp256k1_public_key, secp256k1_signature, sig_action_hash)
}
public_inputs = hash(public_key_params, sig_action_hash)
private_inputs = secp256k1_public_key, secp256k1_signature, sig_action_hash_preimage
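The secp256k1_verify step inside the circuit is the same ECDSA check that runs outside ZK. For reference, the classical (non-ZK) version of that check using Node's built-in crypto module:

```typescript
import { generateKeyPairSync, createSign, createVerify } from "crypto";

// Generate a secp256k1 key pair — outside the circuit this is the
// user's ordinary ECDSA key.
const { privateKey, publicKey } = generateKeyPairSync("ec", {
  namedCurve: "secp256k1",
});

const message = "sig_action_hash preimage";

// Sign with the private key...
const signature = createSign("SHA256").update(message).sign(privateKey);

// ...and verify against the public key. This is the relation the ZK
// circuit proves in-circuit, without revealing key or signature.
const ok = createVerify("SHA256").update(message).verify(publicKey, signature);
// ok === true
```

The extra cost of SECP256K1 signing in Psy comes from expressing this elliptic-curve verification as ZK constraints, not from the ECDSA operation itself.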

Circuit Implementation:

  • ECDSA Verification: Implements SECP256K1 signature verification in ZK circuit
  • Public Key Validation: Proves knowledge of private key without revealing it
  • Higher Complexity: More constraints result in longer proving times
  • Backward Compatibility: Enables migration from traditional ECDSA systems

When to Use SECP256K1

⚠️ Use only when:

  • Migrating from existing ECDSA-based systems
  • Requiring compatibility with external tools
  • Working with legacy applications
  • Development and testing scenarios

Performance Impact

The longer proof generation time of SECP256K1 makes it less suitable for:

  • Interactive applications requiring quick responses
  • High-throughput systems
  • Mobile applications with limited computational resources
  • Real-time trading platforms

Comparison Table

Feature            | ZK Key            | SECP256K1
-------------------|-------------------|--------------------
Proof Time         | 2-5 seconds       | 10-20 seconds
Performance        | ⭐⭐⭐⭐⭐        | ⭐⭐
Security           | High (ZK-based)   | Standard (EC-based)
Circuit Size       | Optimized         | Complex
Quantum Resistance | Better prepared   | Vulnerable
Tool Compatibility | Psy native        | ECDSA compatible
Use Case           | 🎯 Primary        | 🔧 Compatibility

Choosing the Right Scheme

Default Choice: ZK Key

For most applications, the ZK key (zk) is the recommended choice:

# Fast and efficient for most use cases
psy_user_cli register-user --private-key <key> --sign-type zk

When SECP256K1 Might Be Needed

# Only for specific compatibility requirements
psy_user_cli register-user --private-key <key> --sign-type secp256k1

Consider SECP256K1 only if you have specific requirements for:

  • Integration with existing ECDSA infrastructure
  • Migration scenarios from traditional blockchain systems
  • Development environments requiring ECDSA tooling

Performance Benchmarks

Proof Generation Times

Based on standard hardware configurations:

ZK Key Performance:

  • Consumer laptop: ~2-3 seconds
  • Server hardware: ~1-2 seconds
  • Mobile device: ~4-5 seconds

SECP256K1 Performance:

  • Consumer laptop: ~12-15 seconds
  • Server hardware: ~8-10 seconds
  • Mobile device: ~18-25 seconds

Resource Usage

ZK Key:

  • Memory usage: Moderate
  • CPU utilization: Efficient
  • Battery impact: Low (mobile)

SECP256K1:

  • Memory usage: Higher
  • CPU utilization: Intensive
  • Battery impact: Significant (mobile)

Migration Considerations

Upgrading from SECP256K1 to ZK

If you're currently using SECP256K1 and want to upgrade:

  1. Generate a new ZK key pair
  2. Register the new public key
  3. Update applications to use the new key
  4. Gradually migrate transaction signing

Backward Compatibility

Both signature schemes can coexist in the same application:

  • Different users can use different schemes
  • Applications can support both simultaneously
  • Gradual migration strategies are supported

Best Practices

For New Applications

# Always prefer ZK keys for new implementations
SIGN_TYPE=zk
psy_user_cli wallet create
psy_user_cli register-user --sign-type ${SIGN_TYPE}

For Existing Systems

  1. Evaluate Requirements: Determine if ECDSA compatibility is truly needed
  2. Performance Testing: Measure actual proof generation times in your environment
  3. User Experience: Consider the impact of longer signing times
  4. Migration Planning: Plan for eventual upgrade to ZK keys

Development vs Production

Development:

# Fast iteration with ZK keys
SIGN_TYPE=zk make register-user

Legacy Testing:

# When testing ECDSA compatibility
SIGN_TYPE=secp256k1 make register-user

Future Considerations

Roadmap

  • Custom Circuits: Support for user-defined signature circuits
  • Aggregated Signatures: Batch verification optimizations
  • Hardware Acceleration: GPU and specialized hardware support
  • Mobile Optimization: Further optimizations for mobile devices

Deprecation Timeline

While SECP256K1 support will continue, new features and optimizations will focus on ZK-based schemes. Plan migration to ZK keys for long-term compatibility.

Wallet Management

This guide covers wallet creation, management, and usage with Psy's SDKeys signature schemes.

Creating Wallets

Method 1: Create New Wallet

Generate a completely new wallet with random private key:

# Create a new wallet (interactive)
psy_user_cli wallet create

This command will:

  1. Generate a new private key
  2. Create an encrypted keystore file
  3. Display the wallet information
  4. Save the wallet to .wallets/ directory

Method 2: Generate Random Wallet

Generate a random wallet with specified signature type:

# Generate random wallet with ZK signature (recommended)
psy_user_cli wallet random --sign-type zk

# Generate random wallet with SECP256K1 signature
psy_user_cli wallet random --sign-type secp256k1

Method 3: Import Existing Private Key

If you have an existing private key, you can import it:

# Get wallet info from private key
psy_user_cli wallet info --private-key <your_private_key> --sign-type zk

Wallet Information

View Wallet Details

Display information about a wallet:

# View wallet info using private key
psy_user_cli wallet info --private-key <private_key> --sign-type zk

# View wallet info using keystore
psy_user_cli wallet info --keystore-path .wallets/your_wallet.json

Output includes:

  • Public key
  • Address representation
  • Signature type
  • Key derivation information

User Registration

Before using a wallet for transactions, you must register the user with the Psy network.

# Register user with ZK signature scheme
psy_user_cli register-user --private-key <private_key> --sign-type zk

# Register using keystore file
psy_user_cli register-user --keystore-path .wallets/wallet.json --sign-type zk

Register with SECP256K1 Signature

# Register user with SECP256K1 signature scheme
psy_user_cli register-user --private-key <private_key> --sign-type secp256k1

# Register using keystore file  
psy_user_cli register-user --keystore-path .wallets/wallet.json --sign-type secp256k1

Registration Response

Successful registration returns:

{
  "user_id": 12345,
  "public_key": "0x...",
  "transaction_hash": "0x...",
  "checkpoint_id": 67890
}
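In application code, the registration response shown above can be given a type so downstream code does not guess at field names (the interface below simply mirrors the JSON; it is not an SDK export):

```typescript
// Mirrors the registration JSON above; not an SDK type.
interface RegistrationResponse {
  user_id: number;
  public_key: string;
  transaction_hash: string;
  checkpoint_id: number;
}

const raw = `{
  "user_id": 12345,
  "public_key": "0x...",
  "transaction_hash": "0x...",
  "checkpoint_id": 67890
}`;

const resp: RegistrationResponse = JSON.parse(raw);
resp.user_id; // → 12345
```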

Signing Transactions

Contract Calls

Execute contract methods using your wallet:

# Call contract method with private key
psy_user_cli call \
  --private-key <private_key> \
  --contract-id <contract_id> \
  --method-name <method_name> \
  --inputs "[param1, param2, ...]" \
  --sign-type zk

# Call contract method with keystore
psy_user_cli call \
  --keystore-path .wallets/wallet.json \
  --contract-id <contract_id> \
  --method-name <method_name> \
  --inputs "[param1, param2, ...]" \
  --sign-type zk

Example: Token Operations

# Mint tokens
psy_user_cli call \
  --keystore-path .wallets/treasury.json \
  --contract-id 0 \
  --method-name simple_mint \
  --inputs "[1000000000000]" \
  --sign-type zk

# Transfer tokens
psy_user_cli call \
  --private-key <sender_private_key> \
  --contract-id 0 \
  --method-name simple_transfer \
  --inputs "[<recipient_user_id>, 250000000000]" \
  --sign-type zk

# Claim tokens from another user
psy_user_cli call \
  --private-key <recipient_private_key> \
  --contract-id 0 \
  --method-name simple_claim \
  --inputs "[<sender_user_id>]" \
  --sign-type zk

Keystore Management

Keystore File Format

Psy uses encrypted keystore files for secure key storage:

# Keystore files are stored in .wallets/ directory
.wallets/
├── miner0.json
├── miner1.json
├── treasury.json
└── user_wallet.json

Creating Keystore from Private Key

# Create wallet and save as keystore
psy_user_cli wallet create

# This automatically creates an encrypted keystore file
# Password protection is applied during creation

Using Keystore Files

# Register user using keystore
psy_user_cli register-user \
  --keystore-path .wallets/miner0.json \
  --sign-type zk

# Execute transactions using keystore
psy_user_cli call \
  --keystore-path .wallets/miner0.json \
  --contract-id 0 \
  --method-name simple_mint \
  --inputs "[1000]" \
  --sign-type zk

Multi-User Scenarios

Multiple Wallets for Testing

Create and register multiple users for testing:

# Create multiple test users with different signature types
psy_user_cli register-user --private-key 17c975c2668ebe0ca7c87f67c6414ebb7fd664f46370a0af2a3b204c8824ac5a --sign-type zk
sleep 0.5
psy_user_cli register-user --private-key f07f91a0bdc0df4ec763285ba0eb578cb6e7a0811c3150494ab54e56f761fc1d --sign-type zk  
sleep 0.5
psy_user_cli register-user --private-key 73ae514d6f69510ad778a05128d980951d9d8c097beb022471b2f50f19c41268 --sign-type zk

Cross-User Transactions

# User 0 transfers to User 1
psy_user_cli call \
  --private-key 17c975c2668ebe0ca7c87f67c6414ebb7fd664f46370a0af2a3b204c8824ac5a \
  --contract-id 0 \
  --method-name batch_simple_transfer \
  --inputs "[1, 0, 0, 0, 0, 250000000000, 0, 0, 0, 0]" \
  --sign-type zk

# User 1 claims the transfer
psy_user_cli call \
  --private-key f07f91a0bdc0df4ec763285ba0eb578cb6e7a0811c3150494ab54e56f761fc1d \
  --contract-id 0 \
  --method-name simple_claim \
  --inputs "[0]" \
  --sign-type zk

Mining Wallets

Create Mining Wallets

For mining operations, create dedicated wallets:

# Create mining wallets
psy_user_cli wallet create  # Creates .wallets/miner0.json
psy_user_cli wallet create  # Creates .wallets/miner1.json

# Register mining wallets
psy_user_cli register-user --keystore-path .wallets/miner0.json --sign-type zk
psy_user_cli register-user --keystore-path .wallets/miner1.json --sign-type zk

Use Mining Wallets

# Start mining with keystore
psy_node_cli worker \
  --config ./config.json \
  --keystore-path .wallets/miner0.json \
  --recipient 3145728

# Claim mining rewards
psy_user_cli claim-rewards \
  --keystore-path .wallets/miner0.json \
  --sign-type zk \
  --limit 10000

Security Best Practices

Key Storage

  1. Backup Keystore Files: Keep secure copies of .wallets/ directory
  2. Strong Passwords: Use strong passwords for keystore encryption
  3. Access Control: Limit file system access to keystore files
  4. Hardware Security: Consider hardware wallets for high-value operations

Private Key Handling

# Use environment variables for sensitive operations
export PRIVATE_KEY="your_private_key_here"
psy_user_cli register-user --private-key $PRIVATE_KEY --sign-type zk

# Clear environment variables after use
unset PRIVATE_KEY

Production Considerations

  1. Key Rotation: Plan for periodic key rotation
  2. Multi-Signature: Implement multi-signature schemes for critical operations
  3. Monitoring: Monitor wallet activity and unusual transactions
  4. Backup Strategy: Maintain secure, distributed backups

Troubleshooting

Common Issues

Keystore file not found:

# Verify keystore path
ls -la .wallets/
# Ensure file exists and has correct permissions

Registration fails:

# Check network connectivity
# Verify private key format
# Ensure signature type matches

Long proof generation times:

# Switch to ZK signature type for better performance
psy_user_cli register-user --private-key <key> --sign-type zk

Performance Optimization

  1. Use ZK signatures for optimal performance
  2. Hardware considerations: Ensure adequate CPU and memory
  3. Network latency: Use reliable network connections
  4. Batch operations: Group multiple transactions when possible

Advanced Usage

Custom Circuit Integration

Future versions will support custom signature circuits:

# Placeholder for future custom circuit support
psy_user_cli register-user \
  --circuit-path ./my_custom_circuit.json \
  --private-key <private_key> \
  --sign-type custom

Integration with Hardware Wallets

Planning for hardware wallet integration:

# Future hardware wallet support
psy_user_cli register-user \
  --hardware-wallet ledger \
  --derivation-path "m/44'/60'/0'/0/0" \
  --sign-type zk

Advanced Software-Defined Signatures

Beyond the built-in ZK and SECP256K1 signature schemes, Psy enables truly programmable authentication through custom zero-knowledge circuits. This allows developers to implement sophisticated signing logic with transaction introspection capabilities.

Extended Software-Defined Keys

Concept

Software-defined signatures enable authentication schemes that go beyond simple cryptographic signatures. Users can define custom circuits that implement complex authorization logic.

Basic Extended Signature

Using Psy Language:

#[software_defined_signature]
pub fn basic_constraint_auth(
    secret: Felt,
    contract_id: Felt,
    method_id: Felt,
    inputs: &[Felt],
) -> bool {
    // Verify knowledge of secret (any unique identifier)
    let computed_identifier = hash(secret);
    let secret_valid = computed_identifier == EXPECTED_IDENTIFIER;
    
    // Constrain contract and method
    let contract_valid = contract_id == 0;
    let method_valid = method_id == 0;
    let input_valid = inputs[0] < 500;
    
    secret_valid && contract_valid && method_valid && input_valid
}

Using DPNSoftwareDefinedCallData:

// Deploy the circuit
let call_data = DPNSoftwareDefinedCallData {
    contract_id: 0,
    inputs: vec![amount, param1, param2],  // amount must be < 500
};

Key Features:

  • Flexible Authentication: Authentication not tied to traditional private keys
  • Transaction Constraints: Circuits can constrain transaction parameters
  • Public Authorization: Can implement publicly verifiable authorization logic
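Because the constraint logic is a plain predicate, it can be mirrored off-chain to check which call data it would authorize before deployment. A TypeScript mirror of basic_constraint_auth (the hash and the EXPECTED_IDENTIFIER constant are stand-ins):

```typescript
import { createHash } from "crypto";

const hash = (s: string) => createHash("sha256").update(s).digest("hex");

// Stand-in for the circuit's EXPECTED_IDENTIFIER constant.
const SECRET = "circuit-secret";
const EXPECTED_IDENTIFIER = hash(SECRET);

function basicConstraintAuth(
  secret: string,
  contractId: number,
  methodId: number,
  inputs: number[]
): boolean {
  // Verify knowledge of the secret via its hash commitment.
  const secretValid = hash(secret) === EXPECTED_IDENTIFIER;
  // Constrain contract, method, and input range, as in the circuit.
  const contractValid = contractId === 0;
  const methodValid = methodId === 0;
  const inputValid = inputs[0] < 500;
  return secretValid && contractValid && methodValid && inputValid;
}

basicConstraintAuth(SECRET, 0, 0, [499]); // → true  (authorized)
basicConstraintAuth(SECRET, 0, 0, [500]); // → false (amount too large)
```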

Use Case: Matching Bot

Automated Market Making

A sophisticated use case is implementing an automated order matching system:

Using Psy Language for Order Matching:

#[software_defined_signature]
pub fn order_matching_auth(
    contract_id: Felt,
    method_name: &str,
    inputs: &[Felt],
    buy_orders_proof: MerkleProof,
    sell_orders_proof: MerkleProof,
    checkpoint_data: CheckpointData,
) -> bool {
    // Verify this is the expected protocol
    let protocol_valid = hash("QEDProtocol") == EXPECTED_PROTOCOL_HASH;
    assert(protocol_valid);
    
    // Only allow order matching contract
    let contract_valid = contract_id == ORDER_BOOK_CONTRACT_ID;
    assert(contract_valid);
    
    // Only allow match_orders method
    let method_valid = method_name == "match_orders";
    assert(method_valid);
    
    // Verify merkle proofs against checkpoint
    let buy_proof_valid = verify_merkle_proof(&checkpoint_data.tree_root, &buy_orders_proof);
    let sell_proof_valid = verify_merkle_proof(&checkpoint_data.tree_root, &sell_orders_proof);
    assert(buy_proof_valid && sell_proof_valid);
    
    // Extract order data from proofs
    let buy_orders = extract_orders_from_proof(&buy_orders_proof);
    let sell_orders = extract_orders_from_proof(&sell_orders_proof);
    
    // Implement optimal matching logic
    let (best_buy_index, best_sell_index) = find_optimal_match(&buy_orders, &sell_orders);
    
    // Verify the inputs match the optimal selection
    inputs[0] == best_buy_index && inputs[1] == best_sell_index
}

// Helper functions that would be available in the circuit
fn find_optimal_match(buy_orders: &[Order], sell_orders: &[Order]) -> (Felt, Felt) {
    // Implement matching algorithm ensuring best price execution
    let mut best_spread = Felt::MAX;
    let mut best_pair = (0, 0);
    
    for (i, buy_order) in buy_orders.iter().enumerate() {
        for (j, sell_order) in sell_orders.iter().enumerate() {
            if buy_order.price >= sell_order.price {
                let spread = buy_order.price - sell_order.price;
                if spread < best_spread {
                    best_spread = spread;
                    best_pair = (i as Felt, j as Felt);
                }
            }
        }
    }
    
    best_pair
}

Deployment and Usage:

// Register the matching bot circuit
let fingerprint = circuit_manager
    .register_dpn_software_defined_circuit(
        order_matching_auth_bytecode,
        32,  // contract_state_tree_height
    )
    .await?;

// Anyone can now trigger optimal order matching
let call_data = DPNSoftwareDefinedCallData {
    contract_id: ORDER_BOOK_CONTRACT_ID,
    inputs: vec![buy_order_index, sell_order_index],
};

Capabilities:

  • State Access: Circuits can read and verify on-chain state
  • Algorithmic Trading: Implement complex matching algorithms in ZK
  • Trustless Automation: No need to trust the bot operator
  • Optimal Execution: Prove optimal order matching on-chain

Advanced Applications

1. Permissionless Bots

#[software_defined_signature]
pub fn permissionless_bot_auth(
    contract_id: Felt,
    method_name: &str,
    inputs: &[Felt],
) -> bool {
    // No secret required - anyone can execute
    // But execution is constrained by logic
    
    // Verify bot identifier
    let bot_id = hash("PermissionlessBot");
    let bot_valid = bot_id == EXPECTED_BOT_IDENTIFIER;
    assert(bot_valid);
    
    // Only allow specific method
    let method_valid = method_name == "allowed_method";
    assert(method_valid);
    
    // Minimum threshold constraint
    inputs[0] > MIN_THRESHOLD
}

Use Cases:

  • Liquidation Bots: Anyone can liquidate, but only valid liquidations
  • Arbitrage Bots: Permissionless arbitrage with guaranteed profitability
  • Rebalancing: Portfolio rebalancing with constraints

2. Conditional Execution

#[software_defined_signature]
pub fn conditional_spending_auth(
    contract_id: Felt,
    inputs: &[Felt],
    balance_proof: MerkleProof,
    checkpoint_data: CheckpointData,
) -> bool {
    // Verify user balance from merkle proof
    let balance_proof_valid = verify_merkle_proof(&checkpoint_data.tree_root, &balance_proof);
    assert(balance_proof_valid);
    
    let user_balance = extract_balance_from_proof(&balance_proof);
    
    // Only allow transaction if balance > threshold
    let balance_sufficient = user_balance > 1000;
    assert(balance_sufficient);
    
    // Constrain transaction amount to max 50% of balance
    let transaction_amount = inputs[0];
    transaction_amount <= user_balance / 2
}
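
The two spending constraints (balance threshold, 50% cap) reduce to a small predicate that can be unit-tested in plain Rust before circuit compilation. `spending_allowed` is a hypothetical helper using the thresholds from the example above.

```rust
/// Mirror of the conditional-spending constraints: the proven balance
/// must exceed 1000, and the requested amount may not exceed half of it.
pub fn spending_allowed(user_balance: u64, transaction_amount: u64) -> bool {
    let balance_sufficient = user_balance > 1000;
    // Integer division mirrors the circuit's `user_balance / 2`.
    balance_sufficient && transaction_amount <= user_balance / 2
}
```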

Use Cases:

  • Spending Limits: Enforce spending limits in the signature
  • Time Locks: Implement time-based restrictions
  • Multi-Factor: Require multiple secrets or conditions

3. Cross-Chain Verification

#[software_defined_signature]
pub fn cross_chain_collateral_auth(
    contract_id: Felt,
    inputs: &[Felt],
    ethereum_proof: EthereumStateProof,
    bridge_data: BridgeVerificationData,
) -> bool {
    // Verify state from Ethereum blockchain
    let eth_proof_valid = verify_ethereum_state_proof(&ethereum_proof, &bridge_data);
    assert(eth_proof_valid);
    
    // Extract user's Ethereum balance
    let ethereum_balance = extract_balance_from_eth_proof(&ethereum_proof);
    let required_collateral = MINIMUM_COLLATERAL_RATIO;
    
    // Constrain based on external state
    let collateral_sufficient = ethereum_balance > required_collateral;
    assert(collateral_sufficient);
    
    // Allow transaction only if collateral is sufficient
    let transaction_amount = inputs[0];
    transaction_amount <= ethereum_balance
}

Ultra-Programmable Signatures

Nested Circuit Architecture

The ultimate evolution enables arbitrary computation within signatures:

QED Software Defined Signature (Ultra):

public_key_params = hash(QEDProtocol)
fingerprint = hash(verifier_data)
public_key = hash(public_key_params, fingerprint)
sig_action_hash = hash(data, network_magic, nonce)

circuit = {
  let inner_circuit = compile(verify_balance_gt_zero)
  verify_inner_circuit(private_inputs, public_inputs)
}
public_inputs = hash(public_key_params, sig_action_hash)  
private_inputs = sig_action_hash_preimage
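
The key-derivation chain above can be made concrete with a toy model. Rust's `DefaultHasher` stands in for the protocol's ZK-friendly hash here; this is illustration of the data flow only, not the real derivation.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Stand-in hash over byte-string parts; the protocol uses a
// circuit-friendly hash, not DefaultHasher.
fn h(parts: &[&[u8]]) -> u64 {
    let mut hasher = DefaultHasher::new();
    for p in parts {
        p.hash(&mut hasher);
    }
    hasher.finish()
}

/// public_key = hash(public_key_params, fingerprint), where
/// public_key_params = hash("QEDProtocol") and the fingerprint commits
/// to the verifier data of the user's signature circuit.
pub fn derive_public_key(verifier_data: &[u8]) -> u64 {
    let public_key_params = h(&[b"QEDProtocol".as_slice()]);
    let fingerprint = h(&[verifier_data]);
    let pp = public_key_params.to_le_bytes();
    let fp = fingerprint.to_le_bytes();
    h(&[pp.as_slice(), fp.as_slice()])
}
```

Two accounts using the same circuit derive the same public key; changing the circuit changes the fingerprint and therefore the key.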

High-Level Programming

Users can write signature logic in high-level languages:

// Rust function compiled to ZK circuit
fn verify_balance_gt_zero(user_leaf: UserLeaf) -> bool {
    user_leaf.balance > 0
}

// Complex authorization logic
fn advanced_authorization(
    user_leaf: UserLeaf,
    contract_call: ContractCall,
    checkpoint_data: CheckpointData
) -> bool {
    // Implement sophisticated authorization logic
    user_leaf.balance > contract_call.amount &&
    checkpoint_data.timestamp > user_leaf.last_transaction + cooldown_period &&
    contract_call.method_id.is_whitelisted()
}
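
With simple stand-in structs for `UserLeaf`, `ContractCall`, and `CheckpointData` (all fields and the cooldown constant here are hypothetical), the authorization predicate can be unit-tested directly:

```rust
pub struct UserLeaf { pub balance: u64, pub last_transaction: u64 }
pub struct ContractCall { pub amount: u64, pub method_whitelisted: bool }
pub struct CheckpointData { pub timestamp: u64 }

// Hypothetical cooldown, in checkpoint ticks.
pub const COOLDOWN_PERIOD: u64 = 60;

/// Plain-Rust version of the predicate: sufficient balance, elapsed
/// cooldown, and a whitelisted method are all required.
pub fn advanced_authorization(
    leaf: &UserLeaf,
    call: &ContractCall,
    checkpoint: &CheckpointData,
) -> bool {
    leaf.balance > call.amount
        && checkpoint.timestamp > leaf.last_transaction + COOLDOWN_PERIOD
        && call.method_whitelisted
}
```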

Capabilities:

  • Rust Integration: Write circuits in Rust using macros
  • VM Integration: Embed ZKVM for arbitrary computation
  • Nested Proofs: Compose multiple proof systems
  • Code Reuse: Share and reuse authorization components

Implementation Patterns

1. Account Types

EOA (Externally Owned Account):

  • Uses traditional private key signatures
  • Simple authentication model
  • Direct user control

SDA (Software Defined Account):

  • Uses programmable circuit-based signatures
  • Complex authorization logic
  • Automated or conditional execution

2. Circuit Management

Off-Chain Storage:

# Circuits are managed off-chain by users
.signatures/
├── matching_bot.circuit
├── conditional_spending.circuit
└── multi_factor.circuit

Fingerprint Mapping:

// Map fingerprints to circuit definitions
let circuit = load_circuit_by_fingerprint(user_public_key.fingerprint)?;
let proof = generate_signature_proof(circuit, private_inputs)?;

3. Security Model

Public Key Composition:

  • public_key_params: Circuit-specific identifier
  • fingerprint: Hash of circuit verifier data
  • public_key: Combined hash for on-chain storage

Privacy Guarantees:

  • Circuit logic remains private (off-chain)
  • Only fingerprint is public (on-chain)
  • Zero-knowledge proofs don't reveal circuit internals

Development Workflow

1. Circuit Development

// Write authorization logic
#[signature_circuit]
fn my_authorization_logic(
    secret: PrivateInput<F>,
    call_data: CallData,
    checkpoint: CheckpointData,
) -> PublicOutput<F> {
    // Implement custom authorization
}

2. Deployment

# Compile circuit
psy_compiler compile-signature-circuit my_auth.rs

# Register circuit fingerprint  
psy_user_cli register-user --circuit-fingerprint <fingerprint>

3. Usage

# Sign transaction with custom circuit
psy_user_cli call \
  --signature-circuit my_auth.circuit \
  --private-inputs secret.json \
  --contract-id 0 \
  --method-name transfer

Future Directions

1. Circuit Marketplace

  • Shared Circuits: Community-developed authorization patterns
  • Audited Components: Verified and audited circuit modules
  • Composable Logic: Mix and match authorization components

2. Hardware Integration

  • Hardware Wallets: Support for complex circuits in hardware
  • Secure Enclaves: Trusted execution for sensitive authorization logic
  • Mobile Optimization: Efficient circuits for mobile devices

3. Cross-Chain Integration

  • Bridge Verification: Verify state from other blockchains
  • Multi-Chain Auth: Authorization spanning multiple networks
  • Interoperability: Standard interfaces for cross-chain circuits

Software-defined signatures represent a fundamental shift from fixed cryptographic primitives to programmable authentication systems, enabling new classes of applications and user experiences.

Miner Setup

This guide explains how to set up and operate a miner (worker) in the Psy network to generate ZK proofs and earn rewards.

Overview

Miners in the Psy network are responsible for generating zero-knowledge proofs for user transactions and network operations. They receive proof generation jobs from the network and get rewarded for successful proof submissions.

Prerequisites

  1. Complete node installation as described in Node Installation
  2. Have a running Psy network with coordinator and realm nodes
  3. Create a wallet for mining operations

Installation

Follow the installation process described in Node Installation to install the required CLI tools.

Miner Setup

Step 1: Create Mining Wallet

Create a new wallet for mining operations:

# Create a new wallet
psy_user_cli wallet create

This creates a keystore file with:

  • ETH address
  • Public key hash
  • Private key
  • Encrypted wallet file

Step 2: Register as Miner

Register your wallet's public key with the network:

# Get your public key information
psy_user_cli get-public-key \
    --private-key=YOUR_PRIVATE_KEY \
    --sign-type=secp256k1

# Register with the network
psy_user_cli register-user \
    --private-key=YOUR_PRIVATE_KEY \
    --sign-type=secp256k1

Step 3: Configure Network Settings

Create a config.json file with network endpoints:

{
  "networks": {
    "localhost": {
      "coordinator_configs": [
        {"id": 0, "rpc_url": ["http://127.0.0.1:8545"]}
      ],
      "realm_configs": [
        {"id": 0, "rpc_url": ["http://127.0.0.1:8546"]},
        {"id": 1, "rpc_url": ["http://127.0.0.1:8547"]}
      ],
      "prove_proxy_url": ["http://127.0.0.1:9999"],
      "fees": {
        "guta_fee": 5000000000
      }
    }
  }
}

Note: The public testnet (regnet-coordinator.psy-protocol.xyz, regnet-realm0.psy-protocol.xyz) is currently suspended during the v2 implementation and will reopen once v2 is complete. For now, please use the localhost development setup.

Whitelist Feature: The current whitelist system is a temporary security measure and will be removed in future versions to allow permissionless participation.

Mining Operations

Start Mining

Launch your miner to begin processing proof generation jobs:

# Start mining with keystore
psy_node_cli worker \
  --config ./config.json \
  --keystore-path .wallets/miner0.json \
  --recipient 3145728

# Or start with private key directly
psy_node_cli worker \
  --config ./config.json \
  --private-key YOUR_PRIVATE_KEY

Mining Process

  1. Job Discovery: Miner polls network nodes for available proof generation jobs
  2. Job Assignment: Edge nodes assign jobs to miners based on priority and availability
  3. Proof Generation: Miner generates ZK proofs for assigned transactions
  4. Proof Submission: Completed proofs are submitted back to the network
  5. Reward Tracking: Successful proofs are recorded for reward calculation
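
The five steps can be sketched as a single polling round. Everything below is a simplified model (the real worker is started via `psy_node_cli worker`); job assignment by priority is modeled as picking the maximum-priority entry from a local queue.

```rust
/// Hypothetical job descriptor as handed out by an edge node.
pub struct Job { pub id: u64, pub priority: u8 }

/// One polling round of a worker: claim the highest-priority job,
/// "prove" it, and report its id as the submitted proof. Returns None
/// when no jobs are available.
pub fn process_round(queue: &mut Vec<Job>) -> Option<u64> {
    // Job assignment: edge nodes order jobs by priority; pick the max.
    let idx = queue
        .iter()
        .enumerate()
        .max_by_key(|(_, j)| j.priority)
        .map(|(i, _)| i)?;
    let job = queue.swap_remove(idx);
    // Proof generation + submission are collapsed into returning the id.
    Some(job.id)
}
```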

Monitor Mining Activity

Check miner logs and status:

# Monitor logs (if running with output redirection)
tail -f miner.log

# Check job processing status via RPC
curl -X POST http://127.0.0.1:8545 \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","method":"psy_latest_checkpoint","params":[],"id":1}'

Reward Management

Claim Rewards

Claim accumulated mining rewards:

# Claim rewards using keystore
psy_user_cli claim-rewards \
  --keystore-path .wallets/miner0.json \
  --sign-type secp256k1 \
  --limit 10000

# Or using private key directly
psy_user_cli claim-rewards \
  --private-key YOUR_PRIVATE_KEY \
  --sign-type secp256k1 \
  --limit 10000

Parameters:

  • --keystore-path: Path to your encrypted wallet file
  • --sign-type: Signature type (secp256k1 for miners)
  • --limit: Maximum number of rewards to claim in one transaction

Performance Optimization

Hardware Requirements

  • CPU: 8+ cores recommended for optimal proof generation
  • Memory: 16GB+ RAM for handling multiple concurrent jobs
  • Storage: Fast SSD for temporary proof data
  • Network: Stable internet connection for job coordination

Troubleshooting

Miner not receiving jobs: Ensure your wallet is registered and whitelisted with the network.

Proof generation failures: Check system resources and ensure sufficient RAM/CPU availability.

Reward claiming fails: Verify your keystore file is valid and you have accumulated claimable rewards.

Network connection issues: Check config.json endpoints and network connectivity.

Production Considerations

  • Backup: Regularly backup your keystore files
  • Monitoring: Set up automated monitoring for miner health
  • Scaling: Run multiple miners with different wallets for increased throughput
  • Security: Keep private keys secure and use hardware security modules for large operations

Mining Configuration

Configuration options and optimization settings for Psy network miners.

Basic Configuration

Network Configuration

Miners require a config.json file specifying network endpoints:

{
  "networks": {
    "localhost": {
      "users_per_realm": 1048576,
      "global_user_tree_height": 24,
      "realm_user_tree_height": 20,
      "group_realm_height": 2,
      "coordinator_configs": [
        {"id": 0, "rpc_url": ["http://127.0.0.1:8545"]}
      ],
      "realm_configs": [
        {"id": 0, "rpc_url": ["http://127.0.0.1:8546"]},
        {"id": 1, "rpc_url": ["http://127.0.0.1:8547"]}
      ],
      "prove_proxy_url": ["http://127.0.0.1:9999"],
      "fees": {
        "guta_fee": 5000000000
      }
    }
  },
  "defaultNetwork": "localhost"
}

Wallet Configuration

Miners require a keystore file for identity and reward collection:

# Create new wallet
psy_user_cli wallet create --output miner_wallet

# Or use existing private key
psy_node_cli worker \
  --config ./config.json \
  --private-key 0x1234567890abcdef... \
  --recipient 3145728

Network-Specific Configuration

Local Development

{
  "networks": {
    "localhost": {
      "coordinator_configs": [{"id": 0, "rpc_url": ["http://127.0.0.1:8545"]}],
      "realm_configs": [
        {"id": 0, "rpc_url": ["http://127.0.0.1:8546"]},
        {"id": 1, "rpc_url": ["http://127.0.0.1:8547"]}
      ],
      "prove_proxy_url": ["http://127.0.0.1:9999"]
    }
  }
}

Testnet Configuration

{
  "networks": {
    "testnet": {
      "users_per_realm": 1048576,
      "global_user_tree_height": 24,
      "realm_user_tree_height": 20,
      "group_realm_height": 1,
      "coordinator_configs": [
        {"id": 0, "rpc_url": ["https://regnet-coordinator.psy-protocol.xyz"]}
      ],
      "realm_configs": [
        {"id": 0, "rpc_url": ["https://regnet-realm0.psy-protocol.xyz"]},
        {"id": 1, "rpc_url": ["https://regnet-realm1.psy-protocol.xyz"]}
      ],
      "prove_proxy_url": ["https://regnet-prover.psy-protocol.xyz"],
      "native_currency": "tPSY",
      "fees": {
        "guta_fee": 5000000000
      }
    }
  },
  "defaultNetwork": "testnet"
}

Advanced Configuration

Load Balancing

Configure multiple endpoints for fault tolerance:

{
  "coordinator_configs": [
    {
      "id": 0, 
      "rpc_url": [
        "https://coordinator1.psy-protocol.xyz",
        "https://coordinator2.psy-protocol.xyz",
        "https://coordinator3.psy-protocol.xyz"
      ]
    }
  ],
  "realm_configs": [
    {
      "id": 0,
      "rpc_url": [
        "https://realm0-1.psy-protocol.xyz",
        "https://realm0-2.psy-protocol.xyz"
      ]
    }
  ]
}
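
The multiple `rpc_url` entries form an ordered fallback list. A minimal client-side sketch of that behavior, where `probe` stands in for a real RPC health check:

```rust
/// Try each configured endpoint in order and return the first one that
/// answers. `probe` stands in for an actual RPC health check.
pub fn first_healthy<'a, F>(rpc_urls: &'a [&'a str], probe: F) -> Option<&'a str>
where
    F: Fn(&str) -> bool,
{
    rpc_urls.iter().copied().find(|url| probe(url))
}
```

A client would fall through to the second and third URLs only when earlier ones fail the probe.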

Monitoring Configuration

Logging Setup

# Structured logging
export RUST_LOG=psy_node=info,psy_prover=debug,psy_worker=trace
export PSY_LOG_FORMAT=json
export PSY_LOG_FILE=./logs/miner.log

Metrics Configuration

# Prometheus metrics
export PSY_METRICS_ENABLED=true
export PSY_METRICS_PORT=9090
export PSY_METRICS_PATH=/metrics

# Performance tracking
export PSY_TRACK_PROOF_TIMES=true
export PSY_TRACK_MEMORY_USAGE=true
export PSY_TRACK_JOB_SUCCESS_RATE=true

Docker Configuration

# docker-compose.yml for miners
version: '3.8'
services:
  psy-miner:
    image: psy-miner:latest
    environment:
      - RUST_LOG=info
    volumes:
      - ./config.json:/app/config.json
      - ./wallets:/app/wallets
    deploy:
      resources:
        limits:
          memory: 16G
          cpus: '8'
        reservations:
          memory: 8G
          cpus: '4'

Configuration Validation

Validate Configuration

# Test network connectivity
psy_user_cli get-latest-block-state

# Test wallet access
psy_user_cli wallet info --keystore-path ./miner_wallet.json

Common Configuration Issues

Invalid endpoints: Ensure URLs are accessible and use correct protocols (http/https).

Keystore permissions: Verify wallet files have appropriate read permissions.

Network mismatch: Ensure all endpoints belong to the same network environment.

Resource constraints: Monitor system resources to prevent memory/CPU exhaustion.

Best Practices

  1. Separate configs per environment: Use different config files for development, testnet, and mainnet
  2. Regular backups: Backup keystore files and configuration regularly
  3. Monitor performance: Track proof generation times and success rates
  4. Update regularly: Keep configuration in sync with network updates
  5. Security first: Use encrypted keystores and secure network connections

Mining Performance Optimization

Guidelines for optimizing mining performance in the Psy network.

Hardware Requirements

CPU Recommendations

For optimal performance, choose CPUs with:

  • High core count: 8+ cores preferred
  • AVX-512 instruction support: Provides best performance for proof generation
  • High clock speeds: 3.5+ GHz base frequency

Popular choices:

  • Intel processors with AVX-512 support
  • AMD processors with high core counts

Check CPU features:

# Verify AVX support
lscpu | grep -E "(avx|sse)"

# Check available cores
nproc

Memory Requirements

  • Minimum: 8GB RAM
  • Recommended: 16GB+ RAM
  • High-performance setups: 32GB+ RAM

Storage

  • Fast SSD recommended for optimal I/O performance
  • Minimum 100GB free space

Current Limitations

GPU Acceleration: Currently under development. GPU support will be available in future releases to significantly improve proof generation performance.

Performance Optimization: The miner binary handles most performance optimizations automatically. Focus on providing adequate hardware resources.

Running Multiple Miners

You can run multiple miner instances with different wallets:

# Start multiple miners
psy_node_cli worker --config config.json --keystore-path miner1.json &
psy_node_cli worker --config config.json --keystore-path miner2.json &
psy_node_cli worker --config config.json --keystore-path miner3.json &

Monitoring Performance

Monitor your miner's resource usage:

# CPU and memory usage
htop -p $(pgrep -d',' psy_node_cli)

# Check logs for performance information
tail -f miner.log

Performance Tips

  1. Use dedicated hardware for mining operations
  2. Ensure stable network connectivity to mining endpoints
  3. Monitor system resources to avoid bottlenecks
  4. Keep the system updated for latest optimizations
  5. Use AVX-512 capable CPUs when available

Node Architecture

The Psy network consists of multiple node types that work together to maintain the blockchain state and process user transactions through zero-knowledge proofs.

Core Components

Coordinator

The coordinator maintains the upper-level blockchain state and serves as the central coordination point for the network.

Responsibilities:

  • Maintains contract tree for all deployed contracts
  • Manages upper portion of user tree (level 0)
  • Stores user registration tree
  • Maintains user information including ZKPublicKeyInfo (public key parameters and fingerprint)
  • Stores contract bytecode and circuit fingerprints/signatures for each contract function
  • Assigns user IDs and contract IDs as tree leaf indices

Tree Management:

  • Supports up to 2^32 registered users
  • Supports up to 2^32 deployed contracts
  • Coordinator manages 12 layers of user tree height
  • Aggregates realm-level proofs into level 0 user tree root modifications

Realm

Realms handle user transactions and manage the lower portion of the user tree along with contract-specific user data.

Responsibilities:

  • Accepts and processes user transactions
  • Stores lower portion of user tree (up to 20 layers height)
  • Manages user data for specific contracts within the realm
  • Generates aggregated ZK proofs and GUTA (Generalized User Transaction Aggregation) for realm root modifications

Capacity:

  • Each realm can handle up to 2^20 users (when using 20-layer height)
  • Tree height is configurable based on network requirements
  • Each realm supports tens of thousands of TPS (transactions per second)
  • Network target: 1 million TPS achievable with dozens of realms
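
The network target is simple arithmation over realm count. Assuming a hypothetical 50,000 TPS per realm (the text only promises "tens of thousands"), the realm count needed is:

```rust
/// Number of realms needed to reach a network TPS target, given a
/// per-realm throughput (integer ceiling division).
pub fn realms_needed(target_tps: u64, per_realm_tps: u64) -> u64 {
    (target_tps + per_realm_tps - 1) / per_realm_tps
}
```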

Node Architecture

Multiple Node Consensus

Both coordinator and realm operations are distributed across multiple nodes:

  • Multiple Coordinator Nodes: Consensus mechanism determines the primary coordinator
  • Multiple Realm Nodes: Each realm has multiple nodes with consensus for primary selection
  • One Coordinator, Multiple Realms: Network topology supports horizontal scaling

Edge and Processor Components

Each coordinator and realm consists of two main components:

Edge Nodes

  • Receive and validate user transactions
  • Verify uploaded state deltas through ZK proof validation
  • Manage task priority ordering for proof generation
  • Handle external communication and API endpoints

Processor Nodes

  • Execute user-submitted state deltas
  • Process contract deployment requests
  • Handle user registration requests
  • Generate witness data and job graphs for proof workers
  • Coordinate with workers for ZK proof generation

Inter-Component Communication

V1 Architecture:

  • Edge and processor communication through shared Redis storage

V2 Architecture:

  • Communication via NATS JetStream for improved reliability and performance

Proof Generation Architecture

Local Proving Model

Psy implements a local proving architecture where:

  • Users perform VM execution locally
  • Blockchain only stores ZK-verified state deltas
  • No on-chain VM execution or gas metering
  • Edge nodes validate correctness through ZK proof verification

Worker-Based Proof Generation

Workflow:

  1. Processor generates witness data and job graphs from state deltas
  2. Workers claim tasks based on priority ordering (managed by edge nodes)
  3. Workers generate ZK proofs for assigned jobs
  4. Completed proofs are submitted back to processors

Proof Aggregation

Realm Level:

  • Multiple user transactions affecting contract state generate individual proofs
  • Realm aggregates all proofs into a single ZK proof + GUTA
  • Represents realm's modifications to its portion of user tree root

Coordinator Level:

  • Receives aggregated proofs from all realms
  • Generates final ZK proof for level 0 user tree root modifications
  • Maintains global state consistency

Additional Services

Prover Proxy

The prover proxy assists users with local proof generation:

Current Role:

  • Helps optimize proof generation for resource-constrained users
  • Provides computational assistance for complex proofs
  • Bridges the gap between user devices and proof requirements

Future Considerations:

  • May become optional as ZK systems optimize and hardware improves
  • Designed to scale down as local proving capabilities increase

Watcher Service

The watcher service retrieves and processes blockchain data:

Responsibilities:

  • Index and retrieve blockchain data
  • Process transaction and state data from the chain
  • Send processed data to API services
  • Monitor and extract relevant blockchain events

API Services

API services provide data interfaces for external applications:

Responsibilities:

  • Receive processed data from watcher service
  • Provide HTTP/JSON-RPC endpoints for block explorers
  • Serve blockchain data to upper-layer applications
  • Handle queries for transaction history, block data, and state information

Storage Backend

Supported Storage Systems

Current Support:

  • ScyllaDB: High-performance distributed database
  • LMDBX: Memory-mapped key-value store
  • TiKV: Distributed transactional key-value database

Primary Choice:

  • ScyllaDB is the preferred storage backend due to its exceptional write performance
  • Optimized for the high-throughput requirements of blockchain state updates

Storage Architecture

Data Distribution:

  • Coordinator nodes store global state trees and contract metadata
  • Realm nodes store user-specific data and transaction history
  • Prover proxy may cache frequently accessed proving data

Network Topology

┌─────────────────┐
│   Coordinator   │ ← Global state, contracts, user registration
│  (Multi-node)   │
└─────────┬───────┘
          │
    ┌─────┴─────┐
    │           │
┌───▼───┐   ┌───▼───┐
│Realm 1│   │Realm N│ ← User transactions, local state
│       │   │       │
└───────┘   └───────┘

Each node contains:
┌─────────┐   ┌─────────────┐
│  Edge   │◄─►│ Processor   │
│         │   │             │
└─────────┘   └─────────────┘
     │               │
     │         ┌─────▼─────┐
     │         │  Workers  │
     │         │ (Provers) │
     └─────────┤           │
               └───────────┘

This architecture enables horizontal scaling while maintaining security through zero-knowledge proofs and efficient state management through hierarchical trees.

Installation

This guide walks you through installing and setting up Psy network nodes.

Prerequisites

  • Rust (latest stable version)
  • Git
  • Make
  • Sufficient system resources (recommended: 16GB+ RAM, 8+ CPU cores)

Installation Steps

1. Clone the Repository

git clone https://github.com/PsyProtocol/psy-v1
cd psy-v1

2. Build the Project

make build

This command compiles all necessary components including node binaries and CLI tools.

3. Install CLI Tools

make install

This installs three main CLI tools to your system:

Installed CLI Tools

psy_node_cli

Purpose: Start and manage network nodes

Key Commands:

# Start coordinator
psy_node_cli coordinator-edge
psy_node_cli coordinator-processor

# Start realm
psy_node_cli realm-edge
psy_node_cli realm-processor

# Start supporting services
psy_node_cli worker
psy_node_cli api-services

psy_user_cli

Purpose: User interaction with the network

Key Commands:

# Register user
psy_user_cli register-user

# Deploy contract
psy_user_cli deploy-contract

# Call contract function
psy_user_cli call

psy_dev_cli

Purpose: Development utilities

Key Command:

# Hash utilities
psy_dev_cli qhash

Verification

After installation, verify that all tools are correctly installed:

# Check installations
psy_node_cli --version
psy_user_cli --version
psy_dev_cli --version

# View help for each tool
psy_node_cli --help
psy_user_cli --help
psy_dev_cli --help

Next Steps

Configuration

The Psy network uses JSON-based configuration files to manage network parameters, node settings, and deployment configurations.

Configuration Files

config.json - Network Configuration

The main configuration file defines network-wide parameters and multiple network environments.

Multi-Network Configuration

The configuration file supports multiple networks with a default network setting:

{
  "networks": {
    "localhost": {
      "magic": "0x1337CF514544CF69",
      "users_per_realm": 1048576,
      // ... other localhost config
    },
    "testnet": {
      "magic": "0x2337CF514544CF69", 
      "users_per_realm": 1048576,
      // ... other testnet config
    },
    "mainnet": {
      "magic": "0x3337CF514544CF69",
      "users_per_realm": 1048576,
      // ... other mainnet config
    }
  },
  "defaultNetwork": "localhost"
}

Applications will use the defaultNetwork configuration unless explicitly switched to another network.

Core Network Parameters

Tree Height Configuration:

{
  "global_user_tree_height": 24,     // Total user tree height (supports 2^24 users)
  "realm_user_tree_height": 20,     // Realm-level user tree height
  "group_realm_height": 2,          // Each group has 2^2 = 4 realms
  "users_per_realm": 1048576         // Users per realm (2^20)
}

Tree Height Calculation:

  • Total realms: 2^(24-20) = 2^4 = 16 realms
  • Realms per group: 2^2 = 4 realms
  • Number of groups: 16/4 = 4 groups
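
The derived quantities follow directly from the configured heights; a small sanity check (illustrative helper, not part of the CLI):

```rust
/// Derive (total_realms, realms_per_group, groups) from the tree-height
/// configuration: global_user_tree_height, realm_user_tree_height, and
/// group_realm_height.
pub fn topology(global: u32, realm: u32, group: u32) -> (u64, u64, u64) {
    let total_realms = 1u64 << (global - realm);   // 2^(24-20) = 16
    let realms_per_group = 1u64 << group;          // 2^2 = 4
    let groups = total_realms / realms_per_group;  // 16 / 4 = 4
    (total_realms, realms_per_group, groups)
}
```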

Fee Structure:

{
  "fees": {
    "guta_fee": 5000000000           // GUTA processing fee
  }
}

Currency Configuration:

{
  "native_currency": "PSY",           // Currency symbol
  "native_currency_decimal": 9,       // Decimal places
  "native_currency_name": "PSY Token" // Full currency name
}

Node Endpoints

Coordinator Configuration:

{
  "coordinator_configs": [
    {
      "id": 0,
      "rpc_url": ["http://127.0.0.1:8545"]
    }
  ]
}

Realm Configuration:

{
  "realm_configs": [
    {
      "id": 0,
      "rpc_url": ["http://127.0.0.1:8546"]
    },
    {
      "id": 1, 
      "rpc_url": ["http://127.0.0.1:8547"]
    }
  ]
}

Supporting Services:

{
  "prove_proxy_url": ["http://127.0.0.1:9999"],
  "api_services_url": ["http://127.0.0.1:3000"]
}

Genesis Configuration

Genesis Users: Genesis users are pre-registered users with known public keys:

{
  "genesis": {
    "users": [
      {
        "public_key_param": [/* field elements */],
        "fingerprint": [/* field elements */]
      }
    ]
  }
}

Genesis Contracts: Pre-deployed contracts and their initial state:

{
  "genesis": {
    "precompiles": [
      {
        "name": "system_contract",
        "deployer": [/* hash elements */],
        "bytecode": [/* contract bytecode */]
      }
    ]
  }
}

Security Configuration

Whitelist (Optional):

{
  "whitelist": {
    "enabled": true,
    "secp256k1": [
      "public_key_1",
      "public_key_2"
    ]
  }
}

Network Environments

localhost

  • Purpose: Local development and testing
  • User Registration Fee: 0
  • Contract Deployment Fee: 0
  • Endpoints: Local addresses (127.0.0.1)

testnet

  • Purpose: Public testing environment
  • Endpoints: Public testnet URLs

mainnet

  • Purpose: Production network
  • Endpoints: Production URLs

Key Configuration Parameters

Tree Heights

  • Global User Tree: 24 levels (supports 2^24 = 16.7M users)
  • Realm User Tree: 20 levels (2^20 = 1M users per realm)
  • Total Realms: 2^(24-20) = 16 realms
  • Group Realm Height: 2 levels (2^2 = 4 realms per group)
  • Number of Groups: 16/4 = 4 groups

Performance Tuning

  • GUTA Fee: Controls transaction batching economics
  • Realm Count: Horizontal scaling through multiple realms
  • Worker Instances: Parallel proof generation capacity

Security Settings

  • Magic Number: Network identifier for message signing
  • Whitelist: Optional public key restrictions
  • Fee Structure: Economic security parameters

Getting Started

This guide walks you through starting a complete Psy network locally for development and testing.

Prerequisites

  1. Complete installation as described in Installation
  2. Ensure config.json is properly configured
  3. Have Docker and Docker Compose installed

Quick Start

For convenience, you can use the automated script:

make run-all

This handles initialization and starts all components automatically.

Required Components

A complete Psy network requires these core components:

1. Infrastructure Services

  • Redis: Message queuing between edge and processor
  • Database: ScyllaDB/LMDBX/TiKV for state storage
  • PostgreSQL: For API services and data indexing

2. Coordinator Services

  • Coordinator Processor: Manages global state and contract tree
  • Coordinator Edge: RPC endpoint for coordinator operations

3. Realm Services

  • Realm Processor: Processes user transactions and state
  • Realm Edge: RPC endpoints for user interactions
  • Multiple Realms: Support for horizontal scaling (realm0, realm1, realm2, realm3)

4. Supporting Services

  • Workers: Generate ZK proofs for submitted jobs
  • API Services: Block explorer and data APIs
  • Watcher: Monitors and indexes blockchain data
  • Prove Proxy: Assists users with local proof generation

Manual Service Startup

Step 1: Initialize Infrastructure

Start required databases and create directories:

# Create data directories
mkdir -p ./db/coordinator ./db/realm0 ./db/realm1 ./db/realm2 ./db/realm3

# Start Redis containers
docker-compose -f ./scripts/docker-compose.db.yml up -d

# Initialize PostgreSQL for API services
cd ./psy_services
export DATABASE_URL="postgres://postgres:password@localhost/postgres"
cargo sqlx database create
cargo sqlx migrate run
cd ..

Step 2: Start Coordinator

# Start coordinator processor (manages global state)
RUST_LOG=info psy_node_cli coordinator-processor \
  --database lmdbx \
  --lmdbx-path ./db/coordinator \
  --queue-biz-key coordinator

# Start coordinator edge (RPC interface) 
RUST_LOG=info psy_node_cli coordinator-edge \
  --database lmdbx \
  --lmdbx-path ./db/coordinator \
  --queue-biz-key coordinator

Step 3: Start Realms

# Start realm0 processor
RUST_LOG=info psy_node_cli realm-processor \
  --redis-uri redis://127.0.0.1:6379 \
  --database lmdbx \
  --lmdbx-path ./db/realm0 \
  --queue-biz-key realm0

# Start realm0 edge (port 8546)
RUST_LOG=info psy_node_cli realm-edge \
  --redis-uri redis://127.0.0.1:6379 \
  --database lmdbx \
  --lmdbx-path ./db/realm0 \
  --queue-biz-key realm0

# Start realm1 processor
RUST_LOG=info psy_node_cli realm-processor \
  --redis-uri redis://127.0.0.1:6379 \
  --database lmdbx \
  --lmdbx-path ./db/realm1 \
  --realm-id 1 \
  --queue-biz-key realm1

# Start realm1 edge (port 8547)
RUST_LOG=info psy_node_cli realm-edge \
  --listen-addr 0.0.0.0:8547 \
  --redis-uri redis://127.0.0.1:6379 \
  --database lmdbx \
  --lmdbx-path ./db/realm1 \
  --coordinator-addr http://127.0.0.1:8545 \
  --realm-id 1 \
  --queue-biz-key realm1

Step 4: Start Workers and Services

# Start proof workers
RUST_LOG=info psy_node_cli worker \
  --config ./config.json \
  --keystore-path .wallets/miner0.json \
  --recipient 3145728

RUST_LOG=info psy_node_cli worker \
  --config ./config.json \
  --keystore-path .wallets/miner1.json \
  --recipient 1024

# Start API services
RUST_LOG=info psy_node_cli api-services

# Start watchers
RUST_LOG=info psy_node_cli watcher \
  --node-id 0 \
  --node-type coordinator \
  --redis-uri redis://127.0.0.1:6379 \
  --api-endpoint http://localhost:3000 \
  --database lmdbx \
  --lmdbx-path ./db/coordinator \
  --queue-biz-key coordinator

RUST_LOG=info psy_node_cli watcher \
  --node-id 0 \
  --node-type realm \
  --redis-uri redis://127.0.0.1:6379 \
  --api-endpoint http://localhost:3000 \
  --database lmdbx \
  --lmdbx-path ./db/realm0 \
  --queue-biz-key realm0

# Start prove proxy
RUST_LOG=info psy_user_cli prove-proxy

Configuration Requirements

config.json Setup

Ensure your config.json contains proper endpoint configurations:

{
  "networks": {
    "localhost": {
      "coordinator_configs": [
        {"id": 0, "rpc_url": ["http://127.0.0.1:8545"]}
      ],
      "realm_configs": [
        {"id": 0, "rpc_url": ["http://127.0.0.1:8546"]},
        {"id": 1, "rpc_url": ["http://127.0.0.1:8547"]},
        {"id": 2, "rpc_url": ["http://127.0.0.1:8548"]},
        {"id": 3, "rpc_url": ["http://127.0.0.1:8549"]}
      ],
      "prove_proxy_url": ["http://127.0.0.1:9999"],
      "api_services_url": ["http://127.0.0.1:3000"]
    }
  }
}

Service Dependencies

Services must start in the correct order:

  1. Infrastructure (Redis, databases)
  2. Coordinator (processor, then edge)
  3. Realms (processors, then edges)
  4. Workers (depend on edges for job discovery)
  5. API Services (receive indexed data pushed by watchers)
  6. Watchers (depend on edges for data access; push indexed data to API services)

Verification

Check that services are running:

# Check coordinator
curl -X POST http://127.0.0.1:8545 \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","method":"psy_latest_checkpoint","params":[],"id":1}'

# Check realm0
curl -X POST http://127.0.0.1:8546 \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","method":"psy_latest_checkpoint","params":[],"id":1}'

# Check API services
curl http://127.0.0.1:3000/health

Cleanup

Stop all services and clean up:

# Stop Docker containers
docker-compose -f ./scripts/docker-compose.db.yml down -v

# Remove data directories
rm -rf ./db logs

Next Steps

Once your network is running:

  1. Register users: See User CLI documentation
  2. Deploy contracts: Use psy_user_cli deploy-contract
  3. Submit transactions: Use psy_user_cli call
  4. Monitor activity: Check logs in ./logs/ directory

Storage Options

The default setup uses LMDBX for storage. For other options:

TiKV Setup

Replace --database lmdbx with:

--database tikv \
--tikv-pd-endpoints 127.0.0.1:2379 \
--tikv-namespace coordinator  # or realm0, realm1, etc.

ScyllaDB Setup

Replace --database lmdbx with:

--database scylla \
--scylla-endpoints 127.0.0.1:9042

Troubleshooting

Services won't start: Check that config.json is valid and all required ports are available.

Workers not processing jobs: Ensure workers have valid keystore files in .wallets/ directory.

Database connection errors: Verify Docker containers are running with docker ps.

Future Features

Several components are currently under development:

  • P2P Networking: Peer-to-peer communication between nodes (in development)
  • Consensus Mechanism: Byzantine fault tolerance for production networks (in development)
  • Advanced Storage: Enhanced storage backends and optimization (in development)
  • Cross-chain Bridges: Integration with other blockchain networks (planned)

User CLI Interface

This section documents the three core data-query interfaces implemented by RpcProvider: QTreeDataStoreReaderSync<F>, QMetaDataStoreReaderSync<F>, and PsyComboDataStoreReaderSync<F>. It covers interface functions, method details, parameter descriptions, and return types, focusing on the core data-query capabilities for blockchain scenarios such as users, contracts, and checkpoints.

Basic Information

  • Core Dependent Types:
    • F = GoldilocksField: A prime field type based on Plonky2, used for blockchain data verification and hash calculation.
    • QHashOut<F>: Hash output result type, storing raw data after hash calculation.
    • MerkleProofCore<QHashOut<F>>: Core Merkle proof type, containing information such as the root, value, and sibling nodes required for proof.

Core Structures

RpcProvider

The foundational component for RPC communication, responsible for interacting with Realm nodes (user-specific data) and Coordinator nodes (global public data, e.g., contracts, checkpoints). It runs in both non-WASM environments (e.g., servers) and WASM environments (e.g., browser extensions).

Structure Definition

#![allow(unused)]
fn main() {
#[derive(Debug, Clone)]
pub struct RpcProvider {
    pub client: Arc<Client>,               // HTTP client (for RPC requests)
    pub realm_configs: HashMap<u64, Vec<String>>,  // Realm node config: {Realm ID → List of RPC URLs}
    pub coordinator_configs: HashMap<u64, Vec<String>>,  // Coordinator node config: {Coordinator ID → List of RPC URLs}
    pub users_per_realm: u64,              // Number of users assigned to each Realm (for user-to-Realm routing)
    pub current_user_id: u64,              // ID of the currently active user (for default data requests)
}
}

Key Methods

RpcProvider provides methods for node routing, user/contract/data queries, and transaction submission. Core methods are categorized below:

| Category | Method Name | Parameters | Return Value | Function Description |
|---|---|---|---|---|
| Node Routing | get_realm_id | user_id: u64 | u64 | Calculate the Realm ID for a user (via user_id / users_per_realm). |
| Node Routing | get_realm_url | user_id: u64 | anyhow::Result<&String> | Get a random RPC URL of the Realm node corresponding to the user (for load balancing). |
| Node Routing | get_coordinator_url | None | anyhow::Result<&String> | Get a random RPC URL of the Coordinator node (for global data requests). |
| User Operations | register_user | req: QRegisterUserRPCRequest<F> | anyhow::Result<()> | Submit a user registration request to the Coordinator node. |
| User Operations | get_user_id | public_key: QHashOut<F> | anyhow::Result<u64> | Query the user ID corresponding to a public key from the Coordinator node. |
| Contract Operations | deploy_contract | req: QDeployContractRPCRequest<F> | anyhow::Result<()> | Submit a contract deployment request to the Coordinator node. |
| Data Queries | get_realm_latest_block_state | None | anyhow::Result<PsyBlockState> | Query the latest L2 block state from the current user's Realm node. |
| Data Queries | get_claim_amount | checkpoint_id: u64, user_id: u64, claim_user_id: u64 | anyhow::Result<u64> | Calculate the available claim amount for a user by querying contract state tree leaves. |
| Data Queries | check_tx_is_confirmed | checkpoint_id: u64, user_id: u64, tx_hash: QHashOut<GoldilocksField> | anyhow::Result<bool> | Verify whether a transaction is confirmed by comparing the user leaf hash with the transaction hash. |
| Batch Proof Query | get_job_proofs | job_infos: Vec<JobInfo> | anyhow::Result<Vec<(QProvingJobDataID, VariableHeightRewardMerkleProof)>> | Batch query reward Merkle proofs for multiple jobs (routed to Realm/Coordinator based on job location). |
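The routing rule behind get_realm_id is plain integer division; a minimal standalone sketch (not the actual RpcProvider implementation):

```rust
/// Sketch of user-to-realm routing: realm_id = user_id / users_per_realm.
fn get_realm_id(user_id: u64, users_per_realm: u64) -> u64 {
    user_id / users_per_realm
}

fn main() {
    // With 2^20 users per realm (the Realm User Tree height above):
    let users_per_realm = 1u64 << 20;
    assert_eq!(get_realm_id(0, users_per_realm), 0);            // first realm
    assert_eq!(get_realm_id((1 << 20) + 5, users_per_realm), 1); // spills into realm 1
    assert_eq!(get_realm_id(3 << 20, users_per_realm), 3);       // realm 3
}
```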

Auxiliary Structures

RpcConfig & NetworkConfig

Configuration structures for node networks and proxy services, loaded from external config files:

#![allow(unused)]
fn main() {
#[derive(Clone, Debug, Serialize, Deserialize)]
pub struct RpcConfig {
    pub users_per_realm: u64,                // Number of users per Realm
    pub global_user_tree_height: u8,         // Height of the global user Merkle tree
    pub realm_user_tree_height: u8,          // Height of the Realm-specific user Merkle tree
    pub realm_configs: Vec<RealmRpcConfig>,  // List of Realm node configs
    pub coordinator_configs: Vec<CoordinatorRpcConfig>,  // List of Coordinator node configs
    pub prove_proxy_url: Vec<String>,        // List of proof proxy service URLs
}

#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct NetworkConfig {  // Extended config for blockchain network
    pub users_per_realm: u64,
    pub global_user_tree_height: u8,
    pub realm_user_tree_height: u8,
    pub realm_configs: Vec<RealmConfig>,     // Same as RealmRpcConfig
    pub coordinator_configs: Vec<CoordinatorConfig>,  // Same as CoordinatorRpcConfig
    pub prover_url: Option<String>,          // Optional prover service URL
    pub prove_proxy_url: Vec<String>,
    pub native_currency: String,             // Native currency symbol (e.g., "PSY")
}
}

Merkle Tree Data Query

This interface focuses on querying roots, leaf hashes, and Merkle proofs of various Merkle trees in the blockchain, covering core scenarios such as users, contracts, checkpoints, deposits, and withdrawals.

(1) User Contract State Tree Queries (User-Contract Internal State)

| Method Name | Parameters | Return Value | Function Description | Called Node |
|---|---|---|---|---|
| get_user_contract_state_tree_root | checkpoint_id: u64 (checkpoint identifier), user_id: u64 (user identifier), contract_id: u32 (contract identifier) | QHashOut<F> | Queries the root hash of a user's contract state tree under a specified checkpoint | Realm Node |
| get_user_contract_state_tree_leaf_hash | Same as above, plus height: u8 (height of the state tree), leaf_id: u64 (leaf node identifier) | QHashOut<F> | Queries the hash of a specified leaf node in the state tree | Realm Node |
| get_user_contract_state_tree_merkle_proof | Same parameters as above | MerkleProofCore<QHashOut<F>> | Queries the Merkle proof of a specified leaf node in the state tree (including verification logic) | Realm Node |

(2) User Contract Tree Queries (User-Contract Association)

| Method Name | Parameters | Return Value | Function Description | Called Node |
|---|---|---|---|---|
| get_user_contract_tree_root | checkpoint_id: u64 (checkpoint identifier), user_id: u64 (user identifier) | QHashOut<F> | Queries the root hash of a user's contract tree under a specified checkpoint | Realm Node |
| get_user_contract_tree_leaf_hash | Same as above, plus contract_id: u32 (contract identifier) | QHashOut<F> | Queries the hash of the leaf node corresponding to a contract in the user's contract tree | Realm Node |
| get_user_contract_tree_merkle_proof | Same parameters as above | MerkleProofCore<QHashOut<F>> | Queries the Merkle proof of a contract in the user's contract tree | Realm Node |

(3) User Registration Tree Queries (Global User Registration State)

| Method Name | Parameters | Return Value | Function Description | Called Node |
|---|---|---|---|---|
| get_user_registration_tree_root | checkpoint_id: u64 (checkpoint identifier) | QHashOut<F> | Queries the root hash of the global user registration tree under a specified checkpoint | Coordinator Node |
| get_user_registration_tree_leaf_hash | Same as above, plus leaf_index: u64 (leaf index) | QHashOut<F> | Queries the hash of the leaf node at a specified index in the user registration tree | Coordinator Node |
| get_user_registration_tree_merkle_proof | Same parameters as above | MerkleProofCore<QHashOut<F>> | Queries the Merkle proof of the leaf node at a specified index in the user registration tree | Coordinator Node |

(4) User Tree Queries (Global User State)

| Method Name | Parameters | Return Value | Function Description | Called Node |
|---|---|---|---|---|
| get_user_tree_root | checkpoint_id: u64 (checkpoint identifier) | QHashOut<F> | Queries the root hash of the global user tree under a specified checkpoint | Coordinator Node |
| get_user_tree_leaf_hash | Same as above, plus user_id: u64 (user identifier) | QHashOut<F> | Queries the hash of the leaf node corresponding to a user in the user tree | Realm Node |
| get_user_tree_merkle_proof | Same parameters as above | MerkleProofCore<QHashOut<F>> | Queries the Merkle proof of a user in the user tree (automatically merges subtree proofs) | Coordinator & Realm Node |
| get_user_sub_tree_merkle_proof | checkpoint_id: u64 (checkpoint identifier), root_level: u8 (root level), leaf_level: u8 (leaf level), leaf_index: u64 (leaf index) | MerkleProofCore<QHashOut<F>> | Queries the Merkle proof for a specified level range of the user subtree | Coordinator & Realm Node |

(5) Contract Function Tree and Contract Tree Queries (Global Contract State)

| Method Name | Parameters | Return Value | Function Description | Called Node |
|---|---|---|---|---|
| get_contract_function_tree_root | checkpoint_id: u64 (checkpoint identifier), contract_id: u32 (contract identifier) | QHashOut<F> | Queries the root hash of a contract's function tree under a specified checkpoint | Coordinator Node |
| get_contract_function_tree_leaf_hash | Same as above, plus function_id: u32 (function identifier) | QHashOut<F> | Queries the hash of a function's leaf node in the contract function tree | Coordinator Node |
| get_contract_function_tree_merkle_proof | Same parameters as above | MerkleProofCore<QHashOut<F>> | Queries the Merkle proof of a function in the contract function tree | Coordinator Node |
| get_contract_tree_root | checkpoint_id: u64 (checkpoint identifier) | QHashOut<F> | Queries the root hash of the global contract tree under a specified checkpoint | Coordinator Node |
| get_contract_tree_leaf_hash | Same as above, plus contract_id: u32 (contract identifier) | QHashOut<F> | Queries the hash of a contract's leaf node in the contract tree | Coordinator Node |
| get_contract_tree_merkle_proof | Same parameters as above | MerkleProofCore<QHashOut<F>> | Queries the Merkle proof of a contract in the contract tree | Coordinator Node |

(6) Deposit Tree Queries

| Method Name | Parameters | Return Value | Function Description | Called Node |
|---|---|---|---|---|
| get_deposit_tree_root | checkpoint_id: u64 (checkpoint identifier) | QHashOut<F> | Queries the root hash of the global deposit tree under a specified checkpoint | Coordinator Node |
| get_deposit_tree_leaf_hash | Same as above, plus deposit_id: u32 (deposit identifier) | QHashOut<F> | Queries the hash of a deposit's leaf node in the deposit tree | Coordinator Node |
| get_deposit_tree_merkle_proof | Same parameters as above | MerkleProofCore<QHashOut<F>> | Queries the Merkle proof of a deposit in the deposit tree | Coordinator Node |

(7) Withdrawal Tree Queries

| Method Name | Parameters | Return Value | Function Description | Called Node |
|---|---|---|---|---|
| get_withdrawal_tree_root | checkpoint_id: u64 (checkpoint identifier) | QHashOut<F> | Queries the root hash of the global withdrawal tree under a specified checkpoint | Coordinator Node |
| get_withdrawal_tree_leaf_hash | Same as above, plus withdrawal_id: u32 (withdrawal identifier) | QHashOut<F> | Queries the hash of a withdrawal's leaf node in the withdrawal tree | Coordinator Node |
| get_withdrawal_tree_merkle_proof | Same parameters as above | MerkleProofCore<QHashOut<F>> | Queries the Merkle proof of a withdrawal in the withdrawal tree | Coordinator Node |

(8) Checkpoint Tree Queries

| Method Name | Parameters | Return Value | Function Description | Called Node |
|---|---|---|---|---|
| get_latest_checkpoint_tree_root | No parameters | QHashOut<F> | Queries the root hash of the latest checkpoint tree | Coordinator Node |
| get_checkpoint_tree_root | checkpoint_id: u64 (checkpoint identifier) | QHashOut<F> | Queries the root hash of a specified checkpoint tree | Coordinator Node |
| get_checkpoint_tree_leaf_hash | Same as above, plus leaf_checkpoint_id: u64 (leaf checkpoint ID) | QHashOut<F> | Queries the hash of a leaf node in the checkpoint tree | Coordinator Node |
| get_checkpoint_tree_merkle_proof | Same parameters as above | MerkleProofCore<QHashOut<F>> | Queries the Merkle proof of a leaf node in the checkpoint tree | Coordinator Node |
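A Merkle proof of this kind is checked by folding the leaf hash with each sibling up to the root. The sketch below illustrates the mechanism only: the real system hashes with Poseidon over the Goldilocks field, while this stand-in uses the standard library's DefaultHasher to stay self-contained.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Stand-in for HashTwoToOne (the real system uses Poseidon).
fn hash_two_to_one(left: u64, right: u64) -> u64 {
    let mut h = DefaultHasher::new();
    (left, right).hash(&mut h);
    h.finish()
}

/// Fold a leaf up to the root; bit i of `index` picks the
/// left/right placement of the running node at level i.
fn compute_root(leaf: u64, index: u64, siblings: &[u64]) -> u64 {
    let mut node = leaf;
    for (level, sib) in siblings.iter().enumerate() {
        node = if (index >> level) & 1 == 0 {
            hash_two_to_one(node, *sib)
        } else {
            hash_two_to_one(*sib, node)
        };
    }
    node
}

fn main() {
    // Build a 4-leaf tree and verify the proof for leaf index 2.
    let leaves = [10u64, 20, 30, 40];
    let n01 = hash_two_to_one(leaves[0], leaves[1]);
    let n23 = hash_two_to_one(leaves[2], leaves[3]);
    let root = hash_two_to_one(n01, n23);
    let proof = [leaves[3], n01]; // siblings along the path of leaf 2
    assert_eq!(compute_root(leaves[2], 2, &proof), root);
}
```

A verifier accepts the proof exactly when the recomputed root matches the trusted root returned by the corresponding `*_tree_root` query.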

Metadata Query

This interface focuses on querying core blockchain metadata, including complete leaf data for users, contracts, and checkpoints, as well as L2 block states.

1. User Metadata Queries

| Method Name | Parameters | Return Value | Function Description | Called Node |
|---|---|---|---|---|
| get_user_leaf_data | checkpoint_id: u64 (checkpoint identifier), user_id: u64 (user identifier) | PsyUserLeaf<F> | Queries complete leaf data for a user under a specified checkpoint (including user state, hash, etc.) | Realm Node |

2. Contract Metadata Queries

| Method Name | Parameters | Return Value | Function Description | Called Node |
|---|---|---|---|---|
| get_contract_leaf_data | contract_id: u64 (contract identifier) | PsyContractLeaf<F> | Queries complete leaf data for a contract (including basic contract information, state, etc.) | Coordinator Node |
| get_contract_code_definition | contract_id: u64 (contract identifier) | ContractCodeDefinition | Queries the code definition of a contract (including bytecode, function list, etc.) | Coordinator Node |

3. Checkpoint and L2 Block State Queries

| Method Name | Parameters | Return Value | Function Description | Called Node |
|---|---|---|---|---|
| get_checkpoint_leaf_data | checkpoint_id: u64 (checkpoint identifier) | PsyCheckpointLeaf<F> | Queries complete leaf data for a specified checkpoint (including global state root, block information, etc.) | Coordinator Node |
| get_latest_block_state | No parameters | PsyBlockState | Queries the complete state of the latest L2 block (including block height, transaction count, etc.) | Coordinator Node |
| get_block_state | checkpoint_id: u64 (checkpoint identifier) | PsyBlockState | Queries the L2 block state corresponding to a specified checkpoint | Coordinator Node |

Virtual Machine Bytecode Operations

The Psy VM uses a Data Processing Network (DPN) architecture that compiles high-level Psy language constructs into a series of low-level operations. The bytecode forms a tree-like dependency structure: inputs are unknown variables that receive concrete values at execution time, and evaluating the tree with those values produces the outputs.

Bytecode Architecture Overview

The VM operates on a constraint-based system where each operation generates mathematical constraints that can be verified in zero-knowledge. Operations are encoded as DPNOpType enums with associated input parameters and output specifications.

Tree-Based Execution Model

The bytecode represents a computational tree structure:

  • Inputs: Tree leaves containing unknown variables (function parameters, storage values)
  • Operations: Tree nodes that compute values based on their children
  • Outputs: Tree root values representing function results
  • Execution: Assigning concrete values to inputs propagates through the tree to compute all intermediate and output values

During execution, once the input variables receive concrete values, the entire dependency tree is evaluated bottom-up to produce the final outputs. This tree structure maps directly onto zero-knowledge proof systems: each operation becomes a constraint and each variable becomes a wire in the circuit.

Data Types

All VM operations work with these fundamental data types:

  • Target (Felt): Field elements in the Goldilocks field (p = 18446744069414584321)
  • Bool: Boolean values (0 or 1)
  • U32Target: 32-bit unsigned integers
  • HashOut: 4-element field arrays representing hash outputs
  • Arrays: Collections of the above types
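Arithmetic on Target values wraps modulo the Goldilocks prime. A minimal sketch of field add/mul using u128 intermediates (illustrative; not the Plonky2 implementation, which uses faster reduction tricks):

```rust
/// The Goldilocks prime p = 2^64 - 2^32 + 1.
const P: u128 = 18_446_744_069_414_584_321;

// Field addition: widen to u128, reduce mod p.
fn add(a: u64, b: u64) -> u64 {
    ((a as u128 + b as u128) % P) as u64
}

// Field multiplication: u64 * u64 fits in u128 before reduction.
fn mul(a: u64, b: u64) -> u64 {
    ((a as u128 * b as u128) % P) as u64
}

fn main() {
    let p_minus_1 = (P - 1) as u64; // the field element -1
    assert_eq!(add(p_minus_1, 1), 0);         // wraps to zero at p
    assert_eq!(mul(p_minus_1, p_minus_1), 1); // (-1) * (-1) = 1 mod p
    // UnaryNegative: -a is p - a in the field, so a + (-a) = 0.
    assert_eq!(add(5, (P as u64) - 5), 0);
}
```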

Operation Categories

1. Arithmetic and Mathematical Operations

These operations perform basic arithmetic and mathematical computations on field elements.

Basic Arithmetic

Add = 4          // a + b
Sub = 5          // a - b  
Mul = 6          // a * b
Div = 7          // a / b (modular inverse in finite field)

Advanced Arithmetic

Exp = 24                 // a^b
ExpConstantPower = 25    // a^const
ExpConstantBase = 26     // const^b
Mod = 27                 // a % b
ModConstantDividend = 28 // const % b
ModConstantDivisor = 29  // a % const
DivRem4 = 30            // Division with 4-element remainder

Unary Operations

UnaryInverse = 64    // Modular inverse of a
UnaryNegative = 65   // -a in field arithmetic

2. Boolean Logic Operations

Operations for boolean logic and conditional expressions.

Basic Boolean Operations

BoolNot = 8      // !a
BoolAnd = 9      // a && b
BoolOr = 10      // a || b

Bitwise Operations

Xor = 11         // a XOR b (bitwise)
Nor = 12         // !(a || b)

Comparison Operations

Eq = 13          // a == b
Lte = 14         // a <= b
Gte = 15         // a >= b
Gt = 16          // a > b
Lt = 17          // a < b

3. U32 Integer Operations

Specialized operations for 32-bit unsigned integer arithmetic.

U32 Arithmetic

U32Add = 68      // 32-bit addition
U32Sub = 69      // 32-bit subtraction  
U32Mul = 70      // 32-bit multiplication
U32Div = 71      // 32-bit division
U32Mod = 75      // 32-bit modulo
U32Exp = 76      // 32-bit exponentiation

U32 Bitwise Operations

U32And = 32              // a & b
U32AndConstant = 33      // a & const
U32Or = 34               // a | b
U32OrConstant = 35       // a | const
U32Xor = 36              // a ^ b
U32XorConstant = 37      // a ^ const

U32 Shift Operations

U32ShiftLeft = 38                           // a << b
U32ShiftLeftConstantBitDistance = 40        // a << const
U32ShiftLeftConstantValue = 41              // const << b
U32ShiftRight = 42                          // a >> b
U32ShiftRightConstantBitDistance = 43       // a >> const
U32ShiftRightConstantValue = 44             // const >> b

4. Type Conversion Operations

Operations for converting between different data types.

CastU32 = 31     // Convert Felt to u32
CastFelt = 72    // Convert u32 to Felt
CastBool = 73    // Convert to boolean

5. Cryptographic Operations

Operations for cryptographic functions and hashing.

Hash Operations

HashNoPad = 21           // Hash without padding
HashPad = 22             // Hash with padding
HashTwoToOne = 78        // Merge two hashes into one
CalculateMerkleRoot = 45 // Calculate Merkle tree root

Digital Signature Verification

Secp256k1Verify = 77     // Verify secp256k1 signature

6. Blockchain State Access Operations

Operations for reading blockchain and transaction context information.

User and Contract Context

GetUserId = 46               // Current user's ID
GetContractId = 47           // Current contract ID
GetCallerContractId = 79     // Calling contract's ID
GetCheckpointId = 48         // Current checkpoint/block height
GetNonce = 49                // Current user's nonce
GetUserPublicKeyHash = 50    // User's public key hash

State Query Operations

GetStateQueryResult = 51           // Query blockchain state (hash result)
GetStateQueryResultSingle = 52     // Query single value from state
GetStateCommandResultHash = 53     // Get state command result as hash
GetStateCommandResultSingle = 54   // Get state command result as single value
GetStateCommandResultArray = 55    // Get state command result as array

7. Contract Storage Access Operations

State commands for reading and writing contract storage across the user tree hierarchy.

Current Contract Storage (Self)

GetSelfUserCurrentContractStateSlotHash    // Read hash from current contract slot
GetSelfUserCurrentContractStateSlotSingle  // Read single value from current contract
GetSelfUserCurrentContractStateSlotRange   // Read range of values from current contract

External Contract Storage (Same User)

GetSelfUserExternalContractStateSlotHash   // Read hash from other contract (same user)
GetSelfUserExternalContractStateSlotSingle // Read single value from other contract (same user)
GetSelfUserExternalContractStateSlotRange  // Read range from other contract (same user)

Cross-User Storage Access

GetOtherUserContractStateSlotHash    // Read hash from other user's contract
GetOtherUserContractStateSlotSingle  // Read single value from other user's contract  
GetOtherUserContractStateSlotRange   // Read range from other user's contract

Storage Write Operations

SetContractStateSlotHash     // Write 4-element hash to storage slot
SetContractStateSlotSingle   // Write single value to storage sub-slot
SetContractStateSlotRange    // Write array of values to storage range
ClearEntireTree             // Clear all storage for current user/contract

8. Contract Interaction Operations

Operations for invoking other contracts and managing execution flow.

InvokeExternalContractFunctionSync     // Synchronous contract call
InvokeExternalContractFunctionDeferred // Asynchronous contract call

9. Control Flow and Selection Operations

Operations for conditional execution and data selection.

Select = 23          // Conditional selection: condition ? a : b
SplitBits = 18       // Split value into bit array
SumBits = 19         // Sum bits back into value
TargetAt = 20        // Access element at index

10. Input/Output and Constants

Operations for handling program inputs and constant values.

InputTarget = 0      // Function input parameter (Felt)
U32InputTarget = 66  // Function input parameter (u32)
BoolInputTarget = 74 // Function input parameter (bool)
Constant = 1         // Constant Felt value
ConstantU32 = 67     // Constant u32 value  
ConstantTrue = 2     // Boolean true constant
ConstantFalse = 3    // Boolean false constant

11. Assertion Operations

Operations for runtime validation and constraint enforcement.

Assertions are not represented as DPNOpType enums but as separate DPNAssertEqInfoIndexed structures that enforce equality constraints during circuit execution.

#![allow(unused)]
fn main() {
// Assertion structure
pub struct DPNAssertEqInfoIndexed {
    pub left: u64,     // Left operand (variable reference)
    pub right: u64,    // Right operand (variable reference)  
    pub message: String, // Error message for failed assertion
}
}

Assertions validate that two computed values are equal, with descriptive error messages for debugging failed proofs.
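One plausible way such constraints are enforced during witness generation is a single pass over the computed values table. The struct mirrors the definition above; `check_assertions` is an illustrative sketch, not code from the VM:

```rust
/// Assertion structure, as described above.
pub struct DPNAssertEqInfoIndexed {
    pub left: u64,       // index of the left operand in the values table
    pub right: u64,      // index of the right operand
    pub message: String, // error message for a failed assertion
}

/// Illustrative check: every assertion must hold over the computed
/// values, otherwise witness generation (and thus the proof) fails.
fn check_assertions(values: &[u64], asserts: &[DPNAssertEqInfoIndexed]) -> Result<(), String> {
    for a in asserts {
        if values[a.left as usize] != values[a.right as usize] {
            return Err(a.message.clone());
        }
    }
    Ok(())
}

fn main() {
    let values = vec![7u64, 7, 9];
    let ok = DPNAssertEqInfoIndexed { left: 0, right: 1, message: "mismatch".into() };
    assert!(check_assertions(&values, &[ok]).is_ok());
    let bad = DPNAssertEqInfoIndexed { left: 0, right: 2, message: "mismatch".into() };
    assert_eq!(check_assertions(&values, &[bad]), Err("mismatch".to_string()));
}
```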

12. Blockchain Information Access

Operations for accessing blockchain-wide information and metadata.

GetCheckpointLeafStats  // Get checkpoint/block statistics
GetContractLeaf         // Get contract metadata from global tree

Operation Properties

Each operation has several important properties:

Data Type Constraints

Operations specify their output data type through get_data_type():

  • Arithmetic ops return Target (Felt)
  • Comparison ops return Bool
  • U32 ops return U32Target
  • Hash ops return HashOut

Storage Access Patterns

The VM enforces the "read others, write self" security model:

  1. Read Operations: Can access any user's storage in read-only mode
  2. Write Operations: Can only modify current user's storage
  3. Cross-Contract: Can read from any contract, write only to current contract
  4. Storage Slots: Each contract has 2^32 available slots, each storing a Hash (4 Felts)
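The read/write rules above reduce to a single predicate per storage access; the function below is an illustrative sketch (its name and shape are not from the codebase):

```rust
/// Illustrative access check for the "read others, write self" model:
/// reads may target any user's storage; writes only the caller's own.
fn access_allowed(is_write: bool, caller_user: u64, target_user: u64) -> bool {
    !is_write || caller_user == target_user
}

fn main() {
    assert!(access_allowed(false, 1, 2));  // cross-user read: allowed
    assert!(access_allowed(true, 1, 1));   // self write: allowed
    assert!(!access_allowed(true, 1, 2));  // cross-user write: rejected
}
```

The same shape applies along the contract dimension: any contract's storage is readable, but writes land only in the current contract's slots.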

Function-Specific Compilation

Psy opcodes are generated for each individual function during compilation. Each function compiles to a DPNFunctionCircuitDefinition that contains all necessary information to generate and execute the corresponding zero-knowledge circuit.

DPNFunctionCircuitDefinition Structure

#![allow(unused)]
fn main() {
pub struct DPNFunctionCircuitDefinition {
    pub name: String,                                    // Function name
    pub method_id: u32,                                  // Unique method identifier
    pub circuit_inputs: Vec<u64>,                        // Input variable references
    pub circuit_outputs: Vec<u64>,                       // Output variable references
    pub state_commands: Vec<DPNStateCmd<u64>>,           // Storage/blockchain operations
    pub state_command_resolution_indices: Vec<usize>,    // Mapping to operation results
    pub assertions: Vec<DPNAssertEqInfoIndexed>,         // Runtime validation constraints
    pub definitions: Vec<DPNIndexedVarDef>,              // All operation definitions
    pub events: Vec<DPNEventRecord>,                     // Event emissions
}
}

Field Explanations

name: String

  • Human-readable function name for debugging and identification
  • Example: "simple_mint", "transfer", "main"

method_id: u32

  • Unique identifier for the function within the contract
  • Used for contract method dispatch and cross-contract calls
  • Generated deterministically from function signature

circuit_inputs: Vec<u64>

  • References to input parameters in the operation definitions
  • Maps to function parameters in order: fn transfer(from: Felt, to: Felt, amount: Felt)
  • Each u64 is an encoded reference to a DPNIndexedVarDef entry

circuit_outputs: Vec<u64>

  • References to values returned by the function
  • For void functions, this is empty
  • For functions with return values, contains references to final computed results

state_commands: Vec<DPNStateCmd<u64>>

  • All storage read/write operations and blockchain state access
  • Includes operations like GetSelfUserCurrentContractStateSlotSingle, SetContractStateSlotHash
  • Each command represents interaction with the global state tree

state_command_resolution_indices: Vec<usize>

  • Maps state command results to variable definitions
  • When a state command executes, its result is stored at the specified index
  • Enables referencing storage read results in subsequent operations

assertions: Vec<DPNAssertEqInfoIndexed>

  • All assert and assert_eq statements from the source code
  • Enforces runtime constraints that must be satisfied for proof validity
  • Contains error messages for debugging failed assertions

definitions: Vec<DPNIndexedVarDef>

  • Complete list of all operations (opcodes) in execution order
  • Each entry represents one computation step
  • Forms a dependency graph where later operations can reference earlier results
  • See detailed explanation in the DPNIndexedVarDef section below

events: Vec<DPNEventRecord>

  • Event emissions from the function execution
  • Contains checkpoint_id, user_id, contract_id, and event data
  • Used for off-chain indexing and monitoring

Compilation Process

High-level Psy code goes through this compilation process:

  1. Parse: Psy syntax → AST
  2. Semantic Analysis: Type checking, visibility rules
  3. DPN Generation: AST → DPNFunctionCircuitDefinition per function
  4. Operation Encoding: Function body → DPNIndexedVarDef operations
  5. State Command Extraction: Storage/blockchain access → DPNStateCmd
  6. Constraint Generation: Operations → Mathematical constraints
  7. Circuit Building: Constraints → Plonky2 circuits
  8. Proof Generation: Circuit execution → Zero-knowledge proofs

Example Compilation

impl TokenContractRef {
    pub fn mint(amount: Felt) {
        let contract = TokenContractRef::new(ContractMetadata::current());
        let current_supply = contract.total_supply.get();
        assert(current_supply + amount > current_supply, "overflow");
        contract.total_supply.set(current_supply + amount);
    }
}

This compiles to a DPNFunctionCircuitDefinition containing:

  • name: "mint"
  • method_id: Hash of method signature
  • circuit_inputs: [amount_var_ref]
  • circuit_outputs: [] (void function)
  • state_commands: [GetSelfUserCurrentContractStateSlotSingle, SetContractStateSlotSingle]
  • assertions: [DPNAssertEqInfoIndexed{left: overflow_check, right: true, message: "overflow"}]
  • definitions: All intermediate operations (TokenContractRef::new, get, add, comparison, set)

DPNIndexedVarDef: Operation Encoding and Symbol Evaluation

The core of the VM's execution model is the DPNIndexedVarDef structure, which represents individual operations in a unified array format that enables efficient symbolic evaluation.

Structure Definition

#![allow(unused)]
fn main() {
pub struct DPNIndexedVarDef {
    pub data_type: DPNBuiltInDataType,  // Output data type
    pub index: usize,                   // Position in definitions array
    pub op_type: DPNOpType,            // Operation to perform
    pub inputs: Vec<u64>,              // References to input operands
}
}

Unified Array Storage Model

All operations within a function are stored in a single Vec<DPNIndexedVarDef> where:

  1. Sequential Indexing: Each operation gets a unique index (0, 1, 2, ...)
  2. Dependency References: Later operations reference earlier ones by index
  3. Symbolic Evaluation: Operations form a directed acyclic graph (DAG)
  4. Memory Efficiency: Shared intermediate results avoid recomputation

Input Reference Encoding

The inputs: Vec<u64> field contains encoded references to operands:

#![allow(unused)]
fn main() {
// Encoding format: (data_type << 32) | index
pub fn encode_indexed_op_id(data_type: DPNBuiltInDataType, index: usize) -> u64 {
    ((data_type as u64) << 32) | (index as u64)
}

pub fn decode_indexed_op_id(id: u64) -> (DPNBuiltInDataType, usize) {
    (DPNBuiltInDataType::from(id >> 32), (id & 0xFFFFFFFF) as usize)
}
}
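The packing places the data-type tag in the upper 32 bits and the definitions index in the lower 32. A quick round-trip check (a standalone sketch: `DataType` is a simplified stand-in for `DPNBuiltInDataType`, and `decode` returns the raw tag rather than sketching a `From<u64>` impl):

```rust
// Round-trip check of the reference-ID packing above. DataType is a
// simplified stand-in for DPNBuiltInDataType (only three tags shown).
enum DataType {
    Target = 0,
    Bool = 1,
    U32Target = 2,
}

fn encode(data_type: DataType, index: usize) -> u64 {
    ((data_type as u64) << 32) | (index as u64)
}

// Returns the raw tag rather than an enum, to avoid sketching a
// From<u64> impl here.
fn decode(id: u64) -> (u64, usize) {
    (id >> 32, (id & 0xFFFF_FFFF) as usize)
}

fn main() {
    // A Bool (tag 1) produced at definitions index 7 packs to 0x1_0000_0007.
    let id = encode(DataType::Bool, 7);
    assert_eq!(id, 0x1_0000_0007);
    assert_eq!(decode(id), (1, 7));
}
```

Because the tag occupies the high bits, a reference is never ambiguous between type arrays even when two arrays share the same index.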

Symbolic Evaluation Process

The VM evaluates operations in dependency order:

  1. Topological Sort: Ensure dependencies are computed before dependent operations
  2. Lazy Evaluation: Only compute values when needed
  3. Memoization: Cache results to avoid duplicate computation
  4. Type Safety: Ensure type consistency across operations

Example: Simple Addition

let x = 5;
let y = 10;
let result = x + y;

Compiles to these DPNIndexedVarDef entries (note: in practice, constant folding optimization would evaluate this at compile time to a single constant 15):

#![allow(unused)]
fn main() {
[
    // Index 0: Constant x = 5
    DPNIndexedVarDef {
        data_type: Target,
        index: 0,
        op_type: Constant,
        inputs: vec![5], // Constant value embedded
    },
    
    // Index 1: Constant y = 10  
    DPNIndexedVarDef {
        data_type: Target,
        index: 1,
        op_type: Constant,
        inputs: vec![10], // Constant value embedded
    },
    
    // Index 2: result = x + y
    DPNIndexedVarDef {
        data_type: Target,
        index: 2,
        op_type: Add,
        inputs: vec![
            encode_indexed_op_id(Target, 0), // Reference to x (index 0)
            encode_indexed_op_id(Target, 1), // Reference to y (index 1)
        ],
    },
]
}

Complex Expression Example

let a = 3;
let b = 4;
let c = 5;
let result = (a + b) * c;

Compiles to:

#![allow(unused)]
fn main() {
[
    // Index 0: a = 3
    DPNIndexedVarDef { data_type: Target, index: 0, op_type: Constant, inputs: vec![3] },
    
    // Index 1: b = 4
    DPNIndexedVarDef { data_type: Target, index: 1, op_type: Constant, inputs: vec![4] },
    
    // Index 2: c = 5
    DPNIndexedVarDef { data_type: Target, index: 2, op_type: Constant, inputs: vec![5] },
    
    // Index 3: temp = a + b
    DPNIndexedVarDef { 
        data_type: Target, 
        index: 3, 
        op_type: Add, 
        inputs: vec![encode_indexed_op_id(Target, 0), encode_indexed_op_id(Target, 1)]
    },
    
    // Index 4: result = temp * c
    DPNIndexedVarDef { 
        data_type: Target, 
        index: 4, 
        op_type: Mul, 
        inputs: vec![encode_indexed_op_id(Target, 3), encode_indexed_op_id(Target, 2)]
    },
]
}

Storage Operation Integration

Storage operations create dependencies between computational operations and state access:

let current_balance = contract.balance.get();  // State read
let new_balance = current_balance + amount;    // Computation  
contract.balance.set(new_balance);            // State write

Results in:

#![allow(unused)]
fn main() {
[
    // Index 0: amount parameter
    DPNIndexedVarDef { data_type: Target, index: 0, op_type: InputTarget, inputs: vec![] },
    
    // Index 1: Read current balance (state command result reference)
    DPNIndexedVarDef { 
        data_type: Target, 
        index: 1, 
        op_type: GetStateCommandResultSingle, 
        inputs: vec![0] // References state_commands[0]
    },
    
    // Index 2: new_balance = current_balance + amount
    DPNIndexedVarDef { 
        data_type: Target, 
        index: 2, 
        op_type: Add, 
        inputs: vec![
            encode_indexed_op_id(Target, 1), // current_balance
            encode_indexed_op_id(Target, 0), // amount
        ]
    },
    
    // State write handled separately in state_commands array
]
}

Advantages of This Model

  1. Memory Efficiency: Single array reduces memory fragmentation
  2. Cache Locality: Sequential access patterns improve performance
  3. Dependency Tracking: Clear parent-child relationships
  4. Debugging: Easy to trace execution and identify bottlenecks
  5. Optimization: Compiler can perform dead code elimination and common subexpression elimination
  6. Proof Generation: Direct mapping to constraint systems

Type Safety and Validation

Each DPNIndexedVarDef includes its output data type:

#![allow(unused)]
fn main() {
pub enum DPNBuiltInDataType {
    Target = 0,        // Field element (Felt)
    Bool = 1,          // Boolean
    U32Target = 2,     // 32-bit integer
    HashOut = 3,       // 4-element hash
    TargetArray = 5,   // Array of field elements
    BoolArray = 6,     // Array of booleans
    U32TargetArray = 7,// Array of 32-bit integers
}
}

The VM enforces type consistency:

  • Input types must match operation requirements
  • Output types are determined by operation semantics
  • Type mismatches cause compilation errors

This indexed variable definition system forms the foundation of Psy's symbolic execution model, enabling efficient zero-knowledge proof generation while maintaining clear semantics and strong type safety.

Performance Considerations

Operation Costs

  • Field arithmetic: Most efficient (native to ZK circuits)
  • U32 operations: Require range checks (higher cost)
  • Hash operations: Expensive but necessary for storage
  • State access: Variable cost based on tree depth
  • Contract calls: Most expensive due to recursive proving

Optimization Strategies

  • Constant folding: Evaluate constant expressions at compile time
  • Operation fusion: Combine multiple ops when possible
  • Storage batching: Group storage operations to reduce tree traversals
  • Selective operations: Use specialized constant variants when applicable
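Constant folding in particular falls out naturally from the indexed-op array: because the array is in dependency order, a single forward pass folds whole constant subgraphs. The sketch below is illustrative only — `Op` and `fold_constants` are hypothetical stand-ins, not the Psy compiler's actual pass:

```rust
// Illustrative constant-folding pass over a simplified op array; Op and
// fold_constants are hypothetical stand-ins, not the Psy compiler's API.
#[derive(Clone, Copy, Debug, PartialEq)]
enum Op {
    Constant(u64),
    Add(usize, usize), // operand indices into the same array
    Mul(usize, usize),
}

// Fold Add/Mul ops whose operands are already constants. Earlier folds
// feed later ones, so constant subgraphs collapse in one forward pass.
fn fold_constants(mut ops: Vec<Op>) -> Vec<Op> {
    for i in 0..ops.len() {
        let folded = match ops[i] {
            Op::Add(a, b) => match (ops[a], ops[b]) {
                (Op::Constant(x), Op::Constant(y)) => Some(Op::Constant(x + y)),
                _ => None,
            },
            Op::Mul(a, b) => match (ops[a], ops[b]) {
                (Op::Constant(x), Op::Constant(y)) => Some(Op::Constant(x * y)),
                _ => None,
            },
            _ => None,
        };
        if let Some(op) = folded {
            ops[i] = op;
        }
    }
    ops
}

fn main() {
    // (3 + 4) * 5 from the earlier example: every input is constant,
    // so the final Mul folds to Constant(35).
    let ops = fold_constants(vec![
        Op::Constant(3),
        Op::Constant(4),
        Op::Constant(5),
        Op::Add(0, 1),
        Op::Mul(3, 2),
    ]);
    assert_eq!(ops[3], Op::Constant(7));
    assert_eq!(ops[4], Op::Constant(35));
}
```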

Summary

The Psy VM bytecode provides a comprehensive instruction set covering:

  • Arithmetic: Full support for field and integer arithmetic
  • Logic: Boolean operations and comparisons
  • Cryptography: Hashing and signature verification
  • State Management: Hierarchical storage with security guarantees
  • Blockchain Integration: Access to user, contract, and network context
  • Performance: Optimized for zero-knowledge proof generation

This bytecode abstraction allows high-level Psy contracts to compile down to efficient zero-knowledge circuits while maintaining the security properties required for decentralized applications.

VM Execution

VM execution takes a DPNFunctionCircuitDefinition and generates witness data for zero-knowledge proof construction.

SimpleDPNExecutor

The executor maintains type-specific storage arrays:

#![allow(unused)]
fn main() {
pub struct SimpleDPNExecutor<F: RichField> {
    pub targets: Vec<F>,           // Field elements
    pub target_arrays: Vec<Vec<F>>, 
    pub hashes: Vec<[F; 4]>,       
    pub bools: Vec<bool>,          
    pub bool_arrays: Vec<Vec<bool>>, 
    pub u32s: Vec<u32>,            
    pub u32_arrays: Vec<Vec<u32>>, 
    
    pub user_id: F,                
    pub contract_id: F,            
    pub caller_contract_id: F,     
    pub checkpoint_id: F,          
    pub user_public_key: [F; 4],   
    pub nonce: F,                  
    pub inputs: Vec<F>,            
}
}

Input

  • DPNFunctionCircuitDefinition: Compiled bytecode with operations and state commands
  • Function Parameters: Concrete values for function inputs
  • Blockchain Context: User ID, contract ID, block height, nonce
  • External State: Storage values from blockchain state tree

Execution Process

  1. Initialize executor with function inputs and blockchain context
  2. Process each DPNIndexedVarDef operation in sequence
  3. Resolve input references to concrete values from storage arrays
  4. Execute operation and store result in appropriate type array
  5. Generate complete witness for proof construction

Output

  • Witness Arrays: Computed values in type-specific storage (targets, bools, u32s, etc.)
  • State Command Witness: Results from blockchain state operations
  • Execution Context: Final blockchain state after execution
  • State Changes: Modified storage slots and new values
  • Events: Emitted contract events with parameters

Key Functions

#![allow(unused)]
fn main() {
pub fn resolve_target(&self, id: u64) -> F {
    let (data_type, index) = decode_indexed_op_id(id);
    match data_type {
        DPNBuiltInDataType::Target => self.targets[index],
        DPNBuiltInDataType::Bool => if self.bools[index] { F::ONE } else { F::ZERO },
        DPNBuiltInDataType::U32Target => F::from_canonical_u32(self.u32s[index]),
        _ => panic!("Invalid data type"),
    }
}

pub fn process_var_def(&mut self, op: &DPNIndexedVarDef) {
    match op.op_type {
        DPNOpType::Add => {
            let left = self.resolve_target(op.inputs[0]);
            let right = self.resolve_target(op.inputs[1]);
            self.targets.push(left + right);
        },
        DPNOpType::Constant => {
            self.targets.push(F::from_canonical_u64(op.inputs[0]));
        },
        // ... other operations
    }
}
}
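The execution loop above can be exercised end to end with a miniature executor. This is a sketch using plain u64 in place of the field type F; `MiniExecutor`, `OpType`, and `run` are stand-ins for illustration, not the real `SimpleDPNExecutor` API:

```rust
// Minimal walk-through of the executor loop: resolve encoded input
// references, execute the op, push the result so definitions[i]'s
// value lands at targets[i].
enum OpType {
    Constant(u64),      // value embedded directly in inputs
    Add(u64, u64),      // encoded references to earlier results
}

struct MiniExecutor {
    targets: Vec<u64>,
}

impl MiniExecutor {
    // Resolve an encoded (data_type << 32) | index reference; this
    // sketch only handles the Target tag (0).
    fn resolve(&self, id: u64) -> u64 {
        assert_eq!(id >> 32, 0, "only Target refs handled in this sketch");
        self.targets[(id & 0xFFFF_FFFF) as usize]
    }

    fn step(&mut self, op: &OpType) {
        match op {
            OpType::Constant(v) => self.targets.push(*v),
            OpType::Add(a, b) => {
                let sum = self.resolve(*a) + self.resolve(*b);
                self.targets.push(sum);
            }
        }
    }
}

// Run a definition list in order and return the witness array.
fn run(defs: &[OpType]) -> Vec<u64> {
    let mut exec = MiniExecutor { targets: Vec::new() };
    for op in defs {
        exec.step(op);
    }
    exec.targets
}

fn main() {
    // Matches the "Simple Addition" example: x = 5, y = 10, result = x + y.
    let witness = run(&[OpType::Constant(5), OpType::Constant(10), OpType::Add(0, 1)]);
    assert_eq!(witness, vec![5, 10, 15]);
}
```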

Summary

Execution produces witness data containing computed values in type-specific arrays, which are used for zero-knowledge proof generation.

User Cli Interface

This section documents the three core data-query interfaces implemented by RpcProvider (QTreeDataStoreReaderSync<F>, QMetaDataStoreReaderSync<F>, and PsyComboDataStoreReaderSync<F>): their methods, parameters, and return types, covering the core query capabilities for blockchain data such as users, contracts, and checkpoints.

Basic Information

  • Core Dependent Types:
    • F = GoldilocksField: A prime field type based on Plonky2, used for blockchain data verification and hash calculation.
    • QHashOut<F>: Hash output result type, storing raw data after hash calculation.
    • MerkleProofCore<QHashOut<F>>: Core Merkle proof type, containing information such as the root, value, and sibling nodes required for proof.

Core Structures

RpcProvider

The foundational component for RPC communication, responsible for interacting with Realm nodes (user-specific data) and Coordinator nodes (global public data, e.g., contracts and checkpoints). It runs in both native environments (such as servers) and WASM environments (such as browser extensions).

Structure Definition

#![allow(unused)]
fn main() {
#[derive(Debug, Clone)]
pub struct RpcProvider {
    pub client: Arc<Client>,               // HTTP client (for RPC requests)
    pub realm_configs: HashMap<u64, Vec<String>>,  // Realm node config: {Realm ID → List of RPC URLs}
    pub coordinator_configs: HashMap<u64, Vec<String>>,  // Coordinator node config: {Coordinator ID → List of RPC URLs}
    pub users_per_realm: u64,              // Number of users assigned to each Realm (for user-to-Realm routing)
    pub current_user_id: u64,              // ID of the currently active user (for default data requests)
}
}

Key Methods

RpcProvider provides methods for node routing, user/contract/data queries, and transaction submission. Core methods are categorized below:

| Category | Method Name | Parameters | Return Value | Function Description |
|---|---|---|---|---|
| Node Routing | get_realm_id | user_id: u64 | u64 | Calculate the Realm ID for a user (via user_id / users_per_realm). |
| Node Routing | get_realm_url | user_id: u64 | anyhow::Result<&String> | Get a random RPC URL of the Realm node corresponding to the user (for load balancing). |
| Node Routing | get_coordinator_url | None | anyhow::Result<&String> | Get a random RPC URL of the Coordinator node (for global data requests). |
| User Operations | register_user | req: QRegisterUserRPCRequest<F> | anyhow::Result<()> | Submit a user registration request to the Coordinator node. |
| User Operations | get_user_id | public_key: QHashOut<F> | anyhow::Result<u64> | Query the user ID corresponding to a public key from the Coordinator node. |
| Contract Operations | deploy_contract | req: QDeployContractRPCRequest<F> | anyhow::Result<()> | Submit a contract deployment request to the Coordinator node. |
| Data Queries | get_realm_latest_block_state | None | anyhow::Result<PsyBlockState> | Query the latest L2 block state from the current user's Realm node. |
| Data Queries | get_claim_amount | checkpoint_id: u64, user_id: u64, claim_user_id: u64 | anyhow::Result<u64> | Calculate the available claim amount for a user by querying contract state tree leaves. |
| Data Queries | check_tx_is_confirmed | checkpoint_id: u64, user_id: u64, tx_hash: QHashOut<GoldilocksField> | anyhow::Result<bool> | Verify whether a transaction is confirmed by comparing the user leaf hash with the transaction hash. |
| Batch Proof Query | get_job_proofs | job_infos: Vec<JobInfo> | anyhow::Result<Vec<(QProvingJobDataID, VariableHeightRewardMerkleProof)>> | Batch query reward Merkle proofs for multiple jobs (routes to Realm/Coordinator based on job location). |
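The routing logic behind get_realm_id and get_realm_url can be sketched directly from the description above: the realm ID is integer division of user_id by users_per_realm, and a URL is then picked from that realm's endpoint list (first entry here, where the real client picks at random for load balancing; function shapes are illustrative, not the SDK's exact signatures):

```rust
// Sketch of user-to-Realm routing: realm_id = user_id / users_per_realm,
// then look up that realm's configured endpoints.
use std::collections::HashMap;

fn get_realm_id(user_id: u64, users_per_realm: u64) -> u64 {
    user_id / users_per_realm
}

fn get_realm_url<'a>(
    realm_configs: &'a HashMap<u64, Vec<String>>,
    user_id: u64,
    users_per_realm: u64,
) -> Option<&'a String> {
    realm_configs
        .get(&get_realm_id(user_id, users_per_realm))?
        .first() // real client: random pick for load balancing
}

fn main() {
    let mut realm_configs = HashMap::new();
    realm_configs.insert(12u64, vec!["http://realm-12.example:8545".to_string()]);
    // With 1000 users per realm, user 12345 routes to realm 12.
    assert_eq!(get_realm_id(12345, 1000), 12);
    assert_eq!(
        get_realm_url(&realm_configs, 12345, 1000).map(String::as_str),
        Some("http://realm-12.example:8545")
    );
}
```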

Auxiliary Structures

RpcConfig & NetworkConfig

Configuration structures for node networks and proxy services, loaded from external config files:

#![allow(unused)]
fn main() {
#[derive(Clone, Debug, Serialize, Deserialize)]
pub struct RpcConfig {
    pub users_per_realm: u64,                // Number of users per Realm
    pub global_user_tree_height: u8,         // Height of the global user Merkle tree
    pub realm_user_tree_height: u8,          // Height of the Realm-specific user Merkle tree
    pub realm_configs: Vec<RealmRpcConfig>,  // List of Realm node configs
    pub coordinator_configs: Vec<CoordinatorRpcConfig>,  // List of Coordinator node configs
    pub prove_proxy_url: Vec<String>,        // List of proof proxy service URLs
}

#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct NetworkConfig {  // Extended config for blockchain network
    pub users_per_realm: u64,
    pub global_user_tree_height: u8,
    pub realm_user_tree_height: u8,
    pub realm_configs: Vec<RealmConfig>,     // Same as RealmRpcConfig
    pub coordinator_configs: Vec<CoordinatorConfig>,  // Same as CoordinatorRpcConfig
    pub prover_url: Option<String>,          // Optional prover service URL
    pub prove_proxy_url: Vec<String>,
    pub native_currency: String,             // Native currency symbol (e.g., "PSY")
}
}

Merkle Tree Data Query

This interface focuses on querying roots, leaf hashes, and Merkle proofs of various Merkle trees in the blockchain, covering core scenarios such as users, contracts, checkpoints, deposits, and withdrawals.

(1) User Contract State Tree Queries (User-Contract Internal State)

| Method Name | Parameters | Return Value | Function Description | Called Node |
|---|---|---|---|---|
| get_user_contract_state_tree_root | checkpoint_id: u64, user_id: u64, contract_id: u32 | QHashOut<F> | Queries the root hash of a user's contract state tree under a specified checkpoint | Realm Node |
| get_user_contract_state_tree_leaf_hash | Same as above, plus height: u8 (state tree height), leaf_id: u64 (leaf node ID) | QHashOut<F> | Queries the hash of a specified leaf node in the state tree | Realm Node |
| get_user_contract_state_tree_merkle_proof | Same parameters as above | MerkleProofCore<QHashOut<F>> | Queries the Merkle proof of a specified leaf node in the state tree (including verification logic) | Realm Node |
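A returned MerkleProofCore-style proof can be checked locally by hashing the leaf up through its siblings. The sketch below shows only the traversal; the real system hashes with Plonky2's Poseidon over QHashOut<F>, whereas this uses the standard library hasher as a stand-in, and `verify`/`node_hash` are illustrative names:

```rust
// Verify a Merkle proof by folding the leaf up through its siblings.
// Bit i of the leaf index says whether, at level i, the running hash
// is the left (bit 0) or right (bit 1) child.
// NOTE: DefaultHasher is an illustrative stand-in for Poseidon.
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

fn node_hash(left: u64, right: u64) -> u64 {
    let mut h = DefaultHasher::new();
    (left, right).hash(&mut h);
    h.finish()
}

fn verify(root: u64, leaf: u64, index: u64, siblings: &[u64]) -> bool {
    let mut acc = leaf;
    let mut idx = index;
    for &sib in siblings {
        acc = if idx & 1 == 0 {
            node_hash(acc, sib) // we are the left child
        } else {
            node_hash(sib, acc) // we are the right child
        };
        idx >>= 1;
    }
    acc == root
}

fn main() {
    // Two-level tree over four leaves; prove leaf index 2 (value 33).
    let leaves = [11u64, 22, 33, 44];
    let l01 = node_hash(leaves[0], leaves[1]);
    let l23 = node_hash(leaves[2], leaves[3]);
    let root = node_hash(l01, l23);
    assert!(verify(root, 33, 2, &[44, l01]));
    // A wrong index makes the recomputed root mismatch.
    assert!(!verify(root, 33, 0, &[44, l01]));
}
```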

(2) User Contract Tree Queries (User-Contract Association)

| Method Name | Parameters | Return Value | Function Description | Called Node |
|---|---|---|---|---|
| get_user_contract_tree_root | checkpoint_id: u64, user_id: u64 | QHashOut<F> | Queries the root hash of a user's contract tree under a specified checkpoint | Realm Node |
| get_user_contract_tree_leaf_hash | Same as above, plus contract_id: u32 | QHashOut<F> | Queries the hash of the leaf node corresponding to a contract in the user's contract tree | Realm Node |
| get_user_contract_tree_merkle_proof | Same parameters as above | MerkleProofCore<QHashOut<F>> | Queries the Merkle proof of a contract in the user's contract tree | Realm Node |

(3) User Registration Tree Queries (Global User Registration State)

| Method Name | Parameters | Return Value | Function Description | Called Node |
|---|---|---|---|---|
| get_user_registration_tree_root | checkpoint_id: u64 | QHashOut<F> | Queries the root hash of the global user registration tree under a specified checkpoint | Coordinator Node |
| get_user_registration_tree_leaf_hash | Same as above, plus leaf_index: u64 | QHashOut<F> | Queries the hash of a leaf node at a specified index in the user registration tree | Coordinator Node |
| get_user_registration_tree_merkle_proof | Same parameters as above | MerkleProofCore<QHashOut<F>> | Queries the Merkle proof of a leaf node at a specified index in the user registration tree | Coordinator Node |

(4) User Tree Queries (Global User State)

| Method Name | Parameters | Return Value | Function Description | Called Node |
|---|---|---|---|---|
| get_user_tree_root | checkpoint_id: u64 | QHashOut<F> | Queries the root hash of the global user tree under a specified checkpoint | Coordinator Node |
| get_user_tree_leaf_hash | Same as above, plus user_id: u64 | QHashOut<F> | Queries the hash of the leaf node corresponding to a user in the user tree | Realm Node |
| get_user_tree_merkle_proof | Same parameters as above | MerkleProofCore<QHashOut<F>> | Queries the Merkle proof of a user in the user tree (automatically merges subtree proofs) | Coordinator & Realm Node |
| get_user_sub_tree_merkle_proof | checkpoint_id: u64, root_level: u8, leaf_level: u8, leaf_index: u64 | MerkleProofCore<QHashOut<F>> | Queries the Merkle proof of a specified level in the user subtree | Coordinator & Realm Node |
(5) Contract Function Tree and Contract Tree Queries

| Method Name | Parameters | Return Value | Function Description | Called Node |
|---|---|---|---|---|
| get_contract_function_tree_root | checkpoint_id: u64, contract_id: u32 | QHashOut<F> | Queries the root hash of a contract's function tree under a specified checkpoint | Coordinator Node |
| get_contract_function_tree_leaf_hash | Same as above, plus function_id: u32 | QHashOut<F> | Queries the hash of a function's leaf node in the contract function tree | Coordinator Node |
| get_contract_function_tree_merkle_proof | Same parameters as above | MerkleProofCore<QHashOut<F>> | Queries the Merkle proof of a function in the contract function tree | Coordinator Node |
| get_contract_tree_root | checkpoint_id: u64 | QHashOut<F> | Queries the root hash of the global contract tree under a specified checkpoint | Coordinator Node |
| get_contract_tree_leaf_hash | Same as above, plus contract_id: u32 | QHashOut<F> | Queries the hash of a contract's leaf node in the contract tree | Coordinator Node |
| get_contract_tree_merkle_proof | Same parameters as above | MerkleProofCore<QHashOut<F>> | Queries the Merkle proof of a contract in the contract tree | Coordinator Node |

(1) Deposit Tree Queries

| Method Name | Parameters | Return Value | Function Description | Called Node |
|---|---|---|---|---|
| get_deposit_tree_root | checkpoint_id: u64 | QHashOut<F> | Queries the root hash of the global deposit tree under a specified checkpoint | Coordinator Node |
| get_deposit_tree_leaf_hash | Same as above, plus deposit_id: u32 | QHashOut<F> | Queries the hash of a deposit's leaf node in the deposit tree | Coordinator Node |
| get_deposit_tree_merkle_proof | Same parameters as above | MerkleProofCore<QHashOut<F>> | Queries the Merkle proof of a deposit in the deposit tree | Coordinator Node |

(2) Withdrawal Tree Queries

| Method Name | Parameters | Return Value | Function Description | Called Node |
|---|---|---|---|---|
| get_withdrawal_tree_root | checkpoint_id: u64 | QHashOut<F> | Queries the root hash of the global withdrawal tree under a specified checkpoint | Coordinator Node |
| get_withdrawal_tree_leaf_hash | Same as above, plus withdrawal_id: u32 | QHashOut<F> | Queries the hash of a withdrawal's leaf node in the withdrawal tree | Coordinator Node |
| get_withdrawal_tree_merkle_proof | Same parameters as above | MerkleProofCore<QHashOut<F>> | Queries the Merkle proof of a withdrawal in the withdrawal tree | Coordinator Node |
(3) Checkpoint Tree Queries

| Method Name | Parameters | Return Value | Function Description | Called Node |
|---|---|---|---|---|
| get_latest_checkpoint_tree_root | No parameters | QHashOut<F> | Queries the root hash of the latest checkpoint tree | Coordinator Node |
| get_checkpoint_tree_root | checkpoint_id: u64 | QHashOut<F> | Queries the root hash of a specified checkpoint tree | Coordinator Node |
| get_checkpoint_tree_leaf_hash | Same as above, plus leaf_checkpoint_id: u64 | QHashOut<F> | Queries the hash of a leaf node in the checkpoint tree | Coordinator Node |
| get_checkpoint_tree_merkle_proof | Same parameters as above | MerkleProofCore<QHashOut<F>> | Queries the Merkle proof of a leaf node in the checkpoint tree | Coordinator Node |

Metadata Query

This interface focuses on querying core blockchain metadata, including complete leaf data for users, contracts, and checkpoints, as well as L2 block states.

1. User Metadata Queries

| Method Name | Parameters | Return Value | Function Description | Called Node |
|---|---|---|---|---|
| get_user_leaf_data | checkpoint_id: u64, user_id: u64 | PsyUserLeaf<F> | Queries complete leaf data for a user under a specified checkpoint (including user state, hash, etc.) | Realm Node |

2. Contract Metadata Queries

| Method Name | Parameters | Return Value | Function Description | Called Node |
|---|---|---|---|---|
| get_contract_leaf_data | contract_id: u64 | PsyContractLeaf<F> | Queries complete leaf data for a contract (including basic contract information, state, etc.) | Coordinator Node |
| get_contract_code_definition | contract_id: u64 | ContractCodeDefinition | Queries the code definition of a contract (including bytecode, function list, etc.) | Coordinator Node |

3. Checkpoint and L2 Block State Queries

| Method Name | Parameters | Return Value | Function Description | Called Node |
|---|---|---|---|---|
| get_checkpoint_leaf_data | checkpoint_id: u64 | PsyCheckpointLeaf<F> | Queries complete leaf data for a specified checkpoint (including global state root, block information, etc.) | Coordinator Node |
| get_latest_block_state | No parameters | PsyBlockState | Queries the complete state of the latest L2 block (including block height, transaction count, etc.) | Coordinator Node |
| get_block_state | checkpoint_id: u64 | PsyBlockState | Queries the L2 block state corresponding to a specified checkpoint | Coordinator Node |

Realm Edge RPC Documentation

This document provides comprehensive documentation for all Realm Edge RPC methods defined in the RealmEdgeRpc trait.

RPC Namespace: psy


Table of Contents

  1. User Management
  2. User End Cap Submission
  3. Checkpoint Data Operations
  4. L2 Block State Operations
  5. User Registration Tree Operations
  6. Checkpoint Tree Operations
  7. User Leaf Data Operations
  8. User Contract State Tree Operations
  9. User Contract Tree Operations
  10. User Tree Operations
  11. Batch Proof Generation
  12. GraphViz Export
  13. Data Structures

User Management

1. check_user_id_in_realm

Check if a user ID belongs to this realm.

Method Name: psy_check_user_id_in_realm

Request Parameters:

{
  "user_id": 12345
}

| Parameter | Type | Description |
|---|---|---|
| user_id | u64 | The user ID to check |

Response:

{
  "result": true
}

| Field | Type | Description |
|---|---|---|
| result | bool | true if the user belongs to this realm, false otherwise |

Example:

curl -X POST http://localhost:8545 \
  -H "Content-Type: application/json" \
  -d '{
    "jsonrpc": "2.0",
    "method": "psy_check_user_id_in_realm",
    "params": [12345],
    "id": 1
  }'
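All of these endpoints share the JSON-RPC 2.0 envelope shown in the curl call above. A client can assemble the same body programmatically; the sketch below only builds the request string (sending it is left to any HTTP client), and `json_rpc_body` is a hypothetical helper, not part of the SDK:

```rust
// Assemble the JSON-RPC 2.0 request body used by the curl example.
// params is passed pre-serialized (e.g. "[12345]") to keep the sketch
// dependency-free; a real client would use a JSON library.
fn json_rpc_body(method: &str, params: &str, id: u64) -> String {
    format!(
        r#"{{"jsonrpc":"2.0","method":"{}","params":{},"id":{}}}"#,
        method, params, id
    )
}

fn main() {
    let body = json_rpc_body("psy_check_user_id_in_realm", "[12345]", 1);
    assert_eq!(
        body,
        r#"{"jsonrpc":"2.0","method":"psy_check_user_id_in_realm","params":[12345],"id":1}"#
    );
}
```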

User End Cap Submission

2. submit_user_end_cap

Submit a user end cap proof for processing.

Method Name: psy_submit_user_end_cap

Request Parameters:

{
  "user_ec_input": {
    "core": {
      "checkpoint_id": "123",
      "stats": { ... },
      "state_transition": { ... },
      "new_user_leaf": { ... }
    },
    "contract_state_updates": [ ... ]
  },
  "proof": { ... }
}

| Parameter | Type | Description |
|---|---|---|
| user_ec_input | SubmitUserEndCapNonProofInput<F> | End cap input data (see Data Structures) |
| proof | ProofWithPublicInputs<F, C, D> | Zero-knowledge proof |

Response:

{
  "result": "0x1234567890abcdef..."
}

| Field | Type | Description |
|---|---|---|
| result | String | Transaction hash or job ID |

Checkpoint Data Operations

3. get_checkpoint_leaf_data

Get checkpoint leaf data by checkpoint ID (u64 parameter).

Method Name: psy_get_checkpoint_leaf_data

Request Parameters:

{
  "checkpoint_id": 100
}

| Parameter | Type | Description |
|---|---|---|
| checkpoint_id | u64 | The checkpoint ID |

Response: See PsyCheckpointLeaf

Example:

curl -X POST http://localhost:8545 \
  -H "Content-Type: application/json" \
  -d '{
    "jsonrpc": "2.0",
    "method": "psy_get_checkpoint_leaf_data",
    "params": [100],
    "id": 1
  }'

4. get_checkpoint_leaf_data_f

Get checkpoint leaf data by checkpoint ID (Field parameter).

Method Name: psy_get_checkpoint_leaf_data_f

Request Parameters:

{
  "checkpoint_id": "100"
}

| Parameter | Type | Description |
|---|---|---|
| checkpoint_id | F (Field) | The checkpoint ID as a field element |

Response: See PsyCheckpointLeaf


L2 Block State Operations

5. get_latest_block_state

Get the latest L2 block state.

Method Name: psy_get_latest_block_state

Request Parameters: None

Response: See PsyBlockState

Example:

curl -X POST http://localhost:8545 \
  -H "Content-Type: application/json" \
  -d '{
    "jsonrpc": "2.0",
    "method": "psy_get_latest_block_state",
    "params": [],
    "id": 1
  }'

Response Example:

{
  "result": {
    "checkpoint_id": 100,
    "next_add_withdrawal_id": 50,
    "next_process_withdrawal_id": 45,
    "next_deposit_id": 200,
    "total_deposits_claimed_epoch": 180,
    "next_user_id": 1000,
    "end_balance": 5000000,
    "next_contract_id": 25
  }
}

6. get_block_state

Get L2 block state at a specific checkpoint (u64 parameter).

Method Name: psy_get_block_state

Request Parameters:

{
  "checkpoint_id": 100
}

| Parameter | Type | Description |
|---|---|---|
| checkpoint_id | u64 | The checkpoint ID |

Response: See PsyBlockState


7. get_block_state_f

Get L2 block state at a specific checkpoint (Field parameter).

Method Name: psy_get_block_state_f

Request Parameters:

{
  "checkpoint_id": "100"
}

| Parameter | Type | Description |
|---|---|---|
| checkpoint_id | F (Field) | The checkpoint ID as a field element |

Response: See PsyBlockState


User Registration Tree Operations

8. get_user_registration_tree_root

Get the user registration tree root at a specific checkpoint.

Method Name: psy_get_user_registration_tree_root

Request Parameters:

{
  "checkpoint_id": 100
}

| Parameter | Type | Description |
|---|---|---|
| checkpoint_id | u64 | The checkpoint ID |

Response: See QHashOut

Example Response:

{
  "result": "0x1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef"
}

Checkpoint Tree Operations

9. get_latest_checkpoint_tree_root

Get the latest checkpoint tree root.

Method Name: psy_get_latest_checkpoint_tree_root

Request Parameters: None

Response: See QHashOut


10. get_checkpoint_tree_root

Get checkpoint tree root at a specific checkpoint (u64 parameter).

Method Name: psy_get_checkpoint_tree_root

Request Parameters:

{
  "checkpoint_id": 100
}

| Parameter | Type | Description |
|---|---|---|
| checkpoint_id | u64 | The checkpoint ID |

Response: See QHashOut


11. get_checkpoint_tree_root_f

Get checkpoint tree root at a specific checkpoint (Field parameter).

Method Name: psy_get_checkpoint_tree_root_f

Request Parameters:

{
  "checkpoint_id": "100"
}

| Parameter | Type | Description |
|---|---|---|
| checkpoint_id | F (Field) | The checkpoint ID as a field element |

Response: See QHashOut


12. get_checkpoint_tree_leaf_hash

Get a specific checkpoint tree leaf hash (u64 parameters).

Method Name: psy_get_checkpoint_tree_leaf_hash

Request Parameters:

{
  "checkpoint_id": 100,
  "leaf_checkpoint_id": 95
}

| Parameter | Type | Description |
|---|---|---|
| checkpoint_id | u64 | The checkpoint ID |
| leaf_checkpoint_id | u64 | The leaf checkpoint ID |

Response: See QHashOut


13. get_checkpoint_tree_leaf_hash_f

Get a specific checkpoint tree leaf hash (Field parameters).

Method Name: psy_get_checkpoint_tree_leaf_hash_f

Request Parameters:

{
  "checkpoint_id": "100",
  "leaf_checkpoint_id": "95"
}

| Parameter | Type | Description |
|---|---|---|
| checkpoint_id | F (Field) | The checkpoint ID as a field element |
| leaf_checkpoint_id | F (Field) | The leaf checkpoint ID as a field element |

Response: See QHashOut


14. get_checkpoint_tree_merkle_proof

Get Merkle proof for a checkpoint tree leaf (u64 parameters).

Method Name: psy_get_checkpoint_tree_merkle_proof

Request Parameters:

{
  "checkpoint_id": 100,
  "leaf_checkpoint_id": 95
}

| Parameter | Type | Description |
|---|---|---|
| checkpoint_id | u64 | The checkpoint ID |
| leaf_checkpoint_id | u64 | The leaf checkpoint ID |

Response: See MerkleProofCore


15. get_checkpoint_tree_merkle_proof_f

Get Merkle proof for a checkpoint tree leaf (Field parameters).

Method Name: psy_get_checkpoint_tree_merkle_proof_f

Request Parameters:

{
  "checkpoint_id": "100",
  "leaf_checkpoint_id": "95"
}

| Parameter | Type | Description |
|---|---|---|
| checkpoint_id | F (Field) | The checkpoint ID as a field element |
| leaf_checkpoint_id | F (Field) | The leaf checkpoint ID as a field element |

Response: See MerkleProofCore


16. get_checkpoint_global_state_roots

Get global state roots at a specific checkpoint.

Method Name: psy_get_checkpoint_global_state_roots

Request Parameters:

{
  "checkpoint_id": 100
}

| Parameter | Type | Description |
|---|---|---|
| checkpoint_id | u64 | The checkpoint ID |

Response: See PsyCheckpointGlobalStateRoots

Example Response:

{
  "result": {
    "contract_tree_root": "0x...",
    "deposit_tree_root": "0x...",
    "user_tree_root": "0x...",
    "withdrawal_tree_root": "0x...",
    "user_registration_tree_root": "0x..."
  }
}

User Leaf Data Operations

17. get_user_leaf_data

Get user leaf data at a specific checkpoint (u64 parameters).

Method Name: psy_get_user_leaf_data

Request Parameters:

{
  "checkpoint_id": 100,
  "user_id": 12345
}

| Parameter | Type | Description |
|---|---|---|
| checkpoint_id | u64 | The checkpoint ID |
| user_id | u64 | The user ID |

Response: See PsyUserLeaf

Example:

curl -X POST http://localhost:8545 \
  -H "Content-Type: application/json" \
  -d '{
    "jsonrpc": "2.0",
    "method": "psy_get_user_leaf_data",
    "params": [100, 12345],
    "id": 1
  }'

18. get_user_leaf_data_f

Get user leaf data at a specific checkpoint (Field parameters).

Method Name: psy_get_user_leaf_data_f

Request Parameters:

{
  "checkpoint_id": "100",
  "user_id": "12345"
}
| Parameter | Type | Description |
|---|---|---|
| checkpoint_id | F (Field) | The checkpoint ID as a field element |
| user_id | F (Field) | The user ID as a field element |

Response: See PsyUserLeaf


User Contract State Tree Operations

19. get_user_contract_state_tree_root

Get user contract state tree root (u64 parameters).

Method Name: psy_get_user_contract_state_tree_root

Request Parameters:

{
  "checkpoint_id": 100,
  "user_id": 12345,
  "contract_id": 5
}
| Parameter | Type | Description |
|---|---|---|
| checkpoint_id | u64 | The checkpoint ID |
| user_id | u64 | The user ID |
| contract_id | u32 | The contract ID |

Response: See QHashOut


20. get_user_contract_state_tree_root_f

Get user contract state tree root (Field parameters).

Method Name: psy_get_user_contract_state_tree_root_f

Request Parameters:

{
  "checkpoint_id": "100",
  "user_id": "12345",
  "contract_id": "5"
}
| Parameter | Type | Description |
|---|---|---|
| checkpoint_id | F (Field) | The checkpoint ID as a field element |
| user_id | F (Field) | The user ID as a field element |
| contract_id | F (Field) | The contract ID as a field element |

Response: See QHashOut


21. get_user_contract_state_tree_leaf_hash

Get user contract state tree leaf hash (u64 parameters).

Method Name: psy_get_user_contract_state_tree_leaf_hash

Request Parameters:

{
  "checkpoint_id": 100,
  "user_id": 12345,
  "contract_id": 5,
  "height": 10,
  "leaf_id": 42
}
| Parameter | Type | Description |
|---|---|---|
| checkpoint_id | u64 | The checkpoint ID |
| user_id | u64 | The user ID |
| contract_id | u32 | The contract ID |
| height | u8 | The tree height |
| leaf_id | u64 | The leaf ID |

Response: See QHashOut


22. get_user_contract_state_tree_leaf_hash_f

Get user contract state tree leaf hash (Field parameters).

Method Name: psy_get_user_contract_state_tree_leaf_hash_f

Request Parameters:

{
  "checkpoint_id": "100",
  "user_id": "12345",
  "contract_id": "5",
  "height": 10,
  "leaf_id": "42"
}
| Parameter | Type | Description |
|---|---|---|
| checkpoint_id | F (Field) | The checkpoint ID as a field element |
| user_id | F (Field) | The user ID as a field element |
| contract_id | F (Field) | The contract ID as a field element |
| height | u8 | The tree height |
| leaf_id | F (Field) | The leaf ID as a field element |

Response: See QHashOut


23. get_user_contract_state_tree_merkle_proof

Get Merkle proof for user contract state tree (u64 parameters).

Method Name: psy_get_user_contract_state_tree_merkle_proof

Request Parameters:

{
  "checkpoint_id": 100,
  "user_id": 12345,
  "contract_id": 5,
  "height": 10,
  "leaf_id": 42
}
| Parameter | Type | Description |
|---|---|---|
| checkpoint_id | u64 | The checkpoint ID |
| user_id | u64 | The user ID |
| contract_id | u32 | The contract ID |
| height | u8 | The tree height |
| leaf_id | u64 | The leaf ID |

Response: See MerkleProofCore


24. get_user_contract_state_tree_merkle_proof_f

Get Merkle proof for user contract state tree (Field parameters).

Method Name: psy_get_user_contract_state_tree_merkle_proof_f

Request Parameters:

{
  "checkpoint_id": "100",
  "user_id": "12345",
  "contract_id": "5",
  "height": 10,
  "leaf_id": "42"
}
| Parameter | Type | Description |
|---|---|---|
| checkpoint_id | F (Field) | The checkpoint ID as a field element |
| user_id | F (Field) | The user ID as a field element |
| contract_id | F (Field) | The contract ID as a field element |
| height | u8 | The tree height |
| leaf_id | F (Field) | The leaf ID as a field element |

Response: See MerkleProofCore


User Contract Tree Operations

25. get_user_contract_tree_root

Get user contract tree root (u64 parameters).

Method Name: psy_get_user_contract_tree_root

Request Parameters:

{
  "checkpoint_id": 100,
  "user_id": 12345
}
| Parameter | Type | Description |
|---|---|---|
| checkpoint_id | u64 | The checkpoint ID |
| user_id | u64 | The user ID |

Response: See QHashOut


26. get_user_contract_tree_root_f

Get user contract tree root (Field parameters).

Method Name: psy_get_user_contract_tree_root_f

Request Parameters:

{
  "checkpoint_id": "100",
  "user_id": "12345"
}
| Parameter | Type | Description |
|---|---|---|
| checkpoint_id | F (Field) | The checkpoint ID as a field element |
| user_id | F (Field) | The user ID as a field element |

Response: See QHashOut


27. get_user_contract_tree_leaf_hash

Get user contract tree leaf hash (u64 parameters).

Method Name: psy_get_user_contract_tree_leaf_hash

Request Parameters:

{
  "checkpoint_id": 100,
  "user_id": 12345,
  "contract_id": 5
}
| Parameter | Type | Description |
|---|---|---|
| checkpoint_id | u64 | The checkpoint ID |
| user_id | u64 | The user ID |
| contract_id | u32 | The contract ID |

Response: See QHashOut


28. get_user_contract_tree_leaf_hash_f

Get user contract tree leaf hash (Field parameters).

Method Name: psy_get_user_contract_tree_leaf_hash_f

Request Parameters:

{
  "checkpoint_id": "100",
  "user_id": "12345",
  "contract_id": "5"
}
| Parameter | Type | Description |
|---|---|---|
| checkpoint_id | F (Field) | The checkpoint ID as a field element |
| user_id | F (Field) | The user ID as a field element |
| contract_id | F (Field) | The contract ID as a field element |

Response: See QHashOut


29. get_user_contract_tree_merkle_proof

Get Merkle proof for user contract tree (u64 parameters).

Method Name: psy_get_user_contract_tree_merkle_proof

Request Parameters:

{
  "checkpoint_id": 100,
  "user_id": 12345,
  "contract_id": 5
}
| Parameter | Type | Description |
|---|---|---|
| checkpoint_id | u64 | The checkpoint ID |
| user_id | u64 | The user ID |
| contract_id | u32 | The contract ID |

Response: See MerkleProofCore


30. get_user_contract_tree_merkle_proof_f

Get Merkle proof for user contract tree (Field parameters).

Method Name: psy_get_user_contract_tree_merkle_proof_f

Request Parameters:

{
  "checkpoint_id": "100",
  "user_id": "12345",
  "contract_id": "5"
}
| Parameter | Type | Description |
|---|---|---|
| checkpoint_id | F (Field) | The checkpoint ID as a field element |
| user_id | F (Field) | The user ID as a field element |
| contract_id | F (Field) | The contract ID as a field element |

Response: See MerkleProofCore


User Tree Operations

31. get_user_tree_root

Get user tree root at a specific checkpoint (u64 parameter).

Method Name: psy_get_user_tree_root

Request Parameters:

{
  "checkpoint_id": 100
}
| Parameter | Type | Description |
|---|---|---|
| checkpoint_id | u64 | The checkpoint ID |

Response: See QHashOut


32. get_user_tree_root_f

Get user tree root at a specific checkpoint (Field parameter).

Method Name: psy_get_user_tree_root_f

Request Parameters:

{
  "checkpoint_id": "100"
}
| Parameter | Type | Description |
|---|---|---|
| checkpoint_id | F (Field) | The checkpoint ID as a field element |

Response: See QHashOut


33. get_user_tree_leaf_hash

Get user tree leaf hash (u64 parameters).

Method Name: psy_get_user_tree_leaf_hash

Request Parameters:

{
  "checkpoint_id": 100,
  "user_id": 12345
}
| Parameter | Type | Description |
|---|---|---|
| checkpoint_id | u64 | The checkpoint ID |
| user_id | u64 | The user ID |

Response: See QHashOut


34. get_user_tree_leaf_hash_f

Get user tree leaf hash (Field parameters).

Method Name: psy_get_user_tree_leaf_hash_f

Request Parameters:

{
  "checkpoint_id": "100",
  "user_id": "12345"
}
| Parameter | Type | Description |
|---|---|---|
| checkpoint_id | F (Field) | The checkpoint ID as a field element |
| user_id | F (Field) | The user ID as a field element |

Response: See QHashOut


35. get_user_bottom_tree_merkle_proof

Get Merkle proof for user bottom tree (u64 parameters).

Method Name: psy_get_user_bottom_tree_merkle_proof

Request Parameters:

{
  "root_level": 5,
  "checkpoint_id": 100,
  "user_id": 12345
}
| Parameter | Type | Description |
|---|---|---|
| root_level | u8 | The root level of the tree |
| checkpoint_id | u64 | The checkpoint ID |
| user_id | u64 | The user ID |

Response: See MerkleProofCore


36. get_user_bottom_tree_merkle_proof_f

Get Merkle proof for user bottom tree (Field parameters).

Method Name: psy_get_user_bottom_tree_merkle_proof_f

Request Parameters:

{
  "root_level": 5,
  "checkpoint_id": "100",
  "user_id": "12345"
}
| Parameter | Type | Description |
|---|---|---|
| root_level | u8 | The root level of the tree |
| checkpoint_id | F (Field) | The checkpoint ID as a field element |
| user_id | F (Field) | The user ID as a field element |

Response: See MerkleProofCore


37. get_user_sub_tree_merkle_proof

Get Merkle proof for user sub-tree (u64 parameters).

Method Name: psy_get_user_sub_tree_merkle_proof

Request Parameters:

{
  "checkpoint_id": 100,
  "root_level": 5,
  "leaf_level": 2,
  "leaf_index": 42
}
| Parameter | Type | Description |
|---|---|---|
| checkpoint_id | u64 | The checkpoint ID |
| root_level | u8 | The root level of the tree |
| leaf_level | u8 | The leaf level of the tree |
| leaf_index | u64 | The leaf index |

Response: See MerkleProofCore


38. get_user_sub_tree_merkle_proof_f

Get Merkle proof for user sub-tree (Field parameters).

Method Name: psy_get_user_sub_tree_merkle_proof_f

Request Parameters:

{
  "checkpoint_id": "100",
  "root_level": 5,
  "leaf_level": 2,
  "leaf_index": "42"
}
| Parameter | Type | Description |
|---|---|---|
| checkpoint_id | F (Field) | The checkpoint ID as a field element |
| root_level | u8 | The root level of the tree |
| leaf_level | u8 | The leaf level of the tree |
| leaf_index | F (Field) | The leaf index as a field element |

Response: See MerkleProofCore


39. get_user_tree_merkle_proof

Get Merkle proof for user tree (u64 parameters).

Method Name: psy_get_user_tree_merkle_proof

Request Parameters:

{
  "checkpoint_id": 100,
  "user_id": 12345
}
| Parameter | Type | Description |
|---|---|---|
| checkpoint_id | u64 | The checkpoint ID |
| user_id | u64 | The user ID |

Response: See MerkleProofCore

Example:

curl -X POST http://localhost:8545 \
  -H "Content-Type: application/json" \
  -d '{
    "jsonrpc": "2.0",
    "method": "psy_get_user_tree_merkle_proof",
    "params": [100, 12345],
    "id": 1
  }'

40. get_user_tree_merkle_proof_f

Get Merkle proof for user tree (Field parameters).

Method Name: psy_get_user_tree_merkle_proof_f

Request Parameters:

{
  "checkpoint_id": "100",
  "user_id": "12345"
}
| Parameter | Type | Description |
|---|---|---|
| checkpoint_id | F (Field) | The checkpoint ID as a field element |
| user_id | F (Field) | The user ID as a field element |

Response: See MerkleProofCore

Note: This method internally converts field parameters to u64 and calls get_user_tree_merkle_proof.


Batch Proof Generation

41. generate_batch_variable_height_reward_proofs

Generate batch variable height reward Merkle proofs for multiple job IDs.

Method Name: psy_generate_batch_variable_height_reward_proofs

Request Parameters:

{
  "checkpoint_id": 100,
  "job_ids": [
    {
      "topic": "GenerateStandardProof",
      "goal_id": 100,
      "slot_id": 5,
      "circuit_type": "GUTATwoEndCap",
      "group_id": 1,
      "sub_group_id": 0,
      "task_index": 0,
      "data_type": "InputWitness",
      "data_index": 0
    }
  ]
}
| Parameter | Type | Description |
|---|---|---|
| checkpoint_id | u64 | The checkpoint ID |
| job_ids | Vec<QProvingJobDataID> | Array of proving job data IDs (see QProvingJobDataID) |

Response:

{
  "result": [
    [
      {
        "top_siblings": [...],
        "sibling_branch": "0x...",
        "reward_leaf": "0x...",
        "proof_height": "5",
        "index": "42"
      },
      {
        "topic": "GenerateStandardProof",
        "goal_id": 100,
        ...
      }
    ]
  ]
}
| Field | Type | Description |
|---|---|---|
| result | Vec<(VariableHeightRewardMerkleProof, QProvingJobDataID)> | Array of tuples containing proofs and job IDs |

Example:

curl -X POST http://localhost:8545 \
  -H "Content-Type: application/json" \
  -d '{
    "jsonrpc": "2.0",
    "method": "psy_generate_batch_variable_height_reward_proofs",
    "params": [100, [...]],
    "id": 1
  }'

GraphViz Export

42. get_graphviz

Get GraphViz representation of the Merkle tree at a specific checkpoint.

Method Name: psy_get_graphviz

Request Parameters:

{
  "checkpoint_id": 100
}
| Parameter | Type | Description |
|---|---|---|
| checkpoint_id | u64 | The checkpoint ID |

Response:

{
  "result": "digraph G {\n  node1 -> node2;\n  ...\n}"
}
| Field | Type | Description |
|---|---|---|
| result | String | GraphViz DOT format string |

Example:

curl -X POST http://localhost:8545 \
  -H "Content-Type: application/json" \
  -d '{
    "jsonrpc": "2.0",
    "method": "psy_get_graphviz",
    "params": [100],
    "id": 1
  }' | jq -r '.result' | dot -Tpng > tree.png

Data Structures

QHashOut

A hash output wrapper for Plonky2 field elements.

Structure:

pub struct QHashOut<F: Field>(pub HashOut<F>);

JSON Representation:

"0x1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef"

Description:

  • Wraps a Plonky2 HashOut<F> containing 4 field elements
  • Serialized as a hexadecimal string (32 bytes)
  • Represents a 256-bit hash value
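
As an illustration, the hex string can be split back into the four underlying field elements. This is a hedged sketch only: the 8-bytes-per-element chunking and the big-endian-per-chunk byte order are assumptions, not confirmed details of the actual serializer.

```rust
// Sketch: decode a QHashOut hex string ("0x" + 64 hex chars) into the
// 4 underlying 64-bit field elements. Chunking and byte order here are
// illustrative assumptions about the wire format.
fn hash_hex_to_elements(hex: &str) -> Option<[u64; 4]> {
    let hex = hex.strip_prefix("0x")?;
    if hex.len() != 64 {
        return None; // must be exactly 32 bytes of hash data
    }
    let mut out = [0u64; 4];
    for (i, chunk) in hex.as_bytes().chunks(16).enumerate() {
        let s = std::str::from_utf8(chunk).ok()?;
        out[i] = u64::from_str_radix(s, 16).ok()?;
    }
    Some(out)
}
```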

PsyBlockState

L2 block state information at a specific checkpoint.

Structure:

pub struct PsyBlockState {
    pub checkpoint_id: u64,
    pub next_add_withdrawal_id: u64,
    pub next_process_withdrawal_id: u64,
    pub next_deposit_id: u64,
    pub total_deposits_claimed_epoch: u64,
    pub next_user_id: u64,
    pub end_balance: u64,
    pub next_contract_id: u32,
}

Fields:

| Field | Type | Description |
|---|---|---|
| checkpoint_id | u64 | The checkpoint identifier |
| next_add_withdrawal_id | u64 | Next withdrawal ID to be added |
| next_process_withdrawal_id | u64 | Next withdrawal ID to be processed |
| next_deposit_id | u64 | Next deposit ID |
| total_deposits_claimed_epoch | u64 | Total deposits claimed in current epoch |
| next_user_id | u64 | Next user ID to be assigned |
| end_balance | u64 | Ending balance at this checkpoint |
| next_contract_id | u32 | Next contract ID to be assigned |

Example:

{
  "checkpoint_id": 100,
  "next_add_withdrawal_id": 50,
  "next_process_withdrawal_id": 45,
  "next_deposit_id": 200,
  "total_deposits_claimed_epoch": 180,
  "next_user_id": 1000,
  "end_balance": 5000000,
  "next_contract_id": 25
}

PsyCheckpointLeaf

Checkpoint leaf data containing global chain root and statistics.

Structure:

pub struct PsyCheckpointLeaf<F: RichField> {
    pub global_chain_root: QHashOut<F>,
    pub stats: PsyCheckpointLeafStats<F>,
}

Fields:

| Field | Type | Description |
|---|---|---|
| global_chain_root | QHashOut<F> | Global chain state root hash |
| stats | PsyCheckpointLeafStats<F> | Checkpoint statistics |

PsyCheckpointLeafStats Structure:

pub struct PsyCheckpointLeafStats<F: RichField> {
    pub fees_collected: F,
    pub user_ops_processed: F,
    pub total_transactions: F,
    pub slots_modified: F,
    pub pm_jobs_completed: PMJobsCompletedStats<F>,
    pub block_time: F,
    pub random_seed: QHashOut<F>,
    pub pm_rewards_commitment: PMRewardCommitment<F>,
    pub da_challenges_claimed: [F; DA_CHALLENGE_WINDOW],
}

Example:

{
  "global_chain_root": "0x...",
  "stats": {
    "fees_collected": "1000",
    "user_ops_processed": "50",
    "total_transactions": "75",
    "slots_modified": "30",
    "pm_jobs_completed": {...},
    "block_time": "1234567890",
    "random_seed": "0x...",
    "pm_rewards_commitment": {...},
    "da_challenges_claimed": [...]
  }
}

PsyCheckpointGlobalStateRoots

Global state roots at a specific checkpoint.

Structure:

pub struct PsyCheckpointGlobalStateRoots<F: RichField> {
    pub contract_tree_root: QHashOut<F>,
    pub deposit_tree_root: QHashOut<F>,
    pub user_tree_root: QHashOut<F>,
    pub withdrawal_tree_root: QHashOut<F>,
    pub user_registration_tree_root: QHashOut<F>,
}

Fields:

| Field | Type | Description |
|---|---|---|
| contract_tree_root | QHashOut<F> | Root of the contract tree |
| deposit_tree_root | QHashOut<F> | Root of the deposit tree |
| user_tree_root | QHashOut<F> | Root of the user tree |
| withdrawal_tree_root | QHashOut<F> | Root of the withdrawal tree |
| user_registration_tree_root | QHashOut<F> | Root of the user registration tree |

Example:

{
  "contract_tree_root": "0x1234...",
  "deposit_tree_root": "0x5678...",
  "user_tree_root": "0x9abc...",
  "withdrawal_tree_root": "0xdef0...",
  "user_registration_tree_root": "0x1234..."
}

PsyUserLeaf

User leaf data containing user state information.

Structure:

pub struct PsyUserLeaf<F: RichField> {
    pub public_key: QHashOut<F>,
    pub user_state_tree_root: QHashOut<F>,
    pub balance: F,
    pub nonce: F,
    pub last_checkpoint_id: F,
    pub event_index: F,
    pub user_id: F,
}

Fields:

| Field | Type | Description |
|---|---|---|
| public_key | QHashOut<F> | User's public key hash |
| user_state_tree_root | QHashOut<F> | Root of user's state tree |
| balance | F | User's balance |
| nonce | F | User's transaction nonce |
| last_checkpoint_id | F | Last checkpoint ID where user was updated |
| event_index | F | Event index for this user |
| user_id | F | User identifier |

Example:

{
  "public_key": "0x1234...",
  "user_state_tree_root": "0x5678...",
  "balance": "1000000",
  "nonce": "42",
  "last_checkpoint_id": "100",
  "event_index": "15",
  "user_id": "12345"
}

MerkleProofCore

Generic Merkle proof structure.

Structure:

pub struct MerkleProofCore<Hash: PartialEq + Copy> {
    pub root: Hash,
    pub value: Hash,
    pub index: u64,
    pub siblings: Vec<Hash>,
}

Fields:

| Field | Type | Description |
|---|---|---|
| root | Hash | Merkle tree root hash |
| value | Hash | Leaf value being proven |
| index | u64 | Leaf index in the tree |
| siblings | Vec<Hash> | Sibling hashes along the path |

Example:

{
  "root": "0x1234...",
  "value": "0x5678...",
  "index": 42,
  "siblings": [
    "0x9abc...",
    "0xdef0...",
    "0x1234..."
  ]
}

Verification: The proof can be verified by hashing the value with siblings along the path according to the index bits.
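
The verification step described above can be sketched in Rust. The index-bit traversal matches the description; the `toy_hash` function is a deliberate stand-in (the real tree uses a Poseidon-style hash over field elements), so treat this as an illustration of the folding logic only, not the production verifier.

```rust
// Illustrative stand-in for the real two-to-one Poseidon-style hash.
fn toy_hash(left: u64, right: u64) -> u64 {
    left.wrapping_mul(0x9E37_79B9_7F4A_7C15)
        .wrapping_add(right.rotate_left(17))
}

// Simplified, non-generic mirror of MerkleProofCore for the sketch.
struct ToyMerkleProof {
    root: u64,
    value: u64,
    index: u64,
    siblings: Vec<u64>,
}

fn verify(proof: &ToyMerkleProof) -> bool {
    let mut current = proof.value;
    let mut index = proof.index;
    for sibling in &proof.siblings {
        // Bit 0 of the index says whether the current node is the
        // left (0) or right (1) child at this level.
        current = if index & 1 == 0 {
            toy_hash(current, *sibling)
        } else {
            toy_hash(*sibling, current)
        };
        index >>= 1;
    }
    current == proof.root
}
```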


SubmitUserEndCapNonProofInput

Input data for submitting user end cap (without proof).

Structure:

pub struct SubmitUserEndCapNonProofInput<F: RichField> {
    pub core: SubmitUserEndCapNonProofCoreInput<F>,
    pub contract_state_updates: Vec<PsyContractStateUpdateHistory<F>>,
}

pub struct SubmitUserEndCapNonProofCoreInput<F: RichField> {
    pub checkpoint_id: F,
    pub stats: GUTAStats<F>,
    pub state_transition: UPSEndCapResultCompact<F>,
    pub new_user_leaf: PsyUserLeaf<F>,
}

Fields:

| Field | Type | Description |
|---|---|---|
| core | SubmitUserEndCapNonProofCoreInput<F> | Core input data |
| contract_state_updates | Vec<PsyContractStateUpdateHistory<F>> | Contract state update history |

Core Fields:

| Field | Type | Description |
|---|---|---|
| checkpoint_id | F | Checkpoint ID |
| stats | GUTAStats<F> | GUTA statistics |
| state_transition | UPSEndCapResultCompact<F> | State transition result |
| new_user_leaf | PsyUserLeaf<F> | New user leaf data |

QProvingJobDataID

Proving job data identifier.

Structure:

pub struct QProvingJobDataID {
    pub topic: QJobTopic,
    pub goal_id: u64,
    pub slot_id: u64,
    pub circuit_type: ProvingJobCircuitType,
    pub group_id: u32,
    pub sub_group_id: u32,
    pub task_index: u32,
    pub data_type: ProvingJobDataType,
    pub data_index: u8,
}

Fields:

| Field | Type | Description |
|---|---|---|
| topic | QJobTopic | Job topic (e.g., GenerateStandardProof) |
| goal_id | u64 | Goal identifier (usually checkpoint ID) |
| slot_id | u64 | Slot identifier |
| circuit_type | ProvingJobCircuitType | Type of circuit (e.g., GUTATwoEndCap) |
| group_id | u32 | Group identifier |
| sub_group_id | u32 | Sub-group identifier |
| task_index | u32 | Task index within the group |
| data_type | ProvingJobDataType | Data type (e.g., InputWitness) |
| data_index | u8 | Data index |

Serialization: Serialized as a 32-byte array.

Example:

{
  "topic": "GenerateStandardProof",
  "goal_id": 100,
  "slot_id": 5,
  "circuit_type": "GUTATwoEndCap",
  "group_id": 1,
  "sub_group_id": 0,
  "task_index": 0,
  "data_type": "InputWitness",
  "data_index": 0
}

VariableHeightRewardMerkleProof

Variable height Merkle proof for reward distribution.

Structure:

pub struct VariableHeightRewardMerkleProof<F: RichField> {
    pub top_siblings: Vec<VariableHeightProofSibling<F>>,
    pub sibling_branch: QHashOut<F>,
    pub reward_leaf: QHashOut<F>,
    pub proof_height: F,
    pub index: F,
}

pub struct VariableHeightProofSibling<F: RichField> {
    pub sibling_branch: QHashOut<F>,
    pub sibling_reward_leaf: QHashOut<F>,
}

Fields:

| Field | Type | Description |
|---|---|---|
| top_siblings | Vec<VariableHeightProofSibling> | Siblings at each level |
| sibling_branch | QHashOut<F> | Sibling branch hash |
| reward_leaf | QHashOut<F> | Reward leaf hash |
| proof_height | F | Height of the proof |
| index | F | Index in the tree |

Example:

{
  "top_siblings": [
    {
      "sibling_branch": "0x1234...",
      "sibling_reward_leaf": "0x5678..."
    }
  ],
  "sibling_branch": "0x9abc...",
  "reward_leaf": "0xdef0...",
  "proof_height": "5",
  "index": "42"
}

Field Type Notes

Throughout this API, F represents a field element type (typically GoldilocksField).

Field Element Conversion:

  • Methods with _f suffix accept field elements as strings (e.g., "12345")
  • Methods without _f suffix accept native types (e.g., 12345)
  • Field elements are internally represented as u64 values in the Goldilocks field

Best Practices:

  • Use u64 variants for better performance when possible
  • Use Field variants when working with circuit inputs/outputs
  • Always validate checkpoint_id exists before querying
  • Handle RPC errors gracefully (missing data, invalid parameters)
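
The `_f`/non-`_f` split can be captured in a small conversion helper. This is a hedged sketch assuming field elements are canonical decimal strings in the Goldilocks field (modulus p = 2^64 - 2^32 + 1); the helper names are illustrative, not part of the API.

```rust
// Goldilocks prime: 2^64 - 2^32 + 1.
const GOLDILOCKS_P: u64 = 0xFFFF_FFFF_0000_0001;

/// Parse a field-element string (as accepted by `_f` methods) into a u64,
/// rejecting values outside the canonical range [0, p).
fn field_str_to_u64(s: &str) -> Option<u64> {
    let v: u64 = s.parse().ok()?;
    if v < GOLDILOCKS_P { Some(v) } else { None }
}

/// Serialize a u64 as a field-element string, reducing modulo p first.
fn u64_to_field_str(v: u64) -> String {
    (v % GOLDILOCKS_P).to_string()
}
```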

Error Handling

All RPC methods return RpcResult<T> which can contain errors in the following format:

{
  "jsonrpc": "2.0",
  "error": {
    "code": -32000,
    "message": "Error description"
  },
  "id": 1
}

Common Error Codes:

  • -32000: Server error (checkpoint not found, data unavailable)
  • -32602: Invalid parameters
  • -32603: Internal error
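
A client can map these codes to typed errors before deciding whether to retry, fix its parameters, or surface the failure. The enum and function below are an illustrative client-side sketch, not part of the RPC surface.

```rust
// Hypothetical client-side classification of the common JSON-RPC error
// codes listed above. Names are illustrative, not part of the API.
#[derive(Debug, PartialEq)]
enum RpcClientError {
    ServerError(String),   // -32000: checkpoint not found, data unavailable
    InvalidParams(String), // -32602: caller sent bad parameters
    Internal(String),      // -32603: server-side internal error
    Unknown(i64, String),  // anything else
}

fn classify(code: i64, message: &str) -> RpcClientError {
    match code {
        -32000 => RpcClientError::ServerError(message.to_string()),
        -32602 => RpcClientError::InvalidParams(message.to_string()),
        -32603 => RpcClientError::Internal(message.to_string()),
        other => RpcClientError::Unknown(other, message.to_string()),
    }
}
```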

Usage Examples

Complete Workflow Example

# 1. Check if user belongs to realm
curl -X POST http://localhost:8545 \
  -H "Content-Type: application/json" \
  -d '{
    "jsonrpc": "2.0",
    "method": "psy_check_user_id_in_realm",
    "params": [12345],
    "id": 1
  }'

# 2. Get latest L2 block state
curl -X POST http://localhost:8545 \
  -H "Content-Type: application/json" \
  -d '{
    "jsonrpc": "2.0",
    "method": "psy_get_latest_block_state",
    "params": [],
    "id": 2
  }'

# 3. Get user leaf data at checkpoint
curl -X POST http://localhost:8545 \
  -H "Content-Type: application/json" \
  -d '{
    "jsonrpc": "2.0",
    "method": "psy_get_user_leaf_data",
    "params": [100, 12345],
    "id": 3
  }'

# 4. Get Merkle proof for user
curl -X POST http://localhost:8545 \
  -H "Content-Type: application/json" \
  -d '{
    "jsonrpc": "2.0",
    "method": "psy_get_user_tree_merkle_proof",
    "params": [100, 12345],
    "id": 4
  }'

Method Summary

#Method NameParametersReturnsDescription
1check_user_id_in_realmuser_id: u64boolCheck user belongs to realm
2submit_user_end_capinput, proofStringSubmit end cap proof
3-4get_checkpoint_leaf_data[_f]checkpoint_idPsyCheckpointLeafGet checkpoint leaf
5-7get_[latest_]block_state[_f][checkpoint_id]PsyBlockStateGet L2 block state
8get_user_registration_tree_rootcheckpoint_idQHashOutGet registration tree root
9-11get_[latest_]checkpoint_tree_root[_f][checkpoint_id]QHashOutGet checkpoint tree root
12-15get_checkpoint_tree_[leaf_hash\|merkle_proof][_f]checkpoint_id, leaf_idQHashOut\|ProofCheckpoint tree ops
16get_checkpoint_global_state_rootscheckpoint_idGlobalStateRootsGet global state roots
17-18get_user_leaf_data[_f]checkpoint_id, user_idPsyUserLeafGet user leaf data
19-24get_user_contract_state_tree_*[_f]VariousQHashOut\|ProofContract state tree ops
25-30get_user_contract_tree_*[_f]VariousQHashOut\|ProofUser contract tree ops
31-40get_user_tree_*[_f]VariousQHashOut\|ProofUser tree operations
41generate_batch_variable_height_reward_proofscheckpoint_id, job_idsVec<(Proof, JobID)>Batch reward proofs
42get_graphvizcheckpoint_idStringGet tree visualization

Document Version: 1.0
Last Updated: 2025-10-24
Total RPC Methods: 42

Coordinator Edge RPC Documentation

This document provides comprehensive documentation for all Coordinator Edge RPC methods defined in the CoordinatorEdgeRpc trait.

RPC Namespace: psy


Table of Contents

  1. User Management
  2. Contract Management
  3. Block Operations
  4. GUTA Submission
  5. Checkpoint Operations
  6. Checkpoint Sync
  7. L2 Block State Operations
  8. User Registration Tree Operations
  9. User Tree Operations
  10. Contract Function Tree Operations
  11. Contract Tree Operations
  12. Deposit Tree Operations
  13. Withdrawal Tree Operations
  14. Checkpoint Tree Operations
  15. Reward Proofs Generation
  16. Realm Status Operations
  17. Data Structures

User Management

1. register_user

Register a new user with ZK public key.

Method Name: psy_register_user

Request Parameters:

{
  "public_key": {
    "fingerprint": "0x1234...",
    "public_key_param": "0x5678..."
  }
}
| Parameter | Type | Description |
|---|---|---|
| public_key | ZKPublicKeyInfo<F> | ZK public key information |

Response:

{
  "result": "ok"
}
| Field | Type | Description |
|---|---|---|
| result | String | "ok" on success |

Example:

curl -X POST http://localhost:8545 \
  -H "Content-Type: application/json" \
  -d '{
    "jsonrpc": "2.0",
    "method": "psy_register_user",
    "params": [{
      "fingerprint": "0x...",
      "public_key_param": "0x..."
    }],
    "id": 1
  }'

2. get_user_id

Get user ID by public key hash.

Method Name: psy_get_user_id

Request Parameters:

{
  "public_key": "0x1234567890abcdef..."
}
| Parameter | Type | Description |
|---|---|---|
| public_key | QHashOut<F> | User's public key hash |

Response:

{
  "result": 12345
}
| Field | Type | Description |
|---|---|---|
| result | u64 | User ID |

Example:

curl -X POST http://localhost:8545 \
  -H "Content-Type: application/json" \
  -d '{
    "jsonrpc": "2.0",
    "method": "psy_get_user_id",
    "params": ["0x1234..."],
    "id": 1
  }'

Contract Management

3. deploy_contract

Deploy a new smart contract.

Method Name: psy_deploy_contract

Request Parameters:

{
  "deploy_contract": {
    "deployer": "0x1234...",
    "code_definition": {
      "state_tree_height": 10,
      "functions": [...]
    },
    "function_whitelist": ["0x..."]
  }
}
| Parameter | Type | Description |
|---|---|---|
| deploy_contract | QBCDeployContract<F> | Contract deployment data (see QBCDeployContract) |

Response:

{
  "result": "ok"
}

Example:

curl -X POST http://localhost:8545 \
  -H "Content-Type: application/json" \
  -d '{
    "jsonrpc": "2.0",
    "method": "psy_deploy_contract",
    "params": [{
      "deployer": "0x...",
      "code_definition": {...},
      "function_whitelist": [...]
    }],
    "id": 1
  }'

4. get_contract_leaf_data

Get contract leaf data by contract ID (u64 parameter).

Method Name: psy_get_contract_leaf_data

Request Parameters:

{
  "contract_id": 5
}
| Parameter | Type | Description |
|---|---|---|
| contract_id | u64 | The contract ID |

Response: See PsyContractLeaf


5. get_contract_leaf_data_f

Get contract leaf data by contract ID (Field parameter).

Method Name: psy_get_contract_leaf_data_f

Request Parameters:

{
  "contract_id": "5"
}
| Parameter | Type | Description |
|---|---|---|
| contract_id | F (Field) | The contract ID as a field element |

Response: See PsyContractLeaf


6. get_contract_code_definition

Get contract code definition by contract ID (u64 parameter).

Method Name: psy_get_contract_code_definition

Request Parameters:

{
  "contract_id": 5
}
| Parameter | Type | Description |
|---|---|---|
| contract_id | u64 | The contract ID |

Response: See ContractCodeDefinition


7. get_contract_code_definition_f

Get contract code definition by contract ID (Field parameter).

Method Name: psy_get_contract_code_definition_f

Request Parameters:

{
  "contract_id": "5"
}
| Parameter | Type | Description |
|---|---|---|
| contract_id | F (Field) | The contract ID as a field element |

Response: See ContractCodeDefinition


Block Operations

8. build_block

Trigger building a new block/checkpoint.

Method Name: psy_build_block

Request Parameters: None

Response:

{
  "result": "ok"
}

Example:

curl -X POST http://localhost:8545 \
  -H "Content-Type: application/json" \
  -d '{
    "jsonrpc": "2.0",
    "method": "psy_build_block",
    "params": [],
    "id": 1
  }'

GUTA Submission

9. submit_guta

Submit GUTA (Global User Tree Aggregator) result with proof.

Method Name: psy_submit_guta

Request Parameters:

{
  "input": {
    "realm_id": 1,
    "checkpoint_id": 100,
    "guta_stats": {...},
    "top_line_proof": {...},
    "checkpoint_tree_root": "0x...",
    "proof_id": {...}
  },
  "proof": {...},
  "realm_id": 1
}
| Parameter | Type | Description |
|---|---|---|
| input | SubmitGUTARealmResultAPINoProofInput<F> | GUTA submission input |
| proof | ProofWithPublicInputs<F, C, D> | Zero-knowledge proof |
| realm_id | u64 | Realm identifier |

Response:

{
  "result": "ok"
}

10. submit_guta_v1

Submit GUTA result with serialized proof (v1 format).

Method Name: psy_submit_guta_v1

Request Parameters:

{
  "input": {...},
  "proof": "0x...",
  "realm_id": 1
}
| Parameter | Type | Description |
|---|---|---|
| input | SubmitGUTARealmResultAPINoProofInput<F> | GUTA submission input |
| proof | Vec<u8> | Serialized proof bytes |
| realm_id | u64 | Realm identifier |

Response:

{
  "result": null
}

11. submit_realm_result

Submit realm processing result to coordinator.

Method Name: psy_submit_realm_result

Request Parameters:

{
  "realm_result": {
    "header": {
      "realm_id": 1,
      "checkpoint_id": 100,
      "start_realm_root": "0x...",
      "end_realm_root": "0x...",
      "guta_stats": {...},
      "root_job_id": {...}
    },
    "proof": "0x..."
  }
}
| Parameter | Type | Description |
|---|---|---|
| realm_result | RealmDataForCoordinator<F> | Realm result data |

Response:

{
  "result": null
}

Checkpoint Operations

12. get_latest_checkpoint

Get latest checkpoint information.

Method Name: psy_get_latest_checkpoint

Request Parameters: None

Response:

{
  "result": {
    "checkpoint_id": 100
  }
}
| Field | Type | Description |
|---|---|---|
| checkpoint_id | u64 | Latest checkpoint ID |

13. latest_checkpoint

Get latest checkpoint ID (simple version).

Method Name: psy_latest_checkpoint

Request Parameters: None

Response:

{
  "result": 100
}

14. get_latest_checkpoint_id

Get latest checkpoint ID.

Method Name: psy_get_latest_checkpoint_id

Request Parameters: None

Response:

{
  "result": 100
}

Example:

curl -X POST http://localhost:8545 \
  -H "Content-Type: application/json" \
  -d '{
    "jsonrpc": "2.0",
    "method": "psy_get_latest_checkpoint_id",
    "params": [],
    "id": 1
  }'

15. get_checkpoint_leaf_data

Get checkpoint leaf data by checkpoint ID (u64 parameter).

Method Name: psy_get_checkpoint_leaf_data

Request Parameters:

{
  "checkpoint_id": 100
}
| Parameter | Type | Description |
|---|---|---|
| checkpoint_id | u64 | The checkpoint ID |

Response: See PsyCheckpointLeaf


16. get_checkpoint_leaf_data_f

Get checkpoint leaf data by checkpoint ID (Field parameter).

Method Name: psy_get_checkpoint_leaf_data_f

Request Parameters:

{
  "checkpoint_id": "100"
}
| Parameter | Type | Description |
|---|---|---|
| checkpoint_id | F (Field) | The checkpoint ID as a field element |

Response: See PsyCheckpointLeaf


17. get_checkpoint_global_state_roots

Get global state roots at a specific checkpoint.

Method Name: psy_get_checkpoint_global_state_roots

Request Parameters:

{
  "checkpoint_id": 100
}
| Parameter | Type | Description |
|---|---|---|
| checkpoint_id | u64 | The checkpoint ID |

Response: See PsyCheckpointGlobalStateRoots


Checkpoint Sync

18. get_checkpoint_sync_info

Get checkpoint sync information for a realm.

Method Name: psy_get_checkpoint_sync_info

Request Parameters:

{
  "realm_id": 1,
  "checkpoint_id": 100
}
| Parameter | Type | Description |
|---|---|---|
| realm_id | u32 | Realm identifier |
| checkpoint_id | u64 | Checkpoint ID |

Response: See CheckpointSyncInfo


19. get_checkpoint_sync_info_compact

Get compact checkpoint sync information.

Method Name: psy_get_checkpoint_sync_info_compact

Request Parameters:

{
  "checkpoint_id": 100
}
| Parameter | Type | Description |
|---|---|---|
| checkpoint_id | u64 | Checkpoint ID |

Response: See PsyCheckpointSyncInfoCompact


L2 Block State Operations

20. get_latest_block_state

Get the latest L2 block state.

Method Name: psy_get_latest_block_state

Request Parameters: None

Response: See PsyBlockState

Example Response:

{
  "result": {
    "checkpoint_id": 100,
    "next_add_withdrawal_id": 50,
    "next_process_withdrawal_id": 45,
    "next_deposit_id": 200,
    "total_deposits_claimed_epoch": 180,
    "next_user_id": 1000,
    "end_balance": 5000000,
    "next_contract_id": 25
  }
}

21. get_block_state

Get L2 block state at a specific checkpoint (u64 parameter).

Method Name: psy_get_block_state

Request Parameters:

{
  "checkpoint_id": 100
}
| Parameter | Type | Description |
|---|---|---|
| checkpoint_id | u64 | The checkpoint ID |

Response: See PsyBlockState


22. get_block_state_f

Get L2 block state at a specific checkpoint (Field parameter).

Method Name: psy_get_block_state_f

Request Parameters:

{
  "checkpoint_id": "100"
}
| Parameter | Type | Description |
|---|---|---|
| checkpoint_id | F (Field) | The checkpoint ID as a field element |

Response: See PsyBlockState


User Registration Tree Operations

23. get_user_registration_tree_root

Get user registration tree root at a specific checkpoint (u64 parameter).

Method Name: psy_get_user_registration_tree_root

Request Parameters:

{
  "checkpoint_id": 100
}
| Parameter | Type | Description |
|-----------|------|-------------|
| checkpoint_id | u64 | The checkpoint ID |

Response: See QHashOut


24. get_user_registration_tree_root_f

Get user registration tree root at a specific checkpoint (Field parameter).

Method Name: psy_get_user_registration_tree_root_f

Request Parameters:

{
  "checkpoint_id": "100"
}
| Parameter | Type | Description |
|-----------|------|-------------|
| checkpoint_id | F (Field) | The checkpoint ID as a field element |

Response: See QHashOut


25. get_user_registration_tree_leaf_hash

Get user registration tree leaf hash (u64 parameters).

Method Name: psy_get_user_registration_tree_leaf_hash

Request Parameters:

{
  "checkpoint_id": 100,
  "leaf_index": 42
}
| Parameter | Type | Description |
|-----------|------|-------------|
| checkpoint_id | u64 | The checkpoint ID |
| leaf_index | u64 | The leaf index |

Response: See QHashOut


26. get_user_registration_tree_leaf_hash_f

Get user registration tree leaf hash (Field parameters).

Method Name: psy_get_user_registration_tree_leaf_hash_f

Request Parameters:

{
  "checkpoint_id": "100",
  "leaf_index": "42"
}
| Parameter | Type | Description |
|-----------|------|-------------|
| checkpoint_id | F (Field) | The checkpoint ID as a field element |
| leaf_index | F (Field) | The leaf index as a field element |

Response: See QHashOut


27. get_user_registration_tree_merkle_proof

Get Merkle proof for user registration tree (u64 parameters).

Method Name: psy_get_user_registration_tree_merkle_proof

Request Parameters:

{
  "checkpoint_id": 100,
  "leaf_index": 42
}
| Parameter | Type | Description |
|-----------|------|-------------|
| checkpoint_id | u64 | The checkpoint ID |
| leaf_index | u64 | The leaf index |

Response: See MerkleProofCore


28. get_user_registration_tree_merkle_proof_f

Get Merkle proof for user registration tree (Field parameters).

Method Name: psy_get_user_registration_tree_merkle_proof_f

Request Parameters:

{
  "checkpoint_id": "100",
  "leaf_index": "42"
}
| Parameter | Type | Description |
|-----------|------|-------------|
| checkpoint_id | F (Field) | The checkpoint ID as a field element |
| leaf_index | F (Field) | The leaf index as a field element |

Response: See MerkleProofCore


User Tree Operations

29. get_user_tree_root

Get user tree root at a specific checkpoint (u64 parameter).

Method Name: psy_get_user_tree_root

Request Parameters:

{
  "checkpoint_id": 100
}
| Parameter | Type | Description |
|-----------|------|-------------|
| checkpoint_id | u64 | The checkpoint ID |

Response: See QHashOut


30. get_user_tree_root_f

Get user tree root at a specific checkpoint (Field parameter).

Method Name: psy_get_user_tree_root_f

Request Parameters:

{
  "checkpoint_id": "100"
}
| Parameter | Type | Description |
|-----------|------|-------------|
| checkpoint_id | F (Field) | The checkpoint ID as a field element |

Response: See QHashOut


31. get_user_sub_tree_merkle_proof

Get Merkle proof for user sub-tree.

Method Name: psy_get_user_sub_tree_merkle_proof

Request Parameters:

{
  "checkpoint_id": 100,
  "root_level": 5,
  "leaf_level": 2,
  "leaf_index": 42
}
| Parameter | Type | Description |
|-----------|------|-------------|
| checkpoint_id | u64 | The checkpoint ID |
| root_level | u8 | Root level of the tree |
| leaf_level | u8 | Leaf level of the tree |
| leaf_index | u64 | The leaf index |

Response: See MerkleProofCore


32. get_user_top_tree_merkle_proof

Get Merkle proof for user top tree.

Method Name: psy_get_user_top_tree_merkle_proof

Request Parameters:

{
  "checkpoint_id": 100,
  "leaf_level": 2,
  "leaf_index": 42
}
| Parameter | Type | Description |
|-----------|------|-------------|
| checkpoint_id | u64 | The checkpoint ID |
| leaf_level | u8 | Leaf level of the tree |
| leaf_index | u64 | The leaf index |

Response: See MerkleProofCore


33. get_user_top_tree_cap_root

Get user top tree cap root.

Method Name: psy_get_user_top_tree_cap_root

Request Parameters:

{
  "checkpoint_id": 100,
  "cap_level": 3,
  "cap_index": 10
}
| Parameter | Type | Description |
|-----------|------|-------------|
| checkpoint_id | u64 | The checkpoint ID |
| cap_level | u8 | Cap level |
| cap_index | u64 | Cap index |

Response: See QHashOut


34. get_user_latest_top_tree_cap_root

Get latest user top tree cap root.

Method Name: psy_get_user_latest_top_tree_cap_root

Request Parameters:

{
  "cap_level": 3,
  "cap_index": 10
}
| Parameter | Type | Description |
|-----------|------|-------------|
| cap_level | u8 | Cap level |
| cap_index | u64 | Cap index |

Response: See QHashOut


35. get_user_leaf_data

Get user leaf data at a specific checkpoint.

Method Name: psy_get_user_leaf_data

Request Parameters:

{
  "checkpoint_id": 100,
  "user_id": 12345
}
| Parameter | Type | Description |
|-----------|------|-------------|
| checkpoint_id | u64 | The checkpoint ID |
| user_id | u64 | The user ID |

Response: See PsyUserLeaf


36. get_user_tree_merkle_proof

Get Merkle proof for user tree (u64 parameters).

Method Name: psy_get_user_tree_merkle_proof

Request Parameters:

{
  "checkpoint_id": 100,
  "user_id": 12345
}
| Parameter | Type | Description |
|-----------|------|-------------|
| checkpoint_id | u64 | The checkpoint ID |
| user_id | u64 | The user ID |

Response: See MerkleProofCore


37. get_user_tree_merkle_proof_f

Get Merkle proof for user tree (Field parameters).

Method Name: psy_get_user_tree_merkle_proof_f

Request Parameters:

{
  "checkpoint_id": "100",
  "user_id": "12345"
}
| Parameter | Type | Description |
|-----------|------|-------------|
| checkpoint_id | F (Field) | The checkpoint ID as a field element |
| user_id | F (Field) | The user ID as a field element |

Response: See MerkleProofCore


Contract Function Tree Operations

38. get_contract_function_tree_root

Get contract function tree root (u64 parameters).

Method Name: psy_get_contract_function_tree_root

Request Parameters:

{
  "checkpoint_id": 100,
  "contract_id": 5
}
| Parameter | Type | Description |
|-----------|------|-------------|
| checkpoint_id | u64 | The checkpoint ID |
| contract_id | u32 | The contract ID |

Response: See QHashOut


39. get_contract_function_tree_root_f

Get contract function tree root (Field parameters).

Method Name: psy_get_contract_function_tree_root_f

Request Parameters:

{
  "checkpoint_id": "100",
  "contract_id": "5"
}
| Parameter | Type | Description |
|-----------|------|-------------|
| checkpoint_id | F (Field) | The checkpoint ID as a field element |
| contract_id | F (Field) | The contract ID as a field element |

Response: See QHashOut


40. get_contract_function_tree_leaf_hash

Get contract function tree leaf hash (u64 parameters).

Method Name: psy_get_contract_function_tree_leaf_hash

Request Parameters:

{
  "checkpoint_id": 100,
  "contract_id": 5,
  "function_id": 3
}
| Parameter | Type | Description |
|-----------|------|-------------|
| checkpoint_id | u64 | The checkpoint ID |
| contract_id | u32 | The contract ID |
| function_id | u32 | The function ID |

Response: See QHashOut


41. get_contract_function_tree_leaf_hash_f

Get contract function tree leaf hash (Field parameters).

Method Name: psy_get_contract_function_tree_leaf_hash_f

Request Parameters:

{
  "checkpoint_id": "100",
  "contract_id": "5",
  "function_id": "3"
}
| Parameter | Type | Description |
|-----------|------|-------------|
| checkpoint_id | F (Field) | The checkpoint ID as a field element |
| contract_id | F (Field) | The contract ID as a field element |
| function_id | F (Field) | The function ID as a field element |

Response: See QHashOut


42. get_contract_function_tree_merkle_proof

Get Merkle proof for contract function tree (u64 parameters).

Method Name: psy_get_contract_function_tree_merkle_proof

Request Parameters:

{
  "checkpoint_id": 100,
  "contract_id": 5,
  "function_id": 3
}
| Parameter | Type | Description |
|-----------|------|-------------|
| checkpoint_id | u64 | The checkpoint ID |
| contract_id | u32 | The contract ID |
| function_id | u32 | The function ID |

Response: See MerkleProofCore


43. get_contract_function_tree_merkle_proof_f

Get Merkle proof for contract function tree (Field parameters).

Method Name: psy_get_contract_function_tree_merkle_proof_f

Request Parameters:

{
  "checkpoint_id": "100",
  "contract_id": "5",
  "function_id": "3"
}
| Parameter | Type | Description |
|-----------|------|-------------|
| checkpoint_id | F (Field) | The checkpoint ID as a field element |
| contract_id | F (Field) | The contract ID as a field element |
| function_id | F (Field) | The function ID as a field element |

Response: See MerkleProofCore


Contract Tree Operations

44. get_contract_tree_root

Get contract tree root at a specific checkpoint (u64 parameter).

Method Name: psy_get_contract_tree_root

Request Parameters:

{
  "checkpoint_id": 100
}
| Parameter | Type | Description |
|-----------|------|-------------|
| checkpoint_id | u64 | The checkpoint ID |

Response: See QHashOut


45. get_contract_tree_root_f

Get contract tree root at a specific checkpoint (Field parameter).

Method Name: psy_get_contract_tree_root_f

Request Parameters:

{
  "checkpoint_id": "100"
}
| Parameter | Type | Description |
|-----------|------|-------------|
| checkpoint_id | F (Field) | The checkpoint ID as a field element |

Response: See QHashOut


46. get_contract_tree_leaf_hash

Get contract tree leaf hash (u64 parameters).

Method Name: psy_get_contract_tree_leaf_hash

Request Parameters:

{
  "checkpoint_id": 100,
  "contract_id": 5
}
| Parameter | Type | Description |
|-----------|------|-------------|
| checkpoint_id | u64 | The checkpoint ID |
| contract_id | u32 | The contract ID |

Response: See QHashOut


47. get_contract_tree_leaf_hash_f

Get contract tree leaf hash (Field parameters).

Method Name: psy_get_contract_tree_leaf_hash_f

Request Parameters:

{
  "checkpoint_id": "100",
  "contract_id": "5"
}
| Parameter | Type | Description |
|-----------|------|-------------|
| checkpoint_id | F (Field) | The checkpoint ID as a field element |
| contract_id | F (Field) | The contract ID as a field element |

Response: See QHashOut


48. get_contract_tree_merkle_proof

Get Merkle proof for contract tree (u64 parameters).

Method Name: psy_get_contract_tree_merkle_proof

Request Parameters:

{
  "checkpoint_id": 100,
  "contract_id": 5
}
| Parameter | Type | Description |
|-----------|------|-------------|
| checkpoint_id | u64 | The checkpoint ID |
| contract_id | u32 | The contract ID |

Response: See MerkleProofCore


49. get_contract_tree_merkle_proof_f

Get Merkle proof for contract tree (Field parameters).

Method Name: psy_get_contract_tree_merkle_proof_f

Request Parameters:

{
  "checkpoint_id": "100",
  "contract_id": "5"
}
| Parameter | Type | Description |
|-----------|------|-------------|
| checkpoint_id | F (Field) | The checkpoint ID as a field element |
| contract_id | F (Field) | The contract ID as a field element |

Response: See MerkleProofCore


Deposit Tree Operations

50. get_deposit_tree_root

Get deposit tree root at a specific checkpoint (u64 parameter).

Method Name: psy_get_deposit_tree_root

Request Parameters:

{
  "checkpoint_id": 100
}
| Parameter | Type | Description |
|-----------|------|-------------|
| checkpoint_id | u64 | The checkpoint ID |

Response: See QHashOut


51. get_deposit_tree_root_f

Get deposit tree root at a specific checkpoint (Field parameter).

Method Name: psy_get_deposit_tree_root_f

Request Parameters:

{
  "checkpoint_id": "100"
}
| Parameter | Type | Description |
|-----------|------|-------------|
| checkpoint_id | F (Field) | The checkpoint ID as a field element |

Response: See QHashOut


52. get_deposit_tree_leaf_hash

Get deposit tree leaf hash (u64 parameters).

Method Name: psy_get_deposit_tree_leaf_hash

Request Parameters:

{
  "checkpoint_id": 100,
  "deposit_id": 42
}
| Parameter | Type | Description |
|-----------|------|-------------|
| checkpoint_id | u64 | The checkpoint ID |
| deposit_id | u32 | The deposit ID |

Response: See QHashOut


53. get_deposit_tree_leaf_hash_f

Get deposit tree leaf hash (Field parameters).

Method Name: psy_get_deposit_tree_leaf_hash_f

Request Parameters:

{
  "checkpoint_id": "100",
  "deposit_id": "42"
}
| Parameter | Type | Description |
|-----------|------|-------------|
| checkpoint_id | F (Field) | The checkpoint ID as a field element |
| deposit_id | F (Field) | The deposit ID as a field element |

Response: See QHashOut


54. get_deposit_tree_merkle_proof

Get Merkle proof for deposit tree (u64 parameters).

Method Name: psy_get_deposit_tree_merkle_proof

Request Parameters:

{
  "checkpoint_id": 100,
  "deposit_id": 42
}
| Parameter | Type | Description |
|-----------|------|-------------|
| checkpoint_id | u64 | The checkpoint ID |
| deposit_id | u32 | The deposit ID |

Response: See MerkleProofCore


55. get_deposit_tree_merkle_proof_f

Get Merkle proof for deposit tree (Field parameters).

Method Name: psy_get_deposit_tree_merkle_proof_f

Request Parameters:

{
  "checkpoint_id": "100",
  "deposit_id": "42"
}
| Parameter | Type | Description |
|-----------|------|-------------|
| checkpoint_id | F (Field) | The checkpoint ID as a field element |
| deposit_id | F (Field) | The deposit ID as a field element |

Response: See MerkleProofCore


Withdrawal Tree Operations

56. get_withdrawal_tree_root

Get withdrawal tree root at a specific checkpoint (u64 parameter).

Method Name: psy_get_withdrawal_tree_root

Request Parameters:

{
  "checkpoint_id": 100
}
| Parameter | Type | Description |
|-----------|------|-------------|
| checkpoint_id | u64 | The checkpoint ID |

Response: See QHashOut


57. get_withdrawal_tree_root_f

Get withdrawal tree root at a specific checkpoint (Field parameter).

Method Name: psy_get_withdrawal_tree_root_f

Request Parameters:

{
  "checkpoint_id": "100"
}
| Parameter | Type | Description |
|-----------|------|-------------|
| checkpoint_id | F (Field) | The checkpoint ID as a field element |

Response: See QHashOut


58. get_withdrawal_tree_leaf_hash

Get withdrawal tree leaf hash (u64 parameters).

Method Name: psy_get_withdrawal_tree_leaf_hash

Request Parameters:

{
  "checkpoint_id": 100,
  "withdrawal_id": 42
}
| Parameter | Type | Description |
|-----------|------|-------------|
| checkpoint_id | u64 | The checkpoint ID |
| withdrawal_id | u32 | The withdrawal ID |

Response: See QHashOut


59. get_withdrawal_tree_leaf_hash_f

Get withdrawal tree leaf hash (Field parameters).

Method Name: psy_get_withdrawal_tree_leaf_hash_f

Request Parameters:

{
  "checkpoint_id": "100",
  "withdrawal_id": "42"
}
| Parameter | Type | Description |
|-----------|------|-------------|
| checkpoint_id | F (Field) | The checkpoint ID as a field element |
| withdrawal_id | F (Field) | The withdrawal ID as a field element |

Response: See QHashOut


60. get_withdrawal_tree_merkle_proof

Get Merkle proof for withdrawal tree (u64 parameters).

Method Name: psy_get_withdrawal_tree_merkle_proof

Request Parameters:

{
  "checkpoint_id": 100,
  "withdrawal_id": 42
}
| Parameter | Type | Description |
|-----------|------|-------------|
| checkpoint_id | u64 | The checkpoint ID |
| withdrawal_id | u32 | The withdrawal ID |

Response: See MerkleProofCore


61. get_withdrawal_tree_merkle_proof_f

Get Merkle proof for withdrawal tree (Field parameters).

Method Name: psy_get_withdrawal_tree_merkle_proof_f

Request Parameters:

{
  "checkpoint_id": "100",
  "withdrawal_id": "42"
}
| Parameter | Type | Description |
|-----------|------|-------------|
| checkpoint_id | F (Field) | The checkpoint ID as a field element |
| withdrawal_id | F (Field) | The withdrawal ID as a field element |

Response: See MerkleProofCore


Checkpoint Tree Operations

62. get_latest_checkpoint_tree_root

Get the latest checkpoint tree root.

Method Name: psy_get_latest_checkpoint_tree_root

Request Parameters: None

Response: See QHashOut


63. get_checkpoint_tree_root

Get checkpoint tree root at a specific checkpoint (u64 parameter).

Method Name: psy_get_checkpoint_tree_root

Request Parameters:

{
  "checkpoint_id": 100
}
| Parameter | Type | Description |
|-----------|------|-------------|
| checkpoint_id | u64 | The checkpoint ID |

Response: See QHashOut


64. get_checkpoint_tree_root_f

Get checkpoint tree root at a specific checkpoint (Field parameter).

Method Name: psy_get_checkpoint_tree_root_f

Request Parameters:

{
  "checkpoint_id": "100"
}
| Parameter | Type | Description |
|-----------|------|-------------|
| checkpoint_id | F (Field) | The checkpoint ID as a field element |

Response: See QHashOut


65. get_checkpoint_tree_leaf_hash

Get checkpoint tree leaf hash (u64 parameters).

Method Name: psy_get_checkpoint_tree_leaf_hash

Request Parameters:

{
  "checkpoint_id": 100,
  "leaf_checkpoint_id": 95
}
| Parameter | Type | Description |
|-----------|------|-------------|
| checkpoint_id | u64 | The checkpoint ID |
| leaf_checkpoint_id | u64 | The leaf checkpoint ID |

Response: See QHashOut


66. get_checkpoint_tree_leaf_hash_f

Get checkpoint tree leaf hash (Field parameters).

Method Name: psy_get_checkpoint_tree_leaf_hash_f

Request Parameters:

{
  "checkpoint_id": "100",
  "leaf_checkpoint_id": "95"
}
| Parameter | Type | Description |
|-----------|------|-------------|
| checkpoint_id | F (Field) | The checkpoint ID as a field element |
| leaf_checkpoint_id | F (Field) | The leaf checkpoint ID as a field element |

Response: See QHashOut


67. get_checkpoint_tree_merkle_proof

Get Merkle proof for checkpoint tree (u64 parameters).

Method Name: psy_get_checkpoint_tree_merkle_proof

Request Parameters:

{
  "checkpoint_id": 100,
  "leaf_checkpoint_id": 95
}
| Parameter | Type | Description |
|-----------|------|-------------|
| checkpoint_id | u64 | The checkpoint ID |
| leaf_checkpoint_id | u64 | The leaf checkpoint ID |

Response: See MerkleProofCore


68. get_checkpoint_tree_merkle_proof_f

Get Merkle proof for checkpoint tree (Field parameters).

Method Name: psy_get_checkpoint_tree_merkle_proof_f

Request Parameters:

{
  "checkpoint_id": "100",
  "leaf_checkpoint_id": "95"
}
| Parameter | Type | Description |
|-----------|------|-------------|
| checkpoint_id | F (Field) | The checkpoint ID as a field element |
| leaf_checkpoint_id | F (Field) | The leaf checkpoint ID as a field element |

Response: See MerkleProofCore


Reward Proofs Generation

69. generate_batch_variable_height_reward_proofs

Generate batch variable height reward Merkle proofs for multiple job IDs.

Method Name: psy_generate_batch_variable_height_reward_proofs

Request Parameters:

{
  "checkpoint_id": 100,
  "job_ids": [
    {
      "topic": "GenerateStandardProof",
      "goal_id": 100,
      "slot_id": 5,
      "circuit_type": "GUTATwoEndCap",
      "group_id": 1,
      "sub_group_id": 0,
      "task_index": 0,
      "data_type": "InputWitness",
      "data_index": 0
    }
  ]
}
| Parameter | Type | Description |
|-----------|------|-------------|
| checkpoint_id | u64 | The checkpoint ID |
| job_ids | Vec<QProvingJobDataID> | Array of proving job data IDs |

Response:

{
  "result": [
    [
      {
        "top_siblings": [...],
        "sibling_branch": "0x...",
        "reward_leaf": "0x...",
        "proof_height": "5",
        "index": "42"
      },
      {
        "topic": "GenerateStandardProof",
        "goal_id": 100,
        ...
      }
    ]
  ]
}
| Field | Type | Description |
|-------|------|-------------|
| result | Vec<(VariableHeightRewardMerkleProof, QProvingJobDataID)> | Array of tuples containing proofs and job IDs |

70. get_graphviz

Get GraphViz representation of the job dependency graph at a specific checkpoint.

Method Name: psy_get_graphviz

Request Parameters:

{
  "checkpoint_id": 100
}
| Parameter | Type | Description |
|-----------|------|-------------|
| checkpoint_id | u64 | The checkpoint ID |

Response:

{
  "result": "digraph G {\n  node1 -> node2;\n  ...\n}"
}
| Field | Type | Description |
|-------|------|-------------|
| result | String | GraphViz DOT format string |

Example:

curl -X POST http://localhost:8545 \
  -H "Content-Type: application/json" \
  -d '{
    "jsonrpc": "2.0",
    "method": "psy_get_graphviz",
    "params": [100],
    "id": 1
  }' | jq -r '.result' | dot -Tpng > graph.png

Realm Status Operations

71. get_current_realm_status_on_coordinator

Get current realm status on the coordinator.

Method Name: psy_get_current_realm_status_on_coordinator

Request Parameters:

{
  "realm_id": 1
}
| Parameter | Type | Description |
|-----------|------|-------------|
| realm_id | u64 | Realm identifier |

Response: See BasicRealmStatusOnCoordinator

Example Response:

{
  "result": {
    "realm_id": 1,
    "checkpoint_id": 100,
    "realm_root_hash": "0x..."
  }
}

72. get_current_checkpoint_id

Get current coordinator checkpoint ID.

Method Name: psy_get_current_checkpoint_id

Request Parameters: None

Response:

{
  "result": 100
}

73. get_latest_block_updates_from_coordinator

Get latest block updates from coordinator for a realm within a checkpoint range.

Method Name: psy_get_latest_block_updates_from_coordinator

Request Parameters:

{
  "realm_id": 1,
  "from_checkpoint": 95,
  "to_checkpoint": 100
}
| Parameter | Type | Description |
|-----------|------|-------------|
| realm_id | u32 | Realm identifier |
| from_checkpoint | u64 | Starting checkpoint ID (inclusive) |
| to_checkpoint | u64 | Ending checkpoint ID (inclusive) |

Response:

{
  "result": [
    {
      "latest_checkpoint_id": 100,
      "description": null,
      "source_coordinator_edge_id": null,
      "sync_timestamp": 1234567890,
      "compact": {...},
      "realm_root": "0x..."
    }
  ]
}
| Field | Type | Description |
|-------|------|-------------|
| result | Vec<GlobalBlockUpdateFromCoordinator<F>> | Array of block updates |

74. wait_until_coordinator_completed

Wait until coordinator completes a specific checkpoint for a realm.

Method Name: psy_wait_until_coordinator_completed

Request Parameters:

{
  "realm_id": 1,
  "checkpoint_id": 100
}
| Parameter | Type | Description |
|-----------|------|-------------|
| realm_id | u64 | Realm identifier |
| checkpoint_id | u64 | Checkpoint ID to wait for |

Response: See GlobalBlockUpdateFromCoordinator


Data Structures

QHashOut

A hash output wrapper for Plonky2 field elements.

Structure:

#![allow(unused)]
fn main() {
pub struct QHashOut<F: Field>(pub HashOut<F>);
}

JSON Representation:

"0x1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef"

Description:

  • Wraps a Plonky2 HashOut<F> containing 4 field elements
  • Serialized as a hexadecimal string (32 bytes)
  • Represents a 256-bit hash value
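Since a QHashOut wraps four field elements serialized as one 32-byte hex string, a client can split it back into limbs. This is an illustrative sketch: the 8-byte limb width follows from the description above, but the byte order within each limb is an assumption.

```python
def qhashout_to_limbs(hex_str: str) -> list[int]:
    """Split a 32-byte QHashOut hex string into four 8-byte limbs."""
    raw = bytes.fromhex(hex_str.removeprefix("0x"))
    if len(raw) != 32:
        raise ValueError("QHashOut must be exactly 32 bytes")
    # Big-endian limb order here is illustrative, not normative.
    return [int.from_bytes(raw[i:i + 8], "big") for i in range(0, 32, 8)]

limbs = qhashout_to_limbs(
    "0x1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef"
)
```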

PsyBlockState

L2 block state information at a specific checkpoint.

Structure:

#![allow(unused)]
fn main() {
pub struct PsyBlockState {
    pub checkpoint_id: u64,
    pub next_add_withdrawal_id: u64,
    pub next_process_withdrawal_id: u64,
    pub next_deposit_id: u64,
    pub total_deposits_claimed_epoch: u64,
    pub next_user_id: u64,
    pub end_balance: u64,
    pub next_contract_id: u32,
}
}

Fields:

| Field | Type | Description |
|-------|------|-------------|
| checkpoint_id | u64 | The checkpoint identifier |
| next_add_withdrawal_id | u64 | Next withdrawal ID to be added |
| next_process_withdrawal_id | u64 | Next withdrawal ID to be processed |
| next_deposit_id | u64 | Next deposit ID |
| total_deposits_claimed_epoch | u64 | Total deposits claimed in current epoch |
| next_user_id | u64 | Next user ID to be assigned |
| end_balance | u64 | Ending balance at this checkpoint |
| next_contract_id | u32 | Next contract ID to be assigned |

Example:

{
  "checkpoint_id": 100,
  "next_add_withdrawal_id": 50,
  "next_process_withdrawal_id": 45,
  "next_deposit_id": 200,
  "total_deposits_claimed_epoch": 180,
  "next_user_id": 1000,
  "end_balance": 5000000,
  "next_contract_id": 25
}
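Because the counters are monotonically increasing IDs, some useful quantities can be derived directly from a block state. The sketch below assumes withdrawal IDs are assigned sequentially, so the gap between the two withdrawal counters is the number of withdrawals still queued for processing (an interpretation, not something the API documents explicitly).

```python
state = {
    "checkpoint_id": 100,
    "next_add_withdrawal_id": 50,
    "next_process_withdrawal_id": 45,
}

# Withdrawals added but not yet processed (assumes sequential ID assignment).
pending_withdrawals = (
    state["next_add_withdrawal_id"] - state["next_process_withdrawal_id"]
)
```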

PsyCheckpointLeaf

Checkpoint leaf data containing global chain root and statistics.

Structure:

#![allow(unused)]
fn main() {
pub struct PsyCheckpointLeaf<F: RichField> {
    pub global_chain_root: QHashOut<F>,
    pub stats: PsyCheckpointLeafStats<F>,
}
}

Fields:

| Field | Type | Description |
|-------|------|-------------|
| global_chain_root | QHashOut<F> | Global chain state root hash |
| stats | PsyCheckpointLeafStats<F> | Checkpoint statistics |

PsyCheckpointGlobalStateRoots

Global state roots at a specific checkpoint.

Structure:

#![allow(unused)]
fn main() {
pub struct PsyCheckpointGlobalStateRoots<F: RichField> {
    pub contract_tree_root: QHashOut<F>,
    pub deposit_tree_root: QHashOut<F>,
    pub user_tree_root: QHashOut<F>,
    pub withdrawal_tree_root: QHashOut<F>,
    pub user_registration_tree_root: QHashOut<F>,
}
}

Fields:

| Field | Type | Description |
|-------|------|-------------|
| contract_tree_root | QHashOut<F> | Root of the contract tree |
| deposit_tree_root | QHashOut<F> | Root of the deposit tree |
| user_tree_root | QHashOut<F> | Root of the user tree |
| withdrawal_tree_root | QHashOut<F> | Root of the withdrawal tree |
| user_registration_tree_root | QHashOut<F> | Root of the user registration tree |

Example:

{
  "contract_tree_root": "0x1234...",
  "deposit_tree_root": "0x5678...",
  "user_tree_root": "0x9abc...",
  "withdrawal_tree_root": "0xdef0...",
  "user_registration_tree_root": "0x1234..."
}

PsyUserLeaf

User leaf data containing user state information.

Structure:

#![allow(unused)]
fn main() {
pub struct PsyUserLeaf<F: RichField> {
    pub public_key: QHashOut<F>,
    pub user_state_tree_root: QHashOut<F>,
    pub balance: F,
    pub nonce: F,
    pub last_checkpoint_id: F,
    pub event_index: F,
    pub user_id: F,
}
}

Fields:

| Field | Type | Description |
|-------|------|-------------|
| public_key | QHashOut<F> | User's public key hash |
| user_state_tree_root | QHashOut<F> | Root of user's state tree |
| balance | F | User's balance |
| nonce | F | User's transaction nonce |
| last_checkpoint_id | F | Last checkpoint ID where user was updated |
| event_index | F | Event index for this user |
| user_id | F | User identifier |

Example:

{
  "public_key": "0x1234...",
  "user_state_tree_root": "0x5678...",
  "balance": "1000000",
  "nonce": "42",
  "last_checkpoint_id": "100",
  "event_index": "15",
  "user_id": "12345"
}

MerkleProofCore

Generic Merkle proof structure.

Structure:

#![allow(unused)]
fn main() {
pub struct MerkleProofCore<Hash: PartialEq + Copy> {
    pub root: Hash,
    pub value: Hash,
    pub index: u64,
    pub siblings: Vec<Hash>,
}
}

Fields:

| Field | Type | Description |
|-------|------|-------------|
| root | Hash | Merkle tree root hash |
| value | Hash | Leaf value being proven |
| index | u64 | Leaf index in the tree |
| siblings | Vec<Hash> | Sibling hashes along the path |

Example:

{
  "root": "0x1234...",
  "value": "0x5678...",
  "index": 42,
  "siblings": [
    "0x9abc...",
    "0xdef0...",
    "0x1234..."
  ]
}
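A MerkleProofCore is verified by folding the leaf value with each sibling, using the index bits to decide left/right at every level. Psy's trees hash field elements with a circuit-friendly (Poseidon-class) hash, so the SHA-256 in this sketch is only a stand-in to keep the example self-contained; the folding logic is what the structure implies.

```python
import hashlib

def fold(left: bytes, right: bytes) -> bytes:
    # Placeholder hash; the real trees use a circuit-friendly hash, not SHA-256.
    return hashlib.sha256(left + right).digest()

def verify_merkle_proof(root: bytes, value: bytes, index: int, siblings: list) -> bool:
    """Recompute the root from a MerkleProofCore-style (root, value, index, siblings)."""
    node = value
    for sib in siblings:
        # The low bit of the index says whether this node is the right child.
        node = fold(sib, node) if index & 1 else fold(node, sib)
        index >>= 1
    return node == root

# Tiny 2-leaf tree: root = H(leaf0 || leaf1); prove leaf1 at index 1.
leaf0, leaf1 = b"\x00" * 32, b"\x01" * 32
root = fold(leaf0, leaf1)
ok = verify_merkle_proof(root, leaf1, 1, [leaf0])
```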

ZKPublicKeyInfo

Zero-knowledge public key information.

Structure:

#![allow(unused)]
fn main() {
pub struct ZKPublicKeyInfo<F: RichField> {
    pub fingerprint: QHashOut<F>,
    pub public_key_param: QHashOut<F>,
}
}

Fields:

| Field | Type | Description |
|-------|------|-------------|
| fingerprint | QHashOut<F> | Public key fingerprint |
| public_key_param | QHashOut<F> | Public key parameter |

Example:

{
  "fingerprint": "0x1234...",
  "public_key_param": "0x5678..."
}

QBCDeployContract

Contract deployment command.

Structure:

#![allow(unused)]
fn main() {
pub struct QBCDeployContract<F: RichField> {
    pub deployer: QHashOut<F>,
    pub code_definition: ContractCodeDefinition,
    pub function_whitelist: Vec<QHashOut<F>>,
}
}

Fields:

| Field | Type | Description |
|-------|------|-------------|
| deployer | QHashOut<F> | Deployer's public key hash |
| code_definition | ContractCodeDefinition | Contract code definition |
| function_whitelist | Vec<QHashOut<F>> | Function whitelist hashes |

Example:

{
  "deployer": "0x1234...",
  "code_definition": {
    "state_tree_height": 10,
    "functions": [...]
  },
  "function_whitelist": ["0x5678..."]
}

ContractCodeDefinition

Contract code definition structure.

Structure:

#![allow(unused)]
fn main() {
pub struct ContractCodeDefinition {
    pub state_tree_height: u16,
    pub functions: Vec<ContractFunctionCodeDefinition>,
}

pub struct ContractFunctionCodeDefinition {
    pub method_id: u32,
    pub num_inputs: u32,
    pub num_outputs: u32,
    pub vm_type: u32,
    pub code: Vec<u8>,
}
}

Fields:

| Field | Type | Description |
|-------|------|-------------|
| state_tree_height | u16 | Height of the contract state tree |
| functions | Vec<ContractFunctionCodeDefinition> | Contract functions |

Function Fields:

| Field | Type | Description |
|-------|------|-------------|
| method_id | u32 | Function method ID |
| num_inputs | u32 | Number of input parameters |
| num_outputs | u32 | Number of output parameters |
| vm_type | u32 | VM type identifier |
| code | Vec<u8> | Function bytecode |

Example:

{
  "state_tree_height": 10,
  "functions": [
    {
      "method_id": 12345678,
      "num_inputs": 2,
      "num_outputs": 1,
      "vm_type": 1,
      "code": [0x01, 0x02, ...]
    }
  ]
}
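A client can sanity-check a code_definition before submitting it, since the field types above imply hard bounds (u16 height, u32 function metadata, byte-valued code). This validator is an illustrative helper that only restates those type bounds; it does not reproduce any server-side validation rules.

```python
def validate_code_definition(cd: dict) -> None:
    """Check that a code_definition respects the struct's integer widths."""
    # state_tree_height is a u16 in the Rust struct.
    assert 0 <= cd["state_tree_height"] < 2**16, "state_tree_height out of u16 range"
    for func in cd["functions"]:
        # code is Vec<u8>: every entry must fit in a byte.
        assert all(0 <= b < 256 for b in func["code"]), "code entries must be bytes"
        for key in ("method_id", "num_inputs", "num_outputs", "vm_type"):
            assert 0 <= func[key] < 2**32, f"{key} out of u32 range"

example = {
    "state_tree_height": 10,
    "functions": [{
        "method_id": 12345678,
        "num_inputs": 2,
        "num_outputs": 1,
        "vm_type": 1,
        "code": [1, 2],
    }],
}
validate_code_definition(example)  # passes silently
```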

PsyContractLeaf

Contract leaf data.

Structure:

#![allow(unused)]
fn main() {
pub struct PsyContractLeaf<F: RichField> {
    pub deployer: QHashOut<F>,
    pub function_tree_root: QHashOut<F>,
    pub state_tree_height: F,
}
}

Fields:

| Field | Type | Description |
|-------|------|-------------|
| deployer | QHashOut<F> | Deployer's public key hash |
| function_tree_root | QHashOut<F> | Root of the function tree |
| state_tree_height | F | Height of the state tree |

Example:

{
  "deployer": "0x1234...",
  "function_tree_root": "0x5678...",
  "state_tree_height": "10"
}

CheckpointSyncInfo

Checkpoint synchronization information.

Structure:

#![allow(unused)]
fn main() {
pub struct CheckpointSyncInfo<F: RichField> {
    pub latest_checkpoint_id: u64,
    pub description: Option<String>,
    pub source_coordinator_edge_id: Option<String>,
    pub sync_timestamp: u64,
    pub compact: PsyCheckpointSyncInfoCompact<F>,
    pub realm_root: QHashOut<F>,
}
}

Fields:

| Field | Type | Description |
|-------|------|-------------|
| latest_checkpoint_id | u64 | Latest checkpoint ID |
| description | Option<String> | Optional description |
| source_coordinator_edge_id | Option<String> | Source coordinator ID |
| sync_timestamp | u64 | Synchronization timestamp |
| compact | PsyCheckpointSyncInfoCompact<F> | Compact sync info |
| realm_root | QHashOut<F> | Realm root hash |

PsyCheckpointSyncInfoCompact

Compact checkpoint synchronization information.

Structure:

#![allow(unused)]
fn main() {
pub struct PsyCheckpointSyncInfoCompact<F: RichField> {
    pub block_state: PsyBlockState,
    pub stats: PsyCheckpointLeafStats<F>,
    pub state_roots: PsyCheckpointGlobalStateRoots<F>,
    pub checkpoint_tree_update_siblings: Vec<QHashOut<F>>,
    pub regsitered_users_start_pivot_siblings: Vec<QHashOut<F>>,
    pub registered_users: Vec<ZKPublicKeyInfo<F>>,
    pub old_checkpoint_leaf_hash: QHashOut<F>,
    pub slot: u64,
}
}

Fields:

| Field | Type | Description |
|-------|------|-------------|
| block_state | PsyBlockState | L2 block state |
| stats | PsyCheckpointLeafStats<F> | Checkpoint statistics |
| state_roots | PsyCheckpointGlobalStateRoots<F> | Global state roots |
| checkpoint_tree_update_siblings | Vec<QHashOut<F>> | Checkpoint tree update siblings |
| regsitered_users_start_pivot_siblings | Vec<QHashOut<F>> | User registration pivot siblings |
| registered_users | Vec<ZKPublicKeyInfo<F>> | Newly registered users |
| old_checkpoint_leaf_hash | QHashOut<F> | Previous checkpoint leaf hash |
| slot | u64 | Slot number |

SubmitGUTARealmResultAPINoProofInput

GUTA submission input without proof.

Structure:

#![allow(unused)]
fn main() {
pub struct SubmitGUTARealmResultAPINoProofInput<F: RichField> {
    pub realm_id: u64,
    pub checkpoint_id: u64,
    pub guta_stats: GUTAStats<F>,
    pub top_line_proof: DeltaMerkleProofCore<QHashOut<F>>,
    pub checkpoint_tree_root: QHashOut<F>,
    pub proof_id: QProvingJobDataID,
}
}

Fields:

| Field | Type | Description |
|-------|------|-------------|
| realm_id | u64 | Realm identifier |
| checkpoint_id | u64 | Checkpoint ID |
| guta_stats | GUTAStats<F> | GUTA statistics |
| top_line_proof | DeltaMerkleProofCore<QHashOut<F>> | Top-line Merkle proof |
| checkpoint_tree_root | QHashOut<F> | Checkpoint tree root |
| proof_id | QProvingJobDataID | Proving job ID |

QProvingJobDataID

Proving job data identifier.

Structure:

#![allow(unused)]
fn main() {
pub struct QProvingJobDataID {
    pub topic: QJobTopic,
    pub goal_id: u64,
    pub slot_id: u64,
    pub circuit_type: ProvingJobCircuitType,
    pub group_id: u32,
    pub sub_group_id: u32,
    pub task_index: u32,
    pub data_type: ProvingJobDataType,
    pub data_index: u8,
}
}

Fields:

| Field | Type | Description |
|-------|------|-------------|
| topic | QJobTopic | Job topic |
| goal_id | u64 | Goal identifier (usually checkpoint ID) |
| slot_id | u64 | Slot identifier |
| circuit_type | ProvingJobCircuitType | Type of circuit |
| group_id | u32 | Group identifier |
| sub_group_id | u32 | Sub-group identifier |
| task_index | u32 | Task index within the group |
| data_type | ProvingJobDataType | Data type |
| data_index | u8 | Data index |

Serialization: Serialized as a 32-byte array.


VariableHeightRewardMerkleProof

Variable height Merkle proof for reward distribution.

Structure:

#![allow(unused)]
fn main() {
pub struct VariableHeightRewardMerkleProof<F: RichField> {
    pub top_siblings: Vec<VariableHeightProofSibling<F>>,
    pub sibling_branch: QHashOut<F>,
    pub reward_leaf: QHashOut<F>,
    pub proof_height: F,
    pub index: F,
}

pub struct VariableHeightProofSibling<F: RichField> {
    pub sibling_branch: QHashOut<F>,
    pub sibling_reward_leaf: QHashOut<F>,
}
}

Fields:

| Field | Type | Description |
|-------|------|-------------|
| top_siblings | Vec<VariableHeightProofSibling> | Siblings at each level |
| sibling_branch | QHashOut<F> | Sibling branch hash |
| reward_leaf | QHashOut<F> | Reward leaf hash |
| proof_height | F | Height of the proof |
| index | F | Index in the tree |

BasicRealmStatusOnCoordinator

Basic realm status on the coordinator.

Structure:

#![allow(unused)]
fn main() {
pub struct BasicRealmStatusOnCoordinator<F: RichField> {
    pub realm_id: u64,
    pub checkpoint_id: u64,
    pub realm_root_hash: QHashOut<F>,
}
}

Fields:

| Field | Type | Description |
|-------|------|-------------|
| realm_id | u64 | Realm identifier |
| checkpoint_id | u64 | Current checkpoint ID |
| realm_root_hash | QHashOut<F> | Realm root hash |

Example:

{
  "realm_id": 1,
  "checkpoint_id": 100,
  "realm_root_hash": "0x1234..."
}

GlobalBlockUpdateFromCoordinator

Type alias for CheckpointSyncInfo<F>.

#![allow(unused)]
fn main() {
pub type GlobalBlockUpdateFromCoordinator<F> = CheckpointSyncInfo<F>;
}

See CheckpointSyncInfo for structure details.


RealmDataForCoordinator

Realm data submitted to coordinator.

Structure:

#![allow(unused)]
fn main() {
pub struct RealmDataForCoordinator<F: RichField> {
    pub header: RealmDataForCoordinatorHeader<F>,
    pub proof: Vec<u8>,
}

pub struct RealmDataForCoordinatorHeader<F: RichField> {
    pub realm_id: u64,
    pub checkpoint_id: u64,
    pub start_realm_root: QHashOut<F>,
    pub end_realm_root: QHashOut<F>,
    pub guta_stats: GUTAStats<F>,
    pub root_job_id: QProvingJobDataID,
}
}

Fields:

| Field | Type | Description |
|-------|------|-------------|
| header | RealmDataForCoordinatorHeader<F> | Header data |
| proof | Vec<u8> | Serialized proof |

Header Fields:

| Field | Type | Description |
|-------|------|-------------|
| realm_id | u64 | Realm identifier |
| checkpoint_id | u64 | Checkpoint ID |
| start_realm_root | QHashOut<F> | Starting realm root |
| end_realm_root | QHashOut<F> | Ending realm root |
| guta_stats | GUTAStats<F> | GUTA statistics |
| root_job_id | QProvingJobDataID | Root job ID |

Field Type Notes

Throughout this API, F represents a field element type (typically GoldilocksField).

Field Element Conversion:

  • Methods with _f suffix accept field elements as strings (e.g., "12345")
  • Methods without _f suffix accept native types (e.g., 12345)
  • Field elements are internally represented as u64 values in the Goldilocks field
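The Goldilocks modulus is 2^64 − 2^32 + 1, so not every u64 is a canonical field element. The helpers below sketch the conversion between native integers and the `_f` decimal-string encoding; the range check is the only logic, and it follows directly from the field's definition.

```python
# Goldilocks prime used by Plonky2: 2^64 - 2^32 + 1.
GOLDILOCKS_P = 2**64 - 2**32 + 1

def to_field_param(value: int) -> str:
    """Encode a native integer as a _f-method string parameter."""
    if not 0 <= value < GOLDILOCKS_P:
        raise ValueError("value is not a canonical Goldilocks element")
    return str(value)

def from_field_param(s: str) -> int:
    """Decode a _f-method string parameter back to a native integer."""
    value = int(s)
    if not 0 <= value < GOLDILOCKS_P:
        raise ValueError("string does not encode a canonical Goldilocks element")
    return value
```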

Best Practices:

  • Use u64 variants for better performance when possible
  • Use Field variants when working with circuit inputs/outputs
  • Always validate checkpoint_id exists before querying
  • Handle RPC errors gracefully (missing data, invalid parameters)

Error Handling

All RPC methods return RpcResult<T> which can contain errors in the following format:

{
  "jsonrpc": "2.0",
  "error": {
    "code": -32000,
    "message": "Error description"
  },
  "id": 1
}

Common Error Codes:

  • -32000: Server error (checkpoint not found, data unavailable)
  • -32602: Invalid parameters
  • -32603: Internal error
  • -32001: Not found error
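A client can dispatch on these codes when unwrapping responses. The response shape and codes below are those shown above; `describeError` and `unwrap` are illustrative helpers, not part of any SDK.

```typescript
// Sketch: mapping this API's JSON-RPC error codes to readable descriptions
// and unwrapping RpcResult-style responses.
type RpcError = { code: number; message: string };
type RpcResponse<T> = { jsonrpc: "2.0"; id: number; result?: T; error?: RpcError };

function describeError(code: number): string {
  switch (code) {
    case -32000: return "server error (checkpoint not found, data unavailable)";
    case -32001: return "not found";
    case -32602: return "invalid parameters";
    case -32603: return "internal error";
    default: return "unknown error";
  }
}

function unwrap<T>(resp: RpcResponse<T>): T {
  if (resp.error) {
    throw new Error(
      `RPC ${resp.error.code} (${describeError(resp.error.code)}): ${resp.error.message}`,
    );
  }
  return resp.result as T;
}
```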

Usage Examples

Complete User Registration Workflow

# 1. Register a new user
curl -X POST http://localhost:8545 \
  -H "Content-Type: application/json" \
  -d '{
    "jsonrpc": "2.0",
    "method": "psy_register_user",
    "params": [{
      "fingerprint": "0x...",
      "public_key_param": "0x..."
    }],
    "id": 1
  }'

# 2. Get user ID by public key
curl -X POST http://localhost:8545 \
  -H "Content-Type: application/json" \
  -d '{
    "jsonrpc": "2.0",
    "method": "psy_get_user_id",
    "params": ["0x..."],
    "id": 2
  }'

# 3. Get user leaf data
curl -X POST http://localhost:8545 \
  -H "Content-Type: application/json" \
  -d '{
    "jsonrpc": "2.0",
    "method": "psy_get_user_leaf_data",
    "params": [100, 12345],
    "id": 3
  }'

Contract Deployment Workflow

# 1. Deploy contract
curl -X POST http://localhost:8545 \
  -H "Content-Type: application/json" \
  -d '{
    "jsonrpc": "2.0",
    "method": "psy_deploy_contract",
    "params": [{
      "deployer": "0x...",
      "code_definition": {
        "state_tree_height": 10,
        "functions": [...]
      },
      "function_whitelist": [...]
    }],
    "id": 1
  }'

# 2. Get contract leaf data
curl -X POST http://localhost:8545 \
  -H "Content-Type: application/json" \
  -d '{
    "jsonrpc": "2.0",
    "method": "psy_get_contract_leaf_data",
    "params": [5],
    "id": 2
  }'

# 3. Get contract code definition
curl -X POST http://localhost:8545 \
  -H "Content-Type: application/json" \
  -d '{
    "jsonrpc": "2.0",
    "method": "psy_get_contract_code_definition",
    "params": [5],
    "id": 3
  }'

Checkpoint Query Workflow

# 1. Get latest checkpoint ID
curl -X POST http://localhost:8545 \
  -H "Content-Type: application/json" \
  -d '{
    "jsonrpc": "2.0",
    "method": "psy_get_latest_checkpoint_id",
    "params": [],
    "id": 1
  }'

# 2. Get checkpoint leaf data
curl -X POST http://localhost:8545 \
  -H "Content-Type: application/json" \
  -d '{
    "jsonrpc": "2.0",
    "method": "psy_get_checkpoint_leaf_data",
    "params": [100],
    "id": 2
  }'

# 3. Get checkpoint global state roots
curl -X POST http://localhost:8545 \
  -H "Content-Type: application/json" \
  -d '{
    "jsonrpc": "2.0",
    "method": "psy_get_checkpoint_global_state_roots",
    "params": [100],
    "id": 3
  }'

Method Summary

| # | Method Name | Parameters | Returns | Description |
|---|-------------|------------|---------|-------------|
| 1 | register_user | public_key | String | Register new user |
| 2 | get_user_id | public_key | u64 | Get user ID |
| 3 | deploy_contract | deploy_contract | String | Deploy contract |
| 4-5 | get_contract_leaf_data[_f] | contract_id | PsyContractLeaf | Get contract leaf |
| 6-7 | get_contract_code_definition[_f] | contract_id | ContractCodeDefinition | Get contract code |
| 8 | build_block | None | String | Trigger block building |
| 9-11 | submit_guta[_v1] | input, proof, realm_id | String\|void | Submit GUTA |
| 12-14 | [get_]latest_checkpoint[_id] | None | LatestCheckpointResponse\|u64 | Get latest checkpoint |
| 15-16 | get_checkpoint_leaf_data[_f] | checkpoint_id | PsyCheckpointLeaf | Get checkpoint leaf |
| 17 | get_checkpoint_global_state_roots | checkpoint_id | GlobalStateRoots | Get global state roots |
| 18-19 | get_checkpoint_sync_info[_compact] | realm_id?, checkpoint_id | CheckpointSyncInfo | Get sync info |
| 20-22 | get_[latest_]block_state[_f] | [checkpoint_id] | PsyBlockState | Get L2 block state |
| 23-28 | get_user_registration_tree_*[_f] | Various | QHashOut\|Proof | Registration tree ops |
| 29-37 | get_user_tree_*[_f] | Various | QHashOut\|Proof\|UserLeaf | User tree operations |
| 38-43 | get_contract_function_tree_*[_f] | Various | QHashOut\|Proof | Function tree ops |
| 44-49 | get_contract_tree_*[_f] | Various | QHashOut\|Proof | Contract tree ops |
| 50-55 | get_deposit_tree_*[_f] | Various | QHashOut\|Proof | Deposit tree ops |
| 56-61 | get_withdrawal_tree_*[_f] | Various | QHashOut\|Proof | Withdrawal tree ops |
| 62-68 | get_[latest_]checkpoint_tree_*[_f] | Various | QHashOut\|Proof | Checkpoint tree ops |
| 69 | generate_batch_variable_height_reward_proofs | checkpoint_id, job_ids | Vec<(Proof, JobID)> | Batch reward proofs |
| 70 | get_graphviz | checkpoint_id | String | Get graph visualization |
| 71 | get_current_realm_status_on_coordinator | realm_id | BasicRealmStatus | Get realm status |
| 72 | get_current_checkpoint_id | None | u64 | Get current checkpoint |
| 73 | get_latest_block_updates_from_coordinator | realm_id, from, to | Vec<BlockUpdate> | Get block updates |
| 74 | wait_until_coordinator_completed | realm_id, checkpoint_id | BlockUpdate | Wait for completion |

Document Version: 1.0
Last Updated: 2025-10-24
Total RPC Methods: 74

API Services RPC Documentation

This document provides comprehensive documentation for the Psy API Services, which offer HTTP REST endpoints for querying blockchain data, worker statistics, rewards, and telemetry.

Base URL: http://localhost:{port}

Default Port: Configurable via --port parameter


Table of Contents

  1. Authentication
  2. Health & User Management
  3. Event Management
  4. Statistics
  5. Job Status
  6. Legacy Rewards
  7. Leaderboard
  8. Checkpoint Operations
  9. Worker Rewards
  10. Admin Operations
  11. Contract Management
  12. WebSocket Endpoints
  13. Data Structures

Authentication

Some endpoints require JWT authentication with a secret token. Configure authentication using environment variables:

export JWT_SECRET="your-secret-key-here"
export JWT_EXPIRATION_HOURS="3"

Authentication Header:

Authorization: Bearer <jwt_token>
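For protected endpoints, requests must carry the bearer token. The header format is as shown above; `authHeaders` is an illustrative helper, and the token itself comes from whatever login flow issues JWTs in a given deployment.

```typescript
// Sketch: formatting request headers with the JWT bearer token.
function authHeaders(jwtToken: string): Record<string, string> {
  return {
    "Content-Type": "application/json",
    Authorization: `Bearer ${jwtToken}`,
  };
}

const headers = authHeaders("eyJhbGciOi...");
```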

Health & User Management

GET /health

Health check endpoint.

Request: No parameters

Response:

{
  "status": "ok"
}

Example:

curl http://localhost:3000/health

POST /register

Register a new user in the system.

Request Body:

{
  "public_key": "0x1234567890abcdef...",
  "twitter_handle": "@username",
  "label": "My Mining Rig",
  "signature": "0xabcdef..."
}

Parameters:

| Field | Type | Required | Description |
|-------|------|----------|-------------|
| public_key | string | Yes | User's public key |
| twitter_handle | string | Yes | Twitter handle for verification |
| label | string | Yes | User-friendly label |
| signature | string | Yes | Signature for verification |

Response:

{
  "success": true,
  "user_id": "12345"
}

Example:

curl -X POST http://localhost:3000/register \
  -H "Content-Type: application/json" \
  -d '{
    "public_key": "0x...",
    "twitter_handle": "@miner123",
    "label": "Mining Rig #1",
    "signature": "0x..."
  }'

GET /user_info

Get user information by public key.

Query Parameters:

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| public_key | string | Yes | User's public key |

Response:

{
  "id": 12345,
  "public_key": "0x...",
  "twitter_handle": "@username",
  "label": "My Mining Rig",
  "created_at": "2024-01-01T00:00:00Z"
}

Example:

curl "http://localhost:3000/user_info?public_key=0x..."

Event Management

GET /worker_events

Query worker events with filtering options.

Query Parameters:

| Parameter | Type | Description |
|-----------|------|-------------|
| realm_id | u64 | Filter by realm |
| status | WorkerEventStatus | Filter by event status |
| public_key | string | Filter by worker public key |
| topic | QJobTopic | Filter by job topic |
| circuit_type | ProvingJobCircuitType | Filter by circuit type |
| from_checkpoint_id | i64 | Start checkpoint (inclusive) |
| to_checkpoint_id | i64 | End checkpoint (inclusive) |
| offset | i64 | Pagination offset (default: 0) |
| limit | i64 | Pagination limit (default: 300, max: 1000) |
| order | string | Sort order: "asc" or "desc" (default: "desc") |
| category | JobFilterCategory | Job category filter |

Response:

[
  {
    "id": 1,
    "realm_id": 1,
    "worker_public_key": "0x...",
    "status": "Completed",
    "start_time": "2024-01-01T00:00:00Z",
    "end_time": "2024-01-01T00:01:00Z",
    "job_id": "...",
    "circuit_type": "UserRegistration"
  }
]

Example:

curl "http://localhost:3000/worker_events?realm_id=1&limit=100&order=desc"
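Since all filters are optional query parameters, a client can assemble the URL from an options object. The parameter names follow the table above; `WorkerEventFilter` is an illustrative client-side shape, not a published SDK type.

```typescript
// Sketch: building a /worker_events URL from an optional-filter object.
type WorkerEventFilter = {
  realm_id?: number;
  status?: string;
  public_key?: string;
  from_checkpoint_id?: number;
  to_checkpoint_id?: number;
  offset?: number;
  limit?: number;
  order?: "asc" | "desc";
};

function workerEventsUrl(base: string, filter: WorkerEventFilter): string {
  const params = new URLSearchParams();
  for (const [key, value] of Object.entries(filter)) {
    if (value !== undefined) params.set(key, String(value)); // skip unset filters
  }
  return `${base}/worker_events?${params.toString()}`;
}

const url = workerEventsUrl("http://localhost:3000", { realm_id: 1, limit: 100, order: "desc" });
```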

GET /user_events

Query user events with filtering options.

Query Parameters:

| Parameter | Type | Description |
|-----------|------|-------------|
| user_id | string | Filter by user ID |
| start_time | DateTime<Utc> | Start time filter |
| end_time | DateTime<Utc> | End time filter |
| tx_type | UserEventTxType | Transaction type filter |
| offset | i64 | Pagination offset (default: 0) |
| limit | i64 | Pagination limit (default: 300, max: 1000) |
| order | string | Sort order: "asc" or "desc" (default: "desc") |

Response:

[
  {
    "id": 1,
    "user_id": "12345",
    "public_key": "0x...",
    "tx_type": "RegisterUser",
    "checkpoint_id": 100,
    "timestamp": "2024-01-01T00:00:00Z"
  }
]

GET /worker_events_aggregations

Get aggregated worker events statistics.

Query Parameters:

| Parameter | Type | Description |
|-----------|------|-------------|
| start_time | DateTime<Utc> | Start time filter |
| end_time | DateTime<Utc> | End time filter |
| bucket | string | Time bucket: "2min", "1h", "1d", "1w", "1m", "all_time" |
| offset | i64 | Pagination offset |
| limit | i64 | Pagination limit |
| order | string | Sort order |

Response:

[
  {
    "time_bucket": "2024-01-01T00:00:00Z",
    "total_events": 1000,
    "completed_events": 950,
    "failed_events": 50,
    "unique_workers": 25
  }
]

Example:

curl "http://localhost:3000/worker_events_aggregations?bucket=1h&limit=24"

GET /user_events_aggregations

Get aggregated user events statistics.

Query Parameters: Same as worker events aggregations

Response:

[
  {
    "time_bucket": "2024-01-01T00:00:00Z",
    "total_events": 500,
    "register_user_count": 10,
    "deploy_contract_count": 5,
    "user_tx_count": 485
  }
]

Statistics

GET /stats

Get general system statistics for the last 24 hours.

Request: No parameters

Response:

{
  "status": "ok",
  "worker_events_24h": 1000,
  "user_events_24h": 500,
  "block_height": 150,
  "timestamp": "2024-01-01T12:00:00Z"
}

Example:

curl http://localhost:3000/stats

GET /stats/realms

Get global realm statistics.

Response:

{
  "total_realms": 5,
  "active_realms": 4,
  "total_jobs_24h": 10000,
  "average_completion_time": 30.5
}

GET /stats/realms/{realm_id}

Get statistics for a specific realm.

Path Parameters:

| Parameter | Type | Description |
|-----------|------|-------------|
| realm_id | i64 | Realm identifier |

Response:

{
  "realm_id": 1,
  "jobs_24h": 2000,
  "active_workers": 10,
  "average_completion_time": 28.3,
  "success_rate": 0.95
}

Example:

curl http://localhost:3000/stats/realms/1

GET /stats/workers/{worker_public_key}

Get statistics for a specific worker.

Path Parameters:

| Parameter | Type | Description |
|-----------|------|-------------|
| worker_public_key | string | Worker's public key |

Response:

{
  "worker_public_key": "0x...",
  "jobs_completed_24h": 100,
  "jobs_failed_24h": 5,
  "average_completion_time": 25.7,
  "success_rate": 0.95,
  "total_rewards": 1000
}

Example:

curl http://localhost:3000/stats/workers/0x...

Job Status

GET /stats/jobs

Get job status summary across all realms.

Query Parameters:

| Parameter | Type | Description |
|-----------|------|-------------|
| hours | u32 | Time window in hours |
| realm_id | i64 | Filter by realm |

Response:

{
  "summary": [
    {
      "status": "Completed",
      "circuit_type": "UserRegistration",
      "job_count": 1000,
      "average_duration": 25.5
    }
  ],
  "total_jobs": 5000,
  "query_time": "2024-01-01T12:00:00Z",
  "materialized_view_healthy": true
}

Example:

curl "http://localhost:3000/stats/jobs?hours=24"

GET /stats/jobs/realm/{realm_id}

Get job status for a specific realm.

Response: Array of JobStatusSummary objects


GET /stats/jobs/all-realms

Get job status grouped by all realms.

Response:

[
  {
    "realm_id": 1,
    "total_jobs": 2000,
    "completed_jobs": 1900,
    "failed_jobs": 100,
    "success_rate": 0.95
  }
]

GET /stats/jobs/counts

Get simple job counts by status.

Response:

{
  "Completed": 4500,
  "Failed": 300,
  "InProgress": 200,
  "Pending": 1000
}

Legacy Rewards

GET /rewards/{worker_public_key}

Get worker rewards for a specific checkpoint.

Path Parameters:

| Parameter | Type | Description |
|-----------|------|-------------|
| worker_public_key | string | Worker's public key |

Query Parameters:

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| checkpoint_id | i64 | Yes | Checkpoint identifier |

Response:

{
  "worker_public_key": "0x...",
  "checkpoint_id": 100,
  "total_rewards": 500,
  "job_count": 25,
  "reward_breakdown": [
    {
      "circuit_type": "UserRegistration",
      "job_count": 10,
      "rewards": 200
    }
  ]
}

Example:

curl "http://localhost:3000/rewards/0x...?checkpoint_id=100"

GET /rewards_aggregations/{worker_public_key}

Get worker rewards aggregations over time.

Path Parameters:

| Parameter | Type | Description |
|-----------|------|-------------|
| worker_public_key | string | Worker's public key |

Query Parameters:

| Parameter | Type | Description |
|-----------|------|-------------|
| bucket | string | Time bucket: "1d", "1w", "1m", "all_time" |
| start_time | DateTime<Utc> | Start time filter |
| end_time | DateTime<Utc> | End time filter |
| limit | i64 | Pagination limit (default: 100) |

Response:

[
  {
    "time_bucket": "2024-01-01T00:00:00Z",
    "total_rewards": 1000,
    "job_count": 50,
    "average_reward_per_job": 20.0
  }
]

Example:

curl "http://localhost:3000/rewards_aggregations/0x...?bucket=1d&limit=30"

Leaderboard

GET /leaderboard/workers

Get worker leaderboard based on 24-hour performance.

Query Parameters:

| Parameter | Type | Description |
|-----------|------|-------------|
| limit | i64 | Number of entries (default: 100, max: 100) |

Response:

[
  {
    "rank": 1,
    "worker_public_key": "0x...",
    "total_rewards_24h": 2000,
    "jobs_completed_24h": 100,
    "success_rate": 0.98
  }
]

Example:

curl "http://localhost:3000/leaderboard/workers?limit=50"

Checkpoint Operations

GET /checkpoint/stats

Get checkpoint statistics by range.

Query Parameters:

| Parameter | Type | Description |
|-----------|------|-------------|
| start_checkpoint | i64 | Start checkpoint (default: 0) |
| end_checkpoint | i64 | End checkpoint (default: max) |

Response:

[
  {
    "checkpoint_id": 100,
    "total_jobs": 5000,
    "completed_jobs": 4800,
    "total_rewards_distributed": 50000,
    "unique_workers": 25,
    "processing_time": 120.5
  }
]

Example:

curl "http://localhost:3000/checkpoint/stats?start_checkpoint=90&end_checkpoint=100"

GET /checkpoint/stats/{checkpoint_id}

Get statistics for a specific checkpoint.

Path Parameters:

| Parameter | Type | Description |
|-----------|------|-------------|
| checkpoint_id | i64 | Checkpoint identifier |

Response: Single CheckpointStats object

Example:

curl http://localhost:3000/checkpoint/stats/100

GET /checkpoint/job-events/{checkpoint_id}

Get job events for a specific checkpoint.

Response:

[
  {
    "id": 1,
    "checkpoint_id": 100,
    "job_id": "...",
    "worker_public_key": "0x...",
    "circuit_type": "UserRegistration",
    "status": "Completed",
    "duration_ms": 30000,
    "start_time": "2024-01-01T00:00:00Z",
    "end_time": "2024-01-01T00:00:30Z"
  }
]

Example:

curl http://localhost:3000/checkpoint/job-events/100

GET /checkpoint/summary/{checkpoint_id}

Get checkpoint reward summary.

Response:

{
  "checkpoint_id": 100,
  "total_rewards": 50000,
  "total_jobs": 5000,
  "unique_workers": 25,
  "reward_distribution": {
    "UserRegistration": 20000,
    "ContractExecution": 30000
  },
  "processing_status": "Completed"
}

Example:

curl http://localhost:3000/checkpoint/summary/100

GET /checkpoint/distributions/{checkpoint_id}

Get reward distributions for a checkpoint.

Response:

[
  {
    "id": 1,
    "checkpoint_id": 100,
    "worker_public_key": "0x...",
    "circuit_type": "UserRegistration",
    "job_count": 10,
    "total_rewards": 500,
    "created_at": "2024-01-01T12:00:00Z"
  }
]

Example:

curl http://localhost:3000/checkpoint/distributions/100

Worker Rewards

GET /checkpoint/rewards/{worker_public_key}

Get worker's reward aggregations with flexible time periods.

Path Parameters:

| Parameter | Type | Description |
|-----------|------|-------------|
| worker_public_key | string | Worker's public key |

Query Parameters:

| Parameter | Type | Description |
|-----------|------|-------------|
| time_period | string | Time period: "2m", "1h", "1d", "1w", "1m" (default: "1d") |
| start_time | DateTime<Utc> | Start time filter |
| end_time | DateTime<Utc> | End time filter |
| limit | i64 | Pagination limit (default: 100) |

Response:

{
  "worker_public_key": "0x...",
  "time_period": "1d",
  "aggregations": [
    {
      "time_bucket": "2024-01-01T00:00:00Z",
      "total_rewards": 1000,
      "jobs_completed": 50,
      "checkpoints_participated": 5
    }
  ],
  "total_rewards": 10000,
  "total_jobs": 500,
  "total_checkpoints": 50
}

Example:

curl "http://localhost:3000/checkpoint/rewards/0x...?time_period=1d&limit=30"
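A client can cross-check the reported totals against the per-bucket aggregations in the response. The field names mirror the response shape above; `sumRewards` is an illustrative helper.

```typescript
// Sketch: summing per-bucket aggregations to verify the response totals.
type WorkerRewardAggregation = {
  time_bucket: string;
  total_rewards: number;
  jobs_completed: number;
  checkpoints_participated: number;
};

function sumRewards(aggs: WorkerRewardAggregation[]): { rewards: number; jobs: number } {
  return aggs.reduce(
    (acc, a) => ({ rewards: acc.rewards + a.total_rewards, jobs: acc.jobs + a.jobs_completed }),
    { rewards: 0, jobs: 0 },
  );
}
```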

GET /checkpoint/rewards/{worker_public_key}/stats

Get worker's overall reward statistics.

Response:

{
  "worker_public_key": "0x...",
  "total_rewards_all_time": 100000,
  "total_jobs_completed": 5000,
  "total_checkpoints_participated": 500,
  "average_reward_per_job": 20.0,
  "first_job_date": "2024-01-01T00:00:00Z",
  "last_job_date": "2024-12-31T23:59:59Z"
}

Example:

curl http://localhost:3000/checkpoint/rewards/0x.../stats

Admin Operations

POST /checkpoint/calculate-rewards/{checkpoint_id}

Manually trigger reward calculation for a checkpoint.

Path Parameters:

| Parameter | Type | Description |
|-----------|------|-------------|
| checkpoint_id | i64 | Checkpoint identifier |

Response: Array of CheckpointRewardDistribution objects

Example:

curl -X POST http://localhost:3000/checkpoint/calculate-rewards/100

GET /admin/checkpoint-processing-status

Get the status of checkpoint reward processing.

Response:

{
  "pending_count": 5,
  "pending_checkpoints": [101, 102, 103, 104, 105],
  "last_processed_checkpoint": 100,
  "status": "5 checkpoints pending"
}

Example:

curl http://localhost:3000/admin/checkpoint-processing-status

Contract Management

GET /contracts

Get list of deployed contracts with metadata.

Query Parameters:

| Parameter | Type | Description |
|-----------|------|-------------|
| deployer | string | Filter by deployer public key |
| limit | i64 | Pagination limit |
| offset | i64 | Pagination offset |

Response:

[
  {
    "uuid": "contract-uuid-123",
    "deployer": "0x...",
    "state_tree_height": 10,
    "function_count": 5,
    "functions": [
      {
        "name": "transfer",
        "method_id": 12345678,
        "description": "Transfer tokens"
      }
    ],
    "deployed_at": "2024-01-01T00:00:00Z"
  }
]

Example:

curl "http://localhost:3000/contracts?limit=100"

GET /contracts/{contract_uuid}

Get detailed information about a specific contract.

Path Parameters:

| Parameter | Type | Description |
|-----------|------|-------------|
| contract_uuid | string | Contract UUID |

Response: Single contract object with detailed metadata

Example:

curl http://localhost:3000/contracts/contract-uuid-123

WebSocket Endpoints

WS /ws/tps

Real-time TPS (Transactions Per Second) data stream.

Connection: Upgrade HTTP to WebSocket

Message Format:

{
  "timestamp": "2024-01-01T12:00:00Z",
  "tps": 150.5,
  "block_height": 1000,
  "total_transactions_24h": 500000
}

Example:

const ws = new WebSocket('ws://localhost:3000/ws/tps');
ws.onmessage = (event) => {
  const data = JSON.parse(event.data);
  console.log('Current TPS:', data.tps);
};

Data Structures

WorkerEventStatus

#![allow(unused)]
fn main() {
enum WorkerEventStatus {
    Pending,
    InProgress,
    Completed,
    Failed,
    Cancelled,
}
}

UserEventTxType

#![allow(unused)]
fn main() {
enum UserEventTxType {
    RegisterUser,
    DeployContract,
    UserTransaction,
}
}

JobFilterCategory

#![allow(unused)]
fn main() {
enum JobFilterCategory {
    All,
    UserRegistration,
    ContractDeployment,
    ContractExecution,
    Maintenance,
}
}

CheckpointStats

#![allow(unused)]
fn main() {
struct CheckpointStats {
    pub checkpoint_id: i64,
    pub total_jobs: i64,
    pub completed_jobs: i64,
    pub failed_jobs: i64,
    pub total_rewards_distributed: i64,
    pub unique_workers: i64,
    pub processing_time_seconds: Option<f64>,
    pub created_at: DateTime<Utc>,
}
}

WorkerRewardResponse

#![allow(unused)]
fn main() {
struct WorkerRewardResponse {
    pub worker_public_key: String,
    pub time_period: String,
    pub aggregations: Vec<WorkerRewardAggregation>,
    pub total_rewards: i64,
    pub total_jobs: i64,
    pub total_checkpoints: i64,
}
}

CheckpointProcessingStatus

#![allow(unused)]
fn main() {
struct CheckpointProcessingStatus {
    pub pending_count: usize,
    pub pending_checkpoints: Vec<i64>,
    pub last_processed_checkpoint: Option<i64>,
    pub status: String,
}
}

Error Handling

All endpoints return standard HTTP status codes:

  • 200: Success
  • 400: Bad Request (invalid parameters)
  • 401: Unauthorized (missing/invalid JWT)
  • 404: Not Found
  • 500: Internal Server Error

Error Response Format:

{
  "error": "Error message description",
  "details": "Optional additional details"
}

Rate Limiting

Rate limiting may be applied to prevent abuse. Check response headers:

X-RateLimit-Limit: 1000
X-RateLimit-Remaining: 999
X-RateLimit-Reset: 1609459200
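A client can use these headers to decide when to back off. Assuming `X-RateLimit-Reset` is a Unix timestamp in seconds (as the example value suggests), a hypothetical helper might look like:

```typescript
// Sketch: computing how long to wait once the rate-limit budget is spent.
function secondsUntilReset(headers: Record<string, string>, nowMs: number): number {
  const remaining = Number(headers["X-RateLimit-Remaining"] ?? "1");
  if (remaining > 0) return 0; // budget left, no need to wait
  const resetSec = Number(headers["X-RateLimit-Reset"] ?? "0");
  return Math.max(0, resetSec - Math.floor(nowMs / 1000));
}
```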

Background Services

The API Services automatically run several background tasks:

  1. Job Status Refresh: Updates job status every 10 seconds
  2. Checkpoint Reward Processing: Processes rewards every 30 seconds
  3. Worker Event Processing: Converts worker events to job events
  4. TPS Broadcasting: Broadcasts TPS data every 12 seconds via WebSocket

Document Version: 1.0
Last Updated: 2024-12-16
Total Endpoints: 35+

Prover Proxy RPC Documentation

This document provides comprehensive documentation for the Psy Prover Proxy RPC methods, which handle local zero-knowledge proof generation for various circuit types.

RPC Namespace: psy

Default Listen Address: 0.0.0.0:9999


Table of Contents

  1. Overview
  2. UPS (Unified Proving System) Methods
  3. Contract Management
  4. Signature Proving
  5. Software-Defined Signatures
  6. Proof Tree Aggregation
  7. Circuit Management
  8. Data Structures
  9. Configuration
  10. Error Handling

Overview

The Prover Proxy is a local proving service that generates zero-knowledge proofs for various circuit types in the Psy ecosystem. It acts as a computational backend for transaction signing, contract execution, and proof aggregation.

Key Features

  • Local Proof Generation: Generate ZK proofs without sending sensitive data to remote servers
  • Multiple Circuit Types: Support for UPS, contract calls, signatures, and aggregation circuits
  • Software-Defined Signatures: Custom circuit-based authentication schemes
  • Contract Circuit Management: Dynamic registration and execution of contract circuits
  • Proof Tree Aggregation: Hierarchical proof composition and verification
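All of the methods below share the same JSON-RPC envelope. A minimal request builder, using the `psy_prove_zk_sign` parameters documented later in this section (the `proverRequest` helper itself is illustrative, not part of any SDK), could look like:

```typescript
// Sketch: constructing JSON-RPC requests for prover proxy psy_* methods.
let nextId = 0;

function proverRequest(method: string, params: Record<string, unknown>) {
  nextId += 1;
  return { jsonrpc: "2.0" as const, method: `psy_${method}`, params, id: nextId };
}

// Request a ZK signature proof; values here are placeholders.
const req = proverRequest("prove_zk_sign", {
  private_key: "0x...",
  sig_hash: "0x...",
});
```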

UPS (Unified Proving System) Methods

psy_prove_ups_start

Generate proof for UPS start step.

Parameters:

{
  "input": "UPSStartStepInput<F>"
}

Response:

{
  "result": "ProofWithPublicInputs<F, C, D>"
}

Description: Proves the initial step in the UPS proving pipeline.


psy_prove_ups_start_register_user

Generate proof for user registration start step.

Parameters:

{
  "input": "UPSStartStepRegisterUserInput<F>"
}

Response:

{
  "result": "ProofWithPublicInputs<F, C, D>"
}

Description: Proves user registration within the UPS framework.


psy_prove_ups_cfc_standard_tx

Generate proof for standard CFC (Contract Function Call) transaction.

Parameters:

{
  "input": "UPSCFCStandardTransactionCircuitInput<F>"
}

Response:

{
  "result": "ProofWithPublicInputs<F, C, D>"
}

Description: Proves standard contract function call transactions.


psy_prove_ups_cfc_deferred_tx

Generate proof for deferred CFC transaction.

Parameters:

{
  "input": "UPSCFCDeferredTransactionCircuitInput<F>"
}

Response:

{
  "result": "ProofWithPublicInputs<F, C, D>"
}

Description: Proves deferred contract function call transactions.


psy_prove_ups_end_cap

Generate proof for UPS end cap step.

Parameters:

{
  "end_cap_from_proof_tree_input": "UPSEndCapFromProofTreeGadgetInput<F>",
  "circuit_type": "QStandardBinaryTreeCircuitType",
  "fingerprint": "QHashOut<F>",
  "agg_header": "QRecursionAggStandardHeader<F>",
  "proof": "ProofWithPublicInputs<F, C, D>"
}

Response:

{
  "result": "ProofWithPublicInputs<F, C, D>"
}

Description: Generates the final proof that caps off a proof tree aggregation.


Contract Management

psy_get_circuits_data

Get information about available circuits.

Parameters: None

Response:

{
  "result": "string"
}

Description: Returns serialized JSON containing circuit fingerprints and verifier configurations.


psy_register_contract_circuits

Register circuits for a contract.

Parameters:

{
  "contract_id": 123,
  "contract_code": "ContractCodeDefinition"
}

Response:

{
  "result": null
}

Description: Registers and compiles circuits for all functions in a contract.


psy_get_fn_id

Get function ID for a contract method.

Parameters:

{
  "contract_id": 123,
  "method_name": "transfer"
}

Response:

{
  "result": 0
}

Description: Returns the internal function ID for a named contract method.


psy_get_fn_id_and_circuit_def

Get function ID and circuit definition for a contract method.

Parameters:

{
  "contract_id": 123,
  "method_name": "transfer"
}

Response:

{
  "result": [0, "DPNFunctionCircuitDefinition"]
}

Description: Returns both the function ID and the circuit definition for a method.


psy_get_contract_method_common_data

Get common circuit data for a contract method.

Parameters:

{
  "contract_id": 123,
  "fn_id": 0
}

Response:

{
  "result": {
    "fingerprint": "QHashOut<F>",
    "verifier_config": "VerifierOnlyCircuitData"
  }
}

Description: Returns the circuit fingerprint and verifier configuration for a specific contract function.


psy_resolve_contract_function_by_method_name

Resolve contract function by method name.

Parameters:

{
  "contract_id": 123,
  "contract_code": "ContractCodeDefinition", 
  "method_name": "transfer"
}

Response:

{
  "result": [0, "DPNFunctionCircuitDefinition"]
}

Description: Registers contract circuits and resolves function by name.


psy_resolve_contract_function_by_method_id

Resolve contract function by method ID.

Parameters:

{
  "contract_id": 123,
  "contract_code": "ContractCodeDefinition",
  "method_name": 12345678
}

Response:

{
  "result": [0, "DPNFunctionCircuitDefinition"]
}

Description: Registers contract circuits and resolves function by method ID.


psy_prove_contract_call

Generate proof for contract function call.

Parameters:

{
  "contract_id": 123,
  "fn_id": 0,
  "input": "DapenContractFunctionCircuitInput<F>"
}

Response:

{
  "result": "ProofWithPublicInputs<F, C, D>"
}

Description: Generates a ZK proof for executing a specific contract function.


Signature Proving

psy_prove_zk_sign

Generate ZK signature proof.

Parameters:

{
  "private_key": "QHashOut<F>",
  "sig_hash": "QHashOut<F>"
}

Response:

{
  "result": "ProofWithPublicInputs<F, C, D>"
}

Description: Generates a zero-knowledge signature proof using the built-in ZK signature scheme.


psy_prove_zk_sign_inner

Generate inner ZK signature proof.

Parameters:

{
  "private_key": "QHashOut<F>",
  "sig_hash": "QHashOut<F>"
}

Response:

{
  "result": "ProofWithPublicInputs<F, C, D>"
}

Description: Generates the inner component of a ZK signature proof for later minification.


psy_prove_zk_sign_minifier

Minify an inner ZK signature proof.

Parameters:

{
  "inner_proof": "string"
}

Response:

{
  "result": "ProofWithPublicInputs<F, C, D>"
}

Description: Takes a serialized inner proof and produces a minified version for efficiency.


psy_prove_secp_sign

Generate SECP256K1 signature proof.

Parameters:

{
  "signature": "PsyCompressedSecp256K1Signature"
}

Response:

{
  "result": "ProofWithPublicInputs<F, C, D>"
}

Description: Generates a zero-knowledge proof of a valid SECP256K1 signature.


Software-Defined Signatures

psy_register_dpn_software_defined_circuit

Register a DPN software-defined signature circuit.

Parameters:

{
  "request": "QRegisterDPNSoftwareDefinedCircuitRPCRequest"
}

Response:

{
  "result": "QHashOut<F>"
}

Description: Registers a software-defined signature circuit written in Psy language.

Status: Implementation in progress (todo!)


psy_register_plonky2_software_defined_circuit

Register a Plonky2 software-defined signature circuit.

Parameters:

{
  "request": "QRegisterPlonky2SoftwareDefinedCircuitRPCRequest"
}

Response:

{
  "result": "QHashOut<F>"
}

Description: Registers a software-defined signature circuit written directly in Plonky2.

Status: Implementation in progress (todo!)


psy_prove_dpn_software_defined_sign

Generate proof for DPN software-defined signature.

Parameters:

{
  "fingerprint": "QHashOut<F>",
  "private_key": "QHashOut<F>",
  "input": "DPNSoftwareDefinedSignatureInput",
  "sig_hash": "QHashOut<F>"
}

Response:

{
  "result": "ProofWithPublicInputs<F, C, D>"
}

Description: Generates a proof for a custom signature circuit written in Psy language.


psy_prove_plonky2_software_defined_sign

Generate proof for Plonky2 software-defined signature.

Parameters:

{
  "fingerprint": "QHashOut<F>",
  "private_key": "QHashOut<F>",
  "input": "Plonky2SoftwareDefinedSignatureInput",
  "sig_hash": "QHashOut<F>"
}

Response:

{
  "result": "ProofWithPublicInputs<F, C, D>"
}

Description: Generates a proof for a custom signature circuit written in Plonky2.


Proof Tree Aggregation

The prover proxy supports hierarchical proof aggregation through various circuit types:

psy_prove_single_leaf_circuit

Aggregate a single leaf proof.

Parameters:

{
  "agg_circuit_whitelist_root": "QHashOut<F>",
  "single_insert_leaf_proof": "DeltaMerkleProofCore<QHashOut<F>>",
  "single_proof": "ProofWithPublicInputs<F, C, D>",
  "single_verifier_data": "AltVerifierOnlyCircuitData<F>"
}

Response:

{
  "result": "ProofWithPublicInputs<F, C, D>"
}

Description: Creates an aggregation proof for a single leaf node in the proof tree.


psy_prove_two_leaf_circuit

Aggregate two leaf proofs.

Parameters:

{
  "agg_circuit_whitelist_root": "QHashOut<F>",
  "left_insert_leaf_proof": "DeltaMerkleProofCore<QHashOut<F>>",
  "left_proof": "ProofWithPublicInputs<F, C, D>",
  "left_verifier_data": "AltVerifierOnlyCircuitData<F>",
  "right_insert_leaf_proof": "DeltaMerkleProofCore<QHashOut<F>>",
  "right_proof": "ProofWithPublicInputs<F, C, D>",
  "right_verifier_data": "AltVerifierOnlyCircuitData<F>"
}

Response:

{
  "result": "ProofWithPublicInputs<F, C, D>"
}

Description: Creates an aggregation proof combining two leaf proofs.


psy_prove_two_agg_circuit

Aggregate two aggregation proofs.

Parameters:

{
  "left_agg_whitelist_merkle_proof": "MerkleProofCore<QHashOut<F>>",
  "left_agg_proof_header": "QRecursionAggStandardHeader<F>",
  "left_proof": "ProofWithPublicInputs<F, C, D>",
  "left_verifier_data": "AltVerifierOnlyCircuitData<F>",
  "right_agg_whitelist_merkle_proof": "MerkleProofCore<QHashOut<F>>",
  "right_agg_proof_header": "QRecursionAggStandardHeader<F>",
  "right_proof": "ProofWithPublicInputs<F, C, D>",
  "right_verifier_data": "AltVerifierOnlyCircuitData<F>"
}

Response:

{
  "result": "ProofWithPublicInputs<F, C, D>"
}

Description: Creates an aggregation proof combining two aggregation proofs.


psy_prove_left_leaf_right_agg_circuit

Aggregate a leaf proof with an aggregation proof (leaf on left).

Parameters:

{
  "left_insert_leaf_proof": "DeltaMerkleProofCore<QHashOut<F>>",
  "left_proof": "ProofWithPublicInputs<F, C, D>",
  "left_verifier_data": "AltVerifierOnlyCircuitData<F>",
  "right_agg_whitelist_merkle_proof": "MerkleProofCore<QHashOut<F>>",
  "right_agg_proof_header": "QRecursionAggStandardHeader<F>",
  "right_proof": "ProofWithPublicInputs<F, C, D>",
  "right_verifier_data": "AltVerifierOnlyCircuitData<F>"
}

Response:

{
  "result": "ProofWithPublicInputs<F, C, D>"
}

Description: Creates a mixed aggregation proof with a leaf on the left and aggregation on the right.


psy_prove_left_agg_right_leaf_circuit

Aggregate an aggregation proof with a leaf proof (aggregation on left).

Parameters:

{
  "left_agg_whitelist_merkle_proof": "MerkleProofCore<QHashOut<F>>",
  "left_agg_proof_header": "QRecursionAggStandardHeader<F>",
  "left_proof": "ProofWithPublicInputs<F, C, D>",
  "left_verifier_data": "AltVerifierOnlyCircuitData<F>",
  "right_insert_leaf_proof": "DeltaMerkleProofCore<QHashOut<F>>",
  "right_proof": "ProofWithPublicInputs<F, C, D>",
  "right_verifier_data": "AltVerifierOnlyCircuitData<F>"
}

Response:

{
  "result": "ProofWithPublicInputs<F, C, D>"
}

Description: Creates a mixed aggregation proof with aggregation on the left and a leaf on the right.
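The five aggregation methods above are distinguished only by the kinds of the two children being combined, so a client can select the RPC method mechanically. The method names are those documented above; the `NodeKind` type is an illustrative client-side shape.

```typescript
// Sketch: choosing the aggregation RPC method from the child node kinds.
type NodeKind = "leaf" | "agg";

function aggMethod(left: NodeKind, right: NodeKind | null): string {
  if (left === "leaf" && right === null) return "psy_prove_single_leaf_circuit";
  if (left === "leaf" && right === "leaf") return "psy_prove_two_leaf_circuit";
  if (left === "agg" && right === "agg") return "psy_prove_two_agg_circuit";
  if (left === "leaf" && right === "agg") return "psy_prove_left_leaf_right_agg_circuit";
  if (left === "agg" && right === "leaf") return "psy_prove_left_agg_right_leaf_circuit";
  throw new Error("unsupported combination"); // e.g. a lone aggregation node
}
```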


Circuit Management

Circuit Information

The prover proxy manages various circuit types with their fingerprints and verifier data:

  • UPS Circuits: ups_start, ups_start_register_user, ups_cfc_standard_tx, ups_cfc_deferred_tx, ups_end_cap
  • Aggregation Circuits: single_leaf_circuit, two_leaf_circuit, two_agg_circuit, left_leaf_right_agg_circuit, left_agg_right_leaf_circuit
  • Signature Circuits: zk_circuit, secp_circuit
  • Software-Defined Circuits: Dynamically registered custom circuits

Circuit Registration

Circuits are automatically registered during prover initialization:

  1. System Circuits: Core UPS and aggregation circuits are pre-registered
  2. Contract Circuits: Registered on-demand when contracts are deployed
  3. Software-Defined Circuits: Registered through dedicated RPC methods
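The three registration paths can be pictured as one registry keyed by circuit fingerprint, with each entry tagged by where it came from. This is a minimal sketch under that assumption; `CircuitOrigin` and `CircuitRegistry` are hypothetical names, and the actual circuit manager's types are not shown in this document.

```rust
use std::collections::HashMap;

// Illustrative registry for the three registration paths listed above.
#[derive(Debug, PartialEq)]
enum CircuitOrigin {
    System,          // pre-registered at prover initialization
    Contract(u64),   // registered on-demand when a contract is deployed
    SoftwareDefined, // registered via dedicated RPC methods
}

#[derive(Default)]
struct CircuitRegistry {
    by_fingerprint: HashMap<String, CircuitOrigin>,
}

impl CircuitRegistry {
    fn register(&mut self, fingerprint: &str, origin: CircuitOrigin) {
        self.by_fingerprint.insert(fingerprint.to_string(), origin);
    }
    fn lookup(&self, fingerprint: &str) -> Option<&CircuitOrigin> {
        self.by_fingerprint.get(fingerprint)
    }
}

fn main() {
    let mut reg = CircuitRegistry::default();
    reg.register("ups_start_fp", CircuitOrigin::System);
    reg.register("transfer_fp", CircuitOrigin::Contract(123));
    assert_eq!(reg.lookup("transfer_fp"), Some(&CircuitOrigin::Contract(123)));
}
```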

Data Structures

Field Types

type F = <PoseidonGoldilocksConfig as GenericConfig<2>>::F;
type C = PoseidonGoldilocksConfig;
const D: usize = 2;

Common Input Types

  • UPSStartStepInput: Input for UPS start step proving
  • UPSStartStepRegisterUserInput: Input for user registration proving
  • UPSCFCStandardTransactionCircuitInput: Input for standard contract calls
  • UPSCFCDeferredTransactionCircuitInput: Input for deferred contract calls
  • DapenContractFunctionCircuitInput: Input for contract function execution

Proof Types

  • ProofWithPublicInputs<F, C, D>: Complete ZK proof with public inputs
  • QHashOut<F>: Hash output in the field F
  • MerkleProofCore<QHashOut<F>>: Merkle proof for hash verification
  • DeltaMerkleProofCore<QHashOut<F>>: Delta Merkle proof for tree updates

Circuit Data

  • QCommonCircuitData: Circuit fingerprint and verifier configuration
  • AltVerifierOnlyCircuitData: Alternative verifier data format
  • ContractCodeDefinition: Complete contract code with all functions
  • DPNFunctionCircuitDefinition: Function-specific circuit definition


Configuration

Server Configuration

The prover proxy is configured via ProveProxyArgs:

pub struct ProveProxyArgs {
    pub listen_addr: String,        // Default: "0.0.0.0:9999"
    pub rpc_config: String,         // Path to network config file
}

Network Configuration

# Start prover proxy
psy_user_cli prove-proxy \
  --listen-addr "127.0.0.1:9999" \
  --rpc-config "config.json"

Circuit Initialization

The prover proxy initializes with:

  • Network magic number for proof validation
  • RPC provider for fetching contract code
  • Circuit manager for all proof types
  • Session circuit info store for fingerprint tracking

Error Handling

All RPC methods return Result<T, ErrorObjectOwned> with standardized error formats:

Common Error Types

{
  "code": 1,
  "message": "Task schedule failed",
  "data": "Thread pool task execution failed: ..."
}

{
  "code": 1,
  "message": "ZK proof generation failed",
  "data": "Circuit constraint violation: ..."
}

{
  "code": 1,
  "message": "Contract not found",
  "data": "contract 123 method transfer not registered"
}

Error Categories

  1. Task Scheduling Errors: Thread pool issues during proof generation
  2. Proving Errors: ZK proof generation failures
  3. Registration Errors: Circuit registration and management issues
  4. Contract Errors: Contract code resolution and function lookup failures
  5. Deserialization Errors: Invalid input data format
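Since the examples above all share `"code": 1`, a client must distinguish failures by the `message` field. The sketch below shows that classification; `ErrorCategory` and `classify` are illustrative names, and the message strings for registration and deserialization errors are assumptions since they are not shown in this document.

```rust
// Hedged sketch: mapping an error `message` to one of the categories
// listed above, given that all errors share `code: 1`.
#[derive(Debug, PartialEq)]
enum ErrorCategory {
    TaskScheduling,
    Proving,
    Registration,
    Contract,
    Deserialization,
    Unknown,
}

fn classify(message: &str) -> ErrorCategory {
    match message {
        "Task schedule failed" => ErrorCategory::TaskScheduling,
        "ZK proof generation failed" => ErrorCategory::Proving,
        "Contract not found" => ErrorCategory::Contract,
        // Assumed message shapes; not confirmed by this document.
        m if m.contains("registration") => ErrorCategory::Registration,
        m if m.contains("deserialize") => ErrorCategory::Deserialization,
        _ => ErrorCategory::Unknown,
    }
}

fn main() {
    assert_eq!(classify("Contract not found"), ErrorCategory::Contract);
}
```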

Performance Considerations

Asynchronous Proving

All proving operations use tokio::task::spawn_blocking to:

  • Avoid blocking the async runtime during CPU-intensive proving
  • Enable concurrent proof generation
  • Maintain server responsiveness
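The pattern is: hand the CPU-bound proof off to a dedicated worker, keep serving requests, and collect the result later. As a dependency-free stand-in for `tokio::task::spawn_blocking`, the sketch below uses `std::thread::spawn` with a channel; `prove_in_background` and the multiply-by-31 "proof" are purely illustrative.

```rust
use std::sync::mpsc;
use std::thread;

// Stand-in for the spawn_blocking pattern: offload heavy work to a
// separate thread so the serving loop is never blocked by proving.
fn prove_in_background(input: u64) -> mpsc::Receiver<u64> {
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        // Placeholder for an expensive proof computation.
        let proof = input.wrapping_mul(31);
        let _ = tx.send(proof);
    });
    rx
}

fn main() {
    let rx = prove_in_background(7);
    // The caller stays free to serve other requests here,
    // then collects the finished "proof".
    assert_eq!(rx.recv().unwrap(), 217);
}
```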

Circuit Caching

  • Circuits are compiled once and reused for multiple proofs
  • Contract circuits are cached by contract ID
  • Software-defined circuits are cached by fingerprint

Memory Management

  • Circuit managers use Arc for safe sharing across async tasks
  • Large proof objects are moved into background tasks to minimize copying
  • Verifier data is pre-computed and cached for efficiency
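The caching and `Arc`-sharing points above combine naturally into one structure: compiled circuits live in a map keyed by fingerprint, and concurrent tasks clone a handle rather than the circuit. This is a minimal sketch of that design; `CompiledCircuit` and `CircuitCache` are hypothetical names.

```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex};

// Illustrative compile-once, share-by-handle circuit cache.
struct CompiledCircuit {
    fingerprint: String,
}

#[derive(Default)]
struct CircuitCache {
    inner: Mutex<HashMap<String, Arc<CompiledCircuit>>>,
}

impl CircuitCache {
    /// Return the cached circuit, "compiling" it only on first use.
    fn get_or_compile(&self, fingerprint: &str) -> Arc<CompiledCircuit> {
        let mut map = self.inner.lock().unwrap();
        map.entry(fingerprint.to_string())
            .or_insert_with(|| {
                Arc::new(CompiledCircuit {
                    fingerprint: fingerprint.to_string(),
                })
            })
            .clone()
    }
}

fn main() {
    let cache = CircuitCache::default();
    let a = cache.get_or_compile("two_agg_circuit");
    let b = cache.get_or_compile("two_agg_circuit");
    // Both handles point at the same compiled circuit.
    assert!(Arc::ptr_eq(&a, &b));
    assert_eq!(a.fingerprint, "two_agg_circuit");
}
```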

Document Version: 1.0
Last Updated: 2024-12-16
Total RPC Methods: 25+

Appendix A: Glossary

  • Felt: A field element; the native numeric type used in circuits and contracts.
  • Hash: A fixed-size array of field elements representing a hash digest, used for state storage.
  • Storage: Persistent state management for contracts.

Appendix B: Reserved Keywords

  • fn, pub, mod, use, struct, impl, trait, let, mut, if, else, while, match, return, assert, assert_eq

Appendix C: Publications

Appendix D: Contributing

To contribute, submit a pull request to the [GitHub repository]. The source files are in mdBook format.

Appendix E: Acknowledgements