Safrole Algorithm Demystified: JAM Block Creation

Illustration of the Safrole algorithm’s process for block creation and fork prevention, showing how validators, tickets, and bandersnatch signatures work in decentralized blockchain systems.

The Safrole algorithm manages block creation in a decentralized system with the goal of reducing the chance of forks, which occur when multiple blocks are created for the same slot. Safrole prevents forks by determining in advance, and unambiguously, who will author each new block.

---

The Basics

  • Block Creation Timing: A new block is added to the blockchain every 6 seconds. The algorithm is responsible for choosing who will author the block at each moment. This decision is made in advance but kept secret until needed.
  • Who Can Create Blocks?: Only a preselected set of validators can author blocks. Being in this set, however, does not guarantee that a validator will get to author a block. The set of validators remains fixed during an epoch (a period of block creation), and authors are selected from this group based on predetermined rules. While JAM is not opinionated about how validators are chosen, this aspect can be covered by an outside system. For example, in Polkadot, validators are chosen based on their staked DOTs, meaning those who stake more are more likely to be selected as validators for the next epoch. During an epoch, the Safrole algorithm computes which validators will author blocks in the following epoch.

---


The Main State Components

The Safrole algorithm operates using several core components:

  • γ_s (gamma_s): This is the main state that keeps track of who will author each block during the current epoch. It either provides a set of anonymous tickets (in normal mode) or bandersnatch keys (in fallback mode). Tickets are cryptographic outputs that prove a valid validator will author the next block without revealing their identity until needed.
  • κ (kappa): This is the set of preselected validators for the current epoch. Being in kappa means that validators have a chance to author blocks, but it doesn’t guarantee it.
  • λ (lambda): This is the set of validators from the previous epoch. These validators are no longer actively involved, but they were preselected for block creation during the prior epoch.
  • γ_k (gamma_k): This is the set of validators hoping to participate in the next epoch. They submit tickets during the current epoch with the hope that their tickets will be included in γ_a (the set of best tickets for the next epoch). The goal is to remain in γ_a by the end of the ticket submission period, and for the next epoch to operate in normal mode.
  • γ_z (gamma_z): A mathematical proof that allows the system to verify the authenticity of submitted tickets. It checks the validity of the VRF (Verifiable Random Function) output against a specific context and confirms that the validator who submitted the ticket is part of γ_k without revealing their identity.
  • γ_a (gamma_a): This is the set of the current best submitted tickets. Validators submit tickets during the current epoch, aiming to secure a spot in γ_a for the next epoch. The set holds at most E tickets, the ones with the lowest VRF outputs. If a new ticket is submitted and either γ_a contains fewer than E tickets or the new VRF output is smaller than an existing one, it is added to γ_a (evicting the largest ticket if the set is full).
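To make the components above concrete, here is a minimal sketch of the Safrole state as a Python data structure. The field names and types are illustrative assumptions, not the Gray Paper's encoding; a `Ticket` is reduced to its VRF output and attempt index.

```python
from dataclasses import dataclass, field

@dataclass
class Ticket:
    # A submitted ticket, reduced to its VRF output (used for ordering
    # and as the anonymous identifier) and the "r" attempt index.
    vrf_output: bytes
    attempt: int

@dataclass
class SafroleState:
    gamma_s: list        # sealing sequence: tickets (normal) or bandersnatch keys (fallback)
    kappa: list          # validator keys active in the current epoch
    lam: list            # validator keys from the previous epoch (lambda)
    gamma_k: list        # validator keys hoping to participate in the next epoch
    gamma_z: bytes       # commitment used to verify ticket proofs against gamma_k
    gamma_a: list = field(default_factory=list)  # best tickets accumulated so far
```

Keeping γ_a as a plain list ordered by VRF output makes the insertion rule from the last bullet a simple sorted merge.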

---

Ticket Submission Process

During the ticket submission period, validators in the set γ_k (those hoping to participate in the next epoch) submit their tickets. Each validator has two tickets per epoch, and they submit or gossip these tickets with the goal of having them included in γ_a, the set of best tickets for the next epoch.

  • The ticket submission period is a portion of the epoch where γ_a is actively updated. Validators try to submit the smallest possible VRF outputs to ensure their tickets remain in γ_a.
  • After the submission period ends, there is a closing phase of E − Y (100) timeslots during which γ_a is no longer updated.
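The split of the epoch into a submission window and a frozen tail can be expressed as a one-line check. The values E = 600 and Y = 500 below are assumptions taken from the full JAM parameter set; they are consistent with the 100-slot tail mentioned above.

```python
E = 600  # timeslots per epoch (assumed full-JAM parameter)
Y = 500  # number of timeslots at the start of the epoch that accept tickets

def accepting_tickets(slot: int) -> bool:
    # Tickets may update gamma_a only during the first Y slots of the
    # epoch; for the final E - Y slots, gamma_a is frozen.
    return slot % E < Y
```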

How Does the Algorithm Work for a Block Importer?

When a block is authored and submitted, the Safrole algorithm follows a process to verify the block and manage ticket submissions for the next epoch. Below is a step-by-step breakdown:

1. Block Submission and Verification

When a new block arrives, the block header contains the following key components:

  • Two bandersnatch signatures: H_s and H_v.
  • Timeslot (τ′): The specific timeslot the submitted block is filling.
  • Author bandersnatch block index (H_i): Identifies the block author’s index within the current set of validators κ.

The block verification process depends on whether the algorithm is in normal mode or fallback mode:

  • In normal mode:
    • The ticket of the current author is retrieved using γ_s[τ′].
    • The author’s bandersnatch key is obtained from the author index:
      H_a = κ[H_i].
    • The system verifies that H_s is the signature of the encoded block header using the public key H_a, along with the context:
      X_T + η_3 + i_r,
      where i_r is the "r" component of the current ticket, and X_T is the UTF-8 byte array of the string jam_ticket_seal.
  • In fallback mode:
    • The public bandersnatch key is directly retrieved from γ_s for the timeslot:
      H_a = γ_s[τ′].
    • The system verifies that H_s is the signature of the encoded block header using the public key H_a, with the following context:
      X_F + η_3,
      where X_F is the UTF-8 byte array of the string jam_fallback_seal.

Once H_s is verified, the system extracts the VRF output Y(H_s) using the signature H_s and the public key H_a.

2. Verifying the Second Signature (H_v)

After verifying H_s, the system proceeds to verify the second bandersnatch signature H_v, which is applied to empty data with the context:

  • X_E + η_3,
    where X_E is the UTF-8 byte array of the string jam_entropy.

If both signatures (H_s and H_v) are successfully verified, the entropy accumulator η is updated.

3. Entropy Update

The entropy accumulator η gets updated after successful block import:

  • For each new timeslot, the system computes:
    η_0' = Hash(η_0 + Y(H_v)),
    where Y(H_v) is the VRF output of H_v.
  • At the end of the epoch, the previous states of η are carried forward:
    • η_1' = η_0
    • η_2' = η_1
    • η_3' = η_2

This ensures randomness in future validator selections.
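The per-block fold and the end-of-epoch rotation can be sketched together. Blake2b-256 is used as the hash here, which matches the Gray Paper's choice of H; representing η as a four-element list is an assumption of this sketch.

```python
import hashlib

def update_entropy(eta: list[bytes], vrf_output: bytes, epoch_change: bool) -> list[bytes]:
    # eta = [eta_0, eta_1, eta_2, eta_3]
    eta0, eta1, eta2, eta3 = eta
    if epoch_change:
        # At an epoch boundary, the history shifts down one slot:
        # eta_1' = eta_0, eta_2' = eta_1, eta_3' = eta_2.
        eta1, eta2, eta3 = eta0, eta1, eta2
    # Every imported block folds Y(H_v) into the accumulator.
    eta0 = hashlib.blake2b(eta0 + vrf_output, digest_size=32).digest()
    return [eta0, eta1, eta2, eta3]
```

Because η_3 always lags three rotations behind η_0, the seal contexts above are anchored to entropy fixed well before the sealed block was authored.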

4. Updating γ_a

When a new block containing ticket submissions arrives, the system follows these steps to update γ_a:

  • It checks that the extrinsic doesn’t contain tickets already present in γ_s.
  • It ensures the tickets in the extrinsic are not duplicates and are properly sorted.
  • Once validated, the tickets are added to γ_a.
  • γ_a is then updated to keep only the E smallest tickets, based on their VRF outputs.

This process ensures that only the smallest and most valid tickets are retained, providing fairness and security in selecting the next set of block authors.
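Assuming the duplicate and ordering checks above have already passed, the accumulator update reduces to a sorted merge and a truncation. In this sketch a ticket is represented by its VRF output bytes, which compare lexicographically:

```python
def update_gamma_a(gamma_a: list[bytes], new_tickets: list[bytes], E: int = 600) -> list[bytes]:
    # Merge the extrinsic's (already verified) tickets into the
    # accumulator, then keep only the E smallest VRF outputs.
    merged = sorted(set(gamma_a) | set(new_tickets))
    return merged[:E]
```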

5. Transition to the Next Epoch (Only Happens After the Submission Period Ends)

At the end of the submission period, the Safrole algorithm transitions based on these conditions:

  • In normal mode, γ_s is determined by running an outside-in sequencer (Z) over γ_a. This sequencer pairs the smallest and largest tickets, the second smallest with the second largest, and so on. The normal mode operates only if γ_a reaches its size limit E.
  • In fallback mode, if γ_a does not reach size E, the system selects a bandersnatch key for each timeslot in the next epoch from the set γ_k. This selection is done using a random selector with η_2' as the seed.
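The outside-in sequencer Z described above can be sketched in a few lines. The input is assumed to be sorted ascending by VRF output, and the output interleaves it from both ends:

```python
def outside_in(tickets: list) -> list:
    # Z: smallest, largest, second smallest, second largest, ...
    out = []
    lo, hi = 0, len(tickets) - 1
    while lo <= hi:
        out.append(tickets[lo])
        if lo != hi:
            out.append(tickets[hi])
        lo += 1
        hi -= 1
    return out
```

This ordering separates adjacent tickets in the sorted set across the epoch, so neighboring timeslots are unlikely to belong to related submissions.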
