
Avail-Powered zkEVM Validium

How to Use Polygon zkEVM with Avail


Embark on setting up your own Polygon zkEVM network, leveraging Avail as the data availability layer. This guide is tailored for deploying on Ethereum's Sepolia testnet and integrating with the Avail Goldberg testnet. To gain a comprehensive understanding of Polygon zkEVM, review the Polygon zkEVM documentation.

In this guide, you will:

  • Deploy the zkEVM contracts on Sepolia
  • Deploy the node
  • Set up the prover
  • Configure the bridge

Ensure you have installed the following software.

Installation commands are based on Ubuntu 20.04 LTS:

Software          Version
Node.js           Latest LTS
Git               OS default
Golang            1.19
Docker            Latest
Docker Compose    Latest
# Install Git
sudo apt install -y git

# Install Node.js (using NVM)
curl -o- | bash
export NVM_DIR="$([ -z "${XDG_CONFIG_HOME-}" ] && printf %s "${HOME}/.nvm" || printf %s "${XDG_CONFIG_HOME}/nvm")"
[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh" # This loads nvm
nvm install --lts

# Download and install Golang 1.19
wget https://go.dev/dl/go1.19.linux-amd64.tar.gz
sudo tar -C /usr/local -xzf go1.19.linux-amd64.tar.gz
echo 'export PATH=$PATH:/usr/local/go/bin' >> ~/.bashrc
source ~/.bashrc

# Install Docker and Docker Compose (assumes Docker's apt repository is already configured)
sudo apt-get update
sudo apt-get install -y docker-ce docker-ce-cli docker-compose-plugin

Hardware Requirements

Both the real and mock provers are compatible exclusively with x86 architectures. They are not designed to operate on ARM architecture machines, including Apple Silicon devices, even within Dockerized environments.

Component     Minimum Requirements                 Recommended Setup                     Suggested AWS Instance
Mock Prover   4-core CPU, 8GB RAM, 50 GB SSD       8-core CPU, 16GB RAM, 60 GB SSD       r6a.xlarge
Real Prover   96-core CPU, 768GB RAM, 120 GB SSD   96-core CPU, 1000GB RAM, 140 GB SSD   r6a.24xlarge

Running the Polygon zkEVM solution suite may lead to storage issues, primarily due to excessive Docker logs. To mitigate this, you can customize the Docker daemon's logging behavior, for example by configuring log rotation in /etc/docker/daemon.json. Choose a configuration that best suits your needs.

  • For a devnet setup with limited state growth and Docker logs, we recommend a minimum disk size of approximately 50GB.
  • If you plan to run a real prover, it's advisable to allocate a minimum disk size of around 120GB for your devnet setup.
  • Keep in mind that these recommendations may vary based on your specific use case and requirements.

In production environments with a high transaction volume, your storage requirements may increase significantly. It's recommended to utilize an EBS-like data storage solution to ensure scalability, allowing you to add more storage as needed.
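
To cap Docker log growth, one option is a minimal /etc/docker/daemon.json along these lines (the size and file-count values are illustrative; adjust them to your needs):

```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m",
    "max-file": "3"
  }
}
```

After editing the file, restart the Docker daemon (e.g. `sudo systemctl restart docker`); the new log options apply only to containers created afterwards.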

Network Details

Before diving into the setup, ensure you have the following network details:

  • Explorer
  • RPC
  • Bridge service

Launch an Avail-Powered zkEVM


The prover and verifier components maintain their original security guarantees. However, please note that the data attestation verification during sequencing, and the parts of the validium-node related to data availability on Avail, have not undergone an audit. Exercise caution when using this program. It is distributed without any warranty, including the implied warranties of merchantability and fitness for a particular purpose.

Please be aware that some aspects of this guide may differ from the original source due to the unique nature of the Avail validium implementation. For zkEVM node-specific configurations and troubleshooting, refer to the official Polygon documentation.

Deploy the Contracts

  1. Clone the validium-contracts repository and install dependencies:

    git clone
    cd validium-contracts
    npm i
  2. Set up the environment and deployment parameters:

    • Update .env as per .env.example.

    • Fill in deploy_parameters.json following deploy_parameters.json.example:

      • Specify the trustedSequencer address. This address represents the Sequencer that is responsible for sequencing batches.

      • Define the trustedAggregator address. This address represents the Aggregator that handles the submission of proofs.

      • Fill in the following fields with the respective addresses that will control the contracts: admin, zkEVMOwner, timelockAddress, and initialZkEVMDeployerOwner.

      • Enter the private key for the deployer in the deployerPvtKey field.

  3. Execute deployment scripts on the Sepolia network:

    npx hardhat run --network sepolia deployment/2_deployPolygonZKEVMDeployer.js
    npx hardhat run --network sepolia deployment/3_deployContracts.js

    This should generate a deploy_output.json file.

  4. Verify the deployed contracts:

    npx hardhat run --network sepolia deployment/verifyzkEVMDeployer.js
    npx hardhat run --network sepolia deployment/verifyContracts.js

To create a fresh set of contracts, you can either employ a new private key or increment the value of the salt parameter in your configuration. After making this change, simply re-execute the deployment commands to generate the new contract suite.
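
Pulling together the fields from step 2, a deploy_parameters.json sketch might look like the following (all addresses and the private key are placeholders, and fields not discussed above are omitted; follow deploy_parameters.json.example for the full set):

```json
{
  "trustedSequencer": "0xYourSequencerAddress",
  "trustedAggregator": "0xYourAggregatorAddress",
  "admin": "0xYourAdminAddress",
  "zkEVMOwner": "0xYourOwnerAddress",
  "timelockAddress": "0xYourTimelockAddress",
  "initialZkEVMDeployerOwner": "0xYourDeployerOwnerAddress",
  "deployerPvtKey": "0xYourDeployerPrivateKey",
  "realVerifier": false,
  "salt": "0x0000000000000000000000000000000000000000000000000000000000000000"
}
```

Note that `realVerifier` stays false for a mock-prover setup; the salt is what you increment to deploy a fresh contract suite with the same key.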

Deploy the Node


The Mock Prover does not generate any zero-knowledge proofs. Instead, it simply validates any generated state root as correct. The mock verifier contract operates similarly, accepting all validity proofs without actual verification.

  1. Clone the validium-node repository for node setup:

    git clone
    cd validium-node
  2. Generate a secure account keystore file for Ethereum L1 transactions:

    docker run --rm hermeznetwork/zkevm-node:latest sh -c "/app/zkevm-node encryptKey --pk=[your private key] --pw=[password to encrypt file] --output=./keystore; cat ./keystore/*" > account.keystore
    • Replace [your private key] with your Ethereum L1 account private key.
    • Replace [password to encrypt file] with a password used for file encryption. This password must be passed to the Node later via the env variable ZKEVM_NODE_ETHERMAN_PRIVATEKEYPASSWORD.
  3. Update configuration files for the node:

    • Modify test.avail.config.json, test.node.config.toml, and test.genesis.config.json based on the provided example files.

  4. Build the Docker image and launch the node:

    make build-docker
    cd test
    make run
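
The keystore password from step 2 reaches the node through the environment variable named above. A minimal sketch, with a placeholder value:

```shell
# Placeholder value; use the password chosen when the keystore was encrypted.
export ZKEVM_NODE_ETHERMAN_PRIVATEKEYPASSWORD="my-keystore-password"
# The variable must be set in the environment from which the node is launched.
echo "$ZKEVM_NODE_ETHERMAN_PRIVATEKEYPASSWORD"
```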

Set Up the Prover

  1. To switch to the real verifier mode, modify the deploy_parameters.json file:

    "realVerifier": true
  2. Utilize the following commands to download and unpack the configuration file:

    SIZE: ~70GB+
    Accelerate the download by using a multi-threaded downloader like Axel.

    tar -xzvf v2.0.0-RC4-fork.5.tgz
    rm -rf config
    mv v2.0.0-RC4-fork.5.tgz validium-node/test/config/prover
  3. Ensure the docker-compose.yml includes proper file mappings for the prover configuration.

  4. Modify the test.prover.config.json to enable actual prover functionality.


Configure the Bridge

The zkEVM bridge service is a microservice that simplifies bridging between L1 and L2 by auto-claiming L1 transactions on L2 and generating necessary Merkle proofs. While optional for running a Validium, it enhances the ease of bridging transactions.

The Nomad DA bridge is only operational on Sepolia, limiting the validium's data attestation to this chain. Alternatively, you can simulate data attestation and deploy on your preferred blockchain.

  1. Clone the bridge repository:

    git clone
    cd bridge
  2. Fill in config/config.local.toml following config.local.example.toml:

    Unless you changed the genesis file, the L2 bridge address should remain the same.

    The address provided by default in the configuration is allocated ETH in the validium test setup for autoclaiming on L2. If a different address is used, it might require ETH. Similarly, in a production setup where ETH is not arbitrarily minted, you will need to manually fund the zkevm-bridge-service autoclaiming account.

  3. Build and run the Docker image using the following commands:

    make build-docker
    make run
  4. Once the Docker image is running, it serves as a microservice to detect L1 and L2 bridge transactions. You can check if the API is active by accessing the /api endpoint.

    • Generate Merkle Proofs: Use the /merkle-proof endpoint to generate the necessary Merkle proofs for bridging transactions.
    • Additional Endpoints: The microservice provides other endpoints for various functionalities, such as detecting bridge transactions for specific accounts.
    • Updating Code: If you need to modify any part of the code, remember that each change necessitates a new build. To update and rerun the service, execute the make build-docker && make run commands.
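
Putting the endpoints above together, a sketch of the service's HTTP surface (the port, paths, and query parameters here are assumptions drawn from the descriptions above; check the bridge repository's API specification for the authoritative shapes):

```
GET localhost:8080/api                                  liveness check for the service
GET localhost:8080/merkle-proof?deposit_cnt=0&net_id=0  Merkle proof for a given deposit
GET localhost:8080/bridges/{address}                    bridge transactions for an account
```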