Running ICP Rosetta

There are several ways to run ICP Rosetta depending on your use case and requirements. This guide covers all available deployment methods.

The easiest way to run ICP Rosetta is to use the official Docker image. This method is recommended for most users.

Prerequisites

  • Docker installed and running.

Pull the latest ICP Rosetta image:

docker pull dfinity/rosetta-api

Quick Start - Test Environment

Start here for learning and development. The test environment uses TESTICP tokens that have no real value.

docker run \
--publish 8081:8081 \
--rm \
dfinity/rosetta-api \
--environment test

To get TESTICP tokens for testing, you can use Validation Cloud's free faucet. This provides test tokens without needing to use real ICP on mainnet.

Basic Production Deployment

This is the quickest production setup, running against the official ICP ledger on mainnet. Note: data will be lost when the container restarts.

docker run \
--publish 8081:8081 \
--rm \
dfinity/rosetta-api \
--environment production

Production with Data Persistence

For production environments where you need to persist blockchain data across container restarts:

# Create a volume for data persistence
docker volume create rosetta

# Run in production mode with data persistence
docker run \
--volume rosetta:/data \
--publish 8081:8081 \
--detach \
dfinity/rosetta-api:v2.1.7 \
--environment production

This setup ensures that your node doesn't need to re-sync from scratch if the container is restarted.
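
Because the container runs detached, its logs are not printed to the terminal. One way to follow sync progress is to look up the container ID with docker ps and stream its logs:

# Find the container ID, then stream its logs
docker ps --filter ancestor=dfinity/rosetta-api:v2.1.7
docker logs --follow <container-id>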

It's recommended to use specific versions in production for consistency and predictable deployments. Check available versions on DockerHub.
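
One way to list available tags from the command line is DockerHub's public registry API (requires curl and jq):

# List recent image tags via DockerHub's public API
curl -s "https://hub.docker.com/v2/repositories/dfinity/rosetta-api/tags?page_size=25" \
| jq -r '.results[].name'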

Custom Configurations

To connect to a custom test ledger canister instance (useful for development with specific test setups):

docker run \
--publish 8081:8081 \
--rm \
dfinity/rosetta-api \
--environment test \
--canister <ledger-canister-id>

Building from source

You can build and run ICP Rosetta directly from the Internet Computer source code.

Prerequisites

  • Bazel build system.
  • Internet Computer repository cloned locally: git clone https://github.com/dfinity/ic.git.

Build and run

# Clone the IC repository (if not already done)
git clone https://github.com/dfinity/ic.git
cd ic

# Build and run ICP Rosetta
bazel run //rs/rosetta-api/icp:ic-rosetta-api -- \
--port 8081 \
--environment production \
--store-location /tmp

The --store-location parameter is important when running from source as it specifies where the database files will be stored. Without this parameter, the default location may not be writable or accessible.
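
For anything longer-lived than a quick test, you may want a store location that survives reboots; the path below is just an example:

# Use a persistent directory instead of /tmp (example path)
mkdir -p "$HOME/rosetta-data"
bazel run //rs/rosetta-api/icp:ic-rosetta-api -- \
--port 8081 \
--environment production \
--store-location "$HOME/rosetta-data"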

This method gives you the latest development version and allows for custom modifications.

Local cluster

For development and testing purposes, you can set up a complete local Kubernetes cluster with monitoring tools.

The local cluster setup provides:

  • Minikube-based Kubernetes cluster.
  • Prometheus and Grafana for monitoring.
  • cAdvisor for container metrics.
  • Both ICP and ICRC1 Rosetta services.

The deployment script will help install missing dependencies:

  • Docker.
  • Minikube.
  • kubectl.
  • Helm.
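
If you want to see which of these are already present before running the script, a quick check:

# Verify which dependencies are already installed
docker --version
minikube version
kubectl version --client
helm version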

Deploying production images

# Clone the IC repository
git clone https://github.com/dfinity/ic.git
cd ic/rs/rosetta-api/local/cluster

# Deploy with default test ledgers
./deploy.sh

# Deploy pointing to specific ledgers
./deploy.sh \
--icp-ledger xafvr-biaaa-aaaai-aql5q-cai \
--icp-symbol TESTICP \
--icrc1-ledger 3jkp5-oyaaa-aaaaj-azwqa-cai

Deploying local images

First, build the containers from within the dev container:

# Enter dev container
./ci/container/container-run.sh

# Build ICP Rosetta
bazel build //rs/rosetta-api/icp:rosetta_image.tar
mv bazel-bin/rs/rosetta-api/icp/rosetta_image.tar /tmp

# Build ICRC1 Rosetta
bazel build //rs/rosetta-api/icrc1:icrc_rosetta_image.tar
mv bazel-bin/rs/rosetta-api/icrc1/icrc_rosetta_image.tar /tmp

# Exit dev container
exit

Then deploy the local images:

./deploy.sh \
--local-icp-image-tar /tmp/rosetta_image.tar \
--local-icrc1-image-tar /tmp/icrc_rosetta_image.tar

Monitoring with Grafana

Access Grafana at http://localhost:3000 with:

  • Username: admin.
  • Password: admin.

Import the dashboard using the rosetta_load_dashboard.json file in the cluster directory.
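
If http://localhost:3000 is not reachable, one option is to port-forward to Grafana with kubectl. The service name below is an assumption; check kubectl get svc --all-namespaces for the actual name and namespace:

# Service name/namespace are assumptions; verify with: kubectl get svc --all-namespaces
kubectl port-forward svc/grafana 3000:3000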

Cleaning up

To start fresh:

./deploy.sh --clean

Validation Cloud

For those who prefer not to run Rosetta locally, Validation Cloud offers managed ICP Rosetta endpoints that you can use for learning, development, and production.

Features

  • Managed infrastructure (no local setup required).
  • Global distribution with multi-region support.
  • 99.99% uptime SLA.
  • 24/7 customer support.
  • SOC 2 Type 2 compliance.

Getting started

  1. Visit Validation Cloud ICP page.
  2. Sign up for an account.
  3. Choose between Free tier (50M Compute Units) or Scale plan (unlimited).
  4. Get your API endpoint and start building.
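
Once you have an endpoint, requests look the same as against a local node. The URL below is a placeholder for the endpoint Validation Cloud issues you:

curl -H "Content-Type: application/json" \
-d '{"network_identifier": {"blockchain": "Internet Computer", "network": "00000000000000020101"}}' \
-X POST https://<your-validation-cloud-endpoint>/network/status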

Supported APIs

  • ICP Rosetta nodes for ledger communication.
  • ICRC API for token interactions.
  • Access to NNS governance canister.

This option is useful for:

  • Quick prototyping and learning.
  • Development without local infrastructure setup.
  • Production applications that prefer managed services.

Verification and testing

Regardless of the deployment method, you can verify your Rosetta node is working:

Check node status

curl -H "Content-Type: application/json" \
-d '{"network_identifier": {"blockchain": "Internet Computer", "network": "00000000000000020101"}}' \
-X POST http://localhost:8081/network/status
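
A healthy node's response includes a current_block_identifier whose index advances as blocks sync. For example, extracting it with jq:

curl -s -H "Content-Type: application/json" \
-d '{"network_identifier": {"blockchain": "Internet Computer", "network": "00000000000000020101"}}' \
-X POST http://localhost:8081/network/status | jq '.current_block_identifier.index'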

Check version

curl -H "Content-Type: application/json" \
-d '{"network_identifier": {"blockchain": "Internet Computer", "network": "00000000000000020101"}}' \
-X POST http://localhost:8081/network/options | jq '.version.node_version'

Wait for sync

Look for the message "You are all caught up to block XX" in the logs to confirm the node is synchronized.
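
With a detached Docker deployment, one way to watch for this line (container ID as shown by docker ps):

# Follow the logs and filter for the sync message
docker logs --follow <container-id> 2>&1 | grep "caught up"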

Requirements and limitations

  • Transaction timing: Unsigned transactions must be created less than 24 hours before submission due to the deduplication mechanism.
  • Signature schemes: examples typically use Ed25519 and secp256k1.
  • Port: Default listening port is 8081.
  • Data persistence: Mount /data directory as a volume for Docker deployments.