Join the Testnet as a Provider
This tutorial walks through adding a second model provider to the Tenzro testnet. Unlike the first tutorial, the focus here is on the gossipsub mesh: how a brand-new node discovers existing peers via boot nodes, joins the mesh, learns about the on-chain model catalog, and advertises its own serving endpoint to the rest of the network.
This is the exact flow we used to bring a second provider online serving qwen3.5-0.8b alongside an existing gemma3-270m provider, letting the network route inference to whichever provider was cheaper, healthier, or lower-latency.
Prerequisites
- A Linux host with Docker (2 vCPU, 4 GB RAM minimum)
- TCP port 9000 open for inbound and outbound P2P traffic, plus outbound 443 (HTTPS for HuggingFace downloads)
- At least one boot node multiaddr — for the public testnet, any running validator or RPC node works
- The previous tutorial (Run a Model Provider) completed, ideally on a separate host
1. Provision the host
Any cloud or bare-metal host works. For the public testnet, we spun up a second GCE VM:
# Create a GCE VM (any cloud works — this is the command we used)
gcloud compute instances create my-provider \
--project YOUR_PROJECT \
--zone us-central1-a \
--machine-type e2-standard-4 \
--image-family debian-12 \
--image-project debian-cloud \
--boot-disk-size 40GB
# SSH into the VM and install Docker
gcloud compute ssh my-provider --zone us-central1-a --command \
"curl -fsSL https://get.docker.com | sudo sh && sudo usermod -aG docker \$USER"

2. Start the node with a boot node
The critical difference from a standalone node is the --boot-nodes flag. Without it, the node will start but never join the gossipsub mesh or sync blocks. You must expose port 9000 for P2P traffic — leaving it internal to the host prevents the libp2p swarm from accepting inbound connections from other peers.
docker run -d \
--name tenzro-provider \
--restart unless-stopped \
-p 8545:8545 \
-p 8080:8080 \
-p 9000:9000 \
-p 3001:3001 \
-p 3002:3002 \
-v /var/lib/tenzro:/data \
us-central1-docker.pkg.dev/tenzro-infra/tenzro/tenzro-node:latest \
--role model-provider \
--data-dir /data \
--rpc-addr 0.0.0.0:8545 \
--listen-addr /ip4/0.0.0.0/tcp/9000 \
--boot-nodes /ip4/10.0.0.10/tcp/9000

Replace the boot node IP with a reachable peer. For the public testnet, either ask in the community for a public boot node or use any validator/RPC multiaddr you control.
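If you are unsure whether a candidate boot node will work, a quick sketch that builds the multiaddr from a plain IP and probes the P2P port first. The IP below is a placeholder, and bash's /dev/tcp plus coreutils timeout are assumed to be available:

```shell
# Build the multiaddr from a peer IP (203.0.113.7 is a placeholder)
BOOT_IP=203.0.113.7
BOOT_MULTIADDR="/ip4/${BOOT_IP}/tcp/9000"
echo "$BOOT_MULTIADDR"

# Probe the P2P port before wiring it into --boot-nodes (3 s timeout)
if timeout 3 bash -c "exec 3<>/dev/tcp/${BOOT_IP}/9000" 2>/dev/null; then
  echo "boot node reachable on 9000"
else
  echo "boot node not reachable; check firewalls on both ends"
fi
```

A failed probe usually points at a firewall on one side rather than a dead peer.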
3. Verify mesh participation
Within 10–30 seconds of startup, the node should have at least one peer and its block height should start climbing. If peer_count stays at 0, check that port 9000 is open and that the boot node is reachable.
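Rather than re-running the status call by hand, you can poll until the first peer shows up. A minimal sketch, assuming the same /api/status endpoint and jq; it gives up after roughly 30 seconds:

```shell
# Poll /api/status until peer_count > 0, or give up after ~30 s
STATUS_URL=${STATUS_URL:-http://localhost:8080/api/status}
PEERS=0
for _ in $(seq 1 15); do
  PEERS=$(curl -sf --max-time 2 "$STATUS_URL" | jq -r '.peer_count // 0' 2>/dev/null)
  [ -z "$PEERS" ] && PEERS=0   # treat an unreachable node as 0 peers
  if [ "$PEERS" -gt 0 ]; then break; fi
  sleep 2
done
if [ "$PEERS" -gt 0 ]; then
  echo "joined mesh with $PEERS peer(s)"
else
  echo "peer_count still 0; check port 9000 and the boot node multiaddr"
fi
```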
# Check that the node has discovered at least one peer
curl -s http://localhost:8080/api/status | jq
# Expected:
# {
# "node_state": "running",
# "role": "model-provider",
# "health": "healthy",
# "block_height": 42,
# "peer_count": 3,
# "uptime_secs": 45
# }

You can also tail the logs and watch the gossipsub mesh form. Look for GRAFT events (peers added to the mesh) and topic subscriptions like tenzro/blocks/1.0.0 and tenzro/models/1.0.0.
# Watch for the gossipsub mesh to form (look for JOIN / GRAFT / subscribed)
docker logs -f tenzro-provider 2>&1 | grep -E "(gossip|mesh|peer|GRAFT|PRUNE)" | head -30

4. Verify the on-chain catalog is visible
Once synced, your node should be able to read the full model catalog from chain state — not just models it has downloaded locally. This tells you the state machine is replaying blocks correctly.
# Confirm the node sees the on-chain model catalog
curl -s http://localhost:8545 \
-X POST \
-H "Content-Type: application/json" \
-d '{
"jsonrpc":"2.0",
"id":1,
"method":"tenzro_listModels",
"params":{}
}' | jq '.result | length'

5. Download and serve a different model
To diversify the network, serve a different model than your other provider. Here we use qwen3.5-0.8b (~1.6 GB) — a larger model that complements the 270M model served by the first node. An agent consuming inference can now route based on price, capability, or load.
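The download lands on the volume mounted at /var/lib/tenzro in step 2, so it is worth confirming free space before pulling ~1.6 GB. A quick sketch (falls back to the current directory if the host path does not exist yet):

```shell
# Check free space on the data volume before the ~1.6 GB download
DATA_DIR=/var/lib/tenzro
[ -d "$DATA_DIR" ] || DATA_DIR=.   # fall back if the host path is absent
FREE_KB=$(df -Pk "$DATA_DIR" | awk 'NR==2 {print $4}')
echo "free space under $DATA_DIR: $((FREE_KB / 1024)) MiB"
```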
# Download a different model than the one in tutorial 1 to diversify the network
curl -s http://localhost:8545 \
-X POST \
-H "Content-Type: application/json" \
-d '{
"jsonrpc":"2.0",
"id":1,
"method":"tenzro_downloadModel",
"params":{"model_id":"qwen3.5-0.8b"}
}'
# Poll until downloaded, then serve
while true; do
RESULT=$(curl -s http://localhost:8545 \
-X POST \
-H "Content-Type: application/json" \
-d '{
"jsonrpc":"2.0",
"id":2,
"method":"tenzro_serveModel",
"params":{"model_id":"qwen3.5-0.8b"}
}')
echo "$RESULT"
echo "$RESULT" | grep -q "max_concurrent\|already being served" && break
sleep 10
done

6. Confirm remote discoverability
The real test: can a different node see your endpoint? Query the public RPC from a machine with no relationship to your provider. The gossipsub mesh should have propagated the endpoint within one or two gossip rounds (~2–6 seconds).
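To actually watch the propagation happen, a sketch that polls a remote RPC until the new endpoint appears and reports the elapsed time. It assumes jq and uses the same RPC URL and method as the one-shot query below:

```shell
# Poll a remote RPC until the qwen3.5-0.8b endpoint is visible
RPC=${RPC:-https://rpc.tenzro.network}
START=$(date +%s)
FOUND=0
for _ in $(seq 1 10); do
  COUNT=$(curl -sf --max-time 3 "$RPC" -X POST \
    -H "Content-Type: application/json" \
    -d '{"jsonrpc":"2.0","id":1,"method":"tenzro_listModelEndpoints","params":{}}' \
    | jq '[.result[]? | select(.model_id=="qwen3.5-0.8b")] | length' 2>/dev/null)
  [ -z "$COUNT" ] && COUNT=0   # unreachable RPC or non-JSON reply counts as 0
  if [ "$COUNT" -gt 0 ]; then FOUND=1; break; fi
  sleep 2
done
ELAPSED=$(( $(date +%s) - START ))
if [ "$FOUND" -eq 1 ]; then
  echo "endpoint visible after ${ELAPSED}s"
else
  echo "endpoint not visible after ${ELAPSED}s; is the provider still serving?"
fi
```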
# From ANY other node on the network (or the public RPC):
curl -s https://rpc.tenzro.network \
-X POST \
-H "Content-Type: application/json" \
-d '{
"jsonrpc":"2.0",
"id":1,
"method":"tenzro_listModelEndpoints",
"params":{}
}' | jq '.result[] | select(.model_id=="qwen3.5-0.8b")'

7. Register as a paid provider (optional)
Serving a model is free by default — you are donating compute to the network. To get paid for inference, register as a formal provider with a TNZO stake. The stake bonds your provider identity to a reputation score and unlocks billing via the settlement engine.
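Stake amounts go over the wire as strings in atto-TNZO, so it is easy to get the zeros wrong. A quick conversion check (assumes python3 is installed; 1 TNZO = 10^18 atto):

```shell
# Convert a human TNZO amount to the atto-TNZO string the RPC expects
STAKE_TNZO=100
STAKE_ATTO=$(python3 -c "print(${STAKE_TNZO} * 10**18)")
echo "$STAKE_ATTO"   # 100000000000000000000
```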
# Register as a formal provider so consumers can bill you for inference
curl -s http://localhost:8545 \
-X POST \
-H "Content-Type: application/json" \
-d '{
"jsonrpc":"2.0",
"id":1,
"method":"tenzro_registerProvider",
"params":{
"provider_type":"model-provider",
"stake_amount":"100000000000000000000"
}
}' | jq

Stake amounts are in atto-TNZO (1 TNZO = 10^18 atto). The example above stakes 100 TNZO. Check your provider stats anytime:
curl -s http://localhost:8545 \
-X POST \
-H "Content-Type: application/json" \
-d '{
"jsonrpc":"2.0",
"id":1,
"method":"tenzro_providerStats",
"params":{}
}' | jq

What's next
You now have a multi-node model provider network. The next tutorial builds an agent that discovers both of your endpoints via the hub, picks the best one for a task, and pays for the result.