Post a training task on Tenzro Train
This tutorial walks the sponsor flow end to end: write a TimesFM-class training task spec, escrow TNZO into the reward pool, watch trainers enroll, monitor round progress, and fetch the sealed TrainingReceipt when the run completes. Phase 1 ships timeseries-first on the Open tier with Mean aggregation — those are the defaults this tutorial uses.
Prerequisites
- A registered Tenzro identity (`did:tenzro:human:...`) and a funded MPC wallet. See Quick Start if you don't have one.
- Enough TNZO to cover the reward pool (the example below escrows 1,000 TNZO) plus gas.
- A dataset published to IPFS (or any URI Tenzro Train can fetch) with a known SHA-256 manifest root.
- The `tenzro` CLI installed and pointed at a node — use `https://rpc.tenzro.network` for the public testnet.
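If you still need the manifest root, here is a minimal sketch of one way to derive it, assuming a simple newline-joined manifest of per-shard SHA-256 hashes. The `manifest_root` helper and the manifest layout are illustrative assumptions, not the canonical Tenzro Train encoding.

```python
# Sketch: derive a dataset_hash as SHA-256 over a manifest of per-shard
# hashes. The newline-joined "hash  path" layout below is an illustrative
# assumption, not the canonical Tenzro Train manifest encoding.
import hashlib

def shard_hash(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def manifest_root(shards: dict[str, bytes]) -> str:
    # Sort by shard name so the root is reproducible across runs.
    lines = [f"{shard_hash(data)}  {name}" for name, data in sorted(shards.items())]
    manifest = "\n".join(lines).encode()
    return "0x" + hashlib.sha256(manifest).hexdigest()

shards = {"shard-000.parquet": b"...", "shard-001.parquet": b"..."}
print(manifest_root(shards))  # 0x-prefixed 64-hex-digit root
```

Whatever scheme you use, keep it deterministic: the same shards must always produce the same `dataset_hash`, since that value is bound into the receipt.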
1. Write the task spec
A `TrainingTaskSpec` is a single JSON document that fully describes the run. The spec is committed verbatim into the final receipt, so every field — sponsor DID, dataset hash, hyperparameters — becomes an audit-trail anchor. Save this as `timesfm-task.json`:
```json
{
  "task_id": "train-2026-04-25-timesfm-200m",
  "sponsor_did": "did:tenzro:human:01J2Z7K8...",
  "sponsor_address": "0x7a4bcb13a6b2b384c284b5caa6e5ef3126527f93",
  "architecture": {
    "family": "timesfm",
    "param_count": 200000000,
    "modality": "Timeseries",
    "fragment_count": 12,
    "dtype": "bf16",
    "metadata": {
      "context_len": 2048,
      "horizon": 128,
      "n_quantiles": 10
    }
  },
  "tier": "Open",
  "aggregation": "Mean",
  "trainer_count": 8,
  "quorum": 6,
  "inner_steps": 24,
  "max_rounds": 1024,
  "grace_window_ms": 30000,
  "reward_pool": "1000000000000000000000",
  "dataset_ref": "ipfs://QmXyZ...",
  "dataset_hash": "0xc0ffee...deadbeef",
  "min_throughput": null,
  "created_at": 1761350400,
  "metadata": {
    "eval_suite": "monash-tsf-v2",
    "target_mase": 0.8
  }
}
```

Key fields to understand:
- `trainer_count` (M) and `quorum` (K) — the syncer accepts an outer gradient once K of M trainers have submitted for the same fragment in the same round. The Phase 1 reference is M=8, K=6.
- `inner_steps` (H) — the number of inner SGD steps each trainer runs locally between outer gradient submissions. Larger H reduces bandwidth but increases divergence risk; the reference is H=24.
- `fragment_count` (F) — the model is partitioned into F fragments for parallel aggregation. The reference is F=12.
- `grace_window_ms` (τ) — straggler tolerance. Submissions that land within τ after the K-th still count toward the round.
- `reward_pool` — total TNZO escrowed, denominated in attoTNZO (10^18 attoTNZO per TNZO). The example escrows 1,000 TNZO.
- `dataset_hash` — SHA-256 of the dataset's manifest (the root of the shard hash tree). It is bound into the receipt for provenance.
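The attoTNZO arithmetic is easy to get wrong by an order of magnitude, so it is worth checking the numeric fields before posting. A small sketch follows; the `tnzo_to_atto` helper and the validation rules are illustrative, not part of the CLI.

```python
# Pre-flight checks on the spec's numeric fields. The conversion factor
# (1 TNZO = 10**18 attoTNZO) comes from the field notes above; the
# validation rules below are common-sense assumptions, not protocol rules.
def tnzo_to_atto(tnzo: int) -> str:
    # reward_pool is a decimal string in the spec, so return str.
    return str(tnzo * 10**18)

def check_spec(spec: dict) -> None:
    assert 0 < spec["quorum"] <= spec["trainer_count"], "need 0 < K <= M"
    assert int(spec["reward_pool"]) > 0, "reward pool must be funded"
    assert spec["grace_window_ms"] >= 0, "grace window cannot be negative"

# Matches the example spec: 1,000 TNZO escrowed.
print(tnzo_to_atto(1000))  # 1000000000000000000000
check_spec({"quorum": 6, "trainer_count": 8,
            "reward_pool": tnzo_to_atto(1000), "grace_window_ms": 30000})
```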
2. Post the task
The CLI reads the spec, signs it with your wallet, and dispatches to `tenzro_training_postTask`. The reward pool is escrowed atomically — the call fails if your wallet is underfunded.
```shell
tenzro train post-task --spec ./timesfm-task.json \
  --rpc https://rpc.tenzro.network
```

The same call over raw JSON-RPC, if you're integrating from another tool:
```shell
curl https://rpc.tenzro.network \
  -X POST \
  -H "Content-Type: application/json" \
  -d '{
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tenzro_training_postTask",
    "params": {
      "task_spec": <CONTENTS OF timesfm-task.json>
    }
  }' | jq

# Expected:
# {
#   "jsonrpc": "2.0",
#   "id": 1,
#   "result": {
#     "task_id": "train-2026-04-25-timesfm-200m",
#     "status": "awaiting_enrollment"
#   }
# }
```

On success, the run enters `awaiting_enrollment`. The syncer is elected (VRF-weighted by stake), and the gossip topics `tenzro/training/1.0.0` and `tenzro/training/syncer/1.0.0` begin advertising the new run.
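If you are scripting the raw RPC from Python rather than curl, the envelope can be built as below. `post_task_payload` is a hypothetical helper, and the commented-out send assumes the third-party `requests` package; only the method name and params shape come from the call above.

```python
# Build the tenzro_training_postTask JSON-RPC envelope from the spec file.
# The method name and params shape mirror the curl call above; everything
# else here is an illustrative sketch.
import json

def post_task_payload(spec_path: str, request_id: int = 1) -> dict:
    with open(spec_path) as f:
        task_spec = json.load(f)
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tenzro_training_postTask",
        "params": {"task_spec": task_spec},
    }

# Sending it (assumes `pip install requests`):
# import requests
# resp = requests.post("https://rpc.tenzro.network",
#                      json=post_task_payload("timesfm-task.json"))
# print(resp.json()["result"]["status"])  # "awaiting_enrollment" on success
```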
3. Watch enrollment
Trainers stake into the run until `trainer_count` slots fill. Once the run reaches the quorum threshold and the syncer accepts the roster, status flips to `running`:
```shell
# Poll the run state — wait for status to flip from
# awaiting_enrollment → running once K=6 trainers stake in.
while true; do
  # Fetch once per iteration so status and enrolled count are consistent.
  RUN=$(tenzro train get-run \
    --task-id train-2026-04-25-timesfm-200m \
    --rpc https://rpc.tenzro.network \
    --format json)
  STATUS=$(echo "$RUN" | jq -r '.status')
  ENROLLED=$(echo "$RUN" | jq -r '.enrolled_trainers')
  echo "$(date +%T) status=$STATUS enrolled=$ENROLLED/8"
  [ "$STATUS" = "running" ] && break
  sleep 10
done
```

4. Monitor round progress
Once running, the syncer finalizes one round at a time. Each round produces a fresh `run_root` — the SHA-256 Merkle root over every accepted outer gradient, prefixed with `tenzro/train/run-root/v1`.
```shell
# Once running, the syncer commits one round at a time.
# Watch current_round march toward max_rounds=1024.
curl -s https://rpc.tenzro.network \
  -X POST \
  -H "Content-Type: application/json" \
  -d '{
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tenzro_training_getRun",
    "params": { "task_id": "train-2026-04-25-timesfm-200m" }
  }' | jq '.result | {status, current_round, max_rounds, run_root}'
```

For real-time visibility, subscribe to the `tenzro/training/syncer/1.0.0` gossip topic instead of polling.
5. Fetch the receipt
When `current_round` reaches `max_rounds`, the syncer seals a `TrainingReceipt` and stores it in `CF_TRAINING_RECEIPTS`. Any unspent escrow is refunded to `sponsor_address`.
```shell
tenzro train get-receipt \
  --task-id train-2026-04-25-timesfm-200m \
  --rpc https://rpc.tenzro.network
```

The receipt shape:
```json
{
  "task_id": "train-2026-04-25-timesfm-200m",
  "task_spec": { /* full original spec, bound verbatim */ },
  "syncer_did": "did:tenzro:machine:syn1...",
  "trainers": ["did:tenzro:machine:t1...", ...],
  "rounds_completed": 1024,
  "final_model_hash": "0xa1b2c3...",
  "run_root": "0xd4e5f6...",
  "aggregation_transcript": "0x...",
  "sealed_at": 1761436800,
  "syncer_signature": "0x..."
}
```

The receipt is verifiable on-chain via the `TRAINING_VERIFY` precompile at `0x1008`. Any contract — including ERC-20 reward escrows — can gate payouts on a receipt without trusting the sponsor.
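Because the spec is bound verbatim, a sponsor can also diff their local copy against the receipt before wiring payout logic to it. A minimal sketch follows, assuming canonical-JSON comparison is sufficient; `spec_matches_receipt` is a hypothetical helper, not a CLI or precompile feature.

```python
# Check that the receipt's embedded task_spec matches the spec you posted.
# Canonical JSON (sorted keys, no whitespace) makes the comparison
# insensitive to key order and formatting.
import json

def spec_matches_receipt(local_spec: dict, receipt: dict) -> bool:
    canon = lambda d: json.dumps(d, sort_keys=True, separators=(",", ":"))
    return canon(local_spec) == canon(receipt["task_spec"])

receipt = {"task_id": "train-2026-04-25-timesfm-200m",
           "task_spec": {"task_id": "train-2026-04-25-timesfm-200m",
                         "tier": "Open"}}
local = {"tier": "Open", "task_id": "train-2026-04-25-timesfm-200m"}
print(spec_matches_receipt(local, receipt))  # True: key order doesn't matter
```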
Next steps
- Run a trainer node — the other side of this protocol: install the Python reference trainer, enroll, submit gradients.
- Tenzro Train docs — full RPC reference, gossip topics, on-chain commitments.
- Tenzro Train whitepaper — design rationale, multi-modal extensions, comparison with Prime Intellect / Nous / OpenDiLoCo.