Optimize node performance from observed signals
- Start from the public readiness and progression signals.
- Classify whether the bottleneck is dependency health, local resource pressure, or stalled progression.
- Change one tuning dimension at a time.
- Re-measure the same signals after each change.
- Stop when the node is stable and progressing rather than chasing vague “faster” behavior.
1. Use the public signal families as your starting point
Begin with:
- readiness endpoints
- runningState and health
- lastRequest.requestIndex and lastTx
- dependency reachability
- CPU, memory, disk, and database pressure
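The signal families above can be gathered in one snapshot. Below is a minimal sketch, assuming the node exposes a JSON readiness endpoint; the base URL and the `/readiness` path are hypothetical, while the field names (`runningState`, `health`, `lastRequest.requestIndex`, `lastTx`) follow the list above:

```python
import json
from urllib.request import urlopen


def parse_signals(status: dict) -> dict:
    """Extract the progression fields listed above from a readiness payload."""
    return {
        "runningState": status.get("runningState"),
        "health": status.get("health"),
        "requestIndex": status.get("lastRequest", {}).get("requestIndex"),
        "lastTx": status.get("lastTx"),
    }


def collect_signals(base_url: str) -> dict:
    """Fetch one snapshot from a hypothetical /readiness endpoint.

    Substitute your node's actual endpoint path for /readiness.
    """
    with urlopen(f"{base_url}/readiness") as resp:
        return parse_signals(json.load(resp))
```

Keeping the parsing separate from the fetch makes it easy to record snapshots and compare them across tuning changes later.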
2. Classify the bottleneck
Use these buckets:

| Bottleneck class | Typical signal pattern |
|---|---|
| dependency-bound | chain, deployment-service, Postgres, Redis, PCCS, AESM, oracle, or KYC dependency is unstable |
| progression-bound | readiness is green but request or transaction progression is stalling |
| resource-bound | CPU, memory, disk, or database growth is consistently high |
| mixed | more than one of the above is failing at once |
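The table above can be written as a small decision function. This is a sketch under the assumption that each signal family has already been reduced to a single boolean; the parameter names are illustrative:

```python
def classify_bottleneck(dependency_unstable: bool,
                        progression_stalled: bool,
                        resources_pressured: bool) -> str:
    """Map observed signal patterns to the bottleneck classes in the table.

    More than one failing class at once is classified as "mixed".
    """
    failing = []
    if dependency_unstable:
        failing.append("dependency-bound")
    if progression_stalled:
        failing.append("progression-bound")
    if resources_pressured:
        failing.append("resource-bound")
    if len(failing) > 1:
        return "mixed"
    return failing[0] if failing else "healthy"
```

The explicit "mixed" branch matters: tuning advice in the next step assumes you know whether one class or several are failing.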
3. Tune one category at a time
Examples:
- dependency-bound: stabilize the dependency before changing node-local settings
- progression-bound: inspect whether the node is serving but not advancing, then verify the latest update did not change live coordination behavior
- resource-bound: relieve the specific pressure first instead of changing multiple services together
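For the progression-bound case, "serving but not advancing" can be detected by comparing two snapshots of the progression fields taken some interval apart. A sketch, reusing the `health` and `requestIndex` names from the signal list in step 1:

```python
def progression_stalled(earlier: dict, later: dict) -> bool:
    """Return True when the node reports healthy but its request
    index has not advanced between two snapshots.

    Each snapshot carries 'health' and 'requestIndex' as extracted
    from the readiness payload in step 1.
    """
    ready = bool(later.get("health"))
    advanced = (later.get("requestIndex") or 0) > (earlier.get("requestIndex") or 0)
    return ready and not advanced
```

If this returns True, readiness alone is misleading; check whether the latest update changed live coordination behavior before touching resource settings.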
4. Verify after each tuning change
After any adjustment, re-check:
- readiness state
- progression fields
- dependency health
- system-pressure metrics
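The verification step can be wrapped around each tuning change so that the same checks are compared before and after. A sketch, assuming each check category above has been reduced to a pass/fail boolean; the function name and dict keys are illustrative:

```python
def verify_change(before: dict, after: dict) -> list[str]:
    """Compare the same signal checks before and after one tuning
    change; return the names of checks that regressed.

    Keys mirror the checklist above: readiness, progression,
    dependencies, pressure. True means the check passes.
    """
    checks = ("readiness", "progression", "dependencies", "pressure")
    return [c for c in checks if before.get(c) and not after.get(c)]
```

An empty result means the change held; a non-empty result names exactly which signal family regressed, so the one change just made is the first thing to revert.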