Exzo Network maintains several different clusters with different purposes.
Before you begin, make sure you have first installed the Exzo Network command line tools.
- http://native.exzonetwork.com/ Native explorer.
- http://evmexplorer.exzonetwork.com/ EVM mainnet explorer.
- http://evmexplorer.testnet.exzonetwork.com/ EVM testnet explorer.
- Testnet serves as a playground for anyone who wants to take Exzo Network for a test drive, as a user, token holder, app developer, or validator.
- Application developers should target Testnet.
- Potential validators should first target Testnet.
- Key differences between Testnet and Mainnet:
- Testnet tokens are not real
- Testnet includes a token faucet for airdrops for application testing
- Testnet may be subject to ledger resets
- Testnet typically runs a newer software version than Mainnet
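The Testnet faucet can be exercised from the command line. A minimal sketch, assuming the Exzo CLI mirrors the upstream Solana `airdrop` and `balance` subcommands and that your CLI config already points at a Testnet RPC endpoint (both assumptions, not confirmed by this document):

```shell
# Request 1 test token from the Testnet faucet (subcommand name and
# amount are assumptions based on the upstream Solana CLI):
exzo airdrop 1

# Confirm the tokens arrived:
exzo balance
```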
- Gossip entrypoint for Testnet:
- Gossip entrypoint for Mainnet:
- Shred version: 17211
- Some of the popular nodes:
exzo command-line configuration:
exzo config set --url https://api.exzoscan.io
$ exzo-validator \
    --identity ~/validator-keypair.json \
    --vote-account ~/vote-account-keypair.json \
    --no-untrusted-rpc \
    --ledger ~/validator-ledger \
    --rpc-port 8899 \
    --enable-rpc-transaction-history \
    --trusted-validator 78rvyxYJAUXGaZHJWyz7Yx81ribpAYvwupVuF9CugGws \
    --trusted-validator FSZbHLPerYngGGwgWbXHtqTLRvLmgKVeUZCKwbFttWng \
    --dynamic-port-range 8000-8010 \
    --entrypoint bootstrap.exzoscan.io:8001 \
    --limit-ledger-size
The --trusted-validator accounts listed above are operated by Exzo Network.
The Exzo Network git repository contains all the scripts you might need to spin up your own local testnet. Depending on what you're looking to achieve, you may want to run a different variation, as the full-fledged, performance-enhanced multinode testnet is considerably more complex to set up than a Rust-only, singlenode testnet. If you are looking to develop high-level features, such as experimenting with smart contracts, save yourself some setup headaches and stick to the Rust-only singlenode demo. If you're doing performance optimization of the transaction pipeline, consider the enhanced singlenode demo. If you're doing consensus work, you'll need at least a Rust-only multinode demo. If you want to reproduce our TPS metrics, run the enhanced multinode demo.
For all four variations, you'll need the latest Rust toolchain and the Exzo Network source code:
First, set up Rust, Cargo, and system packages as described in the Exzo Network README.
Now checkout the code from GitHub:
git clone https://github.com/exzonetwork/exzonetwork-chain.git
cd exzonetwork-chain
The demo code is sometimes broken between releases as we add new low-level features, so if this is your first time running the demo, you'll improve your odds of success if you check out the latest release before proceeding:
TAG=$(git describe --tags $(git rev-list --tags --max-count=1))
git checkout $TAG
Ensure important programs such as the vote program are built before any nodes are started. Note that we are using the release build here for good performance:

cargo build --release

If you want the debug build, use just cargo build and omit the NDEBUG=1 part of the subsequent demo commands.
The network is initialized with a genesis ledger generated by running the following script.
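The script itself is not shown here. In the upstream Solana multinode demo the equivalent step is the setup script below; the path is an assumption based on that layout, so adjust it if the Exzo repository differs:

```shell
# Generate the genesis ledger for the local demo cluster
# (script path assumed from the upstream multinode-demo layout):
NDEBUG=1 ./multinode-demo/setup.sh
```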
In order for the validators and clients to work, we'll need to spin up a faucet to give out some test tokens. The faucet delivers Milton Friedman-style "air drops" (free tokens to requesting clients) to be used in test transactions.
Start the faucet with:
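Assuming the Exzo fork keeps the upstream multinode-demo layout (an assumption, not confirmed by this document), the faucet would be started as:

```shell
# Run the faucet in its own shell; it must stay up while validators
# and clients request test tokens (script path is an assumption
# based on the upstream Solana demo layout):
NDEBUG=1 ./multinode-demo/faucet.sh
```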
Before you start a validator, make sure you know the IP address of the machine you want to be the bootstrap validator for the demo, and make sure that UDP ports 8000-10000 are open on all the machines you want to test with.
Now start the bootstrap validator in a separate shell:
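Assuming the same multinode-demo layout as upstream Solana (script path is an assumption), that would be:

```shell
# Start the bootstrap validator; leave this shell running:
NDEBUG=1 ./multinode-demo/bootstrap-validator.sh
```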
Wait a few seconds for the server to initialize. It will print "leader ready..." when it's ready to receive transactions. The leader will request some tokens from the faucet if it doesn't have any. The faucet does not need to be running for subsequent leader starts.
To run a multinode testnet, after starting a leader node, spin up some additional validators in separate shells:
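A sketch of that step, again assuming the upstream multinode-demo script layout:

```shell
# Each additional validator runs in its own shell and discovers the
# cluster via the bootstrap validator (script path is an assumption
# based on the upstream demo layout):
NDEBUG=1 ./multinode-demo/validator.sh
```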
To run a performance-enhanced validator on Linux, CUDA 10.0 must be installed on your system:
./fetch-perf-libs.sh
NDEBUG=1 SOLANA_CUDA=1 ./multinode-demo/bootstrap-validator.sh
NDEBUG=1 SOLANA_CUDA=1 ./multinode-demo/validator.sh
Testnet Client Demo
Now that your singlenode or multinode testnet is up and running, let's send it some transactions!
In a separate shell, start the client:
NDEBUG=1 ./multinode-demo/bench-tps.sh # runs against localhost by default
What just happened? The client demo spins up several threads to send 500,000 transactions to the testnet as quickly as it can. The client then pings the testnet periodically to see how many transactions it processed in that time. Take note that the demo intentionally floods the network with UDP packets, such that the network will almost certainly drop a bunch of them. This ensures the testnet has an opportunity to reach 710k TPS. The client demo completes after it has convinced itself the testnet won't process any additional transactions. You should see several TPS measurements printed to the screen. In the multinode variation, you'll see TPS measurements for each validator node as well.
There are some useful debug messages in the code; you can enable them on a per-module and per-level basis. Before running a leader or validator, set the normal RUST_LOG environment variable.
- To enable debug only in the exzonetwork::banking_stage module:
- To enable BPF program logging:

Generally we are using debug for infrequent debug messages, trace for potentially frequent messages, and info for performance-related logging.
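As a sketch of how such a filter is set, assuming the crate and module names follow the upstream Solana convention (the exzonetwork::banking_stage path matches the module named above; the exact BPF loader crate name is a guess):

```shell
# info everywhere, but debug in the banking_stage module:
export RUST_LOG=info,exzonetwork::banking_stage=debug

# For BPF program logging you would instead raise the level on the
# BPF loader crate, e.g. RUST_LOG=exzonetwork_bpf_loader=trace
# (crate name is an assumption based on the upstream naming scheme).

echo "$RUST_LOG"
```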
You can also attach to a running process with GDB. The leader's process is named exzonetwork-validator:
sudo gdb
attach <PID>
set logging on
thread apply all bt
This will dump all the thread stack traces into gdb.txt.
In this example the client connects to our public testnet. To run validators on the testnet, you would need to open UDP ports 8000-10000.
NDEBUG=1 ./multinode-demo/bench-tps.sh --entrypoint bootstrap.testnet.exzoscan.io:8001 --faucet bootstrap.testnet.exzoscan.io:9900 --duration 60 --tx_count 50
Exzo Network cluster performance is measured by the average number of transactions per second the network can sustain (TPS), and by how long it takes for a transaction to be confirmed by a super majority of the cluster (confirmation time).
Each cluster node maintains various counters that are incremented on certain events. These counters are periodically uploaded to a cloud-based database. The Exzo Network metrics dashboard fetches these counters, computes the performance metrics, and displays them on the dashboard.
Each node's bank runtime maintains a count of the transactions it has processed. The dashboard first calculates the median transaction count across all metrics-enabled nodes in the cluster. The median cluster transaction count is then averaged over a 2-second period and displayed in the TPS time series graph. The dashboard also shows the Mean TPS, Max TPS, and Total Transaction Count stats, which are all calculated from the median transaction count.
Each validator node maintains a list of active ledger forks that are visible to the node. A fork is considered frozen when the node has received and processed all entries corresponding to the fork. A fork is considered confirmed when it receives a cumulative super majority vote and one of its child forks is frozen.
The node assigns a timestamp to every new fork, and computes the time it took to confirm the fork. This time is reflected as validator confirmation time in performance metrics. The performance dashboard displays the average of each validator node's confirmation time as a time series graph.
The validator software is deployed to GCP n1-standard-16 instances with 1TB pd-ssd disk, and 2x Nvidia V100 GPUs. These are deployed in the us-west-1 region.
solana-bench-tps is started after the network converges, from a client machine with an n1-standard-16 CPU-only instance, with the following arguments:
--tx_count=50000 --thread-batch-sleep 1000
TPS and confirmation metrics are captured from the dashboard numbers, averaged over the 5 minutes after the bench-tps transfer stage begins.