Geo-replicated by a content delivery network (CDN) to Falkenstein (Germany), New York, Los Angeles, Singapore and Sydney
Compressed using xz[1] due to its very good compression ratio and fast decompression time
Empty Address Book in Database
Fair Use:
There is a generous monthly bandwidth limit to manage costs. If the limit has been reached and you urgently need to download a snapshot, please send a private message to @Stuart.
PS: I’ll add some instructions on how to decompress the snapshot file and some css to make the page not look like something from 1998…
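In the meantime, here is a rough sketch of how a node runner might unpack an xz-compressed snapshot archive. The filenames below are hypothetical placeholders (the sample data is created in the script purely for illustration); the real snapshot file will have a different name:

```shell
# --- Setup: create a sample tarball compressed with xz, mimicking the snapshot format ---
# (In practice you would skip this and start from the downloaded .tar.xz file.)
mkdir -p ledger_sample
echo "sample ledger data" > ledger_sample/data.txt
tar -cf radixdb-snapshot.tar ledger_sample
xz -9 radixdb-snapshot.tar            # produces radixdb-snapshot.tar.xz
rm -rf ledger_sample                  # remove originals so extraction is demonstrated cleanly

# --- Decompress and unpack the snapshot archive ---
xz -d radixdb-snapshot.tar.xz         # restores radixdb-snapshot.tar
tar -xf radixdb-snapshot.tar          # extracts the ledger files

# Or, as a one-liner (GNU tar handles the xz step itself with -J):
#   tar -xJf radixdb-snapshot.tar.xz

cat ledger_sample/data.txt
```

The extracted files would then be moved into the node's database directory before starting the node.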
In the event a validator’s own backup database is corrupted, or they find themselves needing to sync from scratch, the database can be restored from a snapshot to minimise the time needed to sync with the ledger. Otherwise, a full sync takes 18+ hours.
When a new Radix node starts up for the first time, it needs to create a full copy of the ledger. The ledger contains an ordered record of every single transaction that has occurred since the genesis of the Radix public network. The ledger gets larger over time as more transactions are added.
There are various factors that affect how long it takes to build the ledger and supporting database files, such as network bandwidth, the memory and CPU power of the Radix node, the type of file storage, and so on. People have reported anywhere from 8 up to 24 hours for the ledger to become fully synchronised (up to date with the rest of the Radix public network). While a node is synchronising, it cannot complete proposals (participate in consensus and validate transactions) or receive staking rewards.
The snapshot archive is a copy of all the files that make up a synchronised ledger at the time the snapshot was taken. A node runner can download a “ready made” copy of the ledger files and then when the Radix node starts it only has to fetch the transactions that have occurred since the snapshot.
So, rather than having to wait 8 to 24 hours to sync, a node runner can download a snapshot (30 minutes to 2 hours to download) and then catch up (a matter of minutes).
As Faraz explained, the snapshots are most useful when an existing Top 100 validator’s database becomes corrupted/lost and they need to recover as quickly as possible.
Thanks for creating and sharing this Stuart. Can I check whether the je.properties file has the recommended settings? Could you paste a copy of the file contents if you have it to hand? (Saves downloading the whole backup to check.)
thanks Andrew
Sure, the je.properties file has the following content in the snapshot archive:
# Set the log file size to 1Gb each (Default: 100Mb)
je.log.fileMax=1073741824
# Run the checkpointer every 512Mb of data (Default: 20Mb)
je.checkpointer.bytesInterval=536870912
# Don't collect stats in je.stats.csv
je.stats.collect=false
# Only log warnings in je.info.*
com.sleepycat.je.util.FileHandler.level=WARNING
The snapshot is a copy taken from one of my production standby nodes, so it has picked up my settings. I haven’t checked, but it should still work if you change the settings to your preferred values.