@gavinandresen
Last active December 14, 2023 23:28
Storing the UTXO as a bit-vector

Half-baked thoughts exploring a different way of implementing a fully-validating BCH node.

The idea is to shift the storage of full transaction data to wallets, and explore how little data a fully validating node could store. This isn't a problem today (the UTXO set easily fits in the RAM of an inexpensive server-class machine), but might eventually be at very large transaction volumes.

Initial block download is a problem today (it is annoying to have to wait several hours or days to sync up a new node), and this scheme could make it orders of magnitude faster by shifting the time when full transaction data is broadcast from initial block download to new transaction announcement.

Start with a network protocol where wallets broadcast transactions as:

  • The transaction's data (this is what the current BCH protocol broadcasts in 'tx' messages)
  • PLUS, for every confirmed input of the transaction:
    • Data for the previous transaction whose output is being spent, AND
    • A Merkle path from that previous transaction to the Merkle root in its block header

... and the network protocol also requires that every new block message is preceded by transaction messages carrying previous-transaction data and Merkle paths for every transaction in the block.
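
To make the validation step concrete, here is a minimal sketch (not part of the proposal's wire format) of checking a previous transaction's Merkle path against a block header's Merkle root; the function and parameter names, the `(sibling_hash, sibling_is_on_right)` path encoding, and Bitcoin-style double-SHA256 hashing are assumptions for illustration:

```python
# Minimal sketch: verify that a spent (previous) transaction is committed
# to by a block header's Merkle root. merkle_path is a list of
# (sibling_hash, sibling_is_on_right) pairs from leaf level up to the root.
import hashlib

def double_sha256(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def verify_merkle_path(prev_tx_bytes: bytes,
                       merkle_path: list,
                       header_merkle_root: bytes) -> bool:
    node = double_sha256(prev_tx_bytes)          # txid of the spent transaction
    for sibling, sibling_on_right in merkle_path:
        if sibling_on_right:
            node = double_sha256(node + sibling)
        else:
            node = double_sha256(sibling + node)
    return node == header_merkle_root
```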

Let every node in this new network store a bit-vector for every block, with as many bits as there are outputs in the block, where each bit stores whether or not that output has been spent (1 == unspent, 0 == spent).

With the network protocol described above and bit-vectors for every block, the node has enough information to fully validate every transaction.
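
As a rough illustration, here is one way the per-block bit-vector and the input check could fit together; the (block height, output index within block) addressing and all class and function names are assumptions, not something the proposal specifies:

```python
# Sketch of the per-block spent/unspent bit-vector and how a node might
# use it to validate (and then apply) one input of a new transaction.
class BlockUTXOBits:
    """One bit per output created in a block: 1 == unspent, 0 == spent."""

    def __init__(self, n_outputs: int):
        self.n_outputs = n_outputs
        # Start with every output unspent (all bits set).
        self.bits = bytearray(b"\xff" * ((n_outputs + 7) // 8))

    def is_unspent(self, index: int) -> bool:
        return bool(self.bits[index // 8] & (1 << (index % 8)))

    def mark_spent(self, index: int) -> None:
        self.bits[index // 8] &= ~(1 << (index % 8)) & 0xFF


def validate_and_apply_input(bits_by_height: dict, height: int,
                             output_index: int, merkle_proof_ok: bool) -> bool:
    """Accept the input only if the spent transaction is proven to be in the
    block (Merkle path checked as in the sketch above) and its output bit is
    still 1; then flip the bit to 0."""
    block_bits = bits_by_height[height]
    if not (merkle_proof_ok and block_bits.is_unspent(output_index)):
        return False
    block_bits.mark_spent(output_index)
    return True
```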

Back-of-the-envelope calculations for gigabyte-sized blocks: A gigabyte block full of 250-byte 2-input, 2-output transactions would contain 4 million transactions and 8 million outputs, which would be a 1-megabyte bit-vector. 10 years of gigabyte-sized blocks would fit in roughly 526 gigabytes of RAM, which is reasonable for a powerful server (servers with terabytes of RAM are common these days).
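
For reference, the same back-of-the-envelope arithmetic, using only the assumptions stated above (1 GB blocks, 250-byte 2-in/2-out transactions, one block every ten minutes):

```python
# Back-of-the-envelope check of the figures above.
block_size_bytes  = 1_000_000_000
tx_size_bytes     = 250
outputs_per_tx    = 2

txs_per_block     = block_size_bytes // tx_size_bytes      # 4,000,000 transactions
outputs_per_block = txs_per_block * outputs_per_tx         # 8,000,000 outputs
bitvector_bytes   = outputs_per_block // 8                 # 1,000,000 bytes (1 MB)

blocks_per_10_years = 6 * 24 * 365 * 10                    # 525,600 blocks
total_bytes = bitvector_bytes * blocks_per_10_years
print(total_bytes / 1e9, "GB of uncompressed bit-vectors")  # ~526 GB
```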

The bit-vectors for old blocks are HIGHLY compressible, because most outputs are spent (TODO: run some BlockSci queries to see how quickly percentage unspent decays....), so the UTXO set can be much smaller on disk (TODO: how much smaller?). If lookup speed when processing new transactions is not a concern, the bit-vectors can be stored compressed in memory; I'm sure there are clever data structures for efficiently querying and updating sparse bit-vectors.
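
One simple-minded sketch of such a structure, assuming an old block is mostly spent: store just the indices of the remaining unspent outputs instead of the raw bit-vector. (Off-the-shelf compressed bitmap structures such as roaring bitmaps target exactly this kind of query/update on sparse bitmaps.)

```python
# Sketch of a compact representation for an old, mostly-spent block:
# keep only the indices of still-unspent outputs. A membership test
# replaces the bit lookup; names are illustrative.
class SparseUnspentSet:
    def __init__(self, unspent_indices):
        self.unspent = set(unspent_indices)   # small when most outputs are spent

    def is_unspent(self, index: int) -> bool:
        return index in self.unspent

    def mark_spent(self, index: int) -> None:
        self.unspent.discard(index)
```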

Initial block download for a new node could be very efficient:

  1. Query N peers for the current best block hash (to find tip of best-work chain)
  2. Fetch all 80-byte block headers from peers
  3. For each block, fetch compressed UTXO bit-vector from N peers; if they all match, great. If they disagree, then maybe your peers are malicious, so... uhhh... do something........

The "something" could be:

  • Fail and complain loudly. This might be the best option if your IBD peers are hand-picked ("I mostly trust Coinbase and MIT's DCI and my friend Bob, so I'll peer with them to bootstrap quickly")

  • Use a trust threshold: if 8 or 9 of 10 peers agree, then assume supermajority is correct and disconnect/ban the others.

  • Pick a different, random set of N peers and try again.

  • If you're really paranoid, fall back to downloading full blocks from archiving nodes.

I think the best option is to let node operators hand-pick one or more semi-trusted nodes to get fast boot-strapping; that is simple and there are plenty of full-node operators who have the right incentives to always serve up correct UTXO bit-vectors.
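
For illustration, a sketch of what the supermajority check during initial block download could look like; `fetch_bitvector` is a hypothetical network call, and the 80% threshold is just the "8 or 9 of 10" figure from above:

```python
# Sketch of the "trust threshold" option: fetch the compressed bit-vector
# for one block from N peers and accept it only if a supermajority return
# byte-identical data. Peers in the returned `disagreeing` list would be
# candidates for disconnect/ban.
import hashlib
from collections import Counter

def ibd_fetch_block_bits(block_hash, peers, fetch_bitvector, threshold=0.8):
    digests = {}
    for peer in peers:
        data = fetch_bitvector(peer, block_hash)          # hypothetical call
        digests[peer] = (hashlib.sha256(data).hexdigest(), data)

    counts = Counter(digest for digest, _ in digests.values())
    best_digest, votes = counts.most_common(1)[0]
    if votes / len(peers) >= threshold:
        disagreeing = [p for p, (d, _) in digests.items() if d != best_digest]
        agreed_data = next(data for d, data in digests.values() if d == best_digest)
        return agreed_data, disagreeing
    raise RuntimeError("peers disagree about UTXO bit-vector; fail loudly")
```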

To ponder: is speeding up initial block download and saving memory at the cost of 2-3 times the bandwidth for new transaction announcements the right tradeoff? A gigabit per second connection is 75 gigabytes every ten minutes, so plenty of bandwidth for a few gigabytes of transaction data that translates into a gigabyte-sized block.

@jli225

jli225 commented Jan 3, 2018

Great contribution.

@JaredR26

JaredR26 commented Jan 3, 2018

Hey Gavin. I have an idea I've been working on that uses principles similar to this. It's not ready to publish yet or I would, but I'll send you what I've got in case you are interested. (Personally a little burned out from bitcoin)

@KainniaK

KainniaK commented Jan 3, 2018

Thx for your work Gavin. Wish I was smart enough to understand all of it, but I think there are so many people and users that really appreciate the work. Keep it up!

@1mbtxn

1mbtxn commented Jan 3, 2018

Food for thought: the latest XBox One X games seem to all require 50 to 100+ GB download syncs per new game purchase these days... Orders of magnitude more bandwidth is consumed on media these days than in our crypto space. Thanks again Gavin for pushing through.

@WestonReed

Great contribution! Keep up the great work!

@bvk

bvk commented Jan 3, 2018

Just a random thought, not sure where this leads:

  1. The bit vector of block 1 can be imagined to have an infinitely long run of zeros towards the LSB side.

  2. The bit vector of block 2 can then be an XOR applied on top of the infinitely long bit vector of block 1, again with infinitely long zeros towards the LSB side.

  3. The bit vector of block 3 can be an XOR on the result of step 2, and so on.

  4. Thus every block results in an XOR on an infinitely long bit vector which always ends with infinitely long zeros at the LSB end.

Can we eliminate the need for trust if every new block included a SHA-256 (or other) hash of the bit-vector XOR result of all blocks before it?

Can we, maybe, use a range-based encoding scheme to represent infinitely long bit vectors in compact form, with a way to perform the XOR operation?
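
A minimal sketch of this XOR-chaining idea, using a Python integer to stand in for the "infinitely long" bit vector; the commitment function and where a block would carry it are assumptions:

```python
# Sketch of the XOR-chaining idea: each block contributes an XOR delta to
# one ever-growing bit vector (new outputs set bits, spends clear them -
# both are bit flips), and a block could commit to a hash of the result.
import hashlib

def apply_block_delta(running_bits: int, block_delta: int) -> int:
    return running_bits ^ block_delta

def commitment(running_bits: int, width_bytes: int) -> bytes:
    # width_bytes must be large enough to cover every output created so far.
    return hashlib.sha256(running_bits.to_bytes(width_bytes, "big")).digest()
```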

@theantnest

theantnest commented Jan 3, 2018

The bit-vectors for old blocks are HIGHLY compressible, because most outputs are spent

I often wondered why every node needs to store the history of every coin - from its current UTXO back to when it was mined.

Couldn't we have some method of taking (and later verifying) a snapshot of only every coin fragment's current location? That history has already been verified by mining POW, so if we use the POW-backed blockchain to make a hashed/signed snapshot of only where every coin is at the time of the snapshot, and not its entire transaction history, this could be a great way to solve the problem of spinning up a new big-block node in the future.

New nodes could just start by loading the last verified snapshot, like a waypoint, and start verifying the blockchain fully from that point.

On that notion, you could have a child blockchain which stores and verifies validated snapshots, which will be MUCH smaller than the whole blockchain, and still backed by consolidated POW. Also, blockchain analysis would be impossible from a snapshot, which is a plus for privacy.

What is the point of every brand new node needing to be able to verify every cup-of-coffee transaction since genesis? If we start from a known good point, we only need it to verify future transactions until the next snapshot, after which all the extraneous data could be consolidated and discarded.

Obviously future smart contract TX would need to be "continued over" in the child chain as well somehow.

@1Hyena

1Hyena commented Jan 3, 2018

@theantnest
Cool idea. Why not code into the protocol a random selection of the snapshot block somewhere in the past? For example, use the hash of the concatenation of the last 1000 blocks to determine the snapshot block. This way it is impossible to predict which block is going to be the next snapshot.

@theantnest

I would love for somebody more knowledgeable than me to look into the whole idea.

@mboyd1

mboyd1 commented Jan 3, 2018

I am glad you are working on Bitcoin again, Gavin

@RoboTeddy

RoboTeddy commented Jan 4, 2018

Providing Merkle proofs of the dependencies of transactions reminds me a lot of Vitalik's The Stateless Client Concept.

@lacksfish

lacksfish commented Jan 4, 2018

It also sounds like what has been described as UTXO commitments. They could either be optionally provided in the coinbase or made mandatory by soft forking.

EDIT: But as proposed here it would only need the Merkle path to the Merkle root of a certain block.

There should maybe be tests of how resource-intensive it is to calculate the changing bit-vector from block to block if it were mandatory to do so.

@kingvest

kingvest commented Jan 4, 2018

Can we eliminate the need for trust
On that notion, you could have a child blockchain which stores and verifies validated snapshots

This idea is worth more exploration ...

@JaredR26

JaredR26 commented Jan 4, 2018

So the general idea of using a bit vector to track spent outputs isn't a bad one, but in my opinion it would need some sort of commitment within blocks. Simply trusting a majority of peers is a substantial change to the trust & vulnerability model of full nodes.

However, the bandwidth overhead isn't going to be a worthwhile trade-off at much larger scales. The minimum costs of running a full node come down to the bandwidth required to stay in sync with the network, the storage required for the current serialized UTXO set & indexes, and the RAM and CPU required to keep up with the speed of the network.

I'm going to use 500 MB blocks for these examples because the BU team has tested commodity hardware with roughly 500 MB blocks and successfully kept the network in sync with few changes to the software (someone can correct me if I'm mistaken, but that's what it looked like last I saw Peter's excellent research). So at least until that point, CPU and RAM constraints won't massively contribute to the minimum costs of running a full node and can be treated as a flat $10 per month cost for this example.

Assuming that the serialized UTXO set scales equivalently with block size, a 500 MB block size might mean a 500x serialized UTXO dataset. Statoshi.info gives that as 3.301 GB today, so that would be 1650 GB with 500 MB blocks. After accounting for technology improvement trends (10% per year bandwidth, 14% per year storage costs, see below), that's about $20.41 per month in 2028, when previous trends would allow us to fill 500 MB blocks. So this cost would be double the baseline cost of the computer/server hosting.

But what about bandwidth? To estimate it, I measured the real bandwidth usage of a full (pruning-enabled) node on the network. I first did this earlier this year, running the test twice for several days each time, and found that my pruned full node was consuming about 75 GB per month of bandwidth. At 500 MB blocks with the current system that would be 37.5 TB/month of bandwidth. Many cloud providers include bandwidth in the pricing of the system, but I found several cloud providers offering large-scale bandwidth for as low as $0.01-$0.02 per GB per month. Bandwidth price improvement per year varies by location, but I was able to find a range of 8-18% price improvement per year for backbone-scale bandwidth providers, so assuming bandwidth prices drop by 10% per year, in 2028 when the 500 MB blocks were full that 37.5 TB would cost about $224.42 per month. That means minimal bandwidth costs ten times as much as the minimal storage costs, and 20 times as much as the baseline hosting.

Note that these numbers all get much, much worse when syncing / full history is added to the picture. Earlier this year my full node supporting syncing was consuming 1.5-2.5 TB per month of bandwidth, and assuming +80% per year additional transactions (matching our 2013-2016 growth rates) the full history size would be 68.5 TB right when 500 MB blocks began. My conclusion from this is that a much better process for syncing is clearly necessary, à la UTXO commitments and warp-sync, but I'm skeptical of any such process that increases the largest minimal full-node cost: bandwidth.
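
The projection above boils down to compound price decline; a sketch with assumed present-day prices (roughly $0.055 per GB-month of storage and $0.017 per GB of bandwidth, chosen only to approximate the figures quoted, not measured values) looks like:

```python
# Sketch of the cost-projection arithmetic: compound yearly price improvement.
def projected_monthly_cost(gigabytes, price_per_gb_today, yearly_improvement, years):
    """cost = GB * price_today * (1 - improvement)^years"""
    return gigabytes * price_per_gb_today * (1 - yearly_improvement) ** years

# Assumed inputs chosen to roughly reproduce the comment's figures:
storage_2028   = projected_monthly_cost(1_650,  0.055, 0.14, 10)  # ~ $20/month
bandwidth_2028 = projected_monthly_cost(37_500, 0.017, 0.10, 10)  # ~ $222/month
```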

@mariodian

the latest XBox One X games seem to all require 50 to 100+ GB download syncs per new game purchase these days

@1mbtxn Is every XBox owner constantly broadcasting this data to other users? If not, then the comparison is a bit off. There's a big difference between downloading n * 50 GB of data per year and downloading, and most importantly uploading, gigabytes of data per day, 24/7.

@Danconnolly

The UTXO bit-vector changes. Imagine the following scenario: you connect to a peer and start syncing. A block is found which spends a UTXO from a block for which you have not yet retrieved the UTXO bit-vector. Will you receive the updated UTXO bit-vector or the original?

It's not reasonable to expect a peer to keep track of all the peers that are syncing.

One option: send the height of the chain with each bit-vector, but this would increase transmission size considerably with a lot of redundant data.

Another option: make sure that the peer sends the block announcement before sending any more bit-vectors, but this depends on good behaviour of the peer; what would happen if it misbehaved? Something to think about.
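
A sketch of the first option (tagging each bit-vector with the sender's tip) to make the staleness check concrete; the message fields are hypothetical, not an existing wire format:

```python
# Sketch: every bit-vector message carries the tip the sender built it
# against, so a syncing node can detect vectors that changed underneath it
# and re-request them after processing the newer block.
from dataclasses import dataclass

@dataclass
class BitVectorMsg:
    block_hash: bytes       # block this bit-vector describes
    tip_height: int         # sender's chain height when it was serialized
    tip_hash: bytes         # sender's tip hash at that moment
    compressed_bits: bytes

def is_stale(msg: BitVectorMsg, expected_tip_hash: bytes) -> bool:
    # If the sender's tip moved past what we are syncing against, bits for
    # already-fetched blocks may have changed; fetch those blocks' vectors again.
    return msg.tip_hash != expected_tip_hash
```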
