Why data availability sampling matters for blockchain scaling

Data availability sampling uses polynomials to prove that a block's data has been published, without downloading the whole block


On-chain data availability has become an increasingly common topic as Ethereum continues to scale.

Today, Ethereum developers are looking at where and how data should be stored on blockchain networks as they work towards resolving the so-called blockchain trilemma: the tradeoff between security, scalability and decentralization.

In crypto, data availability refers to the guarantee that the data stored on a network is accessible and retrievable by all network participants.

On Ethereum's layer-1, the network's nodes download all the data in each block, making it difficult for invalid transactions to be executed.

Although this approach guarantees security, it is relatively inefficient: asking every network node to verify and store all the data in a block drastically reduces throughput and hinders blockchain scalability.

Ethereum layer-2 scaling solutions are designed to resolve this problem. 

One popular class of solutions today is optimistic rollups, such as Arbitrum and Optimism. Optimistic rollups are "optimistic" in that they assume transactions are valid until proven otherwise.

Most rollups today have only a single sequencer, which introduces a centralization risk, Anurag Arjun, co-founder of modular blockchain Avail, told Blockworks.

This is not a major problem at present, because rollups must post their raw transaction data to Ethereum using something called calldata, the cheapest form of storage on Ethereum today, as Arjun notes.

Once that calldata is submitted to Ethereum mainnet, anyone can challenge its accuracy within a set period of time, according to Neel Somani, the founder of blockchain scaling solution Eclipse.

If no one challenges the rollup's validity before that window closes, it is accepted on Ethereum.
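To make those mechanics concrete, here is a minimal Python sketch of the optimistic flow described above. The class, field names and seven-day window are illustrative assumptions for this article, not the on-chain code of any production rollup.

```python
import time

# Illustrative value only: real rollups enforce the window on-chain,
# and the exact length varies by project.
CHALLENGE_WINDOW_SECS = 7 * 24 * 60 * 60

class RollupBatch:
    """Toy model of an optimistic rollup batch posted to L1 as calldata."""

    def __init__(self, calldata: bytes):
        self.calldata = calldata        # raw transaction data, visible to everyone
        self.posted_at = time.time()
        self.challenged = False

    def challenge(self, fraud_proof_is_valid: bool) -> None:
        """Anyone holding the data can dispute the batch inside the window."""
        in_window = time.time() - self.posted_at < CHALLENGE_WINDOW_SECS
        if in_window and fraud_proof_is_valid:
            self.challenged = True

    def is_final(self) -> bool:
        """Unchallenged batches are accepted once the window closes."""
        window_over = time.time() - self.posted_at >= CHALLENGE_WINDOW_SECS
        return window_over and not self.challenged
```

The key point is that the challenge step only works if the challenger can actually read the posted calldata, which is exactly the data availability requirement Somani describes next.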

The problem, Somani notes, is that no one can prove a transaction was executed incorrectly without access to the data behind it.

“If I don’t tell you what I executed, there’s no way for you to possibly prove that it is wrong, so you need to know exactly what I executed in order to fix that,” Somani said. “So all blockchains must prove data availability in some way, shape or form.”

Data availability sampling

Since all blockchains must prove data availability, but downloading a full block is inefficient, the initial data availability problem resurfaces: how can a node verify the data exists without fetching all of it?

“So as someone who doesn’t want to download the full block, I still want the confidence that the information on the block is not being withheld,” Somani said.

The solution, according to Somani, is data availability sampling, which lets a node gain confidence that the block is actually there.

Data availability sampling involves sampling random parts of the block to obtain arbitrarily high confidence that the block is there, Somani explains. 

The technique relies on polynomials, mathematical expressions built from variables, coefficients and exponents. A block's data is encoded as points on a polynomial, so the full block can be reconstructed from any sufficiently large subset of those points.
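As a toy illustration of why recovery works, the sketch below encodes four data chunks as evaluations of a degree-3 polynomial and shows that any four of the eight extended points rebuild the originals. It uses exact rational arithmetic for readability; production designs use Reed-Solomon erasure coding over finite fields together with commitments (Merkle trees or KZG), none of which is shown here.

```python
from fractions import Fraction

def lagrange_interpolate(points, x):
    """Evaluate the unique polynomial passing through `points` at `x`,
    using Lagrange interpolation with exact rational arithmetic."""
    total = Fraction(0)
    for i, (xi, yi) in enumerate(points):
        term = Fraction(yi)
        for j, (xj, _) in enumerate(points):
            if i != j:
                term *= Fraction(x - xj, xi - xj)
        total += term
    return total

# Treat four data chunks as evaluations of a degree-3 polynomial at x = 0..3,
# then extend the "block" with four more evaluations at x = 4..7.
data = [17, 42, 9, 133]
originals = list(enumerate(data))
extension = [(x, lagrange_interpolate(originals, x)) for x in range(4, 8)]
extended_block = originals + extension

# Any 4 of the 8 points suffice to rebuild every original chunk.
sample = [extended_block[i] for i in (1, 3, 5, 6)]
recovered = [int(lagrange_interpolate(sample, x)) for x in range(4)]
assert recovered == data
print(recovered)  # [17, 42, 9, 133]
```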

A common misinterpretation of data availability sampling is that sampling half the block only gives 50% confidence that its data is available, Somani said. This isn't true, he explains, because what matters is whether samplers collect enough points to recover the original polynomial; confidence rises much faster than the fraction of the block sampled.
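The arithmetic behind that confidence is simple to sketch. Under the common assumption that the block is extended 2x by erasure coding, a producer hiding any data must withhold at least half of the extended block, so each independent random sample catches the withholding with probability at least one half. The snippet below is an illustrative model of that bound, not any client's implementation.

```python
def confidence(num_samples: int, withheld_fraction: float = 0.5) -> float:
    """Chance that at least one of `num_samples` uniform random samples
    lands on a withheld chunk, assuming `withheld_fraction` of the
    extended block must be withheld to make the data unrecoverable."""
    return 1.0 - (1.0 - withheld_fraction) ** num_samples

for n in (1, 5, 10, 20, 30):
    print(f"{n:2d} samples -> {confidence(n):.10f} confidence")
```

After 30 samples the chance of missing withheld data is below one in a billion, which is why a light node can check a tiny fraction of a block yet be nearly certain it is available.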

Projects like Celestia and Avail are currently building out data availability sampling solutions.

“What we sincerely believe is that every base layer is going to be a data availability layer,” Arjun told Blockworks. “The main directional fight that we are having is wanting to scale data availability at the base layer, and have execution and roll up on the second layer.”

