Why data availability sampling matters for blockchain scaling

Data availability sampling uses polynomials to let users verify that a block's transaction data was actually published

On-chain data availability has become an increasingly common topic as Ethereum continues to scale.

Today, Ethereum developers are examining where and how data should be stored on blockchain networks as they work toward resolving the so-called blockchain trilemma: the tradeoff between security, scalability and decentralization.

In crypto, data availability refers to the idea that data stored on a network is accessible and retrievable by all network participants.

On Ethereum's layer-1, the network's nodes download all the data in each block, making it difficult for invalid transactions to be executed.

Although this guarantees security, the process is relatively inefficient: requiring every network node to verify and store all the data in a block drastically reduces throughput and hinders blockchain scalability.

Ethereum layer-2 scaling solutions are designed to resolve this problem. 

One popular solution today is the optimistic rollup, used by networks such as Arbitrum and Optimism. Optimistic rollups are "optimistic" in nature because they assume transactions are valid until proven otherwise.

Most rollups today have only a single sequencer, which creates a centralization risk, Anurag Arjun, co-founder of modular blockchain Avail, told Blockworks.

This is not a major problem at present, as rollup solutions must put the raw transaction data on Ethereum using something called calldata, which Arjun notes is currently the cheapest form of storage on Ethereum.
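
For a rough sense of why calldata is the cheap option, a back-of-the-envelope comparison of current gas pricing (EIP-2028 calldata costs versus fresh contract-storage slots) looks something like this; the figures are illustrative, not a precise fee estimate:

```python
# Rough comparison of data costs on Ethereum (illustrative only):
# under EIP-2028, calldata costs 16 gas per non-zero byte and 4 gas per
# zero byte, while writing to a fresh contract-storage slot (SSTORE)
# costs 20,000 gas per 32-byte word.

def calldata_gas(data: bytes) -> int:
    return sum(4 if b == 0 else 16 for b in data)

def storage_gas(data: bytes) -> int:
    words = (len(data) + 31) // 32
    return words * 20_000

batch = bytes(range(1, 256)) * 4      # ~1 KiB of non-zero bytes
print(calldata_gas(batch))            # ~16,000 gas as calldata
print(storage_gas(batch))             # ~640,000 gas in contract storage
```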

Once that calldata is submitted to Ethereum mainnet, anyone can challenge its accuracy within a set period of time, according to Neel Somani, the founder of blockchain scaling solution Eclipse.

If no one challenges the validity of the rollup within that window, it is accepted on Ethereum.

The problem, Somani notes, is how someone can then prove that a transaction was executed inaccurately if they don’t have the data.

“If I don’t tell you what I executed, there’s no way for you to possibly prove that it is wrong, so you need to know exactly what I executed in order to fix that,” Somani said. “So all blockchains must prove data availability in some way, shape or form.”
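
A stripped-down sketch of that optimistic flow, with hypothetical names and a placeholder seven-day window rather than any specific rollup's contracts, might look like this:

```python
# Sketch of the optimistic-rollup flow described above (illustrative only).
from dataclasses import dataclass, field
import time

CHALLENGE_WINDOW = 7 * 24 * 3600  # hypothetical fraud-proof window, in seconds

@dataclass
class RollupBatch:
    claimed_root: bytes               # state root the sequencer claims
    calldata: bytes                   # raw transaction data posted to L1
    posted_at: float = field(default_factory=time.time)
    challenged: bool = False

    def challenge(self, reexecuted_root: bytes) -> None:
        # A challenger can only re-execute the batch -- and prove it wrong --
        # if the posted calldata is actually available to them.
        if reexecuted_root != self.claimed_root:
            self.challenged = True

    def is_final(self, now: float) -> bool:
        # Accepted "optimistically" once the window passes without a challenge.
        return not self.challenged and now - self.posted_at >= CHALLENGE_WINDOW
```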

Data availability sampling

All blockchains must prove data availability, but downloading every full block is inefficient, which brings back the original data availability problem.

“So as someone who doesn’t want to download the full block, I still want the confidence that the information on the block is not being withheld,” Somani said.

The solution, according to Somani, is the use of data availability sampling to gain confidence that the block is actually there.

Data availability sampling involves sampling random parts of the block to obtain arbitrarily high confidence that the block is there, Somani explains. 

This technology relies on polynomials, mathematical expressions built from variables, coefficients and exponents, to model the relationships between the pieces of data in a block.
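
A toy illustration of the underlying idea, using a small prime field and Lagrange interpolation rather than any project's actual encoding: treat a chunk of block data as the coefficients of a polynomial, publish extra evaluations of it, and recover everything from any sufficiently large subset of the published points.

```python
# Toy sketch (not Celestia's or Avail's actual scheme): encode k data chunks
# as a degree k-1 polynomial, publish n > k evaluations (erasure coding),
# and reconstruct the rest from any k points via Lagrange interpolation.

P = 2**31 - 1  # a small prime; production systems use far larger fields

def eval_poly(coeffs, x):
    """Evaluate the polynomial with the given coefficients at x, mod P."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % P
    return acc

def interpolate(points, x):
    """Evaluate, at x, the unique degree < len(points) polynomial passing
    through `points` (pairs of (xi, yi)), using Lagrange interpolation."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (x - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P  # Fermat inverse
    return total

data = [17, 42, 7, 99]                                   # k = 4 data chunks
published = [(x, eval_poly(data, x)) for x in range(8)]  # n = 8 points (2x extension)

# Any 4 of the 8 published points reconstruct every other point exactly:
subset = [published[1], published[3], published[6], published[7]]
assert all(interpolate(subset, x) == y for x, y in published)
```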

A common misinterpretation of data availability sampling is that if you sample half the block, you only gain 50% confidence that the information in the block is accurate, Somani said. This isn't true, he explains, because with data availability sampling the point is simply to ensure enough points exist to recover the original polynomial, so confidence rises much faster than the share of the block sampled.
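
Under the common assumption of a 2x erasure-coded block, a producer hiding any data at all must withhold more than half of the extended chunks, so each uniformly random sample hits a missing chunk with probability greater than one half and confidence compounds per sample. A rough sketch of the arithmetic:

```python
# Back-of-the-envelope sketch, assuming 2x erasure coding: hiding any data
# requires withholding over half of the extended chunks, so every random
# sample lands on a withheld chunk with probability > 1/2.

def confidence(samples: int, withheld_fraction: float = 0.5) -> float:
    """Probability that at least one random sample hits a withheld chunk."""
    return 1.0 - (1.0 - withheld_fraction) ** samples

for s in (1, 5, 10, 20, 30):
    print(f"{s:>2} samples -> {confidence(s):.10f}")
# 30 samples already give better than 0.999999999 confidence that the data
# is available, without downloading anywhere near the full block.
```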

Projects like Celestia and Avail are currently building out data availability sampling solutions.

“What we sincerely believe is that every base layer is going to be a data availability layer,” Arjun told Blockworks. “The main directional fight that we are having is wanting to scale data availability at the base layer, and have execution and roll up on the second layer.”

