OpenAI’s GPT-4 Shows Limited Success in ID’ing Smart Contract Vulnerabilities

The weaknesses of large language models like ChatGPT are “too great to use reliably for security,” OpenZeppelin’s machine learning lead says

[Image: Pavel Ignatov/Shutterstock, modified by Blockworks]


As artificial intelligence gains traction, executives at blockchain security firm OpenZeppelin said a recent company experiment proves the continued need for a human auditor.   

An OpenZeppelin study tested whether GPT-4 — OpenAI’s latest multimodal model designed to generate text and have human-like conversations — could identify smart contract vulnerabilities across 28 levels of Ethernaut, OpenZeppelin’s own smart contract hacking game.

GPT-4 has already been able to solve coding challenges on Leetcode, a platform for software engineers preparing for coding interviews, according to Mariko Wakabayashi, machine learning lead at OpenZeppelin. 

“We wanted to assess whether GPT-4’s strong results in traditional code and academic exams map equally to smart contract code, and if yes, if it can be used to detect and propose fixes for vulnerabilities,” Wakabayashi told Blockworks. 

GPT-4 was able to solve 19 of the 23 Ethernaut challenges introduced before its training data cutoff date of September 2021. Of the five challenges introduced after that date, it solved only one.
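The split around the training cutoff is stark when expressed as solve rates. A back-of-the-envelope calculation from the figures above (the bucket names are illustrative, not from OpenZeppelin’s study):

```python
# Solve counts implied by the article: 19 of 23 pre-cutoff Ethernaut
# levels solved, but only 1 of the 5 levels released after the
# September 2021 training data cutoff.
pre_cutoff = {"solved": 19, "total": 23}
post_cutoff = {"solved": 1, "total": 5}

def solve_rate(bucket):
    """Fraction of challenges solved, rounded to two decimal places."""
    return round(bucket["solved"] / bucket["total"], 2)

print(solve_rate(pre_cutoff))   # 0.83
print(solve_rate(post_cutoff))  # 0.2
```

An 83% solve rate collapsing to 20% once the material falls outside the training data is consistent with pattern recall rather than genuine security reasoning.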

The AI tool “generally lacks knowledge” of events that happened after September 2021, and “does not learn from its experience,” OpenAI states on its website.

An OpenAI spokesperson did not immediately return a request for comment. 

Though the security researcher running the experiment was initially surprised to see how many challenges GPT-4 seemed to solve, Wakabayashi noted, it became clear there wasn’t “reliable reasoning” behind the model’s outputs.

“In some cases, the model was able to identify a vulnerability correctly but failed to explain the correct attack vector or propose a solution,” the executive added. “It also leaned on false information in its explanation and even made up vulnerabilities that don’t exist.”

For the problems that the AI tool did solve, a security expert had to offer additional prompts to guide it to correct solutions.

Extensive security knowledge is necessary to assess whether the answer provided by AI is “accurate or nonsensical,” Wakabayashi and Security Services Manager Felix Wegener added in written findings.

On level 24 of the Ethernaut challenges, for example, GPT-4 falsely claimed it was not possible for an attacker to become the owner of the wallet.
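The actual Ethernaut levels are Solidity contracts, but the class of flaw at issue — an attacker seizing ownership of a wallet contract — can be sketched language-agnostically. The toy Python class below (purely illustrative; it is not the level 24 contract) shows a missing access check that lets any caller become owner, exactly the kind of takeover a model can wrongly rule out:

```python
class ToyWallet:
    """Minimal stand-in for a smart contract wallet (illustrative only;
    real Ethernaut levels are Solidity contracts, not Python)."""

    def __init__(self, owner):
        self.owner = owner
        self.balance = 0

    def propose_new_owner(self, caller, new_owner):
        # BUG: nothing verifies that `caller` is the current owner,
        # so any caller can seize ownership of the wallet.
        self.owner = new_owner

    def withdraw(self, caller, amount):
        # An access check exists here, but it is useless once an
        # attacker has made themselves the owner above.
        if caller != self.owner:
            raise PermissionError("only the owner can withdraw")
        self.balance -= amount
        return amount

wallet = ToyWallet(owner="alice")
wallet.propose_new_owner(caller="mallory", new_owner="mallory")
print(wallet.owner)  # mallory
```

Spotting that the guard on one function is undermined by a missing guard on another requires reasoning across the whole contract — the kind of cross-function analysis where, per OpenZeppelin’s findings, a human auditor is still needed.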

“While advancements in AI may cause shifts in developer jobs and inspire the rapid innovation of useful tooling to improve efficiency, it is unlikely to replace human auditors in the near future,” Wakabayashi and Wegener wrote.

OpenZeppelin’s test comes after crypto derivatives platform Bitget decided earlier this month to limit the company’s use of AI tools, such as ChatGPT.

The company told Blockworks that an internal survey found that in 80% of cases, crypto traders had a negative experience using the AI chatbot, citing false investment advice and other misinformation. 

Other crypto companies are more bullish on the technology, including Crypto.com, which launched an AI companion tool called Amy.

Abhi Bisarya, Crypto.com’s global head of product, told Blockworks in an interview that AI initiatives will be “game-changing” for the industry. 


Though large language models (LLMs) like ChatGPT have strengths, Wakabayashi told Blockworks, their weaknesses are too great to rely on for security.

“However, it can be a great tool for creative and more open-ended tasks, so we’re encouraging everyone at OpenZeppelin to experiment and find new use cases,” Wakabayashi said.

