OpenAI’s GPT-4 Shows Limited Success in ID’ing Smart Contract Vulnerabilities

The weaknesses of large language models like ChatGPT are “too great to use reliably for security,” OpenZeppelin’s machine learning lead says


Pavel Ignatov/Shutterstock modified by Blockworks


As artificial intelligence gains traction, executives at blockchain security firm OpenZeppelin said a recent company experiment underscores the continued need for human auditors.

An OpenZeppelin study tested whether GPT-4 — OpenAI’s latest multimodal model designed to generate text and have human-like conversations — could identify smart contract vulnerabilities across 28 levels of Ethernaut, OpenZeppelin’s own Web3 wargame for learning Solidity security.

GPT-4 has already been able to solve coding challenges on LeetCode, a platform for software engineers preparing for coding interviews, according to Mariko Wakabayashi, machine learning lead at OpenZeppelin.

“We wanted to assess whether GPT-4’s strong results in traditional code and academic exams map equally to smart contract code, and if yes, if it can be used to detect and propose fixes for vulnerabilities,” Wakabayashi told Blockworks.

GPT-4 was able to solve 19 of the 23 Ethernaut challenges introduced before its training data cutoff date of September 2021. It then failed four of the final five tasks.

The AI tool “generally lacks knowledge” of events that happened after September 2021, and “does not learn from its experience,” OpenAI states on its website.

An OpenAI spokesperson did not immediately return a request for comment. 

Though the security researcher running the experiment was initially surprised to see how many challenges GPT-4 seemed to solve, Wakabayashi noted, it became clear there wasn’t “reliable reasoning” behind the model’s outputs.

“In some cases, the model was able to identify a vulnerability correctly but failed to explain the correct attack vector or propose a solution,” the executive added. “It also leaned on false information in its explanation and even made up vulnerabilities that don’t exist.”

For the problems that the AI tool did solve, a security expert had to offer additional prompts to guide it to correct solutions.

Extensive security knowledge is necessary to assess whether the answer provided by AI is “accurate or nonsensical,” Wakabayashi and Security Services Manager Felix Wegener added in written findings.

On level 24 of the Ethernaut challenges, for example, GPT-4 falsely claimed it was not possible for an attacker to become the owner of the wallet.
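At the time of the experiment, level 24 of Ethernaut was the “Puzzle Wallet” challenge, which hinges on a delegatecall storage collision between a proxy and its wallet implementation. The sketch below is a simplified illustration of that pattern, not the exact challenge code; contract and function names are representative only.

```solidity
// Simplified sketch of a delegatecall storage collision, the pattern
// behind Ethernaut's "Puzzle Wallet" level. Names are illustrative.
contract Proxy {
    address public pendingAdmin; // storage slot 0
    address public admin;        // storage slot 1

    // No access control: anyone can write storage slot 0.
    function proposeNewAdmin(address newAdmin) external {
        pendingAdmin = newAdmin;
    }

    // All other calls are forwarded to the wallet implementation
    // via delegatecall, so the wallet's code runs against the
    // proxy's storage (slots 0 and 1 above).
}

contract Wallet {
    address public owner;      // ALSO storage slot 0 when delegatecalled
    uint256 public maxBalance; // ALSO storage slot 1 when delegatecalled

    modifier onlyOwner() {
        require(msg.sender == owner, "not owner");
        _;
    }
}
```

Because the proxy delegatecalls into the wallet, both contracts read and write the same storage. Calling `proposeNewAdmin(attacker)` writes slot 0, which the wallet code interprets as `owner`, which is precisely the takeover GPT-4 claimed was impossible.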

“While advancements in AI may cause shifts in developer jobs and inspire the rapid innovation of useful tooling to improve efficiency, it is unlikely to replace human auditors in the near future,” Wakabayashi and Wegener wrote.

OpenZeppelin’s test comes after crypto derivatives platform Bitget decided earlier this month to limit the company’s use of AI tools, such as ChatGPT.

The company told Blockworks that an internal survey found that in 80% of cases, crypto traders had a negative experience using the AI chatbot, citing false investment advice and other misinformation. 

Other crypto companies are more bullish on the technology, including, which launched an AI companion tool called Amy.

Abhi Bisarya,’s global head of product, told Blockworks in an interview that AI initiatives will be “game-changing” for the industry.


Though large language models (LLMs) like ChatGPT have strengths, Wakabayashi told Blockworks, their weaknesses are too great to use reliably for security.

“However, it can be a great tool for creative and more open-ended tasks, so we’re encouraging everyone at OpenZeppelin to experiment and find new use cases,” Wakabayashi said.


