Security implications of AI in the crypto space

AI is proliferating across the digital asset workspace, and institutions need best practices to mitigate the risks of AI-generated work


Artificial intelligence, or AI, has taken the world by storm. Crypto companies worldwide are actively embracing the technology, integrating it both to increase productivity and to bolster security within their enterprises. However, AI adoption also introduces security risks, such as potential bugs and issues around data sensitivity. Crypto companies need to be particularly cautious, as a single bug or data leak can lead to vast sums of customer wealth being stolen from exchanges and protocols.

This post highlights the security risks institutions face as a result of the surge in AI technology, and how they can manage those risks. Throughout, we reference professionals from Bullish, a regulated cryptocurrency exchange uniquely positioned to comment on these matters. Serving a global user base and regularly facilitating over $1 billion in daily trading volume, Bullish keeps security and data sensitivity at the forefront of its operations. Whether in relation to AI or other developments in the tech space, the Bullish team ensures that appropriate guardrails are in place to protect both proprietary company information and sensitive user data.

“The surge in AI usage should not be a surprise to anyone. Developers and content creators have started using the technology in earnest, and companies cannot afford to be unsure of how to handle that. Given the wide variety of AI tools on the market, and the integration into many productivity platforms, it is not an easy problem to tackle, especially around adequately protecting sensitive data. However, by putting the right measures in place, companies can protect their sensitive data while still allowing staff to realize the benefits of using these new technologies.”

— Josh Wallace, Bullish head of offensive security and vulnerability management

Generative AI proliferation marks new era of AI-assisted development

[Chart: Google Trends data showing the surge in interest in generative AI. Source: trends.google.com]

The usage of generative AI (AI models that generate content such as images, text, and video) reached an inflection point with the launch of ChatGPT in November 2022. Content producers and developers alike adopted the technology in vast numbers to experiment with its capabilities. Developers quickly found that AI could assist them in a broad variety of ways, proving proficient at fundamental development tasks such as generating code and checking for problems. It could also help in more creative ways: serving as a virtual mentor for brainstorming and understanding complex issues, acting as a peer programmer for navigating and interpreting legacy code, and contributing to writing test cases and debugging.

AI is being integrated across the cryptocurrency industry in versatile ways. While highly sensitive code areas like smart contract development are better managed without the assistance of generative AI, less critical software work can be streamlined with generative AI tools. Drafting preliminary code for design patterns, unit tests, and automations can all be accomplished to a high standard with the right prompts.

For instance, cryptocurrency traders or high-frequency trading (HFT) funds seeking to connect to an exchange API can quickly generate testing scripts. These scripts let them verify that they can connect to the exchange reliably. Ultimately, AI works well for establishing a starting point for development, but refinement and stringent security vetting are necessary from there.
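To make this concrete, here is a minimal sketch of the kind of connectivity smoke test a trading desk might first draft with an AI tool and then harden. The endpoint URL and path are hypothetical placeholders, not a real exchange API.

```python
# Hypothetical exchange-connectivity smoke test. The base URL and /v1/time
# path are illustrative placeholders; substitute your exchange's documented
# public endpoint before use.
import time
import requests

API_BASE = "https://api.example-exchange.com"

def check_connectivity(retries: int = 3, timeout: float = 5.0) -> bool:
    """Ping a public endpoint and report round-trip latency."""
    for attempt in range(1, retries + 1):
        try:
            start = time.monotonic()
            resp = requests.get(f"{API_BASE}/v1/time", timeout=timeout)
            resp.raise_for_status()
            latency_ms = (time.monotonic() - start) * 1000
            print(f"attempt {attempt}: OK ({latency_ms:.1f} ms)")
            return True
        except requests.RequestException as exc:
            print(f"attempt {attempt}: failed ({exc})")
    return False

if __name__ == "__main__":
    check_connectivity()
```

A first draft like this is where AI shines; adding authentication, rate-limit handling, and failover logic is where human review and security vetting take over.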

Although this new era of AI-assisted development and content generation is helping crypto professionals streamline their work, it does not come without its share of risks. There is an element of randomness in AI outputs, commonly known as the ‘hallucination problem,’ and as such, accuracy is not guaranteed. This can give rise to several risks: output that sounds plausible to humans but is incorrect, code generated with security flaws, and content that raises plagiarism or copyright issues.

“The majority of these AI tools are based on neural network models. A critical piece to understand here is that these models may not have been trained on the latest information on a subject and may even make conclusions based on incomplete information (this is called inference). As a result, these models can’t always be trusted to give the correct output, meaning that their outputs need to be verified before being introduced into live code or used as proof of some analysis.”

— Matt Presson, Bullish chief information security officer for the Americas

While plagiarism and security gaps in code are blatant risks to be aware of, a much more subtle concern presents itself in the form of data sensitivity. As crypto professionals move in droves to adopt the latest and most powerful AI tools, they need to be acutely aware of how these applications use the data fed to them. This is pertinent not only for institutions but for individuals as well.

Most AI and machine learning services use the data they are fed to improve their models. If employees put sensitive business data or personally identifiable information (PII) into AI tools, that data may be incorporated into the provider’s training data, which is designed to further improve the application’s outputs. Data absorbed this way can later resurface in outputs served to other users, revealing sensitive personal and operational information.

Something as simple as a developer pasting the source code of an in-house project into an AI tool to generate additional code and functions can result in confidential project features or customer data being revealed to the public. Even if the AI is only used to format or organize the data, there is a risk that the data could end up being leaked.

A measured approach to risk management for institutions adopting AI

Responsible integration should be at the forefront of a company’s concerns when using generative AI to streamline productivity and improve security. Implementing the technology carelessly puts companies and their customers at significant risk.

Instead, companies should introduce guardrails and procedures that sufficiently mitigate the risks of unrestricted generative AI use. An initial step is taking the time to understand the distinctions between the different AI solutions on the market. From there, institutions can avoid tools that use submitted data in their training models, and identify which providers operate in jurisdictions lacking strong data protection laws.

Once companies know which AI vendors they wish to work with, they can put contracts in place that legally define how each vendor may use the data provided to it. On top of that, crypto companies can implement data classification and data protection measures to ensure that particularly sensitive data is never shared with AI tools.
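As one illustration of such a measure, below is a minimal sketch of a pre-submission filter that redacts obvious sensitive patterns before text is shared with an external AI tool. The patterns are deliberately simple assumptions; a production control would combine data classification labels with far more robust detection.

```python
# Minimal pre-submission redaction filter. The regexes below are
# illustrative assumptions, not production-grade PII detection.
import re

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "API_KEY": re.compile(r"\b[A-Za-z0-9]{32,}\b"),
}

def redact(text: str) -> str:
    """Replace matched PII and secrets with labeled placeholders."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED:{label}]", text)
    return text

if __name__ == "__main__":
    sample = "Reach jane.doe@example.com, key a1b2c3d4e5f6a7b8c9d0e1f2a3b4c5d6"
    print(redact(sample))
    # -> Reach [REDACTED:EMAIL], key [REDACTED:API_KEY]
```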

“Classifying data internally and putting contracts in place with AI vendors will become increasingly important for all companies, including those working in crypto. As AI tools continue to proliferate and grow in popularity, companies can’t risk sensitive customer or company data being leaked due to poor data practices around using a tool.”

— Matt Presson, Bullish chief information security officer for the Americas

In terms of output produced with AI tools, employees need to be held accountable for the quality and security of their work. AI-generated code and content should go through the same peer review and security testing processes as work crafted without AI. This ensures AI output is not treated as special and instead passes the same checkpoints as human-generated work, so that potential risks are identified before anything goes live. Anything that could pose a significant risk to the business warrants a rigorous review, regardless of whether AI was involved.
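As a hedged sketch of what those shared checkpoints can look like in practice, the script below runs a static security scan over every changed Python file with no knowledge of whether a human or an AI wrote it. The scanner (bandit) is a real open-source tool; the base branch and pipeline layout are assumptions for illustration.

```python
# Provenance-blind review gate: every changed Python file gets the same
# static security scan, human-written or AI-drafted. Assumes git and
# bandit are installed; "origin/main" is an illustrative base branch.
import subprocess
import sys

def changed_python_files(base: str = "origin/main") -> list[str]:
    """List Python files that differ from the base branch."""
    out = subprocess.run(
        ["git", "diff", "--name-only", base, "--", "*.py"],
        capture_output=True, text=True, check=True,
    )
    return [line for line in out.stdout.splitlines() if line]

def main() -> int:
    files = changed_python_files()
    if not files:
        print("no Python changes to scan")
        return 0
    # The same checkpoint applies to all code, AI-generated or not.
    return subprocess.run(["bandit", "-q", *files]).returncode

if __name__ == "__main__":
    sys.exit(main())
```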

AI as a fundamental pillar of business

As AI technology rapidly proliferates, it presents both challenges and unprecedented opportunities, particularly in the dynamic world of cryptocurrency. Rather than banning these AI tools outright, embracing and deeply understanding them emerges as the key to future success.

Institutions are encouraged to engage in open dialogues and partnerships with their IT and security teams. By doing so, they can develop effective strategies and introduce necessary controls to mitigate risks while capitalizing on AI’s potential to drive innovation and competitive advantage. The future of digital business, securely empowered by AI, offers a landscape brimming with possibilities for growth and transformation.

This content is sponsored by Bullish.

