Open-Source AI Is Uniquely Dangerous

This is a guest post. The views expressed here are solely those of the authors and do not represent positions of IEEE Spectrum or the IEEE.

When people think of AI applications these days, they likely think of “closed-source” AI applications like OpenAI’s ChatGPT, where the system’s software is securely held by its maker and a limited set of vetted partners. Everyday users interact with these systems through a web interface such as a chatbot, and business users can access an application programming interface (API) that lets them embed the AI system in their own applications or workflows. Crucially, these arrangements allow the company that owns the model to offer it as a service while keeping the underlying software secure. Less well understood by the public is the rapid and uncontrolled release of powerful unsecured (sometimes called open-source) AI systems.

OpenAI’s brand name adds to the confusion. While the company was originally founded to produce open-source AI systems, its leaders determined in 2019 that it was too dangerous to continue releasing its GPT systems’ source code and model weights (the numerical representations of relationships between the nodes in its artificial neural network) to the public. OpenAI’s concern was that these text-generating systems could be used to produce massive amounts of well-written but misleading or toxic content.

Companies including Meta (my former employer) have moved in the opposite direction, choosing to release powerful unsecured AI systems in the name of democratizing access to AI. Other examples of companies releasing unsecured AI systems include Stability AI, Hugging Face, Mistral, EleutherAI, and the Technology Innovation Institute. These companies and like-minded advocacy groups have made limited progress in obtaining exemptions for some unsecured models in the European Union’s AI Act, which is designed to reduce the risks of powerful AI systems. They may push for similar exemptions in the United States via the public comment period recently set forth in the White House’s AI Executive Order.

I think the open-source movement has an important role in AI. With a technology that brings so many new capabilities, it’s important that no single entity acts as a gatekeeper to the technology’s use. However, as things stand today, unsecured AI poses an enormous risk that we are not yet able to contain.

Understanding the Threat of Unsecured AI

A good first step in understanding the threats posed by unsecured AI is to ask secured AI systems like ChatGPT, Bard, or Claude to misbehave. You could ask them to design a deadlier coronavirus, provide instructions for making a bomb, make naked pictures of your favorite actor, or write a series of inflammatory text messages designed to make voters in swing states angrier about immigration. You…

The post “Open-Source AI Is Uniquely Dangerous” by David Evan Harris was published on 01/12/2024 by spectrum.ieee.org