Officials from the Defense Department have disclosed that powerful artificial intelligence models are more susceptible to exploitation than previously believed. During a recent symposium hosted by the National Defense Industrial Association, Alvaro Velasquez, a program manager at the Defense Advanced Research Projects Agency (DARPA), warned that large language models (LLMs) employed in AI technology are "a lot easier to attack than they are to defend." Velasquez described a DARPA program in which researchers successfully circumvented the safety measures of LLMs, prompting ChatGPT to provide information on topics such as bomb-making.
Velasquez, who joined DARPA last year to conduct AI research, oversees programs that scrutinize AI models and tools, including one called "Reverse Engineering of Deceptions." The popularity of generative AI tools, which can produce text indistinguishable from human writing, has grown substantially over the past year. Nonetheless, Deputy Defense Secretary Kathleen Hicks noted that most commercially available systems employing large language models are not yet mature enough to comply with ethical AI principles for operational deployment.
Pentagon experiments find generative AI easy to exploit – https://t.co/UX7eprneLF
— The Washington Times (@WashTimes) November 3, 2023
Despite these concerns, the Defense Department has introduced a formal strategy for adopting AI. The strategy acknowledges that the United States' competitors will continue to pursue advanced AI technology, and its primary objective is to develop emerging technology in a manner that safeguards it from theft and exploitation while complying with relevant laws. Hicks underscored that the aim is not to engage in an AI arms race with China but rather to deter aggression and protect the nation, its allies, and its interests.
In summary, the Defense Department's disclosures underscore the vulnerabilities of powerful artificial intelligence models and the pressing need for further development and regulation to ensure the responsible and ethical use of AI in military applications.