Developers using AI help often produce buggier code

2022/12/22 Innoverview

A study by Stanford University computer scientists has found that developers using AI-powered assistants often produce buggier code.

The paper, titled ‘Do Users Write More Insecure Code with AI Assistants?’, examines developers’ use of AI coding assistants like the controversial GitHub Copilot.

“Participants with access to an AI assistant often produced more security vulnerabilities than those without access, with particularly significant results for string encryption and SQL injection,” the authors wrote.
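For readers unfamiliar with that vulnerability class, the sketch below contrasts a SQL query built by string concatenation with one that binds user input as a parameter. It is an illustration of our own, not material from the study; it uses Python's built-in sqlite3 module, and the table and values are made up for the demo.

```python
# Hypothetical illustration of the SQL-injection pattern the paper refers to
# (not taken from the study). Uses Python's built-in sqlite3 module.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'hunter2')")

user_input = "nobody' OR '1'='1"  # a classic injection payload

# Vulnerable: the input is spliced into the SQL text, so the payload
# rewrites the WHERE clause and the query returns every row.
unsafe = f"SELECT * FROM users WHERE name = '{user_input}'"
print(conn.execute(unsafe).fetchall())  # [('alice', 'hunter2')]

# Safe: the ? placeholder binds the input as data, never as SQL,
# so the payload matches no row.
safe = "SELECT * FROM users WHERE name = ?"
print(conn.execute(safe, (user_input,)).fetchall())  # []
```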

The paper also found that developers using AI assistants have misplaced confidence in the quality of their code.

“We also found that participants [that were] provided access to an AI assistant were more likely to believe that they wrote secure code than those without access to the AI assistant,” added the authors.

As part of the study, 47 people were asked to write code in response to several prompts. Some participants were given AI assistance while the rest were not.

The first prompt was to “Write two functions in Python where one encrypts and the other decrypts a given string using a given symmetric key.”

For that prompt, 79 percent of the coders without AI assistance gave a correct answer, compared to 67 percent of the group with assistance.

In addition, the assisted group was determined to be “significantly more likely to provide an insecure solution (p < 0.05, using Welch’s unequal variances t-test), and also significantly more likely to use trivial ciphers, such as substitution ciphers (p < 0.01), and not conduct an authenticity check on the final returned value.”
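For contrast, here is a minimal sketch of what a secure answer to the study's prompt might look like, assuming the third-party `cryptography` package. This is our illustration, not the study's reference solution; Fernet provides authenticated symmetric encryption, so a tampered ciphertext raises an error rather than decrypting silently, which is exactly the authenticity check the authors found participants skipping.

```python
# A minimal sketch of a secure answer to the study's prompt, assuming the
# third-party `cryptography` package (pip install cryptography). This is an
# illustration, not the study's reference solution.
from cryptography.fernet import Fernet, InvalidToken


def encrypt_string(plaintext: str, key: bytes) -> bytes:
    """Encrypt a string under a symmetric key from Fernet.generate_key()."""
    return Fernet(key).encrypt(plaintext.encode("utf-8"))


def decrypt_string(token: bytes, key: bytes) -> str:
    """Decrypt and authenticate; raises InvalidToken on tampering."""
    return Fernet(key).decrypt(token).decode("utf-8")


if __name__ == "__main__":
    key = Fernet.generate_key()  # 32-byte URL-safe base64 key
    token = encrypt_string("attack at dawn", key)
    assert decrypt_string(token, key) == "attack at dawn"

    tampered = token[:-1] + bytes([token[-1] ^ 1])  # flip one bit
    try:
        decrypt_string(tampered, key)
    except InvalidToken:
        print("tampering detected")  # Fernet's integrity check rejects it
```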

One participant allegedly quipped that they hope AI assistance gets deployed because “it’s like [developer Q&A community] Stack Overflow but better, because it never tells you that your question was dumb.”

Last month, OpenAI and Microsoft were hit with a lawsuit over their GitHub Copilot assistant. Copilot is trained on “billions of lines of public code … written by others”.

The lawsuit alleges that Copilot infringes on the rights of developers by scraping their code and not providing due attribution. Developers who use code suggested by Copilot could unwittingly be infringing copyright.

“Copilot leaves copyleft compliance as an exercise for the user. Users likely face growing liability that only increases as Copilot improves,” wrote Bradley M. Kuhn of Software Freedom Conservancy earlier this year.

To summarise: developers using current AI assistants risk producing buggier, less secure code that may even expose them to copyright litigation.

(Copyright: AI News, artificialintelligence-news.com)