David Benas, associate principal consultant at application security vendor Black Duck, said these security issues are a natural consequence of training AI models on human-generated code.
“The sooner everyone is comfortable treating their code-generating LLMs as they would interns or junior engineers pushing code, the better,” Benas said. “The underlying models behind LLMs are inherently going to be just as flawed as the sum of the human corpus of code, with an extra serving of flaw sprinkled on top due to their tendency to hallucinate, tell lies, misunderstand queries, process flawed queries, etc.”
While AI coding assistants such as GitHub Copilot speed up development, they also introduce new security risks, John Smith, EMEA chief technology officer at Veracode, told CSO.