Package hallucination: LLMs may deliver malicious code to careless devs
LLMs’ tendency to “hallucinate” code packages that don’t exist could become the basis for a new type of supply chain attack dubbed “slopsquatting” (a term coined by Seth Larson, Security Developer-in-Residence at the Python Software Foundation).

A known occurrence

Many software developers now use large language models (LLMs) to help with their programming. Unfortunately, LLMs’ known tendency to spit out fabrications and confidently present them as facts when answering questions on various topics extends to coding. …
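As a rough illustration of the kind of guardrail this implies (not something proposed in the article), the minimal Python sketch below checks whether a suggested package name actually resolves on PyPI via its public JSON API (https://pypi.org/pypi/<name>/json) before a developer runs pip install. The script and function names are hypothetical, and a name that does resolve is not proof of safety: slopsquatting works precisely because attackers can register hallucinated names, so this only flags names that don’t exist at all.

```python
import sys
import urllib.error
import urllib.request

# Standard PyPI JSON API endpoint; returns 404 for packages that don't exist.
PYPI_JSON_URL = "https://pypi.org/pypi/{name}/json"


def package_exists_on_pypi(name: str) -> bool:
    """Return True if PyPI serves metadata for `name`, False on a 404."""
    try:
        with urllib.request.urlopen(PYPI_JSON_URL.format(name=name), timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False
        raise  # other HTTP errors (rate limiting, outages) should surface


if __name__ == "__main__":
    # Usage: python check_packages.py <name> [<name> ...]
    # e.g. check package names suggested by an LLM before installing them.
    for pkg in sys.argv[1:]:
        status = "exists" if package_exists_on_pypi(pkg) else "NOT FOUND -- possibly hallucinated"
        print(f"{pkg}: {status}")
```

Even when a name exists, developers still need to vet the project itself (maintainers, download history, source repository), since an attacker may have published a malicious package under a commonly hallucinated name.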
The post Package hallucination: LLMs may deliver malicious code to careless devs appeared first on Help Net Security.