March 19, 2024, 1:55 p.m. | info@thehackernews.com (The Hacker News)


Large language models (LLMs) powering artificial intelligence (AI) tools today could be exploited to develop self-augmenting malware capable of bypassing YARA rules.
"Generative AI can be used to evade string-based YARA rules by augmenting the source code of small malware variants, effectively lowering detection rates," Recorded Future said in a new report shared with The Hacker News.

