Hackers Developing Malicious LLMs After WormGPT Falls Flat
March 27, 2024, 11:10 p.m. | DataBreachToday.co.uk RSS Syndication (www.databreachtoday.co.uk)
Cybercrooks are exploring ways to develop custom, malicious large language models after existing tools such as WormGPT failed to cater to their demands for advanced intrusion capabilities, security researchers say. Underground forums teem with hackers' discussions about how to bypass the guardrails built into mainstream LLMs.
Jobs in InfoSec / Cybersecurity
SOC 2 Manager, Audit and Certification
@ Deloitte | US and CA Multiple Locations
Open-Source Intelligence (OSINT) Policy Analyst (TS/SCI)
@ WWC Global | Reston, Virginia, United States
Security Architect (DevSecOps)
@ EUROPEAN DYNAMICS | Brussels, Brussels, Belgium
Infrastructure Security Architect
@ Ørsted | Kuala Lumpur, MY
Contract Penetration Tester
@ Evolve Security | United States - Remote
Senior Penetration Tester
@ DigitalOcean | Canada