March 5, 2024, 3:11 p.m. | Jiacen Xu, Jack W. Stokes, Geoff McDonald, Xuesong Bai, David Marshall, Siyue Wang, Adith Swaminathan, Zhou Li

cs.CR updates on arXiv.org

arXiv:2403.01038v1 Announce Type: new
Abstract: Large language models (LLMs) have demonstrated impressive results on natural language tasks, and security researchers are beginning to employ them in both offensive and defensive systems. In cyber-security, multiple research efforts have applied LLMs to the pre-breach stage of attacks, such as phishing and malware generation. However, there has so far been no comprehensive study of whether LLM-based systems can be leveraged to simulate the post-breach stage of attacks, which is typically human-operated, …

