Feb. 6, 2024, 5:52 p.m. | Black Hat (www.youtube.com)

No doubt everybody is curious whether large language models (LLMs) can be used for offensive security operations.

In this talk, we will demonstrate how you can, and can't, use LLMs like GPT-4 to find security vulnerabilities in applications, and discuss in detail the promise and limitations of using LLMs this way.

We will go deep on how LLMs work and share state-of-the-art techniques for using them in offensive contexts.
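As an illustration of the kind of workflow the talk explores, here is a minimal, hypothetical sketch of asking an LLM to review a code snippet for vulnerabilities. The prompt wording, the deliberately vulnerable example, and the `ask_llm` stub are all assumptions for illustration, not material from the talk; a real run would swap the stub for an actual chat-completion client pointed at a model such as GPT-4.

```python
# Hypothetical sketch: wrap a code snippet in a security-review prompt
# for an LLM. Nothing here is from the talk itself.

# Deliberately vulnerable example: SQL built by string interpolation.
SNIPPET = '''
def get_user(db, username):
    query = "SELECT * FROM users WHERE name = '%s'" % username
    return db.execute(query).fetchone()
'''

def build_review_prompt(snippet: str) -> str:
    """Wrap a code snippet in a security-review instruction."""
    return (
        "You are a security auditor. Review the following code for "
        "vulnerabilities. For each finding, name the weakness (e.g. a "
        "CWE), point to the offending line, and suggest a fix.\n\n"
        f"```python\n{snippet}\n```"
    )

def ask_llm(prompt: str) -> str:
    """Stub standing in for a chat-completion API call."""
    # e.g. a call like client.chat.completions.create(...) would go here
    raise NotImplementedError("wire up a real LLM client here")

if __name__ == "__main__":
    print(build_review_prompt(SNIPPET))
```

In practice the interesting questions, which the talk digs into, are whether the model's findings are reliable and where this approach breaks down.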

By: Shane Caldwell, Ariel Herbert-Voss

Full Abstract and Presentation Materials: …

