Feb. 6, 2024, 5:52 p.m. | Black Hat

Black Hat www.youtube.com

No doubt everybody is curious whether you can use large language models (LLMs) for offensive security operations.

In this talk, we will demonstrate how you can and can't use LLMs like GPT-4 to find security vulnerabilities in applications, and discuss in detail the promise and limitations of using LLMs this way.

We will go deep on how LLMs work and share state-of-the-art techniques for using them in offensive contexts.
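As an illustration of the kind of workflow the talk covers, here is a minimal sketch of framing a source snippet as a vulnerability-review prompt for a chat-style LLM. This is not the speakers' method: the system prompt, function names, and message format are assumptions modeled on common chat-completion APIs, and the actual API call (provider, model, client) is deliberately left out.

```python
# Sketch: wrap a code snippet in chat messages asking an LLM to audit it.
# The message schema mirrors common chat-completion APIs; sending the
# messages to a real model is provider-specific and omitted here.

SYSTEM_PROMPT = (
    "You are a security auditor. Review the code for vulnerabilities "
    "such as SQL injection, command injection, and path traversal. "
    "Report each finding with the line involved and a short rationale."
)

def build_review_messages(snippet: str, language: str = "python") -> list:
    """Build chat messages that ask an LLM to review a code snippet."""
    user_prompt = (
        f"Audit this {language} code:\n"
        f"```{language}\n{snippet}\n```"
    )
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_prompt},
    ]

# Example input: a classic SQL-injection pattern a reviewer should flag.
vulnerable = "cursor.execute(\"SELECT * FROM users WHERE name = '%s'\" % name)"
messages = build_review_messages(vulnerable)
print(messages[1]["content"])
```

Whether the model actually finds the flaw (and how often it hallucinates findings) is exactly the promise-versus-limitations question the talk examines.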

By: Shane Caldwell, Ariel Herbert-Voss

Full Abstract and Presentation Materials: …

