Feb. 6, 2024, 5:52 p.m. | Black Hat (www.youtube.com)

No doubt everybody is curious whether large language models (LLMs) can be used for offensive security operations.

In this talk, we will demonstrate how you can (and can't) use LLMs like GPT-4 to find security vulnerabilities in applications, and discuss in detail the promise and limitations of using LLMs this way.

We will go deep on how LLMs work and share state-of-the-art techniques for using them in offensive contexts.
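As a concrete illustration (not taken from the talk itself), here is a minimal sketch of the kind of workflow the abstract alludes to: prompting an LLM to review a code snippet for vulnerabilities. The prompt wording, the `VULN:` response convention, and the helper names are all assumptions; the actual model call (an OpenAI-style chat endpoint, for example) is left as a stub so the helpers stay self-contained.

```python
# Hypothetical sketch of LLM-assisted vulnerability review.
# The prompt format and "VULN:" convention are illustrative assumptions,
# not the speakers' actual method.

def build_review_prompt(snippet: str) -> str:
    """Wrap a code snippet in a vulnerability-review instruction."""
    return (
        "You are a security auditor. List any vulnerabilities in the "
        "following code, one per line, prefixed with 'VULN:'. "
        "If there are none, reply 'NONE'.\n\n```\n" + snippet + "\n```"
    )

def parse_findings(response: str) -> list[str]:
    """Extract 'VULN:'-prefixed findings from a model's reply."""
    return [
        line[len("VULN:"):].strip()
        for line in response.splitlines()
        if line.startswith("VULN:")
    ]

if __name__ == "__main__":
    prompt = build_review_prompt(
        'query = "SELECT * FROM users WHERE id=" + user_id'
    )
    # In practice, `prompt` would be sent to a chat-completion endpoint
    # and the reply fed to parse_findings(); here we dry-run with a
    # hand-written reply to show the round trip.
    fake_reply = "VULN: SQL injection via string-concatenated user input"
    print(parse_findings(fake_reply))
```

The point of splitting prompt construction from response parsing is that both halves are deterministic and testable, while the non-deterministic model call stays isolated in one place.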

By: Shane Caldwell, Ariel Herbert-Voss

Full Abstract and Presentation Materials: …

