April 25, 2024, 7:11 p.m. | Elijah Pelofske, Vincent Urias, Lorie M. Liebrock

cs.CR updates on arXiv.org

arXiv:2404.15681v1 Announce Type: new
Abstract: Generative pre-trained transformers (GPTs) are a class of large language model that is unusually adept at producing novel, coherent natural language. This study examines the ability of GPT models to generate novel and correct, and notably very insecure, implementations of the cryptographic hash function SHA-1. The GPT models Llama-2-70b-chat-hf, Mistral-7B-Instruct-v0.1, and zephyr-7b-alpha are used. The GPT models are prompted to re-write each function using a modified version …
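The abstract does not show the paper's evaluation harness, but a minimal way to test whether a model-generated SHA-1 implementation is functionally correct is to compare its digests against a trusted reference on known test vectors. The sketch below assumes a hypothetical candidate function `candidate_sha1(data: bytes) -> str`; it is illustrative only, not the authors' actual method.

```python
# Hedged sketch: checking a model-generated SHA-1 implementation for
# functional correctness by comparing its digests against Python's
# standard-library hashlib on well-known test vectors. The name
# `candidate_sha1` is hypothetical; the paper's harness is not shown
# in this abstract.
import hashlib

# Classic SHA-1 test vectors: (input, expected hex digest).
TEST_VECTORS = [
    (b"", "da39a3ee5e6b4b0d3255bfef95601890afd80709"),
    (b"abc", "a9993e364706816aba3e25717850c26c9cd0d89d"),
    (b"The quick brown fox jumps over the lazy dog",
     "2fd4e1c67a2d28fced849ee1bb76e7391b93eb12"),
]

def reference_sha1(data: bytes) -> str:
    """Trusted reference digest from the standard library."""
    return hashlib.sha1(data).hexdigest()

def is_correct(candidate_sha1) -> bool:
    """True if the candidate matches the reference on every test vector."""
    return all(
        candidate_sha1(msg) == expected == reference_sha1(msg)
        for msg, expected in TEST_VECTORS
    )

if __name__ == "__main__":
    # Sanity check: the reference implementation trivially passes.
    print(is_correct(reference_sha1))  # True
```

Note that passing fixed test vectors establishes only functional correctness; it says nothing about the security of a rewritten implementation (e.g., timing behavior or hard-coded outputs), which is precisely the gap the study highlights.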

