Feb. 23, 2024, 5:11 a.m. | Alexey Shestov, Rodion Levichev, Ravil Mussabayev, Anton Cheshkov

cs.CR updates on arXiv.org

arXiv:2401.17010v2 Announce Type: replace
Abstract: This paper presents the results of finetuning large language models (LLMs) for the task of detecting vulnerabilities in source code. We leverage WizardCoder, a recent improvement of the state-of-the-art LLM StarCoder, and adapt it for vulnerability detection through further finetuning. To accelerate training, we modify WizardCoder's training procedure; we also investigate optimal training regimes. For the imbalanced dataset, which contains many more negative examples than positive ones, we additionally explore different techniques to improve classification performance. The …
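Below is a minimal sketch of the kind of setup the abstract describes: finetuning a code LLM as a binary vulnerability classifier with a class-weighted loss to counter the imbalance between negative and positive examples. The checkpoint name, dataset files, class-weight ratio, and hyperparameters are illustrative assumptions, not the paper's exact configuration.

```python
# Hypothetical sketch: finetuning a code LLM (e.g. WizardCoder) as a binary
# vulnerability classifier with a class-weighted loss for an imbalanced dataset.
# Model name, data files, weights, and hyperparameters are assumptions.
import torch
from torch import nn
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

MODEL_NAME = "WizardLM/WizardCoder-15B-V1.0"  # assumed checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
tokenizer.pad_token = tokenizer.eos_token  # code LLM tokenizers often lack a pad token

model = AutoModelForSequenceClassification.from_pretrained(
    MODEL_NAME, num_labels=2, torch_dtype=torch.bfloat16
)
model.config.pad_token_id = tokenizer.pad_token_id

# Placeholder dataset with "code" and "label" columns (0 = benign, 1 = vulnerable).
dataset = load_dataset(
    "json", data_files={"train": "train.jsonl", "test": "test.jsonl"}
)

def tokenize(batch):
    return tokenizer(batch["code"], truncation=True, max_length=2048)

dataset = dataset.map(tokenize, batched=True)

class WeightedTrainer(Trainer):
    """Trainer that up-weights the rare positive (vulnerable) class."""

    def compute_loss(self, model, inputs, return_outputs=False, **kwargs):
        labels = inputs.pop("labels")
        outputs = model(**inputs)
        # Assumed ~10:1 negative/positive ratio -> weight positives 10x.
        weight = torch.tensor(
            [1.0, 10.0], device=outputs.logits.device, dtype=outputs.logits.dtype
        )
        loss = nn.functional.cross_entropy(outputs.logits, labels, weight=weight)
        return (loss, outputs) if return_outputs else loss

args = TrainingArguments(
    output_dir="wizardcoder-vuln",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=16,
    learning_rate=2e-5,
    num_train_epochs=3,
    bf16=True,
)

trainer = WeightedTrainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["test"],
    tokenizer=tokenizer,
)
trainer.train()
```

The weighted cross-entropy is only one of several imbalance-handling options (the paper also mentions exploring different techniques); undersampling negatives or a focal loss would slot into the same `compute_loss` override.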

