June 9, 2023, 1:10 a.m. | Cristina Improta, Pietro Liguori, Roberto Natella, Bojan Cukic, Domenico Cotroneo

cs.CR updates on arXiv.org

In this work, we present a method for adding perturbations to code
descriptions, i.e., new natural-language (NL) inputs from well-intentioned
developers, in the context of security-oriented code, and we analyze how and to
what extent these perturbations affect the performance of AI offensive code
generators. Our experiments show that the performance of the code generators is
strongly affected by perturbations in the NL descriptions. To enhance the
robustness of the code generators, we use the method to perform data …
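
To illustrate the general idea of perturbing an NL code description, the sketch below applies a simple word-dropping transformation. This is only an illustrative example, not the paper's actual method; the function name perturb_description, the drop_prob parameter, and the sample description are all hypothetical.

```python
import random

def perturb_description(description: str, drop_prob: float = 0.1, seed: int = 0) -> str:
    """Randomly drop words from an NL code description with probability drop_prob.

    Word dropping is just one simple perturbation; a real study could also use
    synonym substitution, paraphrasing, typos, or other transformations.
    """
    rng = random.Random(seed)
    words = description.split()
    kept = [w for w in words if rng.random() >= drop_prob]
    # Fall back to the original text if every word happened to be dropped.
    return " ".join(kept) if kept else description

# Hypothetical usage: perturb a description before feeding it to a code generator
# and compare the generated code against the output for the original description.
original = "open a TCP socket to the target host and send the crafted payload"
print(perturb_description(original, drop_prob=0.2))
```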
