Nov. 23, 2023, 2:19 a.m. | Chi Zhang, Zifan Wang, Ravi Mangal, Matt Fredrikson, Limin Jia, Corina Pasareanu

cs.CR updates on arXiv.org

Modern large language models (LLMs), such as ChatGPT, have demonstrated
impressive capabilities on coding tasks, including writing and reasoning about
code. They improve upon earlier neural network models of code, such as
code2seq or seq2seq, which already achieved competitive results on tasks such
as code summarization and identifying code vulnerabilities. However, these
earlier code models were shown to be vulnerable to adversarial examples, i.e.,
small syntactic perturbations that do not change the program's semantics, such
as the inclusion of "dead code" …
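To make the idea concrete, here is a minimal sketch (not taken from the paper) of a semantics-preserving "dead code" perturbation of the kind used against code models: both functions compute the same result, but the inserted unreachable branch changes the token sequence the model sees and can flip its prediction (e.g., a vulnerability label or a generated summary).

```python
def gcd(a: int, b: int) -> int:
    """Original program."""
    while b:
        a, b = b, a % b
    return a


def gcd_perturbed(a: int, b: int) -> int:
    """Same program with dead code inserted; runtime behavior is unchanged."""
    if False:                    # unreachable branch: never executes
        unused = "dead code"     # extra tokens visible only to the model
    while b:
        a, b = b, a % b
    return a


# The perturbation preserves semantics on every input.
assert gcd(48, 18) == gcd_perturbed(48, 18) == 6
```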
