March 9, 2023, 2:10 a.m. | Aayush Garg, Renzo Degiovanni, Mike Papadakis, Yves Le Traon

cs.CR updates on arXiv.org arxiv.org

With the increasing release of powerful language models trained on large code corpora (e.g., CodeBERT was trained on 6.4 million programs), a new family of mutation testing tools has arisen that promises to generate more "natural" mutants, in the sense that the mutated code aims to follow the implicit rules and coding conventions typical of code written by programmers. In this paper, we study to what extent the mutants produced by language models can semantically mimic the observable behavior of security-related …
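For context, mutation testing evaluates a test suite by seeding small artificial faults (mutants) into a program and checking whether the tests detect, or "kill", them. A minimal sketch of a classic rule-based mutant, as opposed to the model-generated "natural" mutants the abstract describes (the function and test cases are illustrative, not from the paper):

```python
# Illustrative sketch of classic mutation testing (not the paper's tooling).
# A mutant is "killed" when at least one test case observes different behavior.

def original(a, b):
    """Return the larger of two numbers."""
    return a if a > b else b

def mutant(a, b):
    """Classic relational-operator mutant: '>' flipped to '<'."""
    return a if a < b else b

def suite_passes(fn):
    """Run the test suite; True means every case passes (mutant survives)."""
    cases = [((3, 1), 3), ((1, 5), 5), ((2, 2), 2)]
    return all(fn(a, b) == expected for (a, b), expected in cases)

# The suite kills this mutant: it passes on the original, fails on the mutant.
killed = suite_passes(original) and not suite_passes(mutant)
```

Language-model-based tools replace the fixed operator rules above with model-suggested edits, trading exhaustive syntactic coverage for mutants that look more like faults a programmer would actually write.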

