Nov. 22, 2022, 2:20 a.m. | Haibo Jin, Ruoxi Chen, Haibin Zheng, Jinyin Chen, Yao Cheng, Yue Yu, Xianglong Liu

cs.CR updates on arXiv.org

Despite their impressive capabilities and outstanding performance, deep neural
networks (DNNs) have raised increasing public concern about their security,
due to frequently occurring erroneous behaviors. It is therefore necessary to
conduct systematic testing of DNNs before they are deployed in real-world
applications. Existing testing methods have provided fine-grained metrics
based on neuron coverage and proposed various approaches to improve such
metrics. However, it has gradually been realized that higher neuron coverage
does not necessarily represent better capabilities …
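The neuron-coverage metric referenced above is, in its classic form, the fraction of neurons whose activation exceeds a threshold for at least one input in the test suite. The sketch below illustrates that general idea only, not the paper's own method; the toy model, the 0.5 threshold, and the random test batch are illustrative assumptions.

```python
# Minimal sketch of classic neuron coverage: the fraction of hidden (ReLU)
# neurons that fire above a threshold for at least one test input.
# Model architecture, threshold, and test data are illustrative assumptions.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(32, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 10),
)

def neuron_coverage(model, inputs, threshold=0.5):
    """Fraction of ReLU outputs exceeding `threshold` on at least one input."""
    activations = []

    def hook(_module, _inp, out):
        activations.append(out.detach())

    # Attach forward hooks so activations can be observed without modifying the model.
    handles = [m.register_forward_hook(hook) for m in model if isinstance(m, nn.ReLU)]
    with torch.no_grad():
        model(inputs)
    for h in handles:
        h.remove()

    covered = total = 0
    for act in activations:                   # shape: (batch, neurons)
        fired = (act > threshold).any(dim=0)  # neuron covered by any test input
        covered += int(fired.sum())
        total += fired.numel()
    return covered / total

test_batch = torch.randn(128, 32)             # stand-in for a real test suite
print(f"neuron coverage: {neuron_coverage(model, test_batch):.2%}")
```

Forward hooks are used here so coverage can be measured on an unmodified model; a larger or more diverse test suite generally drives this ratio higher, which is exactly the metric whose usefulness the abstract calls into question.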

deep learning, framework testing, testing framework
