AI coding helpers get FAILing grade
Aug. 15, 2023, 4:12 p.m. | richi.jennings@richi.co.uk (Richi Jennings)
ReversingLabs Blog blog.reversinglabs.com
An academic study finds that ChatGPT answers more than half of Stack Overflow–style programming questions incorrectly. The "comprehensive analysis" concludes that the LLM engine behind GitHub Copilot makes many conceptual errors while couching its output in a wordy, confident, authoritative tone.