Helpful or Harmful? Exploring the Efficacy of Large Language Models for Online Grooming Prevention
March 18, 2024, 4:10 a.m. | Ellie Prosser, Matthew Edwards
cs.CR updates on arXiv.org arxiv.org
Abstract: Powerful generative Large Language Models (LLMs) are becoming popular tools amongst the general public as question-answering systems, and are being utilised by vulnerable groups such as children. With children increasingly interacting with these tools, it is imperative for researchers to scrutinise the safety of LLMs, especially for applications that could lead to serious outcomes, such as online child safety queries. In this paper, the efficacy of LLMs for online grooming prevention is explored both for …
More from arxiv.org / cs.CR updates on arXiv.org
IDEA: Invariant Defense for Graph Adversarial Robustness
1 day, 16 hours ago
arxiv.org
FairCMS: Cloud Media Sharing with Fair Copyright Protection
1 day, 16 hours ago
arxiv.org
Efficient unitary designs and pseudorandom unitaries from permutations
1 day, 16 hours ago
arxiv.org
Jobs in InfoSec / Cybersecurity
SOC 2 Manager, Audit and Certification
@ Deloitte | US and CA Multiple Locations
Senior Security Researcher, SIEM
@ Huntress | Remote Canada
Senior Application Security Engineer
@ Revinate | San Francisco Bay Area
Cyber Security Manager
@ American Express Global Business Travel | United States - New York - Virtual Location
Incident Responder Intern
@ Bentley Systems | Remote, PA, US
SC2024-003533 Senior Online Vulnerability Assessment Analyst (CTS) - THU 9 May
@ EMW, Inc. | Mons, Wallonia, Belgium