Citation: A Key to Building Responsible and Accountable Large Language Models
April 2, 2024, 7:12 p.m. | Jie Huang, Kevin Chen-Chuan Chang
cs.CR updates on arXiv.org arxiv.org
Abstract: Large Language Models (LLMs) bring transformative benefits alongside unique challenges, including intellectual property (IP) and ethical concerns. This position paper explores a novel angle to mitigate these risks by drawing parallels between LLMs and established web systems. We identify "citation" - the acknowledgement of, or reference to, a source or evidence - as a crucial yet missing component in LLMs. Incorporating citation could enhance content transparency and verifiability, thereby confronting the IP and ethical issues in the …
Jobs in InfoSec / Cybersecurity
Social Engineer For Reverse Engineering Exploit Study
@ Independent study | Remote
Information Security Engineer, Sr. (Container Hardening)
@ Rackner | San Antonio, TX
BaaN IV Techno-functional consultant-On-Balfour
@ Marlabs | Piscataway, US
Senior Security Analyst
@ BETSOL | Bengaluru, India
Security Operations Centre Operator
@ NEXTDC | West Footscray, Australia
Senior Network and Security Research Officer
@ University of Toronto | Toronto, ON, CA