Feb. 26, 2024, 5:11 a.m. | Zejun Zhang, Li Zhang, Xin Yuan, Anlan Zhang, Mengwei Xu, Feng Qian

cs.CR updates on arXiv.org arxiv.org

arXiv:2402.15105v1 Announce Type: new
Abstract: With the advancement of Large Language Models (LLMs), increasingly sophisticated and powerful GPTs are entering the market. Despite their popularity, the LLM ecosystem remains unexplored. Additionally, LLMs' susceptibility to attacks raises concerns over safety and plagiarism. Thus, in this work, we conduct a pioneering exploration of GPT stores, aiming to study vulnerabilities and plagiarism within GPT applications. To begin with, we conduct, to our knowledge, the first large-scale monitoring and analysis of two stores, …

