March 28, 2024, 1:41 p.m.

Mozilla Foundation Blog | foundation.mozilla.org
Mozilla research found that AI-detection tools aren't always as reliable as they claim. Further, researchers found that large language models like ChatGPT can be successfully prompted to create more 'human-sounding' text.
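To make that finding concrete, here is a minimal sketch of how generated text might be fed back to a chat model with a request to rewrite it in a more 'human-sounding' style. It assumes the OpenAI Python SDK; the model name and prompt wording are illustrative assumptions, not the specific prompts or models evaluated in the Mozilla research.

```python
# Minimal sketch: prompting an LLM to rewrite text in a more "human-sounding" style.
# Assumes the OpenAI Python SDK (pip install openai) and an OPENAI_API_KEY in the
# environment. The model name and prompt wording are illustrative assumptions only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

machine_text = (
    "Generative AI presents new threats to the health of our information ecosystem."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model; any chat-capable model would work
    messages=[
        {
            "role": "user",
            "content": (
                "Rewrite the following text so it reads like it was written by a "
                "person: vary sentence length, use contractions, and avoid a "
                "formulaic tone.\n\n" + machine_text
            ),
        }
    ],
)

print(response.choices[0].message.content)
```

In a study like the one described, the rewritten output could then be run back through a detection tool and its score compared against the original to see whether the detector's verdict changes.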

Introduction

As we wrote previously, generative AI presents new threats to the health of our information ecosystem. The major AI players recognize the risks that their services present: OpenAI published a paper on the threat of automated influence operations, and its policy prohibits the use of ChatGPT for …
