Enhancing Reasoning Capacity of SLM using Cognitive Enhancement
April 2, 2024, 7:11 p.m. | Jonathan Pan, Swee Liang Wong, Xin Wei Chia, Yidi Yuan
cs.CR updates on arXiv.org arxiv.org
Abstract: Large Language Models (LLMs) have been applied to automate cyber security activities and processes, including cyber investigation and digital forensics. However, the use of such models for cyber investigation and digital forensics must address accountability and security considerations. Accountability ensures that models provide explainable reasoning and outcomes. This information can be extracted through explicit prompt requests. For security considerations, it is crucial to address the privacy and confidentiality of the involved data during …