Enhancing Reasoning Capacity of SLM using Cognitive Enhancement
April 2, 2024, 7:11 p.m. | Jonathan Pan, Swee Liang Wong, Xin Wei Chia, Yidi Yuan
cs.CR updates on arXiv.org arxiv.org
Abstract: Large Language Models (LLMs) have been applied to automate cyber security activities and processes, including cyber investigation and digital forensics. However, using such models for cyber investigation and digital forensics must address accountability and security considerations. Accountability ensures that models can provide explainable reasoning and outcomes; this information can be extracted through explicit prompt requests. On the security side, it is crucial to protect the privacy and confidentiality of the involved data during …
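The abstract notes that explainable reasoning can be elicited from a model through explicit prompt requests. As a minimal illustrative sketch, and not the paper's actual method, the template wording, function name, and example artifact below are all assumptions, such a prompt might be assembled like this:

```python
def build_investigation_prompt(artifact: str, question: str) -> str:
    """Assemble a prompt that explicitly requests explainable reasoning.

    Hypothetical template for illustration only; the paper does not
    specify its exact prompt wording.
    """
    return (
        "You are assisting a digital forensics investigation.\n"
        f"Evidence artifact:\n{artifact}\n\n"
        f"Question: {question}\n\n"
        # The explicit request for step-by-step, evidence-grounded
        # reasoning is what supports accountability of the output.
        "First list the reasoning steps you followed, citing the specific "
        "parts of the evidence each step relies on, then state your "
        "conclusion."
    )

prompt = build_investigation_prompt(
    artifact="auth.log excerpt: 03:12 failed ssh login from 203.0.113.5 (x50)",
    question="Does this excerpt indicate a brute-force attempt?",
)
print(prompt)
```

For the confidentiality concern the abstract raises, such a pipeline would typically also redact or pseudonymize sensitive fields (IPs, usernames) before the artifact ever reaches the model.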