URLBERT: A Contrastive and Adversarial Pre-trained Model for URL Classification
Feb. 20, 2024, 5:11 a.m. | Yujie Li, Yanbin Wang, Haitao Xu, Zhenhao Guo, Zheng Cao, Lun Zhang
cs.CR updates on arXiv.org
Abstract: URLs play a crucial role in understanding and categorizing web content, particularly in tasks related to security control and online recommendations. While pre-trained models are currently dominating various fields, the domain of URL analysis still lacks specialized pre-trained models. To address this gap, this paper introduces URLBERT, the first pre-trained representation learning model applied to a variety of URL classification or detection tasks. We first train a URL tokenizer on a corpus of billions of …
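The abstract mentions training a dedicated URL tokenizer before pre-training. As a rough illustration of why URLs benefit from specialized tokenization, the toy sketch below splits a URL along its structural delimiters (scheme, host labels, path segments, query keys/values). This is only an illustrative stand-in, not the subword tokenizer URLBERT actually trains on billions of URLs; the function name and behavior are assumptions for demonstration.

```python
import re
from urllib.parse import urlsplit

def url_tokens(url: str) -> list[str]:
    """Toy URL tokenizer: break a URL into scheme, host labels,
    path segments, and query key/value pieces. Purely illustrative;
    URLBERT's real tokenizer is a learned subword vocabulary."""
    parts = urlsplit(url)
    tokens = [parts.scheme]
    if parts.hostname:
        tokens += parts.hostname.split(".")      # host labels
    tokens += [seg for seg in parts.path.split("/") if seg]  # path segments
    if parts.query:
        tokens += re.split(r"[=&]", parts.query)  # query keys and values
    return tokens

print(url_tokens("https://example.com/login/index.php?user=admin"))
# -> ['https', 'example', 'com', 'login', 'index.php', 'user', 'admin']
```

A learned tokenizer would go further, merging frequent character sequences (e.g. common hostnames or path fragments) into single vocabulary entries, which is what makes a URL-specific vocabulary useful for downstream classification.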