Which Features are Learned by CodeBert: An Empirical Study of the BERT-based Source Code Representation Learning. (arXiv:2301.08427v1 [cs.CL])
cs.CR updates on arXiv.org
Bidirectional Encoder Representations from Transformers (BERT) was proposed
for natural language processing (NLP) and shows promising results. Recently,
researchers have applied BERT to source-code representation learning and
reported good results on several downstream tasks. However, in this paper we
illustrate that current methods cannot effectively understand the logic of
source code: the learned representation relies heavily on programmer-defined
variable and function names. We design and implement a set of experiments to
demonstrate our conjecture …
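The truncated abstract omits the experimental details, but a minimal sketch of
the kind of identifier-renaming probe the conjecture suggests might look like
the following. It assumes the public microsoft/codebert-base checkpoint loaded
via the HuggingFace transformers library; the code snippets, mean-pooling
strategy, and similarity comparison are illustrative assumptions, not the
paper's actual experimental setup.

import torch
from transformers import AutoModel, AutoTokenizer

# Assumed checkpoint; not necessarily the exact model used in the paper.
tokenizer = AutoTokenizer.from_pretrained("microsoft/codebert-base")
model = AutoModel.from_pretrained("microsoft/codebert-base")
model.eval()

original  = "def add(a, b):\n    return a + b"
renamed   = "def f(x, y):\n    return x + y"    # same logic, new identifiers
different = "def add(a, b):\n    return a - b"  # same identifiers, new logic

def embed(code: str) -> torch.Tensor:
    # Mean-pool the last hidden states into a single code embedding.
    inputs = tokenizer(code, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state  # (1, seq_len, 768)
    return hidden.mean(dim=1).squeeze(0)

cos = torch.nn.functional.cosine_similarity
sim_renamed   = cos(embed(original), embed(renamed), dim=0)
sim_different = cos(embed(original), embed(different), dim=0)

# If the representation tracked program logic rather than surface names, we
# would expect the renamed variant to stay close and the logic-changed
# variant to drift apart; the paper's conjecture is that naming dominates,
# so the opposite pattern can appear.
print(f"same logic, renamed identifiers: {sim_renamed:.3f}")
print(f"same identifiers, changed logic: {sim_different:.3f}")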