Aug. 28, 2022, 1:01 a.m. | via Tom Van Vleck

The RISKS Digest catless.ncl.ac.uk

[2205.11916] Large Language Models are Zero-Shot Reasoners,
Takeshi Kojima et al.
https://arxiv.org/abs/2205.11916

If you feed a machine-learning language model "reasoning" questions,
it gets some right and some wrong, depending on the model and how
it was "trained." If you give the model the same questions but add
"Let's think step by step", it gets far more of them right.

Apparently the magic phrase depends on the kind of model and the kind
of training. What phrase could we use on humans, to …

