https://arxiv.org/abs/2205.11916
@misc{kojima2023large,
  title={Large Language Models are Zero-Shot Reasoners},
  author={Takeshi Kojima and Shixiang Shane Gu and Machel Reid and Yutaka Matsuo and Yusuke Iwasawa},
  year={2023},
  eprint={2205.11916},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
chain of thought (CoT)
How to get the model to reason correctly.
From the abstract: "we show that LLMs are decent zero-shot reasoners by simply adding 'Let's think step by step' before each answer."
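The paper's zero-shot CoT method is a two-stage prompting scheme: stage 1 appends the trigger phrase to elicit a reasoning chain, stage 2 feeds that chain back with an answer-extraction cue. A minimal sketch of the prompt construction, assuming plain string templates (the question and reasoning text here are illustrative, and any actual LLM call is left out):

```python
# Trigger phrase from Kojima et al. (2022), "Let's think step by step."
TRIGGER = "Let's think step by step."

def reasoning_prompt(question: str) -> str:
    # Stage 1: elicit a free-form reasoning chain from the model.
    return f"Q: {question}\nA: {TRIGGER}"

def answer_prompt(question: str, reasoning: str) -> str:
    # Stage 2: append the model's reasoning, then cue the final answer.
    return f"{reasoning_prompt(question)} {reasoning}\nTherefore, the answer is"

# Example (hypothetical question; the reasoning string would normally
# come from the model's stage-1 completion):
q = "A juggler has 16 balls. Half are golf balls. How many golf balls?"
r = "Half of 16 is 8, so there are 8 golf balls."
print(answer_prompt(q, r))
```

The two stages matter because the stage-1 completion is free-form text; the stage-2 cue is what lets a fixed parser pull out the final answer.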