---
layout: page
permalink: /logicspider/
---
Large-scale language models have achieved human-level performance on a variety of natural language understanding tasks. However, the commonly used language understanding benchmarks are inadequate for measuring a model's logical reasoning capability. Logical reasoning is a central component of an intelligent system and should be evaluated thoroughly and independently. In this project, we aim to build a large-scale logical reasoning dataset, test the first-order logical reasoning capabilities of large-scale language models (RoBERTa, GPT-3, etc.), and investigate neuro-symbolic approaches to natural language reasoning.