Commonsense validation and explanation
Abstract
Common-sense reasoning [1] is a field of artificial intelligence and machine learning that focuses on helping computers understand and interact with people more naturally by finding ways to collect everyday assumptions and teach them to machines. Common-sense reasoning has been most successful in the field of natural language processing (NLP). Without common sense, it will be difficult to build versatile and unsupervised NLP systems in an increasingly digital and mobile world. When we talk to each other, in person or online, we try to be interesting and take advantage of new ways to express things, and there is more to this than one might think. If asked, "Can you put an elephant into the fridge?", you could answer quite easily, even though, in all probability, you have never pictured an elephant in a fridge. This illustrates that we as humans not only know about the world but also know how to apply our knowledge to situations we have never encountered before. How to evaluate whether a system has this sense-making capability remains a challenging question: existing benchmarks measure common-sense knowledge only indirectly and without explanation. In this thesis, we directly test whether a system can differentiate natural language statements that make sense from those that do not. A system is also asked to identify the most relevant reason why a given statement is against common sense. We evaluate models trained on large-scale language modeling tasks as well as human performance, showing that sense-making still poses distinct challenges for such systems.
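To make the validation subtask concrete, below is a minimal sketch of one common baseline: score each statement of a pair with a pretrained language model and flag the higher-loss (less plausible) statement as the one against common sense. It assumes the Hugging Face transformers library and the GPT-2 checkpoint; the perplexity heuristic and the example sentence pair are illustrative, not the specific method evaluated in the thesis.

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def sentence_loss(sentence: str) -> float:
    # Average per-token cross-entropy of the sentence under the LM.
    # Lower loss roughly means the model finds the sentence more plausible.
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs, labels=inputs["input_ids"])
    return outputs.loss.item()

# Hypothetical statement pair in the style of the validation task.
pair = ("He put a turkey into the fridge.",
        "He put an elephant into the fridge.")
losses = {s: sentence_loss(s) for s in pair}
for s, loss in losses.items():
    print(f"{loss:.3f}  {s}")
# The higher-loss statement is treated as the nonsensical one.
print("Flagged as against common sense:", max(losses, key=losses.get))

A plain LM score of this kind captures fluency as much as plausibility, which is one reason the benchmark also asks systems to pick an explanation rather than relying on scores alone.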