William W. Cohen's Papers: Neural Knowledge Representation
- Haitian Sun, William W. Cohen, Ruslan Salakhutdinov (2023): Answering Ambiguous Questions with a Database of Questions, Answers, and Revisions in progress.
- Following up on the 'QA Is the New KR' paper, we present a new collection of question-answer pairs automatically generated from Wikipedia which are more specific and less ambiguous than the generated questions used in prior work, and show that this collection can be used to answer ambiguous questions. On the challenging ASQA benchmark, which requires generating long-form answers that summarize the multiple answers to an ambiguous question, our method improves performance by 10-15%. The new question DB can also be used to improve diverse passage retrieval.
- Wenhu Chen, William W. Cohen, Michiel De Jong, Nitish Gupta, Alessandro Presta, Pat Verga, John Wieting (2023): QA Is the New KR: Question-Answer Pairs as Knowledge Bases in AAAI-2023.
- Proposes that symbolic KBs can be replaced with a collection of question-answer pairs automatically generated from a corpus, augmented with entity-linking annotations. Like a symbolic KB, this representation is well-suited to structured queries involving joins and aggregation, and can support 'multi-hop' reasoning. However, it has the advantage that the information in it is closely aligned to likely user information needs, as modeled by the question generation process.
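To make the idea concrete, here is a minimal sketch (not the paper's implementation) of treating entity-linked QA pairs as a KB and composing two of them to answer a multi-hop question; the tiny database, the `lookup` matcher, and the substitution step are hypothetical stand-ins for the generated question DB and the trained retrieval models.

```python
# Toy "QA pairs as a KB": each entry is (question, answer, entity links).
# All data and the string-matching retrieval below are illustrative only.

QA_PAIRS = [
    ("Who directed Inception?", "Christopher Nolan",
     {"Inception", "Christopher Nolan"}),
    ("Where was Christopher Nolan born?", "London",
     {"Christopher Nolan", "London"}),
]

def lookup(question_fragment):
    """Toy retrieval: return answers of QA pairs whose question matches."""
    return [a for q, a, _ in QA_PAIRS if question_fragment.lower() in q.lower()]

# A two-hop question ("Where was the director of Inception born?") becomes
# a join: answer the first sub-question, substitute its answer into the second.
hop1 = lookup("directed Inception")[0]   # -> "Christopher Nolan"
hop2 = lookup(f"{hop1} born")[0]         # -> "London"
print(hop2)
```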
- Haitian Sun, William W. Cohen, Ruslan Salakhutdinov (2023): Scenario-based Question Answering with Interacting Contextual Properties in ICLR-2023.
- Wenhu Chen, Pat Verga, Michiel de Jong, John Wieting, William W. Cohen (2022): Augmenting Pre-trained Language Models with QA-Memory for Open-Domain Question Answering in EACL-2023.
- Extends the techniques of Mention Memory in several important ways: (1) the memory is a memory of generated question-answer pairs, which is more interpretable than neural entity-mention encodings; (2) it is based on pre-trained T5, not a custom Transformer; and (3) it allows use of the token-level encodings of retrieved QA pairs, as well as neural encodings of them, for reasoning. Using QA pairs instead of passages allows a clever pre-training trick for learning to retrieve, and the model greatly outperforms a prior similar model (RePAQ) on smaller QA benchmarks.
- Haitian Sun, William W. Cohen, Ruslan Salakhutdinov (2021): ConditionalQA: A Complex Reading Comprehension Dataset with Conditional Answers in ACL 2022.
- A novel dataset with (1) long context documents containing information that is related in logically complex ways, and (2) multi-hop questions that require compositional logical reasoning. Intended as a more realistic version of ShARC, a QA task considered in 'End-to-End Multihop Retrieval for Compositional Question Answering over Long Documents'.
- Michiel de Jong, Yury Zemlyanskiy, Nicholas FitzGerald, Fei Sha, William Cohen (2021): Mention Memory: incorporating textual knowledge into Transformers through entity mention attention in ICLR-2022.
- Similar to the Entities-as-Experts model, but uses a much larger memory of entity mentions, which allows the model to potentially provide meaningful provenance for information. The model, called TOME, outperforms Entities-as-Experts on several tasks, and required some non-trivial technical innovations relating to memory pre-training and efficient retrieval.
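As a rough illustration of the mechanism (not TOME's actual architecture), the sketch below attends over a table of dense mention encodings and returns both a weighted value vector and the indices of the retrieved mentions, which is what makes provenance possible; the random table, dimensions, and top-k search stand in for TOME's learned encoders and approximate nearest-neighbor retrieval over a very large mention table.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 64
mention_keys = rng.normal(size=(10_000, d))    # one row per entity mention
mention_values = rng.normal(size=(10_000, d))  # value vectors for the same mentions

def attend_to_memory(query, top_k=32):
    """Retrieve top-k mentions by inner product, then softmax-attend over them."""
    scores = mention_keys @ query                  # score every mention
    top = np.argpartition(-scores, top_k)[:top_k]  # stand-in for ANN search
    w = np.exp(scores[top] - scores[top].max())
    w /= w.sum()
    # The weighted value is mixed back into the Transformer's token stream;
    # `top` doubles as provenance (which mentions informed the prediction).
    return w @ mention_values[top], top

summary, provenance = attend_to_memory(rng.normal(size=d))
print(summary.shape, sorted(provenance)[:5])
```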
- Haitian Sun, William W. Cohen, Ruslan Salakhutdinov (2021): End-to-End Multihop Retrieval for Compositional Question Answering over Long Documents in preparation.
- Adapts many of the ideas used for multihop KBQA to a new task - answering multihop questions over a large document. Retrieval steps in this "DocHopper" system retrieve passages of a document, and the retrieved items are combined with a question neurally: i.e., rather than appending text to a question and re-encoding that discrete object, what is retrieved is a vector summary of the document, which is mixed with the previous question encoding. This is fast, fully differentiable, allows retrieval of large document subsections, and gets a new SOTA on three datasets.
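A minimal numeric sketch of that update follows, with random matrices standing in for DocHopper's learned encoders: each hop scores passage encodings against the current query vector and mixes the retrieved vector into the query, rather than re-encoding concatenated text.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 64
passages = rng.normal(size=(500, d))  # dense encodings of one long document's passages
W_mix = rng.normal(size=(d, 2 * d)) / np.sqrt(2 * d)  # learned in the real model

def hop(query):
    """One retrieval step: pick a passage, fuse its vector with the query."""
    best = int(np.argmax(passages @ query))  # hard argmax here; soft attention keeps
                                             # the real system fully differentiable
    return W_mix @ np.concatenate([query, passages[best]]), best

q = rng.normal(size=d)
for step in range(3):                        # e.g. a 3-hop compositional question
    q, picked = hop(q)
    print(f"hop {step}: retrieved passage {picked}")
```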
- Wenhu Chen, Ming-Wei Chang, Eva Schlinger, William Wang, William W. Cohen (2021): Open Question Answering Over Tables and Text in ICLR-2021.
- Answers open-domain multi-hop questions over tables and text with a clever "early fusion" idea, which proposes and indexes likely reasoning chains, and uses long-document Transformers to merge these noisy evidence chains.
- Pat Verga, Haitian Sun, Livio Baldini Soares, and William W. Cohen (2021): Adaptable and Interpretable Neural Memory Over Symbolic Knowledge in NAACL-2021.
- Most recent paper on the Fact-Injected Language Model (FILM), which includes an Entities-as-Experts style memory of neural entity encodings, plus a second "fact memory" of KG triples. FILM has good results on KBQA tasks, and allows one to use an edited KB without retraining.
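The adaptability claim is easy to illustrate: because facts live in a memory table rather than in model weights, editing the KB is a table update. The toy below uses symbolic keys and values where FILM uses dense encodings of (subject, relation) keys and object values; the specific facts are made up.

```python
# Toy FILM-style "fact memory": (subject, relation) -> object.
# Real FILM stores dense embeddings of these keys/values and lets the
# language model attend to them; plain tuples keep the idea visible here.

fact_memory = {
    ("Barack Obama", "born_in"): "Honolulu",
    ("Honolulu", "located_in"): "Hawaii",
}

def answer(subject, relation):
    return fact_memory.get((subject, relation), "<unknown>")

print(answer("Barack Obama", "born_in"))           # -> Honolulu
fact_memory[("Honolulu", "located_in")] = "Oahu"   # a hypothetical KB edit
print(answer("Honolulu", "located_in"))            # reflects the edit, no retraining
```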
- Bill Yuchen Lin, Haitian Sun, Bhuwan Dhingra, Manzil Zaheer, Xiang Ren, William W. Cohen (2020): Differentiable Open-Ended Commonsense Reasoning in NAACL-2021.
- Extends DrKIT's virtual KB to a corpus of documents of common-sense statements ("facts"). In DrFact, entities are replaced by noisy and ambiguous concepts, and navigation is between documents with overlapping sets of mentions. Also introduces new "open" tasks for common-sense QA.
- Haitian Sun, Andrew O. Arnold, Tania Bedrax-Weiss, Fernando Pereira, William W. Cohen (2020): Faithful Embeddings for Knowledge Base Queries in NeurIPS-2020.
- An extension of Neural Query Language (NQL) that extends the query language to work with a "centroid-sketch" representation of sets. The centroid encodes a geometric region, and the sketch is a randomized data structure that adds capacity to the representation, allowing faithful differentiable logical reasoning to be combined with good generalization.
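A toy version of the representation (illustrative sizes and hashing, random embeddings) is sketched below: membership requires both closeness to the centroid and a hit in a count-min-style sketch, so near-neighbors that are not actually set members get filtered out.

```python
import numpy as np

rng = np.random.default_rng(0)
n_entities, d = 1000, 32
E = rng.normal(size=(n_entities, d))          # entity embeddings (random stand-ins)

DEPTH, WIDTH = 4, 64                          # sketch dimensions (illustrative)
HASH = rng.integers(0, WIDTH, size=(DEPTH, n_entities))  # random hash functions

def encode_set(members):
    centroid = E[members].mean(axis=0)        # geometric part: generalizes
    sketch = np.zeros((DEPTH, WIDTH))         # randomized part: stays faithful
    for m in members:
        sketch[np.arange(DEPTH), HASH[:, m]] += 1.0
    return centroid, sketch

def membership_score(entity, centroid, sketch):
    """Soft membership: close to the centroid AND present in the sketch."""
    in_sketch = sketch[np.arange(DEPTH), HASH[:, entity]].min()
    return (E[entity] @ centroid) * min(in_sketch, 1.0)

centroid, sketch = encode_set([3, 7, 42])
print(membership_score(7, centroid, sketch))   # positive: a true member
print(membership_score(8, centroid, sketch))   # almost surely zero: filtered by sketch
```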
- Pat Verga, Haitian Sun, Livio Baldini Soares, and William W. Cohen (2020): Facts as Experts: Adaptable and Interpretable Neural Memory over Symbolic Knowledge in arxiv.
- Earlier draft of the NAACL paper on FILM (Fact-Injected LM).
- William W. Cohen, Fan Yang, and Kathryn Rivard Mazaitis (2020): TensorLog: A Probabilistic Database Implemented Using Deep-Learning Infrastructure in JAIR.
- Most complete paper on TensorLog, a predecessor of NQL/EmQL that was a Prolog-like logic, not a dataflow query language.
- William W. Cohen, Haitian Sun, R. Alex Hofer, Matthew Siegler (2020): Scalable Neural Methods for Reasoning With a Symbolic Knowledge Base in ICLR-2020.
- Paper on Neural Query Language (NQL), a differentiable dataflow query language. NQL is useful for building KBQA systems that can be trained from denotations, but it relies heavily on sparse-matrix operations that are not implemented on all accelerators.
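The core operation is easy to show: a weighted set of entities is a vector, a KB relation is a sparse matrix, and "following" the relation is a sparse matrix-vector product. The four-entity KB below is invented for illustration.

```python
import numpy as np
from scipy.sparse import csr_matrix

entities = ["usa", "canada", "dc", "ottawa"]
idx = {e: i for i, e in enumerate(entities)}

# has_capital(usa, dc) and has_capital(canada, ottawa) as a sparse 4x4 matrix
rows = [idx["usa"], idx["canada"]]
cols = [idx["dc"], idx["ottawa"]]
M_has_capital = csr_matrix((np.ones(2), (rows, cols)), shape=(4, 4))

x = np.zeros(4)
x[idx["usa"]] = 1.0                 # the (weighted) singleton set {usa}
y = M_has_capital.T @ x             # follow the relation: reachable entities
print([entities[i] for i in np.flatnonzero(y)])   # -> ['dc']
```

Chaining such products gives multi-hop queries, and because every step is differentiable, answer supervision (denotations) can train the whole pipeline; these sparse multiplications are exactly what the note about accelerator support refers to.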
- Bhuwan Dhingra, Manzil Zaheer, Vidhisha Balachandran, Graham Neubig, Ruslan Salakhutdinov, William W. Cohen (2020): Differentiable Reasoning over a Virtual Knowledge Base in ICLR-2020.
- Describes DrKIT, which allows one to answer multihop chain queries on a "virtual KB"---a corpus of entity-linked documents. In DrKIT, entity mentions are indexed for neural retrieval with a rich representation of their context, and reasoning consists of navigating between co-occurring mentions.
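A toy one-hop version of this (random encodings and a random co-occurrence matrix standing in for DrKIT's trained mention index and entity-linked corpus) is sketched below: a hop scores mentions against the relational part of the question, restricts them to mentions co-occurring with currently active entities, and aggregates the survivors back onto entities.

```python
import numpy as np

rng = np.random.default_rng(0)
n_mentions, n_entities, d = 5000, 300, 64
mention_enc = rng.normal(size=(n_mentions, d))   # contextual mention index
# 0/1 matrix: mention m co-occurs with entity e in the same passage
cooccur = (rng.random(size=(n_mentions, n_entities)) < 0.01).astype(float)

def hop(entity_weights, relation_query):
    """One hop on the virtual KB: entities -> mentions -> co-occurring entities."""
    mention_scores = mention_enc @ relation_query       # score every mention
    mention_scores *= cooccur @ entity_weights          # keep mentions near active entities
    return cooccur.T @ np.maximum(mention_scores, 0.0)  # push scores back to entities

x = np.zeros(n_entities); x[17] = 1.0   # start from one entity linked in the question
x = hop(x, rng.normal(size=d))          # after one hop: a weighted entity set
print(np.count_nonzero(x), "entities active after the hop")
```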
- William W. Cohen, Haitian Sun, Alex Hofer, Matthew Siegler (2019): Differentiable Representations For Multihop Inference Rules in arxiv.
- Earlier version of ICLR paper on NQL.
- William W. Cohen, Matthew Siegler, Alex Hofer (2019): Neural Query Language: A Knowledge Base Query Language for Tensorflow in arxiv.
- Earlier version of ICLR paper on NQL focusing on the language constructs used.