William W. Cohen's Papers: Retrieval Augmented LMs
- Tal Schuster, Adam D. Lelkes, Haitian Sun, Jai Gupta, Jonathan Berant, William W. Cohen, Donald Metzler (2024): SEMQA: Semi-Extractive Multi-Source Question Answering in NAACL-2024.
- Yury Zemlyanskiy, Michiel de Jong, Luke Vilnis, Santiago Ontañón, William W. Cohen, Sumit Sanghai, Joshua Ainslie (2024): MEMORY-VQ: Compression for Tractable Internet-Scale Memory in NAACL-2024.
- Haitian Sun, William W. Cohen, Ruslan Salakhutdinov (2023): Answering Ambiguous Questions with a Database of Questions, Answers, and Revisions in progress.
- Following up on the 'QA is the new KR' paper, we present a new collection of question-answer pairs automatically generated from Wikipedia which are more specific and less ambiguous than the generated questions used in prior work, and show that this collection can be used to answer ambiguous questions. On the challenging ASQA benchmark, which requires generating long-form answers that summarize the multiple answers to an ambiguous question, our method improves performance by 10-15%. The new question DB can also be used to improve diverse passage retrieval.
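  A minimal sketch (my own illustration, not the paper's system) of how such a question DB might be used: retrieving the stored questions most similar to an ambiguous query approximates enumerating its disambiguated readings. The names `qa_db`, `tokens`, and `readings` are invented, the data is toy, and token overlap stands in for a trained dense retriever.

  ```python
  # Minimal sketch: retrieving the stored questions most similar to an
  # ambiguous query approximates enumerating its disambiguated readings.
  # Token overlap stands in for a trained dense retriever; data is invented.
  import re

  qa_db = [
      ("Who played the Joker in The Dark Knight (2008)?", "Heath Ledger"),
      ("Who played the Joker in Joker (2019)?", "Joaquin Phoenix"),
      ("Who directed The Dark Knight?", "Christopher Nolan"),
  ]

  def tokens(text):
      return set(re.findall(r"\w+", text.lower()))

  def readings(ambiguous_q, k=2):
      """Each of the k best-matching stored QA pairs is one reading."""
      q = tokens(ambiguous_q)
      return sorted(qa_db,
                    key=lambda qa: len(q & tokens(qa[0])) / len(q | tokens(qa[0])),
                    reverse=True)[:k]

  for question, answer in readings("Who played the Joker?"):
      print(f"{question} -> {answer}")
  ```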
- Michiel de Jong, Yury Zemlyanskiy, Nicholas FitzGerald, Sumit Sanghai, William W. Cohen, Joshua Ainslie (2023): GLIMMER: generalized late-interaction memory reranker in progress.
- Michiel de Jong, Yury Zemlyanskiy, Nicholas FitzGerald, Joshua Ainslie, Sumit Sanghai, Fei Sha, William W. Cohen (2023): Pre-computed memory or on-the-fly encoding? A hybrid approach to retrieval augmentation makes the most of your compute in ICML-2023.
- Wenhu Chen, Hexiang Hu, Xi Chen, Pat Verga, William W. Cohen (2022): MuRAG: Multimodal Retrieval-Augmented Generator for Open Question Answering over Images and Text in EMNLP-2022.
- Michiel de Jong, Yury Zemlyanskiy, Joshua Ainslie, Nicholas FitzGerald, Sumit Sanghai, Fei Sha, William Cohen (2023): FiDO: Fusion-in-Decoder optimized for stronger performance and faster inference in ACL-2023 (Findings).
- Wenhu Chen, Hexiang Hu, Chitwan Saharia, William W. Cohen (2023): Re-Imagen: Retrieval-Augmented Text-to-Image Generator in ICLR-2023.
- Wenhu Chen, William W. Cohen, Michiel De Jong, Nitish Gupta, Alessandro Presta, Pat Verga, John Wieting (2023): QA Is the New KR: Question-Answer Pairs as Knowledge Bases in AAAI-2023.
- Proposes that symbolic KBs can be replaced with a collection of question-answer pairs automatically generated from a corpus, augmented with entity-linking annotations. Like a symbolic KB, this representation is well-suited to structured queries involving joins and aggregation, and can support 'multi-hop' reasoning. However, it has the advantage that the information in it is closely aligned to likely user information needs, as modeled by the question generation process.
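  A toy sketch of the 'QA pairs as a KB' idea (my simplification; all data and names are invented): because each generated pair carries entity-link annotations, a multi-hop 'join' reduces to feeding one pair's linked answer entity into the lookup for the next hop.

  ```python
  # Toy 'QA pairs as a KB' store: each generated pair carries entity links,
  # so a multi-hop 'join' is just feeding one pair's answer entity into the
  # lookup for the next hop. All data and names here are invented.
  qa_kb = [
      {"q": "Who directed Inception?", "a": "Christopher Nolan",
       "subject": "Inception", "answer_entity": "Christopher Nolan"},
      {"q": "Where was Christopher Nolan born?", "a": "London",
       "subject": "Christopher Nolan", "answer_entity": "London"},
  ]

  def lookup(subject, keyword):
      """Retrieve QA pairs about `subject` whose question mentions `keyword`."""
      return [p for p in qa_kb
              if p["subject"] == subject and keyword in p["q"].lower()]

  def two_hop(subject, kw1, kw2):
      """Join: the answer entity of hop 1 becomes the subject of hop 2."""
      for hop1 in lookup(subject, kw1):
          for hop2 in lookup(hop1["answer_entity"], kw2):
              return hop2["a"]

  print(two_hop("Inception", "directed", "born"))  # -> London
  ```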
- Haitian Sun, William W. Cohen, Ruslan Salakhutdinov (2023): Scenario-based Question Answering with Interacting Contextual Properties in ICLR-2023.
- Bernd Bohnet, Vinh Q. Tran, Pat Verga, Roee Aharoni, Daniel Andor, Livio Baldini Soares, Jacob Eisenstein, Kuzman Ganchev, Jonathan Herzig, Kai Hui, Tom Kwiatkowski, Ji Ma, Jianmo Ni, Tal Schuster, William W. Cohen, Michael Collins, Dipanjan Das, Donald Metzler, Slav Petrov, and Kellie Webster (2022): Attributed Question Answering: Evaluation and Modeling for Attributed Large Language Models in progress.
- Wenhu Chen, Pat Verga, Michiel de Jong, John Wieting, William W. Cohen (2022): Augmenting Pre-trained Language Models with QA-Memory for Open-Domain Question Answering in EACL-2023.
- Extends the techniques of Mention Memory in several important ways: (1) the memory is a memory of generated question-answer pairs, which is more interpretable than neural entity-mention encodings; (2) it is based on pre-trained T5, not a custom Transformer; and (3) it allows use of the token-level encoding of retrieved QA pairs as well as neural encodings of them for reasoning. Using QA pairs instead of passages allows a clever pre-training trick for learning to retrieve, and the model greatly outperforms a prior similar model (i.e., RePAQ) on smaller QA benchmarks.
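  A hand-wavy sketch of point (3), under stated assumptions: all names are placeholders and random vectors stand in for learned encodings. It only shows that a retrieved QA pair can be consumed both as raw tokens spliced into the reader input and as its dense memory encoding.

  ```python
  # Hand-wavy sketch of using a retrieved QA pair two ways (placeholder
  # names; random vectors stand in for learned encodings).
  import numpy as np

  rng = np.random.default_rng(0)
  qa_memory = [("Who directed Inception?", "Christopher Nolan")]
  keys = rng.standard_normal((len(qa_memory), 16))  # dense QA-pair encodings

  def retrieve(query_vec):
      """Inner-product retrieval against the dense QA-pair keys."""
      return int(np.argmax(keys @ query_vec))

  idx = retrieve(rng.standard_normal(16))
  q, a = qa_memory[idx]
  # (a) token-level use: splice the retrieved pair into the reader's input
  reader_input = f"question: Who made Inception? context: {q} {a}"
  # (b) neural use: the dense key itself can be attended to as a vector
  dense_context = keys[idx]
  print(reader_input)
  ```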
- Vidhisha Balachandran and Bhuwan Dhingra and Haitian Sun and Michael Collins and William W. Cohen (2021): Investigating the Effect of Background Knowledge on Natural Questions in DeeLIO-2021.
- Proceedings of Deep Learning Inside Out (DeeLIO): The 2nd Workshop on Knowledge Extraction and Integration for Deep Learning Architectures
- Michiel de Jong, Yury Zemlyanskiy, Nicholas FitzGerald, Fei Sha, William Cohen (2021): Mention Memory: incorporating textual knowledge into Transformers through entity mention attention in ICLR-2022.
- Similar to the Entities-as-Experts model, but uses a much larger memory of entity mentions, which allows the model to potentially provide meaningful provenance for information. The model, called TOME, outperforms Entities-as-Experts on several tasks, and required some non-trivial technical innovations relating to memory pre-training and efficient retrieval.
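  A small numpy sketch of the attention mechanics only (the real TOME memory is vastly larger, and its encodings are learned, not random): attending over pre-computed mention vectors yields both a knowledge-infused representation and an inspectable source mention.

  ```python
  # Numpy sketch of attention over a pre-computed mention memory (the real
  # TOME memory is vastly larger, and its encodings are learned, not random).
  import numpy as np

  rng = np.random.default_rng(0)
  mention_texts = ["[Heath Ledger] played the Joker",
                   "[Christopher Nolan] directed Inception"]
  memory = rng.standard_normal((2, 8))  # frozen entity-mention encodings

  def attend(query):
      scores = memory @ query
      weights = np.exp(scores - scores.max())
      weights /= weights.sum()
      value = weights @ memory                           # knowledge-infused vector
      provenance = mention_texts[int(weights.argmax())]  # inspectable source
      return value, provenance

  _, source = attend(rng.standard_normal(8))
  print(source)
  ```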
- Haitian Sun, William W. Cohen, Ruslan Salakhutdinov (2021): End-to-End Multihop Retrieval for Compositional Question Answering over Long Documents in preparation.
- Adapts many of the ideas used for multihop KBQA to a new task: answering multihop questions over a large document. Retrieval steps in this "DocHopper" system retrieve passages of a document, and each retrieved item is combined with the question neurally: rather than appending text to the question and re-encoding that discrete object, the system retrieves a vector summary of the passage and mixes it with the previous question encoding. This is fast, fully differentiable, allows retrieval of large document subsections, and achieves a new state of the art on three datasets.
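  A rough rendering of the hop loop under simplifying assumptions of my own: sections are pre-encoded vectors, the query update is simple addition (the actual mixing is learned), and a mask keeps the toy from re-retrieving the same section.

  ```python
  # Rough rendering of DocHopper-style hops: retrieve a pre-encoded section
  # vector and mix it into the query, rather than appending and re-encoding
  # text. The additive update and the mask are my simplifications.
  import numpy as np

  rng = np.random.default_rng(0)
  sections = rng.standard_normal((10, 16))  # pre-encoded document sections

  def multihop_retrieve(query, hops=3):
      retrieved, q = [], query
      for _ in range(hops):
          scores = sections @ q              # dense inner-product retrieval
          scores[retrieved] = -np.inf        # toy fix: don't re-retrieve
          idx = int(np.argmax(scores))
          retrieved.append(idx)
          q = q + sections[idx]              # differentiable query update
      return retrieved

  print(multihop_retrieve(rng.standard_normal(16)))
  ```

  Because nothing here is ever converted back to discrete text, every step stays differentiable end to end, which is the property the paper exploits.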
- Haitian Sun, Pat Verga, Bhuwan Dhingra, Ruslan Salakhutdinov, William W. Cohen (2021): Reasoning Over Virtual Knowledge Bases With Open Predicate Relations in ICML-2021.
- Modifies the FILM model by using a virtual KB of small text passages, each containing a pair of entities. This required adding a Matching-the-Blanks pretraining phase, but yields strong results on a number of QA-from-corpora tasks.
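  For readers unfamiliar with Matching-the-Blanks (Baldini Soares et al., 2019), a toy illustration of how its pretraining pairs are formed; the passages and helper names are invented.

  ```python
  # Toy illustration of Matching-the-Blanks-style pretraining pairs:
  # blank out entity mentions, and treat two passages as positives iff
  # they mention the same entity pair. Data is invented.
  from itertools import combinations

  passages = [
      ("Nolan directed Inception.", ("Nolan", "Inception")),
      ("Inception was directed by Nolan.", ("Nolan", "Inception")),
      ("Ledger starred in The Dark Knight.", ("Ledger", "The Dark Knight")),
  ]

  def blank(text, entities):
      for e in entities:
          text = text.replace(e, "[BLANK]")
      return text

  for (a, ea), (b, eb) in combinations(passages, 2):
      positive = set(ea) == set(eb)   # same entity pair -> positive example
      print(positive, "|", blank(a, ea), "|", blank(b, eb))
  ```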
- Wenhu Chen, Ming-Wei Chang, Eva Schlinger, William Wang, William W. Cohen (2021): Open Question Answering Over Tables and Text in ICLR-2021.
- Answers open-domain multi-hop questions over tables and text with a clever "early fusion" idea, which proposes and indexes likely reasoning chains, and uses long-document Transformers to merge these noisy evidence chains.
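  A minimal sketch of the early-fusion indexing step as I read it (invented data and field names; the paper's actual entity linking and reader are far richer): table rows are pre-joined with passages about their linked entities, and the fused block is indexed as a single retrieval unit.

  ```python
  # Sketch of 'early fusion': pre-join each table row with passages about
  # its linked entities, and index the fused block as one retrieval unit.
  # The data, linking, and serialization are invented placeholders.
  table_rows = [{"Film": "Inception", "Year": "2010"}]
  passages = {"Inception": "Inception was directed by Christopher Nolan."}

  fused_index = []
  for row in table_rows:
      block = " ; ".join(f"{k}: {v}" for k, v in row.items())
      for cell in row.values():
          if cell in passages:          # stand-in for entity linking
              block += " | " + passages[cell]
      fused_index.append(block)

  print(fused_index[0])  # one indexed, retrievable evidence chain
  ```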
- Pat Verga, Haitian Sun, Livio Baldini Soares, and William W. Cohen (2021): Adaptable and Interpretable Neural Memory Over Symbolic Knowledge in NAACL-2021.
- Most recent paper on the Fact-Injected Language Model (FILM), which includes an Entities-as-Experts style memory of neural entity encodings, plus a second "fact memory" of KG triples. FILM has good results on KBQA tasks, and allows one to use an edited KB without retraining.
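  The editability property can be shown with a deliberately non-neural toy (FILM itself attends over dense encodings of the triples; this only illustrates why a symbolic fact memory supports post-hoc edits).

  ```python
  # Deliberately non-neural toy showing why a symbolic fact memory supports
  # post-hoc edits: changing the store changes answers with no retraining.
  fact_memory = {("Inception", "director"): "Christopher Nolan"}

  def answer(subject, relation):
      return fact_memory.get((subject, relation), "unknown")

  print(answer("Inception", "director"))               # Christopher Nolan
  fact_memory[("Inception", "director")] = "<edited>"  # KB edit, no retraining
  print(answer("Inception", "director"))               # reflects the edit
  ```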
- Bill Yuchen Lin, Haitian Sun, Bhuwan Dhingra, Manzil Zaheer, Xiang Ren, William W. Cohen (2020): Differentiable Open-Ended Commonsense Reasoning in NAACL-2021.
- Extends DrKIT's virtual KB to a corpus of common-sense statements ("facts"). In DrFact, entities are replaced by noisy and ambiguous concepts, and navigation is between documents with overlapping sets of concept mentions. Also introduces new "open" tasks for common-sense QA.
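  An illustrative, non-differentiable version of the follow-the-concepts hop (invented facts; DrFact implements this with sparse matrix operations and learned scores).

  ```python
  # Non-differentiable toy of the follow-the-concepts hop: two fact
  # sentences are linked if their concept mentions overlap. Invented data.
  facts = {
      "f1": {"text": "Trees produce oxygen.",
             "concepts": {"tree", "oxygen"}},
      "f2": {"text": "Oxygen supports combustion.",
             "concepts": {"oxygen", "combustion"}},
  }

  def hop(fact_ids):
      """Expand a set of facts to facts sharing at least one concept."""
      out = set()
      for fid in fact_ids:
          for gid, g in facts.items():
              if gid != fid and facts[fid]["concepts"] & g["concepts"]:
                  out.add(gid)
      return out

  print(hop({"f1"}))  # {'f2'}, reached via the shared concept 'oxygen'
  ```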
- Pat Verga, Haitian Sun, Livio Baldini Soares, and William W. Cohen (2020): Facts as Experts: Adaptable and Interpretable Neural Memory over Symbolic Knowledge on arXiv.
- Earlier draft of the NAACL paper on FILM (Fact-Injected LM).
- Bhuwan Dhingra, Manzil Zaheer, Vidhisha Balachandran, Graham Neubig, Ruslan Salakhutdinov, William W. Cohen (2020): Differentiable Reasoning over a Virtual Knowledge Base in ICLR-2020.
- Describes DrKIT, which allows one to answer multihop chain queries on a "virtual KB"---a corpus of entity-linked documents. In DrKIT, entity mentions are indexed for neural retrieval with a rich representation of their context, and reasoning consists of navigating between co-occurring mentions.
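  A rough numpy rendering of a single DrKIT hop under simplifying assumptions of mine: a dense co-occurrence matrix and a fixed vector standing in for mention-query match scores, where the real system uses sparse matrices and a learned mention index.

  ```python
  # Rough numpy rendering of one DrKIT hop: spread a weighted entity set
  # over co-occurring mentions, then gate by mention-query match scores.
  # The dense matrix and fixed scores are simplifications of the sparse,
  # learned originals.
  import numpy as np

  entities = ["Inception", "Nolan", "London"]
  # cooccur[i, j] > 0 iff mentions of entity j co-occur with entity i
  cooccur = np.array([[0, 1, 0],
                      [1, 0, 1],
                      [0, 1, 0]], dtype=float)
  relevance = np.array([0.1, 0.9, 0.2])  # stand-in for mention-query scores

  def hop(entity_weights):
      """One differentiable hop: expand to co-occurring mentions, then gate."""
      return (cooccur.T @ entity_weights) * relevance

  start = np.array([1.0, 0.0, 0.0])       # all mass on 'Inception'
  print(dict(zip(entities, hop(start))))  # mass flows to 'Nolan'
  ```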