Year Conf. Topics Cited Paper Authors URL
2019 ACL # optim-adam, arch-lstm, arch-att, arch-memo, arch-coverage, pre-elmo, pre-bert, task-spanlab 3 Real-Time Open-Domain Question Answering with Dense-Sparse Phrase Index Minjoon Seo, Jinhyuk Lee, Tom Kwiatkowski, Ankur Parikh, Ali Farhadi, Hannaneh Hajishirzi https://www.aclweb.org/anthology/P19-1436.pdf
2019 ACL # optim-adam, reg-dropout, reg-stopping, arch-lstm, arch-att, arch-selfatt, arch-transformer, pre-glove, pre-bert, task-textpair, task-spanlab, task-lm, task-seq2seq 3 E3: Entailment-driven Extracting and Editing for Conversational Machine Reading Victor Zhong, Luke Zettlemoyer https://www.aclweb.org/anthology/P19-1223.pdf
2019 ACL # optim-adam, train-augment, arch-rnn, arch-transformer, pre-bert, task-spanlab 2 RankQA: Neural Question Answering with Answer Re-Ranking Bernhard Kratzwald, Anna Eigenmann, Stefan Feuerriegel https://www.aclweb.org/anthology/P19-1611.pdf
2019 ACL # train-augment, arch-rnn, arch-lstm, arch-bilstm, pre-bert, struct-hmm, adv-examp, adv-train, task-spanlab, task-lm, task-seq2seq 6 Combating Adversarial Misspellings with Robust Word Recognition Danish Pruthi, Bhuwan Dhingra, Zachary C. Lipton https://www.aclweb.org/anthology/P19-1561.pdf
2019 ACL # optim-adam, reg-dropout, train-mll, train-transfer, arch-rnn, arch-att, arch-selfatt, arch-subword, arch-transformer, task-spanlab, task-lm, task-seq2seq 4 Cross-Lingual Training for Automatic Question Generation Vishwajeet Kumar, Nitish Joshi, Arijit Mukherjee, Ganesh Ramakrishnan, Preethi Jyothi https://www.aclweb.org/anthology/P19-1481.pdf
2019 ACL # norm-layer, train-mll, train-transfer, arch-att, arch-subword, arch-transformer, task-textclass, task-spanlab, task-lm, task-seq2seq 5 Large-Scale Transfer Learning for Natural Language Generation Sergey Golovanov, Rauf Kurbanov, Sergey Nikolenko, Kyryl Truskovskyi, Alexander Tselousov, Thomas Wolf https://www.aclweb.org/anthology/P19-1608.pdf
2019 ACL # optim-adam, optim-amsgrad, optim-projection, reg-stopping, train-mtl, train-transfer, arch-lstm, arch-bilstm, arch-att, arch-coverage, arch-transformer, pre-skipthought, pre-elmo, pre-bert, task-textclass, task-textpair, task-spanlab, task-lm, task-seq2seq, task-relation 0 Can You Tell Me How to Get Past Sesame Street? Sentence-Level Pretraining Beyond Language Modeling Alex Wang, Jan Hula, Patrick Xia, Raghavendra Pappagari, R. Thomas McCoy, Roma Patel, Najoung Kim, Ian Tenney, Yinghui Huang, Katherin Yu, Shuning Jin, Berlin Chen, Benjamin Van Durme, Edouard Grave, Ellie Pavlick, Samuel R. Bowman https://www.aclweb.org/anthology/P19-1439.pdf
2019 ACL # optim-adam, arch-rnn, arch-gru, arch-gnn, arch-att, arch-selfatt, comb-ensemble, pre-elmo, pre-bert, task-textclass, task-spanlab, task-seq2seq 6 Multi-hop Reading Comprehension across Multiple Documents by Reasoning over Heterogeneous Graphs Ming Tu, Guangtao Wang, Jing Huang, Yun Tang, Xiaodong He, Bowen Zhou https://www.aclweb.org/anthology/P19-1260.pdf
2019 ACL # optim-adam, reg-dropout, arch-rnn, arch-lstm, arch-bilstm, arch-att, arch-selfatt, pre-bert, task-spanlab, task-seq2seq 0 Reading Turn by Turn: Hierarchical Attention Architecture for Spoken Dialogue Comprehension Zhengyuan Liu, Nancy Chen https://www.aclweb.org/anthology/P19-1543.pdf
2019 ACL # optim-adam, arch-lstm, pre-bert, latent-vae, task-textpair, task-spanlab, task-lm, task-tree 5 Latent Retrieval for Weakly Supervised Open Domain Question Answering Kenton Lee, Ming-Wei Chang, Kristina Toutanova https://www.aclweb.org/anthology/P19-1612.pdf
2019 ACL # optim-adam, optim-adagrad, reg-dropout, train-augment, pool-max, arch-rnn, arch-lstm, arch-bilstm, arch-att, arch-selfatt, arch-copy, search-beam, pre-glove, pre-bert, adv-examp, latent-vae, task-spanlab, task-seq2seq 3 Learning to Ask Unanswerable Questions for Machine Reading Comprehension Haichao Zhu, Li Dong, Furu Wei, Wenhui Wang, Bing Qin, Ting Liu https://www.aclweb.org/anthology/P19-1415.pdf
2019 ACL # optim-adam, reg-dropout, norm-gradient, arch-lstm, arch-gru, arch-bigru, arch-att, arch-selfatt, arch-memo, pre-glove, task-spanlab, task-seq2seq 0 Inferential Machine Comprehension: Answering Questions by Recursively Deducing the Evidence Chain from Text Jianxing Yu, Zhengjun Zha, Jian Yin https://www.aclweb.org/anthology/P19-1217.pdf
2019 ACL # optim-adam, optim-projection, reg-dropout, pool-max, arch-rnn, arch-lstm, arch-bilstm, arch-att, arch-memo, arch-bilinear, pre-glove, task-spanlab 0 Textbook Question Answering with Multi-modal Context Graph Understanding and Self-supervised Open-set Comprehension Daesik Kim, Seonhoon Kim, Nojun Kwak https://www.aclweb.org/anthology/P19-1347.pdf
2019 ACL # optim-adam, arch-lstm, arch-att, arch-selfatt, arch-bilinear, arch-subword, arch-transformer, comb-ensemble, pre-elmo, pre-bert, loss-margin, task-spanlab, task-lm 2 Enhancing Pre-Trained Language Representations with Rich Knowledge for Machine Reading Comprehension An Yang, Quan Wang, Jing Liu, Kai Liu, Yajuan Lyu, Hua Wu, Qiaoqiao She, Sujian Li https://www.aclweb.org/anthology/P19-1226.pdf
2019 ACL # arch-cnn, arch-att, arch-copy, pre-bert, task-textpair, task-spanlab, task-lm 4 Careful Selection of Knowledge to Solve Open Book Question Answering Pratyay Banerjee, Kuntal Kumar Pal, Arindam Mitra, Chitta Baral https://www.aclweb.org/anthology/P19-1615.pdf
2019 ACL # optim-adam, optim-adadelta, optim-projection, arch-lstm, arch-bilstm, arch-att, arch-selfatt, arch-copy, pre-glove, task-spanlab, task-condlm 3 Simple and Effective Curriculum Pointer-Generator Networks for Reading Comprehension over Long Narratives Yi Tay, Shuohang Wang, Anh Tuan Luu, Jie Fu, Minh C. Phan, Xingdi Yuan, Jinfeng Rao, Siu Cheung Hui, Aston Zhang https://www.aclweb.org/anthology/P19-1486.pdf
2019 ACL # optim-adam, arch-att, arch-coverage, task-textclass, task-spanlab, task-relation 2 Errudite: Scalable, Reproducible, and Testable Error Analysis Tongshuang Wu, Marco Tulio Ribeiro, Jeffrey Heer, Daniel Weld https://www.aclweb.org/anthology/P19-1073.pdf
2019 ACL # optim-adam, reg-dropout, arch-rnn, arch-birnn, arch-lstm, arch-att, arch-selfatt, arch-bilinear, arch-subword, comb-ensemble, pre-elmo, pre-bert, task-seqlab, task-spanlab, task-seq2seq 0 MC^2: Multi-perspective Convolutional Cube for Conversational Machine Reading Comprehension Xuanyu Zhang https://www.aclweb.org/anthology/P19-1622.pdf
2019 ACL # optim-adam, reg-dropout, reg-labelsmooth, train-transfer, arch-att, arch-selfatt, arch-copy, arch-bilinear, arch-coverage, arch-transformer, search-greedy, search-beam, pre-glove, task-spanlab, task-seq2seq, task-tree 1 Complex Question Decomposition for Semantic Parsing Haoyu Zhang, Jingjing Cai, Jianjun Xu, Ji Wang https://www.aclweb.org/anthology/P19-1440.pdf
2019 ACL # optim-adam, pool-max, arch-att, arch-selfatt, arch-transformer, comb-ensemble, pre-bert, task-spanlab, task-lm 7 Multi-hop Reading Comprehension through Question Decomposition and Rescoring Sewon Min, Victor Zhong, Luke Zettlemoyer, Hannaneh Hajishirzi https://www.aclweb.org/anthology/P19-1613.pdf
2019 ACL # optim-adam, arch-att, arch-subword, arch-transformer, search-beam, pre-fasttext, task-spanlab, task-lm, task-seq2seq 5 ELI5: Long Form Question Answering Angela Fan, Yacine Jernite, Ethan Perez, David Grangier, Jason Weston, Michael Auli https://www.aclweb.org/anthology/P19-1346.pdf
2019 ACL # optim-adam, pool-max, arch-rnn, arch-att, arch-selfatt, pre-bert, task-spanlab 8 Compositional Questions Do Not Necessitate Multi-hop Reasoning Sewon Min, Eric Wallace, Sameer Singh, Matt Gardner, Hannaneh Hajishirzi, Luke Zettlemoyer https://www.aclweb.org/anthology/P19-1416.pdf
2019 ACL # arch-att, arch-transformer, comb-ensemble, search-beam, pre-bert, task-spanlab, task-lm, task-seq2seq 9 Synthetic QA Corpora Generation with Roundtrip Consistency Chris Alberti, Daniel Andor, Emily Pitler, Jacob Devlin, Michael Collins https://www.aclweb.org/anthology/P19-1620.pdf
2019 ACL # optim-adam, train-augment, arch-cnn, arch-att, arch-bilinear, arch-coverage, task-spanlab, task-lm, task-relation 2 Are Red Roses Red? Evaluating Consistency of Question-Answering Models Marco Tulio Ribeiro, Carlos Guestrin, Sameer Singh https://www.aclweb.org/anthology/P19-1621.pdf
2019 ACL # optim-adam, train-mtl, arch-rnn, arch-birnn, arch-lstm, arch-att, arch-coverage, arch-transformer, comb-ensemble, search-beam, pre-bert, task-textpair, task-extractive, task-spanlab, task-lm, task-seq2seq 4 Answering while Summarizing: Multi-task Learning for Multi-hop QA with Evidence Extraction Kosuke Nishida, Kyosuke Nishida, Masaaki Nagata, Atsushi Otsuka, Itsumi Saito, Hisako Asano, Junji Tomita https://www.aclweb.org/anthology/P19-1225.pdf
2019 ACL # optim-adam, comb-ensemble, pre-bert, adv-examp, task-textpair, task-spanlab, task-seq2seq 3 Misleading Failures of Partial-input Baselines Shi Feng, Eric Wallace, Jordan Boyd-Graber https://www.aclweb.org/anthology/P19-1554.pdf
2019 ACL # optim-adam, reg-dropout, train-transfer, train-augment, arch-lstm, arch-bilstm, arch-cnn, arch-att, arch-selfatt, pre-glove, task-spanlab, task-seq2seq 2 Explicit Utilization of General Knowledge in Machine Reading Comprehension Chao Wang, Hui Jiang https://www.aclweb.org/anthology/P19-1219.pdf
2019 ACL # optim-adam, init-glorot, reg-decay, arch-gnn, arch-att, arch-selfatt, arch-memo, arch-coverage, pre-bert, task-spanlab, task-lm 7 Cognitive Graph for Multi-Hop Reading Comprehension at Scale Ming Ding, Chang Zhou, Qibin Chen, Hongxia Yang, Jie Tang https://www.aclweb.org/anthology/P19-1259.pdf
2019 ACL # optim-adam, reg-dropout, arch-rnn, arch-lstm, arch-att, arch-bilinear, task-textclass, task-spanlab, task-seq2seq 0 ChID: A Large-scale Chinese IDiom Dataset for Cloze Test Chujie Zheng, Minlie Huang, Aixin Sun https://www.aclweb.org/anthology/P19-1075.pdf
2019 ACL # optim-adam, reg-dropout, activ-relu, pool-mean, arch-rnn, arch-lstm, arch-cnn, arch-att, arch-bilinear, arch-transformer, pre-word2vec, pre-glove, pre-bert, task-spanlab 0 Open-Domain Why-Question Answering with Adversarial Learning to Encode Answer Texts Jong-Hoon Oh, Kazuma Kadowaki, Julien Kloetzer, Ryu Iida, Kentaro Torisawa https://www.aclweb.org/anthology/P19-1414.pdf
2019 ACL # train-mtl, train-transfer, arch-coverage, arch-subword, arch-transformer, comb-ensemble, pre-elmo, pre-bert, task-textclass, task-spanlab, task-lm, task-seq2seq, task-relation 4 BAM! Born-Again Multi-Task Networks for Natural Language Understanding Kevin Clark, Minh-Thang Luong, Urvashi Khandelwal, Christopher D. Manning, Quoc V. Le https://www.aclweb.org/anthology/P19-1595.pdf
2019 ACL # optim-adam, reg-dropout, train-mtl, arch-lstm, arch-att, arch-selfatt, arch-transformer, pre-paravec, pre-bert, adv-examp, task-spanlab 2 Retrieve, Read, Rerank: Towards End-to-End Multi-Document Reading Comprehension Minghao Hu, Yuxing Peng, Zhen Huang, Dongsheng Li https://www.aclweb.org/anthology/P19-1221.pdf
2019 ACL # optim-adam, train-augment, arch-lstm, arch-att, arch-selfatt, arch-coverage, arch-subword, pre-elmo, pre-bert, adv-examp, task-spanlab, task-seq2seq, task-alignment 1 Improving the Robustness of Question Answering Systems to Question Paraphrasing Wee Chung Gan, Hwee Tou Ng https://www.aclweb.org/anthology/P19-1610.pdf
2019 ACL # optim-adam, optim-projection, reg-dropout, arch-lstm, arch-bilstm, arch-att, pre-glove, pre-elmo, pre-bert, struct-crf, task-textpair, task-seqlab, task-spanlab, task-seq2seq 2 Augmenting Neural Networks with First-order Logic Tao Li, Vivek Srikumar https://www.aclweb.org/anthology/P19-1028.pdf
2019 ACL # task-spanlab 0 Aiming beyond the Obvious: Identifying Non-Obvious Cases in Semantic Similarity Datasets Nicole Peinelt, Maria Liakata, Dong Nguyen https://www.aclweb.org/anthology/P19-1268.pdf
2019 ACL # optim-adam, train-mll, train-transfer, arch-gru, arch-att, arch-memo, arch-transformer, pre-glove, pre-bert, task-spanlab, task-lm, task-seq2seq, task-alignment 3 XQA: A Cross-lingual Open-domain Question Answering Dataset Jiahua Liu, Yankai Lin, Zhiyuan Liu, Maosong Sun https://www.aclweb.org/anthology/P19-1227.pdf
2019 ACL # optim-adam, arch-lstm, arch-cnn, arch-att, arch-selfatt, arch-copy, arch-bilinear, pre-bert, nondif-minrisk, nondif-reinforce, loss-nce, task-spanlab, task-lm, task-relation 7 Entity-Relation Extraction as Multi-Turn Question Answering Xiaoya Li, Fan Yin, Zijun Sun, Xiayu Li, Arianna Yuan, Duo Chai, Mingxin Zhou, Jiwei Li https://www.aclweb.org/anthology/P19-1129.pdf
2019 ACL # optim-sgd, optim-projection, reg-dropout, reg-stopping, reg-labelsmooth, train-mll, arch-rnn, arch-lstm, arch-att, arch-selfatt, pre-bert, task-textclass, task-spanlab, task-lm, task-tree 5 Training Neural Response Selection for Task-Oriented Dialogue Systems Matthew Henderson, Ivan Vulić, Daniela Gerz, Iñigo Casanueva, Paweł Budzianowski, Sam Coope, Georgios Spithourakis, Tsung-Hsien Wen, Nikola Mrkšić, Pei-Hao Su https://www.aclweb.org/anthology/P19-1536.pdf
2019 ACL # optim-adam, reg-dropout, arch-lstm, arch-bilstm, arch-gru, arch-att, arch-selfatt, arch-memo, pre-glove, pre-bert, task-spanlab, task-seq2seq 4 Conversing by Reading: Contentful Neural Conversation with On-demand Machine Reading Lianhui Qin, Michel Galley, Chris Brockett, Xiaodong Liu, Xiang Gao, Bill Dolan, Yejin Choi, Jianfeng Gao https://www.aclweb.org/anthology/P19-1539.pdf
2019 ACL # optim-adam, reg-dropout, reg-worddropout, reg-stopping, arch-att, arch-selfatt, arch-subword, arch-transformer, comb-ensemble, pre-fasttext, pre-bert, latent-vae, task-seqlab, task-spanlab, task-lm, task-seq2seq, task-tree 2 Unsupervised Question Answering by Cloze Translation Patrick Lewis, Ludovic Denoyer, Sebastian Riedel https://www.aclweb.org/anthology/P19-1484.pdf
2019 ACL # optim-adam, reg-dropout, norm-gradient, arch-lstm, arch-cnn, arch-att, arch-bilinear, arch-coverage, arch-subword, search-beam, pre-elmo, task-spanlab, task-lm, task-seq2seq, task-relation 0 Generating Question-Answer Hierarchies Kalpesh Krishna, Mohit Iyyer https://www.aclweb.org/anthology/P19-1224.pdf
2019 ACL # optim-adam, train-mtl, train-transfer, arch-rnn, arch-lstm, arch-att, arch-selfatt, arch-residual, arch-gating, arch-transformer, comb-ensemble, search-beam, pre-glove, pre-elmo, pre-bert, task-spanlab, task-lm, task-seq2seq 5 Multi-style Generative Reading Comprehension Kyosuke Nishida, Itsumi Saito, Kosuke Nishida, Kazutoshi Shinoda, Atsushi Otsuka, Hisako Asano, Junji Tomita https://www.aclweb.org/anthology/P19-1220.pdf
2019 ACL # optim-adam, optim-adadelta, reg-dropout, reg-labelsmooth, norm-layer, train-parallel, arch-rnn, arch-lstm, arch-gru, arch-att, arch-selfatt, arch-residual, arch-subword, pre-glove, pre-bert, struct-crf, task-seqlab, task-spanlab, task-lm, task-seq2seq 1 A Lightweight Recurrent Network for Sequence Modeling Biao Zhang, Rico Sennrich https://www.aclweb.org/anthology/P19-1149.pdf
2019 ACL # optim-adam, train-mll, arch-att, task-textpair, task-spanlab, task-seq2seq 0 HEAD-QA: A Healthcare Dataset for Complex Reasoning David Vilares, Carlos Gómez-Rodríguez https://www.aclweb.org/anthology/P19-1092.pdf
2019 ACL # optim-adam, optim-projection, pool-max, arch-rnn, arch-lstm, arch-bilstm, arch-att, arch-selfatt, arch-gating, arch-memo, pre-glove, adv-examp, adv-train, task-spanlab 2 Avoiding Reasoning Shortcuts: Adversarial Evaluation, Training, and Model Development for Multi-Hop QA Yichen Jiang, Mohit Bansal https://www.aclweb.org/anthology/P19-1262.pdf
2019 ACL # optim-adam, reg-dropout, arch-rnn, arch-lstm, arch-gru, arch-cnn, arch-att, arch-selfatt, arch-gating, arch-memo, arch-bilinear, pre-glove, pre-elmo, pre-bert, task-spanlab 3 Explore, Propose, and Assemble: An Interpretable Model for Multi-Hop Reading Comprehension Yichen Jiang, Nitish Joshi, Yen-Chun Chen, Mohit Bansal https://www.aclweb.org/anthology/P19-1261.pdf
2019 ACL # arch-coverage, pre-bert, task-textclass, task-textpair, task-spanlab, task-lm 1 Human vs. Muppet: A Conservative Estimate of Human Performance on the GLUE Benchmark Nikita Nangia, Samuel R. Bowman https://www.aclweb.org/anthology/P19-1449.pdf
2019 ACL # optim-sgd, optim-adam, reg-dropout, arch-rnn, arch-lstm, arch-bilstm, arch-att, arch-memo, arch-copy, search-beam, pre-glove, task-spanlab, task-seq2seq, task-tree 0 Reinforced Dynamic Reasoning for Conversational Question Generation Boyuan Pan, Hao Li, Ziyu Yao, Deng Cai, Huan Sun https://www.aclweb.org/anthology/P19-1203.pdf
2019 ACL # train-mtl, arch-lstm, arch-att, arch-coverage, pre-glove, task-textpair, task-spanlab 0 Delta Embedding Learning Xiao Zhang, Ji Wu, Dejing Dou https://www.aclweb.org/anthology/P19-1322.pdf
2019 ACL # optim-adam, arch-rnn, arch-att, arch-memo, arch-coverage, pre-bert, task-spanlab, task-lm, task-seq2seq, task-relation 0 TWEETQA: A Social Media Focused Question Answering Dataset Wenhan Xiong, Jiawei Wu, Hong Wang, Vivek Kulkarni, Mo Yu, Shiyu Chang, Xiaoxiao Guo, William Yang Wang https://www.aclweb.org/anthology/P19-1496.pdf
2019 ACL # optim-adam, arch-rnn, arch-gru, arch-att, arch-selfatt, arch-memo, arch-transformer, pre-glove, pre-bert, nondif-reinforce, task-spanlab 1 Episodic Memory Reader: Learning What to Remember for Question Answering from Streaming Data Moonsu Han, Minki Kang, Hyunwoo Jung, Sung Ju Hwang https://www.aclweb.org/anthology/P19-1434.pdf
2019 ACL # init-glorot, arch-lstm, arch-att, arch-selfatt, arch-bilinear, pre-word2vec, pre-glove, pre-elmo, task-textclass, task-seqlab, task-spanlab, task-lm, task-seq2seq, task-relation 5 Incorporating Syntactic and Semantic Information in Word Embeddings using Graph Convolutional Networks Shikhar Vashishth, Manik Bhandari, Prateek Yadav, Piyush Rai, Chiranjib Bhattacharyya, Partha Talukdar https://www.aclweb.org/anthology/P19-1320.pdf
2019 ACL # optim-adam, optim-projection, reg-dropout, train-augment, arch-rnn, arch-lstm, arch-att, arch-selfatt, arch-coverage, search-beam, pre-glove, pre-bert, latent-vae, task-textclass, task-textpair, task-spanlab, task-lm, task-seq2seq 0 A Cross-Sentence Latent Variable Model for Semi-Supervised Text Sequence Matching Jihun Choi, Taeuk Kim, Sang-goo Lee https://www.aclweb.org/anthology/P19-1469.pdf
2019 ACL # optim-sgd, optim-adam, reg-dropout, train-mtl, train-transfer, arch-lstm, arch-bilstm, arch-gru, arch-att, arch-selfatt, arch-transformer, pre-elmo, pre-bert, task-textclass, task-textpair, task-spanlab, task-lm, task-cloze 116 Multi-Task Deep Neural Networks for Natural Language Understanding Xiaodong Liu, Pengcheng He, Weizhu Chen, Jianfeng Gao https://www.aclweb.org/anthology/P19-1441.pdf
2019 ACL # optim-adam, optim-adadelta, reg-dropout, pool-max, arch-rnn, arch-gru, arch-bigru, arch-att, arch-selfatt, arch-memo, search-beam, pre-glove, pre-elmo, pre-bert, task-textpair, task-spanlab, task-seq2seq 4 Multi-Hop Paragraph Retrieval for Open-Domain Question Answering Yair Feldman, Ran El-Yaniv https://www.aclweb.org/anthology/P19-1222.pdf
2019 ACL # reg-stopping, train-transfer, arch-rnn, arch-att, pre-glove, pre-bert, task-spanlab, task-lm, task-tree 12 MultiQA: An Empirical Investigation of Generalization and Transfer in Reading Comprehension Alon Talmor, Jonathan Berant https://www.aclweb.org/anthology/P19-1485.pdf
2019 ACL # optim-sgd, optim-adam, optim-projection, reg-dropout, norm-layer, pool-max, arch-rnn, arch-lstm, arch-bilstm, arch-gru, arch-att, arch-selfatt, arch-gating, arch-transformer, comb-ensemble, pre-fasttext, task-spanlab, task-seq2seq 1 Token-level Dynamic Self-Attention Network for Multi-Passage Reading Comprehension Yimeng Zhuang, Huadong Wang https://www.aclweb.org/anthology/P19-1218.pdf
2019 ACL # optim-adam, reg-dropout, norm-gradient, arch-lstm, arch-bilstm, arch-gru, arch-att, arch-memo, search-beam, pre-glove, pre-elmo, task-spanlab 2 Exploiting Explicit Paths for Multi-hop Reading Comprehension Souvik Kundu, Tushar Khot, Ashish Sabharwal, Peter Clark https://www.aclweb.org/anthology/P19-1263.pdf
2019 ACL # train-transfer, arch-rnn, arch-lstm, arch-cnn, arch-att, arch-transformer, pre-bert, task-textclass, task-spanlab, task-lm, task-seq2seq, task-cloze 2 Exploring Pre-trained Language Models for Event Extraction and Generation Sen Yang, Dawei Feng, Linbo Qiao, Zhigang Kan, Dongsheng Li https://www.aclweb.org/anthology/P19-1522.pdf
2019 ACL # optim-sgd, reg-dropout, pool-max, pool-mean, arch-lstm, arch-bilstm, arch-cnn, arch-att, pre-bert, struct-crf, task-spanlab, task-lm 0 Eliciting Knowledge from Experts: Automatic Transcript Parsing for Cognitive Task Analysis Junyi Du, He Jiang, Jiaming Shen, Xiang Ren https://www.aclweb.org/anthology/P19-1420.pdf
2019 ACL # optim-adam, optim-projection, reg-dropout, pool-max, arch-lstm, arch-gnn, arch-cnn, arch-att, arch-memo, pre-bert, struct-crf, task-seqlab, task-spanlab 6 Dynamically Fused Graph Network for Multi-hop Reasoning Lin Qiu, Yunxuan Xiao, Yanru Qu, Hao Zhou, Lei Li, Weinan Zhang, Yong Yu https://www.aclweb.org/anthology/P19-1617.pdf
2019 ACL # optim-adam, pool-max, arch-rnn, arch-lstm, arch-gru, arch-cnn, arch-att, arch-selfatt, arch-memo, adv-train, task-spanlab, task-lm, task-seq2seq 0 Asking the Crowd: Question Analysis, Evaluation and Generation for Open Discussion on Online Forums Zi Chai, Xinyu Xing, Xiaojun Wan, Bo Huang https://www.aclweb.org/anthology/P19-1497.pdf
2019 ACL # optim-adam, arch-rnn, arch-lstm, arch-att, arch-selfatt, arch-copy, task-spanlab, task-lm, task-seq2seq 1 Interconnected Question Generation with Coreference Alignment and Conversation Flow Modeling Yifan Gao, Piji Li, Irwin King, Michael R. Lyu https://www.aclweb.org/anthology/P19-1480.pdf
2019 ACL # train-augment, train-parallel, arch-rnn, arch-att, arch-selfatt, arch-copy, arch-transformer, pre-glove, pre-bert, task-seqlab, task-spanlab, task-lm, task-seq2seq, task-relation 0 Self-Attention Architectures for Answer-Agnostic Neural Question Generation Thomas Scialom, Benjamin Piwowarski, Jacopo Staiano https://www.aclweb.org/anthology/P19-1604.pdf
2019 EMNLP # optim-adam, arch-lstm, arch-bilstm, arch-att, arch-selfatt, pre-elmo, pre-bert, adv-examp, task-textclass, task-spanlab, task-lm, task-seq2seq, task-cloze 0 AllenNLP Interpret: A Framework for Explaining Predictions of NLP Models Eric Wallace, Jens Tuyls, Junlin Wang, Sanjay Subramanian, Matt Gardner, Sameer Singh https://www.aclweb.org/anthology/D19-3002.pdf
2019 EMNLP # optim-adam, train-mll, train-transfer, arch-att, arch-selfatt, comb-ensemble, pre-bert, task-spanlab 6 A Span-Extraction Dataset for Chinese Machine Reading Comprehension Yiming Cui, Ting Liu, Wanxiang Che, Li Xiao, Zhipeng Chen, Wentao Ma, Shijin Wang, Guoping Hu https://www.aclweb.org/anthology/D19-1600.pdf
2019 EMNLP # arch-gating, task-textpair, task-spanlab 0 Redcoat: A Collaborative Annotation Tool for Hierarchical Entity Typing Michael Stewart, Wei Liu, Rachel Cardell-Oliver https://www.aclweb.org/anthology/D19-3033.pdf
2019 EMNLP # optim-adam, arch-att, arch-selfatt, comb-ensemble, pre-bert, adv-examp, task-spanlab 0 QAInfomax: Learning Robust Question Answering System by Mutual Information Maximization Yi-Ting Yeh, Yun-Nung Chen https://www.aclweb.org/anthology/D19-1333.pdf
2019 EMNLP # optim-adam, reg-dropout, train-mtl, arch-rnn, arch-lstm, arch-gru, arch-att, search-beam, task-spanlab, task-lm, task-seq2seq 0 Multi-task Learning for Natural Language Generation in Task-Oriented Dialogue Chenguang Zhu, Michael Zeng, Xuedong Huang https://www.aclweb.org/anthology/D19-1123.pdf
2019 EMNLP # optim-adam, optim-projection, train-transfer, arch-rnn, arch-lstm, arch-gru, arch-att, arch-selfatt, pre-glove, pre-elmo, pre-bert, latent-vae, task-spanlab 0 Answering questions by learning to rank - Learning to rank by answering questions George Sebastian Pirtoaca, Traian Rebedea, Stefan Ruseti https://www.aclweb.org/anthology/D19-1256.pdf
2019 EMNLP # reg-dropout, pool-max, arch-lstm, arch-att, arch-transformer, comb-ensemble, pre-elmo, pre-bert, task-textpair, task-spanlab, task-lm, task-seq2seq 0 Aggregating Bidirectional Encoder Representations Using MatchLSTM for Sequence Matching Bo Shao, Yeyun Gong, Weizhen Qi, Nan Duan, Xiaola Lin https://www.aclweb.org/anthology/D19-1626.pdf
2019 EMNLP # optim-adam, arch-rnn, arch-lstm, arch-cnn, arch-att, arch-subword, search-beam, pre-bert, task-spanlab, task-lm, task-seq2seq 0 Restoring ancient text using deep learning: a case study on Greek epigraphy Yannis Assael, Thea Sommerschield, Jonathan Prag https://www.aclweb.org/anthology/D19-1668.pdf
2019 EMNLP # init-glorot, reg-dropout, reg-stopping, norm-gradient, train-transfer, train-augment, train-parallel, arch-rnn, arch-att, arch-selfatt, arch-copy, arch-subword, arch-transformer, comb-ensemble, search-beam, pre-elmo, pre-bert, task-textclass, task-textpair, task-spanlab, task-lm, task-seq2seq 2 Denoising based Sequence-to-Sequence Pre-training for Text Generation Liang Wang, Wei Zhao, Ruoyu Jia, Sujian Li, Jingming Liu https://www.aclweb.org/anthology/D19-1412.pdf
2019 EMNLP # optim-adam, reg-dropout, arch-lstm, arch-att, arch-selfatt, arch-gating, arch-coverage, pre-fasttext, pre-bert, task-spanlab, task-lm 1 Interactive Language Learning by Question Answering Xingdi Yuan, Marc-Alexandre Côté, Jie Fu, Zhouhan Lin, Chris Pal, Yoshua Bengio, Adam Trischler https://www.aclweb.org/anthology/D19-1280.pdf
2019 EMNLP # optim-adam, reg-dropout, pool-max, arch-lstm, arch-att, arch-selfatt, pre-glove, task-spanlab 0 Ranking and Sampling in Open-Domain Question Answering Yanfu Xu, Zheng Lin, Yuanxin Liu, Rui Liu, Weiping Wang, Dan Meng https://www.aclweb.org/anthology/D19-1245.pdf
2019 EMNLP # optim-adam, train-mll, train-augment, pool-max, arch-lstm, arch-bilstm, arch-cnn, arch-att, arch-selfatt, pre-word2vec, pre-glove, pre-elmo, pre-bert, task-textpair, task-spanlab, task-lm, task-tree 0 Do NLP Models Know Numbers? Probing Numeracy in Embeddings Eric Wallace, Yizhong Wang, Sujian Li, Sameer Singh, Matt Gardner https://www.aclweb.org/anthology/D19-1534.pdf
2019 EMNLP # optim-adam, reg-decay, norm-layer, train-mll, pool-mean, arch-att, arch-selfatt, arch-transformer, comb-ensemble, pre-bert, task-spanlab, task-lm, task-seq2seq 0 Cross-Lingual Machine Reading Comprehension Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Shijin Wang, Guoping Hu https://www.aclweb.org/anthology/D19-1169.pdf
2019 EMNLP # reg-stopping, arch-lstm, arch-bilstm, arch-att, arch-coverage, pre-glove, pre-elmo, pre-bert, task-textclass, task-textpair, task-spanlab, task-lm, meta-arch 1 Show Your Work: Improved Reporting of Experimental Results Jesse Dodge, Suchin Gururangan, Dallas Card, Roy Schwartz, Noah A. Smith https://www.aclweb.org/anthology/D19-1224.pdf
2019 EMNLP # optim-adam, optim-projection, train-mll, train-transfer, pool-mean, arch-att, arch-transformer, pre-fasttext, pre-bert, loss-cca, task-spanlab, task-lm, task-seq2seq, task-lexicon 0 Zero-shot Reading Comprehension by Cross-lingual Transfer Learning with Multi-lingual Language Representation Model Tsung-Yuan Hsu, Chi-Liang Liu, Hung-yi Lee https://www.aclweb.org/anthology/D19-1607.pdf
2019 EMNLP # optim-adam, arch-transformer, pre-elmo, adv-examp, task-textclass, task-textpair, task-spanlab 0 Evaluating adversarial attacks against multiple fact verification systems James Thorne, Andreas Vlachos, Christos Christodoulopoulos, Arpit Mittal https://www.aclweb.org/anthology/D19-1292.pdf
2019 EMNLP # optim-adam, init-glorot, reg-dropout, reg-decay, train-mll, train-transfer, pool-max, arch-lstm, arch-att, arch-selfatt, arch-bilinear, arch-subword, arch-transformer, pre-elmo, pre-bert, struct-crf, task-textclass, task-textpair, task-seqlab, task-spanlab, task-lm, task-seq2seq, task-cloze, task-relation 22 Beto, Bentz, Becas: The Surprising Cross-Lingual Effectiveness of BERT Shijie Wu, Mark Dredze https://www.aclweb.org/anthology/D19-1077.pdf
2019 EMNLP # optim-adam, reg-decay, train-augment, arch-att, comb-ensemble, pre-bert, task-textclass, task-textpair, task-spanlab, task-lm, task-relation 0 CFO: A Framework for Building Production NLP Systems Rishav Chakravarti, Cezar Pendus, Andrzej Sakrajda, Anthony Ferritto, Lin Pan, Michael Glass, Vittorio Castelli, J William Murdock, Radu Florian, Salim Roukos, Avi Sil https://www.aclweb.org/anthology/D19-3006.pdf
2019 EMNLP # reg-dropout, arch-lstm, arch-bilstm, arch-att, arch-coverage, pre-glove, pre-elmo, pre-bert, latent-vae, task-textpair, task-spanlab, task-seq2seq, task-tree, task-graph 0 Don’t paraphrase, detect! Rapid and Effective Data Collection for Semantic Parsing Jonathan Herzig, Jonathan Berant https://www.aclweb.org/anthology/D19-1394.pdf
2019 EMNLP # arch-rnn, arch-lstm, arch-bilstm, arch-att, arch-copy, arch-coverage, search-beam, pre-glove, latent-vae, task-spanlab, task-seq2seq 1 Incorporating External Knowledge into Machine Reading for Generative Question Answering Bin Bi, Chen Wu, Ming Yan, Wei Wang, Jiangnan Xia, Chenliang Li https://www.aclweb.org/anthology/D19-1255.pdf
2019 EMNLP # arch-lstm, arch-att, pre-glove, pre-elmo, pre-bert, task-textpair, task-spanlab, task-lm, task-tree 1 “Going on a vacation” takes longer than “Going for a walk”: A Study of Temporal Commonsense Understanding Ben Zhou, Daniel Khashabi, Qiang Ning, Dan Roth https://www.aclweb.org/anthology/D19-1332.pdf
2019 EMNLP # optim-adam, train-mtl, arch-rnn, arch-att, arch-selfatt, arch-memo, arch-copy, arch-transformer, comb-ensemble, pre-elmo, pre-bert, task-spanlab, task-lm, task-seq2seq, task-relation, task-tree 0 Discourse-Aware Semantic Self-Attention for Narrative Reading Comprehension Todor Mihaylov, Anette Frank https://www.aclweb.org/anthology/D19-1257.pdf
2019 EMNLP # optim-adam, optim-adadelta, arch-lstm, arch-bilstm, arch-att, arch-bilinear, search-beam, pre-glove, task-spanlab, task-lm 0 Everything Happens for a Reason: Discovering the Purpose of Actions in Procedural Text Bhavana Dalvi, Niket Tandon, Antoine Bosselut, Wen-tau Yih, Peter Clark https://www.aclweb.org/anthology/D19-1457.pdf
2019 EMNLP # optim-adam, optim-projection, arch-cnn, arch-att, arch-coverage, arch-transformer, pre-glove, pre-skipthought, pre-bert, task-spanlab, task-lm, task-relation, task-tree 0 Linking artificial and human neural representations of language Jon Gauthier, Roger Levy https://www.aclweb.org/anthology/D19-1050.pdf
2019 EMNLP # arch-att, arch-coverage, pre-bert, task-spanlab, task-tree 1 QuaRTz: An Open-Domain Dataset of Qualitative Relationship Questions Oyvind Tafjord, Matt Gardner, Kevin Lin, Peter Clark https://www.aclweb.org/anthology/D19-1608.pdf
2019 EMNLP # arch-rnn, arch-lstm, arch-cnn, arch-att, arch-copy, arch-coverage, pre-glove, pre-bert, task-textpair, task-spanlab, task-seq2seq 1 Better Rewards Yield Better Summaries: Learning to Summarise Without References Florian Böhm, Yang Gao, Christian M. Meyer, Ori Shapira, Ido Dagan, Iryna Gurevych https://www.aclweb.org/anthology/D19-1307.pdf
2019 EMNLP # optim-adam, train-mtl, arch-lstm, arch-bilstm, arch-att, arch-selfatt, task-spanlab, task-lm, task-seq2seq 0 Multi-Task Learning with Language Modeling for Question Generation Wenjie Zhou, Minghua Zhang, Yunfang Wu https://www.aclweb.org/anthology/D19-1337.pdf
2019 EMNLP # optim-adam, init-glorot, reg-dropout, train-mtl, train-mll, train-transfer, train-augment, arch-lstm, arch-bilstm, arch-gru, arch-att, arch-selfatt, arch-copy, pre-glove, pre-elmo, pre-bert, adv-train, task-spanlab, task-lm, task-seq2seq 2 Adversarial Domain Adaptation for Machine Reading Comprehension Huazheng Wang, Zhe Gan, Xiaodong Liu, Jingjing Liu, Jianfeng Gao, Hongning Wang https://www.aclweb.org/anthology/D19-1254.pdf
2019 EMNLP # optim-adam, reg-dropout, reg-patience, arch-rnn, arch-att, arch-selfatt, pre-bert, task-spanlab, task-lm 2 Answering Complex Open-domain Questions Through Iterative Query Generation Peng Qi, Xiaowen Lin, Leo Mehr, Zijian Wang, Christopher D. Manning https://www.aclweb.org/anthology/D19-1261.pdf
2019 EMNLP # optim-adam, optim-adagrad, norm-layer, pool-mean, arch-rnn, arch-lstm, arch-bilstm, arch-cnn, arch-att, arch-selfatt, arch-transformer, search-beam, pre-glove, pre-paravec, adv-train, latent-topic, task-textpair, task-extractive, task-spanlab, task-lm, task-condlm, task-seq2seq 0 Topic-Guided Coherence Modeling for Sentence Ordering by Preserving Global and Local Information Byungkook Oh, Seungmin Seo, Cheolheon Shin, Eunju Jo, Kyong-Ho Lee https://www.aclweb.org/anthology/D19-1232.pdf
2019 EMNLP # arch-att, arch-coverage, pre-bert, task-spanlab, task-lm 0 A Summarization System for Scientific Documents Shai Erera, Michal Shmueli-Scheuer, Guy Feigenblat, Ora Peled Nakash, Odellia Boni, Haggai Roitman, Doron Cohen, Bar Weiner, Yosi Mass, Or Rivlin, Guy Lev, Achiya Jerbi, Jonathan Herzig, Yufang Hou, Charles Jochim, Martin Gleize, Francesca Bonin, David Konopnicki https://www.aclweb.org/anthology/D19-3036.pdf
2019 EMNLP # optim-adam, optim-projection, train-mll, arch-rnn, arch-lstm, arch-bilstm, arch-att, arch-selfatt, pre-fasttext, pre-elmo, pre-bert, struct-crf, task-spanlab, task-lm, task-seq2seq, task-cloze 0 Learning with Limited Data for Multilingual Reading Comprehension Kyungjae Lee, Sunghyun Park, Hojae Han, Jinyoung Yeo, Seung-won Hwang, Juho Lee https://www.aclweb.org/anthology/D19-1283.pdf
2019 EMNLP # arch-lstm, arch-bilstm, arch-att, arch-memo, task-textpair, task-spanlab, task-lm 0 Memory Grounded Conversational Reasoning Seungwhan Moon, Pararth Shah, Rajen Subba, Anuj Kumar https://www.aclweb.org/anthology/D19-3025.pdf
2019 EMNLP # optim-adam, optim-projection, train-mtl, train-mll, arch-lstm, arch-cnn, arch-att, arch-selfatt, arch-subword, arch-transformer, pre-glove, pre-elmo, pre-bert, latent-vae, task-spanlab, task-lm, task-seq2seq 2 Knowledge Enhanced Contextual Word Representations Matthew E. Peters, Mark Neumann, Robert Logan, Roy Schwartz, Vidur Joshi, Sameer Singh, Noah A. Smith https://www.aclweb.org/anthology/D19-1005.pdf
2019 EMNLP # optim-adam, optim-adadelta, optim-projection, reg-dropout, train-augment, pool-max, arch-rnn, arch-lstm, arch-gru, arch-att, arch-selfatt, arch-transformer, search-greedy, search-beam, pre-elmo, pre-bert, nondif-reinforce, task-textpair, task-seqlab, task-spanlab, task-lm, task-condlm, task-seq2seq 0 Addressing Semantic Drift in Question Generation for Semi-Supervised Question Answering Shiyue Zhang, Mohit Bansal https://www.aclweb.org/anthology/D19-1253.pdf
2019 EMNLP # optim-adam, arch-cnn, arch-att, pre-fasttext, pre-bert, task-spanlab, task-seq2seq 0 Finding Generalizable Evidence by Learning to Convince Q&A Models Ethan Perez, Siddharth Karamcheti, Rob Fergus, Jason Weston, Douwe Kiela, Kyunghyun Cho https://www.aclweb.org/anthology/D19-1244.pdf
2019 EMNLP # optim-sgd, optim-adam, init-glorot, reg-dropout, arch-rnn, arch-lstm, arch-att, arch-selfatt, arch-copy, search-beam, pre-glove, task-spanlab, task-seq2seq 0 Improving Question Generation With to the Point Context Jingjing Li, Yifan Gao, Lidong Bing, Irwin King, Michael R. Lyu https://www.aclweb.org/anthology/D19-1317.pdf
2019 EMNLP # arch-lstm, arch-bilstm, arch-att, comb-ensemble, pre-word2vec, pre-elmo, pre-bert, task-textpair, task-spanlab, task-lm 0 PubMedQA: A Dataset for Biomedical Research Question Answering Qiao Jin, Bhuwan Dhingra, Zhengping Liu, William Cohen, Xinghua Lu https://www.aclweb.org/anthology/D19-1259.pdf
2019 EMNLP # optim-adam, train-transfer, arch-transformer, pre-bert, task-spanlab, task-lm 11 Social IQa: Commonsense Reasoning about Social Interactions Maarten Sap, Hannah Rashkin, Derek Chen, Ronan Le Bras, Yejin Choi https://www.aclweb.org/anthology/D19-1454.pdf
2019 EMNLP # optim-adam, pre-glove, pre-bert, task-textpair, task-spanlab 3 Towards Debiasing Fact Verification Models Tal Schuster, Darsh Shah, Yun Jie Serene Yeo, Daniel Roberto Filizzola Ortiz, Enrico Santus, Regina Barzilay https://www.aclweb.org/anthology/D19-1341.pdf
2019 EMNLP # optim-adam, arch-att, arch-copy, search-beam, task-spanlab, task-seq2seq 0 Generating Highly Relevant Questions Jiazuo Qiu, Deyi Xiong https://www.aclweb.org/anthology/D19-1614.pdf
2019 EMNLP # optim-adam, norm-layer, arch-lstm, arch-bilstm, arch-att, arch-selfatt, arch-transformer, pre-word2vec, pre-elmo, pre-bert, task-textclass, task-textpair, task-extractive, task-spanlab, task-lm, task-seq2seq, task-cloze 0 Fine-tune BERT with Sparse Self-Attention Mechanism Baiyun Cui, Yingming Li, Ming Chen, Zhongfei Zhang https://www.aclweb.org/anthology/D19-1361.pdf
2019 EMNLP # optim-adam, optim-projection, arch-lstm, arch-bilstm, arch-att, pre-bert, task-spanlab 0 Movie Plot Analysis via Turning Point Identification Pinelopi Papalampidi, Frank Keller, Mirella Lapata https://www.aclweb.org/anthology/D19-1180.pdf
2019 EMNLP # optim-adam, reg-patience, arch-lstm, arch-bilstm, arch-att, arch-selfatt, arch-transformer, pre-elmo, pre-bert, task-textpair, task-spanlab, task-lm 10 Language Models as Knowledge Bases? Fabio Petroni, Tim Rocktäschel, Sebastian Riedel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, Alexander Miller https://www.aclweb.org/anthology/D19-1250.pdf
2019 EMNLP # optim-projection, train-mtl, arch-cnn, arch-att, arch-coverage, arch-subword, arch-transformer, comb-ensemble, pre-word2vec, pre-fasttext, pre-glove, pre-skipthought, pre-elmo, pre-bert, task-textclass, task-textpair, task-spanlab, task-lm, task-cloze 9 Patient Knowledge Distillation for BERT Model Compression Siqi Sun, Yu Cheng, Zhe Gan, Jingjing Liu https://www.aclweb.org/anthology/D19-1441.pdf
2019 EMNLP # optim-adam, train-mtl, train-mll, train-parallel, arch-att, arch-selfatt, comb-ensemble, pre-bert, task-spanlab, task-seq2seq, task-alignment 0 BiPaR: A Bilingual Parallel Dataset for Multilingual and Cross-lingual Reading Comprehension on Novels Yimin Jing, Deyi Xiong, Zhen Yan https://www.aclweb.org/anthology/D19-1249.pdf
2019 EMNLP # optim-adagrad, arch-rnn, arch-lstm, arch-att, arch-copy, pre-bert, nondif-reinforce, task-extractive, task-spanlab, task-lm, task-condlm, task-seq2seq 1 Answers Unite! Unsupervised Metrics for Reinforced Summarization Models Thomas Scialom, Sylvain Lamprier, Benjamin Piwowarski, Jacopo Staiano https://www.aclweb.org/anthology/D19-1320.pdf
2019 EMNLP # optim-adam, arch-rnn, arch-lstm, arch-bilstm, arch-att, arch-coverage, pre-glove, pre-elmo, pre-bert, latent-vae, task-textclass, task-textpair, task-spanlab, task-lm 0 Quick and (not so) Dirty: Unsupervised Selection of Justification Sentences for Multi-hop Question Answering Vikas Yadav, Steven Bethard, Mihai Surdeanu https://www.aclweb.org/anthology/D19-1260.pdf
2019 EMNLP # optim-adam, train-augment, arch-rnn, arch-lstm, arch-bilstm, arch-att, arch-selfatt, arch-gating, pre-glove, adv-train, task-textpair, task-spanlab, meta-arch 3 Self-Assembling Modular Networks for Interpretable Multi-Hop Reasoning Yichen Jiang, Mohit Bansal https://www.aclweb.org/anthology/D19-1455.pdf
2019 EMNLP # optim-adam, reg-decay, norm-gradient, train-transfer, arch-rnn, arch-lstm, arch-bilstm, arch-cnn, arch-att, arch-coverage, nondif-reinforce, task-extractive, task-spanlab, task-condlm, task-seq2seq 0 Reading Like HER: Human Reading Inspired Extractive Summarization Ling Luo, Xiang Ao, Yan Song, Feiyang Pan, Min Yang, Qing He https://www.aclweb.org/anthology/D19-1300.pdf
2019 EMNLP # optim-sgd, optim-adam, arch-rnn, arch-gru, arch-cnn, arch-att, arch-selfatt, search-beam, task-spanlab, task-lm, task-seq2seq 1 Read, Attend and Comment: A Deep Architecture for Automatic News Comment Generation Ze Yang, Can Xu, Wei Wu, Zhoujun Li https://www.aclweb.org/anthology/D19-1512.pdf
2019 EMNLP # optim-adam, optim-projection, arch-rnn, arch-gru, arch-cnn, arch-att, arch-memo, arch-copy, task-spanlab, task-seq2seq 0 DyKgChat: Benchmarking Dialogue Generation Grounding on Dynamic Knowledge Graphs Yi-Lin Tuan, Yun-Nung Chen, Hung-yi Lee https://www.aclweb.org/anthology/D19-1194.pdf
2019 EMNLP # optim-adam, arch-att, arch-selfatt, comb-ensemble, pre-bert, task-textpair, task-spanlab 1 Giving BERT a Calculator: Finding Operations and Arguments with Reading Comprehension Daniel Andor, Luheng He, Kenton Lee, Emily Pitler https://www.aclweb.org/anthology/D19-1609.pdf
2019 EMNLP # optim-adam, reg-decay, norm-gradient, arch-gnn, arch-att, arch-selfatt, arch-transformer, pre-bert, task-spanlab, task-tree 0 NumNet: Machine Reading Comprehension with Numerical Reasoning Qiu Ran, Yankai Lin, Peng Li, Jie Zhou, Zhiyuan Liu https://www.aclweb.org/anthology/D19-1251.pdf
2019 EMNLP # optim-adam, optim-projection, arch-rnn, arch-lstm, arch-bilstm, arch-att, arch-selfatt, arch-copy, arch-coverage, search-beam, nondif-reinforce, task-spanlab, task-condlm, task-seq2seq 0 Let’s Ask Again: Refine Network for Automatic Question Generation Preksha Nema, Akash Kumar Mohankumar, Mitesh M. Khapra, Balaji Vasan Srinivasan, Balaraman Ravindran https://www.aclweb.org/anthology/D19-1326.pdf
2019 EMNLP # optim-adam, reg-dropout, train-mtl, train-transfer, pool-max, arch-rnn, arch-att, arch-selfatt, arch-coverage, arch-transformer, pre-skipthought, pre-elmo, pre-bert, task-textpair, task-spanlab, task-lm, task-seq2seq, task-cloze 0 Transfer Fine-Tuning: A BERT Case Study Yuki Arase, Jun’ichi Tsujii https://www.aclweb.org/anthology/D19-1542.pdf
2019 EMNLP # optim-sgd, reg-dropout, arch-rnn, arch-lstm, arch-bilstm, arch-att, arch-selfatt, arch-copy, pre-glove, pre-bert, task-spanlab, task-seq2seq 0 ParaQG: A System for Generating Questions and Answers from Paragraphs Vishwajeet Kumar, Sivaanandh Muneeswaran, Ganesh Ramakrishnan, Yuan-Fang Li https://www.aclweb.org/anthology/D19-3030.pdf
2019 EMNLP # arch-rnn, arch-lstm, arch-bilstm, arch-cnn, arch-att, arch-selfatt, arch-coverage, arch-transformer, pre-bert, task-textpair, task-spanlab, task-lm, task-seq2seq, task-cloze 4 Revealing the Dark Secrets of BERT Olga Kovaleva, Alexey Romanov, Anna Rogers, Anna Rumshisky https://www.aclweb.org/anthology/D19-1445.pdf
2019 EMNLP # optim-adam, arch-rnn, arch-lstm, arch-bilstm, arch-att, arch-coverage, pre-bert, adv-examp, task-textpair, task-spanlab, task-lm, task-seq2seq, task-tree 2 CLUTRR: A Diagnostic Benchmark for Inductive Reasoning from Text Koustuv Sinha, Shagun Sodhani, Jin Dong, Joelle Pineau, William L. Hamilton https://www.aclweb.org/anthology/D19-1458.pdf
2019 EMNLP # optim-adam, pool-max, arch-att, arch-bilinear, pre-glove, pre-bert, task-textclass, task-spanlab, task-lm, task-seq2seq 3 Cosmos QA: Machine Reading Comprehension with Contextual Commonsense Reasoning Lifu Huang, Ronan Le Bras, Chandra Bhagavatula, Yejin Choi https://www.aclweb.org/anthology/D19-1243.pdf
2019 EMNLP # optim-adam, reg-dropout, reg-stopping, pool-max, arch-lstm, arch-bilstm, arch-att, arch-gating, arch-transformer, comb-ensemble, pre-fasttext, pre-bert, task-seqlab, task-spanlab 3 Don’t Take the Easy Way Out: Ensemble Based Methods for Avoiding Known Dataset Biases Christopher Clark, Mark Yatskar, Luke Zettlemoyer https://www.aclweb.org/anthology/D19-1418.pdf
2019 EMNLP # optim-adam, train-mll, arch-lstm, arch-att, pre-bert, task-textpair, task-spanlab 0 GeoSQA: A Benchmark for Scenario-based Question Answering in the Geography Domain at High School Level Zixian Huang, Yulin Shen, Xiao Li, Yu’ang Wei, Gong Cheng, Lin Zhou, Xinyu Dai, Yuzhong Qu https://www.aclweb.org/anthology/D19-1597.pdf
2019 EMNLP # arch-att, task-textpair, task-spanlab, task-seq2seq 0 On the Importance of Delexicalization for Fact Verification Sandeep Suntwal, Mithun Paul, Rebecca Sharp, Mihai Surdeanu https://www.aclweb.org/anthology/D19-1340.pdf
2019 EMNLP # reg-dropout, reg-stopping, reg-patience, train-mtl, pool-max, arch-rnn, arch-lstm, arch-bilstm, arch-cnn, arch-att, arch-transformer, comb-ensemble, pre-bert, task-textclass, task-textpair, task-spanlab 0 MultiFC: A Real-World Multi-Domain Dataset for Evidence-Based Fact Checking of Claims Isabelle Augenstein, Christina Lioma, Dongsheng Wang, Lucas Chaves Lima, Casper Hansen, Christian Hansen, Jakob Grue Simonsen https://www.aclweb.org/anthology/D19-1475.pdf
2019 EMNLP # arch-lstm, arch-att, arch-copy, pre-glove, pre-bert, task-spanlab, task-seq2seq 1 Can You Unpack That? Learning to Rewrite Questions-in-Context Ahmed Elgohary, Denis Peskov, Jordan Boyd-Graber https://www.aclweb.org/anthology/D19-1605.pdf
2019 EMNLP # optim-adam, reg-labelsmooth, arch-lstm, arch-att, arch-selfatt, arch-transformer, latent-vae, task-spanlab, task-seq2seq, task-alignment 1 Hint-Based Training for Non-Autoregressive Machine Translation Zhuohan Li, Zi Lin, Di He, Fei Tian, Tao Qin, Liwei Wang, Tie-Yan Liu https://www.aclweb.org/anthology/D19-1573.pdf
2019 EMNLP # optim-adam, arch-rnn, arch-lstm, arch-att, arch-selfatt, arch-transformer, pre-elmo, pre-bert, task-spanlab 0 Machine Reading Comprehension Using Structural Knowledge Graph-aware Network Delai Qiu, Yuanzhe Zhang, Xinwei Feng, Xiangwen Liao, Wenbin Jiang, Yajuan Lyu, Kang Liu, Jun Zhao https://www.aclweb.org/anthology/D19-1602.pdf
2019 EMNLP # optim-adam, train-mll, arch-rnn, arch-att, arch-selfatt, arch-coverage, arch-subword, arch-transformer, search-beam, task-spanlab, task-lm, task-seq2seq, task-cloze 1 Using Local Knowledge Graph Construction to Scale Seq2Seq Models to Multi-Document Inputs Angela Fan, Claire Gardent, Chloé Braud, Antoine Bordes https://www.aclweb.org/anthology/D19-1428.pdf
2019 EMNLP # optim-adam, optim-projection, init-glorot, reg-dropout, arch-att, arch-selfatt, arch-memo, arch-coverage, arch-transformer, search-beam, pre-elmo, pre-bert, task-spanlab, task-seq2seq, task-relation, task-tree 1 A Multi-Type Multi-Span Network for Reading Comprehension that Requires Discrete Reasoning Minghao Hu, Yuxing Peng, Zhen Huang, Dongsheng Li https://www.aclweb.org/anthology/D19-1170.pdf
2019 EMNLP # arch-att, comb-ensemble, pre-bert, task-textpair, task-spanlab 3 Applying BERT to Document Retrieval with Birch Zeynep Akkalyoncu Yilmaz, Shengjin Wang, Wei Yang, Haotian Zhang, Jimmy Lin https://www.aclweb.org/anthology/D19-3004.pdf
2019 EMNLP # arch-coverage, pre-bert, loss-nce, task-textpair, task-spanlab 5 Are We Modeling the Task or the Annotator? An Investigation of Annotator Bias in Natural Language Understanding Datasets Mor Geva, Yoav Goldberg, Jonathan Berant https://www.aclweb.org/anthology/D19-1107.pdf
2019 EMNLP # optim-adam, arch-att, arch-selfatt, arch-energy, pre-bert, task-spanlab 4 Multi-passage BERT: A Globally Normalized BERT Model for Open-domain Question Answering Zhiguo Wang, Patrick Ng, Xiaofei Ma, Ramesh Nallapati, Bing Xiang https://www.aclweb.org/anthology/D19-1599.pdf
2019 EMNLP # optim-sgd, optim-adam, reg-stopping, train-mll, arch-lstm, arch-att, arch-selfatt, arch-transformer, pre-elmo, task-spanlab, task-lm, task-seq2seq 1 Retrofitting Contextualized Word Embeddings with Paraphrases Weijia Shi, Muhao Chen, Pei Zhou, Kai-Wei Chang https://www.aclweb.org/anthology/D19-1113.pdf
2019 EMNLP # optim-adam, arch-lstm, arch-att, arch-selfatt, task-spanlab, task-seq2seq 0 Question-type Driven Question Generation Wenjie Zhou, Minghua Zhang, Yunfang Wu https://www.aclweb.org/anthology/D19-1622.pdf
2019 EMNLP # optim-adam, reg-dropout, reg-stopping, train-mtl, arch-lstm, arch-bilstm, arch-att, arch-memo, pre-glove, pre-elmo, pre-bert, task-textpair, task-spanlab, task-tree 1 What’s Missing: A Knowledge Gap Guided Approach for Multi-hop Question Answering Tushar Khot, Ashish Sabharwal, Peter Clark https://www.aclweb.org/anthology/D19-1281.pdf
2019 EMNLP # optim-adam, optim-projection, arch-lstm, arch-bilstm, arch-att, arch-selfatt, arch-subword, comb-ensemble, search-beam, pre-word2vec, pre-glove, pre-elmo, adv-examp, task-textclass, task-textpair, task-spanlab, task-lm, task-seq2seq 8 Universal Adversarial Triggers for Attacking and Analyzing NLP Eric Wallace, Shi Feng, Nikhil Kandpal, Matt Gardner, Sameer Singh https://www.aclweb.org/anthology/D19-1221.pdf
2019 EMNLP # optim-adam, arch-att, arch-selfatt, arch-coverage, pre-bert, nondif-reinforce, latent-vae, task-spanlab, task-tree 2 A Discrete Hard EM Approach for Weakly Supervised Question Answering Sewon Min, Danqi Chen, Hannaneh Hajishirzi, Luke Zettlemoyer https://www.aclweb.org/anthology/D19-1284.pdf
2019 EMNLP # optim-adam, arch-cnn, arch-att, arch-coverage, pre-glove, pre-bert, latent-topic, task-spanlab 1 Question Answering for Privacy Policies: Combining Computational and Legal Perspectives Abhilasha Ravichander, Alan W Black, Shomir Wilson, Thomas Norton, Norman Sadeh https://www.aclweb.org/anthology/D19-1500.pdf
2019 EMNLP # optim-sgd, optim-adam, reg-dropout, train-mtl, pool-max, pool-mean, arch-rnn, arch-lstm, arch-gru, arch-bigru, arch-att, arch-coverage, pre-word2vec, latent-topic, task-spanlab, task-seq2seq 0 A Neural Citation Count Prediction Model based on Peer Review Text Siqing Li, Wayne Xin Zhao, Eddy Jing Yin, Ji-Rong Wen https://www.aclweb.org/anthology/D19-1497.pdf
2019 EMNLP # optim-adam, train-augment, arch-att, arch-selfatt, pre-bert, task-spanlab, task-lm 3 Quoref: A Reading Comprehension Dataset with Questions Requiring Coreferential Reasoning Pradeep Dasigi, Nelson F. Liu, Ana Marasović, Noah A. Smith, Matt Gardner https://www.aclweb.org/anthology/D19-1606.pdf
2019 EMNLP # arch-rnn, arch-lstm, arch-att, arch-memo, arch-copy, arch-bilinear, arch-coverage, task-textpair, task-spanlab, task-seq2seq, task-tree 0 Knowledge Aware Conversation Generation with Explainable Reasoning over Augmented Graphs Zhibin Liu, Zheng-Yu Niu, Hua Wu, Haifeng Wang https://www.aclweb.org/anthology/D19-1187.pdf
2019 EMNLP # arch-lstm, arch-bilstm, arch-cnn, pre-fasttext, pre-glove, pre-bert, adv-examp, adv-train, task-textclass, task-spanlab, task-seq2seq 3 Build it Break it Fix it for Dialogue Safety: Robustness from Adversarial Human Attack Emily Dinan, Samuel Humeau, Bharath Chintagunta, Jason Weston https://www.aclweb.org/anthology/D19-1461.pdf
2019 EMNLP # optim-adam, arch-rnn, arch-lstm, arch-bilstm, arch-gru, arch-bigru, arch-att, arch-copy, arch-coverage, search-beam, pre-glove, latent-vae, task-spanlab, task-lm, task-condlm, task-seq2seq 1 Mixture Content Selection for Diverse Sequence Generation Jaemin Cho, Minjoon Seo, Hannaneh Hajishirzi https://www.aclweb.org/anthology/D19-1308.pdf
2019 NAACL # optim-adam, reg-dropout, reg-decay, train-transfer, arch-rnn, arch-att, arch-selfatt, arch-subword, arch-transformer, comb-ensemble, search-beam, task-spanlab, task-seq2seq 0 Online Distilling from Checkpoints for Neural Machine Translation Hao-Ran Wei, Shujian Huang, Ran Wang, Xin-yu Dai, Jiajun Chen https://www.aclweb.org/anthology/N19-1192.pdf
2019 NAACL # optim-adam, arch-cnn, arch-att, arch-coverage, pre-elmo, pre-bert, task-textclass, task-spanlab, task-lm, task-cloze 19 BERT Post-Training for Review Reading Comprehension and Aspect-based Sentiment Analysis Hu Xu, Bing Liu, Lei Shu, Philip Yu https://www.aclweb.org/anthology/N19-1242.pdf
2019 NAACL # optim-adam, reg-stopping, pool-max, arch-rnn, arch-lstm, arch-cnn, arch-memo, pre-glove, loss-nce, task-spanlab, task-lm, task-seq2seq 0 Harry Potter and the Action Prediction Challenge from Natural Language David Vilares, Carlos Gómez-Rodríguez https://www.aclweb.org/anthology/N19-1218.pdf
2019 NAACL # arch-rnn, arch-lstm, arch-bilstm, arch-memo, task-spanlab, task-lm, task-tree 0 FreebaseQA: A New Factoid QA Data Set Matching Trivia-Style Question-Answer Pairs with Freebase Kelvin Jiang, Dekun Wu, Hui Jiang https://www.aclweb.org/anthology/N19-1028.pdf
2019 NAACL # optim-adam, arch-rnn, arch-lstm, arch-bilstm, arch-gru, arch-bigru, arch-att, pre-glove, task-spanlab, task-lm, task-seq2seq 0 The Lower The Simpler: Simplifying Hierarchical Recurrent Models Chao Wang, Hui Jiang https://www.aclweb.org/anthology/N19-1402.pdf
2019 NAACL # optim-sgd, optim-adam, arch-lstm, arch-att, arch-memo, adv-train, task-textclass, task-seqlab, task-spanlab, task-seq2seq 0 Document-Level Event Factuality Identification via Adversarial Neural Network Zhong Qian, Peifeng Li, Qiaoming Zhu, Guodong Zhou https://www.aclweb.org/anthology/N19-1287.pdf
2019 NAACL # optim-adam, reg-dropout, train-mtl, train-transfer, train-augment, arch-lstm, arch-bilstm, arch-gru, arch-att, arch-selfatt, arch-gating, arch-bilinear, pre-glove, pre-elmo, pre-bert, task-spanlab, task-lm, task-seq2seq 4 Multi-task Learning with Sample Re-weighting for Machine Reading Comprehension Yichong Xu, Xiaodong Liu, Yelong Shen, Jingjing Liu, Jianfeng Gao https://www.aclweb.org/anthology/N19-1271.pdf
2019 NAACL # optim-adam, reg-dropout, arch-rnn, arch-lstm, arch-bilstm, arch-gru, arch-att, arch-memo, pre-elmo, task-spanlab, task-seq2seq 7 BAG: Bi-directional Attention Entity Graph Convolutional Network for Multi-hop Reasoning Question Answering Yu Cao, Meng Fang, Dacheng Tao https://www.aclweb.org/anthology/N19-1032.pdf
2019 NAACL # optim-adam, reg-dropout, arch-lstm, arch-att, pre-glove, pre-bert, task-spanlab, task-seq2seq 3 Fast Prototyping a Dialogue Comprehension System for Nurse-Patient Conversations on Symptom Monitoring Zhengyuan Liu, Hazel Lim, Nur Farah Ain Suhaimi, Shao Chuen Tong, Sharon Ong, Angela Ng, Sheldon Lee, Michael R. Macdonald, Savitha Ramasamy, Pavitra Krishnaswamy, Wai Leng Chow, Nancy F. Chen https://www.aclweb.org/anthology/N19-2004.pdf
2019 NAACL # optim-adam, reg-stopping, reg-patience, arch-rnn, arch-lstm, arch-gru, arch-att, arch-selfatt, arch-coverage, adv-examp, task-textclass, task-textpair, task-spanlab, task-seq2seq 10 Inoculation by Fine-Tuning: A Method for Analyzing Challenge Datasets Nelson F. Liu, Roy Schwartz, Noah A. Smith https://www.aclweb.org/anthology/N19-1225.pdf
2019 NAACL # optim-adam, reg-dropout, reg-decay, train-transfer, train-augment, arch-lstm, arch-bilstm, arch-att, arch-selfatt, arch-coverage, arch-transformer, comb-ensemble, pre-glove, pre-skipthought, pre-elmo, pre-bert, struct-crf, task-textclass, task-textpair, task-seqlab, task-spanlab, task-lm, task-seq2seq, task-cloze 3209 BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova https://www.aclweb.org/anthology/N19-1423.pdf