Year Conf. Topics Citations Paper Authors URL
2019 ACL # optim-adam, train-mll, arch-cnn, arch-att, arch-transformer, pre-skipthought, pre-bert, task-extractive, task-lm, task-seq2seq, task-cloze 0 Sentence Centrality Revisited for Unsupervised Summarization Hao Zheng, Mirella Lapata https://www.aclweb.org/anthology/P19-1628.pdf
2019 ACL # optim-adam, reg-dropout, train-mtl, train-mll, pool-max, pool-mean, arch-lstm, arch-bilstm, arch-gru, arch-bigru, arch-coverage, pre-glove, pre-skipthought, pre-elmo, pre-bert, adv-train, task-textpair, task-lm, task-cloze, task-relation 2 DisSent: Learning Sentence Representations from Explicit Discourse Relations Allen Nie, Erin Bennett, Noah Goodman https://www.aclweb.org/anthology/P19-1442.pdf
2019 ACL # optim-adam, optim-projection, norm-gradient, train-mll, train-transfer, activ-relu, pool-max, arch-rnn, arch-lstm, arch-subword, comb-ensemble, pre-fasttext, pre-skipthought, latent-vae, task-textpair, task-lm, task-seq2seq, task-cloze 0 Exploiting Invertible Decoders for Unsupervised Sentence Representation Learning Shuai Tang, Virginia R. de Sa https://www.aclweb.org/anthology/P19-1397.pdf
2019 ACL # train-mll, pool-max, arch-lstm, arch-att, arch-memo, nondif-reinforce, adv-train, latent-vae, task-extractive, task-lm, task-seq2seq, task-cloze, task-context 1 Self-Supervised Dialogue Learning Jiawei Wu, Xin Wang, William Yang Wang https://www.aclweb.org/anthology/P19-1375.pdf
2019 ACL # train-mtl, train-transfer, arch-lstm, arch-att, pre-elmo, pre-bert, latent-vae, task-textclass, task-lm, task-seq2seq, task-cloze 6 Pretraining Methods for Dialog Context Representation Learning Shikib Mehri, Evgeniia Razumovskaia, Tiancheng Zhao, Maxine Eskenazi https://www.aclweb.org/anthology/P19-1373.pdf
2019 ACL # optim-adam, init-glorot, reg-dropout, arch-rnn, arch-lstm, arch-cnn, arch-att, arch-selfatt, arch-memo, arch-copy, arch-coverage, arch-subword, arch-transformer, pre-word2vec, pre-elmo, pre-bert, struct-hmm, latent-vae, task-extractive, task-lm, task-seq2seq, task-cloze 0 HIBERT: Document Level Pre-training of Hierarchical Bidirectional Transformers for Document Summarization Xingxing Zhang, Furu Wei, Ming Zhou https://www.aclweb.org/anthology/P19-1499.pdf
2019 ACL # optim-adam, reg-dropout, reg-decay, reg-labelsmooth, train-mll, train-transfer, arch-att, arch-selfatt, arch-transformer, comb-ensemble, search-beam, pre-elmo, pre-bert, task-textclass, task-textpair, task-lm, task-seq2seq, task-cloze 1 A Simple and Effective Approach to Automatic Post-Editing with Transfer Learning Gonçalo M. Correia, André F. T. Martins https://www.aclweb.org/anthology/P19-1292.pdf
2019 ACL # optim-sgd, optim-adam, reg-dropout, train-mtl, train-transfer, arch-lstm, arch-bilstm, arch-gru, arch-att, arch-selfatt, arch-transformer, pre-elmo, pre-bert, task-textclass, task-textpair, task-spanlab, task-lm, task-cloze 116 Multi-Task Deep Neural Networks for Natural Language Understanding Xiaodong Liu, Pengcheng He, Weizhu Chen, Jianfeng Gao https://www.aclweb.org/anthology/P19-1441.pdf
2019 ACL # optim-adam, train-mtl, train-mll, pool-max, arch-rnn, arch-lstm, arch-bilstm, arch-cnn, arch-att, arch-selfatt, arch-coverage, arch-subword, arch-transformer, pre-elmo, pre-bert, task-textclass, task-textpair, task-seqlab, task-lm, task-seq2seq, task-cloze 27 ERNIE: Enhanced Language Representation with Informative Entities Zhengyan Zhang, Xu Han, Zhiyuan Liu, Xin Jiang, Maosong Sun, Qun Liu https://www.aclweb.org/anthology/P19-1139.pdf
2019 ACL # optim-adam, arch-rnn, arch-lstm, arch-transformer, pre-elmo, pre-bert, task-textpair, task-lm, task-cloze 37 BERT Rediscovers the Classical NLP Pipeline Ian Tenney, Dipanjan Das, Ellie Pavlick https://www.aclweb.org/anthology/P19-1452.pdf
2019 ACL # optim-adam, optim-projection, train-mtl, train-transfer, arch-lstm, arch-coverage, pre-word2vec, pre-glove, pre-elmo, pre-bert, task-textclass, task-lm, task-cloze 0 Topic Sensitive Attention on Generic Corpora Corrects Sense Bias in Pretrained Embeddings Vihari Piratla, Sunita Sarawagi, Soumen Chakrabarti https://www.aclweb.org/anthology/P19-1168.pdf
2019 ACL # train-transfer, arch-rnn, arch-lstm, arch-cnn, arch-att, arch-transformer, pre-bert, task-textclass, task-spanlab, task-lm, task-seq2seq, task-cloze 2 Exploring Pre-trained Language Models for Event Extraction and Generation Sen Yang, Dawei Feng, Linbo Qiao, Zhigang Kan, Dongsheng Li https://www.aclweb.org/anthology/P19-1522.pdf
2019 ACL # arch-lstm, arch-bilstm, arch-att, arch-transformer, pre-word2vec, pre-fasttext, pre-glove, pre-elmo, pre-bert, task-textpair, task-lm, task-cloze 3 Classification and Clustering of Arguments with Contextualized Word Embeddings Nils Reimers, Benjamin Schiller, Tilman Beck, Johannes Daxenberger, Christian Stab, Iryna Gurevych https://www.aclweb.org/anthology/P19-1054.pdf
2019 EMNLP # optim-adam, arch-lstm, arch-bilstm, arch-att, arch-selfatt, pre-elmo, pre-bert, adv-examp, task-textclass, task-spanlab, task-lm, task-seq2seq, task-cloze 0 AllenNLP Interpret: A Framework for Explaining Predictions of NLP Models Eric Wallace, Jens Tuyls, Junlin Wang, Sanjay Subramanian, Matt Gardner, Sameer Singh https://www.aclweb.org/anthology/D19-3002.pdf
2019 EMNLP # pool-max, arch-rnn, arch-lstm, arch-bilstm, arch-gru, arch-att, arch-selfatt, arch-subword, arch-transformer, pre-skipthought, pre-bert, struct-crf, task-textclass, task-textpair, task-seqlab, task-lm, task-seq2seq, task-cloze 0 UER: An Open-Source Toolkit for Pre-training Models Zhe Zhao, Hui Chen, Jinbin Zhang, Xin Zhao, Tao Liu, Wei Lu, Xi Chen, Haotang Deng, Qi Ju, Xiaoyong Du https://www.aclweb.org/anthology/D19-3041.pdf
2019 EMNLP # optim-projection, train-transfer, arch-rnn, arch-lstm, pre-elmo, pre-bert, adv-train, task-textclass, task-seqlab, task-lm, task-cloze 4 Unsupervised Domain Adaptation of Contextualized Embeddings for Sequence Labeling Xiaochuang Han, Jacob Eisenstein https://www.aclweb.org/anthology/D19-1433.pdf
2019 EMNLP # optim-adam, reg-dropout, train-mtl, train-mll, train-transfer, arch-lstm, arch-att, arch-coverage, arch-subword, pre-elmo, pre-bert, task-textpair, task-lm, task-seq2seq, task-cloze, task-alignment 4 Unicoder: A Universal Language Encoder by Pre-training with Multiple Cross-lingual Tasks Haoyang Huang, Yaobo Liang, Nan Duan, Ming Gong, Linjun Shou, Daxin Jiang, Ming Zhou https://www.aclweb.org/anthology/D19-1252.pdf
2019 EMNLP # arch-coverage, pre-bert, task-lm, task-cloze 3 Commonsense Knowledge Mining from Pretrained Models Joe Davison, Joshua Feldman, Alexander Rush https://www.aclweb.org/anthology/D19-1109.pdf
2019 EMNLP # optim-adam, init-glorot, reg-dropout, reg-decay, train-mll, train-transfer, pool-max, arch-lstm, arch-att, arch-selfatt, arch-bilinear, arch-subword, arch-transformer, pre-elmo, pre-bert, struct-crf, task-textclass, task-textpair, task-seqlab, task-spanlab, task-lm, task-seq2seq, task-cloze, task-relation 22 Beto, Bentz, Becas: The Surprising Cross-Lingual Effectiveness of BERT Shijie Wu, Mark Dredze https://www.aclweb.org/anthology/D19-1077.pdf
2019 EMNLP # optim-adam, optim-projection, reg-dropout, pool-max, arch-rnn, arch-lstm, arch-gru, arch-att, arch-subword, search-beam, pre-fasttext, pre-bert, task-textclass, task-lm, task-seq2seq, task-cloze 1 Justifying Recommendations using Distantly-Labeled Reviews and Fine-Grained Aspects Jianmo Ni, Jiacheng Li, Julian McAuley https://www.aclweb.org/anthology/D19-1018.pdf
2019 EMNLP # optim-sgd, arch-lstm, arch-att, arch-selfatt, pre-word2vec, pre-bert, task-textclass, task-textpair, task-lm, task-cloze 0 News2vec: News Network Embedding with Subnode Information Ye Ma, Lu Zong, Yikang Yang, Jionglong Su https://www.aclweb.org/anthology/D19-1490.pdf
2019 EMNLP # train-transfer, arch-rnn, arch-att, comb-ensemble, pre-bert, task-extractive, task-lm, task-cloze 0 AMPERSAND: Argument Mining for PERSuAsive oNline Discussions Tuhin Chakrabarty, Christopher Hidey, Smaranda Muresan, Kathy McKeown, Alyssa Hwang https://www.aclweb.org/anthology/D19-1291.pdf
2019 EMNLP # optim-adam, optim-projection, reg-dropout, reg-worddropout, arch-rnn, arch-lstm, arch-cnn, arch-att, arch-selfatt, arch-subword, arch-transformer, pre-bert, loss-cca, task-lm, task-seq2seq, task-cloze 1 The Bottom-up Evolution of Representations in the Transformer: A Study with Machine Translation and Language Modeling Objectives Elena Voita, Rico Sennrich, Ivan Titov https://www.aclweb.org/anthology/D19-1448.pdf
2019 EMNLP # optim-adam, optim-projection, train-mll, arch-rnn, arch-lstm, arch-bilstm, arch-att, arch-selfatt, pre-fasttext, pre-elmo, pre-bert, struct-crf, task-spanlab, task-lm, task-seq2seq, task-cloze 0 Learning with Limited Data for Multilingual Reading Comprehension Kyungjae Lee, Sunghyun Park, Hojae Han, Jinyoung Yeo, Seung-won Hwang, Juho Lee https://www.aclweb.org/anthology/D19-1283.pdf
2019 EMNLP # optim-adam, train-transfer, arch-lstm, arch-att, arch-selfatt, comb-ensemble, pre-glove, pre-skipthought, pre-elmo, loss-nce, task-textclass, task-lm, task-cloze 0 Multi-Granularity Representations of Dialog Shikib Mehri, Maxine Eskenazi https://www.aclweb.org/anthology/D19-1184.pdf
2019 EMNLP # optim-adam, norm-layer, arch-lstm, arch-bilstm, arch-att, arch-selfatt, arch-transformer, pre-word2vec, pre-elmo, pre-bert, task-textclass, task-textpair, task-extractive, task-spanlab, task-lm, task-seq2seq, task-cloze 0 Fine-tune BERT with Sparse Self-Attention Mechanism Baiyun Cui, Yingming Li, Ming Chen, Zhongfei Zhang https://www.aclweb.org/anthology/D19-1361.pdf
2019 EMNLP # optim-adam, reg-dropout, train-transfer, arch-rnn, arch-lstm, arch-gru, arch-att, arch-subword, pre-fasttext, pre-glove, pre-elmo, pre-bert, loss-nce, task-textclass, task-textpair, task-lm, task-cloze 0 Multi-label Categorization of Accounts of Sexism using a Neural Framework Pulkit Parikh, Harika Abburi, Pinkesh Badjatiya, Radhika Krishnan, Niyati Chhaya, Manish Gupta, Vasudeva Varma https://www.aclweb.org/anthology/D19-1174.pdf
2019 EMNLP # optim-projection, train-mtl, arch-cnn, arch-att, arch-coverage, arch-subword, arch-transformer, comb-ensemble, pre-word2vec, pre-fasttext, pre-glove, pre-skipthought, pre-elmo, pre-bert, task-textclass, task-textpair, task-spanlab, task-lm, task-cloze 9 Patient Knowledge Distillation for BERT Model Compression Siqi Sun, Yu Cheng, Zhe Gan, Jingjing Liu https://www.aclweb.org/anthology/D19-1441.pdf
2019 EMNLP # optim-adam, reg-dropout, train-mtl, train-transfer, pool-max, arch-rnn, arch-att, arch-selfatt, arch-coverage, arch-transformer, pre-skipthought, pre-elmo, pre-bert, task-textpair, task-spanlab, task-lm, task-seq2seq, task-cloze 0 Transfer Fine-Tuning: A BERT Case Study Yuki Arase, Jun’ichi Tsujii https://www.aclweb.org/anthology/D19-1542.pdf
2019 EMNLP # arch-rnn, arch-lstm, arch-bilstm, arch-cnn, arch-att, arch-selfatt, arch-coverage, arch-transformer, pre-bert, task-textpair, task-spanlab, task-lm, task-seq2seq, task-cloze 4 Revealing the Dark Secrets of BERT Olga Kovaleva, Alexey Romanov, Anna Rogers, Anna Rumshisky https://www.aclweb.org/anthology/D19-1445.pdf
2019 EMNLP # optim-adam, reg-dropout, reg-labelsmooth, norm-batch, norm-gradient, pool-max, arch-rnn, arch-lstm, arch-att, arch-subword, arch-transformer, search-beam, latent-vae, task-lm, task-seq2seq, task-cloze, task-relation 2 FlowSeq: Non-Autoregressive Conditional Sequence Generation with Generative Flow Xuezhe Ma, Chunting Zhou, Xian Li, Graham Neubig, Eduard Hovy https://www.aclweb.org/anthology/D19-1437.pdf
2019 EMNLP # optim-adam, reg-dropout, train-augment, arch-att, arch-transformer, comb-ensemble, pre-fasttext, pre-bert, adv-examp, adv-train, task-textclass, task-lm, task-cloze 1 Learning to Discriminate Perturbations for Blocking Adversarial Attacks in Text Classification Yichao Zhou, Jyun-Yu Jiang, Kai-Wei Chang, Wei Wang https://www.aclweb.org/anthology/D19-1496.pdf
2019 EMNLP # optim-sgd, optim-adam, optim-projection, arch-lstm, arch-att, arch-selfatt, arch-coverage, arch-transformer, pre-elmo, pre-bert, task-textclass, task-lm, task-cloze 1 Visualizing and Understanding the Effectiveness of BERT Yaru Hao, Li Dong, Furu Wei, Ke Xu https://www.aclweb.org/anthology/D19-1424.pdf
2019 EMNLP # optim-adam, train-mll, arch-rnn, arch-att, arch-selfatt, arch-coverage, arch-subword, arch-transformer, search-beam, task-spanlab, task-lm, task-seq2seq, task-cloze 1 Using Local Knowledge Graph Construction to Scale Seq2Seq Models to Multi-Document Inputs Angela Fan, Claire Gardent, Chloé Braud, Antoine Bordes https://www.aclweb.org/anthology/D19-1428.pdf
2019 EMNLP # optim-adam, reg-dropout, reg-labelsmooth, arch-rnn, arch-cnn, arch-att, arch-selfatt, arch-copy, arch-coverage, arch-subword, arch-transformer, search-beam, pre-elmo, pre-bert, latent-vae, task-seqlab, task-extractive, task-lm, task-seq2seq, task-cloze 6 Text Summarization with Pretrained Encoders Yang Liu, Mirella Lapata https://www.aclweb.org/anthology/D19-1387.pdf
2019 EMNLP # reg-dropout, arch-lstm, arch-att, pre-glove, pre-elmo, pre-bert, task-seqlab, task-lm, task-cloze 1 BERT for Coreference Resolution: Baselines and Analysis Mandar Joshi, Omer Levy, Luke Zettlemoyer, Daniel Weld https://www.aclweb.org/anthology/D19-1588.pdf
2019 EMNLP # optim-adam, reg-dropout, reg-decay, norm-layer, train-mll, train-parallel, arch-rnn, arch-att, arch-subword, arch-transformer, search-greedy, search-beam, pre-bert, task-lm, task-seq2seq, task-cloze 2 Mask-Predict: Parallel Decoding of Conditional Masked Language Models Marjan Ghazvininejad, Omer Levy, Yinhan Liu, Luke Zettlemoyer https://www.aclweb.org/anthology/D19-1633.pdf
2019 EMNLP # optim-adam, train-mtl, pool-max, arch-rnn, arch-lstm, arch-att, arch-coverage, pre-glove, pre-bert, task-textpair, task-lm, task-cloze 3 Investigating BERT’s Knowledge of Language: Five Analysis Methods with NPIs Alex Warstadt, Yu Cao, Ioana Grosu, Wei Peng, Hagen Blix, Yining Nie, Anna Alsop, Shikha Bordia, Haokun Liu, Alicia Parrish, Sheng-Fu Wang, Jason Phang, Anhad Mohananey, Phu Mon Htut, Paloma Jeretic, Samuel R. Bowman https://www.aclweb.org/anthology/D19-1286.pdf
2019 EMNLP # reg-dropout, arch-rnn, arch-lstm, arch-att, arch-selfatt, arch-coverage, arch-transformer, pre-bert, struct-cfg, task-lm, task-seq2seq, task-cloze, task-tree 0 Tree Transformer: Integrating Tree Structures into Self-Attention Yaushian Wang, Hung-Yi Lee, Yun-Nung Chen https://www.aclweb.org/anthology/D19-1098.pdf
2019 EMNLP # optim-adam, arch-att, arch-transformer, comb-ensemble, pre-bert, task-lm, task-condlm, task-cloze, task-tree 11 Fusion of Detected Objects in Text for Visual Question Answering Chris Alberti, Jeffrey Ling, Michael Collins, David Reitter https://www.aclweb.org/anthology/D19-1219.pdf
2019 EMNLP # optim-projection, train-mll, train-transfer, arch-lstm, arch-att, arch-coverage, arch-subword, arch-transformer, comb-ensemble, pre-word2vec, pre-fasttext, pre-glove, pre-skipthought, pre-elmo, pre-bert, pre-use, adv-train, loss-cca, task-textclass, task-textpair, task-lm, task-seq2seq, task-cloze 0 Multi-View Domain Adapted Sentence Embeddings for Low-Resource Unsupervised Duplicate Question Detection Nina Poerner, Hinrich Schütze https://www.aclweb.org/anthology/D19-1173.pdf
2019 EMNLP # arch-att, arch-transformer, pre-bert, task-lm, task-cloze 1 TalkDown: A Corpus for Condescension Detection in Context Zijian Wang, Christopher Potts https://www.aclweb.org/anthology/D19-1385.pdf
2019 NAACL # optim-adam, arch-cnn, arch-att, arch-coverage, pre-elmo, pre-bert, task-textclass, task-spanlab, task-lm, task-cloze 19 BERT Post-Training for Review Reading Comprehension and Aspect-based Sentiment Analysis Hu Xu, Bing Liu, Lei Shu, Philip Yu https://www.aclweb.org/anthology/N19-1242.pdf
2019 NAACL # reg-dropout, arch-lstm, arch-att, arch-selfatt, arch-memo, pre-elmo, pre-bert, task-lm, task-cloze 16 Utilizing BERT for Aspect-Based Sentiment Analysis via Constructing Auxiliary Sentence Chi Sun, Luyao Huang, Xipeng Qiu https://www.aclweb.org/anthology/N19-1035.pdf
2019 NAACL # optim-adam, reg-dropout, reg-decay, train-transfer, train-augment, arch-lstm, arch-bilstm, arch-att, arch-selfatt, arch-coverage, arch-transformer, comb-ensemble, pre-glove, pre-skipthought, pre-elmo, pre-bert, struct-crf, task-textclass, task-textpair, task-seqlab, task-spanlab, task-lm, task-seq2seq, task-cloze 3209 BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova https://www.aclweb.org/anthology/N19-1423.pdf
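Nearly every entry above carries the task-lm and task-cloze tags, i.e. the masked-language-modeling ("cloze") pretraining objective popularized by the final entry (Devlin et al., 2019). As a minimal sketch of what that tag refers to, the snippet below predicts a masked token from its bidirectional context; it assumes the HuggingFace transformers library and the bert-base-uncased checkpoint, which are illustrative choices not named anywhere in the listing.

```python
# Minimal sketch of the cloze (masked-LM) task tagged as task-cloze
# throughout the table above. Assumes the HuggingFace `transformers`
# library and the `bert-base-uncased` checkpoint (illustrative only).
from transformers import pipeline

# fill-mask pipeline: scores candidate tokens for the [MASK] position
fill_mask = pipeline("fill-mask", model="bert-base-uncased")

for candidate in fill_mask("BERT predicts the [MASK] token from its context."):
    print(f"{candidate['token_str']:>12}  score={candidate['score']:.3f}")
```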