Year Conf. Topics Cited Paper Authors URL
2019 ACL # optim-adam, norm-layer, arch-rnn, arch-lstm, arch-att, arch-selfatt, arch-transformer, task-seq2seq 2 Incremental Transformer with Deliberation Decoder for Document Grounded Conversations Zekang Li, Cheng Niu, Fandong Meng, Yang Feng, Qian Li, Jie Zhou https://www.aclweb.org/anthology/P19-1002.pdf
2019 ACL # optim-adam, reg-dropout, norm-layer, arch-rnn, arch-att, arch-selfatt, arch-memo, arch-transformer, comb-ensemble, search-beam, pre-bert, latent-vae, task-condlm, task-seq2seq 1 Multimodal Transformer Networks for End-to-End Video-Grounded Dialogue Systems Hung Le, Doyen Sahoo, Nancy Chen, Steven Hoi https://www.aclweb.org/anthology/P19-1564.pdf
2019 ACL # optim-adam, train-mtl, train-transfer, arch-lstm, arch-att, arch-selfatt, arch-transformer, pre-glove, pre-bert, task-extractive, task-lm, task-context 0 Self-Supervised Learning for Contextualized Extractive Summarization Hong Wang, Xin Wang, Wenhan Xiong, Mo Yu, Xiaoxiao Guo, Shiyu Chang, William Yang Wang https://www.aclweb.org/anthology/P19-1214.pdf
2019 ACL # optim-adam, reg-dropout, reg-stopping, arch-lstm, arch-att, arch-selfatt, arch-transformer, pre-glove, pre-bert, task-textpair, task-spanlab, task-lm, task-seq2seq 3 E3: Entailment-driven Extracting and Editing for Conversational Machine Reading Victor Zhong, Luke Zettlemoyer https://www.aclweb.org/anthology/P19-1223.pdf
2019 ACL # optim-adam, reg-dropout, reg-stopping, train-mll, arch-rnn, arch-lstm, arch-cnn, arch-att, arch-selfatt, arch-subword, comb-ensemble, pre-word2vec, task-lm, task-condlm, task-seq2seq 1 Attention-based Conditioning Methods for External Knowledge Integration Katerina Margatina, Christos Baziotis, Alexandros Potamianos https://www.aclweb.org/anthology/P19-1385.pdf
2019 ACL # train-transfer, arch-rnn, arch-lstm, arch-cnn, arch-att, arch-selfatt, arch-subword, pre-word2vec, pre-elmo, pre-bert, task-textclass, task-seqlab, task-lm, task-seq2seq 0 An Investigation of Transfer Learning-Based Sentiment Analysis in Japanese Enkhbold Bataa, Joshua Wu https://www.aclweb.org/anthology/P19-1458.pdf
2019 ACL # optim-adagrad, reg-dropout, norm-batch, arch-lstm, arch-att, arch-selfatt, arch-subword, arch-transformer, task-lm, task-seq2seq 0 Training Hybrid Language Models by Marginalizing over Segmentations Edouard Grave, Sainbayar Sukhbaatar, Piotr Bojanowski, Armand Joulin https://www.aclweb.org/anthology/P19-1143.pdf
2019 ACL # optim-adam, reg-dropout, train-mll, train-transfer, arch-rnn, arch-att, arch-selfatt, arch-subword, arch-transformer, task-spanlab, task-lm, task-seq2seq 4 Cross-Lingual Training for Automatic Question Generation Vishwajeet Kumar, Nitish Joshi, Arijit Mukherjee, Ganesh Ramakrishnan, Preethi Jyothi https://www.aclweb.org/anthology/P19-1481.pdf
2019 ACL # reg-dropout, norm-layer, arch-rnn, arch-lstm, arch-att, arch-selfatt, arch-bilinear, arch-coverage, arch-transformer, comb-ensemble, pre-glove, pre-elmo, pre-bert, task-relation 8 Head-Driven Phrase Structure Grammar Parsing on Penn Treebank Junru Zhou, Hai Zhao https://www.aclweb.org/anthology/P19-1230.pdf
2019 ACL # arch-rnn, arch-cnn, arch-att, arch-selfatt, arch-copy, arch-subword, arch-transformer, search-beam, latent-vae, task-seq2seq 3 Imitation Learning for Non-Autoregressive Neural Machine Translation Bingzhen Wei, Mingxuan Wang, Hao Zhou, Junyang Lin, Xu Sun https://www.aclweb.org/anthology/P19-1125.pdf
2019 ACL # optim-adam, reg-dropout, arch-rnn, arch-lstm, arch-att, arch-selfatt, arch-transformer, pre-glove, task-textpair, task-lm, task-seq2seq 4 Multimodal Transformer for Unaligned Multimodal Language Sequences Yao-Hung Hubert Tsai, Shaojie Bai, Paul Pu Liang, J. Zico Kolter, Louis-Philippe Morency, Ruslan Salakhutdinov https://www.aclweb.org/anthology/P19-1656.pdf
2019 ACL # optim-sgd, train-mll, arch-rnn, arch-lstm, arch-bilstm, arch-att, arch-selfatt, comb-ensemble, pre-word2vec, pre-glove, pre-elmo, pre-bert, task-textpair, task-seqlab, task-seq2seq 0 End-to-End Sequential Metaphor Identification Inspired by Linguistic Theories Rui Mao, Chenghua Lin, Frank Guerin https://www.aclweb.org/anthology/P19-1378.pdf
2019 ACL # arch-lstm, arch-bilstm, arch-att, arch-selfatt, arch-bilinear, pre-elmo, task-seqlab, task-lm, task-tree 2 End-to-end Deep Reinforcement Learning Based Coreference Resolution Hongliang Fei, Xu Li, Dingcheng Li, Ping Li https://www.aclweb.org/anthology/P19-1064.pdf
2019 ACL # reg-labelsmooth, arch-att, arch-selfatt, arch-subword, arch-transformer, search-beam, latent-vae, task-seq2seq 4 Syntactically Supervised Transformers for Faster Neural Machine Translation Nader Akoury, Kalpesh Krishna, Mohit Iyyer https://www.aclweb.org/anthology/P19-1122.pdf
2019 ACL # optim-adam, optim-projection, reg-dropout, pool-max, pool-mean, arch-rnn, arch-lstm, arch-bilstm, arch-att, arch-selfatt, arch-transformer, pre-bert, struct-crf, task-seq2seq 1 Dense Procedure Captioning in Narrated Instructional Videos Botian Shi, Lei Ji, Yaobo Liang, Nan Duan, Peng Chen, Zhendong Niu, Ming Zhou https://www.aclweb.org/anthology/P19-1641.pdf
2019 ACL # optim-adam, pool-max, pool-mean, arch-lstm, arch-bilstm, arch-att, arch-selfatt, pre-glove, task-textpair 0 Predicting Human Activities from User-Generated Content Steven Wilson, Rada Mihalcea https://www.aclweb.org/anthology/P19-1245.pdf
2019 ACL # optim-adam, reg-dropout, train-mtl, arch-rnn, arch-lstm, arch-cnn, arch-att, arch-selfatt, pre-fasttext, pre-glove, latent-vae, task-relation 1 An Interactive Multi-Task Learning Network for End-to-End Aspect-Based Sentiment Analysis Ruidan He, Wee Sun Lee, Hwee Tou Ng, Daniel Dahlmeier https://www.aclweb.org/anthology/P19-1048.pdf
2019 ACL # optim-adam, reg-dropout, arch-rnn, arch-birnn, arch-lstm, arch-gnn, arch-att, arch-selfatt, arch-memo, arch-bilinear, arch-transformer, comb-ensemble, pre-fasttext, task-textclass, task-relation, task-graph 2 Graph-based Dependency Parsing with Graph Neural Networks Tao Ji, Yuanbin Wu, Man Lan https://www.aclweb.org/anthology/P19-1237.pdf
2019 ACL # optim-adam, optim-projection, reg-dropout, reg-stopping, pool-max, arch-rnn, arch-lstm, arch-bilstm, arch-treelstm, arch-att, arch-selfatt, arch-coverage, latent-vae, task-textpair, task-lm, task-condlm, task-relation, task-tree 1 Learning Latent Trees with Stochastic Perturbations and Differentiable Dynamic Programming Caio Corro, Ivan Titov https://www.aclweb.org/anthology/P19-1551.pdf
2019 ACL # optim-adam, arch-rnn, arch-gru, arch-gnn, arch-att, arch-selfatt, comb-ensemble, pre-elmo, pre-bert, task-textclass, task-spanlab, task-seq2seq 6 Multi-hop Reading Comprehension across Multiple Documents by Reasoning over Heterogeneous Graphs Ming Tu, Guangtao Wang, Jing Huang, Yun Tang, Xiaodong He, Bowen Zhou https://www.aclweb.org/anthology/P19-1260.pdf
2019 ACL # optim-adam, reg-dropout, norm-layer, train-transfer, arch-rnn, arch-cnn, arch-att, arch-selfatt, arch-coverage, arch-subword, arch-transformer, task-seq2seq, task-relation 1 Neural Machine Translation with Reordering Embeddings Kehai Chen, Rui Wang, Masao Utiyama, Eiichiro Sumita https://www.aclweb.org/anthology/P19-1174.pdf
2019 ACL # optim-adam, reg-dropout, norm-layer, arch-rnn, arch-lstm, arch-bilstm, arch-gru, arch-att, arch-selfatt, comb-ensemble, task-seq2seq, task-tree 0 Modeling Intra-Relation in Math Word Problems with Different Functional Multi-Head Attentions Jierui Li, Lei Wang, Jipeng Zhang, Yan Wang, Bing Tian Dai, Dongxiang Zhang https://www.aclweb.org/anthology/P19-1619.pdf
2019 ACL # train-transfer, arch-cnn, arch-att, arch-selfatt, arch-transformer, pre-bert, adv-train, task-relation 6 Extracting Multiple-Relations in One-Pass with Pre-Trained Transformers Haoyu Wang, Ming Tan, Mo Yu, Shiyu Chang, Dakuo Wang, Kun Xu, Xiaoxiao Guo, Saloni Potdar https://www.aclweb.org/anthology/P19-1132.pdf
2019 ACL # optim-adam, reg-dropout, arch-rnn, arch-lstm, arch-bilstm, arch-att, arch-selfatt, pre-bert, task-spanlab, task-seq2seq 0 Reading Turn by Turn: Hierarchical Attention Architecture for Spoken Dialogue Comprehension Zhengyuan Liu, Nancy Chen https://www.aclweb.org/anthology/P19-1543.pdf
2019 ACL # init-glorot, reg-dropout, arch-rnn, arch-birnn, arch-lstm, arch-gru, arch-att, arch-selfatt, arch-bilinear, pre-elmo, pre-bert, struct-crf, task-textclass 12 Joint Slot Filling and Intent Detection via Capsule Neural Networks Chenwei Zhang, Yaliang Li, Nan Du, Wei Fan, Philip Yu https://www.aclweb.org/anthology/P19-1519.pdf
2019 ACL # optim-adam, reg-dropout, arch-rnn, arch-cnn, arch-att, arch-selfatt, arch-coverage, arch-subword, arch-transformer, search-beam, task-seq2seq, task-alignment 3 On the Word Alignment from Neural Machine Translation Xintong Li, Guanlin Li, Lemao Liu, Max Meng, Shuming Shi https://www.aclweb.org/anthology/P19-1124.pdf
2019 ACL # optim-adagrad, reg-dropout, train-mll, pool-mean, arch-gnn, arch-att, arch-selfatt, arch-subword, loss-triplet 7 Multi-Channel Graph Neural Network for Entity Alignment Yixin Cao, Zhiyuan Liu, Chengjiang Li, Zhiyuan Liu, Juanzi Li, Tat-Seng Chua https://www.aclweb.org/anthology/P19-1140.pdf
2019 ACL # optim-adam, reg-dropout, reg-stopping, arch-rnn, arch-lstm, arch-cnn, arch-att, arch-selfatt, pre-glove, pre-elmo, pre-bert, struct-crf, task-seqlab, task-lm, task-relation 2 Multi-grained Named Entity Recognition Congying Xia, Chenwei Zhang, Tao Yang, Yaliang Li, Nan Du, Xian Wu, Wei Fan, Fenglong Ma, Philip Yu https://www.aclweb.org/anthology/P19-1138.pdf
2019 ACL # optim-adam, optim-adagrad, reg-dropout, train-augment, pool-max, arch-rnn, arch-lstm, arch-bilstm, arch-att, arch-selfatt, arch-copy, search-beam, pre-glove, pre-bert, adv-examp, latent-vae, task-spanlab, task-seq2seq 3 Learning to Ask Unanswerable Questions for Machine Reading Comprehension Haichao Zhu, Li Dong, Furu Wei, Wenhui Wang, Bing Qin, Ting Liu https://www.aclweb.org/anthology/P19-1415.pdf
2019 ACL # optim-adam, optim-projection, train-mll, arch-lstm, arch-att, arch-selfatt, arch-subword, comb-ensemble, pre-fasttext, pre-elmo, pre-bert, task-textclass, task-lm 9 Multilingual Constituency Parsing with Self-Attention and Pre-Training Nikita Kitaev, Steven Cao, Dan Klein https://www.aclweb.org/anthology/P19-1340.pdf
2019 ACL # optim-adam, optim-projection, reg-dropout, arch-lstm, arch-gru, arch-bigru, arch-cnn, arch-att, arch-selfatt, arch-transformer, pre-glove, pre-elmo, struct-crf, task-textclass 0 Observing Dialogue in Therapy: Categorizing and Forecasting Behavioral Codes Jie Cao, Michael Tanana, Zac Imel, Eric Poitras, David Atkins, Vivek Srikumar https://www.aclweb.org/anthology/P19-1563.pdf
2019 ACL # optim-projection, arch-lstm, arch-att, arch-selfatt, struct-crf, task-seqlab, task-relation 3 Crowdsourcing and Aggregating Nested Markable Annotations Chris Madge, Juntao Yu, Jon Chamberlain, Udo Kruschwitz, Silviu Paun, Massimo Poesio https://www.aclweb.org/anthology/P19-1077.pdf
2019 ACL # train-parallel, arch-rnn, arch-lstm, arch-bilstm, arch-att, arch-selfatt, arch-memo, pre-word2vec, adv-train, latent-vae 2 Rhetorically Controlled Encoder-Decoder for Modern Chinese Poetry Generation Zhiqiang Liu, Zuohui Fu, Jie Cao, Gerard de Melo, Yik-Cheung Tam, Cheng Niu, Jie Zhou https://www.aclweb.org/anthology/P19-1192.pdf
2019 ACL # optim-adam, optim-projection, arch-lstm, arch-att, arch-selfatt, arch-coverage, arch-subword, comb-ensemble, search-beam, search-viterbi, pre-word2vec, pre-elmo, pre-bert, task-lm 0 Cross-Domain Generalization of Neural Constituency Parsers Daniel Fried, Nikita Kitaev, Dan Klein https://www.aclweb.org/anthology/P19-1031.pdf
2019 ACL # optim-adam, reg-dropout, norm-gradient, arch-lstm, arch-gru, arch-bigru, arch-att, arch-selfatt, arch-memo, pre-glove, task-spanlab, task-seq2seq 0 Inferential Machine Comprehension: Answering Questions by Recursively Deducing the Evidence Chain from Text Jianxing Yu, Zhengjun Zha, Jian Yin https://www.aclweb.org/anthology/P19-1217.pdf
2019 ACL # optim-adam, arch-lstm, arch-att, arch-selfatt, arch-bilinear, arch-subword, arch-transformer, comb-ensemble, pre-elmo, pre-bert, loss-margin, task-spanlab, task-lm 2 Enhancing Pre-Trained Language Representations with Rich Knowledge for Machine Reading Comprehension An Yang, Quan Wang, Jing Liu, Kai Liu, Yajuan Lyu, Hua Wu, Qiaoqiao She, Sujian Li https://www.aclweb.org/anthology/P19-1226.pdf
2019 ACL # optim-adam, optim-projection, reg-dropout, train-mll, arch-rnn, arch-lstm, arch-att, arch-selfatt, arch-subword, comb-ensemble, search-beam, task-lm, task-condlm, task-seq2seq 5 Sparse Sequence-to-Sequence Models Ben Peters, Vlad Niculae, André F. T. Martins https://www.aclweb.org/anthology/P19-1146.pdf
2019 ACL # arch-rnn, arch-lstm, arch-bilstm, arch-att, arch-selfatt, arch-coverage, arch-transformer, task-textclass, task-extractive, task-lm, task-seq2seq 4 Multi-News: A Large-Scale Multi-Document Summarization Dataset and Abstractive Hierarchical Model Alexander Fabbri, Irene Li, Tianwei She, Suyi Li, Dragomir Radev https://www.aclweb.org/anthology/P19-1102.pdf
2019 ACL # optim-adam, optim-adadelta, optim-projection, arch-lstm, arch-bilstm, arch-att, arch-selfatt, arch-copy, pre-glove, task-spanlab, task-condlm 3 Simple and Effective Curriculum Pointer-Generator Networks for Reading Comprehension over Long Narratives Yi Tay, Shuohang Wang, Anh Tuan Luu, Jie Fu, Minh C. Phan, Xingdi Yuan, Jinfeng Rao, Siu Cheung Hui, Aston Zhang https://www.aclweb.org/anthology/P19-1486.pdf
2019 ACL # arch-rnn, arch-cnn, arch-att, arch-selfatt, arch-subword, arch-transformer, search-beam, task-seq2seq 5 Simultaneous Translation with Flexible Policy via Restricted Imitation Learning Baigong Zheng, Renjie Zheng, Mingbo Ma, Liang Huang https://www.aclweb.org/anthology/P19-1582.pdf
2019 ACL # reg-dropout, arch-rnn, arch-lstm, arch-gru, arch-cnn, arch-att, arch-selfatt, task-seq2seq 1 CNNs found to jump around more skillfully than RNNs: Compositional Generalization in Seq2seq Convolutional Networks Roberto Dessì, Marco Baroni https://www.aclweb.org/anthology/P19-1381.pdf
2019 ACL # optim-adam, reg-dropout, train-mtl, train-transfer, pool-max, arch-rnn, arch-lstm, arch-bilstm, arch-att, arch-selfatt, pre-elmo, loss-svd, task-textclass, task-textpair, task-lm, task-seq2seq, meta-arch 1 Continual and Multi-Task Architecture Search Ramakanth Pasunuru, Mohit Bansal https://www.aclweb.org/anthology/P19-1185.pdf
2019 ACL # optim-adam, arch-att, arch-selfatt, arch-subword, arch-transformer, task-lm, task-seq2seq 30 Analyzing Multi-Head Self-Attention: Specialized Heads Do the Heavy Lifting, the Rest Can Be Pruned Elena Voita, David Talbot, Fedor Moiseev, Rico Sennrich, Ivan Titov https://www.aclweb.org/anthology/P19-1580.pdf
2019 ACL # optim-adam, reg-dropout, pool-mean, arch-rnn, arch-lstm, arch-bilstm, arch-gru, arch-gnn, arch-att, arch-selfatt, arch-memo, arch-bilinear, comb-ensemble, pre-glove, task-textclass, task-condlm, task-seq2seq 10 Multi-step Reasoning via Recurrent Dual Attention for Visual Dialog Zhe Gan, Yu Cheng, Ahmed Kholy, Linjie Li, Jingjing Liu, Jianfeng Gao https://www.aclweb.org/anthology/P19-1648.pdf
2019 ACL # optim-sgd, optim-adam, optim-projection, train-transfer, arch-lstm, arch-bilstm, arch-att, arch-selfatt, arch-subword, pre-word2vec, pre-fasttext, pre-bert, struct-crf, task-textclass, task-seqlab, task-lm, task-seq2seq, meta-init 1 Few-Shot Representation Learning for Out-Of-Vocabulary Words Ziniu Hu, Ting Chen, Kai-Wei Chang, Yizhou Sun https://www.aclweb.org/anthology/P19-1402.pdf
2019 ACL # optim-adam, reg-dropout, arch-rnn, arch-birnn, arch-lstm, arch-att, arch-selfatt, arch-bilinear, arch-subword, comb-ensemble, pre-elmo, pre-bert, task-seqlab, task-spanlab, task-seq2seq 0 MC^2: Multi-perspective Convolutional Cube for Conversational Machine Reading Comprehension Xuanyu Zhang https://www.aclweb.org/anthology/P19-1622.pdf
2019 ACL # optim-projection, arch-rnn, arch-att, arch-selfatt, arch-subword, arch-transformer, search-beam, task-seq2seq, task-alignment 8 STACL: Simultaneous Translation with Implicit Anticipation and Controllable Latency using Prefix-to-Prefix Framework Mingbo Ma, Liang Huang, Hao Xiong, Renjie Zheng, Kaibo Liu, Baigong Zheng, Chuanqiang Zhang, Zhongjun He, Hairong Liu, Xing Li, Hua Wu, Haifeng Wang https://www.aclweb.org/anthology/P19-1289.pdf
2019 ACL # optim-adam, reg-dropout, reg-labelsmooth, train-transfer, arch-att, arch-selfatt, arch-copy, arch-bilinear, arch-coverage, arch-transformer, search-greedy, search-beam, pre-glove, task-spanlab, task-seq2seq, task-tree 1 Complex Question Decomposition for Semantic Parsing Haoyu Zhang, Jingjing Cai, Jianjun Xu, Ji Wang https://www.aclweb.org/anthology/P19-1440.pdf
2019 ACL # optim-adam, optim-projection, reg-dropout, norm-layer, arch-rnn, arch-att, arch-selfatt, arch-copy, arch-coverage, arch-transformer, search-beam, latent-vae, task-lm, task-seq2seq 2 Improving Abstractive Document Summarization with Salient Information Modeling Yongjian You, Weijia Jia, Tianyi Liu, Wenmian Yang https://www.aclweb.org/anthology/P19-1205.pdf
2019 ACL # arch-rnn, arch-lstm, arch-att, arch-selfatt, arch-coverage, arch-transformer, pre-bert, adv-examp, task-textclass, task-seq2seq 2 On the Robustness of Self-Attentive Models Yu-Lun Hsieh, Minhao Cheng, Da-Cheng Juan, Wei Wei, Wen-Lian Hsu, Cho-Jui Hsieh https://www.aclweb.org/anthology/P19-1147.pdf
2019 ACL # optim-adam, pool-max, arch-att, arch-selfatt, arch-transformer, comb-ensemble, pre-bert, task-spanlab, task-lm 7 Multi-hop Reading Comprehension through Question Decomposition and Rescoring Sewon Min, Victor Zhong, Luke Zettlemoyer, Hannaneh Hajishirzi https://www.aclweb.org/anthology/P19-1613.pdf
2019 ACL # optim-adam, reg-dropout, reg-decay, norm-layer, train-augment, arch-gnn, arch-att, arch-selfatt, arch-copy, arch-transformer, search-greedy, pre-glove, pre-bert, task-textclass, task-lm, task-seq2seq, task-tree, task-graph 2 Generating Logical Forms from Graph Representations of Text and Entities Peter Shaw, Philip Massey, Angelica Chen, Francesco Piccinno, Yasemin Altun https://www.aclweb.org/anthology/P19-1010.pdf
2019 ACL # optim-adam, reg-dropout, norm-layer, train-augment, arch-rnn, arch-att, arch-selfatt, arch-residual, arch-subword, arch-transformer, search-beam, task-lm, task-seq2seq 7 Learning Deep Transformer Models for Machine Translation Qiang Wang, Bei Li, Tong Xiao, Jingbo Zhu, Changliang Li, Derek F. Wong, Lidia S. Chao https://www.aclweb.org/anthology/P19-1176.pdf
2019 ACL # optim-adam, reg-dropout, reg-labelsmooth, arch-lstm, arch-att, arch-selfatt, arch-memo, arch-coverage, comb-ensemble, pre-glove, task-tree 0 Improving Question Answering over Incomplete KBs with Knowledge-Aware Reader Wenhan Xiong, Mo Yu, Shiyu Chang, Xiaoxiao Guo, William Yang Wang https://www.aclweb.org/anthology/P19-1417.pdf
2019 ACL # optim-adam, reg-dropout, train-mll, arch-rnn, arch-lstm, arch-gru, arch-att, arch-selfatt, arch-subword, arch-transformer, task-seqlab, task-seq2seq 4 Assessing the Ability of Self-Attention Networks to Learn Word Order Baosong Yang, Longyue Wang, Derek F. Wong, Lidia S. Chao, Zhaopeng Tu https://www.aclweb.org/anthology/P19-1354.pdf
2019 ACL # optim-adagrad, reg-dropout, arch-rnn, arch-att, arch-selfatt, arch-coverage, arch-subword, arch-transformer, search-beam, task-lm, task-condlm, task-seq2seq 1 Informative Image Captioning with External Sources of Information Sanqiang Zhao, Piyush Sharma, Tomer Levinboim, Radu Soricut https://www.aclweb.org/anthology/P19-1650.pdf
2019 ACL # optim-adam, reg-dropout, train-transfer, arch-lstm, arch-gru, arch-cnn, arch-att, arch-selfatt, arch-memo, arch-subword, arch-transformer, pre-elmo, pre-bert, adv-train, task-textclass, task-lm, task-seq2seq, task-relation 2 Fine-tuning Pre-Trained Transformer Language Models to Distantly Supervised Relation Extraction Christoph Alt, Marc Hübner, Leonhard Hennig https://www.aclweb.org/anthology/P19-1134.pdf
2019 ACL # optim-adam, optim-projection, arch-rnn, arch-lstm, arch-cnn, arch-att, arch-selfatt, arch-transformer, pre-glove, pre-elmo, pre-bert, task-textpair, task-lm 20 What Does BERT Learn about the Structure of Language? Ganesh Jawahar, Benoît Sagot, Djamé Seddah https://www.aclweb.org/anthology/P19-1356.pdf
2019 ACL # optim-sgd, optim-adam, optim-projection, arch-lstm, arch-treelstm, arch-att, arch-selfatt, arch-subword, arch-transformer, task-seqlab, task-lm, task-seq2seq 5 Lattice Transformer for Speech Translation Pei Zhang, Niyu Ge, Boxing Chen, Kai Fan https://www.aclweb.org/anthology/P19-1649.pdf
2019 ACL # optim-adam, pool-max, arch-rnn, arch-att, arch-selfatt, pre-bert, task-spanlab 8 Compositional Questions Do Not Necessitate Multi-hop Reasoning Sewon Min, Eric Wallace, Sameer Singh, Matt Gardner, Hannaneh Hajishirzi, Luke Zettlemoyer https://www.aclweb.org/anthology/P19-1416.pdf
2019 ACL # optim-sgd, optim-projection, arch-rnn, arch-birnn, arch-lstm, arch-att, arch-selfatt, search-beam, adv-examp, task-lm, task-condlm, task-seq2seq 1 TIGS: An Inference Algorithm for Text Infilling with Gradient Search Dayiheng Liu, Jie Fu, Pengfei Liu, Jiancheng Lv https://www.aclweb.org/anthology/P19-1406.pdf
2019 ACL # reg-dropout, norm-batch, norm-gradient, activ-relu, arch-rnn, arch-lstm, arch-gcnn, arch-att, arch-selfatt, arch-transformer, task-lm, task-seq2seq, meta-arch 194 Transformer-XL: Attentive Language Models beyond a Fixed-Length Context Zihang Dai, Zhilin Yang, Yiming Yang, Jaime Carbonell, Quoc Le, Ruslan Salakhutdinov https://www.aclweb.org/anthology/P19-1285.pdf
2019 ACL # optim-adam, reg-stopping, pool-max, arch-lstm, arch-gru, arch-bigru, arch-cnn, arch-att, arch-selfatt, arch-transformer, pre-glove, pre-elmo, pre-bert, task-condlm, task-seq2seq 1 Constructing Interpretive Spatio-Temporal Features for Multi-Turn Responses Selection Junyu Lu, Chenbin Zhang, Zeying Xie, Guang Ling, Tom Chao Zhou, Zenglin Xu https://www.aclweb.org/anthology/P19-1006.pdf
2019 ACL # reg-dropout, norm-layer, pool-mean, arch-rnn, arch-lstm, arch-gru, arch-att, arch-selfatt, arch-memo, arch-subword, arch-transformer, search-beam, adv-train, task-seq2seq 0 Reference Network for Neural Machine Translation Han Fu, Chenghao Liu, Jianling Sun https://www.aclweb.org/anthology/P19-1287.pdf
2019 ACL # optim-adam, arch-lstm, arch-bilstm, arch-cnn, arch-att, arch-selfatt, arch-transformer, pre-bert, struct-crf, task-seqlab, task-seq2seq 0 Scaling up Open Tagging from Tens to Thousands: Comprehension Empowered Attribute Value Extraction from Product Title Huimin Xu, Wenting Wang, Xin Mao, Xinyu Jiang, Man Lan https://www.aclweb.org/anthology/P19-1514.pdf
2019 ACL # optim-adam, arch-rnn, arch-lstm, arch-att, arch-selfatt, search-beam, pre-word2vec, pre-fasttext, pre-glove, pre-elmo, pre-bert, nondif-reinforce, task-extractive, task-lm, task-condlm, task-seq2seq, task-lexicon 2 Sentence Mover’s Similarity: Automatic Evaluation for Multi-Sentence Texts Elizabeth Clark, Asli Celikyilmaz, Noah A. Smith https://www.aclweb.org/anthology/P19-1264.pdf
2019 ACL # optim-adam, init-glorot, reg-dropout, norm-gradient, train-mtl, train-mll, pool-max, pool-mean, arch-rnn, arch-birnn, arch-lstm, arch-bilstm, arch-gru, arch-att, arch-selfatt, search-beam, search-viterbi, pre-glove, pre-bert, struct-crf, task-seqlab, task-lm 3 GCDT: A Global Context Enhanced Deep Transition Architecture for Sequence Labeling Yijin Liu, Fandong Meng, Jinchao Zhang, Jinan Xu, Yufeng Chen, Jie Zhou https://www.aclweb.org/anthology/P19-1233.pdf
2019 ACL # optim-adam, reg-dropout, train-transfer, train-augment, arch-lstm, arch-bilstm, arch-cnn, arch-att, arch-selfatt, pre-glove, task-spanlab, task-seq2seq 2 Explicit Utilization of General Knowledge in Machine Reading Comprehension Chao Wang, Hui Jiang https://www.aclweb.org/anthology/P19-1219.pdf
2019 ACL # optim-adam, reg-dropout, arch-rnn, arch-gru, arch-att, arch-selfatt, arch-transformer, latent-topic, task-seq2seq 2 Cross-Modal Commentator: Automatic Machine Commenting Based on Cross-Modal Information Pengcheng Yang, Zhihan Zhang, Fuli Luo, Lei Li, Chengyang Huang, Xu Sun https://www.aclweb.org/anthology/P19-1257.pdf
2019 ACL # reg-dropout, train-transfer, arch-att, arch-selfatt, arch-transformer, pre-bert, task-lm 0 BERT-based Lexical Substitution Wangchunshu Zhou, Tao Ge, Ke Xu, Furu Wei, Ming Zhou https://www.aclweb.org/anthology/P19-1328.pdf
2019 ACL # optim-adam, init-glorot, reg-decay, arch-gnn, arch-att, arch-selfatt, arch-memo, arch-coverage, pre-bert, task-spanlab, task-lm 7 Cognitive Graph for Multi-Hop Reading Comprehension at Scale Ming Ding, Chang Zhou, Qibin Chen, Hongxia Yang, Jie Tang https://www.aclweb.org/anthology/P19-1259.pdf
2019 ACL # optim-adam, init-glorot, norm-batch, arch-rnn, arch-cnn, arch-att, arch-selfatt, arch-coverage, arch-transformer, pre-glove, task-textclass, task-textpair, task-lm, task-seq2seq 4 Lightweight and Efficient Neural Natural Language Processing with Quaternion Networks Yi Tay, Aston Zhang, Anh Tuan Luu, Jinfeng Rao, Shuai Zhang, Shuohang Wang, Jie Fu, Siu Cheung Hui https://www.aclweb.org/anthology/P19-1145.pdf
2019 ACL # optim-adam, reg-dropout, train-augment, arch-rnn, arch-lstm, arch-att, arch-selfatt, arch-subword, comb-ensemble, search-beam, task-condlm, task-seq2seq 1 Negative Lexically Constrained Decoding for Paraphrase Generation Tomoyuki Kajiwara https://www.aclweb.org/anthology/P19-1607.pdf
2019 ACL # optim-adam, reg-dropout, train-mtl, arch-lstm, arch-att, arch-selfatt, arch-transformer, pre-paravec, pre-bert, adv-examp, task-spanlab 2 Retrieve, Read, Rerank: Towards End-to-End Multi-Document Reading Comprehension Minghao Hu, Yuxing Peng, Zhen Huang, Dongsheng Li https://www.aclweb.org/anthology/P19-1221.pdf
2019 ACL # reg-labelsmooth, arch-rnn, arch-lstm, arch-cnn, arch-att, arch-selfatt, arch-subword, search-beam, pre-word2vec, task-seqlab, task-seq2seq, task-relation 0 Poetry to Prose Conversion in Sanskrit as a Linearisation Task: A Case for Low-Resource Languages Amrith Krishna, Vishnu Sharma, Bishal Santra, Aishik Chakraborty, Pavankumar Satuluri, Pawan Goyal https://www.aclweb.org/anthology/P19-1111.pdf
2019 ACL # optim-adam, train-augment, arch-lstm, arch-att, arch-selfatt, arch-coverage, arch-subword, pre-elmo, pre-bert, adv-examp, task-spanlab, task-seq2seq, task-alignment 1 Improving the Robustness of Question Answering Systems to Question Paraphrasing Wee Chung Gan, Hwee Tou Ng https://www.aclweb.org/anthology/P19-1610.pdf
2019 ACL # optim-adam, init-glorot, reg-dropout, reg-worddropout, reg-stopping, train-mtl, arch-rnn, arch-gru, arch-bigru, arch-att, arch-selfatt, arch-transformer, pre-glove, pre-bert, task-textclass, task-lm, task-condlm 1 Neural Legal Judgment Prediction in English Ilias Chalkidis, Ion Androutsopoulos, Nikolaos Aletras https://www.aclweb.org/anthology/P19-1424.pdf
2019 ACL # optim-sgd, optim-adam, reg-dropout, train-mll, train-transfer, pool-max, arch-rnn, arch-lstm, arch-att, arch-selfatt, arch-subword, arch-transformer, loss-margin, task-lm, task-seq2seq 0 Self-Supervised Neural Machine Translation Dana Ruiter, Cristina España-Bonet, Josef van Genabith https://www.aclweb.org/anthology/P19-1178.pdf
2019 ACL # optim-adam, train-curriculum, arch-rnn, arch-lstm, arch-cnn, arch-att, arch-selfatt, arch-memo, task-lm, task-relation 0 Learning a Matching Model with Co-teaching for Multi-turn Response Selection in Retrieval-based Dialogue Systems Jiazhan Feng, Chongyang Tao, Wei Wu, Yansong Feng, Dongyan Zhao, Rui Yan https://www.aclweb.org/anthology/P19-1370.pdf
2019 ACL # optim-adam, init-glorot, reg-dropout, arch-rnn, arch-lstm, arch-cnn, arch-att, arch-selfatt, arch-memo, arch-copy, arch-coverage, arch-subword, arch-transformer, pre-word2vec, pre-elmo, pre-bert, struct-hmm, latent-vae, task-extractive, task-lm, task-seq2seq, task-cloze 0 HIBERT: Document Level Pre-training of Hierarchical Bidirectional Transformers for Document Summarization Xingxing Zhang, Furu Wei, Ming Zhou https://www.aclweb.org/anthology/P19-1499.pdf
2019 ACL # optim-adam, pool-max, arch-rnn, arch-lstm, arch-att, arch-selfatt, arch-transformer, pre-word2vec, pre-elmo, pre-bert, task-lm, task-seq2seq 1 Attention Is (not) All You Need for Commonsense Reasoning Tassilo Klein, Moin Nabi https://www.aclweb.org/anthology/P19-1477.pdf
2019 ACL # optim-amsgrad, init-glorot, reg-dropout, reg-labelsmooth, norm-gradient, arch-rnn, arch-gru, arch-bigru, arch-att, arch-selfatt, arch-memo, arch-bilinear, search-beam, task-lm, task-condlm, task-seq2seq 0 Ordinal and Attribute Aware Response Generation in a Multimodal Dialogue System Hardik Chauhan, Mauajama Firdaus, Asif Ekbal, Pushpak Bhattacharyya https://www.aclweb.org/anthology/P19-1540.pdf
2019 ACL # optim-adam, reg-dropout, arch-rnn, arch-att, arch-selfatt, arch-subword, arch-transformer, search-greedy, search-beam, nondif-reinforce, latent-vae, task-seq2seq 3 Retrieving Sequential Information for Non-Autoregressive Neural Machine Translation Chenze Shao, Yang Feng, Jinchao Zhang, Fandong Meng, Xilin Chen, Jie Zhou https://www.aclweb.org/anthology/P19-1288.pdf
2019 ACL # optim-adam, reg-dropout, pool-max, pool-mean, arch-rnn, arch-lstm, arch-gnn, arch-cnn, arch-att, arch-selfatt, arch-copy, latent-topic, task-textclass, task-seqlab, task-seq2seq, task-graph 2 Coherent Comments Generation for Chinese Articles with a Graph-to-Sequence Model Wei Li, Jingjing Xu, Yancheng He, ShengLi Yan, Yunfang Wu, Xu Sun https://www.aclweb.org/anthology/P19-1479.pdf
2019 ACL # optim-adam, reg-stopping, norm-layer, train-mtl, train-transfer, arch-lstm, arch-bilstm, arch-att, arch-selfatt, arch-memo, arch-copy, arch-transformer, search-beam, latent-vae, task-lm, task-seq2seq, task-tree 4 Decomposable Neural Paraphrase Generation Zichao Li, Xin Jiang, Lifeng Shang, Qun Liu https://www.aclweb.org/anthology/P19-1332.pdf
2019 ACL # optim-adam, reg-dropout, arch-rnn, arch-gru, arch-att, arch-selfatt, pre-fasttext, latent-topic, task-textclass, task-lm, task-seq2seq 0 Fine-Grained Spoiler Detection from Large-Scale Review Corpora Mengting Wan, Rishabh Misra, Ndapa Nakashole, Julian McAuley https://www.aclweb.org/anthology/P19-1248.pdf
2019 ACL # optim-adam, arch-lstm, arch-cnn, arch-att, arch-selfatt, arch-copy, arch-bilinear, pre-bert, nondif-minrisk, nondif-reinforce, loss-nce, task-spanlab, task-lm, task-relation 7 Entity-Relation Extraction as Multi-Turn Question Answering Xiaoya Li, Fan Yin, Zijun Sun, Xiayu Li, Arianna Yuan, Duo Chai, Mingxin Zhou, Jiwei Li https://www.aclweb.org/anthology/P19-1129.pdf
2019 ACL # optim-adam, optim-projection, arch-att, arch-selfatt, arch-transformer, search-beam, nondif-reinforce, task-seq2seq 0 Look Harder: A Neural Machine Translation Model with Hard Attention Sathish Reddy Indurthi, Insoo Chung, Sangha Kim https://www.aclweb.org/anthology/P19-1290.pdf
2019 ACL # train-mll, train-transfer, pool-max, pool-mean, arch-rnn, arch-cnn, arch-att, arch-selfatt, arch-subword, task-textclass, task-seq2seq 4 Exploiting Sentential Context for Neural Machine Translation Xing Wang, Zhaopeng Tu, Longyue Wang, Shuming Shi https://www.aclweb.org/anthology/P19-1624.pdf
2019 ACL # optim-sgd, optim-projection, reg-dropout, reg-stopping, reg-labelsmooth, train-mll, arch-rnn, arch-lstm, arch-att, arch-selfatt, pre-bert, task-textclass, task-spanlab, task-lm, task-tree 5 Training Neural Response Selection for Task-Oriented Dialogue Systems Matthew Henderson, Ivan Vulić, Daniela Gerz, Iñigo Casanueva, Paweł Budzianowski, Sam Coope, Georgios Spithourakis, Tsung-Hsien Wen, Nikola Mrkšić, Pei-Hao Su https://www.aclweb.org/anthology/P19-1536.pdf
2019 ACL # optim-adam, train-mtl, train-transfer, train-augment, arch-rnn, arch-lstm, arch-att, arch-selfatt, arch-coverage, arch-subword, arch-transformer, pre-fasttext, adv-train, loss-svd, task-lm, task-seq2seq, task-lexicon, task-alignment 1 Domain Adaptation of Neural Machine Translation by Lexicon Induction Junjie Hu, Mengzhou Xia, Graham Neubig, Jaime Carbonell https://www.aclweb.org/anthology/P19-1286.pdf
2019 ACL # optim-adam, reg-dropout, arch-lstm, arch-bilstm, arch-gru, arch-att, arch-selfatt, arch-memo, pre-glove, pre-bert, task-spanlab, task-seq2seq 4 Conversing by Reading: Contentful Neural Conversation with On-demand Machine Reading Lianhui Qin, Michel Galley, Chris Brockett, Xiaodong Liu, Xiang Gao, Bill Dolan, Yejin Choi, Jianfeng Gao https://www.aclweb.org/anthology/P19-1539.pdf
2019 ACL # optim-adam, reg-dropout, reg-worddropout, reg-stopping, arch-att, arch-selfatt, arch-subword, arch-transformer, comb-ensemble, pre-fasttext, pre-bert, latent-vae, task-seqlab, task-spanlab, task-lm, task-seq2seq, task-tree 2 Unsupervised Question Answering by Cloze Translation Patrick Lewis, Ludovic Denoyer, Sebastian Riedel https://www.aclweb.org/anthology/P19-1484.pdf
2019 ACL # optim-adam, optim-projection, arch-rnn, arch-lstm, arch-att, arch-selfatt, arch-memo, task-textpair, task-seq2seq 1 ReCoSa: Detecting the Relevant Contexts with Self-Attention for Multi-turn Dialogue Generation Hainan Zhang, Yanyan Lan, Liang Pang, Jiafeng Guo, Xueqi Cheng https://www.aclweb.org/anthology/P19-1362.pdf
2019 ACL # reg-dropout, reg-patience, arch-rnn, arch-gru, arch-cnn, arch-att, arch-selfatt, arch-transformer, pre-glove, pre-bert, task-textclass, task-lm 1 Large-Scale Multi-Label Text Classification on EU Legislation Ilias Chalkidis, Emmanouil Fergadiotis, Prodromos Malakasiotis, Ion Androutsopoulos https://www.aclweb.org/anthology/P19-1636.pdf
2019 ACL # optim-adam, train-mtl, train-transfer, arch-rnn, arch-lstm, arch-att, arch-selfatt, arch-residual, arch-gating, arch-transformer, comb-ensemble, search-beam, pre-glove, pre-elmo, pre-bert, task-spanlab, task-lm, task-seq2seq 5 Multi-style Generative Reading Comprehension Kyosuke Nishida, Itsumi Saito, Kosuke Nishida, Kazutoshi Shinoda, Atsushi Otsuka, Hisako Asano, Junji Tomita https://www.aclweb.org/anthology/P19-1220.pdf
2019 ACL # optim-adam, optim-adadelta, reg-dropout, reg-labelsmooth, norm-layer, train-parallel, arch-rnn, arch-lstm, arch-gru, arch-att, arch-selfatt, arch-residual, arch-subword, pre-glove, pre-bert, struct-crf, task-seqlab, task-spanlab, task-lm, task-seq2seq 1 A Lightweight Recurrent Network for Sequence Modeling Biao Zhang, Rico Sennrich https://www.aclweb.org/anthology/P19-1149.pdf
2019 ACL # optim-adam, optim-projection, reg-dropout, train-transfer, arch-rnn, arch-lstm, arch-gru, arch-treelstm, arch-att, arch-selfatt, arch-subword, arch-transformer, search-beam, task-seqlab, task-seq2seq, task-relation 5 Lattice-Based Transformer Encoder for Neural Machine Translation Fengshun Xiao, Jiangtong Li, Hai Zhao, Rui Wang, Kehai Chen https://www.aclweb.org/anthology/P19-1298.pdf
2019 ACL # optim-adam, optim-projection, pool-max, arch-rnn, arch-lstm, arch-bilstm, arch-att, arch-selfatt, arch-gating, arch-memo, pre-glove, adv-examp, adv-train, task-spanlab 2 Avoiding Reasoning Shortcuts: Adversarial Evaluation, Training, and Model Development for Multi-Hop QA Yichen Jiang, Mohit Bansal https://www.aclweb.org/anthology/P19-1262.pdf
2019 ACL # optim-adam, optim-projection, reg-stopping, train-mtl, arch-rnn, arch-cnn, arch-att, arch-selfatt, arch-residual, arch-transformer, search-beam, task-condlm, task-seq2seq 4 Distilling Translations with Visual Awareness Julia Ive, Pranava Madhyastha, Lucia Specia https://www.aclweb.org/anthology/P19-1653.pdf
2019 ACL # optim-adam, reg-dropout, arch-rnn, arch-lstm, arch-gru, arch-cnn, arch-att, arch-selfatt, arch-gating, arch-memo, arch-bilinear, pre-glove, pre-elmo, pre-bert, task-spanlab 3 Explore, Propose, and Assemble: An Interpretable Model for Multi-Hop Reading Comprehension Yichen Jiang, Nitish Joshi, Yen-Chun Chen, Mohit Bansal https://www.aclweb.org/anthology/P19-1261.pdf
2019 ACL # optim-adam, reg-dropout, reg-decay, reg-labelsmooth, train-mll, train-transfer, arch-att, arch-selfatt, arch-transformer, comb-ensemble, search-beam, pre-elmo, pre-bert, task-textclass, task-textpair, task-lm, task-seq2seq, task-cloze 1 A Simple and Effective Approach to Automatic Post-Editing with Transfer Learning Gonçalo M. Correia, André F. T. Martins https://www.aclweb.org/anthology/P19-1292.pdf
2019 ACL # optim-adam, init-glorot, reg-dropout, reg-labelsmooth, norm-layer, arch-rnn, arch-lstm, arch-treelstm, arch-gnn, arch-cnn, arch-att, arch-selfatt, arch-residual, arch-energy, arch-transformer, search-beam, task-seq2seq 2 Self-Attentional Models for Lattice Inputs Matthias Sperber, Graham Neubig, Ngoc-Quan Pham, Alex Waibel https://www.aclweb.org/anthology/P19-1115.pdf
2019 ACL # optim-adagrad, reg-decay, train-transfer, arch-lstm, arch-bilstm, arch-att, arch-selfatt, pre-glove, adv-train, task-textpair, task-seqlab 5 OpenDialKG: Explainable Conversational Reasoning with Attention-based Walks over Knowledge Graphs Seungwhan Moon, Pararth Shah, Anuj Kumar, Rajen Subba https://www.aclweb.org/anthology/P19-1081.pdf
2019 ACL # optim-adam, reg-dropout, train-mll, train-transfer, arch-rnn, arch-att, arch-selfatt, arch-subword, arch-transformer, search-beam, pre-fasttext, adv-train, task-lm, task-seq2seq, task-lexicon 5 Effective Cross-lingual Transfer of Neural Machine Translation Models without Shared Vocabularies Yunsu Kim, Yingbo Gao, Hermann Ney https://www.aclweb.org/anthology/P19-1120.pdf
2019 ACL # reg-dropout, reg-patience, pool-max, arch-rnn, arch-lstm, arch-bilstm, arch-cnn, arch-att, arch-selfatt, arch-bilinear, pre-glove, task-textpair 0 Recognising Agreement and Disagreement between Stances with Reason Comparing Networks Chang Xu, Cecile Paris, Surya Nepal, Ross Sparks https://www.aclweb.org/anthology/P19-1460.pdf
2019 ACL # optim-adam, optim-projection, norm-layer, train-mll, arch-rnn, arch-lstm, arch-att, arch-selfatt, arch-coverage, arch-transformer, pre-bert, task-seq2seq 9 Semantically Conditioned Dialog Response Generation via Hierarchical Disentangled Self-Attention Wenhu Chen, Jianshu Chen, Pengda Qin, Xifeng Yan, William Yang Wang https://www.aclweb.org/anthology/P19-1360.pdf
2019 ACL # optim-sgd, reg-dropout, reg-worddropout, arch-rnn, arch-att, arch-selfatt, arch-transformer, pre-fasttext, pre-bert, latent-vae, task-textclass, task-lm, task-seq2seq 6 Style Transformer: Unpaired Text Style Transfer without Disentangled Latent Representation Ning Dai, Jianze Liang, Xipeng Qiu, Xuanjing Huang https://www.aclweb.org/anthology/P19-1601.pdf
2019 ACL # optim-sgd, reg-dropout, arch-lstm, arch-att, arch-selfatt, arch-copy, arch-coverage, comb-ensemble, pre-glove, task-seq2seq 1 A Simple Recipe towards Reducing Hallucination in Neural Surface Realisation Feng Nie, Jin-Ge Yao, Jinpeng Wang, Rong Pan, Chin-Yew Lin https://www.aclweb.org/anthology/P19-1256.pdf
2019 ACL # arch-att, arch-selfatt, pre-elmo, pre-bert 0 PTB Graph Parsing with Tree Approximation Yoshihide Kato, Shigeki Matsubara https://www.aclweb.org/anthology/P19-1530.pdf
2019 ACL # optim-sgd, optim-adam, norm-layer, arch-att, arch-selfatt, arch-coverage, arch-transformer, pre-bert 8 Matching the Blanks: Distributional Similarity for Relation Learning Livio Baldini Soares, Nicholas FitzGerald, Jeffrey Ling, Tom Kwiatkowski https://www.aclweb.org/anthology/P19-1279.pdf
2019 ACL # optim-sgd, optim-adam, optim-projection, train-mtl, train-mll, arch-att, arch-selfatt, arch-transformer, task-lm, task-seq2seq 2 A Multi-Task Architecture on Relevance-based Neural Query Translation Sheikh Muhammad Sarwar, Hamed Bonab, James Allan https://www.aclweb.org/anthology/P19-1639.pdf
2019 ACL # optim-adagrad, reg-dropout, reg-norm, norm-gradient, train-mtl, arch-lstm, arch-bilstm, arch-treelstm, arch-gnn, arch-cnn, arch-att, arch-selfatt, pre-glove, task-seq2seq, task-graph 0 Tree Communication Models for Sentiment Analysis Yuan Zhang, Yue Zhang https://www.aclweb.org/anthology/P19-1342.pdf
2019 ACL # train-transfer, arch-cnn, arch-att, arch-selfatt, arch-coverage, arch-subword, arch-transformer, task-seq2seq, task-relation, task-alignment 1 Sentence-Level Agreement for Neural Machine Translation Mingming Yang, Rui Wang, Kehai Chen, Masao Utiyama, Eiichiro Sumita, Min Zhang, Tiejun Zhao https://www.aclweb.org/anthology/P19-1296.pdf
2019 ACL # optim-adam, reg-dropout, pool-max, pool-kmax, arch-rnn, arch-birnn, arch-lstm, arch-bilstm, arch-cnn, arch-att, arch-selfatt, arch-residual, arch-copy, arch-bilinear, task-seq2seq 1 BiSET: Bi-directional Selective Encoding with Template for Abstractive Summarization Kai Wang, Xiaojun Quan, Rui Wang https://www.aclweb.org/anthology/P19-1207.pdf
2019 ACL # optim-adam, init-glorot, norm-layer, train-mtl, arch-rnn, arch-lstm, arch-cnn, arch-att, arch-selfatt, arch-transformer, pre-glove, pre-elmo, pre-bert, latent-topic, task-textclass, task-lm, task-seq2seq 0 Text Categorization by Learning Predominant Sense of Words as Auxiliary Task Kazuya Shimura, Jiyi Li, Fumiyo Fukumoto https://www.aclweb.org/anthology/P19-1105.pdf
2019 ACL # optim-adam, reg-dropout, reg-patience, arch-rnn, arch-lstm, arch-gru, arch-att, arch-selfatt, pre-word2vec, task-textclass, task-extractive, task-lm, task-seq2seq 11 Is Attention Interpretable? Sofia Serrano, Noah A. Smith https://www.aclweb.org/anthology/P19-1282.pdf
2019 ACL # reg-decay, arch-lstm, arch-att, arch-selfatt, comb-ensemble, pre-glove, pre-elmo, pre-bert, task-textclass, task-textpair, task-lm, task-seq2seq, task-tree 8 Explain Yourself! Leveraging Language Models for Commonsense Reasoning Nazneen Fatema Rajani, Bryan McCann, Caiming Xiong, Richard Socher https://www.aclweb.org/anthology/P19-1487.pdf
2019 ACL # optim-adam, arch-rnn, arch-gru, arch-att, arch-selfatt, arch-memo, arch-transformer, pre-glove, pre-bert, nondif-reinforce, task-spanlab 1 Episodic Memory Reader: Learning What to Remember for Question Answering from Streaming Data Moonsu Han, Minki Kang, Hyunwoo Jung, Sung Ju Hwang https://www.aclweb.org/anthology/P19-1434.pdf
2019 ACL # optim-adam, arch-rnn, arch-lstm, arch-gru, arch-att, arch-selfatt, arch-copy, pre-word2vec, task-seqlab, task-seq2seq 0 Vocabulary Pyramid Network: Multi-Pass Encoding and Decoding with Multi-Level Vocabularies for Response Generation Cao Liu, Shizhu He, Kang Liu, Jun Zhao https://www.aclweb.org/anthology/P19-1367.pdf
2019 ACL # optim-adam, arch-cnn, arch-att, arch-selfatt, arch-copy, arch-transformer, pre-bert, task-seq2seq 5 Scoring Sentence Singletons and Pairs for Abstractive Summarization Logan Lebanoff, Kaiqiang Song, Franck Dernoncourt, Doo Soon Kim, Seokhwan Kim, Walter Chang, Fei Liu https://www.aclweb.org/anthology/P19-1209.pdf
2019 ACL # init-glorot, arch-lstm, arch-att, arch-selfatt, arch-bilinear, pre-word2vec, pre-glove, pre-elmo, task-textclass, task-seqlab, task-spanlab, task-lm, task-seq2seq, task-relation 5 Incorporating Syntactic and Semantic Information in Word Embeddings using Graph Convolutional Networks Shikhar Vashishth, Manik Bhandari, Prateek Yadav, Piyush Rai, Chiranjib Bhattacharyya, Partha Talukdar https://www.aclweb.org/anthology/P19-1320.pdf
2019 ACL # optim-adam, optim-projection, reg-dropout, reg-norm, reg-labelsmooth, train-mll, arch-att, arch-selfatt, arch-subword, arch-transformer, pre-bert, task-seqlab, task-seq2seq, task-relation 3 Leveraging Local and Global Patterns for Self-Attention Networks Mingzhou Xu, Derek F. Wong, Baosong Yang, Yue Zhang, Lidia S. Chao https://www.aclweb.org/anthology/P19-1295.pdf
2019 ACL # optim-adam, arch-att, arch-selfatt, arch-transformer, pre-bert, task-textpair, task-seqlab, task-graph 4 Identification of Tasks, Datasets, Evaluation Metrics, and Numeric Scores for Scientific Leaderboards Construction Yufang Hou, Charles Jochim, Martin Gleize, Francesca Bonin, Debasis Ganguly https://www.aclweb.org/anthology/P19-1513.pdf
2019 ACL # optim-adam, norm-layer, train-transfer, pool-max, arch-rnn, arch-lstm, arch-gru, arch-att, arch-selfatt, arch-subword, arch-transformer, search-beam, task-lm, task-seq2seq 1 A Compact and Language-Sensitive Multilingual Translation Method Yining Wang, Long Zhou, Jiajun Zhang, Feifei Zhai, Jingfang Xu, Chengqing Zong https://www.aclweb.org/anthology/P19-1117.pdf
2019 ACL # optim-adam, optim-projection, reg-dropout, train-augment, arch-rnn, arch-lstm, arch-att, arch-selfatt, arch-coverage, search-beam, pre-glove, pre-bert, latent-vae, task-textclass, task-textpair, task-spanlab, task-lm, task-seq2seq 0 A Cross-Sentence Latent Variable Model for Semi-Supervised Text Sequence Matching Jihun Choi, Taeuk Kim, Sang-goo Lee https://www.aclweb.org/anthology/P19-1469.pdf
2019 ACL # optim-adam, optim-adagrad, arch-rnn, arch-lstm, arch-att, arch-selfatt, arch-memo, arch-subword, arch-transformer, search-beam, task-seq2seq, task-tree 1 Improving Multi-turn Dialogue Modelling with Utterance ReWriter Hui Su, Xiaoyu Shen, Rongzhi Zhang, Fei Sun, Pengwei Hu, Cheng Niu, Jie Zhou https://www.aclweb.org/anthology/P19-1003.pdf
2019 ACL # optim-sgd, optim-adam, optim-projection, reg-dropout, reg-stopping, norm-layer, arch-lstm, arch-bilstm, arch-att, arch-selfatt, arch-coverage, arch-subword, arch-transformer, pre-fasttext, task-lm 13 COMET: Commonsense Transformers for Automatic Knowledge Graph Construction Antoine Bosselut, Hannah Rashkin, Maarten Sap, Chaitanya Malaviya, Asli Celikyilmaz, Yejin Choi https://www.aclweb.org/anthology/P19-1470.pdf
2019 ACL # optim-sgd, optim-adam, reg-dropout, train-mtl, train-transfer, arch-lstm, arch-bilstm, arch-gru, arch-att, arch-selfatt, arch-transformer, pre-elmo, pre-bert, task-textclass, task-textpair, task-spanlab, task-lm, task-cloze 116 Multi-Task Deep Neural Networks for Natural Language Understanding Xiaodong Liu, Pengcheng He, Weizhu Chen, Jianfeng Gao https://www.aclweb.org/anthology/P19-1441.pdf
2019 ACL # optim-sgd, optim-adam, train-mll, train-transfer, pool-max, arch-rnn, arch-lstm, arch-att, arch-selfatt, arch-subword, pre-word2vec, pre-fasttext, pre-glove, pre-skipthought, pre-elmo, pre-bert, loss-svd, task-textclass, task-lm 0 Self-Attentive, Multi-Context One-Class Classification for Unsupervised Anomaly Detection on Text Lukas Ruff, Yury Zemlyanskiy, Robert Vandermeulen, Thomas Schnake, Marius Kloft https://www.aclweb.org/anthology/P19-1398.pdf
2019 ACL # optim-adam, optim-adadelta, reg-dropout, pool-max, arch-rnn, arch-gru, arch-bigru, arch-att, arch-selfatt, arch-memo, search-beam, pre-glove, pre-elmo, pre-bert, task-textpair, task-spanlab, task-seq2seq 4 Multi-Hop Paragraph Retrieval for Open-Domain Question Answering Yair Feldman, Ran El-Yaniv https://www.aclweb.org/anthology/P19-1222.pdf
2019 ACL # optim-sgd, optim-adam, optim-projection, reg-dropout, norm-gradient, train-mtl, train-mll, train-transfer, arch-rnn, arch-lstm, arch-bilstm, arch-cnn, arch-att, arch-selfatt, pre-word2vec, struct-crf, adv-examp, adv-train, task-textclass, task-seqlab, task-lm, task-seq2seq 5 Dual Adversarial Neural Transfer for Low-Resource Named Entity Recognition Joey Tianyi Zhou, Hao Zhang, Di Jin, Hongyuan Zhu, Meng Fang, Rick Siow Mong Goh, Kenneth Kwok https://www.aclweb.org/anthology/P19-1336.pdf
2019 ACL # optim-adam, init-glorot, train-mll, train-augment, arch-att, arch-selfatt, arch-residual, arch-transformer, struct-hmm, adv-train, latent-vae, task-textclass, task-lm, task-seq2seq 4 Unsupervised Paraphrasing without Translation Aurko Roy, David Grangier https://www.aclweb.org/anthology/P19-1605.pdf
2019 ACL # optim-adam, reg-dropout, norm-layer, arch-rnn, arch-lstm, arch-bilstm, arch-gru, arch-att, arch-selfatt, arch-memo, pre-word2vec, pre-elmo, pre-bert, task-textclass 4 One Time of Interaction May Not Be Enough: Go Deep with an Interaction-over-Interaction Network for Response Selection in Dialogues Chongyang Tao, Wei Wu, Can Xu, Wenpeng Hu, Dongyan Zhao, Rui Yan https://www.aclweb.org/anthology/P19-1001.pdf
2019 ACL # optim-adam, train-mtl, train-mll, pool-max, arch-rnn, arch-lstm, arch-bilstm, arch-cnn, arch-att, arch-selfatt, arch-coverage, arch-subword, arch-transformer, pre-elmo, pre-bert, task-textclass, task-textpair, task-seqlab, task-lm, task-seq2seq, task-cloze 27 ERNIE: Enhanced Language Representation with Informative Entities Zhengyan Zhang, Xu Han, Zhiyuan Liu, Xin Jiang, Maosong Sun, Qun Liu https://www.aclweb.org/anthology/P19-1139.pdf
2019 ACL # train-mll, train-transfer, train-augment, arch-att, arch-selfatt, arch-coverage, arch-subword, arch-transformer, pre-fasttext, loss-svd, task-seqlab, task-lm, task-seq2seq, task-lexicon 1 Generalized Data Augmentation for Low-Resource Translation Mengzhou Xia, Xiang Kong, Antonios Anastasopoulos, Graham Neubig https://www.aclweb.org/anthology/P19-1579.pdf
2019 ACL # arch-lstm, arch-gru, arch-treelstm, arch-att, arch-selfatt, arch-subword, arch-transformer, comb-ensemble, search-beam, pre-fasttext, pre-bert, task-seq2seq 2 Generating Diverse Translations with Sentence Codes Raphael Shu, Hideki Nakayama, Kyunghyun Cho https://www.aclweb.org/anthology/P19-1177.pdf
2019 ACL # arch-lstm, arch-gru, arch-att, arch-selfatt, arch-memo, arch-copy, arch-bilinear, arch-coverage, search-beam, task-lm, task-condlm, task-seq2seq 2 PaperRobot: Incremental Draft Generation of Scientific Ideas Qingyun Wang, Lifu Huang, Zhiying Jiang, Kevin Knight, Heng Ji, Mohit Bansal, Yi Luan https://www.aclweb.org/anthology/P19-1191.pdf
2019 ACL # optim-sgd, optim-adam, optim-projection, reg-dropout, norm-layer, pool-max, arch-rnn, arch-lstm, arch-bilstm, arch-gru, arch-att, arch-selfatt, arch-gating, arch-transformer, comb-ensemble, pre-fasttext, task-spanlab, task-seq2seq 1 Token-level Dynamic Self-Attention Network for Multi-Passage Reading Comprehension Yimeng Zhuang, Huadong Wang https://www.aclweb.org/anthology/P19-1218.pdf
2019 ACL # arch-rnn, arch-lstm, arch-att, arch-selfatt, arch-subword, arch-transformer, search-beam, adv-examp, adv-train, task-textclass, task-seq2seq 1 Effective Adversarial Regularization for Neural Machine Translation Motoki Sato, Jun Suzuki, Shun Kiyono https://www.aclweb.org/anthology/P19-1020.pdf
2019 ACL # arch-att, arch-selfatt, arch-subword, arch-transformer, pre-fasttext, pre-glove, pre-bert, latent-topic, task-lm 0 Context-specific Language Modeling for Human Trafficking Detection from Online Advertisements Saeideh Shahrokh Esfahani, Michael J. Cafarella, Maziyar Baran Pouyan, Gregory DeAngelo, Elena Eneva, Andy E. Fano https://www.aclweb.org/anthology/P19-1114.pdf
2019 ACL # optim-sgd, reg-dropout, arch-rnn, arch-lstm, arch-gcnn, arch-cnn, arch-att, arch-selfatt, arch-gating, arch-memo, arch-transformer, task-lm, task-seq2seq, task-alignment, meta-arch 1 Improving Neural Language Models by Segmenting, Attending, and Predicting the Future Hongyin Luo, Lan Jiang, Yonatan Belinkov, James Glass https://www.aclweb.org/anthology/P19-1144.pdf
2019 ACL # optim-adam, optim-projection, reg-labelsmooth, train-transfer, activ-relu, pool-max, arch-lstm, arch-bilstm, arch-att, arch-selfatt, arch-coverage, pre-glove, pre-bert, latent-topic, task-textpair, task-lm, task-seq2seq 2 Zero-shot Word Sense Disambiguation using Sense Definition Embeddings Sawan Kumar, Sharmistha Jat, Karan Saxena, Partha Talukdar https://www.aclweb.org/anthology/P19-1568.pdf
2019 ACL # optim-adam, reg-norm, train-mtl, train-mll, arch-lstm, arch-att, arch-selfatt, arch-coverage, arch-transformer, pre-glove, pre-skipthought, pre-elmo, pre-bert, loss-svd, task-textclass, task-textpair, task-lm 0 EigenSent: Spectral sentence embeddings using higher-order Dynamic Mode Decomposition Subhradeep Kayal, George Tsatsaronis https://www.aclweb.org/anthology/P19-1445.pdf
2019 ACL # train-transfer, arch-att, arch-selfatt, arch-copy, arch-subword, arch-transformer, task-seqlab, task-lm, task-seq2seq 5 Strategies for Structuring Story Generation Angela Fan, Mike Lewis, Yann Dauphin https://www.aclweb.org/anthology/P19-1254.pdf
2019 ACL # optim-adam, optim-projection, init-glorot, pool-max, arch-rnn, arch-lstm, arch-treelstm, arch-cnn, arch-att, arch-selfatt, arch-memo, pre-glove, task-condlm, task-seq2seq 6 Attention Guided Graph Convolutional Networks for Relation Extraction Zhijiang Guo, Yan Zhang, Wei Lu https://www.aclweb.org/anthology/P19-1024.pdf
2019 ACL # arch-lstm, arch-bilstm, arch-att, arch-selfatt, struct-crf, nondif-reinforce, task-seqlab, task-condlm 0 A Prism Module for Semantic Disentanglement in Name Entity Recognition Kun Liu, Shen Li, Daqi Zheng, Zhengdong Lu, Sheng Gao, Si Li https://www.aclweb.org/anthology/P19-1532.pdf
2019 ACL # optim-adam, reg-stopping, norm-gradient, train-mtl, arch-rnn, arch-gru, arch-att, arch-selfatt, arch-gating, task-seq2seq, task-relation, task-tree 3 Symbolic Inductive Bias for Visually Grounded Learning of Spoken Language Grzegorz Chrupała https://www.aclweb.org/anthology/P19-1647.pdf
2019 ACL # optim-adam, arch-lstm, arch-att, arch-selfatt, arch-bilinear, arch-transformer, pre-elmo, pre-bert, task-seqlab, task-lm, task-seq2seq, task-relation, meta-arch 76 Energy and Policy Considerations for Deep Learning in NLP Emma Strubell, Ananya Ganesh, Andrew McCallum https://www.aclweb.org/anthology/P19-1355.pdf
2019 ACL # optim-adam, pool-max, arch-rnn, arch-lstm, arch-gru, arch-cnn, arch-att, arch-selfatt, arch-memo, adv-train, task-spanlab, task-lm, task-seq2seq 0 Asking the Crowd: Question Analysis, Evaluation and Generation for Open Discussion on Online Forums Zi Chai, Xinyu Xing, Xiaojun Wan, Bo Huang https://www.aclweb.org/anthology/P19-1497.pdf
2019 ACL # optim-adam, reg-dropout, train-transfer, arch-rnn, arch-lstm, arch-gru, arch-bigru, arch-att, arch-selfatt, arch-bilinear, pre-glove, pre-elmo, pre-bert, struct-crf, task-seq2seq, task-relation 3 A Unified Linear-Time Framework for Sentence-Level Discourse Parsing Xiang Lin, Shafiq Joty, Prathyusha Jwalapuram, M Saiful Bari https://www.aclweb.org/anthology/P19-1410.pdf
2019 ACL # arch-rnn, arch-lstm, arch-gru, arch-gnn, arch-att, arch-selfatt, task-seq2seq, task-tree 5 Representing Schema Structure with Graph Neural Networks for Text-to-SQL Parsing Ben Bogin, Jonathan Berant, Matt Gardner https://www.aclweb.org/anthology/P19-1448.pdf
2019 ACL # optim-adagrad, init-glorot, norm-gradient, pool-max, arch-lstm, arch-att, arch-selfatt, arch-bilinear, pre-glove, task-textclass, task-seq2seq 1 Evaluating Discourse in Structured Text Representations Elisa Ferracane, Greg Durrett, Junyi Jessy Li, Katrin Erk https://www.aclweb.org/anthology/P19-1062.pdf
2019 ACL # optim-adam, arch-rnn, arch-lstm, arch-att, arch-selfatt, arch-copy, task-spanlab, task-lm, task-seq2seq 1 Interconnected Question Generation with Coreference Alignment and Conversation Flow Modeling Yifan Gao, Piji Li, Irwin King, Michael R. Lyu https://www.aclweb.org/anthology/P19-1480.pdf
2019 ACL # optim-adam, reg-dropout, reg-stopping, reg-patience, train-mtl, arch-rnn, arch-lstm, arch-att, arch-selfatt, arch-transformer, latent-vae, task-textpair, task-lm, task-seq2seq 7 Do Neural Dialog Systems Use the Conversation History Effectively? An Empirical Study Chinnadhurai Sankar, Sandeep Subramanian, Chris Pal, Sarath Chandar, Yoshua Bengio https://www.aclweb.org/anthology/P19-1004.pdf
2019 ACL # optim-adam, optim-adagrad, reg-dropout, reg-labelsmooth, norm-layer, pool-max, arch-rnn, arch-lstm, arch-att, arch-selfatt, arch-subword, arch-transformer, search-beam, pre-paravec, task-seq2seq 9 Hierarchical Transformers for Multi-Document Summarization Yang Liu, Mirella Lapata https://www.aclweb.org/anthology/P19-1500.pdf
2019 ACL # optim-adam, reg-stopping, train-mll, arch-att, arch-selfatt, arch-subword, arch-transformer, search-beam, task-seq2seq 6 When a Good Translation is Wrong in Context: Context-Aware Machine Translation Improves on Deixis, Ellipsis, and Lexical Cohesion Elena Voita, Rico Sennrich, Ivan Titov https://www.aclweb.org/anthology/P19-1116.pdf
2019 ACL # optim-adam, init-glorot, train-mtl, arch-lstm, arch-att, arch-selfatt, comb-ensemble, pre-elmo, struct-crf, task-seq2seq, task-relation, task-tree 1 How to Best Use Syntax in Semantic Role Labelling Yufei Wang, Mark Johnson, Stephen Wan, Yifang Sun, Wei Wang https://www.aclweb.org/anthology/P19-1529.pdf
2019 ACL # train-augment, train-parallel, arch-rnn, arch-att, arch-selfatt, arch-copy, arch-transformer, pre-glove, pre-bert, task-seqlab, task-spanlab, task-lm, task-seq2seq, task-relation 0 Self-Attention Architectures for Answer-Agnostic Neural Question Generation Thomas Scialom, Benjamin Piwowarski, Jacopo Staiano https://www.aclweb.org/anthology/P19-1604.pdf
2019 ACL # optim-adagrad, reg-dropout, arch-att, arch-selfatt, arch-transformer, task-lm, task-seq2seq 11 Adaptive Attention Span in Transformers Sainbayar Sukhbaatar, Edouard Grave, Piotr Bojanowski, Armand Joulin https://www.aclweb.org/anthology/P19-1032.pdf
2019 EMNLP # arch-att, arch-selfatt, latent-topic, task-lm 0 CodeSwitch-Reddit: Exploration of Written Multilingual Discourse in Online Discussion Forums Ella Rabinovich, Masih Sultani, Suzanne Stevenson https://www.aclweb.org/anthology/D19-1484.pdf
2019 EMNLP # optim-adam, arch-lstm, arch-bilstm, arch-att, arch-selfatt, pre-elmo, pre-bert, adv-examp, task-textclass, task-spanlab, task-lm, task-seq2seq, task-cloze 0 AllenNLP Interpret: A Framework for Explaining Predictions of NLP Models Eric Wallace, Jens Tuyls, Junlin Wang, Sanjay Subramanian, Matt Gardner, Sameer Singh https://www.aclweb.org/anthology/D19-3002.pdf
2019 EMNLP # optim-adam, train-mll, train-transfer, arch-att, arch-selfatt, comb-ensemble, pre-bert, task-spanlab 6 A Span-Extraction Dataset for Chinese Machine Reading Comprehension Yiming Cui, Ting Liu, Wanxiang Che, Li Xiao, Zhipeng Chen, Wentao Ma, Shijin Wang, Guoping Hu https://www.aclweb.org/anthology/D19-1600.pdf
2019 EMNLP # optim-sgd, optim-projection, reg-dropout, reg-stopping, norm-layer, train-mll, train-transfer, train-parallel, arch-att, arch-selfatt, arch-coverage, arch-subword, arch-transformer, pre-bert, task-lm, task-seq2seq 1 Simple, Scalable Adaptation for Neural Machine Translation Ankur Bapna, Orhan Firat https://www.aclweb.org/anthology/D19-1165.pdf
2019 EMNLP # init-glorot, arch-rnn, arch-lstm, arch-gru, arch-bigru, arch-cnn, arch-att, arch-selfatt, arch-transformer, comb-ensemble, pre-bert, latent-vae, task-lm, task-seq2seq 0 Parallel Iterative Edit Models for Local Sequence Transduction Abhijeet Awasthi, Sunita Sarawagi, Rasna Goyal, Sabyasachi Ghosh, Vihari Piratla https://www.aclweb.org/anthology/D19-1435.pdf
2019 EMNLP # optim-adam, init-glorot, train-mtl, train-mll, arch-att, arch-selfatt, arch-copy, arch-subword, arch-transformer, search-beam, task-seq2seq 1 NCLS: Neural Cross-Lingual Summarization Junnan Zhu, Qian Wang, Yining Wang, Yu Zhou, Jiajun Zhang, Shaonan Wang, Chengqing Zong https://www.aclweb.org/anthology/D19-1302.pdf
2019 EMNLP # optim-adam, arch-att, arch-selfatt, comb-ensemble, pre-bert, adv-examp, task-spanlab 0 QAInfomax: Learning Robust Question Answering System by Mutual Information Maximization Yi-Ting Yeh, Yun-Nung Chen https://www.aclweb.org/anthology/D19-1333.pdf
2019 EMNLP # optim-adam, reg-dropout, arch-lstm, arch-bilstm, arch-cnn, arch-att, arch-selfatt, arch-bilinear, arch-coverage, comb-ensemble, pre-fasttext, pre-glove, pre-elmo, pre-bert, task-lm, task-relation, task-tree 0 Syntax-aware Multilingual Semantic Role Labeling Shexia He, Zuchao Li, Hai Zhao https://www.aclweb.org/anthology/D19-1538.pdf
2019 EMNLP # reg-dropout, train-augment, pool-max, arch-rnn, arch-lstm, arch-cnn, arch-att, arch-selfatt, arch-memo, arch-coverage, arch-transformer, pre-glove, task-textclass 0 Improving Relation Extraction with Knowledge-attention Pengfei Li, Kezhi Mao, Xuefeng Yang, Qi Li https://www.aclweb.org/anthology/D19-1022.pdf
2019 EMNLP # optim-adam, optim-projection, train-transfer, arch-rnn, arch-lstm, arch-gru, arch-att, arch-selfatt, pre-glove, pre-elmo, pre-bert, latent-vae, task-spanlab 0 Answering questions by learning to rank - Learning to rank by answering questions George Sebastian Pirtoaca, Traian Rebedea, Stefan Ruseti https://www.aclweb.org/anthology/D19-1256.pdf
2019 EMNLP # pool-max, arch-rnn, arch-lstm, arch-bilstm, arch-gru, arch-att, arch-selfatt, arch-subword, arch-transformer, pre-skipthought, pre-bert, struct-crf, task-textclass, task-textpair, task-seqlab, task-lm, task-seq2seq, task-cloze 0 UER: An Open-Source Toolkit for Pre-training Models Zhe Zhao, Hui Chen, Jinbin Zhang, Xin Zhao, Tao Liu, Wei Lu, Xi Chen, Haotang Deng, Qi Ju, Xiaoyong Du https://www.aclweb.org/anthology/D19-3041.pdf
2019 EMNLP # optim-adam, train-augment, arch-cnn, arch-att, arch-selfatt, pre-glove, adv-examp, adv-train, task-textclass, task-seqlab, task-lm 2 Achieving Verified Robustness to Symbol Substitutions via Interval Bound Propagation Po-Sen Huang, Robert Stanforth, Johannes Welbl, Chris Dyer, Dani Yogatama, Sven Gowal, Krishnamurthy Dvijotham, Pushmeet Kohli https://www.aclweb.org/anthology/D19-1419.pdf
2019 EMNLP # optim-adam, optim-projection, arch-lstm, arch-att, arch-selfatt, comb-ensemble, pre-glove, task-condlm 5 Dual Attention Networks for Visual Reference Resolution in Visual Dialog Gi-Cheon Kang, Jaeseo Lim, Byoung-Tak Zhang https://www.aclweb.org/anthology/D19-1209.pdf
2019 EMNLP # optim-adam, optim-projection, reg-dropout, reg-stopping, reg-decay, reg-labelsmooth, arch-rnn, arch-att, arch-selfatt, arch-coverage, arch-subword, arch-transformer, task-lm, task-seq2seq, task-alignment 0 Jointly Learning to Align and Translate with Transformer Models Sarthak Garg, Stephan Peitz, Udhyakumar Nallasamy, Matthias Paulik https://www.aclweb.org/anthology/D19-1453.pdf
2019 EMNLP # reg-dropout, arch-lstm, arch-att, arch-selfatt, arch-coverage, pre-glove, pre-elmo, pre-bert, loss-svd, task-lm 0 Detect Camouflaged Spam Content via StoneSkipping: Graph and Text Joint Embedding for Chinese Character Variation Representation Zhuoren Jiang, Zhe Gao, Guoxiu He, Yangyang Kang, Changlong Sun, Qiong Zhang, Luo Si, Xiaozhong Liu https://www.aclweb.org/anthology/D19-1640.pdf
2019 EMNLP # init-glorot, reg-dropout, reg-stopping, norm-gradient, train-transfer, train-augment, train-parallel, arch-rnn, arch-att, arch-selfatt, arch-copy, arch-subword, arch-transformer, comb-ensemble, search-beam, pre-elmo, pre-bert, task-textclass, task-textpair, task-spanlab, task-lm, task-seq2seq 2 Denoising based Sequence-to-Sequence Pre-training for Text Generation Liang Wang, Wei Zhao, Ruoyu Jia, Sujian Li, Jingming Liu https://www.aclweb.org/anthology/D19-1412.pdf
2019 EMNLP # optim-adam, reg-dropout, arch-lstm, arch-att, arch-selfatt, arch-gating, arch-coverage, pre-fasttext, pre-bert, task-spanlab, task-lm 1 Interactive Language Learning by Question Answering Xingdi Yuan, Marc-Alexandre Côté, Jie Fu, Zhouhan Lin, Chris Pal, Yoshua Bengio, Adam Trischler https://www.aclweb.org/anthology/D19-1280.pdf
2019 EMNLP # optim-adam, arch-rnn, arch-att, arch-selfatt, arch-subword, arch-transformer, task-seq2seq 0 Encoders Help You Disambiguate Word Senses in Neural Machine Translation Gongbo Tang, Rico Sennrich, Joakim Nivre https://www.aclweb.org/anthology/D19-1149.pdf
2019 EMNLP # optim-projection, train-mll, arch-lstm, arch-bilstm, arch-att, arch-selfatt, arch-coverage, arch-subword, struct-crf, task-seqlab, task-seq2seq, task-lexicon, task-alignment 0 Entity Projection via Machine Translation for Cross-Lingual NER Alankar Jain, Bhargavi Paranjape, Zachary C. Lipton https://www.aclweb.org/anthology/D19-1100.pdf
2019 EMNLP # optim-adam, reg-dropout, pool-max, arch-lstm, arch-att, arch-selfatt, pre-glove, task-spanlab 0 Ranking and Sampling in Open-Domain Question Answering Yanfu Xu, Zheng Lin, Yuanxin Liu, Rui Liu, Weiping Wang, Dan Meng https://www.aclweb.org/anthology/D19-1245.pdf
2019 EMNLP # optim-adam, reg-decay, train-transfer, arch-rnn, arch-att, arch-selfatt, arch-memo, arch-subword, arch-transformer, pre-word2vec, pre-glove, pre-elmo, pre-bert, loss-nce, task-textpair, task-alignment 0 A Gated Self-attention Memory Network for Answer Selection Tuan Lai, Quan Hung Tran, Trung Bui, Daisuke Kihara https://www.aclweb.org/anthology/D19-1610.pdf
2019 EMNLP # arch-rnn, arch-lstm, arch-recnn, arch-att, arch-selfatt, arch-subword, arch-transformer, task-seq2seq 2 Towards Better Modeling Hierarchical Structure for Self-Attention with Ordered Neurons Jie Hao, Xing Wang, Shuming Shi, Jinfeng Zhang, Zhaopeng Tu https://www.aclweb.org/anthology/D19-1135.pdf
2019 EMNLP # optim-adam, init-glorot, reg-dropout, norm-layer, train-mll, arch-rnn, arch-lstm, arch-bilstm, arch-cnn, arch-att, arch-selfatt, arch-residual, search-beam, struct-crf, task-lm, task-seq2seq 0 Efficient Convolutional Neural Networks for Diacritic Restoration Sawsan Alqahtani, Ajay Mishra, Mona Diab https://www.aclweb.org/anthology/D19-1151.pdf
2019 EMNLP # optim-adam, train-mll, train-augment, pool-max, arch-lstm, arch-bilstm, arch-cnn, arch-att, arch-selfatt, pre-word2vec, pre-glove, pre-elmo, pre-bert, task-textpair, task-spanlab, task-lm, task-tree 0 Do NLP Models Know Numbers? Probing Numeracy in Embeddings Eric Wallace, Yizhong Wang, Sujian Li, Sameer Singh, Matt Gardner https://www.aclweb.org/anthology/D19-1534.pdf
2019 EMNLP # reg-dropout, train-mtl, arch-rnn, arch-lstm, arch-att, arch-selfatt, arch-transformer, pre-glove, pre-bert, task-lm 1 Quantity doesn’t buy quality syntax with neural language models Marten van Schijndel, Aaron Mueller, Tal Linzen https://www.aclweb.org/anthology/D19-1592.pdf
2019 EMNLP # optim-adam, reg-decay, norm-layer, train-mll, pool-mean, arch-att, arch-selfatt, arch-transformer, comb-ensemble, pre-bert, task-spanlab, task-lm, task-seq2seq 0 Cross-Lingual Machine Reading Comprehension Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Shijin Wang, Guoping Hu https://www.aclweb.org/anthology/D19-1169.pdf
2019 EMNLP # optim-adam, init-glorot, train-mtl, train-mll, train-active, arch-rnn, arch-birnn, arch-lstm, arch-bilstm, arch-cnn, arch-att, arch-selfatt, arch-bilinear, arch-transformer, search-viterbi, pre-glove, struct-crf, adv-train, task-textclass, task-seqlab, task-lm, task-seq2seq, task-relation 0 Hierarchically-Refined Label Attention Network for Sequence Labeling Leyang Cui, Yue Zhang https://www.aclweb.org/anthology/D19-1422.pdf
2019 EMNLP # optim-adam, reg-dropout, train-transfer, arch-rnn, arch-lstm, arch-bilstm, arch-gru, arch-cnn, arch-att, arch-selfatt, arch-bilinear, arch-transformer, pre-word2vec, pre-bert, task-seqlab, task-seq2seq, task-relation, task-tree 0 A Syntax-aware Multi-task Learning Framework for Chinese Semantic Role Labeling Qingrong Xia, Zhenghua Li, Min Zhang https://www.aclweb.org/anthology/D19-1541.pdf
2019 EMNLP # optim-adam, optim-projection, norm-layer, arch-att, arch-selfatt, arch-residual, arch-subword, arch-transformer, task-seq2seq 1 Synchronously Generating Two Languages with Interactive Decoding Yining Wang, Jiajun Zhang, Long Zhou, Yuchen Liu, Chengqing Zong https://www.aclweb.org/anthology/D19-1330.pdf
2019 EMNLP # optim-adam, reg-dropout, reg-decay, norm-layer, arch-rnn, arch-lstm, arch-att, arch-selfatt, arch-subword, arch-transformer, search-beam, pre-bert, task-textclass, task-seqlab, task-lm, task-seq2seq 0 Subword Language Model for Query Auto-Completion Gyuwan Kim https://www.aclweb.org/anthology/D19-1507.pdf
2019 EMNLP # optim-sgd, optim-adam, activ-tanh, arch-rnn, arch-gru, arch-bigru, arch-att, arch-selfatt, arch-subword, arch-transformer, pre-bert, loss-nce, task-seq2seq, task-relation 0 Minimally Supervised Learning of Affective Events Using Discourse Relations Jun Saito, Yugo Murawaki, Sadao Kurohashi https://www.aclweb.org/anthology/D19-1581.pdf
2019 EMNLP # optim-adadelta, reg-dropout, reg-stopping, train-mtl, train-mll, arch-rnn, arch-att, arch-selfatt, arch-memo, arch-transformer, task-lm, task-seq2seq 0 One Model to Learn Both: Zero Pronoun Prediction and Translation Longyue Wang, Zhaopeng Tu, Xing Wang, Shuming Shi https://www.aclweb.org/anthology/D19-1085.pdf
2019 EMNLP # optim-adam, init-glorot, reg-dropout, reg-decay, train-mll, train-transfer, pool-max, arch-lstm, arch-att, arch-selfatt, arch-bilinear, arch-subword, arch-transformer, pre-elmo, pre-bert, struct-crf, task-textclass, task-textpair, task-seqlab, task-spanlab, task-lm, task-seq2seq, task-cloze, task-relation 22 Beto, Bentz, Becas: The Surprising Cross-Lingual Effectiveness of BERT Shijie Wu, Mark Dredze https://www.aclweb.org/anthology/D19-1077.pdf
2019 EMNLP # reg-dropout, arch-rnn, arch-birnn, arch-lstm, arch-bilstm, arch-gru, arch-att, arch-selfatt, arch-transformer, pre-glove, pre-elmo, pre-bert, task-textclass, task-lm 0 DENS: A Dataset for Multi-class Emotion Analysis Chen Liu, Muhammad Osama, Anderson De Andrade https://www.aclweb.org/anthology/D19-1656.pdf
2019 EMNLP # optim-adam, optim-projection, reg-dropout, train-mtl, train-mll, arch-rnn, arch-lstm, arch-bilstm, arch-cnn, arch-att, arch-selfatt, arch-bilinear, arch-subword, comb-ensemble, pre-word2vec, pre-fasttext, pre-elmo, struct-crf, task-lm, task-seq2seq, task-relation, task-tree 0 Semi-Supervised Semantic Role Labeling with Cross-View Training Rui Cai, Mirella Lapata https://www.aclweb.org/anthology/D19-1094.pdf
2019 EMNLP # optim-projection, arch-rnn, arch-att, arch-selfatt, arch-memo, arch-copy, arch-coverage, arch-transformer, pre-glove, pre-bert, task-seq2seq 0 Generating Questions for Knowledge Bases via Incorporating Diversified Contexts and Answer-Aware Loss Cao Liu, Kang Liu, Shizhu He, Zaiqing Nie, Jun Zhao https://www.aclweb.org/anthology/D19-1247.pdf
2019 EMNLP # optim-adam, train-transfer, arch-lstm, arch-att, arch-selfatt, arch-memo, arch-transformer, comb-ensemble, pre-bert, task-seq2seq 0 MoEL: Mixture of Empathetic Listeners Zhaojiang Lin, Andrea Madotto, Jamin Shin, Peng Xu, Pascale Fung https://www.aclweb.org/anthology/D19-1012.pdf
2019 EMNLP # optim-adam, optim-projection, norm-layer, train-mtl, arch-lstm, arch-bilstm, arch-cnn, arch-att, arch-selfatt, arch-coverage, pre-word2vec, pre-elmo, pre-bert, struct-crf, task-seqlab, task-lm, task-seq2seq 0 Improved Word Sense Disambiguation Using Pre-Trained Contextualized Word Representations Christian Hadiwinoto, Hwee Tou Ng, Wee Chung Gan https://www.aclweb.org/anthology/D19-1533.pdf
2019 EMNLP # train-transfer, arch-cnn, arch-att, arch-selfatt, arch-gating, arch-transformer, pre-bert 0 Humor Detection: A Transformer Gets the Last Laugh Orion Weller, Kevin Seppi https://www.aclweb.org/anthology/D19-1372.pdf
2019 EMNLP # optim-adam, train-mll, train-active, arch-att, arch-selfatt, arch-copy, arch-subword, arch-transformer, comb-ensemble, task-lm, task-seq2seq, task-alignment 0 Learning to Copy for Automatic Post-Editing Xuancheng Huang, Yang Liu, Huanbo Luan, Jingfang Xu, Maosong Sun https://www.aclweb.org/anthology/D19-1634.pdf
2019 EMNLP # optim-sgd, arch-lstm, arch-att, arch-selfatt, pre-word2vec, pre-bert, task-textclass, task-textpair, task-lm, task-cloze 0 News2vec: News Network Embedding with Subnode Information Ye Ma, Lu Zong, Yikang Yang, Jionglong Su https://www.aclweb.org/anthology/D19-1490.pdf
2019 EMNLP # optim-adam, train-transfer, train-active, arch-rnn, arch-att, arch-selfatt, arch-coverage, arch-subword, arch-transformer, search-beam, pre-fasttext, pre-bert, task-textclass, task-lm, task-condlm, task-seq2seq 14 Learning to Speak and Act in a Fantasy Text Adventure Game Jack Urbanek, Angela Fan, Siddharth Karamcheti, Saachi Jain, Samuel Humeau, Emily Dinan, Tim Rocktäschel, Douwe Kiela, Arthur Szlam, Jason Weston https://www.aclweb.org/anthology/D19-1062.pdf
2019 EMNLP # optim-adam, train-mtl, arch-rnn, arch-att, arch-selfatt, arch-memo, arch-copy, arch-transformer, comb-ensemble, pre-elmo, pre-bert, task-spanlab, task-lm, task-seq2seq, task-relation, task-tree 0 Discourse-Aware Semantic Self-Attention for Narrative Reading Comprehension Todor Mihaylov, Anette Frank https://www.aclweb.org/anthology/D19-1257.pdf
2019 EMNLP # optim-adam, init-glorot, reg-dropout, arch-rnn, arch-lstm, arch-gru, arch-att, arch-selfatt, arch-memo, arch-bilinear, pre-glove, struct-crf, loss-cca, task-seqlab, task-condlm, task-seq2seq, task-relation 0 Partners in Crime: Multi-view Sequential Inference for Movie Understanding Nikos Papasarantopoulos, Lea Frermann, Mirella Lapata, Shay B. Cohen https://www.aclweb.org/anthology/D19-1212.pdf
2019 EMNLP # optim-adam, arch-rnn, arch-att, arch-selfatt, arch-transformer, task-seq2seq 1 Transformer Dissection: An Unified Understanding for Transformer’s Attention via the Lens of Kernel Yao-Hung Hubert Tsai, Shaojie Bai, Makoto Yamada, Louis-Philippe Morency, Ruslan Salakhutdinov https://www.aclweb.org/anthology/D19-1443.pdf
2019 EMNLP # optim-adam, train-mtl, arch-lstm, arch-bilstm, arch-att, arch-selfatt, arch-bilinear, comb-ensemble, pre-fasttext, pre-elmo, latent-vae, task-textclass, task-seq2seq, task-tree 1 Capturing Argument Interaction in Semantic Role Labeling with Capsule Networks Xinchi Chen, Chunchuan Lyu, Ivan Titov https://www.aclweb.org/anthology/D19-1544.pdf
2019 EMNLP # optim-adam, train-augment, pool-max, pool-mean, arch-cnn, arch-att, arch-selfatt, arch-memo, latent-vae, task-textclass, meta-init 0 Tackling Long-Tailed Relations and Uncommon Entities in Knowledge Graph Completion Zihao Wang, Kwunping Lai, Piji Li, Lidong Bing, Wai Lam https://www.aclweb.org/anthology/D19-1024.pdf
2019 EMNLP # optim-adam, reg-dropout, pool-max, arch-rnn, arch-lstm, arch-gru, arch-cnn, arch-att, arch-selfatt, arch-memo, arch-transformer, task-seq2seq, task-tree 0 Asking Clarification Questions in Knowledge-Based Question Answering Jingjing Xu, Yuechen Wang, Duyu Tang, Nan Duan, Pengcheng Yang, Qi Zeng, Ming Zhou, Xu Sun https://www.aclweb.org/anthology/D19-1172.pdf
2019 EMNLP # optim-adam, init-glorot, reg-dropout, norm-gradient, train-transfer, pool-max, pool-mean, arch-rnn, arch-birnn, arch-lstm, arch-bilstm, arch-gru, arch-cnn, arch-att, arch-selfatt, arch-memo, search-viterbi, pre-glove, pre-bert, struct-crf, task-seqlab, task-lm, task-seq2seq 0 CM-Net: A Novel Collaborative Memory Network for Spoken Language Understanding Yijin Liu, Fandong Meng, Jinchao Zhang, Jie Zhou, Yufeng Chen, Jinan Xu https://www.aclweb.org/anthology/D19-1097.pdf
2019 EMNLP # optim-adam, train-mtl, arch-lstm, arch-bilstm, arch-att, arch-selfatt, task-spanlab, task-lm, task-seq2seq 0 Multi-Task Learning with Language Modeling for Question Generation Wenjie Zhou, Minghua Zhang, Yunfang Wu https://www.aclweb.org/anthology/D19-1337.pdf
2019 EMNLP # optim-adam, init-glorot, reg-dropout, pool-max, pool-mean, arch-rnn, arch-lstm, arch-bilstm, arch-att, arch-selfatt, comb-ensemble, pre-bert, task-textpair, task-lm 0 Original Semantics-Oriented Attention and Deep Fusion Network for Sentence Matching Mingtong Liu, Yujie Zhang, Jinan Xu, Yufeng Chen https://www.aclweb.org/anthology/D19-1267.pdf
2019 EMNLP # optim-adam, reg-dropout, arch-lstm, arch-att, arch-selfatt, arch-transformer, comb-ensemble, pre-bert, struct-crf, task-textclass, task-extractive, task-lm, task-seq2seq 0 Pretrained Language Models for Sequential Sentence Classification Arman Cohan, Iz Beltagy, Daniel King, Bhavana Dalvi, Dan Weld https://www.aclweb.org/anthology/D19-1383.pdf
2019 EMNLP # optim-adam, train-mtl, train-active, train-augment, arch-rnn, arch-lstm, arch-cnn, arch-att, arch-selfatt, pre-elmo, pre-bert, latent-vae, task-textclass, task-lm, task-seq2seq 19 EDA: Easy Data Augmentation Techniques for Boosting Performance on Text Classification Tasks Jason Wei, Kai Zou https://www.aclweb.org/anthology/D19-1670.pdf
2019 EMNLP # optim-adam, arch-lstm, arch-bilstm, arch-cnn, arch-att, arch-selfatt, arch-copy, arch-subword, arch-transformer, comb-ensemble, task-seq2seq, task-tree, task-graph 0 Modeling Graph Structure in Transformer for Better AMR-to-Text Generation Jie Zhu, Junhui Li, Muhua Zhu, Longhua Qian, Min Zhang, Guodong Zhou https://www.aclweb.org/anthology/D19-1548.pdf
2019 EMNLP # train-mll, arch-rnn, arch-att, arch-selfatt, arch-subword, arch-transformer, search-beam, task-lm, task-seq2seq, task-tree 0 Towards Understanding Neural Machine Translation with Word Importance Shilin He, Zhaopeng Tu, Xing Wang, Longyue Wang, Michael Lyu, Shuming Shi https://www.aclweb.org/anthology/D19-1088.pdf
2019 EMNLP # optim-adam, init-glorot, reg-dropout, train-mtl, train-mll, train-transfer, train-augment, arch-lstm, arch-bilstm, arch-gru, arch-att, arch-selfatt, arch-copy, pre-glove, pre-elmo, pre-bert, adv-train, task-spanlab, task-lm, task-seq2seq 2 Adversarial Domain Adaptation for Machine Reading Comprehension Huazheng Wang, Zhe Gan, Xiaodong Liu, Jingjing Liu, Jianfeng Gao, Hongning Wang https://www.aclweb.org/anthology/D19-1254.pdf
2019 EMNLP # optim-projection, init-glorot, train-mll, arch-gnn, arch-att, arch-selfatt, arch-coverage, loss-margin 0 Semi-supervised Entity Alignment via Joint Knowledge Embedding Model and Cross-graph Model Chengjiang Li, Yixin Cao, Lei Hou, Jiaxin Shi, Juanzi Li, Tat-Seng Chua https://www.aclweb.org/anthology/D19-1274.pdf
2019 EMNLP # optim-adam, optim-projection, reg-dropout, arch-gnn, arch-att, arch-selfatt, arch-copy, arch-transformer, pre-bert, latent-vae, task-tree 0 Answering Conversational Questions on Structured Data without Logical Forms Thomas Mueller, Francesco Piccinno, Peter Shaw, Massimo Nicosia, Yasemin Altun https://www.aclweb.org/anthology/D19-1603.pdf
2019 EMNLP # reg-dropout, train-mtl, arch-lstm, arch-bilstm, arch-att, arch-selfatt, arch-memo, arch-coverage, arch-transformer, pre-elmo, pre-bert, task-lm 1 GlossBERT: BERT for Word Sense Disambiguation with Gloss Knowledge Luyao Huang, Chi Sun, Xipeng Qiu, Xuanjing Huang https://www.aclweb.org/anthology/D19-1355.pdf
2019 EMNLP # optim-adam, reg-dropout, reg-patience, arch-rnn, arch-att, arch-selfatt, pre-bert, task-spanlab, task-lm 2 Answering Complex Open-domain Questions Through Iterative Query Generation Peng Qi, Xiaowen Lin, Leo Mehr, Zijian Wang, Christopher D. Manning https://www.aclweb.org/anthology/D19-1261.pdf
2019 EMNLP # optim-adam, arch-rnn, arch-gru, arch-bigru, arch-att, arch-selfatt, pre-glove, task-textpair 0 Modeling the Relationship between User Comments and Edits in Document Revision Xuchao Zhang, Dheeraj Rajagopal, Michael Gamon, Sujay Kumar Jauhar, ChangTien Lu https://www.aclweb.org/anthology/D19-1505.pdf
2019 EMNLP # optim-adam, train-mtl, pool-max, arch-rnn, arch-lstm, arch-bilstm, arch-cnn, arch-att, arch-selfatt, arch-memo, pre-word2vec, task-textclass, task-lm 0 Label-Specific Document Representation for Multi-Label Text Classification Lin Xiao, Xin Huang, Boli Chen, Liping Jing https://www.aclweb.org/anthology/D19-1044.pdf
2019 EMNLP # arch-rnn, arch-lstm, arch-bilstm, arch-gru, arch-bigru, arch-treelstm, arch-cnn, arch-att, arch-selfatt, arch-subword, pre-glove, task-textclass, task-seq2seq 0 Investigating Dynamic Routing in Tree-Structured LSTM for Sentiment Analysis Jin Wang, Liang-Chih Yu, K. Robert Lai, Xuejie Zhang https://www.aclweb.org/anthology/D19-1343.pdf
2019 EMNLP # optim-adam, optim-projection, reg-dropout, train-mtl, arch-rnn, arch-lstm, arch-bilstm, arch-cnn, arch-att, arch-selfatt, arch-bilinear, search-viterbi, pre-word2vec, pre-skipthought, struct-crf, task-seqlab, task-lm, task-seq2seq, task-relation 0 Learning to Infer Entities, Properties and their Relations from Clinical Conversations Nan Du, Mingqiu Wang, Linh Tran, Gang Lee, Izhak Shafran https://www.aclweb.org/anthology/D19-1503.pdf
2019 EMNLP # optim-adam, optim-adagrad, norm-layer, pool-mean, arch-rnn, arch-lstm, arch-bilstm, arch-cnn, arch-att, arch-selfatt, arch-transformer, search-beam, pre-glove, pre-paravec, adv-train, latent-topic, task-textpair, task-extractive, task-spanlab, task-lm, task-condlm, task-seq2seq 0 Topic-Guided Coherence Modeling for Sentence Ordering by Preserving Global and Local Information Byungkook Oh, Seungmin Seo, Cheolheon Shin, Eunju Jo, Kyong-Ho Lee https://www.aclweb.org/anthology/D19-1232.pdf
2019 EMNLP # arch-rnn, arch-att, arch-selfatt, arch-subword, arch-transformer, comb-ensemble, task-seq2seq 0 Multi-agent Learning for Neural Machine Translation Tianchi Bi, Hao Xiong, Zhongjun He, Hua Wu, Haifeng Wang https://www.aclweb.org/anthology/D19-1079.pdf
2019 EMNLP # optim-adam, optim-projection, reg-dropout, norm-layer, arch-rnn, arch-att, arch-selfatt, arch-copy, arch-subword, arch-transformer, search-beam, task-seq2seq, task-alignment 0 Contrastive Attention Mechanism for Abstractive Sentence Summarization Xiangyu Duan, Hongfei Yu, Mingming Yin, Min Zhang, Weihua Luo, Yue Zhang https://www.aclweb.org/anthology/D19-1301.pdf
2019 EMNLP # optim-adam, optim-projection, reg-dropout, reg-worddropout, arch-rnn, arch-lstm, arch-cnn, arch-att, arch-selfatt, arch-subword, arch-transformer, pre-bert, loss-cca, task-lm, task-seq2seq, task-cloze 1 The Bottom-up Evolution of Representations in the Transformer: A Study with Machine Translation and Language Modeling Objectives Elena Voita, Rico Sennrich, Ivan Titov https://www.aclweb.org/anthology/D19-1448.pdf
2019 EMNLP # optim-adam, optim-projection, train-mll, arch-rnn, arch-lstm, arch-bilstm, arch-att, arch-selfatt, pre-fasttext, pre-elmo, pre-bert, struct-crf, task-spanlab, task-lm, task-seq2seq, task-cloze 0 Learning with Limited Data for Multilingual Reading Comprehension Kyungjae Lee, Sunghyun Park, Hojae Han, Jinyoung Yeo, Seung-won Hwang, Juho Lee https://www.aclweb.org/anthology/D19-1283.pdf
2019 EMNLP # optim-sgd, optim-adam, optim-projection, reg-dropout, norm-layer, train-mll, arch-lstm, arch-att, arch-selfatt, arch-coverage, arch-subword, pre-elmo, pre-bert, struct-crf, loss-triplet, task-seqlab, task-lm, task-seq2seq, task-context 14 Cloze-driven Pretraining of Self-attention Networks Alexei Baevski, Sergey Edunov, Yinhan Liu, Luke Zettlemoyer, Michael Auli https://www.aclweb.org/anthology/D19-1539.pdf
2019 EMNLP # optim-adam, train-mtl, train-transfer, arch-rnn, arch-birnn, arch-lstm, arch-bilstm, arch-att, arch-selfatt, arch-subword, arch-transformer, search-greedy, search-beam, latent-vae, task-seqlab, task-lm, task-seq2seq, task-tree 0 Latent Part-of-Speech Sequences for Neural Machine Translation Xuewen Yang, Yingru Liu, Dongliang Xie, Xin Wang, Niranjan Balasubramanian https://www.aclweb.org/anthology/D19-1072.pdf
2019 EMNLP # optim-adam, init-glorot, reg-dropout, norm-layer, arch-rnn, arch-cnn, arch-att, arch-selfatt, task-textpair, task-condlm 0 DEBUG: A Dense Bottom-Up Grounding Approach for Natural Language Video Localization Chujie Lu, Long Chen, Chilie Tan, Xiaolin Li, Jun Xiao https://www.aclweb.org/anthology/D19-1518.pdf
2019 EMNLP # optim-adam, optim-projection, arch-rnn, arch-birnn, arch-lstm, arch-att, arch-selfatt, arch-coverage, arch-transformer, search-beam, adv-train, task-lm, task-condlm, task-seq2seq, task-relation 0 Stick to the Facts: Learning towards a Fidelity-oriented E-Commerce Product Description Generation Zhangming Chan, Xiuying Chen, Yongliang Wang, Juntao Li, Zhiqiang Zhang, Kun Gai, Dongyan Zhao, Rui Yan https://www.aclweb.org/anthology/D19-1501.pdf
2019 EMNLP # optim-adam, optim-projection, train-mtl, train-mll, arch-lstm, arch-cnn, arch-att, arch-selfatt, arch-subword, arch-transformer, pre-glove, pre-elmo, pre-bert, latent-vae, task-spanlab, task-lm, task-seq2seq 2 Knowledge Enhanced Contextual Word Representations Matthew E. Peters, Mark Neumann, Robert Logan, Roy Schwartz, Vidur Joshi, Sameer Singh, Noah A. Smith https://www.aclweb.org/anthology/D19-1005.pdf
2019 EMNLP # optim-projection, arch-rnn, arch-lstm, arch-att, arch-selfatt, arch-transformer 1 UR-FUNNY: A Multimodal Language Dataset for Understanding Humor Md Kamrul Hasan, Wasifur Rahman, AmirAli Bagher Zadeh, Jianyuan Zhong, Md Iftekhar Tanveer, Louis-Philippe Morency, Mohammed (Ehsan) Hoque https://www.aclweb.org/anthology/D19-1211.pdf
2019 EMNLP # arch-rnn, arch-lstm, arch-gru, arch-att, arch-selfatt, search-beam, adv-train, task-seq2seq 0 Towards Controllable and Personalized Review Generation Pan Li, Alexander Tuzhilin https://www.aclweb.org/anthology/D19-1319.pdf
2019 EMNLP # optim-adam, optim-adadelta, optim-projection, reg-dropout, train-augment, pool-max, arch-rnn, arch-lstm, arch-gru, arch-att, arch-selfatt, arch-transformer, search-greedy, search-beam, pre-elmo, pre-bert, nondif-reinforce, task-textpair, task-seqlab, task-spanlab, task-lm, task-condlm, task-seq2seq 0 Addressing Semantic Drift in Question Generation for Semi-Supervised Question Answering Shiyue Zhang, Mohit Bansal https://www.aclweb.org/anthology/D19-1253.pdf
2019 EMNLP # optim-adam, arch-lstm, arch-att, arch-selfatt, arch-memo, arch-coverage, comb-ensemble, task-tree 1 PullNet: Open Domain Question Answering with Iterative Retrieval on Knowledge Bases and Text Haitian Sun, Tania Bedrax-Weiss, William Cohen https://www.aclweb.org/anthology/D19-1242.pdf
2019 EMNLP # optim-adam, train-transfer, arch-lstm, arch-att, arch-selfatt, comb-ensemble, pre-glove, pre-skipthought, pre-elmo, loss-nce, task-textclass, task-lm, task-cloze 0 Multi-Granularity Representations of Dialog Shikib Mehri, Maxine Eskenazi https://www.aclweb.org/anthology/D19-1184.pdf
2019 EMNLP # arch-rnn, arch-att, arch-selfatt, arch-subword, arch-transformer, task-seq2seq, task-relation, task-graph 0 Self-Attention with Structural Position Representations Xing Wang, Zhaopeng Tu, Longyue Wang, Shuming Shi https://www.aclweb.org/anthology/D19-1145.pdf
2019 EMNLP # optim-adam, reg-dropout, reg-patience, arch-rnn, arch-birnn, arch-lstm, arch-gru, arch-att, arch-selfatt, struct-crf, task-lm, task-seq2seq 0 CASA-NLU: Context-Aware Self-Attentive Natural Language Understanding for Task-Oriented Chatbots Arshit Gupta, Peng Zhang, Garima Lalwani, Mona Diab https://www.aclweb.org/anthology/D19-1127.pdf
2019 EMNLP # optim-sgd, optim-adam, reg-dropout, pool-max, arch-lstm, arch-bilstm, arch-cnn, arch-att, arch-selfatt, arch-bilinear, pre-word2vec, pre-bert, task-textpair 2 Bridging the Gap between Relevance Matching and Semantic Matching for Short Text Similarity Modeling Jinfeng Rao, Linqing Liu, Yi Tay, Wei Yang, Peng Shi, Jimmy Lin https://www.aclweb.org/anthology/D19-1540.pdf
2019 EMNLP # optim-adam, optim-projection, reg-dropout, norm-batch, pool-max, arch-rnn, arch-lstm, arch-bilstm, arch-cnn, arch-att, arch-selfatt, pre-word2vec, pre-glove, task-textclass, task-seq2seq 0 Sequential Learning of Convolutional Features for Effective Text Classification Avinash Madasu, Vijjini Anvesh Rao https://www.aclweb.org/anthology/D19-1567.pdf
2019 EMNLP # optim-adadelta, reg-dropout, pool-mean, arch-rnn, arch-lstm, arch-bilstm, arch-cnn, arch-att, arch-selfatt, task-relation 0 Event Detection with Multi-Order Graph Convolution and Aggregated Attention Haoran Yan, Xiaolong Jin, Xiangbin Meng, Jiafeng Guo, Xueqi Cheng https://www.aclweb.org/anthology/D19-1582.pdf
2019 EMNLP # optim-adam, arch-lstm, arch-att, arch-selfatt, arch-bilinear, arch-subword, arch-transformer, search-viterbi, pre-elmo, pre-bert, struct-crf, loss-nce, task-seqlab, task-lm 0 Effective Use of Transformer Networks for Entity Tracking Aditya Gupta, Greg Durrett https://www.aclweb.org/anthology/D19-1070.pdf
2019 EMNLP # optim-adam, reg-decay, arch-lstm, arch-att, arch-selfatt, arch-transformer, pre-bert, task-lm 1 Attending to Future Tokens for Bidirectional Sequence Generation Carolin Lawrence, Bhushan Kotnis, Mathias Niepert https://www.aclweb.org/anthology/D19-1001.pdf
2019 EMNLP # optim-sgd, optim-adam, init-glorot, reg-dropout, arch-rnn, arch-lstm, arch-att, arch-selfatt, arch-copy, search-beam, pre-glove, task-spanlab, task-seq2seq 0 Improving Question Generation With to the Point Context Jingjing Li, Yifan Gao, Lidong Bing, Irwin King, Michael R. Lyu https://www.aclweb.org/anthology/D19-1317.pdf
2019 EMNLP # optim-adam, optim-projection, init-glorot, train-mll, train-active, train-augment, arch-lstm, arch-bilstm, arch-att, arch-selfatt, arch-copy, pre-glove, pre-elmo, pre-bert, task-seq2seq, task-tree, task-graph, task-alignment 0 Translate and Label! An Encoder-Decoder Approach for Cross-lingual Semantic Role Labeling Angel Daza, Anette Frank https://www.aclweb.org/anthology/D19-1056.pdf
2019 EMNLP # optim-adadelta, reg-stopping, pool-max, pool-mean, arch-rnn, arch-lstm, arch-gru, arch-cnn, arch-att, arch-selfatt, arch-transformer, pre-word2vec, pre-fasttext, pre-glove, pre-bert, task-textclass, task-seq2seq 0 Enhancing Local Feature Extraction with Global Representation for Neural Text Classification Guocheng Niu, Hengru Xu, Bolei He, Xinyan Xiao, Hua Wu, Sheng Gao https://www.aclweb.org/anthology/D19-1047.pdf
2019 EMNLP # optim-adam, norm-layer, train-mll, train-augment, arch-lstm, arch-gru, arch-att, arch-selfatt, arch-bilinear, arch-transformer, pre-elmo, pre-bert, task-lm, task-condlm, task-seq2seq 19 LXMERT: Learning Cross-Modality Encoder Representations from Transformers Hao Tan, Mohit Bansal https://www.aclweb.org/anthology/D19-1514.pdf
2019 EMNLP # optim-adam, train-mtl, train-mll, arch-rnn, arch-lstm, arch-att, arch-selfatt, arch-transformer, comb-ensemble, pre-bert, task-lm, task-seq2seq 0 Harnessing Pre-Trained Neural Networks with Rules for Formality Style Transfer Yunli Wang, Yu Wu, Lili Mou, Zhoujun Li, Wenhan Chao https://www.aclweb.org/anthology/D19-1365.pdf
2019 EMNLP # optim-adam, reg-dropout, train-mtl, arch-lstm, arch-gru, arch-bigru, arch-cnn, arch-att, arch-selfatt, loss-cca, task-seq2seq 0 Context-aware Interactive Attention for Multi-modal Sentiment and Emotion Analysis Dushyant Singh Chauhan, Md Shad Akhtar, Asif Ekbal, Pushpak Bhattacharyya https://www.aclweb.org/anthology/D19-1566.pdf
2019 EMNLP # optim-adam, reg-dropout, train-mtl, arch-rnn, arch-att, arch-selfatt, arch-subword, arch-transformer, task-textclass, task-seq2seq 0 Enhancing Context Modeling with a Query-Guided Capsule Network for Document-level Translation Zhengxin Yang, Jinchao Zhang, Fandong Meng, Shuhao Gu, Yang Feng, Jie Zhou https://www.aclweb.org/anthology/D19-1164.pdf
2019 EMNLP # arch-lstm, arch-att, arch-selfatt, arch-bilinear, arch-transformer, task-textpair, task-seq2seq 1 Retrieval-guided Dialogue Response Generation via a Matching-to-Generation Framework Deng Cai, Yan Wang, Wei Bi, Zhaopeng Tu, Xiaojiang Liu, Shuming Shi https://www.aclweb.org/anthology/D19-1195.pdf
2019 EMNLP # optim-adam, norm-layer, arch-lstm, arch-bilstm, arch-att, arch-selfatt, arch-transformer, pre-word2vec, pre-elmo, pre-bert, task-textclass, task-textpair, task-extractive, task-spanlab, task-lm, task-seq2seq, task-cloze 0 Fine-tune BERT with Sparse Self-Attention Mechanism Baiyun Cui, Yingming Li, Ming Chen, Zhongfei Zhang https://www.aclweb.org/anthology/D19-1361.pdf
2019 EMNLP # optim-projection, arch-rnn, arch-gru, arch-att, arch-selfatt, arch-subword, arch-transformer, task-seq2seq, task-relation 0 Recurrent Positional Embedding for Neural Machine Translation Kehai Chen, Rui Wang, Masao Utiyama, Eiichiro Sumita https://www.aclweb.org/anthology/D19-1139.pdf
2019 EMNLP # optim-adam, reg-stopping, reg-patience, reg-labelsmooth, train-mll, arch-rnn, arch-att, arch-selfatt, arch-residual, arch-copy, arch-coverage, arch-subword, arch-transformer, comb-ensemble, search-beam, task-seqlab, task-seq2seq 0 Deep Copycat Networks for Text-to-Text Generation Julia Ive, Pranava Madhyastha, Lucia Specia https://www.aclweb.org/anthology/D19-1318.pdf
2019 EMNLP # optim-adam, reg-patience, arch-lstm, arch-bilstm, arch-att, arch-selfatt, arch-transformer, pre-elmo, pre-bert, task-textpair, task-spanlab, task-lm 10 Language Models as Knowledge Bases? Fabio Petroni, Tim Rocktäschel, Sebastian Riedel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, Alexander Miller https://www.aclweb.org/anthology/D19-1250.pdf
2019 EMNLP # optim-adam, optim-projection, reg-dropout, reg-stopping, train-mll, train-transfer, arch-lstm, arch-att, arch-selfatt, arch-subword, arch-transformer, pre-bert, struct-crf, adv-train, task-seqlab, task-lm, task-seq2seq, task-relation, task-lexicon 0 Low-Resource Sequence Labeling via Unsupervised Multilingual Contextualized Representations Zuyi Bao, Rui Huang, Chen Li, Kenny Zhu https://www.aclweb.org/anthology/D19-1095.pdf
2019 EMNLP # reg-dropout, pool-mean, arch-lstm, arch-att, arch-selfatt, arch-copy, task-seq2seq 0 Table-to-Text Generation with Effective Hierarchical Encoder on Three Dimensions (Row, Column and Time) Heng Gong, Xiaocheng Feng, Bing Qin, Ting Liu https://www.aclweb.org/anthology/D19-1310.pdf
2019 EMNLP # optim-adam, train-mtl, train-mll, train-parallel, arch-att, arch-selfatt, comb-ensemble, pre-bert, task-spanlab, task-seq2seq, task-alignment 0 BiPaR: A Bilingual Parallel Dataset for Multilingual and Cross-lingual Reading Comprehension on Novels Yimin Jing, Deyi Xiong, Zhen Yan https://www.aclweb.org/anthology/D19-1249.pdf
2019 EMNLP # optim-sgd, optim-adam, train-mll, train-transfer, arch-rnn, arch-lstm, arch-att, arch-selfatt, arch-coverage, comb-ensemble, adv-feat, adv-train, task-lm, task-seq2seq 0 Pushing the Limits of Low-Resource Morphological Inflection Antonios Anastasopoulos, Graham Neubig https://www.aclweb.org/anthology/D19-1091.pdf
2019 EMNLP # optim-adam, train-augment, arch-rnn, arch-lstm, arch-bilstm, arch-att, arch-selfatt, arch-gating, pre-glove, adv-train, task-textpair, task-spanlab, meta-arch 3 Self-Assembling Modular Networks for Interpretable Multi-Hop Reasoning Yichen Jiang, Mohit Bansal https://www.aclweb.org/anthology/D19-1455.pdf
2019 EMNLP # optim-adam, init-glorot, train-mll, arch-lstm, arch-bilstm, arch-att, arch-selfatt, arch-bilinear, arch-subword, arch-transformer, comb-ensemble, search-greedy, search-beam, pre-word2vec, pre-fasttext, pre-glove, pre-elmo, pre-bert, task-lm, task-seq2seq, task-relation 0 Deep Contextualized Word Embeddings in Transition-Based and Graph-Based Dependency Parsing - A Tale of Two Parsers Revisited Artur Kulmizev, Miryam de Lhoneux, Johannes Gontrum, Elena Fano, Joakim Nivre https://www.aclweb.org/anthology/D19-1277.pdf
2019 EMNLP # optim-adam, optim-projection, reg-dropout, train-transfer, pool-mean, arch-rnn, arch-lstm, arch-bilstm, arch-gru, arch-gnn, arch-cnn, arch-att, arch-selfatt, arch-memo, arch-bilinear, pre-word2vec, pre-bert, task-seq2seq, task-relation, task-tree 0 Leveraging Dependency Forest for Neural Medical Relation Extraction Linfeng Song, Yue Zhang, Daniel Gildea, Mo Yu, Zhiguo Wang, Jinsong Su https://www.aclweb.org/anthology/D19-1020.pdf
2019 EMNLP # optim-sgd, optim-adam, arch-rnn, arch-gru, arch-cnn, arch-att, arch-selfatt, search-beam, task-spanlab, task-lm, task-seq2seq 1 Read, Attend and Comment: A Deep Architecture for Automatic News Comment Generation Ze Yang, Can Xu, Wei Wu, Zhoujun Li https://www.aclweb.org/anthology/D19-1512.pdf
2019 EMNLP # reg-dropout, arch-rnn, arch-gru, arch-att, arch-selfatt, arch-coverage, search-beam, task-extractive, task-condlm, task-seq2seq 0 Set to Ordered Text: Generating Discharge Instructions from Medical Billing Codes Litton J Kurisinkel, Nancy Chen https://www.aclweb.org/anthology/D19-1638.pdf
2019 EMNLP # optim-adam, arch-att, arch-selfatt, comb-ensemble, pre-bert, task-textpair, task-spanlab 1 Giving BERT a Calculator: Finding Operations and Arguments with Reading Comprehension Daniel Andor, Luheng He, Kenton Lee, Emily Pitler https://www.aclweb.org/anthology/D19-1609.pdf
2019 EMNLP # optim-sgd, reg-dropout, pool-max, arch-cnn, arch-att, arch-selfatt, arch-memo, pre-word2vec, task-textclass 0 Improving Distantly-Supervised Relation Extraction with Joint Label Embedding Linmei Hu, Luhao Zhang, Chuan Shi, Liqiang Nie, Weili Guan, Cheng Yang https://www.aclweb.org/anthology/D19-1395.pdf
2019 EMNLP # optim-adam, norm-layer, pool-max, pool-mean, arch-rnn, arch-lstm, arch-gru, arch-cnn, arch-att, arch-selfatt, arch-memo, arch-transformer, pre-glove, pre-bert, latent-topic, task-lm, task-seq2seq 1 Knowledge-Enriched Transformer for Emotion Detection in Textual Conversations Peixiang Zhong, Di Wang, Chunyan Miao https://www.aclweb.org/anthology/D19-1016.pdf
2019 EMNLP # optim-adam, reg-dropout, train-transfer, train-augment, arch-rnn, arch-gru, arch-att, arch-selfatt, arch-transformer, pre-bert, pre-use, task-textpair, task-lm, task-seq2seq, task-relation 0 Keep Calm and Switch On! Preserving Sentiment and Fluency in Semantic Text Exchange Steven Y. Feng, Aaron W. Li, Jesse Hoey https://www.aclweb.org/anthology/D19-1272.pdf
2019 EMNLP # optim-adam, reg-dropout, norm-layer, arch-rnn, arch-lstm, arch-att, arch-selfatt, arch-copy, arch-bilinear, arch-coverage, arch-transformer, search-beam, latent-vae, task-seq2seq, task-relation, task-tree, task-graph 0 Core Semantic First: A Top-down Approach for AMR Parsing Deng Cai, Wai Lam https://www.aclweb.org/anthology/D19-1393.pdf
2019 EMNLP # optim-adam, arch-lstm, arch-att, arch-selfatt 2 Help, Anna! Visual Navigation with Natural Multimodal Assistance via Retrospective Curiosity-Encouraging Imitation Learning Khanh Nguyen, Hal Daumé III https://www.aclweb.org/anthology/D19-1063.pdf
2019 EMNLP # optim-adam, optim-projection, reg-dropout, arch-lstm, arch-bilstm, arch-att, arch-selfatt, arch-copy, arch-subword, arch-transformer, search-beam, task-seq2seq, task-tree 1 JuICe: A Large Scale Distantly Supervised Dataset for Open Domain Context-based Code Generation Rajas Agashe, Srinivasan Iyer, Luke Zettlemoyer https://www.aclweb.org/anthology/D19-1546.pdf
2019 EMNLP # optim-adam, reg-decay, norm-gradient, arch-gnn, arch-att, arch-selfatt, arch-transformer, pre-bert, task-spanlab, task-tree 0 NumNet: Machine Reading Comprehension with Numerical Reasoning Qiu Ran, Yankai Lin, Peng Li, Jie Zhou, Zhiyuan Liu https://www.aclweb.org/anthology/D19-1251.pdf
2019 EMNLP # optim-adam, reg-dropout, arch-rnn, arch-lstm, arch-gru, arch-gnn, arch-att, arch-selfatt, arch-copy, arch-coverage, arch-transformer, pre-glove, pre-bert, task-seq2seq, task-graph 0 Enhancing AMR-to-Text Generation with Dual Graph Representations Leonardo F. R. Ribeiro, Claire Gardent, Iryna Gurevych https://www.aclweb.org/anthology/D19-1314.pdf
2019 EMNLP # optim-adam, optim-projection, arch-rnn, arch-lstm, arch-bilstm, arch-att, arch-selfatt, arch-copy, arch-coverage, search-beam, nondif-reinforce, task-spanlab, task-condlm, task-seq2seq 0 Let’s Ask Again: Refine Network for Automatic Question Generation Preksha Nema, Akash Kumar Mohankumar, Mitesh M. Khapra, Balaji Vasan Srinivasan, Balaraman Ravindran https://www.aclweb.org/anthology/D19-1326.pdf
2019 EMNLP # arch-lstm, arch-att, arch-selfatt, arch-copy, arch-coverage, arch-transformer, pre-bert, task-lm, task-condlm, task-seq2seq 0 Encode, Tag, Realize: High-Precision Text Editing Eric Malmi, Sebastian Krause, Sascha Rothe, Daniil Mirylenka, Aliaksei Severyn https://www.aclweb.org/anthology/D19-1510.pdf
2019 EMNLP # optim-adam, reg-dropout, train-mtl, train-transfer, pool-max, arch-rnn, arch-att, arch-selfatt, arch-coverage, arch-transformer, pre-skipthought, pre-elmo, pre-bert, task-textpair, task-spanlab, task-lm, task-seq2seq, task-cloze 0 Transfer Fine-Tuning: A BERT Case Study Yuki Arase, Jun’ichi Tsujii https://www.aclweb.org/anthology/D19-1542.pdf
2019 EMNLP # optim-adam, reg-dropout, reg-stopping, arch-lstm, arch-bilstm, arch-att, arch-selfatt, arch-bilinear, arch-coverage, arch-subword, arch-transformer, comb-ensemble, pre-fasttext, pre-elmo, struct-crf, latent-vae, task-seqlab, task-lm, task-seq2seq, task-relation, task-tree 0 Semantic Role Labeling with Iterative Structure Refinement Chunchuan Lyu, Shay B. Cohen, Ivan Titov https://www.aclweb.org/anthology/D19-1099.pdf
2019 EMNLP # optim-adam, arch-cnn, arch-att, arch-selfatt, arch-transformer, task-seq2seq 0 Towards Knowledge-Based Recommender Dialog System Qibin Chen, Junyang Lin, Yichang Zhang, Ming Ding, Yukuo Cen, Hongxia Yang, Jie Tang https://www.aclweb.org/anthology/D19-1189.pdf
2019 EMNLP # optim-adam, reg-dropout, reg-stopping, train-augment, arch-lstm, arch-att, arch-selfatt, task-textclass, task-seq2seq, task-tree 0 Clause-Wise and Recursive Decoding for Complex and Cross-Domain Text-to-SQL Generation Dongjun Lee https://www.aclweb.org/anthology/D19-1624.pdf
2019 EMNLP # optim-adagrad, norm-gradient, arch-rnn, arch-lstm, arch-bilstm, arch-att, arch-selfatt, arch-copy, arch-coverage, search-beam, task-lm, task-seq2seq, task-graph 0 Concept Pointer Network for Abstractive Summarization Wenbo Wang, Yang Gao, Heyan Huang, Yuxiang Zhou https://www.aclweb.org/anthology/D19-1304.pdf
2019 EMNLP # optim-adam, arch-rnn, arch-lstm, arch-att, arch-selfatt, arch-residual, arch-memo, arch-transformer 0 Video Dialog via Progressive Inference and Cross-Transformer Weike Jin, Zhou Zhao, Mao Gu, Jun Xiao, Furu Wei, Yueting Zhuang https://www.aclweb.org/anthology/D19-1217.pdf
2019 EMNLP # optim-sgd, reg-dropout, arch-rnn, arch-lstm, arch-bilstm, arch-att, arch-selfatt, arch-copy, pre-glove, pre-bert, task-spanlab, task-seq2seq 0 ParaQG: A System for Generating Questions and Answers from Paragraphs Vishwajeet Kumar, Sivaanandh Muneeswaran, Ganesh Ramakrishnan, Yuan-Fang Li https://www.aclweb.org/anthology/D19-3030.pdf
2019 EMNLP # reg-dropout, train-mll, arch-rnn, arch-lstm, arch-att, arch-selfatt, struct-crf, adv-train 0 Emotion Detection with Neural Personal Discrimination Xiabing Zhou, Zhongqing Wang, Shoushan Li, Guodong Zhou, Min Zhang https://www.aclweb.org/anthology/D19-1552.pdf
2019 EMNLP # arch-rnn, arch-lstm, arch-bilstm, arch-cnn, arch-att, arch-selfatt, arch-coverage, arch-transformer, pre-bert, task-textpair, task-spanlab, task-lm, task-seq2seq, task-cloze 4 Revealing the Dark Secrets of BERT Olga Kovaleva, Alexey Romanov, Anna Rogers, Anna Rumshisky https://www.aclweb.org/anthology/D19-1445.pdf
2019 EMNLP # optim-adam, pool-max, arch-rnn, arch-lstm, arch-cnn, arch-att, arch-selfatt, arch-copy, arch-coverage, arch-transformer, search-viterbi, struct-crf, task-seqlab 1 Doc2EDAG: An End-to-End Document-level Framework for Chinese Financial Event Extraction Shun Zheng, Wei Cao, Wei Xu, Jiang Bian https://www.aclweb.org/anthology/D19-1032.pdf
2019 EMNLP # reg-dropout, pool-max, arch-cnn, arch-att, arch-selfatt, arch-transformer, comb-ensemble, pre-word2vec, adv-train, task-textclass, task-textpair 0 Self-Attention Enhanced CNNs and Collaborative Curriculum Learning for Distantly Supervised Relation Extraction Yuyun Huang, Jinhua Du https://www.aclweb.org/anthology/D19-1037.pdf
2019 EMNLP # optim-adam, reg-dropout, arch-rnn, arch-birnn, arch-att, arch-selfatt, arch-transformer, task-seq2seq 1 Hierarchical Modeling of Global Context for Document-Level Neural Machine Translation Xin Tan, Longyin Zhang, Deyi Xiong, Guodong Zhou https://www.aclweb.org/anthology/D19-1168.pdf
2019 EMNLP # arch-rnn, arch-lstm, arch-cnn, arch-att, arch-selfatt 0 Sampling Matters! An Empirical Study of Negative Sampling Strategies for Learning of Matching Models in Retrieval-based Dialogue Systems Jia Li, Chongyang Tao, Wei Wu, Yansong Feng, Dongyan Zhao, Rui Yan https://www.aclweb.org/anthology/D19-1128.pdf
2019 EMNLP # optim-adam, init-glorot, reg-dropout, train-mll, arch-rnn, arch-lstm, arch-bilstm, arch-gru, arch-bigru, arch-att, arch-selfatt, arch-memo, search-viterbi, struct-crf, latent-vae, task-seqlab, task-seq2seq 1 Learning Explicit and Implicit Structures for Targeted Sentiment Analysis Hao Li, Wei Lu https://www.aclweb.org/anthology/D19-1550.pdf
2019 EMNLP # optim-adam, reg-dropout, arch-rnn, arch-cnn, arch-att, arch-selfatt, arch-transformer, pre-elmo, task-seq2seq 0 Open Domain Web Keyphrase Extraction Beyond Language Modeling Lee Xiong, Chuan Hu, Chenyan Xiong, Daniel Campos, Arnold Overwijk https://www.aclweb.org/anthology/D19-1521.pdf
2019 EMNLP # optim-adam, optim-projection, train-augment, arch-lstm, arch-gnn, arch-att, arch-selfatt, arch-copy, arch-coverage, pre-glove, pre-bert, task-seq2seq, task-tree 0 Editing-Based SQL Query Generation for Cross-Domain Context-Dependent Questions Rui Zhang, Tao Yu, Heyang Er, Sungrok Shim, Eric Xue, Xi Victoria Lin, Tianze Shi, Caiming Xiong, Richard Socher, Dragomir Radev https://www.aclweb.org/anthology/D19-1537.pdf
2019 EMNLP # optim-adagrad, init-glorot, train-augment, arch-lstm, arch-att, arch-selfatt, task-textclass, task-textpair, task-seq2seq 1 Induction Networks for Few-Shot Text Classification Ruiying Geng, Binhua Li, Yongbin Li, Xiaodan Zhu, Ping Jian, Jian Sun https://www.aclweb.org/anthology/D19-1403.pdf
2019 EMNLP # optim-adam, optim-projection, reg-dropout, reg-norm, train-mtl, arch-rnn, arch-lstm, arch-att, arch-selfatt, search-beam, nondif-reinforce, loss-margin 0 Incorporating Graph Attention Mechanism into Knowledge Graph Reasoning Based on Deep Reinforcement Learning Heng Wang, Shuangyin Li, Rong Pan, Mingzhi Mao https://www.aclweb.org/anthology/D19-1264.pdf
2019 EMNLP # optim-adam, optim-projection, reg-dropout, train-mtl, train-mll, train-transfer, pool-mean, arch-lstm, arch-att, arch-selfatt, arch-bilinear, arch-transformer, pre-elmo, pre-bert, task-textclass, task-textpair, task-lm, task-seq2seq, task-relation, task-tree, task-lexicon 8 75 Languages, 1 Model: Parsing Universal Dependencies Universally Dan Kondratyuk, Milan Straka https://www.aclweb.org/anthology/D19-1279.pdf
2019 EMNLP # train-mtl, arch-gru, arch-bigru, arch-att, arch-selfatt, arch-transformer, pre-bert 1 SUM-QE: a BERT-based Summary Quality Estimation Model Stratos Xenouleas, Prodromos Malakasiotis, Marianna Apidianaki, Ion Androutsopoulos https://www.aclweb.org/anthology/D19-1618.pdf
2019 EMNLP # reg-dropout, reg-stopping, norm-layer, arch-rnn, arch-lstm, arch-gru, arch-att, arch-selfatt, arch-bilinear, arch-subword, arch-transformer, search-beam, task-lm, task-seq2seq 1 Joey NMT: A Minimalist NMT Toolkit for Novices Julia Kreutzer, Joost Bastings, Stefan Riezler https://www.aclweb.org/anthology/D19-3019.pdf
2019 EMNLP # optim-sgd, optim-adam, optim-projection, arch-lstm, arch-att, arch-selfatt, arch-coverage, arch-transformer, pre-elmo, pre-bert, task-textclass, task-lm, task-cloze 1 Visualizing and Understanding the Effectiveness of BERT Yaru Hao, Li Dong, Furu Wei, Ke Xu https://www.aclweb.org/anthology/D19-1424.pdf
2019 EMNLP # optim-adam, optim-projection, init-glorot, reg-norm, train-mll, arch-lstm, arch-bilstm, arch-att, arch-selfatt, arch-bilinear, comb-ensemble, pre-elmo, pre-bert, task-lm, task-relation 0 Specializing Word Embeddings (for Parsing) by Information Bottleneck Xiang Lisa Li, Jason Eisner https://www.aclweb.org/anthology/D19-1276.pdf
2019 EMNLP # optim-adam, reg-dropout, pool-max, arch-rnn, arch-lstm, arch-bilstm, arch-att, arch-selfatt, arch-memo, pre-glove 1 Dually Interactive Matching Network for Personalized Response Selection in Retrieval-Based Chatbots Jia-Chen Gu, Zhen-Hua Ling, Xiaodan Zhu, Quan Liu https://www.aclweb.org/anthology/D19-1193.pdf
2019 EMNLP # optim-adam, arch-rnn, arch-lstm, arch-cnn, arch-att, arch-selfatt, arch-coverage, arch-transformer, task-lm, task-condlm, task-seq2seq 1 Neural Naturalist: Generating Fine-Grained Image Comparisons Maxwell Forbes, Christine Kaeser-Chen, Piyush Sharma, Serge Belongie https://www.aclweb.org/anthology/D19-1065.pdf
2019 EMNLP # train-parallel, arch-rnn, arch-lstm, arch-att, arch-selfatt, arch-subword, arch-transformer, pre-bert, task-lm, task-seq2seq 4 Adaptively Sparse Transformers Gonçalo M. Correia, Vlad Niculae, André F. T. Martins https://www.aclweb.org/anthology/D19-1223.pdf
2019 EMNLP # optim-adam, optim-adadelta, optim-projection, init-glorot, reg-dropout, norm-layer, train-mtl, arch-lstm, arch-bilstm, arch-att, arch-selfatt, arch-residual, arch-bilinear, arch-coverage, arch-transformer, pre-glove, pre-elmo, pre-bert, task-textclass, task-seq2seq, task-relation, task-tree 0 Syntax-Enhanced Self-Attention-Based Semantic Role Labeling Yue Zhang, Rui Wang, Luo Si https://www.aclweb.org/anthology/D19-1057.pdf
2019 EMNLP # optim-adam, reg-labelsmooth, arch-lstm, arch-att, arch-selfatt, arch-transformer, latent-vae, task-spanlab, task-seq2seq, task-alignment 1 Hint-Based Training for Non-Autoregressive Machine Translation Zhuohan Li, Zi Lin, Di He, Fei Tian, Tao Qin, Liwei Wang, Tie-Yan Liu https://www.aclweb.org/anthology/D19-1573.pdf
2019 EMNLP # optim-adam, optim-projection, init-glorot, reg-dropout, reg-labelsmooth, norm-layer, norm-gradient, arch-rnn, arch-att, arch-selfatt, arch-residual, arch-subword, arch-transformer, search-beam, pre-bert, task-lm, task-seq2seq 2 Improving Deep Transformer with Depth-Scaled Initialization and Merged Attention Biao Zhang, Ivan Titov, Rico Sennrich https://www.aclweb.org/anthology/D19-1083.pdf
2019 EMNLP # optim-adam, arch-rnn, arch-lstm, arch-att, arch-selfatt, arch-transformer, pre-elmo, pre-bert, task-spanlab 0 Machine Reading Comprehension Using Structural Knowledge Graph-aware Network Delai Qiu, Yuanzhe Zhang, Xinwei Feng, Xiangwen Liao, Wenbin Jiang, Yajuan Lyu, Kang Liu, Jun Zhao https://www.aclweb.org/anthology/D19-1602.pdf
2019 EMNLP # optim-adam, reg-dropout, reg-stopping, arch-lstm, arch-bilstm, arch-gru, arch-att, arch-selfatt, pre-word2vec, pre-glove, task-textclass, task-textpair, task-seq2seq 0 You Shall Know a User by the Company It Keeps: Dynamic Representations for Social Media Users in NLP Marco Del Tredici, Diego Marcheggiani, Sabine Schulte im Walde, Raquel Fernández https://www.aclweb.org/anthology/D19-1477.pdf
2019 EMNLP # optim-adam, train-mll, arch-rnn, arch-att, arch-selfatt, arch-coverage, arch-subword, arch-transformer, search-beam, task-spanlab, task-lm, task-seq2seq, task-cloze 1 Using Local Knowledge Graph Construction to Scale Seq2Seq Models to Multi-Document Inputs Angela Fan, Claire Gardent, Chloé Braud, Antoine Bordes https://www.aclweb.org/anthology/D19-1428.pdf
2019 EMNLP # optim-adam, optim-projection, init-glorot, reg-dropout, arch-att, arch-selfatt, arch-memo, arch-coverage, arch-transformer, search-beam, pre-elmo, pre-bert, task-spanlab, task-seq2seq, task-relation, task-tree 1 A Multi-Type Multi-Span Network for Reading Comprehension that Requires Discrete Reasoning Minghao Hu, Yuxing Peng, Zhen Huang, Dongsheng Li https://www.aclweb.org/anthology/D19-1170.pdf
2019 EMNLP # optim-adam, optim-projection, reg-dropout, train-mll, train-transfer, arch-rnn, arch-cnn, arch-att, arch-selfatt, arch-memo, loss-svd, task-seqlab, task-seq2seq, task-lexicon, task-alignment 0 Neural Cross-Lingual Event Detection with Minimal Parallel Resources Jian Liu, Yubo Chen, Kang Liu, Jun Zhao https://www.aclweb.org/anthology/D19-1068.pdf
2019 EMNLP # optim-adam, train-mtl, pool-mean, arch-lstm, arch-bilstm, arch-att, arch-selfatt, pre-glove, pre-bert, loss-nce, task-textpair, task-lm 4 KagNet: Knowledge-Aware Graph Networks for Commonsense Reasoning Bill Yuchen Lin, Xinyue Chen, Jamin Chen, Xiang Ren https://www.aclweb.org/anthology/D19-1282.pdf
2019 EMNLP # train-mll, arch-rnn, arch-lstm, arch-cnn, arch-att, arch-selfatt, arch-copy, pre-skipthought, task-seq2seq 0 A Modular Architecture for Unsupervised Sarcasm Generation Abhijit Mishra, Tarun Tater, Karthik Sankaranarayanan https://www.aclweb.org/anthology/D19-1636.pdf
2019 EMNLP # optim-adam, optim-projection, reg-dropout, arch-lstm, arch-bilstm, arch-cnn, arch-att, arch-selfatt, arch-subword, struct-crf, task-textclass, task-tree 0 Reconstructing Capsule Networks for Zero-shot Intent Classification Han Liu, Xiaotong Zhang, Lu Fan, Xuandi Fu, Qimai Li, Xiao-Ming Wu, Albert Y.S. Lam https://www.aclweb.org/anthology/D19-1486.pdf
2019 EMNLP # optim-adam, optim-projection, reg-dropout, reg-norm, train-mtl, arch-rnn, arch-birnn, arch-lstm, arch-bilstm, arch-att, arch-selfatt, arch-subword, arch-transformer, comb-ensemble, pre-bert, struct-crf 1 A Stack-Propagation Framework with Token-Level Intent Detection for Spoken Language Understanding Libo Qin, Wanxiang Che, Yangming Li, Haoyang Wen, Ting Liu https://www.aclweb.org/anthology/D19-1214.pdf
2019 EMNLP # optim-adam, init-glorot, reg-dropout, train-transfer, arch-rnn, arch-lstm, arch-bilstm, arch-cnn, arch-att, arch-selfatt, arch-memo, arch-bilinear, pre-word2vec, struct-crf, adv-train, task-seqlab, task-seq2seq 3 Transferable End-to-End Aspect-based Sentiment Analysis with Selective Adversarial Learning Zheng Li, Xin Li, Ying Wei, Lidong Bing, Yu Zhang, Qiang Yang https://www.aclweb.org/anthology/D19-1466.pdf
2019 EMNLP # optim-sgd, optim-adam, reg-norm, arch-lstm, arch-bilstm, arch-cnn, arch-att, arch-selfatt, search-viterbi, struct-crf, task-textclass, task-seqlab 0 Leverage Lexical Knowledge for Chinese Named Entity Recognition via Collaborative Graph Network Dianbo Sui, Yubo Chen, Kang Liu, Jun Zhao, Shengping Liu https://www.aclweb.org/anthology/D19-1396.pdf
2019 EMNLP # optim-adam, arch-att, arch-selfatt, arch-energy, pre-bert, task-spanlab 4 Multi-passage BERT: A Globally Normalized BERT Model for Open-domain Question Answering Zhiguo Wang, Patrick Ng, Xiaofei Ma, Ramesh Nallapati, Bing Xiang https://www.aclweb.org/anthology/D19-1599.pdf
2019 EMNLP # optim-adam, optim-projection, reg-dropout, norm-layer, pool-max, arch-lstm, arch-bilstm, arch-cnn, arch-att, arch-selfatt, latent-topic, task-lm 0 A Hierarchical Location Prediction Neural Network for Twitter User Geolocation Binxuan Huang, Kathleen Carley https://www.aclweb.org/anthology/D19-1480.pdf
2019 EMNLP # optim-adam, reg-dropout, reg-labelsmooth, arch-rnn, arch-cnn, arch-att, arch-selfatt, arch-copy, arch-coverage, arch-subword, arch-transformer, search-beam, pre-elmo, pre-bert, latent-vae, task-seqlab, task-extractive, task-lm, task-seq2seq, task-cloze 6 Text Summarization with Pretrained Encoders Yang Liu, Mirella Lapata https://www.aclweb.org/anthology/D19-1387.pdf
2019 EMNLP # optim-sgd, optim-adam, reg-stopping, train-mll, arch-lstm, arch-att, arch-selfatt, arch-transformer, pre-elmo, task-spanlab, task-lm, task-seq2seq 1 Retrofitting Contextualized Word Embeddings with Paraphrases Weijia Shi, Muhao Chen, Pei Zhou, Kai-Wei Chang https://www.aclweb.org/anthology/D19-1113.pdf
2019 EMNLP # optim-adam, arch-lstm, arch-att, arch-selfatt, task-spanlab, task-seq2seq 0 Question-type Driven Question Generation Wenjie Zhou, Minghua Zhang, Yunfang Wu https://www.aclweb.org/anthology/D19-1622.pdf
2019 EMNLP # optim-adam, optim-projection, train-mll, train-augment, arch-rnn, arch-att, arch-selfatt, arch-subword, arch-transformer, pre-bert, task-lm, task-seq2seq, task-alignment 0 A Discriminative Neural Model for Cross-Lingual Word Alignment Elias Stengel-Eskin, Tzu-ray Su, Matt Post, Benjamin Van Durme https://www.aclweb.org/anthology/D19-1084.pdf
2019 EMNLP # optim-sgd, optim-adam, reg-dropout, arch-rnn, arch-lstm, arch-gru, arch-att, arch-selfatt, arch-transformer, struct-cfg, task-textclass, task-textpair, task-lm, task-seq2seq, task-tree 0 PaLM: A Hybrid Parser and Language Model Hao Peng, Roy Schwartz, Noah A. Smith https://www.aclweb.org/anthology/D19-1376.pdf
2019 EMNLP # optim-adam, optim-projection, reg-dropout, arch-gru, arch-cnn, arch-att, arch-selfatt 0 Neural News Recommendation with Multi-Head Self-Attention Chuhan Wu, Fangzhao Wu, Suyu Ge, Tao Qi, Yongfeng Huang, Xing Xie https://www.aclweb.org/anthology/D19-1671.pdf
2019 EMNLP # reg-dropout, reg-norm, train-transfer, arch-rnn, arch-lstm, arch-bilstm, arch-cnn, arch-att, arch-selfatt, arch-bilinear, pre-word2vec, pre-bert, latent-topic, task-textclass, task-lm, task-seq2seq 0 Leveraging Just a Few Keywords for Fine-Grained Aspect Detection Through Weakly Supervised Co-Training Giannis Karamanolakis, Daniel Hsu, Luis Gravano https://www.aclweb.org/anthology/D19-1468.pdf
2019 EMNLP # optim-adam, optim-projection, reg-dropout, train-mll, arch-lstm, arch-cnn, arch-att, arch-selfatt, pre-word2vec, adv-train, task-lm, task-seq2seq 0 Specificity-Driven Cascading Approach for Unsupervised Sentiment Modification Pengcheng Yang, Junyang Lin, Jingjing Xu, Jun Xie, Qi Su, Xu Sun https://www.aclweb.org/anthology/D19-1553.pdf
2019 EMNLP # optim-adam, optim-projection, train-mll, arch-lstm, arch-att, arch-selfatt, arch-subword, arch-transformer, pre-fasttext, pre-glove, task-seq2seq 0 Rotate King to get Queen: Word Relationships as Orthogonal Transformations in Embedding Space Kawin Ethayarajh https://www.aclweb.org/anthology/D19-1354.pdf
2019 EMNLP # optim-projection, reg-dropout, norm-layer, pool-mean, arch-rnn, arch-att, arch-selfatt, arch-subword, arch-transformer, pre-bert, task-textclass, task-textpair 0 Learning Invariant Representations of Social Media Users Nicholas Andrews, Marcus Bishop https://www.aclweb.org/anthology/D19-1178.pdf
2019 EMNLP # optim-adam, optim-projection, arch-lstm, arch-bilstm, arch-att, arch-selfatt, arch-subword, comb-ensemble, search-beam, pre-word2vec, pre-glove, pre-elmo, adv-examp, task-textclass, task-textpair, task-spanlab, task-lm, task-seq2seq 8 Universal Adversarial Triggers for Attacking and Analyzing NLP Eric Wallace, Shi Feng, Nikhil Kandpal, Matt Gardner, Sameer Singh https://www.aclweb.org/anthology/D19-1221.pdf
2019 EMNLP # optim-adam, arch-att, arch-selfatt, arch-coverage, pre-bert, nondif-reinforce, latent-vae, task-spanlab, task-tree 2 A Discrete Hard EM Approach for Weakly Supervised Question Answering Sewon Min, Danqi Chen, Hannaneh Hajishirzi, Luke Zettlemoyer https://www.aclweb.org/anthology/D19-1284.pdf
2019 EMNLP # reg-dropout, arch-rnn, arch-lstm, arch-att, arch-selfatt, arch-coverage, arch-transformer, pre-bert, struct-cfg, task-lm, task-seq2seq, task-cloze, task-tree 0 Tree Transformer: Integrating Tree Structures into Self-Attention Yaushian Wang, Hung-Yi Lee, Yun-Nung Chen https://www.aclweb.org/anthology/D19-1098.pdf
2019 EMNLP # optim-projection, arch-lstm, arch-gru, arch-bigru, arch-att, arch-selfatt, task-textclass, task-extractive, task-lm, task-condlm, task-seq2seq 0 From the Token to the Review: A Hierarchical Multimodal approach to Opinion Mining Alexandre Garcia, Pierre Colombo, Florence d’Alché-Buc, Slim Essid, Chloé Clavel https://www.aclweb.org/anthology/D19-1556.pdf
2019 EMNLP # optim-adam, optim-projection, reg-dropout, train-transfer, train-augment, arch-rnn, arch-cnn, arch-att, arch-selfatt, arch-copy, arch-subword, comb-ensemble, search-beam, pre-bert, task-lm, task-seq2seq 0 Exploiting Monolingual Data at Scale for Neural Machine Translation Lijun Wu, Yiren Wang, Yingce Xia, Tao Qin, Jianhuang Lai, Tie-Yan Liu https://www.aclweb.org/anthology/D19-1430.pdf
2019 EMNLP # optim-adam, train-augment, arch-att, arch-selfatt, pre-bert, task-spanlab, task-lm 3 Quoref: A Reading Comprehension Dataset with Questions Requiring Coreferential Reasoning Pradeep Dasigi, Nelson F. Liu, Ana Marasović, Noah A. Smith, Matt Gardner https://www.aclweb.org/anthology/D19-1606.pdf
2019 EMNLP # optim-projection, train-mll, pool-max, arch-rnn, arch-lstm, arch-att, arch-selfatt, arch-subword, task-seqlab, task-seq2seq 3 Multi-Granularity Self-Attention for Neural Machine Translation Jie Hao, Xing Wang, Shuming Shi, Jinfeng Zhang, Zhaopeng Tu https://www.aclweb.org/anthology/D19-1082.pdf
2019 EMNLP # optim-adam, reg-stopping, train-mll, train-transfer, arch-rnn, arch-lstm, arch-bilstm, arch-cnn, arch-att, arch-selfatt, arch-copy, arch-coverage, arch-transformer, pre-bert, adv-train, latent-topic, task-lm 0 Neural Duplicate Question Detection without Labeled Training Data Andreas Rücklé, Nafise Sadat Moosavi, Iryna Gurevych https://www.aclweb.org/anthology/D19-1171.pdf
2019 NAACL # reg-dropout, train-mtl, arch-lstm, arch-cnn, arch-att, arch-selfatt, pre-elmo, task-lm, task-tree 1 Neural Constituency Parsing of Speech Transcripts Paria Jamshid Lou, Yufei Wang, Mark Johnson https://www.aclweb.org/anthology/N19-1282.pdf
2019 NAACL # optim-adam, reg-stopping, train-augment, arch-lstm, arch-att, arch-selfatt, arch-copy, arch-coverage, arch-subword, arch-transformer, search-beam, pre-glove, pre-elmo, task-textpair, task-lm, task-condlm, task-seq2seq, task-tree 1 Improved Lexically Constrained Decoding for Translation and Monolingual Rewriting J. Edward Hu, Huda Khayrallah, Ryan Culkin, Patrick Xia, Tongfei Chen, Matt Post, Benjamin Van Durme https://www.aclweb.org/anthology/N19-1090.pdf
2019 NAACL # optim-adam, norm-layer, arch-rnn, arch-lstm, arch-att, arch-selfatt, arch-residual, arch-coverage, arch-subword, arch-transformer, search-beam 19 MuST-C: a Multilingual Speech Translation Corpus Mattia A. Di Gangi, Roldano Cattoni, Luisa Bentivogli, Matteo Negri, Marco Turchi https://www.aclweb.org/anthology/N19-1202.pdf
2019 NAACL # optim-adam, optim-projection, reg-dropout, train-mll, train-transfer, arch-lstm, arch-bilstm, arch-att, arch-selfatt, comb-ensemble, pre-elmo, pre-bert, struct-crf, task-seqlab, task-lm, task-seq2seq, task-relation, task-tree, task-lexicon, task-alignment 12 Cross-lingual Transfer Learning for Multilingual Task Oriented Dialog Sebastian Schuster, Sonal Gupta, Rushin Shah, Mike Lewis https://www.aclweb.org/anthology/N19-1380.pdf
2019 NAACL # pool-mean, arch-lstm, arch-cnn, arch-att, arch-selfatt, comb-ensemble, pre-fasttext, pre-glove, struct-crf, loss-cca, loss-margin, task-textclass 1 Ranking-Based Autoencoder for Extreme Multi-label Classification Bingyu Wang, Li Chen, Wei Sun, Kechen Qin, Kefeng Li, Hui Zhou https://www.aclweb.org/anthology/N19-1289.pdf
2019 NAACL # optim-adam, reg-dropout, reg-patience, pool-max, arch-rnn, arch-lstm, arch-bilstm, arch-gru, arch-cnn, arch-att, arch-selfatt, arch-memo, pre-word2vec, struct-crf, task-textclass, task-seq2seq 3 HiGRU: Hierarchical Gated Recurrent Units for Utterance-Level Emotion Recognition Wenxiang Jiao, Haiqin Yang, Irwin King, Michael R. Lyu https://www.aclweb.org/anthology/N19-1037.pdf
2019 NAACL # optim-adam, reg-dropout, reg-decay, train-transfer, arch-rnn, arch-att, arch-selfatt, arch-subword, arch-transformer, comb-ensemble, search-beam, task-spanlab, task-seq2seq 0 Online Distilling from Checkpoints for Neural Machine Translation Hao-Ran Wei, Shujian Huang, Ran Wang, Xin-yu Dai, Jiajun Chen https://www.aclweb.org/anthology/N19-1192.pdf
2019 NAACL # optim-adam, pool-max, arch-rnn, arch-lstm, arch-bilstm, arch-att, arch-selfatt, comb-ensemble, pre-word2vec, latent-vae, task-lm 5 Document-Level N-ary Relation Extraction with Multiscale Representation Learning Robin Jia, Cliff Wong, Hoifung Poon https://www.aclweb.org/anthology/N19-1370.pdf
2019 NAACL # reg-dropout, activ-relu, arch-lstm, arch-bilstm, arch-att, arch-selfatt, arch-coverage, pre-glove, task-textclass, task-textpair, task-seq2seq 3 How Large a Vocabulary Does Text Classification Need? A Variational Approach to Vocabulary Selection Wenhu Chen, Yu Su, Yilin Shen, Zhiyu Chen, Xifeng Yan, William Yang Wang https://www.aclweb.org/anthology/N19-1352.pdf
2019 NAACL # optim-adam, train-mtl, pool-max, arch-rnn, arch-lstm, arch-bilstm, arch-treelstm, arch-gnn, arch-cnn, arch-att, arch-selfatt, arch-transformer, pre-glove, pre-elmo, pre-bert, struct-crf, task-textclass, task-textpair, task-seqlab, task-lm, task-seq2seq 17 Star-Transformer Qipeng Guo, Xipeng Qiu, Pengfei Liu, Yunfan Shao, Xiangyang Xue, Zheng Zhang https://www.aclweb.org/anthology/N19-1133.pdf
2019 NAACL # optim-adam, reg-dropout, train-mll, arch-rnn, arch-gru, arch-bigru, arch-att, arch-selfatt, latent-topic, task-textpair 2 Train One Get One Free: Partially Supervised Neural Network for Bug Report Duplicate Detection and Clustering Lahari Poddar, Leonardo Neves, William Brendel, Luis Marujo, Sergey Tulyakov, Pradeep Karuturi https://www.aclweb.org/anthology/N19-2020.pdf
2019 NAACL # optim-adam, optim-projection, reg-dropout, norm-batch, pool-max, arch-lstm, arch-bilstm, arch-gru, arch-cnn, arch-att, arch-selfatt, arch-memo, pre-glove, nondif-reinforce, task-seq2seq, task-tree 4 Bidirectional Attentive Memory Networks for Question Answering over Knowledge Bases Yu Chen, Lingfei Wu, Mohammed J. Zaki https://www.aclweb.org/anthology/N19-1299.pdf
2019 NAACL # optim-sgd, reg-dropout, train-mtl, train-mll, train-transfer, arch-rnn, arch-lstm, arch-bilstm, arch-att, arch-selfatt, search-viterbi, pre-word2vec, struct-crf, task-textclass, task-seqlab 1 An Encoding Strategy Based Word-Character LSTM for Chinese NER Wei Liu, Tongge Xu, Qinghua Xu, Jiayu Song, Yueran Zu https://www.aclweb.org/anthology/N19-1247.pdf
2019 NAACL # reg-dropout, arch-lstm, arch-att, arch-selfatt, arch-memo, pre-elmo, pre-bert, task-lm, task-cloze 16 Utilizing BERT for Aspect-Based Sentiment Analysis via Constructing Auxiliary Sentence Chi Sun, Luyao Huang, Xipeng Qiu https://www.aclweb.org/anthology/N19-1035.pdf
2019 NAACL # optim-adam, reg-dropout, reg-labelsmooth, norm-layer, train-transfer, arch-att, arch-selfatt, arch-subword, arch-transformer, task-seq2seq 7 Non-Parametric Adaptation for Neural Machine Translation Ankur Bapna, Orhan Firat https://www.aclweb.org/anthology/N19-1191.pdf
2019 NAACL # optim-sgd, reg-dropout, arch-cnn, arch-att, arch-selfatt, adv-train 5 Distant Supervision Relation Extraction with Intra-Bag and Inter-Bag Attentions Zhi-Xiu Ye, Zhen-Hua Ling https://www.aclweb.org/anthology/N19-1288.pdf
2019 NAACL # optim-adam, reg-dropout, arch-lstm, arch-bilstm, arch-att, arch-selfatt, arch-transformer, comb-ensemble, pre-fasttext, task-textclass, task-seq2seq, task-tree 2 Semantically-Aligned Equation Generation for Solving and Reasoning Math Word Problems Ting-Rui Chiang, Yun-Nung Chen https://www.aclweb.org/anthology/N19-1272.pdf
2019 NAACL # optim-adam, init-glorot, arch-rnn, arch-lstm, arch-bilstm, arch-cnn, arch-att, arch-selfatt, pre-glove, task-textclass, task-relation 1 Topic Spotting using Hierarchical Networks with Self Attention Pooja Chitkara, Ashutosh Modi, Pravalika Avvaru, Sepehr Janghorbani, Mubbasir Kapadia https://www.aclweb.org/anthology/N19-1376.pdf
2019 NAACL # optim-adam, reg-patience, arch-rnn, arch-lstm, arch-bilstm, arch-att, arch-selfatt, arch-bilinear, arch-coverage, arch-subword, arch-transformer, pre-glove, pre-elmo, pre-bert, struct-crf, adv-train, task-textclass, task-textpair, task-seqlab, task-lm, task-seq2seq, task-relation 51 Linguistic Knowledge and Transferability of Contextual Representations Nelson F. Liu, Matt Gardner, Yonatan Belinkov, Matthew E. Peters, Noah A. Smith https://www.aclweb.org/anthology/N19-1112.pdf
2019 NAACL # train-mtl, train-mll, train-transfer, arch-rnn, arch-gru, arch-att, arch-selfatt, adv-train, task-relation, task-tree 1 Robust Semantic Parsing with Adversarial Learning for Domain Generalization Gabriel Marzinotto, Géraldine Damnati, Frédéric Béchet, Benoît Favre https://www.aclweb.org/anthology/N19-2021.pdf
2019 NAACL # optim-adam, norm-batch, pool-max, arch-lstm, arch-bilstm, arch-cnn, arch-att, arch-selfatt, arch-subword, pre-word2vec, pre-glove, struct-crf, task-seqlab, task-lm 0 VCWE: Visual Character-Enhanced Word Embeddings Chi Sun, Xipeng Qiu, Xuanjing Huang https://www.aclweb.org/anthology/N19-1277.pdf
2019 NAACL # optim-sgd, optim-adam, reg-dropout, reg-labelsmooth, norm-layer, arch-att, arch-selfatt, arch-subword, arch-transformer, pre-elmo, task-textclass, task-lm, task-seq2seq 10 Pre-trained language model representations for language generation Sergey Edunov, Alexei Baevski, Michael Auli https://www.aclweb.org/anthology/N19-1409.pdf
2019 NAACL # reg-dropout, reg-worddropout, norm-layer, pool-max, arch-cnn, arch-att, arch-selfatt, arch-bilinear, arch-transformer 5 Relation Extraction using Explicit Context Conditioning Gaurav Singh, Parminder Bhatia https://www.aclweb.org/anthology/N19-1147.pdf
2019 NAACL # optim-adam, init-glorot, norm-layer, arch-rnn, arch-lstm, arch-gru, arch-att, arch-selfatt, arch-transformer, task-extractive, task-seq2seq, task-relation 4 Single Document Summarization as Tree Induction Yang Liu, Ivan Titov, Mirella Lapata https://www.aclweb.org/anthology/N19-1173.pdf
2019 NAACL # optim-sgd, optim-projection, init-glorot, reg-dropout, reg-stopping, arch-rnn, arch-birnn, arch-lstm, arch-gnn, arch-cnn, arch-att, arch-selfatt, arch-copy, arch-coverage, arch-transformer, search-beam, struct-crf, task-seqlab, task-lm, task-tree, task-graph 8 Text Generation from Knowledge Graphs with Graph Transformers Rik Koncel-Kedziorski, Dhanush Bekal, Yi Luan, Mirella Lapata, Hannaneh Hajishirzi https://www.aclweb.org/anthology/N19-1238.pdf
2019 NAACL # optim-adam, arch-rnn, arch-lstm, arch-att, arch-selfatt, search-greedy, nondif-gumbelsoftmax, adv-train, task-lm, task-condlm, task-seq2seq 2 Latent Code and Text-based Generative Adversarial Networks for Soft-text Generation Md. Akmal Haidar, Mehdi Rezagholizadeh, Alan Do Omri, Ahmad Rashid https://www.aclweb.org/anthology/N19-1234.pdf
2019 NAACL # optim-sgd, optim-adam, reg-dropout, reg-patience, arch-rnn, arch-birnn, arch-gru, arch-bigru, arch-cnn, arch-att, arch-selfatt, arch-memo, arch-subword, pre-word2vec, pre-glove, pre-skipthought, pre-elmo, struct-crf, adv-train, latent-vae, task-textclass, task-lm 6 Dialogue Act Classification with Context-Aware Self-Attention Vipul Raheja, Joel Tetreault https://www.aclweb.org/anthology/N19-1373.pdf
2019 NAACL # optim-adam, arch-rnn, arch-lstm, arch-bilstm, arch-att, arch-selfatt, arch-bilinear, arch-subword, pre-elmo, pre-bert, task-lm, task-seq2seq, task-relation 43 A Structural Probe for Finding Syntax in Word Representations John Hewitt, Christopher D. Manning https://www.aclweb.org/anthology/N19-1419.pdf
2019 NAACL # optim-sgd, reg-dropout, train-mtl, arch-rnn, arch-lstm, arch-att, arch-selfatt, arch-memo, comb-ensemble, pre-glove, pre-elmo, pre-bert, struct-crf, nondif-reinforce, task-seq2seq, task-relation 7 Better, Faster, Stronger Sequence Tagging Constituent Parsers David Vilares, Mostafa Abdou, Anders Søgaard https://www.aclweb.org/anthology/N19-1341.pdf
2019 NAACL # optim-projection, train-mll, pool-max, arch-rnn, arch-att, arch-selfatt, arch-bilinear, arch-subword, arch-transformer, pre-bert, task-seq2seq 11 Information Aggregation for Multi-Head Attention with Routing-by-Agreement Jian Li, Baosong Yang, Zi-Yi Dou, Xing Wang, Michael R. Lyu, Zhaopeng Tu https://www.aclweb.org/anthology/N19-1359.pdf
2019 NAACL # optim-adam, reg-dropout, train-mtl, arch-lstm, arch-gru, arch-bigru, arch-att, arch-selfatt, pre-glove 3 Multi-task Learning for Multi-modal Emotion Recognition and Sentiment Analysis Md Shad Akhtar, Dushyant Chauhan, Deepanway Ghosal, Soujanya Poria, Asif Ekbal, Pushpak Bhattacharyya https://www.aclweb.org/anthology/N19-1034.pdf
2019 NAACL # optim-adam, reg-dropout, train-mtl, train-transfer, train-augment, arch-lstm, arch-bilstm, arch-gru, arch-att, arch-selfatt, arch-gating, arch-bilinear, pre-glove, pre-elmo, pre-bert, task-spanlab, task-lm, task-seq2seq 4 Multi-task Learning with Sample Re-weighting for Machine Reading Comprehension Yichong Xu, Xiaodong Liu, Yelong Shen, Jingjing Liu, Jianfeng Gao https://www.aclweb.org/anthology/N19-1271.pdf
2019 NAACL # reg-dropout, reg-decay, train-mtl, arch-rnn, arch-att, arch-selfatt, arch-copy, arch-transformer, comb-ensemble, pre-glove, pre-elmo, pre-bert, task-lm, task-seq2seq, task-tree 20 Improving Grammatical Error Correction via Pre-Training a Copy-Augmented Architecture with Unlabeled Data Wei Zhao, Liang Wang, Kewei Shen, Ruoyu Jia, Jingming Liu https://www.aclweb.org/anthology/N19-1014.pdf
2019 NAACL # optim-adam, arch-rnn, arch-att, arch-selfatt, arch-subword, pre-fasttext 5 Attentive Mimicking: Better Word Embeddings by Attending to Informative Contexts Timo Schick, Hinrich Schütze https://www.aclweb.org/anthology/N19-1048.pdf
2019 NAACL # optim-adam, reg-dropout, reg-labelsmooth, arch-rnn, arch-gru, arch-att, arch-selfatt, arch-residual, arch-coverage, arch-subword, arch-transformer, task-seq2seq 9 Selective Attention for Context-aware Neural Machine Translation Sameen Maruf, André F. T. Martins, Gholamreza Haffari https://www.aclweb.org/anthology/N19-1313.pdf
2019 NAACL # optim-adam, init-glorot, reg-labelsmooth, norm-layer, pool-max, arch-rnn, arch-cnn, arch-att, arch-selfatt, arch-memo, pre-fasttext, latent-vae, task-extractive, task-seq2seq 9 Abstractive Summarization of Reddit Posts with Multi-level Memory Networks Byeongchang Kim, Hyunwoo Kim, Gunhee Kim https://www.aclweb.org/anthology/N19-1260.pdf
2019 NAACL # optim-adam, reg-decay, train-augment, arch-lstm, arch-bilstm, arch-gru, arch-bigru, arch-att, arch-selfatt, comb-ensemble, pre-glove 0 On Knowledge distillation from complex networks for response prediction Siddhartha Arora, Mitesh M. Khapra, Harish G. Ramaswamy https://www.aclweb.org/anthology/N19-1382.pdf
2019 NAACL # optim-adam, reg-stopping, reg-patience, arch-rnn, arch-lstm, arch-gru, arch-att, arch-selfatt, arch-coverage, adv-examp, task-textclass, task-textpair, task-spanlab, task-seq2seq 10 Inoculation by Fine-Tuning: A Method for Analyzing Challenge Datasets Nelson F. Liu, Roy Schwartz, Noah A. Smith https://www.aclweb.org/anthology/N19-1225.pdf
2019 NAACL # reg-dropout, norm-gradient, train-mtl, train-mll, arch-rnn, arch-lstm, arch-gru, arch-att, arch-selfatt, arch-residual, comb-ensemble, task-seq2seq, task-relation 0 Multi-Task Learning for Japanese Predicate Argument Structure Analysis Hikaru Omori, Mamoru Komachi https://www.aclweb.org/anthology/N19-1344.pdf
2019 NAACL # reg-dropout, train-mtl, train-mll, train-transfer, arch-rnn, arch-att, arch-selfatt, arch-subword, arch-transformer, loss-nce, task-seqlab, task-seq2seq 24 Massively Multilingual Neural Machine Translation Roee Aharoni, Melvin Johnson, Orhan Firat https://www.aclweb.org/anthology/N19-1388.pdf
2019 NAACL # optim-sgd, optim-adam, optim-projection, init-glorot, reg-dropout, train-active, pool-max, arch-rnn, arch-lstm, arch-bilstm, arch-cnn, arch-att, arch-selfatt, arch-memo, arch-bilinear 0 Relation Extraction with Temporal Reasoning Based on Memory Augmented Distant Supervision Jianhao Yan, Lin He, Ruqin Huang, Jian Li, Ying Liu https://www.aclweb.org/anthology/N19-1107.pdf
2019 NAACL # reg-dropout, train-mll, train-transfer, arch-lstm, arch-bilstm, arch-recnn, arch-att, arch-selfatt, arch-coverage, arch-subword, pre-fasttext, adv-train, task-seq2seq 3 Addressing word-order Divergence in Multilingual Neural Machine Translation for extremely Low Resource Languages Rudra Murthy, Anoop Kunchukuttan, Pushpak Bhattacharyya https://www.aclweb.org/anthology/N19-1387.pdf
2019 NAACL # optim-adam, reg-dropout, reg-decay, train-transfer, train-augment, arch-lstm, arch-bilstm, arch-att, arch-selfatt, arch-coverage, arch-transformer, comb-ensemble, pre-glove, pre-skipthought, pre-elmo, pre-bert, struct-crf, task-textclass, task-textpair, task-seqlab, task-spanlab, task-lm, task-seq2seq, task-cloze 3209 BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova https://www.aclweb.org/anthology/N19-1423.pdf
2019 NAACL # optim-adam, arch-lstm, arch-bilstm, arch-att, arch-selfatt, pre-glove, pre-elmo 2 Ranking and Selecting Multi-Hop Knowledge Paths to Better Predict Human Needs Debjit Paul, Anette Frank https://www.aclweb.org/anthology/N19-1368.pdf