Year Conf. Topics Cited Paper Authors URL
2019 ACL # optim-adam, optim-projection, arch-lstm, arch-att, arch-residual, arch-coverage, latent-vae, loss-nce, task-lm, task-seq2seq, task-lexicon 1 Neural Decipherment via Minimum-Cost Flow: From Ugaritic to Linear B Jiaming Luo, Yuan Cao, Regina Barzilay https://www.aclweb.org/anthology/P19-1303.pdf
2019 ACL # optim-adam, reg-dropout, arch-lstm, arch-att, arch-residual, arch-gating, arch-subword, arch-transformer, comb-ensemble, search-beam, task-textclass, task-seq2seq 1 Depth Growing for Neural Machine Translation Lijun Wu, Yiren Wang, Yingce Xia, Fei Tian, Fei Gao, Tao Qin, Jianhuang Lai, Tie-Yan Liu https://www.aclweb.org/anthology/P19-1558.pdf
2019 ACL # optim-adam, reg-dropout, reg-stopping, arch-rnn, arch-lstm, arch-att, arch-residual, arch-memo, comb-ensemble, pre-glove, task-textpair, task-condlm, task-tree 2 Simple and Effective Text Matching with Richer Alignment Features Runqi Yang, Jianhai Zhang, Xing Gao, Feng Ji, Haiqing Chen https://www.aclweb.org/anthology/P19-1465.pdf
2019 ACL # optim-adam, reg-dropout, norm-layer, train-augment, arch-rnn, arch-att, arch-selfatt, arch-residual, arch-subword, arch-transformer, search-beam, task-lm, task-seq2seq 7 Learning Deep Transformer Models for Machine Translation Qiang Wang, Bei Li, Tong Xiao, Jingbo Zhu, Changliang Li, Derek F. Wong, Lidia S. Chao https://www.aclweb.org/anthology/P19-1176.pdf
2019 ACL # reg-dropout, reg-worddropout, reg-labelsmooth, arch-att, arch-residual, arch-memo, arch-transformer, comb-ensemble, pre-bert, task-lm, task-seq2seq 0 Cross-Sentence Grammatical Error Correction Shamil Chollampatt, Weiqi Wang, Hwee Tou Ng https://www.aclweb.org/anthology/P19-1042.pdf
2019 ACL # optim-adagrad, train-mtl, train-transfer, pool-max, arch-rnn, arch-cnn, arch-att, arch-residual, arch-subword, pre-word2vec, pre-elmo 0 Employing the Correspondence of Relations and Connectives to Identify Implicit Discourse Relations via Label Embeddings Linh The Nguyen, Linh Van Ngo, Khoat Than, Thien Huu Nguyen https://www.aclweb.org/anthology/P19-1411.pdf
2019 ACL # optim-adam, train-mtl, train-transfer, arch-rnn, arch-lstm, arch-att, arch-selfatt, arch-residual, arch-gating, arch-transformer, comb-ensemble, search-beam, pre-glove, pre-elmo, pre-bert, task-spanlab, task-lm, task-seq2seq 5 Multi-style Generative Reading Comprehension Kyosuke Nishida, Itsumi Saito, Kosuke Nishida, Kazutoshi Shinoda, Atsushi Otsuka, Hisako Asano, Junji Tomita https://www.aclweb.org/anthology/P19-1220.pdf
2019 ACL # optim-adam, optim-adadelta, reg-dropout, reg-labelsmooth, norm-layer, train-parallel, arch-rnn, arch-lstm, arch-gru, arch-att, arch-selfatt, arch-residual, arch-subword, pre-glove, pre-bert, struct-crf, task-seqlab, task-spanlab, task-lm, task-seq2seq 1 A Lightweight Recurrent Network for Sequence Modeling Biao Zhang, Rico Sennrich https://www.aclweb.org/anthology/P19-1149.pdf
2019 ACL # optim-adam, optim-projection, reg-stopping, train-mtl, arch-rnn, arch-cnn, arch-att, arch-selfatt, arch-residual, arch-transformer, search-beam, task-condlm, task-seq2seq 4 Distilling Translations with Visual Awareness Julia Ive, Pranava Madhyastha, Lucia Specia https://www.aclweb.org/anthology/P19-1653.pdf
2019 ACL # optim-adam, init-glorot, reg-dropout, reg-labelsmooth, norm-layer, arch-rnn, arch-lstm, arch-treelstm, arch-gnn, arch-cnn, arch-att, arch-selfatt, arch-residual, arch-energy, arch-transformer, search-beam, task-seq2seq 2 Self-Attentional Models for Lattice Inputs Matthias Sperber, Graham Neubig, Ngoc-Quan Pham, Alex Waibel https://www.aclweb.org/anthology/P19-1115.pdf
2019 ACL # optim-adam, reg-dropout, pool-max, pool-kmax, arch-rnn, arch-birnn, arch-lstm, arch-bilstm, arch-cnn, arch-att, arch-selfatt, arch-residual, arch-copy, arch-bilinear, task-seq2seq 1 BiSET: Bi-directional Selective Encoding with Template for Abstractive Summarization Kai Wang, Xiaojun Quan, Rui Wang https://www.aclweb.org/anthology/P19-1207.pdf
2019 ACL # optim-adam, init-glorot, train-mll, train-augment, arch-att, arch-selfatt, arch-residual, arch-transformer, struct-hmm, adv-train, latent-vae, task-textclass, task-lm, task-seq2seq 4 Unsupervised Paraphrasing without Translation Aurko Roy, David Grangier https://www.aclweb.org/anthology/P19-1605.pdf
2019 ACL # optim-adam, optim-projection, reg-dropout, reg-labelsmooth, arch-rnn, arch-att, arch-residual, arch-subword, arch-transformer, task-lm, task-seq2seq 2 Shared-Private Bilingual Word Embeddings for Neural Machine Translation Xuebo Liu, Derek F. Wong, Yang Liu, Lidia S. Chao, Tong Xiao, Jingbo Zhu https://www.aclweb.org/anthology/P19-1352.pdf
2019 ACL # optim-adam, reg-dropout, arch-rnn, arch-lstm, arch-bilstm, arch-att, arch-residual, arch-copy, pre-glove, task-lm, task-seq2seq 4 EditNTS: An Neural Programmer-Interpreter Model for Sentence Simplification through Explicit Editing Yue Dong, Zichao Li, Mehdi Rezagholizadeh, Jackie Chi Kit Cheung https://www.aclweb.org/anthology/P19-1331.pdf
2019 EMNLP # optim-adam, init-glorot, reg-dropout, norm-layer, train-mll, arch-rnn, arch-lstm, arch-bilstm, arch-cnn, arch-att, arch-selfatt, arch-residual, search-beam, struct-crf, task-lm, task-seq2seq 0 Efficient Convolutional Neural Networks for Diacritic Restoration Sawsan Alqahtani, Ajay Mishra, Mona Diab https://www.aclweb.org/anthology/D19-1151.pdf
2019 EMNLP # optim-adam, optim-projection, norm-layer, arch-att, arch-selfatt, arch-residual, arch-subword, arch-transformer, task-seq2seq 1 Synchronously Generating Two Languages with Interactive Decoding Yining Wang, Jiajun Zhang, Long Zhou, Yuchen Liu, Chengqing Zong https://www.aclweb.org/anthology/D19-1330.pdf
2019 EMNLP # reg-stopping, arch-residual 0 Automatic Taxonomy Induction and Expansion Nicolas Rodolfo Fauceglia, Alfio Gliozzo, Sarthak Dash, Md. Faisal Mahbub Chowdhury, Nandana Mihindukulasooriya https://www.aclweb.org/anthology/D19-3005.pdf
2019 EMNLP # optim-adam, reg-stopping, reg-patience, reg-labelsmooth, train-mll, arch-rnn, arch-att, arch-selfatt, arch-residual, arch-copy, arch-coverage, arch-subword, arch-transformer, comb-ensemble, search-beam, task-seqlab, task-seq2seq 0 Deep Copycat Networks for Text-to-Text Generation Julia Ive, Pranava Madhyastha, Lucia Specia https://www.aclweb.org/anthology/D19-1318.pdf
2019 EMNLP # arch-rnn, arch-lstm, arch-bilstm, arch-treelstm, arch-cnn, arch-att, arch-residual, arch-coverage, pre-glove, task-textpair, task-lm, task-relation, task-tree 0 Incorporating Contextual and Syntactic Structures Improves Semantic Similarity Modeling Linqing Liu, Wei Yang, Jinfeng Rao, Raphael Tang, Jimmy Lin https://www.aclweb.org/anthology/D19-1114.pdf
2019 EMNLP # optim-adam, optim-amsgrad, reg-dropout, train-parallel, arch-lstm, arch-bilstm, arch-att, arch-residual, arch-memo, arch-copy, arch-subword, comb-ensemble, pre-bert, task-lm, task-seq2seq 1 Scalable and Accurate Dialogue State Tracking via Hierarchical Sequence Generation Liliang Ren, Jianmo Ni, Julian McAuley https://www.aclweb.org/anthology/D19-1196.pdf
2019 EMNLP # optim-adam, arch-rnn, arch-lstm, arch-att, arch-selfatt, arch-residual, arch-memo, arch-transformer 0 Video Dialog via Progressive Inference and Cross-Transformer Weike Jin, Zhou Zhao, Mao Gu, Jun Xiao, Furu Wei, Yueting Zhuang https://www.aclweb.org/anthology/D19-1217.pdf
2019 EMNLP # optim-adam, reg-dropout, arch-rnn, arch-lstm, arch-att, arch-residual, search-greedy, search-beam, latent-vae, task-textclass, task-condlm, task-seq2seq 0 Exploring Diverse Expressions for Paraphrase Generation Lihua Qian, Lin Qiu, Weinan Zhang, Xin Jiang, Yong Yu https://www.aclweb.org/anthology/D19-1313.pdf
2019 EMNLP # optim-adam, optim-adadelta, optim-projection, init-glorot, reg-dropout, norm-layer, train-mtl, arch-lstm, arch-bilstm, arch-att, arch-selfatt, arch-residual, arch-bilinear, arch-coverage, arch-transformer, pre-glove, pre-elmo, pre-bert, task-textclass, task-seq2seq, task-relation, task-tree 0 Syntax-Enhanced Self-Attention-Based Semantic Role Labeling Yue Zhang, Rui Wang, Luo Si https://www.aclweb.org/anthology/D19-1057.pdf
2019 EMNLP # optim-adam, optim-projection, init-glorot, reg-dropout, reg-labelsmooth, norm-layer, norm-gradient, arch-rnn, arch-att, arch-selfatt, arch-residual, arch-subword, arch-transformer, search-beam, pre-bert, task-lm, task-seq2seq 2 Improving Deep Transformer with Depth-Scaled Initialization and Merged Attention Biao Zhang, Ivan Titov, Rico Sennrich https://www.aclweb.org/anthology/D19-1083.pdf
2019 NAACL # optim-adam, norm-layer, arch-rnn, arch-lstm, arch-att, arch-selfatt, arch-residual, arch-coverage, arch-subword, arch-transformer, search-beam 19 MuST-C: a Multilingual Speech Translation Corpus Mattia A. Di Gangi, Roldano Cattoni, Luisa Bentivogli, Matteo Negri, Marco Turchi https://www.aclweb.org/anthology/N19-1202.pdf
2019 NAACL # optim-adam, reg-dropout, reg-labelsmooth, arch-rnn, arch-gru, arch-att, arch-selfatt, arch-residual, arch-coverage, arch-subword, arch-transformer, task-seq2seq 9 Selective Attention for Context-aware Neural Machine Translation Sameen Maruf, André F. T. Martins, Gholamreza Haffari https://www.aclweb.org/anthology/N19-1313.pdf
2019 NAACL # reg-dropout, norm-gradient, train-mtl, train-mll, arch-rnn, arch-lstm, arch-gru, arch-att, arch-selfatt, arch-residual, comb-ensemble, task-seq2seq, task-relation 0 Multi-Task Learning for Japanese Predicate Argument Structure Analysis Hikaru Omori, Mamoru Komachi https://www.aclweb.org/anthology/N19-1344.pdf
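Each row above follows a fixed space-separated layout: year, venue, a `# `-prefixed comma-separated tag list, citation count, then title, authors, and URL. A minimal parsing sketch follows; the field names (`Row`, `parse_row`) are my own, and because no delimiter separates a title from its author list, those two fields are kept as a single string rather than guessed apart.

```python
import re
from dataclasses import dataclass, field

# Row layout: <year> <conf> # <tag, tag, ...> <cited> <title + authors> <url>
# The tag list contains no bare integers, so the first standalone number
# after "# " is taken to be the citation count.
ROW_RE = re.compile(
    r"^(?P<year>\d{4}) (?P<conf>\S+) # (?P<tags>.*?) "
    r"(?P<cited>\d+) (?P<rest>.*) (?P<url>https?://\S+)$"
)

@dataclass
class Row:
    year: int
    conf: str
    tags: list = field(default_factory=list)
    cited: int = 0
    title_and_authors: str = ""  # no delimiter separates title from authors
    url: str = ""

def parse_row(line: str) -> Row:
    """Parse one listing row; raises ValueError on layout mismatch."""
    m = ROW_RE.match(line.strip())
    if m is None:
        raise ValueError("line does not match the row layout")
    return Row(
        year=int(m["year"]),
        conf=m["conf"],
        tags=[t.strip() for t in m["tags"].split(",")],
        cited=int(m["cited"]),
        title_and_authors=m["rest"],
        url=m["url"],
    )
```

Note the caveat: a title containing a standalone integer immediately after the tag list would confuse the citation-count split, so this is a best-effort reader for the rows as they appear here, not a general parser.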