| Year | Conf. | Topic | Cited | Paper | Authors | Url |
|------|-------|-------|-------|-------|---------|-----|
| 2019 | ACL | # optim-adam, reg-dropout, reg-worddropout, arch-rnn, arch-lstm, arch-gru, arch-att, arch-transformer, pre-word2vec, pre-fasttext, pre-elmo, pre-bert, struct-crf, task-seqlab | 5 | Neural Architectures for Nested NER through Linearization | Jana Straková, Milan Straka, Jan Hajic | https://www.aclweb.org/anthology/P19-1527.pdf |
| 2019 | ACL | # optim-adam, train-mll, arch-cnn, arch-att, arch-transformer, pre-skipthought, pre-bert, task-extractive, task-lm, task-seq2seq, task-cloze | 0 | Sentence Centrality Revisited for Unsupervised Summarization | Hao Zheng, Mirella Lapata | https://www.aclweb.org/anthology/P19-1628.pdf |
| 2019 | ACL | # optim-adam, norm-layer, arch-rnn, arch-lstm, arch-att, arch-selfatt, arch-transformer, task-seq2seq | 2 | Incremental Transformer with Deliberation Decoder for Document Grounded Conversations | Zekang Li, Cheng Niu, Fandong Meng, Yang Feng, Qian Li, Jie Zhou | https://www.aclweb.org/anthology/P19-1002.pdf |
| 2019 | ACL | # optim-adam, reg-dropout, norm-layer, arch-rnn, arch-att, arch-selfatt, arch-memo, arch-transformer, comb-ensemble, search-beam, pre-bert, latent-vae, task-condlm, task-seq2seq | 1 | Multimodal Transformer Networks for End-to-End Video-Grounded Dialogue Systems | Hung Le, Doyen Sahoo, Nancy Chen, Steven Hoi | https://www.aclweb.org/anthology/P19-1564.pdf |
| 2019 | ACL | # optim-adam, train-mtl, train-transfer, arch-lstm, arch-att, arch-selfatt, arch-transformer, pre-glove, pre-bert, task-extractive, task-lm, task-context | 0 | Self-Supervised Learning for Contextualized Extractive Summarization | Hong Wang, Xin Wang, Wenhan Xiong, Mo Yu, Xiaoxiao Guo, Shiyu Chang, William Yang Wang | https://www.aclweb.org/anthology/P19-1214.pdf |
| 2019 | ACL | # optim-adam, reg-dropout, reg-stopping, arch-lstm, arch-att, arch-selfatt, arch-transformer, pre-glove, pre-bert, task-textpair, task-spanlab, task-lm, task-seq2seq | 3 | E3: Entailment-driven Extracting and Editing for Conversational Machine Reading | Victor Zhong, Luke Zettlemoyer | https://www.aclweb.org/anthology/P19-1223.pdf |
| 2019 | ACL | # optim-adam, train-augment, arch-rnn, arch-transformer, pre-bert, task-spanlab | 2 | RankQA: Neural Question Answering with Answer Re-Ranking | Bernhard Kratzwald, Anna Eigenmann, Stefan Feuerriegel | https://www.aclweb.org/anthology/P19-1611.pdf |
| 2019 | ACL | # reg-dropout, reg-worddropout, train-augment, arch-rnn, arch-cnn, arch-att, arch-subword, arch-transformer, adv-examp, task-textclass, task-lm, task-seq2seq | 6 | Robust Neural Machine Translation with Doubly Adversarial Inputs | Yong Cheng, Lu Jiang, Wolfgang Macherey | https://www.aclweb.org/anthology/P19-1425.pdf |
| 2019 | ACL | # optim-adagrad, reg-dropout, norm-batch, arch-lstm, arch-att, arch-selfatt, arch-subword, arch-transformer, task-lm, task-seq2seq | 0 | Training Hybrid Language Models by Marginalizing over Segmentations | Edouard Grave, Sainbayar Sukhbaatar, Piotr Bojanowski, Armand Joulin | https://www.aclweb.org/anthology/P19-1143.pdf |
| 2019 | ACL | # optim-sgd, optim-adam, init-glorot, reg-dropout, train-augment, arch-rnn, arch-lstm, arch-bilstm, arch-att, arch-bilinear, arch-transformer, nondif-reinforce, task-relation, task-tree | 0 | AdaNSP: Uncertainty-driven Adaptive Decoding in Neural Semantic Parsing | Xiang Zhang, Shizhu He, Kang Liu, Jun Zhao | https://www.aclweb.org/anthology/P19-1418.pdf |
| 2019 | ACL | # optim-adam, optim-projection, pool-max, arch-cnn, arch-att, arch-transformer, pre-word2vec, latent-topic, task-textclass, task-seq2seq, task-alignment | 1 | Improving Textual Network Embedding with Global Attention via Optimal Transport | Liqun Chen, Guoyin Wang, Chenyang Tao, Dinghan Shen, Pengyu Cheng, Xinyuan Zhang, Wenlin Wang, Yizhe Zhang, Lawrence Carin | https://www.aclweb.org/anthology/P19-1512.pdf |
| 2019 | ACL | # optim-sgd, optim-adadelta, reg-dropout, arch-rnn, arch-gru, arch-att, arch-coverage, arch-subword, arch-transformer, search-greedy, search-beam, nondif-minrisk, task-seq2seq | 8 | Bridging the Gap between Training and Inference for Neural Machine Translation | Wen Zhang, Yang Feng, Fandong Meng, Di You, Qun Liu | https://www.aclweb.org/anthology/P19-1426.pdf |
| 2019 | ACL | # optim-adam, reg-dropout, train-mll, train-transfer, arch-rnn, arch-att, arch-selfatt, arch-subword, arch-transformer, task-spanlab, task-lm, task-seq2seq | 4 | Cross-Lingual Training for Automatic Question Generation | Vishwajeet Kumar, Nitish Joshi, Arijit Mukherjee, Ganesh Ramakrishnan, Preethi Jyothi | https://www.aclweb.org/anthology/P19-1481.pdf |
| 2019 | ACL | # optim-adam, optim-projection, arch-rnn, arch-cnn, arch-att, arch-memo, arch-transformer, comb-ensemble, pre-glove, pre-skipthought, task-lm, task-relation | 0 | Global Textual Relation Embedding for Relational Understanding | Zhiyu Chen, Hanwen Zha, Honglei Liu, Wenhu Chen, Xifeng Yan, Yu Su | https://www.aclweb.org/anthology/P19-1127.pdf |
| 2019 | ACL | # reg-dropout, norm-layer, arch-rnn, arch-lstm, arch-att, arch-selfatt, arch-bilinear, arch-coverage, arch-transformer, comb-ensemble, pre-glove, pre-elmo, pre-bert, task-relation | 8 | Head-Driven Phrase Structure Grammar Parsing on Penn Treebank | Junru Zhou, Hai Zhao | https://www.aclweb.org/anthology/P19-1230.pdf |
| 2019 | ACL | # optim-sgd, reg-dropout, reg-labelsmooth, train-mtl, arch-rnn, arch-lstm, arch-att, arch-coverage, arch-transformer, comb-ensemble, search-beam, pre-bert, struct-cfg, task-textclass, task-lm, task-seq2seq, task-relation | 1 | Scalable Syntax-Aware Language Models Using Knowledge Distillation | Adhiguna Kuncoro, Chris Dyer, Laura Rimell, Stephen Clark, Phil Blunsom | https://www.aclweb.org/anthology/P19-1337.pdf |
| 2019 | ACL | # norm-layer, train-mll, train-transfer, arch-att, arch-subword, arch-transformer, task-textclass, task-spanlab, task-lm, task-seq2seq | 5 | Large-Scale Transfer Learning for Natural Language Generation | Sergey Golovanov, Rauf Kurbanov, Sergey Nikolenko, Kyryl Truskovskyi, Alexander Tselousov, Thomas Wolf | https://www.aclweb.org/anthology/P19-1608.pdf |
| 2019 | ACL | # arch-rnn, arch-cnn, arch-att, arch-selfatt, arch-copy, arch-subword, arch-transformer, search-beam, latent-vae, task-seq2seq | 3 | Imitation Learning for Non-Autoregressive Neural Machine Translation | Bingzhen Wei, Mingxuan Wang, Hao Zhou, Junyang Lin, Xu Sun | https://www.aclweb.org/anthology/P19-1125.pdf |
| 2019 | ACL | # optim-adam, reg-dropout, arch-rnn, arch-lstm, arch-att, arch-selfatt, arch-transformer, pre-glove, task-textpair, task-lm, task-seq2seq | 4 | Multimodal Transformer for Unaligned Multimodal Language Sequences | Yao-Hung Hubert Tsai, Shaojie Bai, Paul Pu Liang, J. Zico Kolter, Louis-Philippe Morency, Ruslan Salakhutdinov | https://www.aclweb.org/anthology/P19-1656.pdf |
| 2019 | ACL | # reg-labelsmooth, arch-att, arch-selfatt, arch-subword, arch-transformer, search-beam, latent-vae, task-seq2seq | 4 | Syntactically Supervised Transformers for Faster Neural Machine Translation | Nader Akoury, Kalpesh Krishna, Mohit Iyyer | https://www.aclweb.org/anthology/P19-1122.pdf |
| 2019 | ACL | # optim-adam, optim-projection, reg-dropout, pool-max, pool-mean, arch-rnn, arch-lstm, arch-bilstm, arch-att, arch-selfatt, arch-transformer, pre-bert, struct-crf, task-seq2seq | 1 | Dense Procedure Captioning in Narrated Instructional Videos | Botian Shi, Lei Ji, Yaobo Liang, Nan Duan, Peng Chen, Zhendong Niu, Ming Zhou | https://www.aclweb.org/anthology/P19-1641.pdf |
| 2019 | ACL | # optim-adam, optim-amsgrad, optim-projection, reg-stopping, train-mtl, train-transfer, arch-lstm, arch-bilstm, arch-att, arch-coverage, arch-transformer, pre-skipthought, pre-elmo, pre-bert, task-textclass, task-textpair, task-spanlab, task-lm, task-seq2seq, task-relation | 0 | Can You Tell Me How to Get Past Sesame Street? Sentence-Level Pretraining Beyond Language Modeling | Alex Wang, Jan Hula, Patrick Xia, Raghavendra Pappagari, R. Thomas McCoy, Roma Patel, Najoung Kim, Ian Tenney, Yinghui Huang, Katherin Yu, Shuning Jin, Berlin Chen, Benjamin Van Durme, Edouard Grave, Ellie Pavlick, Samuel R. Bowman | https://www.aclweb.org/anthology/P19-1439.pdf |
| 2019 | ACL | # optim-adam, reg-stopping, train-mtl, train-active, arch-att, arch-subword, arch-transformer, pre-fasttext, task-condlm | 14 | Learning from Dialogue after Deployment: Feed Yourself, Chatbot! | Braden Hancock, Antoine Bordes, Pierre-Emmanuel Mazare, Jason Weston | https://www.aclweb.org/anthology/P19-1358.pdf |
| 2019 | ACL | # train-mll, arch-rnn, arch-att, arch-subword, arch-transformer, search-beam, task-seqlab, task-seq2seq | 5 | Zero-Shot Cross-Lingual Abstractive Sentence Summarization through Teaching Generation and Attention | Xiangyu Duan, Mingming Yin, Min Zhang, Boxing Chen, Weihua Luo | https://www.aclweb.org/anthology/P19-1305.pdf |
| 2019 | ACL | # optim-adam, reg-dropout, train-augment, arch-att, arch-subword, arch-transformer, task-lm, task-seq2seq | 2 | Soft Contextual Data Augmentation for Neural Machine Translation | Fei Gao, Jinhua Zhu, Lijun Wu, Yingce Xia, Tao Qin, Xueqi Cheng, Wengang Zhou, Tie-Yan Liu | https://www.aclweb.org/anthology/P19-1555.pdf |
| 2019 | ACL | # optim-adam, reg-dropout, arch-rnn, arch-birnn, arch-lstm, arch-gnn, arch-att, arch-selfatt, arch-memo, arch-bilinear, arch-transformer, comb-ensemble, pre-fasttext, task-textclass, task-relation, task-graph | 2 | Graph-based Dependency Parsing with Graph Neural Networks | Tao Ji, Yuanbin Wu, Man Lan | https://www.aclweb.org/anthology/P19-1237.pdf |
| 2019 | ACL | # optim-adam, reg-dropout, norm-layer, train-transfer, arch-rnn, arch-cnn, arch-att, arch-selfatt, arch-coverage, arch-subword, arch-transformer, task-seq2seq, task-relation | 1 | Neural Machine Translation with Reordering Embeddings | Kehai Chen, Rui Wang, Masao Utiyama, Eiichiro Sumita | https://www.aclweb.org/anthology/P19-1174.pdf |
| 2019 | ACL | # train-transfer, arch-cnn, arch-att, arch-selfatt, arch-transformer, pre-bert, adv-train, task-relation | 6 | Extracting Multiple-Relations in One-Pass with Pre-Trained Transformers | Haoyu Wang, Ming Tan, Mo Yu, Shiyu Chang, Dakuo Wang, Kun Xu, Xiaoxiao Guo, Saloni Potdar | https://www.aclweb.org/anthology/P19-1132.pdf |
| 2019 | ACL | # optim-adam, reg-dropout, arch-lstm, arch-bilstm, arch-cnn, arch-att, arch-bilinear, arch-transformer, pre-glove, pre-bert, task-textclass, task-seqlab, task-seq2seq | 4 | DocRED: A Large-Scale Document-Level Relation Extraction Dataset | Yuan Yao, Deming Ye, Peng Li, Xu Han, Yankai Lin, Zhenghao Liu, Zhiyuan Liu, Lixin Huang, Jie Zhou, Maosong Sun | https://www.aclweb.org/anthology/P19-1074.pdf |
| 2019 | ACL | # optim-adam, optim-projection, arch-rnn, arch-lstm, arch-subword, arch-transformer, pre-fasttext, pre-skipthought, pre-bert, struct-cfg, task-textpair, task-lm, task-tree | 5 | Correlating Neural and Symbolic Representations of Language | Grzegorz Chrupała, Afra Alishahi | https://www.aclweb.org/anthology/P19-1283.pdf |
| 2019 | ACL | # train-mtl, pool-max, arch-lstm, arch-bilstm, arch-att, arch-subword, arch-transformer, pre-fasttext, pre-bert, task-lm, task-seq2seq, task-relation | 0 | Empirical Linguistic Study of Sentence Embeddings | Katarzyna Krasnowska-Kieraś, Alina Wróblewska | https://www.aclweb.org/anthology/P19-1573.pdf |
| 2019 | ACL | # optim-adam, reg-dropout, arch-rnn, arch-cnn, arch-att, arch-selfatt, arch-coverage, arch-subword, arch-transformer, search-beam, task-seq2seq, task-alignment | 3 | On the Word Alignment from Neural Machine Translation | Xintong Li, Guanlin Li, Lemao Liu, Max Meng, Shuming Shi | https://www.aclweb.org/anthology/P19-1124.pdf |
| 2019 | ACL | # optim-adam, reg-dropout, reg-labelsmooth, train-augment, arch-lstm, arch-att, arch-subword, arch-transformer, adv-examp, task-seq2seq | 5 | Robust Neural Machine Translation with Joint Textual and Phonetic Embedding | Hairong Liu, Mingbo Ma, Liang Huang, Hao Xiong, Zhongjun He | https://www.aclweb.org/anthology/P19-1291.pdf |
| 2019 | ACL | # optim-adam, optim-projection, reg-dropout, arch-lstm, arch-gru, arch-bigru, arch-cnn, arch-att, arch-selfatt, arch-transformer, pre-glove, pre-elmo, struct-crf, task-textclass | 0 | Observing Dialogue in Therapy: Categorizing and Forecasting Behavioral Codes | Jie Cao, Michael Tanana, Zac Imel, Eric Poitras, David Atkins, Vivek Srikumar | https://www.aclweb.org/anthology/P19-1563.pdf |
| 2019 | ACL | # optim-adam, reg-dropout, arch-lstm, arch-att, arch-residual, arch-gating, arch-subword, arch-transformer, comb-ensemble, search-beam, task-textclass, task-seq2seq | 1 | Depth Growing for Neural Machine Translation | Lijun Wu, Yiren Wang, Yingce Xia, Fei Tian, Fei Gao, Tao Qin, Jianhuang Lai, Tie-Yan Liu | https://www.aclweb.org/anthology/P19-1558.pdf |
| 2019 | ACL | # optim-adam, train-augment, arch-rnn, arch-lstm, arch-cnn, arch-transformer, search-beam, pre-glove, pre-elmo, pre-bert, task-lm | 1 | Relating Simple Sentence Representations in Deep Neural Networks and the Brain | Sharmistha Jat, Hao Tang, Partha Talukdar, Tom Mitchell | https://www.aclweb.org/anthology/P19-1507.pdf |
| 2019 | ACL | # optim-adam, init-glorot, reg-dropout, reg-stopping, reg-patience, norm-gradient, arch-rnn, arch-lstm, arch-gcnn, arch-cnn, arch-att, arch-memo, arch-bilinear, arch-transformer, comb-ensemble, pre-glove, task-tree | 2 | Inter-sentence Relation Extraction with Document-level Graph Convolutional Neural Network | Sunil Kumar Sahu, Fenia Christopoulou, Makoto Miwa, Sophia Ananiadou | https://www.aclweb.org/anthology/P19-1423.pdf |
| 2019 | ACL | # optim-adam, arch-lstm, arch-att, arch-selfatt, arch-bilinear, arch-subword, arch-transformer, comb-ensemble, pre-elmo, pre-bert, loss-margin, task-spanlab, task-lm | 2 | Enhancing Pre-Trained Language Representations with Rich Knowledge for Machine Reading Comprehension | An Yang, Quan Wang, Jing Liu, Kai Liu, Yajuan Lyu, Hua Wu, Qiaoqiao She, Sujian Li | https://www.aclweb.org/anthology/P19-1226.pdf |
| 2019 | ACL | # arch-rnn, arch-lstm, arch-att, arch-transformer, search-beam, pre-glove, pre-bert, task-lm, task-condlm, task-seq2seq | 1 | Comparison of Diverse Decoding Methods from Conditional Language Models | Daphne Ippolito, Reno Kriz, Joao Sedoc, Maria Kustikova, Chris Callison-Burch | https://www.aclweb.org/anthology/P19-1365.pdf |
| 2019 | ACL | # optim-sgd, optim-adam, train-transfer, arch-rnn, arch-lstm, arch-gru, arch-cnn, arch-att, arch-transformer, struct-cfg, adv-train, task-lm, task-seq2seq, task-relation, task-tree | 0 | Language Modeling with Shared Grammar | Yuyu Zhang, Le Song | https://www.aclweb.org/anthology/P19-1437.pdf |
| 2019 | ACL | # arch-rnn, arch-lstm, arch-bilstm, arch-att, arch-selfatt, arch-coverage, arch-transformer, task-textclass, task-extractive, task-lm, task-seq2seq | 4 | Multi-News: A Large-Scale Multi-Document Summarization Dataset and Abstractive Hierarchical Model | Alexander Fabbri, Irene Li, Tianwei She, Suyi Li, Dragomir Radev | https://www.aclweb.org/anthology/P19-1102.pdf |
| 2019 | ACL | # arch-rnn, arch-cnn, arch-att, arch-selfatt, arch-subword, arch-transformer, search-beam, task-seq2seq | 5 | Simultaneous Translation with Flexible Policy via Restricted Imitation Learning | Baigong Zheng, Renjie Zheng, Mingbo Ma, Liang Huang | https://www.aclweb.org/anthology/P19-1582.pdf |
| 2019 | ACL | # init-glorot, train-transfer, arch-lstm, arch-att, arch-coverage, arch-transformer, pre-glove, pre-elmo, pre-bert, adv-train, latent-topic, task-textclass, task-lm | 1 | Zero-Shot Entity Linking by Reading Entity Descriptions | Lajanugen Logeswaran, Ming-Wei Chang, Kenton Lee, Kristina Toutanova, Jacob Devlin, Honglak Lee | https://www.aclweb.org/anthology/P19-1335.pdf |
| 2019 | ACL | # optim-adam, arch-att, arch-selfatt, arch-subword, arch-transformer, task-lm, task-seq2seq | 30 | Analyzing Multi-Head Self-Attention: Specialized Heads Do the Heavy Lifting, the Rest Can Be Pruned | Elena Voita, David Talbot, Fedor Moiseev, Rico Sennrich, Ivan Titov | https://www.aclweb.org/anthology/P19-1580.pdf |
| 2019 | ACL | # optim-adam, reg-dropout, reg-decay, norm-batch, norm-gradient, train-mtl, activ-relu, arch-rnn, arch-lstm, arch-bilstm, arch-cnn, arch-att, arch-coverage, arch-transformer, pre-fasttext, task-textclass, task-lm | 1 | #YouToo? Detection of Personal Recollections of Sexual Harassment on Social Media | Arijit Ghosh Chowdhury, Ramit Sawhney, Rajiv Ratn Shah, Debanjan Mahata | https://www.aclweb.org/anthology/P19-1241.pdf |
| 2019 | ACL | # optim-projection, arch-rnn, arch-att, arch-selfatt, arch-subword, arch-transformer, search-beam, task-seq2seq, task-alignment | 8 | STACL: Simultaneous Translation with Implicit Anticipation and Controllable Latency using Prefix-to-Prefix Framework | Mingbo Ma, Liang Huang, Hao Xiong, Renjie Zheng, Kaibo Liu, Baigong Zheng, Chuanqiang Zhang, Zhongjun He, Hairong Liu, Xing Li, Hua Wu, Haifeng Wang | https://www.aclweb.org/anthology/P19-1289.pdf |
| 2019 | ACL | # optim-adam, reg-dropout, reg-labelsmooth, train-transfer, arch-att, arch-selfatt, arch-copy, arch-bilinear, arch-coverage, arch-transformer, search-greedy, search-beam, pre-glove, task-spanlab, task-seq2seq, task-tree | 1 | Complex Question Decomposition for Semantic Parsing | Haoyu Zhang, Jingjing Cai, Jianjun Xu, Ji Wang | https://www.aclweb.org/anthology/P19-1440.pdf |
| 2019 | ACL | # optim-adam, optim-projection, reg-dropout, norm-layer, arch-rnn, arch-att, arch-selfatt, arch-copy, arch-coverage, arch-transformer, search-beam, latent-vae, task-lm, task-seq2seq | 2 | Improving Abstractive Document Summarization with Salient Information Modeling | Yongjian You, Weijia Jia, Tianyi Liu, Wenmian Yang | https://www.aclweb.org/anthology/P19-1205.pdf |
| 2019 | ACL | # train-mtl, arch-rnn, arch-lstm, arch-att, arch-transformer, search-beam, pre-bert, task-seq2seq | 1 | Ranking Generated Summaries by Correctness: An Interesting but Challenging Application for Natural Language Inference | Tobias Falke, Leonardo F. R. Ribeiro, Prasetya Ajie Utama, Ido Dagan, Iryna Gurevych | https://www.aclweb.org/anthology/P19-1213.pdf |
| 2019 | ACL | # train-augment, arch-lstm, arch-att, arch-transformer, task-lm, task-condlm, task-seq2seq | 1 | Visual Story Post-Editing | Ting-Yao Hsu, Chieh-Yang Huang, Yen-Chia Hsu, Ting-Hao Huang | https://www.aclweb.org/anthology/P19-1658.pdf |
| 2019 | ACL | # arch-rnn, arch-lstm, arch-att, arch-selfatt, arch-coverage, arch-transformer, pre-bert, adv-examp, task-textclass, task-seq2seq | 2 | On the Robustness of Self-Attentive Models | Yu-Lun Hsieh, Minhao Cheng, Da-Cheng Juan, Wei Wei, Wen-Lian Hsu, Cho-Jui Hsieh | https://www.aclweb.org/anthology/P19-1147.pdf |
| 2019 | ACL | # optim-adam, reg-dropout, train-mtl, arch-rnn, arch-lstm, arch-cnn, arch-att, arch-transformer, pre-bert, struct-crf, latent-topic, task-seq2seq | 3 | Open-Domain Targeted Sentiment Analysis via Span-Based Extraction and Classification | Minghao Hu, Yuxing Peng, Zhen Huang, Dongsheng Li, Yiwei Lv | https://www.aclweb.org/anthology/P19-1051.pdf |
| 2019 | ACL | # optim-adam, pool-max, arch-att, arch-selfatt, arch-transformer, comb-ensemble, pre-bert, task-spanlab, task-lm | 7 | Multi-hop Reading Comprehension through Question Decomposition and Rescoring | Sewon Min, Victor Zhong, Luke Zettlemoyer, Hannaneh Hajishirzi | https://www.aclweb.org/anthology/P19-1613.pdf |
| 2019 | ACL | # optim-adam, reg-dropout, reg-decay, norm-layer, train-augment, arch-gnn, arch-att, arch-selfatt, arch-copy, arch-transformer, search-greedy, pre-glove, pre-bert, task-textclass, task-lm, task-seq2seq, task-tree, task-graph | 2 | Generating Logical Forms from Graph Representations of Text and Entities | Peter Shaw, Philip Massey, Angelica Chen, Francesco Piccinno, Yasemin Altun | https://www.aclweb.org/anthology/P19-1010.pdf |
| 2019 | ACL | # optim-adam, arch-memo, arch-coverage, arch-transformer, search-beam, pre-fasttext, pre-bert, task-textclass, task-lm, task-seq2seq | 5 | Towards Empathetic Open-domain Conversation Models: A New Benchmark and Dataset | Hannah Rashkin, Eric Michael Smith, Margaret Li, Y-Lan Boureau | https://www.aclweb.org/anthology/P19-1534.pdf |
| 2019 | ACL | # optim-adam, reg-dropout, norm-layer, train-augment, arch-rnn, arch-att, arch-selfatt, arch-residual, arch-subword, arch-transformer, search-beam, task-lm, task-seq2seq | 7 | Learning Deep Transformer Models for Machine Translation | Qiang Wang, Bei Li, Tong Xiao, Jingbo Zhu, Changliang Li, Derek F. Wong, Lidia S. Chao | https://www.aclweb.org/anthology/P19-1176.pdf |
| 2019 | ACL | # train-transfer, arch-subword, arch-transformer, comb-ensemble, task-lm, task-seq2seq | 1 | Domain Adaptive Inference for Neural Machine Translation | Danielle Saunders, Felix Stahlberg, Adrià de Gispert, Bill Byrne | https://www.aclweb.org/anthology/P19-1022.pdf |
| 2019 | ACL | # optim-adam, arch-att, arch-subword, arch-transformer, search-beam, pre-fasttext, task-spanlab, task-lm, task-seq2seq | 5 | ELI5: Long Form Question Answering | Angela Fan, Yacine Jernite, Ethan Perez, David Grangier, Jason Weston, Michael Auli | https://www.aclweb.org/anthology/P19-1346.pdf |
| 2019 | ACL | # optim-adam, reg-dropout, train-mll, arch-rnn, arch-lstm, arch-gru, arch-att, arch-selfatt, arch-subword, arch-transformer, task-seqlab, task-seq2seq | 4 | Assessing the Ability of Self-Attention Networks to Learn Word Order | Baosong Yang, Longyue Wang, Derek F. Wong, Lidia S. Chao, Zhaopeng Tu | https://www.aclweb.org/anthology/P19-1354.pdf |
| 2019 | ACL | # optim-adagrad, reg-dropout, arch-rnn, arch-att, arch-selfatt, arch-coverage, arch-subword, arch-transformer, search-beam, task-lm, task-condlm, task-seq2seq | 1 | Informative Image Captioning with External Sources of Information | Sanqiang Zhao, Piyush Sharma, Tomer Levinboim, Radu Soricut | https://www.aclweb.org/anthology/P19-1650.pdf |
| 2019 | ACL | # optim-adam, reg-dropout, train-transfer, arch-lstm, arch-gru, arch-cnn, arch-att, arch-selfatt, arch-memo, arch-subword, arch-transformer, pre-elmo, pre-bert, adv-train, task-textclass, task-lm, task-seq2seq, task-relation | 2 | Fine-tuning Pre-Trained Transformer Language Models to Distantly Supervised Relation Extraction | Christoph Alt, Marc Hübner, Leonhard Hennig | https://www.aclweb.org/anthology/P19-1134.pdf |
| 2019 | ACL | # optim-adam, optim-projection, arch-rnn, arch-lstm, arch-cnn, arch-att, arch-selfatt, arch-transformer, pre-glove, pre-elmo, pre-bert, task-textpair, task-lm | 20 | What Does BERT Learn about the Structure of Language? | Ganesh Jawahar, Benoît Sagot, Djamé Seddah | https://www.aclweb.org/anthology/P19-1356.pdf |
| 2019 | ACL | # optim-adam, reg-dropout, arch-rnn, arch-lstm, arch-bilstm, arch-att, arch-transformer, search-viterbi, pre-skipthought, pre-bert, struct-crf, latent-vae, task-seqlab, task-lm, task-seq2seq | 2 | Extracting Symptoms and their Status from Clinical Conversations | Nan Du, Kai Chen, Anjuli Kannan, Linh Tran, Yuhui Chen, Izhak Shafran | https://www.aclweb.org/anthology/P19-1087.pdf |
| 2019 | ACL | # optim-sgd, optim-adam, optim-projection, arch-lstm, arch-treelstm, arch-att, arch-selfatt, arch-subword, arch-transformer, task-seqlab, task-lm, task-seq2seq | 5 | Lattice Transformer for Speech Translation | Pei Zhang, Niyu Ge, Boxing Chen, Kai Fan | https://www.aclweb.org/anthology/P19-1649.pdf |
| 2019 | ACL | # reg-dropout, norm-batch, norm-gradient, activ-relu, arch-rnn, arch-lstm, arch-gcnn, arch-att, arch-selfatt, arch-transformer, task-lm, task-seq2seq, meta-arch | 194 | Transformer-XL: Attentive Language Models beyond a Fixed-Length Context | Zihang Dai, Zhilin Yang, Yiming Yang, Jaime Carbonell, Quoc Le, Ruslan Salakhutdinov | https://www.aclweb.org/anthology/P19-1285.pdf |
| 2019 | ACL | # optim-adam, reg-stopping, pool-max, arch-lstm, arch-gru, arch-bigru, arch-cnn, arch-att, arch-selfatt, arch-transformer, pre-glove, pre-elmo, pre-bert, task-condlm, task-seq2seq | 1 | Constructing Interpretive Spatio-Temporal Features for Multi-Turn Responses Selection | Junyu Lu, Chenbin Zhang, Zeying Xie, Guang Ling, Tom Chao Zhou, Zenglin Xu | https://www.aclweb.org/anthology/P19-1006.pdf |
| 2019 | ACL | # reg-dropout, norm-layer, pool-mean, arch-rnn, arch-lstm, arch-gru, arch-att, arch-selfatt, arch-memo, arch-subword, arch-transformer, search-beam, adv-train, task-seq2seq | 0 | Reference Network for Neural Machine Translation | Han Fu, Chenghao Liu, Jianling Sun | https://www.aclweb.org/anthology/P19-1287.pdf |
| 2019 | ACL | # optim-adam, reg-dropout, train-active, arch-att, arch-memo, arch-transformer, pre-word2vec, pre-fasttext, latent-vae, loss-nce, task-seq2seq | 1 | Improving Neural Conversational Models with Entropy-Based Data Filtering | Richárd Csáky, Patrik Purgai, Gábor Recski | https://www.aclweb.org/anthology/P19-1567.pdf |
| 2019 | ACL | # optim-adam, train-augment, arch-coverage, arch-transformer, pre-bert, task-lm, task-tree | 1 | The KnowRef Coreference Corpus: Removing Gender and Number Cues for Difficult Pronominal Anaphora Resolution | Ali Emami, Paul Trichelair, Adam Trischler, Kaheer Suleman, Hannes Schulz, Jackie Chi Kit Cheung | https://www.aclweb.org/anthology/P19-1386.pdf |
| 2019 | ACL | # optim-adam, arch-lstm, arch-bilstm, arch-cnn, arch-att, arch-selfatt, arch-transformer, pre-bert, struct-crf, task-seqlab, task-seq2seq | 0 | Scaling up Open Tagging from Tens to Thousands: Comprehension Empowered Attribute Value Extraction from Product Title | Huimin Xu, Wenting Wang, Xin Mao, Xinyu Jiang, Man Lan | https://www.aclweb.org/anthology/P19-1514.pdf |
| 2019 | ACL | # init-glorot, reg-dropout, train-mll, train-transfer, arch-rnn, arch-cnn, arch-att, arch-subword, arch-transformer, task-lm, task-seq2seq | 5 | Improved Zero-shot Neural Machine Translation via Ignoring Spurious Correlations | Jiatao Gu, Yong Wang, Kyunghyun Cho, Victor O.K. Li | https://www.aclweb.org/anthology/P19-1121.pdf |
| 2019 | ACL | # optim-adam, reg-dropout, train-mll, arch-lstm, arch-cnn, arch-att, arch-coverage, arch-transformer, search-beam, loss-margin, task-seq2seq | 4 | Neural Relation Extraction for Knowledge Base Enrichment | Bayu Distiawan Trisedya, Gerhard Weikum, Jianzhong Qi, Rui Zhang | https://www.aclweb.org/anthology/P19-1023.pdf |
| 2019 | ACL | # arch-att, arch-transformer, comb-ensemble, search-beam, pre-bert, task-spanlab, task-lm, task-seq2seq | 9 | Synthetic QA Corpora Generation with Roundtrip Consistency | Chris Alberti, Daniel Andor, Emily Pitler, Jacob Devlin, Michael Collins | https://www.aclweb.org/anthology/P19-1620.pdf |
| 2019 | ACL | # optim-adam, train-mtl, arch-rnn, arch-birnn, arch-lstm, arch-att, arch-coverage, arch-transformer, comb-ensemble, search-beam, pre-bert, task-textpair, task-extractive, task-spanlab, task-lm, task-seq2seq | 4 | Answering while Summarizing: Multi-task Learning for Multi-hop QA with Evidence Extraction | Kosuke Nishida, Kyosuke Nishida, Masaaki Nagata, Atsushi Otsuka, Itsumi Saito, Hisako Asano, Junji Tomita | https://www.aclweb.org/anthology/P19-1225.pdf |
| 2019 | ACL | # optim-adam, reg-dropout, arch-rnn, arch-gru, arch-att, arch-selfatt, arch-transformer, latent-topic, task-seq2seq | 2 | Cross-Modal Commentator: Automatic Machine Commenting Based on Cross-Modal Information | Pengcheng Yang, Zhihan Zhang, Fuli Luo, Lei Li, Chengyang Huang, Xu Sun | https://www.aclweb.org/anthology/P19-1257.pdf |
| 2019 | ACL | # reg-dropout, train-transfer, arch-att, arch-selfatt, arch-transformer, pre-bert, task-lm | 0 | BERT-based Lexical Substitution | Wangchunshu Zhou, Tao Ge, Ke Xu, Furu Wei, Ming Zhou | https://www.aclweb.org/anthology/P19-1328.pdf |
| 2019 | ACL | # reg-dropout, train-transfer, arch-rnn, arch-lstm, arch-gru, arch-bigru, arch-att, arch-transformer, comb-ensemble, pre-word2vec, task-seqlab, task-lm, task-seq2seq, task-relation | 4 | Open Vocabulary Learning for Neural Chinese Pinyin IME | Zhuosheng Zhang, Yafang Huang, Hai Zhao | https://www.aclweb.org/anthology/P19-1154.pdf |
| 2019 | ACL | # optim-adam, init-glorot, norm-batch, arch-rnn, arch-cnn, arch-att, arch-selfatt, arch-coverage, arch-transformer, pre-glove, task-textclass, task-textpair, task-lm, task-seq2seq | 4 | Lightweight and Efficient Neural Natural Language Processing with Quaternion Networks | Yi Tay, Aston Zhang, Anh Tuan Luu, Jinfeng Rao, Shuai Zhang, Shuohang Wang, Jie Fu, Siu Cheung Hui | https://www.aclweb.org/anthology/P19-1145.pdf |
| 2019 | ACL | # reg-dropout, reg-worddropout, reg-labelsmooth, arch-att, arch-residual, arch-memo, arch-transformer, comb-ensemble, pre-bert, task-lm, task-seq2seq | 0 | Cross-Sentence Grammatical Error Correction | Shamil Chollampatt, Weiqi Wang, Hwee Tou Ng | https://www.aclweb.org/anthology/P19-1042.pdf |
| 2019 | ACL | # optim-adam, reg-dropout, activ-relu, pool-mean, arch-rnn, arch-lstm, arch-cnn, arch-att, arch-bilinear, arch-transformer, pre-word2vec, pre-glove, pre-bert, task-spanlab | 0 | Open-Domain Why-Question Answering with Adversarial Learning to Encode Answer Texts | Jong-Hoon Oh, Kazuma Kadowaki, Julien Kloetzer, Ryu Iida, Kentaro Torisawa | https://www.aclweb.org/anthology/P19-1414.pdf |
| 2019 | ACL | # optim-adam, arch-rnn, arch-gru, arch-att, arch-copy, arch-transformer, task-seqlab, task-seq2seq, task-relation | 0 | Ensuring Readability and Data-fidelity using Head-modifier Templates in Deep Type Description Generation | Jiangjie Chen, Ao Wang, Haiyun Jiang, Suo Feng, Chenguang Li, Yanghua Xiao | https://www.aclweb.org/anthology/P19-1196.pdf |
| 2019 | ACL | # train-mtl, train-transfer, arch-coverage, arch-subword, arch-transformer, comb-ensemble, pre-elmo, pre-bert, task-textclass, task-spanlab, task-lm, task-seq2seq, task-relation | 4 | BAM! Born-Again Multi-Task Networks for Natural Language Understanding | Kevin Clark, Minh-Thang Luong, Urvashi Khandelwal, Christopher D. Manning, Quoc V. Le | https://www.aclweb.org/anthology/P19-1595.pdf |
| 2019 | ACL | # optim-adam, reg-dropout, train-mtl, arch-lstm, arch-att, arch-selfatt, arch-transformer, pre-paravec, pre-bert, adv-examp, task-spanlab | 2 | Retrieve, Read, Rerank: Towards End-to-End Multi-Document Reading Comprehension | Minghao Hu, Yuxing Peng, Zhen Huang, Dongsheng Li | https://www.aclweb.org/anthology/P19-1221.pdf |
| 2019 | ACL | # train-mll, arch-lstm, arch-att, arch-transformer, comb-ensemble, search-beam, loss-nce, task-seq2seq | 0 | Automatic Grammatical Error Correction for Sequence-to-sequence Text Generation: An Empirical Study | Tao Ge, Xingxing Zhang, Furu Wei, Ming Zhou | https://www.aclweb.org/anthology/P19-1609.pdf |
| 2019 | ACL | # optim-adam, init-glorot, reg-dropout, reg-worddropout, reg-stopping, train-mtl, arch-rnn, arch-gru, arch-bigru, arch-att, arch-selfatt, arch-transformer, pre-glove, pre-bert, task-textclass, task-lm, task-condlm | 1 | Neural Legal Judgment Prediction in English | Ilias Chalkidis, Ion Androutsopoulos, Nikolaos Aletras | https://www.aclweb.org/anthology/P19-1424.pdf |
| 2019 | ACL | # optim-sgd, optim-adam, reg-dropout, train-mll, train-transfer, pool-max, arch-rnn, arch-lstm, arch-att, arch-selfatt, arch-subword, arch-transformer, loss-margin, task-lm, task-seq2seq | 0 | Self-Supervised Neural Machine Translation | Dana Ruiter, Cristina España-Bonet, Josef van Genabith | https://www.aclweb.org/anthology/P19-1178.pdf |
| 2019 | ACL | # optim-adam, init-glorot, reg-dropout, arch-rnn, arch-lstm, arch-cnn, arch-att, arch-selfatt, arch-memo, arch-copy, arch-coverage, arch-subword, arch-transformer, pre-word2vec, pre-elmo, pre-bert, struct-hmm, latent-vae, task-extractive, task-lm, task-seq2seq, task-cloze | 0 | HIBERT: Document Level Pre-training of Hierarchical Bidirectional Transformers for Document Summarization | Xingxing Zhang, Furu Wei, Ming Zhou | https://www.aclweb.org/anthology/P19-1499.pdf |
| 2019 | ACL | # optim-adam, pool-max, arch-rnn, arch-lstm, arch-att, arch-selfatt, arch-transformer, pre-word2vec, pre-elmo, pre-bert, task-lm, task-seq2seq | 1 | Attention Is (not) All You Need for Commonsense Reasoning | Tassilo Klein, Moin Nabi | https://www.aclweb.org/anthology/P19-1477.pdf |
| 2019 | ACL | # optim-adam, arch-rnn, arch-lstm, arch-att, arch-coverage, arch-transformer, pre-bert, task-textpair, task-lm, task-tree | 37 | Right for the Wrong Reasons: Diagnosing Syntactic Heuristics in Natural Language Inference | Tom McCoy, Ellie Pavlick, Tal Linzen | https://www.aclweb.org/anthology/P19-1334.pdf |
| 2019 | ACL | # optim-adam, reg-dropout, arch-rnn, arch-att, arch-selfatt, arch-subword, arch-transformer, search-greedy, search-beam, nondif-reinforce, latent-vae, task-seq2seq | 3 | Retrieving Sequential Information for Non-Autoregressive Neural Machine Translation | Chenze Shao, Yang Feng, Jinchao Zhang, Fandong Meng, Xilin Chen, Jie Zhou | https://www.aclweb.org/anthology/P19-1288.pdf |
| 2019 | ACL | # optim-adam, train-transfer, arch-transformer, pre-bert, struct-crf, task-seqlab | 0 | Label-Agnostic Sequence Labeling by Copying Nearest Neighbors | Sam Wiseman, Karl Stratos | https://www.aclweb.org/anthology/P19-1533.pdf |
| 2019 | ACL | # optim-adam, reg-dropout, reg-labelsmooth, norm-layer, arch-rnn, arch-lstm, arch-bilstm, arch-gru, arch-att, arch-memo, arch-copy, arch-coverage, arch-subword, arch-transformer, task-extractive, task-seq2seq, task-tree | 1 | Keeping Notes: Conditional Natural Language Generation with a Scratchpad Encoder | Ryan Benmalek, Madian Khabsa, Suma Desu, Claire Cardie, Michele Banko | https://www.aclweb.org/anthology/P19-1407.pdf |
| 2019 | ACL | # optim-adam, train-mll, train-transfer, activ-tanh, pool-max, arch-rnn, arch-lstm, arch-gru, arch-cnn, arch-subword, arch-transformer, pre-bert, task-lm, task-condlm, task-seq2seq | 2 | Like a Baby: Visually Situated Neural Language Acquisition | Alexander Ororbia, Ankur Mali, Matthew Kelly, David Reitter | https://www.aclweb.org/anthology/P19-1506.pdf |
| 2019 | ACL | # optim-adam, optim-adadelta, norm-gradient, train-mtl, train-transfer, pool-max, arch-rnn, arch-lstm, arch-bilstm, arch-att, arch-subword, arch-transformer, search-beam, pre-fasttext, pre-glove, pre-elmo, pre-bert, task-textclass, task-lm, task-seq2seq | | | | |
1 |
Gated Embeddings in End-to-End Speech Recognition for Conversational-Context Fusion |
Suyoun Kim, Siddharth Dalmia, Florian Metze |
https://www.aclweb.org/anthology/P19-1107.pdf |
2019 |
ACL |
# optim-adam, reg-stopping, norm-layer, train-mtl, train-transfer, arch-lstm, arch-bilstm, arch-att, arch-selfatt, arch-memo, arch-copy, arch-transformer, search-beam, latent-vae, task-lm, task-seq2seq, task-tree |
4 |
Decomposable Neural Paraphrase Generation |
Zichao Li, Xin Jiang, Lifeng Shang, Qun Liu |
https://www.aclweb.org/anthology/P19-1332.pdf |
2019 |
ACL |
# optim-adam, train-mll, train-transfer, arch-gru, arch-att, arch-memo, arch-transformer, pre-glove, pre-bert, task-spanlab, task-lm, task-seq2seq, task-alignment |
3 |
XQA: A Cross-lingual Open-domain Question Answering Dataset |
Jiahua Liu, Yankai Lin, Zhiyuan Liu, Maosong Sun |
https://www.aclweb.org/anthology/P19-1227.pdf |
2019 |
ACL |
# optim-adam, norm-layer, arch-rnn, arch-lstm, arch-gru, arch-cnn, arch-att, arch-transformer, pre-glove, pre-skipthought, pre-bert, task-textpair, task-lm, task-tree |
0 |
Towards Lossless Encoding of Sentences |
Gabriele Prato, Mathieu Duchesneau, Sarath Chandar, Alain Tapp |
https://www.aclweb.org/anthology/P19-1153.pdf |
2019 |
ACL |
# optim-adam, optim-projection, arch-att, arch-selfatt, arch-transformer, search-beam, nondif-reinforce, task-seq2seq |
0 |
Look Harder: A Neural Machine Translation Model with Hard Attention |
Sathish Reddy Indurthi, Insoo Chung, Sangha Kim |
https://www.aclweb.org/anthology/P19-1290.pdf |
2019 |
ACL |
# optim-adam, train-mtl, train-transfer, train-augment, arch-rnn, arch-lstm, arch-att, arch-selfatt, arch-coverage, arch-subword, arch-transformer, pre-fasttext, adv-train, loss-svd, task-lm, task-seq2seq, task-lexicon, task-alignment |
1 |
Domain Adaptation of Neural Machine Translation by Lexicon Induction |
Junjie Hu, Mengzhou Xia, Graham Neubig, Jaime Carbonell |
https://www.aclweb.org/anthology/P19-1286.pdf |
2019 |
ACL |
# optim-adam, reg-dropout, reg-worddropout, reg-stopping, arch-att, arch-selfatt, arch-subword, arch-transformer, comb-ensemble, pre-fasttext, pre-bert, latent-vae, task-seqlab, task-spanlab, task-lm, task-seq2seq, task-tree |
2 |
Unsupervised Question Answering by Cloze Translation |
Patrick Lewis, Ludovic Denoyer, Sebastian Riedel |
https://www.aclweb.org/anthology/P19-1484.pdf |
2019 |
ACL |
# optim-adadelta, arch-lstm, arch-bilstm, arch-cnn, arch-transformer, pre-glove, struct-crf, task-seqlab |
4 |
Sequence-to-Nuggets: Nested Entity Mention Detection via Anchor-Region Networks |
Hongyu Lin, Yaojie Lu, Xianpei Han, Le Sun |
https://www.aclweb.org/anthology/P19-1511.pdf |
2019 |
ACL |
# reg-dropout, reg-patience, arch-rnn, arch-gru, arch-cnn, arch-att, arch-selfatt, arch-transformer, pre-glove, pre-bert, task-textclass, task-lm |
1 |
Large-Scale Multi-Label Text Classification on EU Legislation |
Ilias Chalkidis, Emmanouil Fergadiotis, Prodromos Malakasiotis, Ion Androutsopoulos |
https://www.aclweb.org/anthology/P19-1636.pdf |
2019 |
ACL |
# optim-adam, train-mtl, train-transfer, arch-rnn, arch-lstm, arch-att, arch-selfatt, arch-residual, arch-gating, arch-transformer, comb-ensemble, search-beam, pre-glove, pre-elmo, pre-bert, task-spanlab, task-lm, task-seq2seq |
5 |
Multi-style Generative Reading Comprehension |
Kyosuke Nishida, Itsumi Saito, Kosuke Nishida, Kazutoshi Shinoda, Atsushi Otsuka, Hisako Asano, Junji Tomita |
https://www.aclweb.org/anthology/P19-1220.pdf |
2019 |
ACL |
# optim-adam, optim-projection, arch-rnn, arch-lstm, arch-bilstm, arch-gru, arch-att, arch-transformer, pre-word2vec, pre-glove, pre-elmo, task-lm, task-seq2seq, task-lexicon |
0 |
LSTMEmbed: Learning Word and Sense Representations from a Large Semantically Annotated Corpus with Long Short-Term Memories |
Ignacio Iacobacci, Roberto Navigli |
https://www.aclweb.org/anthology/P19-1165.pdf |
2019 |
ACL |
# optim-adam, optim-projection, reg-dropout, train-transfer, arch-rnn, arch-lstm, arch-gru, arch-treelstm, arch-att, arch-selfatt, arch-subword, arch-transformer, search-beam, task-seqlab, task-seq2seq, task-relation |
5 |
Lattice-Based Transformer Encoder for Neural Machine Translation |
Fengshun Xiao, Jiangtong Li, Hai Zhao, Rui Wang, Kehai Chen |
https://www.aclweb.org/anthology/P19-1298.pdf |
2019 |
ACL |
# optim-adam, optim-projection, reg-stopping, train-mtl, arch-rnn, arch-cnn, arch-att, arch-selfatt, arch-residual, arch-transformer, search-beam, task-condlm, task-seq2seq |
4 |
Distilling Translations with Visual Awareness |
Julia Ive, Pranava Madhyastha, Lucia Specia |
https://www.aclweb.org/anthology/P19-1653.pdf |
2019 |
ACL |
# optim-adam, reg-dropout, reg-decay, reg-labelsmooth, train-mll, train-transfer, arch-att, arch-selfatt, arch-transformer, comb-ensemble, search-beam, pre-elmo, pre-bert, task-textclass, task-textpair, task-lm, task-seq2seq, task-cloze |
1 |
A Simple and Effective Approach to Automatic Post-Editing with Transfer Learning |
Gonçalo M. Correia, André F. T. Martins |
https://www.aclweb.org/anthology/P19-1292.pdf |
2019 |
ACL |
# optim-adam, init-glorot, reg-dropout, reg-labelsmooth, norm-layer, arch-rnn, arch-lstm, arch-treelstm, arch-gnn, arch-cnn, arch-att, arch-selfatt, arch-residual, arch-energy, arch-transformer, search-beam, task-seq2seq |
2 |
Self-Attentional Models for Lattice Inputs |
Matthias Sperber, Graham Neubig, Ngoc-Quan Pham, Alex Waibel |
https://www.aclweb.org/anthology/P19-1115.pdf |
2019 |
ACL |
# optim-adam, reg-stopping, reg-patience, reg-decay, arch-att, arch-coverage, arch-transformer, comb-ensemble, pre-elmo, pre-bert, task-textpair, task-lm |
1 |
GEAR: Graph-based Evidence Aggregating and Reasoning for Fact Verification |
Jie Zhou, Xu Han, Cheng Yang, Zhiyuan Liu, Lifeng Wang, Changcheng Li, Maosong Sun |
https://www.aclweb.org/anthology/P19-1085.pdf |
2019 |
ACL |
# optim-adam, reg-dropout, train-mll, train-transfer, arch-rnn, arch-att, arch-selfatt, arch-subword, arch-transformer, search-beam, pre-fasttext, adv-train, task-lm, task-seq2seq, task-lexicon |
5 |
Effective Cross-lingual Transfer of Neural Machine Translation Models without Shared Vocabularies |
Yunsu Kim, Yingbo Gao, Hermann Ney |
https://www.aclweb.org/anthology/P19-1120.pdf |
2019 |
ACL |
# optim-adam, optim-projection, norm-layer, train-mll, arch-rnn, arch-lstm, arch-att, arch-selfatt, arch-coverage, arch-transformer, pre-bert, task-seq2seq |
9 |
Semantically Conditioned Dialog Response Generation via Hierarchical Disentangled Self-Attention |
Wenhu Chen, Jianshu Chen, Pengda Qin, Xifeng Yan, William Yang Wang |
https://www.aclweb.org/anthology/P19-1360.pdf |
2019 |
ACL |
# optim-sgd, reg-dropout, reg-worddropout, arch-rnn, arch-att, arch-selfatt, arch-transformer, pre-fasttext, pre-bert, latent-vae, task-textclass, task-lm, task-seq2seq |
6 |
Style Transformer: Unpaired Text Style Transfer without Disentangled Latent Representation |
Ning Dai, Jianze Liang, Xipeng Qiu, Xuanjing Huang |
https://www.aclweb.org/anthology/P19-1601.pdf |
2019 |
ACL |
# optim-sgd, optim-adam, norm-layer, arch-att, arch-selfatt, arch-coverage, arch-transformer, pre-bert |
8 |
Matching the Blanks: Distributional Similarity for Relation Learning |
Livio Baldini Soares, Nicholas FitzGerald, Jeffrey Ling, Tom Kwiatkowski |
https://www.aclweb.org/anthology/P19-1279.pdf |
2019 |
ACL |
# optim-adam, reg-dropout, arch-rnn, arch-gru, arch-att, arch-copy, arch-transformer, search-greedy, search-beam, nondif-reinforce, latent-topic, task-condlm, task-seq2seq |
2 |
Neural Keyphrase Generation via Reinforcement Learning with Adaptive Rewards |
Hou Pong Chan, Wang Chen, Lu Wang, Irwin King |
https://www.aclweb.org/anthology/P19-1208.pdf |
2019 |
ACL |
# optim-adagrad, reg-dropout, norm-layer, arch-rnn, arch-lstm, arch-bilstm, arch-gru, arch-cnn, arch-att, arch-gating, arch-memo, arch-transformer, pre-glove, pre-skipthought, task-textpair, task-lm, task-seq2seq |
0 |
You Only Need Attention to Traverse Trees |
Mahtab Ahmed, Muhammad Rifayat Samee, Robert E. Mercer |
https://www.aclweb.org/anthology/P19-1030.pdf |
2019 |
ACL |
# optim-sgd, optim-adam, optim-projection, train-mtl, train-mll, arch-att, arch-selfatt, arch-transformer, task-lm, task-seq2seq |
2 |
A Multi-Task Architecture on Relevance-based Neural Query Translation |
Sheikh Muhammad Sarwar, Hamed Bonab, James Allan |
https://www.aclweb.org/anthology/P19-1639.pdf |
2019 |
ACL |
# train-transfer, arch-cnn, arch-att, arch-selfatt, arch-coverage, arch-subword, arch-transformer, task-seq2seq, task-relation, task-alignment |
1 |
Sentence-Level Agreement for Neural Machine Translation |
Mingming Yang, Rui Wang, Kehai Chen, Masao Utiyama, Eiichiro Sumita, Min Zhang, Tiejun Zhao |
https://www.aclweb.org/anthology/P19-1296.pdf |
2019 |
ACL |
# optim-adam, reg-dropout, reg-patience, train-augment, arch-rnn, arch-lstm, arch-bilstm, arch-att, arch-transformer, adv-examp, latent-vae, task-lm, task-condlm, task-seq2seq |
0 |
Key Fact as Pivot: A Two-Stage Model for Low Resource Table-to-Text Generation |
Shuming Ma, Pengcheng Yang, Tianyu Liu, Peng Li, Jie Zhou, Xu Sun |
https://www.aclweb.org/anthology/P19-1197.pdf |
2019 |
ACL |
# optim-adam, arch-rnn, arch-gru, arch-att, arch-transformer, pre-word2vec, pre-bert |
4 |
Proactive Human-Machine Conversation with Explicit Conversation Goal |
Wenquan Wu, Zhen Guo, Xiangyang Zhou, Hua Wu, Xiyuan Zhang, Rongzhong Lian, Haifeng Wang |
https://www.aclweb.org/anthology/P19-1369.pdf |
2019 |
ACL |
# optim-adam, init-glorot, norm-layer, train-mtl, arch-rnn, arch-lstm, arch-cnn, arch-att, arch-selfatt, arch-transformer, pre-glove, pre-elmo, pre-bert, latent-topic, task-textclass, task-lm, task-seq2seq |
0 |
Text Categorization by Learning Predominant Sense of Words as Auxiliary Task |
Kazuya Shimura, Jiyi Li, Fumiyo Fukumoto |
https://www.aclweb.org/anthology/P19-1105.pdf |
2019 |
ACL |
# optim-adam, reg-dropout, reg-labelsmooth, arch-lstm, arch-cnn, arch-att, arch-coverage, arch-transformer, latent-topic, task-seq2seq |
3 |
Generating Summaries with Topic Templates and Structured Convolutional Decoders |
Laura Perez-Beltrachini, Yang Liu, Mirella Lapata |
https://www.aclweb.org/anthology/P19-1504.pdf |
2019 |
ACL |
# optim-adam, reg-dropout, train-mll, pool-max, arch-lstm, arch-bilstm, arch-subword, arch-transformer, comb-ensemble, pre-fasttext, loss-margin, task-seqlab, task-seq2seq |
15 |
Margin-based Parallel Corpus Mining with Multilingual Sentence Embeddings |
Mikel Artetxe, Holger Schwenk |
https://www.aclweb.org/anthology/P19-1309.pdf |
2019 |
ACL |
# optim-sgd, optim-adam, reg-dropout, pool-max, arch-rnn, arch-att, arch-transformer, pre-word2vec, task-context |
0 |
DeepSentiPeer: Harnessing Sentiment in Review Texts to Recommend Peer Review Decisions |
Tirthankar Ghosal, Rajeev Verma, Asif Ekbal, Pushpak Bhattacharyya |
https://www.aclweb.org/anthology/P19-1106.pdf |
2019 |
ACL |
# optim-adam, reg-dropout, arch-rnn, arch-lstm, arch-bilstm, arch-cnn, arch-att, arch-transformer, pre-bert, struct-crf, task-seqlab, task-lm |
3 |
Merge and Label: A Novel Neural Network Architecture for Nested NER |
Joseph Fisher, Andreas Vlachos |
https://www.aclweb.org/anthology/P19-1585.pdf |
2019 |
ACL |
# optim-adam, optim-projection, reg-dropout, reg-stopping, reg-patience, train-active, arch-lstm, arch-cnn, arch-att, arch-transformer, pre-glove, pre-bert, struct-cfg, adv-train, latent-vae, latent-topic, task-textclass, task-lm |
1 |
Variational Pretraining for Semi-supervised Text Classification |
Suchin Gururangan, Tam Dang, Dallas Card, Noah A. Smith |
https://www.aclweb.org/anthology/P19-1590.pdf |
2019 |
ACL |
# optim-adam, reg-dropout, reg-worddropout, reg-stopping, reg-patience, reg-labelsmooth, norm-layer, arch-rnn, arch-att, arch-subword, arch-transformer, search-beam, task-seqlab, task-lm, task-seq2seq, task-alignment |
5 |
Revisiting Low-Resource Neural Machine Translation: A Case Study |
Rico Sennrich, Biao Zhang |
https://www.aclweb.org/anthology/P19-1021.pdf |
2019 |
ACL |
# optim-adam, arch-rnn, arch-gru, arch-att, arch-selfatt, arch-memo, arch-transformer, pre-glove, pre-bert, nondif-reinforce, task-spanlab |
1 |
Episodic Memory Reader: Learning What to Remember for Question Answering from Streaming Data |
Moonsu Han, Minki Kang, Hyunwoo Jung, Sung Ju Hwang |
https://www.aclweb.org/anthology/P19-1434.pdf |
2019 |
ACL |
# optim-adam, arch-cnn, arch-att, arch-selfatt, arch-copy, arch-transformer, pre-bert, task-seq2seq |
5 |
Scoring Sentence Singletons and Pairs for Abstractive Summarization |
Logan Lebanoff, Kaiqiang Song, Franck Dernoncourt, Doo Soon Kim, Seokhwan Kim, Walter Chang, Fei Liu |
https://www.aclweb.org/anthology/P19-1209.pdf |
2019 |
ACL |
# optim-adam, optim-projection, reg-dropout, train-mll, arch-rnn, arch-lstm, arch-bilstm, arch-cnn, arch-att, arch-subword, arch-transformer, comb-ensemble, pre-fasttext, pre-glove, pre-elmo, pre-bert, struct-crf, adv-train, task-textclass, task-seqlab, task-lm, task-seq2seq |
1 |
Sequence Tagging with Contextual and Non-Contextual Subword Representations: A Multilingual Evaluation |
Benjamin Heinzerling, Michael Strube |
https://www.aclweb.org/anthology/P19-1027.pdf |
2019 |
ACL |
# optim-adam, optim-projection, reg-dropout, reg-norm, reg-labelsmooth, train-mll, arch-att, arch-selfatt, arch-subword, arch-transformer, pre-bert, task-seqlab, task-seq2seq, task-relation |
3 |
Leveraging Local and Global Patterns for Self-Attention Networks |
Mingzhou Xu, Derek F. Wong, Baosong Yang, Yue Zhang, Lidia S. Chao |
https://www.aclweb.org/anthology/P19-1295.pdf |
2019 |
ACL |
# optim-adam, arch-att, arch-selfatt, arch-transformer, pre-bert, task-textpair, task-seqlab, task-graph |
4 |
Identification of Tasks, Datasets, Evaluation Metrics, and Numeric Scores for Scientific Leaderboards Construction |
Yufang Hou, Charles Jochim, Martin Gleize, Francesca Bonin, Debasis Ganguly |
https://www.aclweb.org/anthology/P19-1513.pdf |
2019 |
ACL |
# optim-adam, norm-layer, train-transfer, pool-max, arch-rnn, arch-lstm, arch-gru, arch-att, arch-selfatt, arch-subword, arch-transformer, search-beam, task-lm, task-seq2seq |
1 |
A Compact and Language-Sensitive Multilingual Translation Method |
Yining Wang, Long Zhou, Jiajun Zhang, Feifei Zhai, Jingfang Xu, Chengqing Zong |
https://www.aclweb.org/anthology/P19-1117.pdf |
2019 |
ACL |
# train-mtl, pool-mean, arch-rnn, arch-lstm, arch-att, arch-transformer, pre-word2vec, pre-glove, pre-bert, task-textclass, task-textpair, task-extractive, task-seq2seq |
3 |
Searching for Effective Neural Extractive Summarization: What Works and What’s Next |
Ming Zhong, Pengfei Liu, Danqing Wang, Xipeng Qiu, Xuanjing Huang |
https://www.aclweb.org/anthology/P19-1100.pdf |
2019 |
ACL |
# arch-rnn, arch-lstm, arch-att, arch-transformer, pre-elmo, pre-bert, latent-vae, task-lm, task-seq2seq |
1 |
Coreference Resolution with Entity Equalization |
Ben Kantor, Amir Globerson |
https://www.aclweb.org/anthology/P19-1066.pdf |
2019 |
ACL |
# optim-adam, optim-adagrad, arch-rnn, arch-lstm, arch-att, arch-selfatt, arch-memo, arch-subword, arch-transformer, search-beam, task-seq2seq, task-tree |
1 |
Improving Multi-turn Dialogue Modelling with Utterance ReWriter |
Hui Su, Xiaoyu Shen, Rongzhi Zhang, Fei Sun, Pengwei Hu, Cheng Niu, Jie Zhou |
https://www.aclweb.org/anthology/P19-1003.pdf |
2019 |
ACL |
# optim-adam, optim-projection, norm-layer, arch-rnn, arch-lstm, arch-gru, arch-att, arch-transformer, pre-bert |
5 |
SUMBT: Slot-Utterance Matching for Universal and Scalable Belief Tracking |
Hwaran Lee, Jinsik Lee, Tae-Yoon Kim |
https://www.aclweb.org/anthology/P19-1546.pdf |
2019 |
ACL |
# optim-sgd, optim-adam, optim-projection, reg-dropout, reg-stopping, norm-layer, arch-lstm, arch-bilstm, arch-att, arch-selfatt, arch-coverage, arch-subword, arch-transformer, pre-fasttext, task-lm |
13 |
COMET: Commonsense Transformers for Automatic Knowledge Graph Construction |
Antoine Bosselut, Hannah Rashkin, Maarten Sap, Chaitanya Malaviya, Asli Celikyilmaz, Yejin Choi |
https://www.aclweb.org/anthology/P19-1470.pdf |
2019 |
ACL |
# optim-sgd, optim-adam, reg-dropout, train-mtl, train-transfer, arch-lstm, arch-bilstm, arch-gru, arch-att, arch-selfatt, arch-transformer, pre-elmo, pre-bert, task-textclass, task-textpair, task-spanlab, task-lm, task-cloze |
116 |
Multi-Task Deep Neural Networks for Natural Language Understanding |
Xiaodong Liu, Pengcheng He, Weizhu Chen, Jianfeng Gao |
https://www.aclweb.org/anthology/P19-1441.pdf |
2019 |
ACL |
# optim-adam, init-glorot, train-mll, train-augment, arch-att, arch-selfatt, arch-residual, arch-transformer, struct-hmm, adv-train, latent-vae, task-textclass, task-lm, task-seq2seq |
4 |
Unsupervised Paraphrasing without Translation |
Aurko Roy, David Grangier |
https://www.aclweb.org/anthology/P19-1605.pdf |
2019 |
ACL |
# optim-adam, train-mtl, train-mll, pool-max, arch-rnn, arch-lstm, arch-bilstm, arch-cnn, arch-att, arch-selfatt, arch-coverage, arch-subword, arch-transformer, pre-elmo, pre-bert, task-textclass, task-textpair, task-seqlab, task-lm, task-seq2seq, task-cloze |
27 |
ERNIE: Enhanced Language Representation with Informative Entities |
Zhengyan Zhang, Xu Han, Zhiyuan Liu, Xin Jiang, Maosong Sun, Qun Liu |
https://www.aclweb.org/anthology/P19-1139.pdf |
2019 |
ACL |
# optim-sgd, optim-adam, arch-memo, arch-transformer, pre-bert, task-textclass, task-textpair, task-seq2seq, task-tree, meta-init |
3 |
Personalizing Dialogue Agents via Meta-Learning |
Andrea Madotto, Zhaojiang Lin, Chien-Sheng Wu, Pascale Fung |
https://www.aclweb.org/anthology/P19-1542.pdf |
2019 |
ACL |
# optim-adam, optim-projection, reg-dropout, reg-labelsmooth, arch-rnn, arch-att, arch-residual, arch-subword, arch-transformer, task-lm, task-seq2seq |
2 |
Shared-Private Bilingual Word Embeddings for Neural Machine Translation |
Xuebo Liu, Derek F. Wong, Yang Liu, Lidia S. Chao, Tong Xiao, Jingbo Zhu |
https://www.aclweb.org/anthology/P19-1352.pdf |
2019 |
ACL |
# optim-sgd, reg-dropout, reg-worddropout, reg-norm, reg-decay, train-mtl, arch-rnn, arch-lstm, arch-transformer, task-lm, task-seq2seq |
2 |
Improved Language Modeling by Decoding the Past |
Siddhartha Brahma |
https://www.aclweb.org/anthology/P19-1142.pdf |
2019 |
ACL |
# train-mll, train-transfer, train-augment, arch-att, arch-selfatt, arch-coverage, arch-subword, arch-transformer, pre-fasttext, loss-svd, task-seqlab, task-lm, task-seq2seq, task-lexicon |
1 |
Generalized Data Augmentation for Low-Resource Translation |
Mengzhou Xia, Xiang Kong, Antonios Anastasopoulos, Graham Neubig |
https://www.aclweb.org/anthology/P19-1579.pdf |
2019 |
ACL |
# arch-lstm, arch-gru, arch-treelstm, arch-att, arch-selfatt, arch-subword, arch-transformer, comb-ensemble, search-beam, pre-fasttext, pre-bert, task-seq2seq |
2 |
Generating Diverse Translations with Sentence Codes |
Raphael Shu, Hideki Nakayama, Kyunghyun Cho |
https://www.aclweb.org/anthology/P19-1177.pdf |
2019 |
ACL |
# optim-sgd, optim-adam, optim-projection, reg-dropout, norm-layer, pool-max, arch-rnn, arch-lstm, arch-bilstm, arch-gru, arch-att, arch-selfatt, arch-gating, arch-transformer, comb-ensemble, pre-fasttext, task-spanlab, task-seq2seq |
1 |
Token-level Dynamic Self-Attention Network for Multi-Passage Reading Comprehension |
Yimeng Zhuang, Huadong Wang |
https://www.aclweb.org/anthology/P19-1218.pdf |
2019 |
ACL |
# arch-rnn, arch-lstm, arch-att, arch-selfatt, arch-subword, arch-transformer, search-beam, adv-examp, adv-train, task-textclass, task-seq2seq |
1 |
Effective Adversarial Regularization for Neural Machine Translation |
Motoki Sato, Jun Suzuki, Shun Kiyono |
https://www.aclweb.org/anthology/P19-1020.pdf |
2019 |
ACL |
# optim-adam, arch-rnn, arch-lstm, arch-transformer, pre-elmo, pre-bert, task-textpair, task-lm, task-cloze |
37 |
BERT Rediscovers the Classical NLP Pipeline |
Ian Tenney, Dipanjan Das, Ellie Pavlick |
https://www.aclweb.org/anthology/P19-1452.pdf |
2019 |
ACL |
# arch-att, arch-selfatt, arch-subword, arch-transformer, pre-fasttext, pre-glove, pre-bert, latent-topic, task-lm |
0 |
Context-specific Language Modeling for Human Trafficking Detection from Online Advertisements |
Saeideh Shahrokh Esfahani, Michael J. Cafarella, Maziyar Baran Pouyan, Gregory DeAngelo, Elena Eneva, Andy E. Fano |
https://www.aclweb.org/anthology/P19-1114.pdf |
2019 |
ACL |
# optim-sgd, reg-dropout, arch-rnn, arch-lstm, arch-gcnn, arch-cnn, arch-att, arch-selfatt, arch-gating, arch-memo, arch-transformer, task-lm, task-seq2seq, task-alignment, meta-arch |
1 |
Improving Neural Language Models by Segmenting, Attending, and Predicting the Future |
Hongyin Luo, Lan Jiang, Yonatan Belinkov, James Glass |
https://www.aclweb.org/anthology/P19-1144.pdf |
2019 |
ACL |
# optim-adam, reg-norm, train-mtl, train-mll, arch-lstm, arch-att, arch-selfatt, arch-coverage, arch-transformer, pre-glove, pre-skipthought, pre-elmo, pre-bert, loss-svd, task-textclass, task-textpair, task-lm |
0 |
EigenSent: Spectral sentence embeddings using higher-order Dynamic Mode Decomposition |
Subhradeep Kayal, George Tsatsaronis |
https://www.aclweb.org/anthology/P19-1445.pdf |
2019 |
ACL |
# train-transfer, arch-rnn, arch-lstm, arch-cnn, arch-att, arch-transformer, pre-bert, task-textclass, task-spanlab, task-lm, task-seq2seq, task-cloze |
2 |
Exploring Pre-trained Language Models for Event Extraction and Generation |
Sen Yang, Dawei Feng, Linbo Qiao, Zhigang Kan, Dongsheng Li |
https://www.aclweb.org/anthology/P19-1522.pdf |
2019 |
ACL |
# optim-adam, reg-dropout, arch-rnn, arch-lstm, arch-att, arch-transformer, task-condlm, task-seq2seq |
0 |
Expressing Visual Relationships via Language |
Hao Tan, Franck Dernoncourt, Zhe Lin, Trung Bui, Mohit Bansal |
https://www.aclweb.org/anthology/P19-1182.pdf |
2019 |
ACL |
# train-transfer, arch-att, arch-selfatt, arch-copy, arch-subword, arch-transformer, task-seqlab, task-lm, task-seq2seq |
5 |
Strategies for Structuring Story Generation |
Angela Fan, Mike Lewis, Yann Dauphin |
https://www.aclweb.org/anthology/P19-1254.pdf |
2019 |
ACL |
# arch-lstm, arch-bilstm, arch-att, arch-transformer, pre-word2vec, pre-fasttext, pre-glove, pre-elmo, pre-bert, task-textpair, task-lm, task-cloze |
3 |
Classification and Clustering of Arguments with Contextualized Word Embeddings |
Nils Reimers, Benjamin Schiller, Tilman Beck, Johannes Daxenberger, Christian Stab, Iryna Gurevych |
https://www.aclweb.org/anthology/P19-1054.pdf |
2019 |
ACL |
# reg-dropout, arch-rnn, arch-att, arch-coverage, arch-subword, arch-transformer, search-beam, latent-vae, task-lm, task-seq2seq, task-alignment |
0 |
Reducing Word Omission Errors in Neural Machine Translation: A Contrastive Learning Approach |
Zonghan Yang, Yong Cheng, Yang Liu, Maosong Sun |
https://www.aclweb.org/anthology/P19-1623.pdf |
2019 |
ACL |
# optim-adam, arch-lstm, arch-att, arch-selfatt, arch-bilinear, arch-transformer, pre-elmo, pre-bert, task-seqlab, task-lm, task-seq2seq, task-relation, meta-arch |
76 |
Energy and Policy Considerations for Deep Learning in NLP |
Emma Strubell, Ananya Ganesh, Andrew McCallum |
https://www.aclweb.org/anthology/P19-1355.pdf |
2019 |
ACL |
# optim-adam, reg-dropout, reg-stopping, reg-patience, train-mtl, arch-rnn, arch-lstm, arch-att, arch-selfatt, arch-transformer, latent-vae, task-textpair, task-lm, task-seq2seq |
7 |
Do Neural Dialog Systems Use the Conversation History Effectively? An Empirical Study |
Chinnadhurai Sankar, Sandeep Subramanian, Chris Pal, Sarath Chandar, Yoshua Bengio |
https://www.aclweb.org/anthology/P19-1004.pdf |
2019 |
ACL |
# optim-adam, optim-projection, reg-dropout, arch-transformer, pre-word2vec, pre-glove, pre-bert, adv-feat, task-seq2seq |
1 |
Gender-preserving Debiasing for Pre-trained Word Embeddings |
Masahiro Kaneko, Danushka Bollegala |
https://www.aclweb.org/anthology/P19-1160.pdf |
2019 |
ACL |
# optim-adam, optim-adagrad, reg-dropout, reg-labelsmooth, norm-layer, pool-max, arch-rnn, arch-lstm, arch-att, arch-selfatt, arch-subword, arch-transformer, search-beam, pre-paravec, task-seq2seq |
9 |
Hierarchical Transformers for Multi-Document Summarization |
Yang Liu, Mirella Lapata |
https://www.aclweb.org/anthology/P19-1500.pdf |
2019 |
ACL |
# optim-adam, reg-stopping, train-mll, arch-att, arch-selfatt, arch-subword, arch-transformer, search-beam, task-seq2seq |
6 |
When a Good Translation is Wrong in Context: Context-Aware Machine Translation Improves on Deixis, Ellipsis, and Lexical Cohesion |
Elena Voita, Rico Sennrich, Ivan Titov |
https://www.aclweb.org/anthology/P19-1116.pdf |
2019 |
ACL |
# train-augment, train-parallel, arch-rnn, arch-att, arch-selfatt, arch-copy, arch-transformer, pre-glove, pre-bert, task-seqlab, task-spanlab, task-lm, task-seq2seq, task-relation |
0 |
Self-Attention Architectures for Answer-Agnostic Neural Question Generation |
Thomas Scialom, Benjamin Piwowarski, Jacopo Staiano |
https://www.aclweb.org/anthology/P19-1604.pdf |
2019 |
ACL |
# optim-adagrad, reg-dropout, arch-att, arch-selfatt, arch-transformer, task-lm, task-seq2seq |
11 |
Adaptive Attention Span in Transformers |
Sainbayar Sukhbaatar, Edouard Grave, Piotr Bojanowski, Armand Joulin |
https://www.aclweb.org/anthology/P19-1032.pdf |
2019 |
EMNLP |
# optim-adam, train-mtl, train-mll, arch-lstm, arch-bilstm, arch-att, arch-subword, arch-transformer, comb-ensemble, pre-glove, pre-elmo, pre-bert, latent-vae, task-textclass, task-seq2seq |
0 |
Modelling the interplay of metaphor and emotion through multitask learning |
Verna Dankers, Marek Rei, Martha Lewis, Ekaterina Shutova |
https://www.aclweb.org/anthology/D19-1227.pdf |
2019 |
EMNLP |
# optim-sgd, optim-projection, reg-dropout, reg-stopping, norm-layer, train-mll, train-transfer, train-parallel, arch-att, arch-selfatt, arch-coverage, arch-subword, arch-transformer, pre-bert, task-lm, task-seq2seq |
1 |
Simple, Scalable Adaptation for Neural Machine Translation |
Ankur Bapna, Orhan Firat |
https://www.aclweb.org/anthology/D19-1165.pdf |
2019 |
EMNLP |
# init-glorot, arch-rnn, arch-lstm, arch-gru, arch-bigru, arch-cnn, arch-att, arch-selfatt, arch-transformer, comb-ensemble, pre-bert, latent-vae, task-lm, task-seq2seq |
0 |
Parallel Iterative Edit Models for Local Sequence Transduction |
Abhijeet Awasthi, Sunita Sarawagi, Rasna Goyal, Sabyasachi Ghosh, Vihari Piratla |
https://www.aclweb.org/anthology/D19-1435.pdf |
2019 |
EMNLP |
# optim-adam, init-glorot, train-mtl, train-mll, arch-att, arch-selfatt, arch-copy, arch-subword, arch-transformer, search-beam, task-seq2seq |
1 |
NCLS: Neural Cross-Lingual Summarization |
Junnan Zhu, Qian Wang, Yining Wang, Yu Zhou, Jiajun Zhang, Shaonan Wang, Chengqing Zong |
https://www.aclweb.org/anthology/D19-1302.pdf |
2019 |
EMNLP |
# reg-stopping, train-transfer, arch-transformer, latent-topic, task-lm, task-seq2seq, task-relation |
2 |
Distributionally Robust Language Modeling |
Yonatan Oren, Shiori Sagawa, Tatsunori Hashimoto, Percy Liang |
https://www.aclweb.org/anthology/D19-1432.pdf |
2019 |
EMNLP |
# train-transfer, arch-rnn, arch-lstm, arch-att, arch-subword, arch-transformer, search-beam, pre-fasttext, pre-glove, pre-bert, task-textclass, task-lm, task-seq2seq |
0 |
“Transforming” Delete, Retrieve, Generate Approach for Controlled Text Style Transfer |
Akhilesh Sudhakar, Bhargav Upadhyay, Arjun Maheswaran |
https://www.aclweb.org/anthology/D19-1322.pdf |
2019 |
EMNLP |
# optim-adam, arch-rnn, arch-att, arch-coverage, arch-transformer, task-seq2seq |
3 |
Dynamic Past and Future for Neural Machine Translation |
Zaixiang Zheng, Shujian Huang, Zhaopeng Tu, Xin-Yu Dai, Jiajun Chen |
https://www.aclweb.org/anthology/D19-1086.pdf |
2019 |
EMNLP |
# train-mtl, train-transfer, arch-rnn, arch-att, arch-subword, arch-transformer, pre-skipthought, pre-bert, task-textclass, task-lm, task-seq2seq |
1 |
Improving Neural Story Generation by Targeted Common Sense Grounding |
Huanru Henry Mao, Bodhisattwa Prasad Majumder, Julian McAuley, Garrison Cottrell |
https://www.aclweb.org/anthology/D19-1615.pdf |
2019 |
EMNLP |
# optim-adam, optim-projection, reg-dropout, reg-decay, train-mll, train-transfer, pool-max, arch-lstm, arch-bilstm, arch-cnn, arch-coverage, arch-transformer, struct-crf, task-seqlab |
1 |
Low-Resource Name Tagging Learned with Weakly Labeled Data |
Yixin Cao, Zikun Hu, Tat-seng Chua, Zhiyuan Liu, Heng Ji |
https://www.aclweb.org/anthology/D19-1025.pdf |
2019 |
EMNLP |
# reg-dropout, train-augment, pool-max, arch-rnn, arch-lstm, arch-cnn, arch-att, arch-selfatt, arch-memo, arch-coverage, arch-transformer, pre-glove, task-textclass |
0 |
Improving Relation Extraction with Knowledge-attention |
Pengfei Li, Kezhi Mao, Xuefeng Yang, Qi Li |
https://www.aclweb.org/anthology/D19-1022.pdf |
2019 |
EMNLP |
# pool-max, arch-rnn, arch-lstm, arch-bilstm, arch-gru, arch-att, arch-selfatt, arch-subword, arch-transformer, pre-skipthought, pre-bert, struct-crf, task-textclass, task-textpair, task-seqlab, task-lm, task-seq2seq, task-cloze |
0 |
UER: An Open-Source Toolkit for Pre-training Models |
Zhe Zhao, Hui Chen, Jinbin Zhang, Xin Zhao, Tao Liu, Wei Lu, Xi Chen, Haotang Deng, Qi Ju, Xiaoyong Du |
https://www.aclweb.org/anthology/D19-3041.pdf |
2019 |
EMNLP |
# optim-adam, reg-dropout, train-mll, arch-att, arch-subword, arch-transformer, pre-fasttext, pre-bert, adv-train, task-lm, task-seq2seq, task-alignment |
0 |
Explicit Cross-lingual Pre-training for Unsupervised Machine Translation |
Shuo Ren, Yu Wu, Shujie Liu, Ming Zhou, Shuai Ma |
https://www.aclweb.org/anthology/D19-1071.pdf |
2019 |
EMNLP |
# reg-dropout, pool-max, arch-lstm, arch-att, arch-transformer, comb-ensemble, pre-elmo, pre-bert, task-textpair, task-spanlab, task-lm, task-seq2seq |
0 |
Aggregating Bidirectional Encoder Representations Using MatchLSTM for Sequence Matching |
Bo Shao, Yeyun Gong, Weizhen Qi, Nan Duan, Xiaola Lin |
https://www.aclweb.org/anthology/D19-1626.pdf |
2019 |
EMNLP |
# arch-rnn, arch-gru, arch-att, arch-transformer, pre-bert, task-textclass, task-lm |
1 |
Trouble on the Horizon: Forecasting the Derailment of Online Conversations as they Develop |
Jonathan P. Chang, Cristian Danescu-Niculescu-Mizil |
https://www.aclweb.org/anthology/D19-1481.pdf |
2019 |
EMNLP |
# optim-adam, optim-projection, reg-dropout, reg-stopping, reg-decay, reg-labelsmooth, arch-rnn, arch-att, arch-selfatt, arch-coverage, arch-subword, arch-transformer, task-lm, task-seq2seq, task-alignment |
0 |
Jointly Learning to Align and Translate with Transformer Models |
Sarthak Garg, Stephan Peitz, Udhyakumar Nallasamy, Matthias Paulik |
https://www.aclweb.org/anthology/D19-1453.pdf |
2019 |
EMNLP |
# optim-sgd, optim-adam, norm-layer, arch-rnn, arch-lstm, arch-att, arch-subword, arch-transformer, task-seq2seq |
0 |
Combining Global Sparse Gradients with Local Gradients in Distributed Neural Network Training |
Alham Fikri Aji, Kenneth Heafield, Nikolay Bogoychev |
https://www.aclweb.org/anthology/D19-1373.pdf |
2019 |
EMNLP |
# init-glorot, reg-dropout, reg-stopping, norm-gradient, train-transfer, train-augment, train-parallel, arch-rnn, arch-att, arch-selfatt, arch-copy, arch-subword, arch-transformer, comb-ensemble, search-beam, pre-elmo, pre-bert, task-textclass, task-textpair, task-spanlab, task-lm, task-seq2seq |
2 |
Denoising based Sequence-to-Sequence Pre-training for Text Generation |
Liang Wang, Wei Zhao, Ruoyu Jia, Sujian Li, Jingming Liu |
https://www.aclweb.org/anthology/D19-1412.pdf |
2019 |
EMNLP |
# optim-adam, arch-rnn, arch-att, arch-selfatt, arch-subword, arch-transformer, task-seq2seq |
0 |
Encoders Help You Disambiguate Word Senses in Neural Machine Translation |
Gongbo Tang, Rico Sennrich, Joakim Nivre |
https://www.aclweb.org/anthology/D19-1149.pdf |
2019 |
EMNLP |
# optim-adam, reg-decay, train-transfer, arch-rnn, arch-att, arch-selfatt, arch-memo, arch-subword, arch-transformer, pre-word2vec, pre-glove, pre-elmo, pre-bert, loss-nce, task-textpair, task-alignment |
0 |
A Gated Self-attention Memory Network for Answer Selection |
Tuan Lai, Quan Hung Tran, Trung Bui, Daisuke Kihara |
https://www.aclweb.org/anthology/D19-1610.pdf |
2019 |
EMNLP |
# arch-rnn, arch-lstm, arch-recnn, arch-att, arch-selfatt, arch-subword, arch-transformer, task-seq2seq |
2 |
Towards Better Modeling Hierarchical Structure for Self-Attention with Ordered Neurons |
Jie Hao, Xing Wang, Shuming Shi, Jinfeng Zhang, Zhaopeng Tu |
https://www.aclweb.org/anthology/D19-1135.pdf |
2019 |
EMNLP |
# optim-sgd, optim-adam, reg-dropout, train-augment, arch-subword, arch-transformer, task-seq2seq |
0 |
Understanding Data Augmentation in Neural Machine Translation: Two Perspectives towards Generalization |
Guanlin Li, Lemao Liu, Guoping Huang, Conghui Zhu, Tiejun Zhao |
https://www.aclweb.org/anthology/D19-1570.pdf |
2019 |
EMNLP |
# train-mll, arch-transformer, comb-ensemble, pre-fasttext, pre-bert, adv-train, loss-cca, loss-svd, task-seq2seq, task-relation, task-lexicon |
1 |
Weakly-Supervised Concept-based Adversarial Learning for Cross-lingual Word Embeddings |
Haozhou Wang, James Henderson, Paola Merlo |
https://www.aclweb.org/anthology/D19-1450.pdf |
2019 |
EMNLP |
# reg-dropout, train-mtl, arch-rnn, arch-lstm, arch-att, arch-selfatt, arch-transformer, pre-glove, pre-bert, task-lm |
1 |
Quantity doesn’t buy quality syntax with neural language models |
Marten van Schijndel, Aaron Mueller, Tal Linzen |
https://www.aclweb.org/anthology/D19-1592.pdf |
2019 |
EMNLP |
# optim-adam, reg-decay, norm-layer, train-mll, pool-mean, arch-att, arch-selfatt, arch-transformer, comb-ensemble, pre-bert, task-spanlab, task-lm, task-seq2seq |
0 |
Cross-Lingual Machine Reading Comprehension |
Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Shijin Wang, Guoping Hu |
https://www.aclweb.org/anthology/D19-1169.pdf |
2019 |
EMNLP |
# optim-adam, init-glorot, train-mtl, train-mll, train-active, arch-rnn, arch-birnn, arch-lstm, arch-bilstm, arch-cnn, arch-att, arch-selfatt, arch-bilinear, arch-transformer, search-viterbi, pre-glove, struct-crf, adv-train, task-textclass, task-seqlab, task-lm, task-seq2seq, task-relation |
0 |
Hierarchically-Refined Label Attention Network for Sequence Labeling |
Leyang Cui, Yue Zhang |
https://www.aclweb.org/anthology/D19-1422.pdf |
2019 |
EMNLP |
# optim-adam, optim-projection, train-mtl, arch-att, arch-memo, arch-bilinear, arch-transformer, search-beam, pre-bert, task-seq2seq, task-tree |
0 |
Multi-Task Learning for Conversational Question Answering over a Large-Scale Knowledge Base |
Tao Shen, Xiubo Geng, Tao Qin, Daya Guo, Duyu Tang, Nan Duan, Guodong Long, Daxin Jiang |
https://www.aclweb.org/anthology/D19-1248.pdf |
2019 |
EMNLP |
# optim-adam, reg-dropout, train-transfer, arch-rnn, arch-lstm, arch-bilstm, arch-gru, arch-cnn, arch-att, arch-selfatt, arch-bilinear, arch-transformer, pre-word2vec, pre-bert, task-seqlab, task-seq2seq, task-relation, task-tree |
0 |
A Syntax-aware Multi-task Learning Framework for Chinese Semantic Role Labeling |
Qingrong Xia, Zhenghua Li, Min Zhang |
https://www.aclweb.org/anthology/D19-1541.pdf |
2019 |
EMNLP |
# optim-adam, optim-projection, norm-layer, arch-att, arch-selfatt, arch-residual, arch-subword, arch-transformer, task-seq2seq |
1 |
Synchronously Generating Two Languages with Interactive Decoding |
Yining Wang, Jiajun Zhang, Long Zhou, Yuchen Liu, Chengqing Zong |
https://www.aclweb.org/anthology/D19-1330.pdf |
2019 |
EMNLP |
# optim-adam, optim-projection, train-mll, train-transfer, pool-mean, arch-att, arch-transformer, pre-fasttext, pre-bert, loss-cca, task-spanlab, task-lm, task-seq2seq, task-lexicon |
0 |
Zero-shot Reading Comprehension by Cross-lingual Transfer Learning with Multi-lingual Language Representation Model |
Tsung-Yuan Hsu, Chi-Liang Liu, Hung-yi Lee |
https://www.aclweb.org/anthology/D19-1607.pdf |
2019 |
EMNLP |
# optim-adam, reg-dropout, reg-decay, norm-layer, arch-rnn, arch-lstm, arch-att, arch-selfatt, arch-subword, arch-transformer, search-beam, pre-bert, task-textclass, task-seqlab, task-lm, task-seq2seq |
0 |
Subword Language Model for Query Auto-Completion |
Gyuwan Kim |
https://www.aclweb.org/anthology/D19-1507.pdf |
2019 |
EMNLP |
# optim-projection, reg-dropout, train-mtl, train-parallel, arch-rnn, arch-lstm, arch-bilstm, arch-gru, arch-att, arch-gating, arch-memo, arch-coverage, arch-transformer, pre-bert |
1 |
Different Absorption from the Same Sharing: Sifted Multi-task Learning for Fake News Detection |
Lianwei Wu, Yuan Rao, Haolin Jin, Ambreen Nazir, Ling Sun |
https://www.aclweb.org/anthology/D19-1471.pdf |
2019 |
EMNLP |
# optim-adam, arch-transformer, pre-elmo, adv-examp, task-textclass, task-textpair, task-spanlab |
0 |
Evaluating adversarial attacks against multiple fact verification systems |
James Thorne, Andreas Vlachos, Christos Christodoulopoulos, Arpit Mittal |
https://www.aclweb.org/anthology/D19-1292.pdf |
2019 |
EMNLP |
# optim-sgd, optim-adam, activ-tanh, arch-rnn, arch-gru, arch-bigru, arch-att, arch-selfatt, arch-subword, arch-transformer, pre-bert, loss-nce, task-seq2seq, task-relation |
0 |
Minimally Supervised Learning of Affective Events Using Discourse Relations |
Jun Saito, Yugo Murawaki, Sadao Kurohashi |
https://www.aclweb.org/anthology/D19-1581.pdf |
2019 |
EMNLP |
# arch-lstm, arch-gru, arch-transformer, task-seqlab, task-condlm, task-seq2seq |
2 |
Neural data-to-text generation: A comparison between pipeline and end-to-end architectures |
Thiago Castro Ferreira, Chris van der Lee, Emiel van Miltenburg, Emiel Krahmer |
https://www.aclweb.org/anthology/D19-1052.pdf |
2019 |
EMNLP |
# optim-adam, reg-dropout, arch-rnn, arch-lstm, arch-gnn, arch-cnn, arch-att, arch-memo, arch-transformer, search-viterbi, pre-word2vec, struct-crf, task-textclass, task-seqlab, task-lm, task-relation |
1 |
A Lexicon-Based Graph Neural Network for Chinese NER |
Tao Gui, Yicheng Zou, Qi Zhang, Minlong Peng, Jinlan Fu, Zhongyu Wei, Xuanjing Huang |
https://www.aclweb.org/anthology/D19-1096.pdf |
2019 |
EMNLP |
# optim-adadelta, reg-dropout, reg-stopping, train-mtl, train-mll, arch-rnn, arch-att, arch-selfatt, arch-memo, arch-transformer, task-lm, task-seq2seq |
0 |
One Model to Learn Both: Zero Pronoun Prediction and Translation |
Longyue Wang, Zhaopeng Tu, Xing Wang, Shuming Shi |
https://www.aclweb.org/anthology/D19-1085.pdf |
2019 |
EMNLP |
# optim-adam, init-glorot, reg-dropout, reg-decay, train-mll, train-transfer, pool-max, arch-lstm, arch-att, arch-selfatt, arch-bilinear, arch-subword, arch-transformer, pre-elmo, pre-bert, struct-crf, task-textclass, task-textpair, task-seqlab, task-spanlab, task-lm, task-seq2seq, task-cloze, task-relation |
22 |
Beto, Bentz, Becas: The Surprising Cross-Lingual Effectiveness of BERT |
Shijie Wu, Mark Dredze |
https://www.aclweb.org/anthology/D19-1077.pdf |
2019 |
EMNLP |
# reg-dropout, arch-rnn, arch-birnn, arch-lstm, arch-bilstm, arch-gru, arch-att, arch-selfatt, arch-transformer, pre-glove, pre-elmo, pre-bert, task-textclass, task-lm |
0 |
DENS: A Dataset for Multi-class Emotion Analysis |
Chen Liu, Muhammad Osama, Anderson De Andrade |
https://www.aclweb.org/anthology/D19-1656.pdf |
2019 |
EMNLP |
# optim-adam, train-transfer, arch-att, arch-transformer, search-beam, pre-word2vec, task-lm, task-condlm, task-seq2seq |
2 |
Decoupled Box Proposal and Featurization with Ultrafine-Grained Semantic Labels Improve Image Captioning and Visual Question Answering |
Soravit Changpinyo, Bo Pang, Piyush Sharma, Radu Soricut |
https://www.aclweb.org/anthology/D19-1155.pdf |
2019 |
EMNLP |
# optim-projection, arch-rnn, arch-att, arch-selfatt, arch-memo, arch-copy, arch-coverage, arch-transformer, pre-glove, pre-bert, task-seq2seq |
0 |
Generating Questions for Knowledge Bases via Incorporating Diversified Contexts and Answer-Aware Loss |
Cao Liu, Kang Liu, Shizhu He, Zaiqing Nie, Jun Zhao |
https://www.aclweb.org/anthology/D19-1247.pdf |
2019 |
EMNLP |
# optim-adam, train-mll, arch-coverage, arch-subword, arch-transformer, search-greedy, search-beam, latent-topic, task-textclass, task-lm, task-seq2seq, task-lexicon |
0 |
Machine Translation With Weakly Paired Documents |
Lijun Wu, Jinhua Zhu, Di He, Fei Gao, Tao Qin, Jianhuang Lai, Tie-Yan Liu |
https://www.aclweb.org/anthology/D19-1446.pdf |
2019 |
EMNLP |
# optim-adam, train-transfer, arch-lstm, arch-att, arch-selfatt, arch-memo, arch-transformer, comb-ensemble, pre-bert, task-seq2seq |
0 |
MoEL: Mixture of Empathetic Listeners |
Zhaojiang Lin, Andrea Madotto, Jamin Shin, Peng Xu, Pascale Fung |
https://www.aclweb.org/anthology/D19-1012.pdf |
2019 |
EMNLP |
# optim-adam, train-mll, train-transfer, arch-att, arch-subword, arch-transformer, comb-ensemble, search-beam, adv-train, task-seqlab, task-seq2seq, task-graph |
0 |
Iterative Dual Domain Adaptation for Neural Machine Translation |
Jiali Zeng, Yang Liu, Jinsong Su, Yubing Ge, Yaojie Lu, Yongjing Yin, Jiebo Luo |
https://www.aclweb.org/anthology/D19-1078.pdf |
2019 |
EMNLP |
# train-transfer, arch-cnn, arch-att, arch-selfatt, arch-gating, arch-transformer, pre-bert |
0 |
Humor Detection: A Transformer Gets the Last Laugh |
Orion Weller, Kevin Seppi |
https://www.aclweb.org/anthology/D19-1372.pdf |
2019 |
EMNLP |
# optim-adam, train-mll, train-active, arch-att, arch-selfatt, arch-copy, arch-subword, arch-transformer, comb-ensemble, task-lm, task-seq2seq, task-alignment |
0 |
Learning to Copy for Automatic Post-Editing |
Xuancheng Huang, Yang Liu, Huanbo Luan, Jingfang Xu, Maosong Sun |
https://www.aclweb.org/anthology/D19-1634.pdf |
2019 |
EMNLP |
# optim-adam, arch-gru, arch-memo, arch-transformer, pre-bert, task-lm, task-seq2seq |
0 |
WikiCREM: A Large Unsupervised Corpus for Coreference Resolution |
Vid Kocijan, Oana-Maria Camburu, Ana-Maria Cretu, Yordan Yordanov, Phil Blunsom, Thomas Lukasiewicz |
https://www.aclweb.org/anthology/D19-1439.pdf |
2019 |
EMNLP |
# arch-rnn, arch-att, arch-subword, arch-transformer, search-greedy, search-beam, task-lm, task-condlm, task-seq2seq |
1 |
Speculative Beam Search for Simultaneous Translation |
Renjie Zheng, Mingbo Ma, Baigong Zheng, Liang Huang |
https://www.aclweb.org/anthology/D19-1144.pdf |
2019 |
EMNLP |
# norm-batch, arch-rnn, arch-lstm, arch-att, arch-transformer, pre-fasttext, pre-elmo, pre-bert, nondif-gumbelsoftmax, task-textclass, task-seq2seq, task-relation, task-lexicon |
0 |
What Does This Word Mean? Explaining Contextualized Embeddings with Natural Language Definition |
Ting-Yun Chang, Yun-Nung Chen |
https://www.aclweb.org/anthology/D19-1627.pdf |
2019 |
EMNLP |
# optim-adam, train-transfer, train-active, arch-rnn, arch-att, arch-selfatt, arch-coverage, arch-subword, arch-transformer, search-beam, pre-fasttext, pre-bert, task-textclass, task-lm, task-condlm, task-seq2seq |
14 |
Learning to Speak and Act in a Fantasy Text Adventure Game |
Jack Urbanek, Angela Fan, Siddharth Karamcheti, Saachi Jain, Samuel Humeau, Emily Dinan, Tim Rocktäschel, Douwe Kiela, Arthur Szlam, Jason Weston |
https://www.aclweb.org/anthology/D19-1062.pdf |
2019 |
EMNLP |
# optim-adam, train-mtl, arch-rnn, arch-att, arch-selfatt, arch-memo, arch-copy, arch-transformer, comb-ensemble, pre-elmo, pre-bert, task-spanlab, task-lm, task-seq2seq, task-relation, task-tree |
0 |
Discourse-Aware Semantic Self-Attention for Narrative Reading Comprehension |
Todor Mihaylov, Anette Frank |
https://www.aclweb.org/anthology/D19-1257.pdf |
2019 |
EMNLP |
# optim-adam, arch-rnn, arch-att, arch-selfatt, arch-transformer, task-seq2seq |
1 |
Transformer Dissection: An Unified Understanding for Transformer’s Attention via the Lens of Kernel |
Yao-Hung Hubert Tsai, Shaojie Bai, Makoto Yamada, Louis-Philippe Morency, Ruslan Salakhutdinov |
https://www.aclweb.org/anthology/D19-1443.pdf |
2019 |
EMNLP |
# optim-sgd, optim-adam, arch-rnn, arch-lstm, arch-gru, arch-gating, arch-coverage, arch-transformer, pre-glove, adv-train, latent-vae, task-seq2seq, task-tree |
6 |
An End-to-End Generative Architecture for Paraphrase Generation |
Qian Yang, Zhouyuan Huo, Dinghan Shen, Yong Cheng, Wenlin Wang, Guoyin Wang, Lawrence Carin |
https://www.aclweb.org/anthology/D19-1309.pdf |
2019 |
EMNLP |
# optim-adam, optim-projection, arch-cnn, arch-att, arch-coverage, arch-transformer, pre-glove, pre-skipthought, pre-bert, task-spanlab, task-lm, task-relation, task-tree |
0 |
Linking artificial and human neural representations of language |
Jon Gauthier, Roger Levy |
https://www.aclweb.org/anthology/D19-1050.pdf |
2019 |
EMNLP |
# optim-sgd, optim-adam, train-mll, arch-cnn, arch-att, arch-gating, arch-coverage, arch-transformer, pre-bert, loss-margin, task-textpair, task-seqlab, task-lm, task-seq2seq |
0 |
Aligning Cross-Lingual Entities with Multi-Aspect Information |
Hsiu-Wei Yang, Yanyan Zou, Peng Shi, Wei Lu, Jimmy Lin, Xu Sun |
https://www.aclweb.org/anthology/D19-1451.pdf |
2019 |
EMNLP |
# optim-adam, reg-dropout, pool-max, arch-rnn, arch-lstm, arch-gru, arch-cnn, arch-att, arch-selfatt, arch-memo, arch-transformer, task-seq2seq, task-tree |
0 |
Asking Clarification Questions in Knowledge-Based Question Answering |
Jingjing Xu, Yuechen Wang, Duyu Tang, Nan Duan, Pengcheng Yang, Qi Zeng, Ming Zhou, Xu Sun |
https://www.aclweb.org/anthology/D19-1172.pdf |
2019 |
EMNLP |
# optim-adam, arch-transformer, comb-ensemble, pre-bert, task-seq2seq |
0 |
An Empirical Study of Incorporating Pseudo Data into Grammatical Error Correction |
Shun Kiyono, Jun Suzuki, Masato Mita, Tomoya Mizumoto, Kentaro Inui |
https://www.aclweb.org/anthology/D19-1119.pdf |
2019 |
EMNLP |
# pool-max, arch-rnn, arch-lstm, arch-bilstm, arch-cnn, arch-att, arch-transformer, pre-bert, loss-triplet |
0 |
Improving Answer Selection and Answer Triggering using Hard Negatives |
Sawan Kumar, Shweta Garg, Kartik Mehta, Nikhil Rasiwasia |
https://www.aclweb.org/anthology/D19-1604.pdf |
2019 |
EMNLP |
# optim-adam, arch-lstm, arch-transformer, pre-fasttext, pre-glove, pre-elmo, pre-bert, task-lm |
0 |
How Contextual are Contextualized Word Representations? Comparing the Geometry of BERT, ELMo, and GPT-2 Embeddings |
Kawin Ethayarajh |
https://www.aclweb.org/anthology/D19-1006.pdf |
2019 |
EMNLP |
# optim-adam, reg-dropout, arch-lstm, arch-att, arch-selfatt, arch-transformer, comb-ensemble, pre-bert, struct-crf, task-textclass, task-extractive, task-lm, task-seq2seq |
0 |
Pretrained Language Models for Sequential Sentence Classification |
Arman Cohan, Iz Beltagy, Daniel King, Bhavana Dalvi, Dan Weld |
https://www.aclweb.org/anthology/D19-1383.pdf |
2019 |
EMNLP |
# optim-adam, arch-lstm, arch-bilstm, arch-cnn, arch-att, arch-selfatt, arch-copy, arch-subword, arch-transformer, comb-ensemble, task-seq2seq, task-tree, task-graph |
0 |
Modeling Graph Structure in Transformer for Better AMR-to-Text Generation |
Jie Zhu, Junhui Li, Muhua Zhu, Longhua Qian, Min Zhang, Guodong Zhou |
https://www.aclweb.org/anthology/D19-1548.pdf |
2019 |
EMNLP |
# optim-adam, train-mll, arch-att, arch-coverage, arch-transformer, search-beam, task-seq2seq |
0 |
INMT: Interactive Neural Machine Translation Prediction |
Sebastin Santy, Sandipan Dandapat, Monojit Choudhury, Kalika Bali |
https://www.aclweb.org/anthology/D19-3018.pdf |
2019 |
EMNLP |
# train-mll, arch-rnn, arch-att, arch-selfatt, arch-subword, arch-transformer, search-beam, task-lm, task-seq2seq, task-tree |
0 |
Towards Understanding Neural Machine Translation with Word Importance |
Shilin He, Zhaopeng Tu, Xing Wang, Longyue Wang, Michael Lyu, Shuming Shi |
https://www.aclweb.org/anthology/D19-1088.pdf |
2019 |
EMNLP |
# optim-adam, arch-rnn, arch-lstm, arch-att, arch-transformer, pre-glove, task-textclass, task-seq2seq |
0 |
Robust Text Classifier on Test-Time Budgets |
Md Rizwan Parvez, Tolga Bolukbasi, Kai-Wei Chang, Venkatesh Saligrama |
https://www.aclweb.org/anthology/D19-1108.pdf |
2019 |
EMNLP |
# optim-adam, optim-projection, reg-dropout, arch-gnn, arch-att, arch-selfatt, arch-copy, arch-transformer, pre-bert, latent-vae, task-tree |
0 |
Answering Conversational Questions on Structured Data without Logical Forms |
Thomas Mueller, Francesco Piccinno, Peter Shaw, Massimo Nicosia, Yasemin Altun |
https://www.aclweb.org/anthology/D19-1603.pdf |
2019 |
EMNLP |
# reg-dropout, train-mtl, arch-lstm, arch-bilstm, arch-att, arch-selfatt, arch-memo, arch-coverage, arch-transformer, pre-elmo, pre-bert, task-lm |
1 |
GlossBERT: BERT for Word Sense Disambiguation with Gloss Knowledge |
Luyao Huang, Chi Sun, Xipeng Qiu, Xuanjing Huang |
https://www.aclweb.org/anthology/D19-1355.pdf |
2019 |
EMNLP |
# optim-adam, optim-adagrad, norm-layer, pool-mean, arch-rnn, arch-lstm, arch-bilstm, arch-cnn, arch-att, arch-selfatt, arch-transformer, search-beam, pre-glove, pre-paravec, adv-train, latent-topic, task-textpair, task-extractive, task-spanlab, task-lm, task-condlm, task-seq2seq |
0 |
Topic-Guided Coherence Modeling for Sentence Ordering by Preserving Global and Local Information |
Byungkook Oh, Seungmin Seo, Cheolheon Shin, Eunju Jo, Kyong-Ho Lee |
https://www.aclweb.org/anthology/D19-1232.pdf |
2019 |
EMNLP |
# arch-rnn, arch-att, arch-selfatt, arch-subword, arch-transformer, comb-ensemble, task-seq2seq |
0 |
Multi-agent Learning for Neural Machine Translation |
Tianchi Bi, Hao Xiong, Zhongjun He, Hua Wu, Haifeng Wang |
https://www.aclweb.org/anthology/D19-1079.pdf |
2019 |
EMNLP |
# optim-adam, optim-projection, reg-dropout, norm-layer, arch-rnn, arch-att, arch-selfatt, arch-copy, arch-subword, arch-transformer, search-beam, task-seq2seq, task-alignment |
0 |
Contrastive Attention Mechanism for Abstractive Sentence Summarization |
Xiangyu Duan, Hongfei Yu, Mingming Yin, Min Zhang, Weihua Luo, Yue Zhang |
https://www.aclweb.org/anthology/D19-1301.pdf |
2019 |
EMNLP |
# optim-adam, arch-rnn, arch-lstm, arch-gru, arch-att, arch-transformer, task-condlm, task-seq2seq |
0 |
Compositional Generalization for Primitive Substitutions |
Yuanpeng Li, Liang Zhao, Jianyu Wang, Joel Hestness |
https://www.aclweb.org/anthology/D19-1438.pdf |
2019 |
EMNLP |
# optim-adam, reg-dropout, train-mll, train-transfer, arch-cnn, arch-transformer, pre-bert, task-textclass, task-lm, task-seq2seq, task-relation |
0 |
Cross-lingual intent classification in a low resource industrial setting |
Talaat Khalil, Kornel Kiełczewski, Georgios Christos Chouliaras, Amina Keldibek, Maarten Versteegh |
https://www.aclweb.org/anthology/D19-1676.pdf |
2019 |
EMNLP |
# optim-adam, optim-projection, reg-dropout, reg-worddropout, arch-rnn, arch-lstm, arch-cnn, arch-att, arch-selfatt, arch-subword, arch-transformer, pre-bert, loss-cca, task-lm, task-seq2seq, task-cloze |
1 |
The Bottom-up Evolution of Representations in the Transformer: A Study with Machine Translation and Language Modeling Objectives |
Elena Voita, Rico Sennrich, Ivan Titov |
https://www.aclweb.org/anthology/D19-1448.pdf |
2019 |
EMNLP |
# optim-adam, arch-rnn, arch-lstm, arch-att, arch-memo, arch-coverage, arch-transformer, task-lm, task-condlm, task-seq2seq |
0 |
Generating Classical Chinese Poems from Vernacular Chinese |
Zhichao Yang, Pengshan Cai, Yansong Feng, Fei Li, Weijiang Feng, Elena Suet-Ying Chiu, Hong Yu |
https://www.aclweb.org/anthology/D19-1637.pdf |
2019 |
EMNLP |
# optim-adam, train-mtl, train-transfer, arch-rnn, arch-birnn, arch-lstm, arch-bilstm, arch-att, arch-selfatt, arch-subword, arch-transformer, search-greedy, search-beam, latent-vae, task-seqlab, task-lm, task-seq2seq, task-tree |
0 |
Latent Part-of-Speech Sequences for Neural Machine Translation |
Xuewen Yang, Yingru Liu, Dongliang Xie, Xin Wang, Niranjan Balasubramanian |
https://www.aclweb.org/anthology/D19-1072.pdf |
2019 |
EMNLP |
# optim-adam, optim-projection, arch-rnn, arch-birnn, arch-lstm, arch-att, arch-selfatt, arch-coverage, arch-transformer, search-beam, adv-train, task-lm, task-condlm, task-seq2seq, task-relation |
0 |
Stick to the Facts: Learning towards a Fidelity-oriented E-Commerce Product Description Generation |
Zhangming Chan, Xiuying Chen, Yongliang Wang, Juntao Li, Zhiqiang Zhang, Kun Gai, Dongyan Zhao, Rui Yan |
https://www.aclweb.org/anthology/D19-1501.pdf |
2019 |
EMNLP |
# optim-adam, optim-projection, train-mtl, train-mll, arch-lstm, arch-cnn, arch-att, arch-selfatt, arch-subword, arch-transformer, pre-glove, pre-elmo, pre-bert, latent-vae, task-spanlab, task-lm, task-seq2seq |
2 |
Knowledge Enhanced Contextual Word Representations |
Matthew E. Peters, Mark Neumann, Robert Logan, Roy Schwartz, Vidur Joshi, Sameer Singh, Noah A. Smith |
https://www.aclweb.org/anthology/D19-1005.pdf |
2019 |
EMNLP |
# optim-projection, arch-rnn, arch-lstm, arch-att, arch-selfatt, arch-transformer |
1 |
UR-FUNNY: A Multimodal Language Dataset for Understanding Humor |
Md Kamrul Hasan, Wasifur Rahman, AmirAli Bagher Zadeh, Jianyuan Zhong, Md Iftekhar Tanveer, Louis-Philippe Morency, Mohammed (Ehsan) Hoque |
https://www.aclweb.org/anthology/D19-1211.pdf |
2019 |
EMNLP |
# optim-adam, train-mtl, train-mll, train-transfer, arch-att, arch-coverage, arch-transformer, pre-skipthought, pre-bert, task-textclass, task-textpair, task-lm, task-seq2seq, meta-init |
0 |
Investigating Meta-Learning Algorithms for Low-Resource Natural Language Understanding Tasks |
Zi-Yi Dou, Keyi Yu, Antonios Anastasopoulos |
https://www.aclweb.org/anthology/D19-1112.pdf |
2019 |
EMNLP |
# optim-adam, reg-dropout, reg-decay, train-mll, train-transfer, arch-lstm, arch-att, arch-subword, arch-transformer, search-beam, task-lm, task-seq2seq |
1 |
The FLORES Evaluation Datasets for Low-Resource Machine Translation: Nepali–English and Sinhala–English |
Francisco Guzmán, Peng-Jen Chen, Myle Ott, Juan Pino, Guillaume Lample, Philipp Koehn, Vishrav Chaudhary, Marc’Aurelio Ranzato |
https://www.aclweb.org/anthology/D19-1632.pdf |
2019 |
EMNLP |
# optim-adam, train-augment, arch-lstm, arch-att, arch-coverage, arch-transformer, pre-fasttext, task-seqlab, task-lm, task-seq2seq, task-tree, task-alignment |
0 |
Handling Syntactic Divergence in Low-resource Machine Translation |
Chunting Zhou, Xuezhe Ma, Junjie Hu, Graham Neubig |
https://www.aclweb.org/anthology/D19-1143.pdf |
2019 |
EMNLP |
# optim-adam, optim-adadelta, optim-projection, reg-dropout, train-augment, pool-max, arch-rnn, arch-lstm, arch-gru, arch-att, arch-selfatt, arch-transformer, search-greedy, search-beam, pre-elmo, pre-bert, nondif-reinforce, task-textpair, task-seqlab, task-spanlab, task-lm, task-condlm, task-seq2seq |
0 |
Addressing Semantic Drift in Question Generation for Semi-Supervised Question Answering |
Shiyue Zhang, Mohit Bansal |
https://www.aclweb.org/anthology/D19-1253.pdf |
2019 |
EMNLP |
# arch-rnn, arch-att, arch-selfatt, arch-subword, arch-transformer, task-seq2seq, task-relation, task-graph |
0 |
Self-Attention with Structural Position Representations |
Xing Wang, Zhaopeng Tu, Longyue Wang, Shuming Shi |
https://www.aclweb.org/anthology/D19-1145.pdf |
2019 |
EMNLP |
# train-augment, arch-transformer, adv-examp, task-textpair, task-condlm, task-seq2seq |
0 |
Polly Want a Cracker: Analyzing Performance of Parroting on Paraphrase Generation Datasets |
Hong-Ren Mao, Hung-Yi Lee |
https://www.aclweb.org/anthology/D19-1611.pdf |
2019 |
EMNLP |
# arch-rnn, arch-lstm, arch-att, arch-subword, arch-transformer, search-beam, task-seq2seq |
1 |
On NMT Search Errors and Model Errors: Cat Got Your Tongue? |
Felix Stahlberg, Bill Byrne |
https://www.aclweb.org/anthology/D19-1331.pdf |
2019 |
EMNLP |
# reg-dropout, train-mll, arch-att, arch-coverage, arch-subword, arch-transformer, search-beam, loss-nce, task-seqlab, task-lm, task-seq2seq |
0 |
Investigating the Effectiveness of BPE: The Power of Shorter Sequences |
Matthias Gallé |
https://www.aclweb.org/anthology/D19-1141.pdf |
2019 |
EMNLP |
# train-mll, arch-lstm, arch-bilstm, arch-att, arch-transformer, pre-glove, pre-elmo, pre-bert, task-seq2seq |
2 |
Evaluating Pronominal Anaphora in Machine Translation: An Evaluation Measure and a Test Suite |
Prathyusha Jwalapuram, Shafiq Joty, Irina Temnikova, Preslav Nakov |
https://www.aclweb.org/anthology/D19-1294.pdf |
2019 |
EMNLP |
# optim-adam, arch-lstm, arch-att, arch-selfatt, arch-bilinear, arch-subword, arch-transformer, search-viterbi, pre-elmo, pre-bert, struct-crf, loss-nce, task-seqlab, task-lm |
0 |
Effective Use of Transformer Networks for Entity Tracking |
Aditya Gupta, Greg Durrett |
https://www.aclweb.org/anthology/D19-1070.pdf |
2019 |
EMNLP |
# optim-adam, reg-decay, arch-lstm, arch-att, arch-selfatt, arch-transformer, pre-bert, task-lm |
1 |
Attending to Future Tokens for Bidirectional Sequence Generation |
Carolin Lawrence, Bhushan Kotnis, Mathias Niepert |
https://www.aclweb.org/anthology/D19-1001.pdf |
2019 |
EMNLP |
# optim-adam, train-transfer, arch-transformer, pre-bert, task-spanlab, task-lm |
11 |
Social IQa: Commonsense Reasoning about Social Interactions |
Maarten Sap, Hannah Rashkin, Derek Chen, Ronan Le Bras, Yejin Choi |
https://www.aclweb.org/anthology/D19-1454.pdf |
2019 |
EMNLP |
# optim-adadelta, reg-stopping, pool-max, pool-mean, arch-rnn, arch-lstm, arch-gru, arch-cnn, arch-att, arch-selfatt, arch-transformer, pre-word2vec, pre-fasttext, pre-glove, pre-bert, task-textclass, task-seq2seq |
0 |
Enhancing Local Feature Extraction with Global Representation for Neural Text Classification |
Guocheng Niu, Hengru Xu, Bolei He, Xinyan Xiao, Hua Wu, Sheng Gao |
https://www.aclweb.org/anthology/D19-1047.pdf |
2019 |
EMNLP |
# arch-rnn, arch-lstm, arch-subword, arch-transformer, comb-ensemble, pre-fasttext, task-textclass, task-lm, task-seq2seq |
0 |
uniblock: Scoring and Filtering Corpus with Unicode Block Information |
Yingbo Gao, Weiyue Wang, Hermann Ney |
https://www.aclweb.org/anthology/D19-1133.pdf |
2019 |
EMNLP |
# optim-adam, norm-layer, train-mll, train-augment, arch-lstm, arch-gru, arch-att, arch-selfatt, arch-bilinear, arch-transformer, pre-elmo, pre-bert, task-lm, task-condlm, task-seq2seq |
19 |
LXMERT: Learning Cross-Modality Encoder Representations from Transformers |
Hao Tan, Mohit Bansal |
https://www.aclweb.org/anthology/D19-1514.pdf |
2019 |
EMNLP |
# optim-adam, train-mtl, train-mll, arch-rnn, arch-lstm, arch-att, arch-selfatt, arch-transformer, comb-ensemble, pre-bert, task-lm, task-seq2seq |
0 |
Harnessing Pre-Trained Neural Networks with Rules for Formality Style Transfer |
Yunli Wang, Yu Wu, Lili Mou, Zhoujun Li, Wenhan Chao |
https://www.aclweb.org/anthology/D19-1365.pdf |
2019 |
EMNLP |
# optim-adam, reg-dropout, reg-labelsmooth, arch-att, arch-coverage, arch-subword, arch-transformer, search-beam, loss-nce, task-seqlab, task-seq2seq, task-tree |
0 |
Improving Back-Translation with Uncertainty-based Confidence Estimation |
Shuo Wang, Yang Liu, Chao Wang, Huanbo Luan, Maosong Sun |
https://www.aclweb.org/anthology/D19-1073.pdf |
2019 |
EMNLP |
# optim-adam, optim-projection, init-glorot, reg-dropout, train-mll, train-transfer, arch-lstm, arch-bilstm, arch-cnn, arch-att, arch-transformer, comb-ensemble, pre-glove, adv-train, latent-topic, task-textclass, task-seq2seq |
0 |
Adaptive Ensembling: Unsupervised Domain Adaptation for Political Document Analysis |
Shrey Desai, Barea Sinno, Alex Rosenfeld, Junyi Jessy Li |
https://www.aclweb.org/anthology/D19-1478.pdf |
2019 |
EMNLP |
# optim-adam, train-mll, train-transfer, arch-att, arch-subword, arch-transformer, pre-bert, loss-cca, loss-svd, task-lm, task-seq2seq |
4 |
Investigating Multilingual NMT Representations at Scale |
Sneha Kudugunta, Ankur Bapna, Isaac Caswell, Orhan Firat |
https://www.aclweb.org/anthology/D19-1167.pdf |
2019 |
EMNLP |
# reg-dropout, reg-decay, reg-labelsmooth, train-mll, arch-rnn, arch-lstm, arch-att, arch-subword, arch-transformer, pre-bert, task-textclass, task-lm, task-seq2seq |
2 |
MultiFiT: Efficient Multi-lingual Language Model Fine-tuning |
Julian Eisenschlos, Sebastian Ruder, Piotr Czapla, Marcin Kardas, Sylvain Gugger, Jeremy Howard |
https://www.aclweb.org/anthology/D19-1572.pdf |
2019 |
EMNLP |
# optim-adam, reg-dropout, reg-stopping, reg-patience, arch-rnn, arch-lstm, arch-bilstm, arch-cnn, arch-att, arch-memo, arch-bilinear, arch-transformer, pre-glove |
0 |
Connecting the Dots: Document-level Neural Relation Extraction with Edge-oriented Graphs |
Fenia Christopoulou, Makoto Miwa, Sophia Ananiadou |
https://www.aclweb.org/anthology/D19-1498.pdf |
2019 |
EMNLP |
# optim-adam, reg-dropout, train-mtl, arch-rnn, arch-att, arch-selfatt, arch-subword, arch-transformer, task-textclass, task-seq2seq |
0 |
Enhancing Context Modeling with a Query-Guided Capsule Network for Document-level Translation |
Zhengxin Yang, Jinchao Zhang, Fandong Meng, Shuhao Gu, Yang Feng, Jie Zhou |
https://www.aclweb.org/anthology/D19-1164.pdf |
2019 |
EMNLP |
# arch-lstm, arch-att, arch-selfatt, arch-bilinear, arch-transformer, task-textpair, task-seq2seq |
1 |
Retrieval-guided Dialogue Response Generation via a Matching-to-Generation Framework |
Deng Cai, Yan Wang, Wei Bi, Zhaopeng Tu, Xiaojiang Liu, Shuming Shi |
https://www.aclweb.org/anthology/D19-1195.pdf |
2019 |
EMNLP |
# optim-adam, norm-layer, arch-lstm, arch-bilstm, arch-att, arch-selfatt, arch-transformer, pre-word2vec, pre-elmo, pre-bert, task-textclass, task-textpair, task-extractive, task-spanlab, task-lm, task-seq2seq, task-cloze |
0 |
Fine-tune BERT with Sparse Self-Attention Mechanism |
Baiyun Cui, Yingming Li, Ming Chen, Zhongfei Zhang |
https://www.aclweb.org/anthology/D19-1361.pdf |
2019 |
EMNLP |
# optim-adam, train-mtl, train-mll, pool-max, arch-subword, arch-transformer, pre-word2vec, pre-fasttext, pre-glove, pre-elmo, pre-bert, latent-vae, task-textclass, task-textpair, task-seq2seq |
0 |
Correlations between Word Vector Sets |
Vitalii Zhelezniak, April Shen, Daniel Busbridge, Aleksandar Savkov, Nils Hammerla |
https://www.aclweb.org/anthology/D19-1008.pdf |
2019 |
EMNLP |
# train-augment, arch-subword, arch-transformer, comb-ensemble, search-beam, latent-vae, task-lm, task-seq2seq |
2 |
Simple and Effective Noisy Channel Modeling for Neural Machine Translation |
Kyra Yee, Yann Dauphin, Michael Auli |
https://www.aclweb.org/anthology/D19-1571.pdf |
2019 |
EMNLP |
# optim-projection, arch-rnn, arch-gru, arch-att, arch-selfatt, arch-subword, arch-transformer, task-seq2seq, task-relation |
0 |
Recurrent Positional Embedding for Neural Machine Translation |
Kehai Chen, Rui Wang, Masao Utiyama, Eiichiro Sumita |
https://www.aclweb.org/anthology/D19-1139.pdf |
2019 |
EMNLP |
# optim-adam, reg-stopping, reg-patience, reg-labelsmooth, train-mll, arch-rnn, arch-att, arch-selfatt, arch-residual, arch-copy, arch-coverage, arch-subword, arch-transformer, comb-ensemble, search-beam, task-seqlab, task-seq2seq |
0 |
Deep Copycat Networks for Text-to-Text Generation |
Julia Ive, Pranava Madhyastha, Lucia Specia |
https://www.aclweb.org/anthology/D19-1318.pdf |
2019 |
EMNLP |
# optim-adam, reg-patience, arch-lstm, arch-bilstm, arch-att, arch-selfatt, arch-transformer, pre-elmo, pre-bert, task-textpair, task-spanlab, task-lm |
10 |
Language Models as Knowledge Bases? |
Fabio Petroni, Tim Rocktäschel, Sebastian Riedel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, Alexander Miller |
https://www.aclweb.org/anthology/D19-1250.pdf |
2019 |
EMNLP |
# optim-adam, arch-lstm, arch-att, arch-memo, arch-transformer, pre-bert, task-textpair, task-relation |
1 |
STANCY: Stance Classification Based on Consistency Cues |
Kashyap Popat, Subhabrata Mukherjee, Andrew Yates, Gerhard Weikum |
https://www.aclweb.org/anthology/D19-1675.pdf |
2019 |
EMNLP |
# optim-projection, train-mtl, arch-cnn, arch-att, arch-coverage, arch-subword, arch-transformer, comb-ensemble, pre-word2vec, pre-fasttext, pre-glove, pre-skipthought, pre-elmo, pre-bert, task-textclass, task-textpair, task-spanlab, task-lm, task-cloze |
9 |
Patient Knowledge Distillation for BERT Model Compression |
Siqi Sun, Yu Cheng, Zhe Gan, Jingjing Liu |
https://www.aclweb.org/anthology/D19-1441.pdf |
2019 |
EMNLP |
# optim-adam, optim-projection, reg-dropout, reg-stopping, train-mll, train-transfer, arch-lstm, arch-att, arch-selfatt, arch-subword, arch-transformer, pre-bert, struct-crf, adv-train, task-seqlab, task-lm, task-seq2seq, task-relation, task-lexicon |
0 |
Low-Resource Sequence Labeling via Unsupervised Multilingual Contextualized Representations |
Zuyi Bao, Rui Huang, Chen Li, Kenny Zhu |
https://www.aclweb.org/anthology/D19-1095.pdf |
2019 |
EMNLP |
# reg-dropout, train-transfer, train-parallel, arch-att, arch-transformer, task-seq2seq |
1 |
Exploiting Multilingualism through Multistage Fine-Tuning for Low-Resource Neural Machine Translation |
Raj Dabre, Atsushi Fujita, Chenhui Chu |
https://www.aclweb.org/anthology/D19-1146.pdf |
2019 |
EMNLP |
# reg-stopping, train-mll, train-transfer, arch-transformer, pre-elmo, pre-bert, adv-train, task-textclass, task-lm, task-seq2seq |
0 |
A Robust Self-Learning Framework for Cross-Lingual Text Classification |
Xin Dong, Gerard de Melo |
https://www.aclweb.org/anthology/D19-1658.pdf |
2019 |
EMNLP |
# optim-adam, arch-lstm, arch-bilstm, arch-gru, arch-bigru, arch-att, arch-memo, arch-transformer, pre-glove, pre-bert, task-seq2seq |
0 |
A Challenge Dataset and Effective Models for Aspect-Based Sentiment Analysis |
Qingnan Jiang, Lei Chen, Ruifeng Xu, Xiang Ao, Min Yang |
https://www.aclweb.org/anthology/D19-1654.pdf |
2019 |
EMNLP |
# train-transfer, arch-att, arch-transformer, pre-elmo, pre-bert, task-textclass, task-lm |
0 |
Pre-Training BERT on Domain Resources for Short Answer Grading |
Chul Sung, Tejas Dhamecha, Swarnadeep Saha, Tengfei Ma, Vinay Reddy, Rishi Arora |
https://www.aclweb.org/anthology/D19-1628.pdf |
2019 |
EMNLP |
# optim-adam, init-glorot, train-mll, arch-lstm, arch-bilstm, arch-att, arch-selfatt, arch-bilinear, arch-subword, arch-transformer, comb-ensemble, search-greedy, search-beam, pre-word2vec, pre-fasttext, pre-glove, pre-elmo, pre-bert, task-lm, task-seq2seq, task-relation |
0 |
Deep Contextualized Word Embeddings in Transition-Based and Graph-Based Dependency Parsing - A Tale of Two Parsers Revisited |
Artur Kulmizev, Miryam de Lhoneux, Johannes Gontrum, Elena Fano, Joakim Nivre |
https://www.aclweb.org/anthology/D19-1277.pdf |
2019 |
EMNLP |
# optim-adam, arch-lstm, arch-bilstm, arch-att, arch-transformer, comb-ensemble, pre-bert, latent-vae, task-lm, task-relation |
0 |
Weak Supervision for Learning Discourse Structure |
Sonia Badene, Kate Thompson, Jean-Pierre Lorré, Nicholas Asher |
https://www.aclweb.org/anthology/D19-1234.pdf |
2019 |
EMNLP |
# optim-projection, train-mll, arch-rnn, arch-cnn, arch-att, arch-transformer, task-condlm, task-seq2seq |
0 |
Multilingual, Multi-scale and Multi-layer Visualization of Intermediate Representations |
Carlos Escolano, Marta R. Costa-jussà, Elora Lacroux, Pere-Pau Vázquez |
https://www.aclweb.org/anthology/D19-3026.pdf |
2019 |
EMNLP |
# optim-adam, norm-layer, pool-max, pool-mean, arch-rnn, arch-lstm, arch-gru, arch-cnn, arch-att, arch-selfatt, arch-memo, arch-transformer, pre-glove, pre-bert, latent-topic, task-lm, task-seq2seq |
1 |
Knowledge-Enriched Transformer for Emotion Detection in Textual Conversations |
Peixiang Zhong, Di Wang, Chunyan Miao |
https://www.aclweb.org/anthology/D19-1016.pdf |
2019 |
EMNLP |
# optim-adam, train-mll, arch-att, arch-bilinear, arch-coverage, arch-subword, arch-transformer, comb-ensemble, pre-fasttext, pre-bert, latent-vae, loss-nce, task-seqlab, task-relation, task-alignment |
0 |
Target Language-Aware Constrained Inference for Cross-lingual Dependency Parsing |
Tao Meng, Nanyun Peng, Kai-Wei Chang |
https://www.aclweb.org/anthology/D19-1103.pdf |
2019 |
EMNLP |
# optim-adam, reg-dropout, train-transfer, train-augment, arch-rnn, arch-gru, arch-att, arch-selfatt, arch-transformer, pre-bert, pre-use, task-textpair, task-lm, task-seq2seq, task-relation |
0 |
Keep Calm and Switch On! Preserving Sentiment and Fluency in Semantic Text Exchange |
Steven Y. Feng, Aaron W. Li, Jesse Hoey |
https://www.aclweb.org/anthology/D19-1272.pdf |
2019 |
EMNLP |
# optim-adam, reg-dropout, norm-layer, arch-rnn, arch-lstm, arch-att, arch-selfatt, arch-copy, arch-bilinear, arch-coverage, arch-transformer, search-beam, latent-vae, task-seq2seq, task-relation, task-tree, task-graph |
0 |
Core Semantic First: A Top-down Approach for AMR Parsing |
Deng Cai, Wai Lam |
https://www.aclweb.org/anthology/D19-1393.pdf |
2019 |
EMNLP |
# optim-adam, optim-projection, reg-dropout, arch-lstm, arch-bilstm, arch-att, arch-selfatt, arch-copy, arch-subword, arch-transformer, search-beam, task-seq2seq, task-tree |
1 |
JuICe: A Large Scale Distantly Supervised Dataset for Open Domain Context-based Code Generation |
Rajas Agashe, Srinivasan Iyer, Luke Zettlemoyer |
https://www.aclweb.org/anthology/D19-1546.pdf |
2019 |
EMNLP |
# optim-adam, reg-decay, norm-gradient, arch-gnn, arch-att, arch-selfatt, arch-transformer, pre-bert, task-spanlab, task-tree |
0 |
NumNet: Machine Reading Comprehension with Numerical Reasoning |
Qiu Ran, Yankai Lin, Peng Li, Jie Zhou, Zhiyuan Liu |
https://www.aclweb.org/anthology/D19-1251.pdf |
2019 |
EMNLP |
# arch-rnn, arch-lstm, arch-gru, arch-att, arch-transformer, task-lm, task-seq2seq |
0 |
Controlling Sequence-to-Sequence Models - A Demonstration on Neural-based Acrostic Generator |
Liang-Hsin Shen, Pei-Lun Tai, Chao-Chung Wu, Shou-De Lin |
https://www.aclweb.org/anthology/D19-3008.pdf |
2019 |
EMNLP |
# optim-adam, reg-dropout, arch-rnn, arch-lstm, arch-gru, arch-gnn, arch-att, arch-selfatt, arch-copy, arch-coverage, arch-transformer, pre-glove, pre-bert, task-seq2seq, task-graph |
0 |
Enhancing AMR-to-Text Generation with Dual Graph Representations |
Leonardo F. R. Ribeiro, Claire Gardent, Iryna Gurevych |
https://www.aclweb.org/anthology/D19-1314.pdf |
2019 |
EMNLP |
# arch-lstm, arch-cnn, arch-att, arch-transformer, comb-ensemble, pre-bert, task-textclass, task-seq2seq |
0 |
Many Faces of Feature Importance: Comparing Built-in and Post-hoc Feature Importance in Text Classification |
Vivian Lai, Zheng Cai, Chenhao Tan |
https://www.aclweb.org/anthology/D19-1046.pdf |
2019 |
EMNLP |
# optim-adam, reg-dropout, norm-batch, arch-rnn, arch-lstm, arch-cnn, arch-att, arch-transformer, pre-fasttext, pre-glove, pre-use, task-textclass, task-textpair, task-lm, task-condlm, task-seq2seq |
0 |
Semantic Relatedness Based Re-ranker for Text Spotting |
Ahmed Sabir, Francesc Moreno, Lluís Padró |
https://www.aclweb.org/anthology/D19-1346.pdf |
2019 |
EMNLP |
# optim-adam, optim-projection, reg-dropout, train-mll, arch-att, arch-subword, arch-transformer, comb-ensemble, search-viterbi, pre-fasttext, pre-glove, struct-crf, task-seqlab, task-lm, task-seq2seq, task-lexicon |
0 |
Hierarchical Meta-Embeddings for Code-Switching Named Entity Recognition |
Genta Indra Winata, Zhaojiang Lin, Jamin Shin, Zihan Liu, Pascale Fung |
https://www.aclweb.org/anthology/D19-1360.pdf |
2019 |
EMNLP |
# arch-lstm, arch-att, arch-selfatt, arch-copy, arch-coverage, arch-transformer, pre-bert, task-lm, task-condlm, task-seq2seq |
0 |
Encode, Tag, Realize: High-Precision Text Editing |
Eric Malmi, Sebastian Krause, Sascha Rothe, Daniil Mirylenka, Aliaksei Severyn |
https://www.aclweb.org/anthology/D19-1510.pdf |
2019 |
EMNLP |
# optim-adam, reg-dropout, train-mtl, train-transfer, pool-max, arch-rnn, arch-att, arch-selfatt, arch-coverage, arch-transformer, pre-skipthought, pre-elmo, pre-bert, task-textpair, task-spanlab, task-lm, task-seq2seq, task-cloze |
0 |
Transfer Fine-Tuning: A BERT Case Study |
Yuki Arase, Jun’ichi Tsujii |
https://www.aclweb.org/anthology/D19-1542.pdf |
2019 |
EMNLP |
# optim-adam, reg-dropout, reg-stopping, arch-lstm, arch-bilstm, arch-att, arch-selfatt, arch-bilinear, arch-coverage, arch-subword, arch-transformer, comb-ensemble, pre-fasttext, pre-elmo, struct-crf, latent-vae, task-seqlab, task-lm, task-seq2seq, task-relation, task-tree |
0 |
Semantic Role Labeling with Iterative Structure Refinement |
Chunchuan Lyu, Shay B. Cohen, Ivan Titov |
https://www.aclweb.org/anthology/D19-1099.pdf |
2019 |
EMNLP |
# optim-adam, arch-cnn, arch-att, arch-selfatt, arch-transformer, task-seq2seq |
0 |
Towards Knowledge-Based Recommender Dialog System |
Qibin Chen, Junyang Lin, Yichang Zhang, Ming Ding, Yukuo Cen, Hongxia Yang, Jie Tang |
https://www.aclweb.org/anthology/D19-1189.pdf |
2019 |
EMNLP |
# optim-sgd, train-mll, arch-rnn, arch-subword, arch-transformer, search-beam, pre-bert, task-seq2seq |
0 |
Machine Translation for Machines: the Sentiment Classification Use Case |
Amirhossein Tebbifakhr, Luisa Bentivogli, Matteo Negri, Marco Turchi |
https://www.aclweb.org/anthology/D19-1140.pdf |
2019 |
EMNLP |
# reg-dropout, reg-labelsmooth, train-mtl, pool-max, arch-rnn, arch-lstm, arch-bilstm, arch-att, arch-subword, arch-transformer, task-textclass, task-lm, task-seq2seq |
3 |
Towards Linear Time Neural Machine Translation with Capsule Networks |
Mingxuan Wang |
https://www.aclweb.org/anthology/D19-1074.pdf |
2019 |
EMNLP |
# optim-projection, reg-dropout, train-mtl, train-transfer, pool-max, arch-rnn, arch-lstm, arch-bilstm, arch-cnn, arch-att, arch-transformer, pre-word2vec, pre-glove, pre-elmo, pre-bert, task-textclass, task-textpair, task-lm, task-seq2seq |
0 |
Shallow Domain Adaptive Embeddings for Sentiment Analysis |
Prathusha K Sarma, Yingyu Liang, William Sethares |
https://www.aclweb.org/anthology/D19-1557.pdf |
2019 |
EMNLP |
# optim-adam, arch-rnn, arch-lstm, arch-att, arch-selfatt, arch-residual, arch-memo, arch-transformer |
0 |
Video Dialog via Progressive Inference and Cross-Transformer |
Weike Jin, Zhou Zhao, Mao Gu, Jun Xiao, Furu Wei, Yueting Zhuang |
https://www.aclweb.org/anthology/D19-1217.pdf |
2019 |
EMNLP |
# train-mtl, train-transfer, arch-lstm, arch-att, arch-subword, arch-transformer, task-lm, task-seq2seq |
0 |
Unsupervised Domain Adaptation for Neural Machine Translation with Domain-Aware Feature Embeddings |
Zi-Yi Dou, Junjie Hu, Antonios Anastasopoulos, Graham Neubig |
https://www.aclweb.org/anthology/D19-1147.pdf |
2019 |
EMNLP |
# arch-att, arch-transformer, task-seq2seq, task-relation, task-alignment |
1 |
EASSE: Easier Automatic Sentence Simplification Evaluation |
Fernando Alva-Manchego, Louis Martin, Carolina Scarton, Lucia Specia |
https://www.aclweb.org/anthology/D19-3009.pdf |
2019 |
EMNLP |
# arch-rnn, arch-lstm, arch-bilstm, arch-cnn, arch-att, arch-selfatt, arch-coverage, arch-transformer, pre-bert, task-textpair, task-spanlab, task-lm, task-seq2seq, task-cloze |
4 |
Revealing the Dark Secrets of BERT |
Olga Kovaleva, Alexey Romanov, Anna Rogers, Anna Rumshisky |
https://www.aclweb.org/anthology/D19-1445.pdf |
2019 |
EMNLP |
# optim-adam, pool-max, arch-rnn, arch-lstm, arch-cnn, arch-att, arch-selfatt, arch-copy, arch-coverage, arch-transformer, search-viterbi, struct-crf, task-seqlab |
1 |
Doc2EDAG: An End-to-End Document-level Framework for Chinese Financial Event Extraction |
Shun Zheng, Wei Cao, Wei Xu, Jiang Bian |
https://www.aclweb.org/anthology/D19-1032.pdf |
2019 |
EMNLP |
# reg-dropout, pool-max, arch-cnn, arch-att, arch-selfatt, arch-transformer, comb-ensemble, pre-word2vec, adv-train, task-textclass, task-textpair |
0 |
Self-Attention Enhanced CNNs and Collaborative Curriculum Learning for Distantly Supervised Relation Extraction |
Yuyun Huang, Jinhua Du |
https://www.aclweb.org/anthology/D19-1037.pdf |
2019 |
EMNLP |
# optim-adam, reg-dropout, arch-rnn, arch-birnn, arch-att, arch-selfatt, arch-transformer, task-seq2seq |
1 |
Hierarchical Modeling of Global Context for Document-Level Neural Machine Translation |
Xin Tan, Longyin Zhang, Deyi Xiong, Guodong Zhou |
https://www.aclweb.org/anthology/D19-1168.pdf |
2019 |
EMNLP |
# optim-adam, reg-dropout, reg-stopping, reg-patience, train-transfer, arch-lstm, arch-bilstm, arch-att, arch-bilinear, arch-subword, arch-transformer, comb-ensemble, pre-elmo, pre-bert, struct-crf, task-textclass, task-lm, task-seq2seq, task-relation |
6 |
SciBERT: A Pretrained Language Model for Scientific Text |
Iz Beltagy, Kyle Lo, Arman Cohan |
https://www.aclweb.org/anthology/D19-1371.pdf |
2019 |
EMNLP |
# optim-adam, reg-dropout, arch-lstm, arch-att, arch-copy, arch-coverage, arch-transformer, task-lm, task-seq2seq |
0 |
Taskmaster-1: Toward a Realistic and Diverse Dialog Dataset |
Bill Byrne, Karthik Krishnamoorthi, Chinnadhurai Sankar, Arvind Neelakantan, Ben Goodrich, Daniel Duckworth, Semih Yavuz, Amit Dubey, Kyu-Young Kim, Andy Cedilnik |
https://www.aclweb.org/anthology/D19-1459.pdf |
2019 |
EMNLP |
# reg-dropout, train-mtl, arch-lstm, arch-att, arch-copy, arch-bilinear, arch-coverage, arch-transformer, search-beam, pre-glove, task-lm, task-seq2seq, task-graph |
0 |
Sentence-Level Content Planning and Style Specification for Neural Text Generation |
Xinyu Hua, Lu Wang |
https://www.aclweb.org/anthology/D19-1055.pdf |
2019 |
EMNLP |
# optim-adam, optim-projection, init-glorot, train-mll, train-transfer, train-augment, arch-lstm, arch-transformer, pre-fasttext, pre-bert, loss-nce, task-textclass, task-lm, task-seq2seq, task-relation |
0 |
A systematic comparison of methods for low-resource dependency parsing on genuinely low-resource languages |
Clara Vania, Yova Kementchedjhieva, Anders Søgaard, Adam Lopez |
https://www.aclweb.org/anthology/D19-1102.pdf |
2019 |
EMNLP |
# optim-adam, reg-dropout, arch-rnn, arch-cnn, arch-att, arch-selfatt, arch-transformer, pre-elmo, task-seq2seq |
0 |
Open Domain Web Keyphrase Extraction Beyond Language Modeling |
Lee Xiong, Chuan Hu, Chenyan Xiong, Daniel Campos, Arnold Overwijk |
https://www.aclweb.org/anthology/D19-1521.pdf |
2019 |
EMNLP |
# optim-adam, reg-dropout, reg-stopping, pool-max, arch-lstm, arch-bilstm, arch-att, arch-gating, arch-transformer, comb-ensemble, pre-fasttext, pre-bert, task-seqlab, task-spanlab |
3 |
Don’t Take the Easy Way Out: Ensemble Based Methods for Avoiding Known Dataset Biases |
Christopher Clark, Mark Yatskar, Luke Zettlemoyer |
https://www.aclweb.org/anthology/D19-1418.pdf |
2019 |
EMNLP |
# optim-adam, reg-dropout, reg-labelsmooth, norm-batch, norm-gradient, pool-max, arch-rnn, arch-lstm, arch-att, arch-subword, arch-transformer, search-beam, latent-vae, task-lm, task-seq2seq, task-cloze, task-relation |
2 |
FlowSeq: Non-Autoregressive Conditional Sequence Generation with Generative Flow |
Xuezhe Ma, Chunting Zhou, Xian Li, Graham Neubig, Eduard Hovy |
https://www.aclweb.org/anthology/D19-1437.pdf |
2019 |
EMNLP |
# optim-adam, reg-dropout, arch-rnn, arch-lstm, arch-transformer, pre-fasttext, pre-bert, latent-topic |
0 |
The Trumpiest Trump? Identifying a Subject’s Most Characteristic Tweets |
Charuta Pethe, Steve Skiena |
https://www.aclweb.org/anthology/D19-1175.pdf |
2019 |
EMNLP |
# optim-adam, optim-projection, reg-dropout, train-mtl, train-mll, train-transfer, pool-mean, arch-lstm, arch-att, arch-selfatt, arch-bilinear, arch-transformer, pre-elmo, pre-bert, task-textclass, task-textpair, task-lm, task-seq2seq, task-relation, task-tree, task-lexicon |
8 |
75 Languages, 1 Model: Parsing Universal Dependencies Universally |
Dan Kondratyuk, Milan Straka |
https://www.aclweb.org/anthology/D19-1279.pdf |
2019 |
EMNLP |
# train-mtl, arch-gru, arch-bigru, arch-att, arch-selfatt, arch-transformer, pre-bert |
1 |
SUM-QE: a BERT-based Summary Quality Estimation Model |
Stratos Xenouleas, Prodromos Malakasiotis, Marianna Apidianaki, Ion Androutsopoulos |
https://www.aclweb.org/anthology/D19-1618.pdf |
2019 |
EMNLP |
# optim-adam, reg-stopping, arch-att, arch-subword, arch-transformer, search-beam, task-seq2seq |
2 |
Context-Aware Monolingual Repair for Neural Machine Translation |
Elena Voita, Rico Sennrich, Ivan Titov |
https://www.aclweb.org/anthology/D19-1081.pdf |
2019 |
EMNLP |
# optim-adam, reg-dropout, train-augment, arch-att, arch-transformer, comb-ensemble, pre-fasttext, pre-bert, adv-examp, adv-train, task-textclass, task-lm, task-cloze |
1 |
Learning to Discriminate Perturbations for Blocking Adversarial Attacks in Text Classification |
Yichao Zhou, Jyun-Yu Jiang, Kai-Wei Chang, Wei Wang |
https://www.aclweb.org/anthology/D19-1496.pdf |
2019 |
EMNLP |
# reg-dropout, reg-stopping, norm-layer, arch-rnn, arch-lstm, arch-gru, arch-att, arch-selfatt, arch-bilinear, arch-subword, arch-transformer, search-beam, task-lm, task-seq2seq |
1 |
Joey NMT: A Minimalist NMT Toolkit for Novices |
Julia Kreutzer, Joost Bastings, Stefan Riezler |
https://www.aclweb.org/anthology/D19-3019.pdf |
2019 |
EMNLP |
# optim-sgd, optim-adam, optim-projection, arch-lstm, arch-att, arch-selfatt, arch-coverage, arch-transformer, pre-elmo, pre-bert, task-textclass, task-lm, task-cloze |
1 |
Visualizing and Understanding the Effectiveness of BERT |
Yaru Hao, Li Dong, Furu Wei, Ke Xu |
https://www.aclweb.org/anthology/D19-1424.pdf |
2019 |
EMNLP |
# reg-dropout, reg-stopping, reg-patience, train-mtl, pool-max, arch-rnn, arch-lstm, arch-bilstm, arch-cnn, arch-att, arch-transformer, comb-ensemble, pre-bert, task-textclass, task-textpair, task-spanlab |
0 |
MultiFC: A Real-World Multi-Domain Dataset for Evidence-Based Fact Checking of Claims |
Isabelle Augenstein, Christina Lioma, Dongsheng Wang, Lucas Chaves Lima, Casper Hansen, Christian Hansen, Jakob Grue Simonsen |
https://www.aclweb.org/anthology/D19-1475.pdf |
2019 |
EMNLP |
# optim-adam, arch-rnn, arch-lstm, arch-cnn, arch-att, arch-selfatt, arch-coverage, arch-transformer, task-lm, task-condlm, task-seq2seq |
1 |
Neural Naturalist: Generating Fine-Grained Image Comparisons |
Maxwell Forbes, Christine Kaeser-Chen, Piyush Sharma, Serge Belongie |
https://www.aclweb.org/anthology/D19-1065.pdf |
2019 |
EMNLP |
# train-parallel, arch-rnn, arch-lstm, arch-att, arch-selfatt, arch-subword, arch-transformer, pre-bert, task-lm, task-seq2seq |
4 |
Adaptively Sparse Transformers |
Gonçalo M. Correia, Vlad Niculae, André F. T. Martins |
https://www.aclweb.org/anthology/D19-1223.pdf |
2019 |
EMNLP |
# optim-adam, reg-dropout, arch-lstm, arch-bilstm, arch-att, arch-coverage, arch-transformer, pre-glove, pre-elmo, pre-bert, adv-examp, adv-train, task-textpair, task-lm |
0 |
A Logic-Driven Framework for Consistency of Neural Models |
Tao Li, Vivek Gupta, Maitrey Mehta, Vivek Srikumar |
https://www.aclweb.org/anthology/D19-1405.pdf |
2019 |
EMNLP |
# optim-adam, optim-adadelta, optim-projection, init-glorot, reg-dropout, norm-layer, train-mtl, arch-lstm, arch-bilstm, arch-att, arch-selfatt, arch-residual, arch-bilinear, arch-coverage, arch-transformer, pre-glove, pre-elmo, pre-bert, task-textclass, task-seq2seq, task-relation, task-tree |
0 |
Syntax-Enhanced Self-Attention-Based Semantic Role Labeling |
Yue Zhang, Rui Wang, Luo Si |
https://www.aclweb.org/anthology/D19-1057.pdf |
2019 |
EMNLP |
# optim-adam, reg-labelsmooth, arch-lstm, arch-att, arch-selfatt, arch-transformer, latent-vae, task-spanlab, task-seq2seq, task-alignment |
1 |
Hint-Based Training for Non-Autoregressive Machine Translation |
Zhuohan Li, Zi Lin, Di He, Fei Tian, Tao Qin, Liwei Wang, Tie-Yan Liu |
https://www.aclweb.org/anthology/D19-1573.pdf |
2019 |
EMNLP |
# arch-lstm, arch-cnn, arch-transformer, pre-bert, struct-crf, task-seqlab, task-lm, task-seq2seq |
0 |
What Part of the Neural Network Does This? Understanding LSTMs by Measuring and Dissecting Neurons |
Ji Xin, Jimmy Lin, Yaoliang Yu |
https://www.aclweb.org/anthology/D19-1591.pdf |
2019 |
EMNLP |
# optim-adam, optim-projection, init-glorot, reg-dropout, reg-labelsmooth, norm-layer, norm-gradient, arch-rnn, arch-att, arch-selfatt, arch-residual, arch-subword, arch-transformer, search-beam, pre-bert, task-lm, task-seq2seq |
2 |
Improving Deep Transformer with Depth-Scaled Initialization and Merged Attention |
Biao Zhang, Ivan Titov, Rico Sennrich |
https://www.aclweb.org/anthology/D19-1083.pdf |
2019 |
EMNLP |
# optim-adam, arch-rnn, arch-lstm, arch-att, arch-selfatt, arch-transformer, pre-elmo, pre-bert, task-spanlab |
0 |
Machine Reading Comprehension Using Structural Knowledge Graph-aware Network |
Delai Qiu, Yuanzhe Zhang, Xinwei Feng, Xiangwen Liao, Wenbin Jiang, Yajuan Lyu, Kang Liu, Jun Zhao |
https://www.aclweb.org/anthology/D19-1602.pdf |
2019 |
EMNLP |
# optim-adam, train-mll, arch-rnn, arch-att, arch-selfatt, arch-coverage, arch-subword, arch-transformer, search-beam, task-spanlab, task-lm, task-seq2seq, task-cloze |
1 |
Using Local Knowledge Graph Construction to Scale Seq2Seq Models to Multi-Document Inputs |
Angela Fan, Claire Gardent, Chloé Braud, Antoine Bordes |
https://www.aclweb.org/anthology/D19-1428.pdf |
2019 |
EMNLP |
# reg-dropout, arch-rnn, arch-lstm, arch-bilstm, arch-gru, arch-att, arch-gating, arch-transformer, pre-glove, pre-elmo, pre-bert, struct-crf, task-textclass, task-seqlab, task-seq2seq, meta-arch |
1 |
NeuronBlocks: Building Your NLP DNN Models Like Playing Lego |
Ming Gong, Linjun Shou, Wutao Lin, Zhijie Sang, Quanjia Yan, Ze Yang, Feixiang Cheng, Daxin Jiang |
https://www.aclweb.org/anthology/D19-3028.pdf |
2019 |
EMNLP |
# optim-adam, optim-projection, init-glorot, reg-dropout, arch-att, arch-selfatt, arch-memo, arch-coverage, arch-transformer, search-beam, pre-elmo, pre-bert, task-spanlab, task-seq2seq, task-relation, task-tree |
1 |
A Multi-Type Multi-Span Network for Reading Comprehension that Requires Discrete Reasoning |
Minghao Hu, Yuxing Peng, Zhen Huang, Dongsheng Li |
https://www.aclweb.org/anthology/D19-1170.pdf |
2019 |
EMNLP |
# arch-transformer, pre-word2vec, pre-bert, latent-vae, latent-topic, task-textclass, task-tree |
0 |
Identifying Predictive Causal Factors from News Streams |
Ananth Balashankar, Sunandan Chakraborty, Samuel Fraiberger, Lakshminarayanan Subramanian |
https://www.aclweb.org/anthology/D19-1238.pdf |
2019 |
EMNLP |
# optim-adam, optim-projection, reg-dropout, reg-norm, train-mtl, arch-rnn, arch-birnn, arch-lstm, arch-bilstm, arch-att, arch-selfatt, arch-subword, arch-transformer, comb-ensemble, pre-bert, struct-crf |
1 |
A Stack-Propagation Framework with Token-Level Intent Detection for Spoken Language Understanding |
Libo Qin, Wanxiang Che, Yangming Li, Haoyang Wen, Ting Liu |
https://www.aclweb.org/anthology/D19-1214.pdf |
2019 |
EMNLP |
# optim-adam, reg-dropout, reg-labelsmooth, arch-rnn, arch-cnn, arch-att, arch-selfatt, arch-copy, arch-coverage, arch-subword, arch-transformer, search-beam, pre-elmo, pre-bert, latent-vae, task-seqlab, task-extractive, task-lm, task-seq2seq, task-cloze |
6 |
Text Summarization with Pretrained Encoders |
Yang Liu, Mirella Lapata |
https://www.aclweb.org/anthology/D19-1387.pdf |
2019 |
EMNLP |
# optim-sgd, optim-adam, reg-stopping, train-mll, arch-lstm, arch-att, arch-selfatt, arch-transformer, pre-elmo, task-spanlab, task-lm, task-seq2seq |
1 |
Retrofitting Contextualized Word Embeddings with Paraphrases |
Weijia Shi, Muhao Chen, Pei Zhou, Kai-Wei Chang |
https://www.aclweb.org/anthology/D19-1113.pdf |
2019 |
EMNLP |
# optim-adam, optim-projection, train-mll, train-augment, arch-rnn, arch-att, arch-selfatt, arch-subword, arch-transformer, pre-bert, task-lm, task-seq2seq, task-alignment |
0 |
A Discriminative Neural Model for Cross-Lingual Word Alignment |
Elias Stengel-Eskin, Tzu-ray Su, Matt Post, Benjamin Van Durme |
https://www.aclweb.org/anthology/D19-1084.pdf |
2019 |
EMNLP |
# optim-sgd, optim-adam, reg-dropout, arch-rnn, arch-lstm, arch-gru, arch-att, arch-selfatt, arch-transformer, struct-cfg, task-textclass, task-textpair, task-lm, task-seq2seq, task-tree |
0 |
PaLM: A Hybrid Parser and Language Model |
Hao Peng, Roy Schwartz, Noah A. Smith |
https://www.aclweb.org/anthology/D19-1376.pdf |
2019 |
EMNLP |
# train-transfer, train-augment, arch-att, arch-copy, arch-coverage, arch-transformer, task-seq2seq |
0 |
Abstract Text Summarization: A Low Resource Challenge |
Shantipriya Parida, Petr Motlicek |
https://www.aclweb.org/anthology/D19-1616.pdf |
2019 |
EMNLP |
# optim-adam, reg-dropout, reg-decay, norm-layer, train-mll, train-parallel, arch-rnn, arch-att, arch-subword, arch-transformer, search-greedy, search-beam, pre-bert, task-lm, task-seq2seq, task-cloze |
2 |
Mask-Predict: Parallel Decoding of Conditional Masked Language Models |
Marjan Ghazvininejad, Omer Levy, Yinhan Liu, Luke Zettlemoyer |
https://www.aclweb.org/anthology/D19-1633.pdf |
2019 |
EMNLP |
# optim-adam, optim-projection, train-mll, arch-lstm, arch-att, arch-selfatt, arch-subword, arch-transformer, pre-fasttext, pre-glove, task-seq2seq |
0 |
Rotate King to get Queen: Word Relationships as Orthogonal Transformations in Embedding Space |
Kawin Ethayarajh |
https://www.aclweb.org/anthology/D19-1354.pdf |
2019 |
EMNLP |
# optim-adam, init-glorot, reg-dropout, reg-stopping, train-mll, train-augment, pool-max, arch-rnn, arch-lstm, arch-bilstm, arch-cnn, arch-transformer, pre-bert, adv-examp, adv-train, task-textclass, task-lm, task-seq2seq |
0 |
LexicalAT: Lexical-Based Adversarial Reinforcement Training for Robust Sentiment Classification |
Jingjing Xu, Liang Zhao, Hanqi Yan, Qi Zeng, Yun Liang, Xu Sun |
https://www.aclweb.org/anthology/D19-1554.pdf |
2019 |
EMNLP |
# optim-projection, reg-dropout, norm-layer, pool-mean, arch-rnn, arch-att, arch-selfatt, arch-subword, arch-transformer, pre-bert, task-textclass, task-textpair |
0 |
Learning Invariant Representations of Social Media Users |
Nicholas Andrews, Marcus Bishop |
https://www.aclweb.org/anthology/D19-1178.pdf |
2019 |
EMNLP |
# reg-dropout, arch-rnn, arch-lstm, arch-att, arch-selfatt, arch-coverage, arch-transformer, pre-bert, struct-cfg, task-lm, task-seq2seq, task-cloze, task-tree |
0 |
Tree Transformer: Integrating Tree Structures into Self-Attention |
Yaushian Wang, Hung-Yi Lee, Yun-Nung Chen |
https://www.aclweb.org/anthology/D19-1098.pdf |
2019 |
EMNLP |
# optim-adam, arch-att, arch-transformer, comb-ensemble, pre-bert, task-lm, task-condlm, task-cloze, task-tree |
11 |
Fusion of Detected Objects in Text for Visual Question Answering |
Chris Alberti, Jeffrey Ling, Michael Collins, David Reitter |
https://www.aclweb.org/anthology/D19-1219.pdf |
2019 |
EMNLP |
# train-mll, arch-rnn, arch-lstm, arch-bilstm, arch-gru, arch-cnn, arch-att, arch-memo, arch-transformer, pre-glove, pre-elmo, pre-bert, latent-vae, task-lm, task-seq2seq |
0 |
Next Sentence Prediction helps Implicit Discourse Relation Classification within and across Domains |
Wei Shi, Vera Demberg |
https://www.aclweb.org/anthology/D19-1586.pdf |
2019 |
EMNLP |
# optim-adam, train-transfer, pool-max, arch-lstm, arch-bilstm, arch-att, arch-transformer, pre-glove, pre-skipthought, pre-bert, pre-use, loss-triplet, task-textpair |
5 |
Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks |
Nils Reimers, Iryna Gurevych |
https://www.aclweb.org/anthology/D19-1410.pdf |
2019 |
EMNLP |
# optim-projection, train-mll, train-transfer, arch-lstm, arch-att, arch-coverage, arch-subword, arch-transformer, comb-ensemble, pre-word2vec, pre-fasttext, pre-glove, pre-skipthought, pre-elmo, pre-bert, pre-use, adv-train, loss-cca, task-textclass, task-textpair, task-lm, task-seq2seq, task-cloze |
0 |
Multi-View Domain Adapted Sentence Embeddings for Low-Resource Unsupervised Duplicate Question Detection |
Nina Poerner, Hinrich Schütze |
https://www.aclweb.org/anthology/D19-1173.pdf |
2019 |
EMNLP |
# train-mll, arch-lstm, arch-bilstm, arch-transformer, pre-fasttext, pre-bert, adv-examp, task-textpair, task-condlm, task-seq2seq |
1 |
PAWS-X: A Cross-lingual Adversarial Dataset for Paraphrase Identification |
Yinfei Yang, Yuan Zhang, Chris Tar, Jason Baldridge |
https://www.aclweb.org/anthology/D19-1382.pdf |
2019 |
EMNLP |
# arch-gru, arch-att, arch-subword, arch-transformer, search-beam, task-seq2seq |
2 |
Simpler and Faster Learning of Adaptive Policies for Simultaneous Translation |
Baigong Zheng, Renjie Zheng, Mingbo Ma, Liang Huang |
https://www.aclweb.org/anthology/D19-1137.pdf |
2019 |
EMNLP |
# optim-adam, reg-stopping, train-mll, train-transfer, arch-rnn, arch-lstm, arch-bilstm, arch-cnn, arch-att, arch-selfatt, arch-copy, arch-coverage, arch-transformer, pre-bert, adv-train, latent-topic, task-lm |
0 |
Neural Duplicate Question Detection without Labeled Training Data |
Andreas Rücklé, Nafise Sadat Moosavi, Iryna Gurevych |
https://www.aclweb.org/anthology/D19-1171.pdf |
2019 |
EMNLP |
# arch-att, arch-transformer, pre-bert, task-lm, task-cloze |
1 |
TalkDown: A Corpus for Condescension Detection in Context |
Zijian Wang, Christopher Potts |
https://www.aclweb.org/anthology/D19-1385.pdf |
2019 |
NAACL |
# arch-lstm, arch-cnn, arch-att, arch-subword, arch-transformer, task-seq2seq |
4 |
Positional Encoding to Control Output Sequence Length |
Sho Takase, Naoaki Okazaki |
https://www.aclweb.org/anthology/N19-1401.pdf |
2019 |
NAACL |
# optim-adam, reg-stopping, train-augment, arch-lstm, arch-att, arch-selfatt, arch-copy, arch-coverage, arch-subword, arch-transformer, search-beam, pre-glove, pre-elmo, task-textpair, task-lm, task-condlm, task-seq2seq, task-tree |
1 |
Improved Lexically Constrained Decoding for Translation and Monolingual Rewriting |
J. Edward Hu, Huda Khayrallah, Ryan Culkin, Patrick Xia, Tongfei Chen, Matt Post, Benjamin Van Durme |
https://www.aclweb.org/anthology/N19-1090.pdf |
2019 |
NAACL |
# optim-adam, norm-layer, arch-rnn, arch-lstm, arch-att, arch-selfatt, arch-residual, arch-coverage, arch-subword, arch-transformer, search-beam |
19 |
MuST-C: a Multilingual Speech Translation Corpus |
Mattia A. Di Gangi, Roldano Cattoni, Luisa Bentivogli, Matteo Negri, Marco Turchi |
https://www.aclweb.org/anthology/N19-1202.pdf |
2019 |
NAACL |
# optim-adam, optim-projection, reg-dropout, train-mtl, train-transfer, train-parallel, arch-rnn, arch-att, arch-coverage, arch-subword, arch-transformer, task-seq2seq, task-tree |
0 |
Understanding and Improving Hidden Representations for Neural Machine Translation |
Guanlin Li, Lemao Liu, Xintong Li, Conghui Zhu, Tiejun Zhao, Shuming Shi |
https://www.aclweb.org/anthology/N19-1046.pdf |
2019 |
NAACL |
# optim-adam, reg-dropout, reg-decay, train-transfer, arch-rnn, arch-att, arch-selfatt, arch-subword, arch-transformer, comb-ensemble, search-beam, task-spanlab, task-seq2seq |
0 |
Online Distilling from Checkpoints for Neural Machine Translation |
Hao-Ran Wei, Shujian Huang, Ran Wang, Xin-yu Dai, Jiajun Chen |
https://www.aclweb.org/anthology/N19-1192.pdf |
2019 |
NAACL |
# optim-adam, reg-dropout, arch-lstm, arch-bilstm, arch-att, arch-copy, arch-coverage, arch-transformer, pre-bert, task-lm, task-seq2seq, task-tree |
1 |
PoMo: Generating Entity-Specific Post-Modifiers in Context |
Jun Seok Kang, Robert Logan, Zewei Chu, Yang Chen, Dheeru Dua, Kevin Gimpel, Sameer Singh, Niranjan Balasubramanian |
https://www.aclweb.org/anthology/N19-1089.pdf |
2019 |
NAACL |
# optim-sgd, optim-adam, reg-dropout, train-mtl, arch-rnn, arch-lstm, arch-gru, arch-att, arch-coverage, arch-transformer, pre-glove, pre-elmo, task-lm |
3 |
Recursive Routing Networks: Learning to Compose Modules for Language Understanding |
Ignacio Cases, Clemens Rosenbaum, Matthew Riemer, Atticus Geiger, Tim Klinger, Alex Tamkin, Olivia Li, Sandhini Agarwal, Joshua D. Greene, Dan Jurafsky, Christopher Potts, Lauri Karttunen |
https://www.aclweb.org/anthology/N19-1365.pdf |
2019 |
NAACL |
# optim-adam, train-mtl, pool-max, arch-rnn, arch-lstm, arch-bilstm, arch-treelstm, arch-gnn, arch-cnn, arch-att, arch-selfatt, arch-transformer, pre-glove, pre-elmo, pre-bert, struct-crf, task-textclass, task-textpair, task-seqlab, task-lm, task-seq2seq |
17 |
Star-Transformer |
Qipeng Guo, Xipeng Qiu, Pengfei Liu, Yunfan Shao, Xiangyang Xue, Zheng Zhang |
https://www.aclweb.org/anthology/N19-1133.pdf |
2019 |
NAACL |
# optim-adam, train-mtl, train-mll, arch-att, arch-subword, arch-transformer, pre-word2vec, pre-fasttext, pre-glove, pre-elmo, pre-use, latent-vae, task-textclass, task-textpair, task-seq2seq |
2 |
Correlation Coefficients and Semantic Textual Similarity |
Vitalii Zhelezniak, Aleksandar Savkov, April Shen, Nils Hammerla |
https://www.aclweb.org/anthology/N19-1100.pdf |
2019 |
NAACL |
# optim-adam, reg-dropout, reg-labelsmooth, norm-layer, train-transfer, arch-att, arch-selfatt, arch-subword, arch-transformer, task-seq2seq |
7 |
Non-Parametric Adaptation for Neural Machine Translation |
Ankur Bapna, Orhan Firat |
https://www.aclweb.org/anthology/N19-1191.pdf |
2019 |
NAACL |
# optim-adam, optim-projection, reg-dropout, norm-gradient, arch-rnn, arch-birnn, arch-lstm, arch-gru, arch-treelstm, arch-att, arch-memo, arch-bilinear, arch-subword, arch-transformer, comb-ensemble, search-beam, task-seq2seq, task-relation |
3 |
Syntax-Enhanced Neural Machine Translation with Syntax-Aware Word Representations |
Meishan Zhang, Zhenghua Li, Guohong Fu, Min Zhang |
https://www.aclweb.org/anthology/N19-1118.pdf |
2019 |
NAACL |
# optim-adam, reg-dropout, arch-lstm, arch-bilstm, arch-att, arch-selfatt, arch-transformer, comb-ensemble, pre-fasttext, task-textclass, task-seq2seq, task-tree |
2 |
Semantically-Aligned Equation Generation for Solving and Reasoning Math Word Problems |
Ting-Rui Chiang, Yun-Nung Chen |
https://www.aclweb.org/anthology/N19-1272.pdf |
2019 |
NAACL |
# optim-adam, reg-patience, arch-rnn, arch-lstm, arch-bilstm, arch-att, arch-selfatt, arch-bilinear, arch-coverage, arch-subword, arch-transformer, pre-glove, pre-elmo, pre-bert, struct-crf, adv-train, task-textclass, task-textpair, task-seqlab, task-lm, task-seq2seq, task-relation |
51 |
Linguistic Knowledge and Transferability of Contextual Representations |
Nelson F. Liu, Matt Gardner, Yonatan Belinkov, Matthew E. Peters, Noah A. Smith |
https://www.aclweb.org/anthology/N19-1112.pdf |
2019 |
NAACL |
# optim-sgd, optim-adam, reg-dropout, reg-labelsmooth, norm-layer, arch-att, arch-selfatt, arch-subword, arch-transformer, pre-elmo, task-textclass, task-lm, task-seq2seq |
10 |
Pre-trained language model representations for language generation |
Sergey Edunov, Alexei Baevski, Michael Auli |
https://www.aclweb.org/anthology/N19-1409.pdf |
2019 |
NAACL |
# reg-dropout, reg-worddropout, norm-layer, pool-max, arch-cnn, arch-att, arch-selfatt, arch-bilinear, arch-transformer |
5 |
Relation Extraction using Explicit Context Conditioning |
Gaurav Singh, Parminder Bhatia |
https://www.aclweb.org/anthology/N19-1147.pdf |
2019 |
NAACL |
# optim-adam, reg-dropout, arch-lstm, arch-att, arch-gating, arch-subword, arch-transformer, pre-glove, pre-elmo, pre-bert, adv-train, task-relation |
3 |
Learning to Denoise Distantly-Labeled Data for Entity Typing |
Yasumasa Onoe, Greg Durrett |
https://www.aclweb.org/anthology/N19-1250.pdf |
2019 |
NAACL |
# optim-adam, init-glorot, norm-layer, arch-rnn, arch-lstm, arch-gru, arch-att, arch-selfatt, arch-transformer, task-extractive, task-seq2seq, task-relation |
4 |
Single Document Summarization as Tree Induction |
Yang Liu, Ivan Titov, Mirella Lapata |
https://www.aclweb.org/anthology/N19-1173.pdf |
2019 |
NAACL |
# arch-rnn, arch-lstm, arch-bilstm, arch-treelstm, arch-att, arch-coverage, arch-transformer, pre-word2vec, pre-elmo, pre-bert, pre-use, task-textpair, task-seq2seq, task-tree |
4 |
Evaluating Coherence in Dialogue Systems using Entailment |
Nouha Dziri, Ehsan Kamalloo, Kory Mathewson, Osmar Zaiane |
https://www.aclweb.org/anthology/N19-1381.pdf |
2019 |
NAACL |
# optim-sgd, optim-projection, init-glorot, reg-dropout, reg-stopping, arch-rnn, arch-birnn, arch-lstm, arch-gnn, arch-cnn, arch-att, arch-selfatt, arch-copy, arch-coverage, arch-transformer, search-beam, struct-crf, task-seqlab, task-lm, task-tree, task-graph |
8 |
Text Generation from Knowledge Graphs with Graph Transformers |
Rik Koncel-Kedziorski, Dhanush Bekal, Yi Luan, Mirella Lapata, Hannaneh Hajishirzi |
https://www.aclweb.org/anthology/N19-1238.pdf |
2019 |
NAACL |
# optim-adagrad, reg-dropout, reg-stopping, arch-rnn, arch-lstm, arch-cnn, arch-att, arch-subword, arch-transformer, search-beam, task-condlm, task-seq2seq |
2 |
Learning to Stop in Structured Prediction for Neural Machine Translation |
Mingbo Ma, Renjie Zheng, Liang Huang |
https://www.aclweb.org/anthology/N19-1187.pdf |
2019 |
NAACL |
# optim-adam, pool-max, arch-rnn, arch-lstm, arch-bilstm, arch-att, arch-memo, arch-transformer, comb-ensemble, pre-glove, pre-bert, struct-crf, task-textclass, task-textpair, task-lm, task-seq2seq, task-alignment |
0 |
Alignment over Heterogeneous Embeddings for Question Answering |
Vikas Yadav, Steven Bethard, Mihai Surdeanu |
https://www.aclweb.org/anthology/N19-1274.pdf |
2019 |
NAACL |
# arch-att, arch-transformer, search-greedy, pre-glove, task-lm |
0 |
Asking the Right Question: Inferring Advice-Seeking Intentions from Personal Narratives |
Liye Fu, Jonathan P. Chang, Cristian Danescu-Niculescu-Mizil |
https://www.aclweb.org/anthology/N19-1052.pdf |
2019 |
NAACL |
# optim-sgd, reg-dropout, arch-rnn, arch-lstm, arch-bilstm, arch-transformer, comb-ensemble, pre-word2vec, latent-vae, task-lm |
0 |
Using Large Corpus N-gram Statistics to Improve Recurrent Neural Language Models |
Yiben Yang, Ji-Ping Wang, Doug Downey |
https://www.aclweb.org/anthology/N19-1330.pdf |
2019 |
NAACL |
# train-mll, arch-rnn, arch-lstm, arch-att, arch-transformer, struct-hmm, task-lm, task-seq2seq |
0 |
Customizing Grapheme-to-Phoneme System for Non-Trivial Transcription Problems in Bangla Language |
Sudipta Saha Shubha, Nafis Sadeq, Shafayat Ahmed, Md. Nahidul Islam, Muhammad Abdullah Adnan, Md. Yasin Ali Khan, Mohammad Zuberul Islam |
https://www.aclweb.org/anthology/N19-1322.pdf |
2019 |
NAACL |
# optim-adam, reg-stopping, norm-gradient, arch-rnn, arch-lstm, arch-bilstm, arch-att, arch-transformer, pre-bert, struct-crf |
0 |
Joint Multiple Intent Detection and Slot Labeling for Goal-Oriented Dialog |
Rashmi Gangadharaiah, Balakrishnan Narayanaswamy |
https://www.aclweb.org/anthology/N19-1055.pdf |
2019 |
NAACL |
# optim-projection, train-mll, pool-max, arch-rnn, arch-att, arch-selfatt, arch-bilinear, arch-subword, arch-transformer, pre-bert, task-seq2seq |
11 |
Information Aggregation for Multi-Head Attention with Routing-by-Agreement |
Jian Li, Baosong Yang, Zi-Yi Dou, Xing Wang, Michael R. Lyu, Zhaopeng Tu |
https://www.aclweb.org/anthology/N19-1359.pdf |
2019 |
NAACL |
# optim-adam, arch-lstm, arch-bilstm, arch-att, arch-coverage, arch-transformer, search-beam, pre-glove, pre-elmo, pre-bert, struct-crf, adv-examp, task-textpair, task-lm, task-condlm, task-seq2seq, task-alignment |
9 |
PAWS: Paraphrase Adversaries from Word Scrambling |
Yuan Zhang, Jason Baldridge, Luheng He |
https://www.aclweb.org/anthology/N19-1131.pdf |
2019 |
NAACL |
# optim-sgd, train-mll, arch-rnn, arch-lstm, arch-att, arch-memo, arch-coverage, arch-subword, arch-transformer, pre-word2vec, pre-glove, pre-elmo, pre-bert, struct-crf, task-textclass, task-lm, task-seq2seq |
2 |
One Size Does Not Fit All: Comparing NMT Representations of Different Granularities |
Nadir Durrani, Fahim Dalvi, Hassan Sajjad, Yonatan Belinkov, Preslav Nakov |
https://www.aclweb.org/anthology/N19-1154.pdf |
2019 |
NAACL |
# reg-dropout, reg-decay, train-mtl, arch-rnn, arch-att, arch-selfatt, arch-copy, arch-transformer, comb-ensemble, pre-glove, pre-elmo, pre-bert, task-lm, task-seq2seq, task-tree |
20 |
Improving Grammatical Error Correction via Pre-Training a Copy-Augmented Architecture with Unlabeled Data |
Wei Zhao, Liang Wang, Kewei Shen, Ruoyu Jia, Jingming Liu |
https://www.aclweb.org/anthology/N19-1014.pdf |
2019 |
NAACL |
# optim-adam, reg-dropout, reg-labelsmooth, arch-rnn, arch-gru, arch-att, arch-selfatt, arch-residual, arch-coverage, arch-subword, arch-transformer, task-seq2seq |
9 |
Selective Attention for Context-aware Neural Machine Translation |
Sameen Maruf, André F. T. Martins, Gholamreza Haffari |
https://www.aclweb.org/anthology/N19-1313.pdf |
2019 |
NAACL |
# reg-dropout, norm-gradient, train-transfer, train-parallel, arch-att, arch-subword, arch-transformer, task-seq2seq, task-alignment |
0 |
Measuring Immediate Adaptation Performance for Neural Machine Translation |
Patrick Simianer, Joern Wuebker, John DeNero |
https://www.aclweb.org/anthology/N19-1206.pdf |
2019 |
NAACL |
# reg-dropout, train-mtl, train-mll, train-transfer, arch-rnn, arch-att, arch-selfatt, arch-subword, arch-transformer, loss-nce, task-seqlab, task-seq2seq |
24 |
Massively Multilingual Neural Machine Translation |
Roee Aharoni, Melvin Johnson, Orhan Firat |
https://www.aclweb.org/anthology/N19-1388.pdf |
2019 |
NAACL |
# optim-adam, reg-dropout, reg-decay, train-transfer, train-augment, arch-lstm, arch-bilstm, arch-att, arch-selfatt, arch-coverage, arch-transformer, comb-ensemble, pre-glove, pre-skipthought, pre-elmo, pre-bert, struct-crf, task-textclass, task-textpair, task-seqlab, task-spanlab, task-lm, task-seq2seq, task-cloze |
3209 |
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding |
Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova |
https://www.aclweb.org/anthology/N19-1423.pdf |