Year | Conf. | Topic | Cited | Paper | Authors | Url
--- | --- | --- | --- | --- | --- | ---
2019 | ACL | # optim-adam, reg-dropout, reg-worddropout, arch-rnn, arch-lstm, arch-gru, arch-att, arch-transformer, pre-word2vec, pre-fasttext, pre-elmo, pre-bert, struct-crf, task-seqlab | 5 | Neural Architectures for Nested NER through Linearization | Jana Straková, Milan Straka, Jan Hajic | https://www.aclweb.org/anthology/P19-1527.pdf
2019 | ACL | # optim-adam, arch-lstm, arch-att, arch-memo, arch-coverage, pre-elmo, pre-bert, task-spanlab | 3 | Real-Time Open-Domain Question Answering with Dense-Sparse Phrase Index | Minjoon Seo, Jinhyuk Lee, Tom Kwiatkowski, Ankur Parikh, Ali Farhadi, Hannaneh Hajishirzi | https://www.aclweb.org/anthology/P19-1436.pdf
2019 | ACL | # optim-adam, train-mll, arch-rnn, arch-coverage, arch-subword, pre-word2vec, pre-fasttext, pre-glove, pre-elmo, pre-bert | 0 | Probing for Semantic Classes: Diagnosing the Meaning Content of Word Embeddings | Yadollah Yaghoobzadeh, Katharina Kann, T. J. Hazen, Eneko Agirre, Hinrich Schütze | https://www.aclweb.org/anthology/P19-1574.pdf
2019 | ACL | # optim-adagrad, arch-rnn, arch-lstm, arch-gru, arch-bigru, arch-gnn, arch-att, search-beam, search-viterbi, pre-glove, pre-skipthought, pre-elmo, task-lm, task-relation | 0 | Multi-Relational Script Learning for Discourse Relations | I-Ta Lee, Dan Goldwasser | https://www.aclweb.org/anthology/P19-1413.pdf
2019 | ACL | # train-transfer, arch-rnn, arch-lstm, arch-cnn, arch-att, arch-selfatt, arch-subword, pre-word2vec, pre-elmo, pre-bert, task-textclass, task-seqlab, task-lm, task-seq2seq | 0 | An Investigation of Transfer Learning-Based Sentiment Analysis in Japanese | Enkhbold Bataa, Joshua Wu | https://www.aclweb.org/anthology/P19-1458.pdf
2019 | ACL | # optim-adam, optim-projection, train-transfer, train-augment, pre-word2vec, pre-glove, pre-elmo, task-textclass, task-condlm | 7 | Mitigating Gender Bias in Natural Language Processing: Literature Review | Tony Sun, Andrew Gaut, Shirlyn Tang, Yuxin Huang, Mai ElSherief, Jieyu Zhao, Diba Mirza, Elizabeth Belding, Kai-Wei Chang, William Yang Wang | https://www.aclweb.org/anthology/P19-1159.pdf
2019 | ACL | # optim-adam, reg-dropout, norm-layer, train-active, arch-lstm, arch-bilstm, arch-att, pre-glove, pre-elmo, pre-bert, struct-crf, task-lm | 0 | Learning Emphasis Selection for Written Text in Visual Media from Crowd-Sourced Label Distributions | Amirreza Shirani, Franck Dernoncourt, Paul Asente, Nedim Lipka, Seokhwan Kim, Jose Echevarria, Thamar Solorio | https://www.aclweb.org/anthology/P19-1112.pdf
2019 | ACL | # optim-adam, reg-dropout, train-mtl, train-mll, pool-max, pool-mean, arch-lstm, arch-bilstm, arch-gru, arch-bigru, arch-coverage, pre-glove, pre-skipthought, pre-elmo, pre-bert, adv-train, task-textpair, task-lm, task-cloze, task-relation | 2 | DisSent: Learning Sentence Representations from Explicit Discourse Relations | Allen Nie, Erin Bennett, Noah Goodman | https://www.aclweb.org/anthology/P19-1442.pdf
2019 | ACL | # reg-dropout, norm-layer, arch-rnn, arch-lstm, arch-att, arch-selfatt, arch-bilinear, arch-coverage, arch-transformer, comb-ensemble, pre-glove, pre-elmo, pre-bert, task-relation | 8 | Head-Driven Phrase Structure Grammar Parsing on Penn Treebank | Junru Zhou, Hai Zhao | https://www.aclweb.org/anthology/P19-1230.pdf
2019 | ACL | # optim-sgd, train-mll, arch-rnn, arch-lstm, arch-bilstm, arch-att, arch-selfatt, comb-ensemble, pre-word2vec, pre-glove, pre-elmo, pre-bert, task-textpair, task-seqlab, task-seq2seq | 0 | End-to-End Sequential Metaphor Identification Inspired by Linguistic Theories | Rui Mao, Chenghua Lin, Frank Guerin | https://www.aclweb.org/anthology/P19-1378.pdf
2019 | ACL | # arch-lstm, arch-bilstm, arch-att, arch-selfatt, arch-bilinear, pre-elmo, task-seqlab, task-lm, task-tree | 2 | End-to-end Deep Reinforcement Learning Based Coreference Resolution | Hongliang Fei, Xu Li, Dingcheng Li, Ping Li | https://www.aclweb.org/anthology/P19-1064.pdf
2019 | ACL | # optim-adam, optim-adagrad, train-mll, arch-coverage, arch-subword, pre-fasttext, pre-elmo, task-lm, task-seq2seq | 2 | Multilingual and Cross-Lingual Graded Lexical Entailment | Ivan Vulić, Simone Paolo Ponzetto, Goran Glavaš | https://www.aclweb.org/anthology/P19-1490.pdf
2019 | ACL | # optim-adam, reg-dropout, pool-max, arch-lstm, arch-bilstm, arch-treelstm, arch-att, arch-coverage, search-beam, search-viterbi, pre-elmo, struct-crf, task-seqlab, task-seq2seq, task-tree | 0 | Span-Level Model for Relation Extraction | Kalpit Dixit, Yaser Al-Onaizan | https://www.aclweb.org/anthology/P19-1525.pdf
2019 | ACL | # optim-sgd, reg-dropout, norm-gradient, arch-rnn, arch-lstm, arch-att, search-beam, pre-elmo, latent-vae, task-extractive, task-lm, task-seq2seq | 1 | Simple Unsupervised Summarization by Contextual Matching | Jiawei Zhou, Alexander Rush | https://www.aclweb.org/anthology/P19-1503.pdf
2019 | ACL | # optim-adam, optim-amsgrad, optim-projection, reg-stopping, train-mtl, train-transfer, arch-lstm, arch-bilstm, arch-att, arch-coverage, arch-transformer, pre-skipthought, pre-elmo, pre-bert, task-textclass, task-textpair, task-spanlab, task-lm, task-seq2seq, task-relation | 0 | Can You Tell Me How to Get Past Sesame Street? Sentence-Level Pretraining Beyond Language Modeling | Alex Wang, Jan Hula, Patrick Xia, Raghavendra Pappagari, R. Thomas McCoy, Roma Patel, Najoung Kim, Ian Tenney, Yinghui Huang, Katherin Yu, Shuning Jin, Berlin Chen, Benjamin Van Durme, Edouard Grave, Ellie Pavlick, Samuel R. Bowman | https://www.aclweb.org/anthology/P19-1439.pdf
2019 | ACL | # optim-adam, arch-rnn, arch-gru, arch-gnn, arch-att, arch-selfatt, comb-ensemble, pre-elmo, pre-bert, task-textclass, task-spanlab, task-seq2seq | 6 | Multi-hop Reading Comprehension across Multiple Documents by Reasoning over Heterogeneous Graphs | Ming Tu, Guangtao Wang, Jing Huang, Yun Tang, Xiaodong He, Bowen Zhou | https://www.aclweb.org/anthology/P19-1260.pdf
2019 | ACL | # optim-adam, optim-projection, reg-dropout, train-mll, arch-lstm, arch-bilstm, arch-treelstm, arch-att, pre-word2vec, pre-elmo, pre-bert, task-textpair, task-seq2seq, task-alignment | 0 | Putting Evaluation in Context: Contextual Embeddings Improve Machine Translation Evaluation | Nitika Mathur, Timothy Baldwin, Trevor Cohn | https://www.aclweb.org/anthology/P19-1269.pdf
2019 | ACL | # init-glorot, reg-dropout, arch-rnn, arch-birnn, arch-lstm, arch-gru, arch-att, arch-selfatt, arch-bilinear, pre-elmo, pre-bert, struct-crf, task-textclass | 12 | Joint Slot Filling and Intent Detection via Capsule Neural Networks | Chenwei Zhang, Yaliang Li, Nan Du, Wei Fan, Philip Yu | https://www.aclweb.org/anthology/P19-1519.pdf
2019 | ACL | # optim-adam, reg-dropout, reg-stopping, arch-rnn, arch-lstm, arch-bilstm, pre-elmo, task-lm | 0 | Putting Words in Context: LSTM Language Models and Lexical Ambiguity | Laura Aina, Kristina Gulordava, Gemma Boleda | https://www.aclweb.org/anthology/P19-1324.pdf
2019 | ACL | # optim-sgd, optim-adam, reg-dropout, train-mll, arch-lstm, arch-att, arch-subword, pre-elmo, task-textpair, task-lm, task-seq2seq | 0 | Towards Language Agnostic Universal Representations | Armen Aghajanyan, Xia Song, Saurabh Tiwary | https://www.aclweb.org/anthology/P19-1395.pdf
2019 | ACL | # optim-adam, reg-dropout, reg-stopping, arch-rnn, arch-lstm, arch-cnn, arch-att, arch-selfatt, pre-glove, pre-elmo, pre-bert, struct-crf, task-seqlab, task-lm, task-relation | 2 | Multi-grained Named Entity Recognition | Congying Xia, Chenwei Zhang, Tao Yang, Yaliang Li, Nan Du, Xian Wu, Wei Fan, Fenglong Ma, Philip Yu | https://www.aclweb.org/anthology/P19-1138.pdf
2019 | ACL | # optim-adam, optim-projection, train-mll, arch-lstm, arch-att, arch-selfatt, arch-subword, comb-ensemble, pre-fasttext, pre-elmo, pre-bert, task-textclass, task-lm | 9 | Multilingual Constituency Parsing with Self-Attention and Pre-Training | Nikita Kitaev, Steven Cao, Dan Klein | https://www.aclweb.org/anthology/P19-1340.pdf
2019 | ACL | # optim-adam, optim-projection, reg-dropout, arch-lstm, arch-gru, arch-bigru, arch-cnn, arch-att, arch-selfatt, arch-transformer, pre-glove, pre-elmo, struct-crf, task-textclass | 0 | Observing Dialogue in Therapy: Categorizing and Forecasting Behavioral Codes | Jie Cao, Michael Tanana, Zac Imel, Eric Poitras, David Atkins, Vivek Srikumar | https://www.aclweb.org/anthology/P19-1563.pdf
2019 | ACL | # optim-adam, train-augment, arch-rnn, arch-lstm, arch-cnn, arch-transformer, search-beam, pre-glove, pre-elmo, pre-bert, task-lm | 1 | Relating Simple Sentence Representations in Deep Neural Networks and the Brain | Sharmistha Jat, Hao Tang, Partha Talukdar, Tom Mitchell | https://www.aclweb.org/anthology/P19-1507.pdf
2019 | ACL | # optim-adam, optim-projection, arch-lstm, arch-att, arch-selfatt, arch-coverage, arch-subword, comb-ensemble, search-beam, search-viterbi, pre-word2vec, pre-elmo, pre-bert, task-lm | 0 | Cross-Domain Generalization of Neural Constituency Parsers | Daniel Fried, Nikita Kitaev, Dan Klein | https://www.aclweb.org/anthology/P19-1031.pdf
2019 | ACL | # optim-adam, reg-dropout, norm-batch, norm-gradient, train-transfer, pool-mean, arch-lstm, arch-bilstm, arch-att, arch-memo, comb-ensemble, pre-elmo, loss-cca, task-lm, task-seq2seq, task-alignment | 1 | Multimodal and Multi-view Models for Emotion Recognition | Gustavo Aguilar, Viktor Rozgic, Weiran Wang, Chao Wang | https://www.aclweb.org/anthology/P19-1095.pdf
2019 | ACL | # optim-adam, arch-lstm, arch-att, arch-selfatt, arch-bilinear, arch-subword, arch-transformer, comb-ensemble, pre-elmo, pre-bert, loss-margin, task-spanlab, task-lm | 2 | Enhancing Pre-Trained Language Representations with Rich Knowledge for Machine Reading Comprehension | An Yang, Quan Wang, Jing Liu, Kai Liu, Yajuan Lyu, Hua Wu, Qiaoqiao She, Sujian Li | https://www.aclweb.org/anthology/P19-1226.pdf
2019 | ACL | # arch-lstm, arch-att, pre-elmo, pre-bert, task-lm | 2 | Does it Make Sense? And Why? A Pilot Study for Sense Making and Explanation | Cunxiang Wang, Shuailong Liang, Yue Zhang, Xiaonan Li, Tian Gao | https://www.aclweb.org/anthology/P19-1393.pdf
2019 | ACL | # optim-adam, optim-projection, arch-lstm, arch-bilstm, arch-gru, arch-cnn, arch-att, pre-glove, pre-elmo, latent-topic, task-textclass | 9 | The Risk of Racial Bias in Hate Speech Detection | Maarten Sap, Dallas Card, Saadia Gabriel, Yejin Choi, Noah A. Smith | https://www.aclweb.org/anthology/P19-1163.pdf
2019 | ACL | # optim-adam, reg-dropout, train-mtl, train-transfer, pool-max, arch-rnn, arch-lstm, arch-bilstm, arch-att, arch-selfatt, pre-elmo, loss-svd, task-textclass, task-textpair, task-lm, task-seq2seq, meta-arch | 1 | Continual and Multi-Task Architecture Search | Ramakanth Pasunuru, Mohit Bansal | https://www.aclweb.org/anthology/P19-1185.pdf
2019 | ACL | # init-glorot, train-transfer, arch-lstm, arch-att, arch-coverage, arch-transformer, pre-glove, pre-elmo, pre-bert, adv-train, latent-topic, task-textclass, task-lm | 1 | Zero-Shot Entity Linking by Reading Entity Descriptions | Lajanugen Logeswaran, Ming-Wei Chang, Kenton Lee, Kristina Toutanova, Jacob Devlin, Honglak Lee | https://www.aclweb.org/anthology/P19-1335.pdf
2019 | ACL | # reg-dropout, arch-rnn, arch-lstm, arch-cnn, pre-glove, pre-elmo, task-lm, task-relation | 0 | Identifying Visible Actions in Lifestyle Vlogs | Oana Ignat, Laura Burdick, Jia Deng, Rada Mihalcea | https://www.aclweb.org/anthology/P19-1643.pdf
2019 | ACL | # optim-adam, train-transfer, arch-lstm, arch-treelstm, arch-bilinear, arch-coverage, search-viterbi, pre-glove, pre-elmo, task-textpair, task-relation, task-tree | 0 | Automatic Generation of High Quality CCGbanks for Parser Domain Adaptation | Masashi Yoshikawa, Hiroshi Noji, Koji Mineshima, Daisuke Bekki | https://www.aclweb.org/anthology/P19-1013.pdf
2019 | ACL | # optim-adadelta, train-mll, train-transfer, arch-rnn, arch-birnn, arch-lstm, arch-bilstm, arch-gru, arch-cnn, arch-att, arch-memo, comb-ensemble, pre-word2vec, pre-elmo, adv-train, task-lm, task-seq2seq | 0 | Distilling Discrimination and Generalization Knowledge for Event Detection via Delta-Representation Learning | Yaojie Lu, Hongyu Lin, Xianpei Han, Le Sun | https://www.aclweb.org/anthology/P19-1429.pdf
2019 | ACL | # optim-adam, reg-dropout, arch-rnn, arch-birnn, arch-lstm, arch-att, arch-selfatt, arch-bilinear, arch-subword, comb-ensemble, pre-elmo, pre-bert, task-seqlab, task-spanlab, task-seq2seq | 0 | MC^2: Multi-perspective Convolutional Cube for Conversational Machine Reading Comprehension | Xuanyu Zhang | https://www.aclweb.org/anthology/P19-1622.pdf
2019 | ACL | # optim-adam, optim-projection, pool-mean, arch-coverage, arch-subword, pre-elmo, pre-bert, task-lm, task-seq2seq | 2 | Entity-Centric Contextual Affective Analysis | Anjalie Field, Yulia Tsvetkov | https://www.aclweb.org/anthology/P19-1243.pdf
2019 | ACL | # optim-adadelta, reg-dropout, train-mll, arch-rnn, arch-lstm, arch-bilstm, arch-gating, search-beam, search-viterbi, pre-elmo | 0 | Improving Open Information Extraction via Iterative Rank-Aware Learning | Zhengbao Jiang, Pengcheng Yin, Graham Neubig | https://www.aclweb.org/anthology/P19-1523.pdf
2019 | ACL | # reg-dropout, train-transfer, train-active, arch-lstm, arch-bilstm, arch-att, arch-coverage, pre-glove, pre-elmo, task-textclass, task-seqlab | 1 | The Language of Legal and Illegal Activity on the Darknet | Leshem Choshen, Dan Eldad, Daniel Hershcovich, Elior Sulem, Omri Abend | https://www.aclweb.org/anthology/P19-1419.pdf
2019 | ACL | # optim-adam, reg-dropout, train-transfer, arch-lstm, arch-gru, arch-cnn, arch-att, arch-selfatt, arch-memo, arch-subword, arch-transformer, pre-elmo, pre-bert, adv-train, task-textclass, task-lm, task-seq2seq, task-relation | 2 | Fine-tuning Pre-Trained Transformer Language Models to Distantly Supervised Relation Extraction | Christoph Alt, Marc Hübner, Leonhard Hennig | https://www.aclweb.org/anthology/P19-1134.pdf
2019 | ACL | # optim-adam, optim-projection, arch-rnn, arch-lstm, arch-cnn, arch-att, arch-selfatt, arch-transformer, pre-glove, pre-elmo, pre-bert, task-textpair, task-lm | 20 | What Does BERT Learn about the Structure of Language? | Ganesh Jawahar, Benoît Sagot, Djamé Seddah | https://www.aclweb.org/anthology/P19-1356.pdf
2019 | ACL | # init-glorot, pool-mean, arch-lstm, arch-bilstm, arch-att, arch-coverage, pre-glove, pre-elmo, task-relation | 0 | Embedding Time Expressions for Deep Temporal Ordering Models | Tanya Goyal, Greg Durrett | https://www.aclweb.org/anthology/P19-1433.pdf
2019 | ACL | # optim-adam, reg-stopping, pool-max, arch-lstm, arch-gru, arch-bigru, arch-cnn, arch-att, arch-selfatt, arch-transformer, pre-glove, pre-elmo, pre-bert, task-condlm, task-seq2seq | 1 | Constructing Interpretive Spatio-Temporal Features for Multi-Turn Responses Selection | Junyu Lu, Chenbin Zhang, Zeying Xie, Guang Ling, Tom Chao Zhou, Zenglin Xu | https://www.aclweb.org/anthology/P19-1006.pdf
2019 | ACL | # optim-adam, arch-rnn, arch-lstm, arch-att, arch-selfatt, search-beam, pre-word2vec, pre-fasttext, pre-glove, pre-elmo, pre-bert, nondif-reinforce, task-extractive, task-lm, task-condlm, task-seq2seq, task-lexicon | 2 | Sentence Mover’s Similarity: Automatic Evaluation for Multi-Sentence Texts | Elizabeth Clark, Asli Celikyilmaz, Noah A. Smith | https://www.aclweb.org/anthology/P19-1264.pdf
2019 | ACL | # train-mtl, train-transfer, arch-lstm, arch-att, pre-elmo, pre-bert, latent-vae, task-textclass, task-lm, task-seq2seq, task-cloze | 6 | Pretraining Methods for Dialog Context Representation Learning | Shikib Mehri, Evgeniia Razumovskaia, Tiancheng Zhao, Maxine Eskenazi | https://www.aclweb.org/anthology/P19-1373.pdf
2019 | ACL | # train-mtl, train-transfer, arch-coverage, arch-subword, arch-transformer, comb-ensemble, pre-elmo, pre-bert, task-textclass, task-spanlab, task-lm, task-seq2seq, task-relation | 4 | BAM! Born-Again Multi-Task Networks for Natural Language Understanding | Kevin Clark, Minh-Thang Luong, Urvashi Khandelwal, Christopher D. Manning, Quoc V. Le | https://www.aclweb.org/anthology/P19-1595.pdf
2019 | ACL | # optim-adam, train-augment, arch-lstm, arch-att, arch-selfatt, arch-coverage, arch-subword, pre-elmo, pre-bert, adv-examp, task-spanlab, task-seq2seq, task-alignment | 1 | Improving the Robustness of Question Answering Systems to Question Paraphrasing | Wee Chung Gan, Hwee Tou Ng | https://www.aclweb.org/anthology/P19-1610.pdf
2019 | ACL | # optim-adam, optim-projection, reg-dropout, arch-lstm, arch-bilstm, arch-att, pre-glove, pre-elmo, pre-bert, struct-crf, task-textpair, task-seqlab, task-spanlab, task-seq2seq | 2 | Augmenting Neural Networks with First-order Logic | Tao Li, Vivek Srikumar | https://www.aclweb.org/anthology/P19-1028.pdf
2019 | ACL | # optim-adam, reg-dropout, arch-lstm, arch-bilstm, arch-att, arch-coverage, pre-glove, pre-elmo, struct-hmm, task-textpair, task-lm, task-seq2seq | 1 | Knowledge-aware Pronoun Coreference Resolution | Hongming Zhang, Yan Song, Yangqiu Song, Dong Yu | https://www.aclweb.org/anthology/P19-1083.pdf
2019 | ACL | # optim-adam, init-glorot, reg-dropout, arch-rnn, arch-lstm, arch-cnn, arch-att, arch-selfatt, arch-memo, arch-copy, arch-coverage, arch-subword, arch-transformer, pre-word2vec, pre-elmo, pre-bert, struct-hmm, latent-vae, task-extractive, task-lm, task-seq2seq, task-cloze | 0 | HIBERT: Document Level Pre-training of Hierarchical Bidirectional Transformers for Document Summarization | Xingxing Zhang, Furu Wei, Ming Zhou | https://www.aclweb.org/anthology/P19-1499.pdf
2019 | ACL | # optim-adagrad, train-mtl, train-transfer, pool-max, arch-rnn, arch-cnn, arch-att, arch-residual, arch-subword, pre-word2vec, pre-elmo | 0 | Employing the Correspondence of Relations and Connectives to Identify Implicit Discourse Relations via Label Embeddings | Linh The Nguyen, Linh Van Ngo, Khoat Than, Thien Huu Nguyen | https://www.aclweb.org/anthology/P19-1411.pdf
2019 | ACL | # optim-sgd, reg-norm, train-mtl, train-mll, train-transfer, arch-lstm, arch-bilstm, search-viterbi, pre-glove, pre-elmo, struct-crf, adv-train, task-seqlab, task-lm, task-seq2seq | 0 | Cross-Domain NER using Cross-Domain Language Modeling | Chen Jia, Xiaobo Liang, Yue Zhang | https://www.aclweb.org/anthology/P19-1236.pdf
2019 | ACL | # optim-adam, pool-max, arch-rnn, arch-lstm, arch-att, arch-selfatt, arch-transformer, pre-word2vec, pre-elmo, pre-bert, task-lm, task-seq2seq | 1 | Attention Is (not) All You Need for Commonsense Reasoning | Tassilo Klein, Moin Nabi | https://www.aclweb.org/anthology/P19-1477.pdf
2019 | ACL | # optim-adam, optim-adadelta, norm-gradient, train-mtl, train-transfer, pool-max, arch-rnn, arch-lstm, arch-bilstm, arch-att, arch-subword, arch-transformer, search-beam, pre-fasttext, pre-glove, pre-elmo, pre-bert, task-textclass, task-lm, task-seq2seq | 1 | Gated Embeddings in End-to-End Speech Recognition for Conversational-Context Fusion | Suyoun Kim, Siddharth Dalmia, Florian Metze | https://www.aclweb.org/anthology/P19-1107.pdf
2019 | ACL | # optim-adam, arch-lstm, arch-bilstm, arch-att, arch-coverage, arch-subword, pre-word2vec, pre-fasttext, pre-elmo, pre-bert, task-seqlab | 3 | Language Modelling Makes Sense: Propagating Representations through WordNet for Full-Coverage Word Sense Disambiguation | Daniel Loureiro, Alípio Jorge | https://www.aclweb.org/anthology/P19-1569.pdf
2019 | ACL | # reg-stopping, arch-rnn, arch-lstm, arch-bilstm, arch-coverage, search-viterbi, pre-glove, pre-elmo, pre-bert, struct-crf, task-seqlab | 1 | Towards Improving Neural Named Entity Recognition with Gazetteers | Tianyu Liu, Jin-Ge Yao, Chin-Yew Lin | https://www.aclweb.org/anthology/P19-1524.pdf
2019 | ACL | # optim-adam, reg-dropout, train-mtl, train-mll, pool-max, arch-lstm, arch-bilstm, arch-gru, arch-cnn, arch-att, arch-coverage, arch-subword, pre-word2vec, pre-fasttext, pre-skipthought, pre-elmo, pre-bert, loss-svd, task-textclass, task-textpair, task-seqlab | 2 | Robust Representation Learning of Biomedical Names | Minh C. Phan, Aixin Sun, Yi Tay | https://www.aclweb.org/anthology/P19-1317.pdf
2019 | ACL | # optim-adam, reg-dropout, norm-gradient, arch-lstm, arch-cnn, arch-att, arch-bilinear, arch-coverage, arch-subword, search-beam, pre-elmo, task-spanlab, task-lm, task-seq2seq, task-relation | 0 | Generating Question-Answer Hierarchies | Kalpesh Krishna, Mohit Iyyer | https://www.aclweb.org/anthology/P19-1224.pdf
2019 | ACL | # optim-adam, arch-lstm, arch-gru, arch-att, arch-bilinear, pre-glove, pre-elmo, loss-margin, task-lm, task-condlm, task-seq2seq | 0 | Multi-grained Attention with Object-level Grounding for Visual Question Answering | Pingping Huang, Jianhui Huang, Yuqing Guo, Min Qiao, Yong Zhu | https://www.aclweb.org/anthology/P19-1349.pdf
2019 | ACL | # init-glorot, reg-dropout, train-transfer, arch-att, pre-elmo, loss-cca, task-seq2seq | 2 | Fine-Grained Temporal Relation Extraction | Siddharth Vashishtha, Benjamin Van Durme, Aaron Steven White | https://www.aclweb.org/anthology/P19-1280.pdf
2019 | ACL | # optim-adam, train-mtl, train-transfer, arch-rnn, arch-lstm, arch-att, arch-selfatt, arch-residual, arch-gating, arch-transformer, comb-ensemble, search-beam, pre-glove, pre-elmo, pre-bert, task-spanlab, task-lm, task-seq2seq | 5 | Multi-style Generative Reading Comprehension | Kyosuke Nishida, Itsumi Saito, Kosuke Nishida, Kazutoshi Shinoda, Atsushi Otsuka, Hisako Asano, Junji Tomita | https://www.aclweb.org/anthology/P19-1220.pdf
2019 | ACL | # optim-adam, optim-projection, arch-rnn, arch-lstm, arch-bilstm, arch-gru, arch-att, arch-transformer, pre-word2vec, pre-glove, pre-elmo, task-lm, task-seq2seq, task-lexicon | 0 | LSTMEmbed: Learning Word and Sense Representations from a Large Semantically Annotated Corpus with Long Short-Term Memories | Ignacio Iacobacci, Roberto Navigli | https://www.aclweb.org/anthology/P19-1165.pdf
2019 | ACL | # optim-adam, reg-dropout, train-mtl, train-mll, pool-max, arch-rnn, arch-lstm, arch-bilstm, arch-cnn, arch-att, arch-subword, pre-word2vec, pre-elmo, pre-bert, struct-crf, task-textclass, task-seqlab, task-lm, task-relation | 3 | Reliability-aware Dynamic Feature Composition for Name Tagging | Ying Lin, Liyuan Liu, Heng Ji, Dong Yu, Jiawei Han | https://www.aclweb.org/anthology/P19-1016.pdf
2019 | ACL | # optim-adam, reg-dropout, arch-rnn, arch-lstm, arch-gru, arch-cnn, arch-att, arch-selfatt, arch-gating, arch-memo, arch-bilinear, pre-glove, pre-elmo, pre-bert, task-spanlab | 3 | Explore, Propose, and Assemble: An Interpretable Model for Multi-Hop Reading Comprehension | Yichen Jiang, Nitish Joshi, Yen-Chun Chen, Mohit Bansal | https://www.aclweb.org/anthology/P19-1261.pdf
2019 | ACL | # optim-adam, reg-dropout, reg-decay, reg-labelsmooth, train-mll, train-transfer, arch-att, arch-selfatt, arch-transformer, comb-ensemble, search-beam, pre-elmo, pre-bert, task-textclass, task-textpair, task-lm, task-seq2seq, task-cloze | 1 | A Simple and Effective Approach to Automatic Post-Editing with Transfer Learning | Gonçalo M. Correia, André F. T. Martins | https://www.aclweb.org/anthology/P19-1292.pdf
2019 | ACL | # optim-adam, reg-stopping, reg-patience, reg-decay, arch-att, arch-coverage, arch-transformer, comb-ensemble, pre-elmo, pre-bert, task-textpair, task-lm | 1 | GEAR: Graph-based Evidence Aggregating and Reasoning for Fact Verification | Jie Zhou, Xu Han, Cheng Yang, Zhiyuan Liu, Lifeng Wang, Changcheng Li, Maosong Sun | https://www.aclweb.org/anthology/P19-1085.pdf
2019 | ACL | # optim-adam, init-glorot, reg-dropout, norm-batch, pool-mean, arch-rnn, arch-lstm, arch-cnn, arch-att, arch-coverage, pre-elmo, pre-bert, latent-vae, latent-topic, task-lm | 1 | Open Domain Event Extraction Using Neural Latent Variable Models | Xiao Liu, Heyan Huang, Yue Zhang | https://www.aclweb.org/anthology/P19-1276.pdf
2019 | ACL | # optim-adam, optim-projection, arch-lstm, arch-bilstm, arch-subword, pre-glove, pre-elmo, task-textclass, task-lm, task-relation | 2 | Unsupervised Learning of PCFGs with Normalizing Flow | Lifeng Jin, Finale Doshi-Velez, Timothy Miller, Lane Schwartz, William Schuler | https://www.aclweb.org/anthology/P19-1234.pdf
2019 | ACL | # arch-att, arch-selfatt, pre-elmo, pre-bert | 0 | PTB Graph Parsing with Tree Approximation | Yoshihide Kato, Shigeki Matsubara | https://www.aclweb.org/anthology/P19-1530.pdf
2019 | ACL | # optim-adam, init-glorot, norm-layer, train-mtl, arch-rnn, arch-lstm, arch-cnn, arch-att, arch-selfatt, arch-transformer, pre-glove, pre-elmo, pre-bert, latent-topic, task-textclass, task-lm, task-seq2seq | 0 | Text Categorization by Learning Predominant Sense of Words as Auxiliary Task | Kazuya Shimura, Jiyi Li, Fumiyo Fukumoto | https://www.aclweb.org/anthology/P19-1105.pdf
2019 | ACL | # reg-decay, arch-lstm, arch-att, arch-selfatt, comb-ensemble, pre-glove, pre-elmo, pre-bert, task-textclass, task-textpair, task-lm, task-seq2seq, task-tree | 8 | Explain Yourself! Leveraging Language Models for Commonsense Reasoning | Nazneen Fatema Rajani, Bryan McCann, Caiming Xiong, Richard Socher | https://www.aclweb.org/anthology/P19-1487.pdf
2019 | ACL | # optim-adam, reg-dropout, reg-stopping, train-mtl, train-transfer, arch-lstm, pre-glove, pre-skipthought, pre-elmo, task-textclass, task-textpair, task-lm, task-seq2seq | 0 | Encouraging Paragraph Embeddings to Remember Sentence Identity Improves Classification | Tu Vu, Mohit Iyyer | https://www.aclweb.org/anthology/P19-1638.pdf
2019 | ACL | # optim-adam, reg-dropout, arch-lstm, arch-treelstm, pre-elmo, struct-crf, latent-vae, task-lm, task-tree | 0 | Latent Variable Sentiment Grammar | Liwen Zhang, Kewei Tu, Yue Zhang | https://www.aclweb.org/anthology/P19-1457.pdf
2019 | ACL | # optim-adam, optim-projection, reg-dropout, train-mll, arch-rnn, arch-lstm, arch-bilstm, arch-cnn, arch-att, arch-subword, arch-transformer, comb-ensemble, pre-fasttext, pre-glove, pre-elmo, pre-bert, struct-crf, adv-train, task-textclass, task-seqlab, task-lm, task-seq2seq | 1 | Sequence Tagging with Contextual and Non-Contextual Subword Representations: A Multilingual Evaluation | Benjamin Heinzerling, Michael Strube | https://www.aclweb.org/anthology/P19-1027.pdf
2019 | ACL | # init-glorot, arch-lstm, arch-att, arch-selfatt, arch-bilinear, pre-word2vec, pre-glove, pre-elmo, task-textclass, task-seqlab, task-spanlab, task-lm, task-seq2seq, task-relation | 5 | Incorporating Syntactic and Semantic Information in Word Embeddings using Graph Convolutional Networks | Shikhar Vashishth, Manik Bhandari, Prateek Yadav, Piyush Rai, Chiranjib Bhattacharyya, Partha Talukdar | https://www.aclweb.org/anthology/P19-1320.pdf
2019 | ACL | # arch-rnn, arch-lstm, arch-bilstm, arch-att, arch-memo, pre-elmo, loss-nce, task-seqlab | 0 | Implicit Discourse Relation Identification for Open-domain Dialogues | Mingyu Derek Ma, Kevin Bowden, Jiaqi Wu, Wen Cui, Marilyn Walker | https://www.aclweb.org/anthology/P19-1065.pdf
2019 | ACL | # optim-adam, arch-lstm, comb-ensemble, pre-fasttext, pre-glove, pre-elmo, pre-bert, adv-examp, task-textclass, task-textpair, task-lm | 14 | HellaSwag: Can a Machine Really Finish Your Sentence? | Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, Yejin Choi | https://www.aclweb.org/anthology/P19-1472.pdf
2019 | ACL | # arch-rnn, arch-lstm, arch-att, arch-transformer, pre-elmo, pre-bert, latent-vae, task-lm, task-seq2seq | 1 | Coreference Resolution with Entity Equalization | Ben Kantor, Amir Globerson | https://www.aclweb.org/anthology/P19-1066.pdf
2019 | ACL | # optim-adam, reg-dropout, pool-max, arch-rnn, arch-lstm, arch-bilstm, arch-att, arch-memo, pre-glove, pre-elmo, task-textpair, task-seq2seq, task-relation, task-tree | 0 | An Empirical Study of Span Representations in Argumentation Structure Parsing | Tatsuki Kuribayashi, Hiroki Ouchi, Naoya Inoue, Paul Reisert, Toshinori Miyoshi, Jun Suzuki, Kentaro Inui | https://www.aclweb.org/anthology/P19-1464.pdf
2019 | ACL | # optim-sgd, optim-adam, reg-dropout, train-mtl, train-transfer, arch-lstm, arch-bilstm, arch-gru, arch-att, arch-selfatt, arch-transformer, pre-elmo, pre-bert, task-textclass, task-textpair, task-spanlab, task-lm, task-cloze | 116 | Multi-Task Deep Neural Networks for Natural Language Understanding | Xiaodong Liu, Pengcheng He, Weizhu Chen, Jianfeng Gao | https://www.aclweb.org/anthology/P19-1441.pdf
2019 | ACL | # train-transfer, arch-lstm, arch-bilstm, arch-att, pre-elmo, pre-bert, latent-topic, task-lm | 2 | Diachronic Sense Modeling with Deep Contextualized Word Embeddings: An Ecological View | Renfen Hu, Shen Li, Shichen Liang | https://www.aclweb.org/anthology/P19-1379.pdf
2019 | ACL | # optim-sgd, optim-adam, train-mll, train-transfer, pool-max, arch-rnn, arch-lstm, arch-att, arch-selfatt, arch-subword, pre-word2vec, pre-fasttext, pre-glove, pre-skipthought, pre-elmo, pre-bert, loss-svd, task-textclass, task-lm | 0 | Self-Attentive, Multi-Context One-Class Classification for Unsupervised Anomaly Detection on Text | Lukas Ruff, Yury Zemlyanskiy, Robert Vandermeulen, Thomas Schnake, Marius Kloft | https://www.aclweb.org/anthology/P19-1398.pdf
2019 | ACL | # optim-adam, optim-adadelta, reg-dropout, pool-max, arch-rnn, arch-gru, arch-bigru, arch-att, arch-selfatt, arch-memo, search-beam, pre-glove, pre-elmo, pre-bert, task-textpair, task-spanlab, task-seq2seq | 4 | Multi-Hop Paragraph Retrieval for Open-Domain Question Answering | Yair Feldman, Ran El-Yaniv | https://www.aclweb.org/anthology/P19-1222.pdf
2019 | ACL | # optim-adam, reg-dropout, norm-layer, arch-rnn, arch-lstm, arch-bilstm, arch-gru, arch-att, arch-selfatt, arch-memo, pre-word2vec, pre-elmo, pre-bert, task-textclass | 4 | One Time of Interaction May Not Be Enough: Go Deep with an Interaction-over-Interaction Network for Response Selection in Dialogues | Chongyang Tao, Wei Wu, Can Xu, Wenpeng Hu, Dongyan Zhao, Rui Yan | https://www.aclweb.org/anthology/P19-1001.pdf
2019 | ACL | # optim-adam, train-mtl, train-mll, pool-max, arch-rnn, arch-lstm, arch-bilstm, arch-cnn, arch-att, arch-selfatt, arch-coverage, arch-subword, arch-transformer, pre-elmo, pre-bert, task-textclass, task-textpair, task-seqlab, task-lm, task-seq2seq, task-cloze | 27 | ERNIE: Enhanced Language Representation with Informative Entities | Zhengyan Zhang, Xu Han, Zhiyuan Liu, Xin Jiang, Maosong Sun, Qun Liu | https://www.aclweb.org/anthology/P19-1139.pdf
2019 | ACL | # optim-adam, reg-dropout, train-mtl, train-mll, train-transfer, train-active, arch-lstm, arch-bilstm, arch-att, arch-bilinear, comb-ensemble, search-viterbi, pre-elmo, task-seqlab, task-lm, task-relation, task-tree | 1 | Semi-supervised Domain Adaptation for Dependency Parsing | Zhenghua Li, Xue Peng, Min Zhang, Rui Wang, Luo Si | https://www.aclweb.org/anthology/P19-1229.pdf
2019 | ACL | # optim-adam, arch-rnn, arch-lstm, arch-transformer, pre-elmo, pre-bert, task-textpair, task-lm, task-cloze | 37 | BERT Rediscovers the Classical NLP Pipeline | Ian Tenney, Dipanjan Das, Ellie Pavlick | https://www.aclweb.org/anthology/P19-1452.pdf
2019 | ACL | # optim-adam, reg-dropout, norm-gradient, arch-lstm, arch-bilstm, arch-gru, arch-att, arch-memo, search-beam, pre-glove, pre-elmo, task-spanlab | 2 | Exploiting Explicit Paths for Multi-hop Reading Comprehension | Souvik Kundu, Tushar Khot, Ashish Sabharwal, Peter Clark | https://www.aclweb.org/anthology/P19-1263.pdf
2019 | ACL | # optim-adam, arch-rnn, arch-lstm, arch-coverage, comb-ensemble, search-viterbi, pre-elmo | 5 | Wide-Coverage Neural A* Parsing for Minimalist Grammars | John Torr, Milos Stanojevic, Mark Steedman, Shay B. Cohen | https://www.aclweb.org/anthology/P19-1238.pdf
2019 | ACL | # optim-adam, optim-projection, train-mtl, train-transfer, arch-lstm, arch-coverage, pre-word2vec, pre-glove, pre-elmo, pre-bert, task-textclass, task-lm, task-cloze | 0 | Topic Sensitive Attention on Generic Corpora Corrects Sense Bias in Pretrained Embeddings | Vihari Piratla, Sunita Sarawagi, Soumen Chakrabarti | https://www.aclweb.org/anthology/P19-1168.pdf
2019 | ACL | # optim-adam, reg-norm, train-mtl, train-mll, arch-lstm, arch-att, arch-selfatt, arch-coverage, arch-transformer, pre-glove, pre-skipthought, pre-elmo, pre-bert, loss-svd, task-textclass, task-textpair, task-lm | 0 | EigenSent: Spectral sentence embeddings using higher-order Dynamic Mode Decomposition | Subhradeep Kayal, George Tsatsaronis | https://www.aclweb.org/anthology/P19-1445.pdf
2019 | ACL | # arch-lstm, arch-bilstm, arch-att, arch-transformer, pre-word2vec, pre-fasttext, pre-glove, pre-elmo, pre-bert, task-textpair, task-lm, task-cloze | 3 | Classification and Clustering of Arguments with Contextualized Word Embeddings | Nils Reimers, Benjamin Schiller, Tilman Beck, Johannes Daxenberger, Christian Stab, Iryna Gurevych | https://www.aclweb.org/anthology/P19-1054.pdf
2019 | ACL | # optim-adam, arch-lstm, arch-att, arch-selfatt, arch-bilinear, arch-transformer, pre-elmo, pre-bert, task-seqlab, task-lm, task-seq2seq, task-relation, meta-arch | 76 | Energy and Policy Considerations for Deep Learning in NLP | Emma Strubell, Ananya Ganesh, Andrew McCallum | https://www.aclweb.org/anthology/P19-1355.pdf
2019 | ACL | # optim-adam, reg-dropout, train-transfer, arch-rnn, arch-lstm, arch-gru, arch-bigru, arch-att, arch-selfatt, arch-bilinear, pre-glove, pre-elmo, pre-bert, struct-crf, task-seq2seq, task-relation | 3 | A Unified Linear-Time Framework for Sentence-Level Discourse Parsing | Xiang Lin, Shafiq Joty, Prathyusha Jwalapuram, M Saiful Bari | https://www.aclweb.org/anthology/P19-1410.pdf
2019 | ACL | # optim-sgd, optim-adam, optim-projection, reg-dropout, train-mll, train-transfer, arch-lstm, arch-subword, pre-word2vec, pre-elmo, adv-train, loss-svd, task-seqlab, task-lm, task-seq2seq, task-relation, task-lexicon, task-alignment | 1 | Unsupervised Multilingual Word Embedding with Limited Resources using Neural Language Models | Takashi Wada, Tomoharu Iwata, Yuji Matsumoto |
https://www.aclweb.org/anthology/P19-1300.pdf |
2019 |
ACL |
# optim-adam, init-glorot, train-mtl, arch-lstm, arch-att, arch-selfatt, comb-ensemble, pre-elmo, struct-crf, task-seq2seq, task-relation, task-tree |
1 |
How to Best Use Syntax in Semantic Role Labelling |
Yufei Wang, Mark Johnson, Stephen Wan, Yifang Sun, Wei Wang |
https://www.aclweb.org/anthology/P19-1529.pdf |
2019 |
ACL |
# optim-adam, optim-projection, init-glorot, arch-lstm, arch-cnn, arch-att, arch-coverage, pre-glove, pre-elmo, task-lm, task-relation |
1 |
Revisiting Joint Modeling of Cross-document Entity and Event Coreference Resolution |
Shany Barhom, Vered Shwartz, Alon Eirew, Michael Bugert, Nils Reimers, Ido Dagan |
https://www.aclweb.org/anthology/P19-1409.pdf |
2019 |
EMNLP |
# optim-adam, train-mtl, train-mll, arch-lstm, arch-bilstm, arch-att, arch-subword, arch-transformer, comb-ensemble, pre-glove, pre-elmo, pre-bert, latent-vae, task-textclass, task-seq2seq |
0 |
Modelling the interplay of metaphor and emotion through multitask learning |
Verna Dankers, Marek Rei, Martha Lewis, Ekaterina Shutova |
https://www.aclweb.org/anthology/D19-1227.pdf |
2019 |
EMNLP |
# reg-dropout, arch-rnn, arch-lstm, arch-bilstm, arch-cnn, pre-glove, pre-elmo, pre-bert, struct-crf, task-seqlab, task-lm, task-seq2seq |
0 |
ner and pos when nothing is capitalized |
Stephen Mayhew, Tatiana Tsygankova, Dan Roth |
https://www.aclweb.org/anthology/D19-1650.pdf |
2019 |
EMNLP |
# optim-adam, arch-lstm, arch-bilstm, arch-att, arch-selfatt, pre-elmo, pre-bert, adv-examp, task-textclass, task-spanlab, task-lm, task-seq2seq, task-cloze |
0 |
AllenNLP Interpret: A Framework for Explaining Predictions of NLP Models |
Eric Wallace, Jens Tuyls, Junlin Wang, Sanjay Subramanian, Matt Gardner, Sameer Singh |
https://www.aclweb.org/anthology/D19-3002.pdf |
2019 |
EMNLP |
# optim-adam, pool-max, arch-rnn, arch-lstm, arch-bilstm, arch-cnn, search-beam, pre-elmo, task-extractive |
5 |
Neural Extractive Text Summarization with Syntactic Compression |
Jiacheng Xu, Greg Durrett |
https://www.aclweb.org/anthology/D19-1324.pdf |
2019 |
EMNLP |
# optim-adam, reg-dropout, arch-lstm, arch-bilstm, arch-cnn, arch-att, arch-selfatt, arch-bilinear, arch-coverage, comb-ensemble, pre-fasttext, pre-glove, pre-elmo, pre-bert, task-lm, task-relation, task-tree |
0 |
Syntax-aware Multilingual Semantic Role Labeling |
Shexia He, Zuchao Li, Hai Zhao |
https://www.aclweb.org/anthology/D19-1538.pdf |
2019 |
EMNLP |
# activ-relu, arch-lstm, arch-bilstm, arch-cnn, arch-att, arch-bilinear, comb-ensemble, pre-elmo, pre-bert, struct-crf, task-seqlab, task-relation, task-tree |
1 |
Dependency-Guided LSTM-CRF for Named Entity Recognition |
Zhanming Jie, Wei Lu |
https://www.aclweb.org/anthology/D19-1399.pdf |
2019 |
EMNLP |
# optim-adam, optim-projection, train-transfer, arch-rnn, arch-lstm, arch-gru, arch-att, arch-selfatt, pre-glove, pre-elmo, pre-bert, latent-vae, task-spanlab |
0 |
Answering questions by learning to rank - Learning to rank by answering questions |
George Sebastian Pirtoaca, Traian Rebedea, Stefan Ruseti |
https://www.aclweb.org/anthology/D19-1256.pdf |
2019 |
EMNLP |
# reg-dropout, pool-max, arch-lstm, arch-att, arch-transformer, comb-ensemble, pre-elmo, pre-bert, task-textpair, task-spanlab, task-lm, task-seq2seq |
0 |
Aggregating Bidirectional Encoder Representations Using MatchLSTM for Sequence Matching |
Bo Shao, Yeyun Gong, Weizhen Qi, Nan Duan, Xiaola Lin |
https://www.aclweb.org/anthology/D19-1626.pdf |
2019 |
EMNLP |
# reg-dropout, arch-lstm, arch-att, arch-selfatt, arch-coverage, pre-glove, pre-elmo, pre-bert, loss-svd, task-lm |
0 |
Detect Camouflaged Spam Content via StoneSkipping: Graph and Text Joint Embedding for Chinese Character Variation Representation |
Zhuoren Jiang, Zhe Gao, Guoxiu He, Yangyang Kang, Changlong Sun, Qiong Zhang, Luo Si, Xiaozhong Liu |
https://www.aclweb.org/anthology/D19-1640.pdf |
2019 |
EMNLP |
# init-glorot, reg-dropout, reg-stopping, norm-gradient, train-transfer, train-augment, train-parallel, arch-rnn, arch-att, arch-selfatt, arch-copy, arch-subword, arch-transformer, comb-ensemble, search-beam, pre-elmo, pre-bert, task-textclass, task-textpair, task-spanlab, task-lm, task-seq2seq |
2 |
Denoising based Sequence-to-Sequence Pre-training for Text Generation |
Liang Wang, Wei Zhao, Ruoyu Jia, Sujian Li, Jingming Liu |
https://www.aclweb.org/anthology/D19-1412.pdf |
2019 |
EMNLP |
# optim-adam, reg-dropout, arch-rnn, arch-lstm, arch-bilstm, arch-gru, arch-bigru, arch-att, arch-bilinear, pre-word2vec, pre-elmo, task-seq2seq, task-relation |
0 |
Hierarchical Pointer Net Parsing |
Linlin Liu, Xiang Lin, Shafiq Joty, Simeng Han, Lidong Bing |
https://www.aclweb.org/anthology/D19-1093.pdf |
2019 |
EMNLP |
# optim-adam, reg-dropout, reg-stopping, reg-norm, reg-decay, arch-lstm, arch-bilstm, arch-bilinear, pre-word2vec, pre-glove, pre-elmo, pre-bert, task-lm, task-seq2seq, task-relation |
1 |
Designing and Interpreting Probes with Control Tasks |
John Hewitt, Percy Liang |
https://www.aclweb.org/anthology/D19-1275.pdf |
2019 |
EMNLP |
# optim-projection, train-transfer, arch-rnn, arch-lstm, pre-elmo, pre-bert, adv-train, task-textclass, task-seqlab, task-lm, task-cloze |
4 |
Unsupervised Domain Adaptation of Contextualized Embeddings for Sequence Labeling |
Xiaochuang Han, Jacob Eisenstein |
https://www.aclweb.org/anthology/D19-1433.pdf |
2019 |
EMNLP |
# optim-adam, reg-decay, train-transfer, arch-rnn, arch-att, arch-selfatt, arch-memo, arch-subword, arch-transformer, pre-word2vec, pre-glove, pre-elmo, pre-bert, loss-nce, task-textpair, task-alignment |
0 |
A Gated Self-attention Memory Network for Answer Selection |
Tuan Lai, Quan Hung Tran, Trung Bui, Daisuke Kihara |
https://www.aclweb.org/anthology/D19-1610.pdf |
2019 |
EMNLP |
# optim-sgd, optim-adam, train-mll, arch-rnn, arch-att, arch-bilinear, arch-subword, comb-ensemble, pre-fasttext, pre-elmo, pre-bert, loss-svd, task-lm, task-seq2seq, task-relation, task-alignment |
3 |
Cross-Lingual BERT Transformation for Zero-Shot Dependency Parsing |
Yuxuan Wang, Wanxiang Che, Jiang Guo, Yijia Liu, Ting Liu |
https://www.aclweb.org/anthology/D19-1575.pdf |
2019 |
EMNLP |
# optim-adam, init-glorot, reg-dropout, arch-lstm, arch-cnn, arch-subword, pre-word2vec, pre-fasttext, pre-glove, pre-elmo, pre-bert |
0 |
An Improved Neural Baseline for Temporal Relation Extraction |
Qiang Ning, Sanjay Subramanian, Dan Roth |
https://www.aclweb.org/anthology/D19-1642.pdf |
2019 |
EMNLP |
# optim-adam, train-mll, train-augment, pool-max, arch-lstm, arch-bilstm, arch-cnn, arch-att, arch-selfatt, pre-word2vec, pre-glove, pre-elmo, pre-bert, task-textpair, task-spanlab, task-lm, task-tree |
0 |
Do NLP Models Know Numbers? Probing Numeracy in Embeddings |
Eric Wallace, Yizhong Wang, Sujian Li, Sameer Singh, Matt Gardner |
https://www.aclweb.org/anthology/D19-1534.pdf |
2019 |
EMNLP |
# optim-adam, init-glorot, reg-dropout, arch-lstm, arch-bilstm, arch-att, arch-bilinear, search-viterbi, pre-glove, pre-elmo, struct-crf, task-seqlab, task-condlm |
0 |
Phrase Grounding by Soft-Label Chain Conditional Random Field |
Jiacheng Liu, Julia Hockenmaier |
https://www.aclweb.org/anthology/D19-1515.pdf |
2019 |
EMNLP |
# reg-stopping, arch-lstm, arch-bilstm, arch-att, arch-coverage, pre-glove, pre-elmo, pre-bert, task-textclass, task-textpair, task-spanlab, task-lm, meta-arch |
1 |
Show Your Work: Improved Reporting of Experimental Results |
Jesse Dodge, Suchin Gururangan, Dallas Card, Roy Schwartz, Noah A. Smith |
https://www.aclweb.org/anthology/D19-1224.pdf |
2019 |
EMNLP |
# optim-adam, reg-dropout, train-mtl, train-mll, train-transfer, arch-lstm, arch-att, arch-coverage, arch-subword, pre-elmo, pre-bert, task-textpair, task-lm, task-seq2seq, task-cloze, task-alignment |
4 |
Unicoder: A Universal Language Encoder by Pre-training with Multiple Cross-lingual Tasks |
Haoyang Huang, Yaobo Liang, Nan Duan, Ming Gong, Linjun Shou, Daxin Jiang, Ming Zhou |
https://www.aclweb.org/anthology/D19-1252.pdf |
2019 |
EMNLP |
# optim-adam, arch-transformer, pre-elmo, adv-examp, task-textclass, task-textpair, task-spanlab |
0 |
Evaluating adversarial attacks against multiple fact verification systems |
James Thorne, Andreas Vlachos, Christos Christodoulopoulos, Arpit Mittal |
https://www.aclweb.org/anthology/D19-1292.pdf |
2019 |
EMNLP |
# optim-adam, init-glorot, reg-dropout, reg-decay, train-mll, train-transfer, pool-max, arch-lstm, arch-att, arch-selfatt, arch-bilinear, arch-subword, arch-transformer, pre-elmo, pre-bert, struct-crf, task-textclass, task-textpair, task-seqlab, task-spanlab, task-lm, task-seq2seq, task-cloze, task-relation |
22 |
Beto, Bentz, Becas: The Surprising Cross-Lingual Effectiveness of BERT |
Shijie Wu, Mark Dredze |
https://www.aclweb.org/anthology/D19-1077.pdf |
2019 |
EMNLP |
# optim-adam, reg-dropout, arch-lstm, arch-att, pre-elmo, pre-bert, task-textpair, task-seqlab, task-lm |
0 |
Entity, Relation, and Event Extraction with Contextualized Span Representations |
David Wadden, Ulme Wennberg, Yi Luan, Hannaneh Hajishirzi |
https://www.aclweb.org/anthology/D19-1585.pdf |
2019 |
EMNLP |
# reg-dropout, arch-rnn, arch-birnn, arch-lstm, arch-bilstm, arch-gru, arch-att, arch-selfatt, arch-transformer, pre-glove, pre-elmo, pre-bert, task-textclass, task-lm |
0 |
DENS: A Dataset for Multi-class Emotion Analysis |
Chen Liu, Muhammad Osama, Anderson De Andrade |
https://www.aclweb.org/anthology/D19-1656.pdf |
2019 |
EMNLP |
# optim-sgd, reg-dropout, pool-max, arch-rnn, arch-lstm, arch-bilstm, arch-gru, arch-att, arch-subword, search-viterbi, pre-word2vec, pre-glove, pre-elmo, struct-crf, latent-vae, task-textpair, task-lm, task-seq2seq |
0 |
A Regularization Approach for Incorporating Event Knowledge and Coreference Relations into Neural Discourse Parsing |
Zeyu Dai, Ruihong Huang |
https://www.aclweb.org/anthology/D19-1295.pdf |
2019 |
EMNLP |
# optim-adam, optim-projection, reg-dropout, train-mtl, train-mll, arch-rnn, arch-lstm, arch-bilstm, arch-cnn, arch-att, arch-selfatt, arch-bilinear, arch-subword, comb-ensemble, pre-word2vec, pre-fasttext, pre-elmo, struct-crf, task-lm, task-seq2seq, task-relation, task-tree |
0 |
Semi-Supervised Semantic Role Labeling with Cross-View Training |
Rui Cai, Mirella Lapata |
https://www.aclweb.org/anthology/D19-1094.pdf |
2019 |
EMNLP |
# pre-glove, pre-skipthought, pre-elmo, task-seq2seq, task-tree |
0 |
Split or Merge: Which is Better for Unsupervised RST Parsing? |
Naoki Kobayashi, Tsutomu Hirao, Kengo Nakamura, Hidetaka Kamigaito, Manabu Okumura, Masaaki Nagata |
https://www.aclweb.org/anthology/D19-1587.pdf |
2019 |
EMNLP |
# reg-dropout, arch-lstm, arch-bilstm, arch-att, arch-coverage, pre-glove, pre-elmo, pre-bert, latent-vae, task-textpair, task-spanlab, task-seq2seq, task-tree, task-graph |
0 |
Don’t paraphrase, detect! Rapid and Effective Data Collection for Semantic Parsing |
Jonathan Herzig, Jonathan Berant |
https://www.aclweb.org/anthology/D19-1394.pdf |
2019 |
EMNLP |
# optim-adam, optim-projection, norm-layer, train-mtl, arch-lstm, arch-bilstm, arch-cnn, arch-att, arch-selfatt, arch-coverage, pre-word2vec, pre-elmo, pre-bert, struct-crf, task-seqlab, task-lm, task-seq2seq |
0 |
Improved Word Sense Disambiguation Using Pre-Trained Contextualized Word Representations |
Christian Hadiwinoto, Hwee Tou Ng, Wee Chung Gan |
https://www.aclweb.org/anthology/D19-1533.pdf |
2019 |
EMNLP |
# arch-lstm, arch-att, pre-glove, pre-elmo, pre-bert, task-textpair, task-spanlab, task-lm, task-tree |
1 |
“Going on a vacation” takes longer than “Going for a walk”: A Study of Temporal Commonsense Understanding |
Ben Zhou, Daniel Khashabi, Qiang Ning, Dan Roth |
https://www.aclweb.org/anthology/D19-1332.pdf |
2019 |
EMNLP |
# norm-batch, arch-rnn, arch-lstm, arch-att, arch-transformer, pre-fasttext, pre-elmo, pre-bert, nondif-gumbelsoftmax, task-textclass, task-seq2seq, task-relation, task-lexicon |
0 |
What Does This Word Mean? Explaining Contextualized Embeddings with Natural Language Definition |
Ting-Yun Chang, Yun-Nung Chen |
https://www.aclweb.org/anthology/D19-1627.pdf |
2019 |
EMNLP |
# optim-adam, train-mtl, arch-rnn, arch-att, arch-selfatt, arch-memo, arch-copy, arch-transformer, comb-ensemble, pre-elmo, pre-bert, task-spanlab, task-lm, task-seq2seq, task-relation, task-tree |
0 |
Discourse-Aware Semantic Self-Attention for Narrative Reading Comprehension |
Todor Mihaylov, Anette Frank |
https://www.aclweb.org/anthology/D19-1257.pdf |
2019 |
EMNLP |
# optim-adam, train-mtl, arch-lstm, arch-bilstm, arch-att, arch-selfatt, arch-bilinear, comb-ensemble, pre-fasttext, pre-elmo, latent-vae, task-textclass, task-seq2seq, task-tree |
1 |
Capturing Argument Interaction in Semantic Role Labeling with Capsule Networks |
Xinchi Chen, Chunchuan Lyu, Ivan Titov |
https://www.aclweb.org/anthology/D19-1544.pdf |
2019 |
EMNLP |
# optim-sgd, reg-dropout, pool-mean, arch-lstm, arch-bilstm, arch-att, arch-coverage, arch-subword, pre-fasttext, pre-glove, pre-elmo, pre-bert, task-textpair, task-lm |
1 |
Incorporating Domain Knowledge into Medical NLI using Knowledge Graphs |
Soumya Sharma, Bishal Santra, Abhik Jana, Santosh Tokala, Niloy Ganguly, Pawan Goyal |
https://www.aclweb.org/anthology/D19-1631.pdf |
2019 |
EMNLP |
# optim-adam, arch-lstm, arch-transformer, pre-fasttext, pre-glove, pre-elmo, pre-bert, task-lm |
0 |
How Contextual are Contextualized Word Representations? Comparing the Geometry of BERT, ELMo, and GPT-2 Embeddings |
Kawin Ethayarajh |
https://www.aclweb.org/anthology/D19-1006.pdf |
2019 |
EMNLP |
# optim-adam, arch-lstm, arch-bilstm, arch-att, arch-coverage, pre-glove, pre-elmo, task-condlm, task-seq2seq |
0 |
What You See is What You Get: Visual Pronoun Coreference Resolution in Dialogues |
Xintong Yu, Hongming Zhang, Yangqiu Song, Yan Song, Changshui Zhang |
https://www.aclweb.org/anthology/D19-1516.pdf |
2019 |
EMNLP |
# optim-adam, optim-projection, norm-layer, train-mtl, train-mll, arch-subword, pre-word2vec, pre-fasttext, pre-glove, pre-skipthought, pre-elmo, loss-svd, task-textpair, task-seq2seq |
0 |
Parameter-free Sentence Embedding via Orthogonal Basis |
Ziyi Yang, Chenguang Zhu, Weizhu Chen |
https://www.aclweb.org/anthology/D19-1059.pdf |
2019 |
EMNLP |
# optim-adam, train-mtl, train-active, train-augment, arch-rnn, arch-lstm, arch-cnn, arch-att, arch-selfatt, pre-elmo, pre-bert, latent-vae, task-textclass, task-lm, task-seq2seq |
19 |
EDA: Easy Data Augmentation Techniques for Boosting Performance on Text Classification Tasks |
Jason Wei, Kai Zou |
https://www.aclweb.org/anthology/D19-1670.pdf |
2019 |
EMNLP |
# optim-adam, init-glorot, reg-dropout, train-mtl, train-mll, train-transfer, train-augment, arch-lstm, arch-bilstm, arch-gru, arch-att, arch-selfatt, arch-copy, pre-glove, pre-elmo, pre-bert, adv-train, task-spanlab, task-lm, task-seq2seq |
2 |
Adversarial Domain Adaptation for Machine Reading Comprehension |
Huazheng Wang, Zhe Gan, Xiaodong Liu, Jingjing Liu, Jianfeng Gao, Hongning Wang |
https://www.aclweb.org/anthology/D19-1254.pdf |
2019 |
EMNLP |
# reg-stopping, train-mtl, train-active, arch-rnn, arch-lstm, arch-bilstm, arch-cnn, arch-att, search-viterbi, pre-glove, pre-elmo, pre-bert, struct-hmm, struct-crf, task-seqlab, task-lm, task-seq2seq |
0 |
Similarity Based Auxiliary Classifier for Named Entity Recognition |
Shiyuan Xiao, Yuanxin Ouyang, Wenge Rong, Jianxin Yang, Zhang Xiong |
https://www.aclweb.org/anthology/D19-1105.pdf |
2019 |
EMNLP |
# optim-adam, init-glorot, arch-rnn, arch-gru, arch-att, pre-word2vec, pre-elmo, task-condlm |
2 |
Integrating Text and Image: Determining Multimodal Document Intent in Instagram Posts |
Julia Kruk, Jonah Lubin, Karan Sikka, Xiao Lin, Dan Jurafsky, Ajay Divakaran |
https://www.aclweb.org/anthology/D19-1469.pdf |
2019 |
EMNLP |
# reg-dropout, train-mtl, arch-lstm, arch-bilstm, arch-att, arch-selfatt, arch-memo, arch-coverage, arch-transformer, pre-elmo, pre-bert, task-lm |
1 |
GlossBERT: BERT for Word Sense Disambiguation with Gloss Knowledge |
Luyao Huang, Chi Sun, Xipeng Qiu, Xuanjing Huang |
https://www.aclweb.org/anthology/D19-1355.pdf |
2019 |
EMNLP |
# optim-sgd, pre-word2vec, pre-glove, pre-elmo, pre-bert, task-textpair, task-lm, task-seq2seq, task-relation |
0 |
Multiplex Word Embeddings for Selectional Preference Acquisition |
Hongming Zhang, Jiaxin Bai, Yan Song, Kun Xu, Changlong Yu, Yangqiu Song, Wilfred Ng, Dong Yu |
https://www.aclweb.org/anthology/D19-1528.pdf |
2019 |
EMNLP |
# optim-adam, reg-stopping, arch-lstm, arch-bilstm, arch-cnn, arch-att, arch-coverage, pre-elmo |
0 |
Rewarding Coreference Resolvers for Being Consistent with World Knowledge |
Rahul Aralikatte, Heather Lent, Ana Valeria Gonzalez, Daniel Herschcovich, Chen Qiu, Anders Sandholm, Michael Ringaard, Anders Søgaard |
https://www.aclweb.org/anthology/D19-1118.pdf |
2019 |
EMNLP |
# optim-adam, train-transfer, arch-rnn, arch-lstm, arch-gru, arch-att, arch-copy, arch-coverage, comb-ensemble, pre-glove, pre-elmo, pre-bert, latent-vae, task-seq2seq, task-tree |
0 |
Data-Efficient Goal-Oriented Conversation with Dialogue Knowledge Transfer Networks |
Igor Shalyminov, Sungjin Lee, Arash Eshghi, Oliver Lemon |
https://www.aclweb.org/anthology/D19-1183.pdf |
2019 |
EMNLP |
# arch-lstm, arch-bilstm, arch-att, arch-coverage, pre-word2vec, pre-elmo, pre-use, task-textclass, task-textpair, task-seq2seq |
0 |
The Feasibility of Embedding Based Automatic Evaluation for Single Document Summarization |
Simeng Sun, Ani Nenkova |
https://www.aclweb.org/anthology/D19-1116.pdf |
2019 |
EMNLP |
# optim-adam, optim-projection, train-mll, arch-rnn, arch-lstm, arch-bilstm, arch-att, arch-selfatt, pre-fasttext, pre-elmo, pre-bert, struct-crf, task-spanlab, task-lm, task-seq2seq, task-cloze |
0 |
Learning with Limited Data for Multilingual Reading Comprehension |
Kyungjae Lee, Sunghyun Park, Hojae Han, Jinyoung Yeo, Seung-won Hwang, Juho Lee |
https://www.aclweb.org/anthology/D19-1283.pdf |
2019 |
EMNLP |
# optim-adam, reg-dropout, pool-max, arch-lstm, arch-bilstm, arch-att, arch-memo, arch-coverage, pre-elmo, pre-bert, task-relation |
0 |
MICRON: Multigranular Interaction for Contextualizing RepresentatiON in Non-factoid Question Answering |
Hojae Han, Seungtaek Choi, Haeju Park, Seung-won Hwang |
https://www.aclweb.org/anthology/D19-1601.pdf |
2019 |
EMNLP |
# arch-lstm, arch-bilstm, arch-att, arch-bilinear, arch-coverage, arch-subword, pre-word2vec, pre-fasttext, pre-glove, pre-elmo, pre-bert, latent-topic, task-textclass, task-lm, task-seq2seq |
0 |
Game Theory Meets Embeddings: a Unified Framework for Word Sense Disambiguation |
Rocco Tripodi, Roberto Navigli |
https://www.aclweb.org/anthology/D19-1009.pdf |
2019 |
EMNLP |
# optim-sgd, optim-adam, optim-projection, reg-dropout, norm-layer, train-mll, arch-lstm, arch-att, arch-selfatt, arch-coverage, arch-subword, pre-elmo, pre-bert, struct-crf, loss-triplet, task-seqlab, task-lm, task-seq2seq, task-context |
14 |
Cloze-driven Pretraining of Self-attention Networks |
Alexei Baevski, Sergey Edunov, Yinhan Liu, Luke Zettlemoyer, Michael Auli |
https://www.aclweb.org/anthology/D19-1539.pdf |
2019 |
EMNLP |
# optim-adam, optim-projection, train-mtl, train-mll, arch-lstm, arch-cnn, arch-att, arch-selfatt, arch-subword, arch-transformer, pre-glove, pre-elmo, pre-bert, latent-vae, task-spanlab, task-lm, task-seq2seq |
2 |
Knowledge Enhanced Contextual Word Representations |
Matthew E. Peters, Mark Neumann, Robert Logan, Roy Schwartz, Vidur Joshi, Sameer Singh, Noah A. Smith |
https://www.aclweb.org/anthology/D19-1005.pdf |
2019 |
EMNLP |
# optim-adam, optim-adadelta, optim-projection, reg-dropout, train-augment, pool-max, arch-rnn, arch-lstm, arch-gru, arch-att, arch-selfatt, arch-transformer, search-greedy, search-beam, pre-elmo, pre-bert, nondif-reinforce, task-textpair, task-seqlab, task-spanlab, task-lm, task-condlm, task-seq2seq |
0 |
Addressing Semantic Drift in Question Generation for Semi-Supervised Question Answering |
Shiyue Zhang, Mohit Bansal |
https://www.aclweb.org/anthology/D19-1253.pdf |
2019 |
EMNLP |
# optim-adam, train-transfer, arch-lstm, arch-att, arch-selfatt, comb-ensemble, pre-glove, pre-skipthought, pre-elmo, loss-nce, task-textclass, task-lm, task-cloze |
0 |
Multi-Granularity Representations of Dialog |
Shikib Mehri, Maxine Eskenazi |
https://www.aclweb.org/anthology/D19-1184.pdf |
2019 |
EMNLP |
# train-mll, arch-lstm, arch-cnn, arch-att, arch-memo, pre-glove, pre-elmo, pre-bert, task-textclass, task-textpair, task-lm, task-seq2seq |
1 |
EntEval: A Holistic Evaluation Benchmark for Entity Representations |
Mingda Chen, Zewei Chu, Yang Chen, Karl Stratos, Kevin Gimpel |
https://www.aclweb.org/anthology/D19-1040.pdf |
2019 |
EMNLP |
# train-mll, arch-lstm, arch-bilstm, arch-att, arch-transformer, pre-glove, pre-elmo, pre-bert, task-seq2seq |
2 |
Evaluating Pronominal Anaphora in Machine Translation: An Evaluation Measure and a Test Suite |
Prathyusha Jwalapuram, Shafiq Joty, Irina Temnikova, Preslav Nakov |
https://www.aclweb.org/anthology/D19-1294.pdf |
2019 |
EMNLP |
# optim-adam, arch-lstm, arch-att, arch-selfatt, arch-bilinear, arch-subword, arch-transformer, search-viterbi, pre-elmo, pre-bert, struct-crf, loss-nce, task-seqlab, task-lm |
0 |
Effective Use of Transformer Networks for Entity Tracking |
Aditya Gupta, Greg Durrett |
https://www.aclweb.org/anthology/D19-1070.pdf |
2019 |
EMNLP |
# arch-lstm, arch-bilstm, arch-att, comb-ensemble, pre-word2vec, pre-elmo, pre-bert, task-textpair, task-spanlab, task-lm |
0 |
PubMedQA: A Dataset for Biomedical Research Question Answering |
Qiao Jin, Bhuwan Dhingra, Zhengping Liu, William Cohen, Xinghua Lu |
https://www.aclweb.org/anthology/D19-1259.pdf |
2019 |
EMNLP |
# optim-adam, optim-projection, init-glorot, train-mll, train-active, train-augment, arch-lstm, arch-bilstm, arch-att, arch-selfatt, arch-copy, pre-glove, pre-elmo, pre-bert, task-seq2seq, task-tree, task-graph, task-alignment |
0 |
Translate and Label! An Encoder-Decoder Approach for Cross-lingual Semantic Role Labeling |
Angel Daza, Anette Frank |
https://www.aclweb.org/anthology/D19-1056.pdf |
2019 |
EMNLP |
# optim-adam, norm-layer, train-mll, train-augment, arch-lstm, arch-gru, arch-att, arch-selfatt, arch-bilinear, arch-transformer, pre-elmo, pre-bert, task-lm, task-condlm, task-seq2seq |
19 |
LXMERT: Learning Cross-Modality Encoder Representations from Transformers |
Hao Tan, Mohit Bansal |
https://www.aclweb.org/anthology/D19-1514.pdf |
2019 |
EMNLP |
# optim-adam, norm-layer, arch-lstm, arch-bilstm, arch-att, arch-selfatt, arch-transformer, pre-word2vec, pre-elmo, pre-bert, task-textclass, task-textpair, task-extractive, task-spanlab, task-lm, task-seq2seq, task-cloze |
0 |
Fine-tune BERT with Sparse Self-Attention Mechanism |
Baiyun Cui, Yingming Li, Ming Chen, Zhongfei Zhang |
https://www.aclweb.org/anthology/D19-1361.pdf |
2019 |
EMNLP |
# optim-adam, reg-dropout, train-transfer, arch-rnn, arch-lstm, arch-gru, arch-att, arch-subword, pre-fasttext, pre-glove, pre-elmo, pre-bert, loss-nce, task-textclass, task-textpair, task-lm, task-cloze |
0 |
Multi-label Categorization of Accounts of Sexism using a Neural Framework |
Pulkit Parikh, Harika Abburi, Pinkesh Badjatiya, Radhika Krishnan, Niyati Chhaya, Manish Gupta, Vasudeva Varma |
https://www.aclweb.org/anthology/D19-1174.pdf |
2019 |
EMNLP |
# optim-adam, train-mtl, train-mll, pool-max, arch-subword, arch-transformer, pre-word2vec, pre-fasttext, pre-glove, pre-elmo, pre-bert, latent-vae, task-textclass, task-textpair, task-seq2seq |
0 |
Correlations between Word Vector Sets |
Vitalii Zhelezniak, April Shen, Daniel Busbridge, Aleksandar Savkov, Nils Hammerla |
https://www.aclweb.org/anthology/D19-1008.pdf |
2019 |
EMNLP |
# optim-adam, reg-patience, arch-lstm, arch-bilstm, arch-att, arch-selfatt, arch-transformer, pre-elmo, pre-bert, task-textpair, task-spanlab, task-lm |
10 |
Language Models as Knowledge Bases? |
Fabio Petroni, Tim Rocktäschel, Sebastian Riedel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, Alexander Miller |
https://www.aclweb.org/anthology/D19-1250.pdf |
2019 |
EMNLP |
# optim-projection, train-mtl, arch-cnn, arch-att, arch-coverage, arch-subword, arch-transformer, comb-ensemble, pre-word2vec, pre-fasttext, pre-glove, pre-skipthought, pre-elmo, pre-bert, task-textclass, task-textpair, task-spanlab, task-lm, task-cloze |
9 |
Patient Knowledge Distillation for BERT Model Compression |
Siqi Sun, Yu Cheng, Zhe Gan, Jingjing Liu |
https://www.aclweb.org/anthology/D19-1441.pdf |
2019 |
EMNLP |
# optim-adam, init-glorot, train-mtl, train-transfer, arch-lstm, arch-bilstm, arch-att, pre-elmo, pre-bert, adv-train, task-textclass, task-lm, task-seq2seq, task-tree |
0 |
Transductive Learning of Neural Language Models for Syntactic and Semantic Analysis |
Hiroki Ouchi, Jun Suzuki, Kentaro Inui |
https://www.aclweb.org/anthology/D19-1379.pdf |
2019 |
EMNLP |
# train-active, arch-lstm, arch-bilstm, arch-cnn, pre-elmo, pre-bert, struct-crf, task-seqlab, task-lm |
1 |
CrossWeigh: Training Named Entity Tagger from Imperfect Annotations |
Zihan Wang, Jingbo Shang, Liyuan Liu, Lihao Lu, Jiacheng Liu, Jiawei Han |
https://www.aclweb.org/anthology/D19-1519.pdf |
2019 |
EMNLP |
# optim-sgd, optim-adam, reg-dropout, train-transfer, arch-rnn, arch-lstm, arch-att, pre-glove, pre-skipthought, pre-elmo, pre-use, adv-train, task-textpair, task-seqlab, task-lm, task-seq2seq |
0 |
IMaT: Unsupervised Text Attribute Transfer via Iterative Matching and Translation |
Zhijing Jin, Di Jin, Jonas Mueller, Nicholas Matthews, Enrico Santus |
https://www.aclweb.org/anthology/D19-1306.pdf |
2019 |
EMNLP |
# optim-adam, reg-dropout, reg-decay, arch-att, pre-elmo, loss-svd |
1 |
An Attentive Fine-Grained Entity Typing Model with Latent Type Representation |
Ying Lin, Heng Ji |
https://www.aclweb.org/anthology/D19-1641.pdf |
2019 |
EMNLP |
# reg-stopping, train-mll, train-transfer, arch-transformer, pre-elmo, pre-bert, adv-train, task-textclass, task-lm, task-seq2seq |
0 |
A Robust Self-Learning Framework for Cross-Lingual Text Classification |
Xin Dong, Gerard de Melo |
https://www.aclweb.org/anthology/D19-1658.pdf |
2019 |
EMNLP |
# optim-adam, arch-rnn, arch-lstm, arch-bilstm, arch-att, arch-coverage, pre-glove, pre-elmo, pre-bert, latent-vae, task-textclass, task-textpair, task-spanlab, task-lm |
0 |
Quick and (not so) Dirty: Unsupervised Selection of Justification Sentences for Multi-hop Question Answering |
Vikas Yadav, Steven Bethard, Mihai Surdeanu |
https://www.aclweb.org/anthology/D19-1260.pdf |
2019 |
EMNLP |
# reg-dropout, pool-max, pre-word2vec, pre-skipthought, pre-elmo, pre-bert, task-textclass, task-textpair |
1 |
Efficient Sentence Embedding using Discrete Cosine Transform |
Nada Almarwani, Hanan Aldarmaki, Mona Diab |
https://www.aclweb.org/anthology/D19-1380.pdf |
2019 |
EMNLP |
# train-transfer, arch-att, arch-transformer, pre-elmo, pre-bert, task-textclass, task-lm |
0 |
Pre-Training BERT on Domain Resources for Short Answer Grading |
Chul Sung, Tejas Dhamecha, Swarnadeep Saha, Tengfei Ma, Vinay Reddy, Rishi Arora |
https://www.aclweb.org/anthology/D19-1628.pdf |
2019 |
EMNLP |
# optim-adam, init-glorot, train-mll, arch-lstm, arch-bilstm, arch-att, arch-selfatt, arch-bilinear, arch-subword, arch-transformer, comb-ensemble, search-greedy, search-beam, pre-word2vec, pre-fasttext, pre-glove, pre-elmo, pre-bert, task-lm, task-seq2seq, task-relation |
0 |
Deep Contextualized Word Embeddings in Transition-Based and Graph-Based Dependency Parsing - A Tale of Two Parsers Revisited |
Artur Kulmizev, Miryam de Lhoneux, Johannes Gontrum, Elena Fano, Joakim Nivre |
https://www.aclweb.org/anthology/D19-1277.pdf |
2019 |
EMNLP |
# optim-adam, reg-stopping, reg-patience, train-transfer, train-active, arch-lstm, arch-bilstm, comb-ensemble, pre-glove, pre-elmo, adv-examp, adv-train, task-lm |
0 |
To Annotate or Not? Predicting Performance Drop under Domain Shift |
Hady Elsahar, Matthias Gallé |
https://www.aclweb.org/anthology/D19-1222.pdf |
2019 |
EMNLP |
# optim-adam, reg-dropout, train-mtl, train-transfer, pool-max, arch-rnn, arch-att, arch-selfatt, arch-coverage, arch-transformer, pre-skipthought, pre-elmo, pre-bert, task-textpair, task-spanlab, task-lm, task-seq2seq, task-cloze |
0 |
Transfer Fine-Tuning: A BERT Case Study |
Yuki Arase, Jun’ichi Tsujii |
https://www.aclweb.org/anthology/D19-1542.pdf |
2019 |
EMNLP |
# optim-adam, reg-dropout, reg-stopping, train-mtl, train-transfer, arch-lstm, arch-bilstm, arch-att, arch-coverage, comb-ensemble, pre-fasttext, pre-elmo, pre-bert, task-textclass |
0 |
Label Embedding using Hierarchical Structure of Labels for Twitter Classification |
Taro Miyazaki, Kiminobu Makino, Yuka Takei, Hiroki Okamoto, Jun Goto |
https://www.aclweb.org/anthology/D19-1660.pdf |
2019 |
EMNLP |
# optim-adam, reg-dropout, reg-stopping, arch-lstm, arch-bilstm, arch-att, arch-selfatt, arch-bilinear, arch-coverage, arch-subword, arch-transformer, comb-ensemble, pre-fasttext, pre-elmo, struct-crf, latent-vae, task-seqlab, task-lm, task-seq2seq, task-relation, task-tree |
0 |
Semantic Role Labeling with Iterative Structure Refinement |
Chunchuan Lyu, Shay B. Cohen, Ivan Titov |
https://www.aclweb.org/anthology/D19-1099.pdf |
2019 |
EMNLP |
# arch-lstm, arch-bilinear, arch-coverage, arch-subword, pre-elmo, pre-bert, latent-vae, task-textpair, task-lm |
0 |
Unsupervised Labeled Parsing with Deep Inside-Outside Recursive Autoencoders |
Andrew Drozdov, Patrick Verga, Yi-Pei Chen, Mohit Iyyer, Andrew McCallum |
https://www.aclweb.org/anthology/D19-1161.pdf |
2019 |
EMNLP |
# optim-sgd, optim-adam, train-mll, train-active, arch-lstm, arch-bilstm, pre-word2vec, pre-elmo, pre-bert, task-textclass, task-textpair, task-extractive, task-condlm, task-seq2seq |
2 |
MoverScore: Text Generation Evaluating with Contextualized Embeddings and Earth Mover Distance |
Wei Zhao, Maxime Peyrard, Fei Liu, Yang Gao, Christian M. Meyer, Steffen Eger |
https://www.aclweb.org/anthology/D19-1053.pdf |
2019 |
EMNLP |
# optim-projection, reg-dropout, train-mtl, train-transfer, pool-max, arch-rnn, arch-lstm, arch-bilstm, arch-cnn, arch-att, arch-transformer, pre-word2vec, pre-glove, pre-elmo, pre-bert, task-textclass, task-textpair, task-lm, task-seq2seq |
0 |
Shallow Domain Adaptive Embeddings for Sentiment Analysis |
Prathusha K Sarma, Yingyu Liang, William Sethares |
https://www.aclweb.org/anthology/D19-1557.pdf |
2019 |
EMNLP |
# optim-adam, arch-rnn, arch-lstm, arch-bilstm, arch-att, arch-bilinear, arch-coverage, pre-elmo, task-textpair, task-seq2seq |
0 |
Query-focused Scenario Construction |
Su Wang, Greg Durrett, Katrin Erk |
https://www.aclweb.org/anthology/D19-1273.pdf |
2019 |
EMNLP |
# optim-adam, reg-dropout, reg-stopping, reg-patience, train-transfer, arch-lstm, arch-bilstm, arch-att, arch-bilinear, arch-subword, arch-transformer, comb-ensemble, pre-elmo, pre-bert, struct-crf, task-textclass, task-lm, task-seq2seq, task-relation |
6 |
SciBERT: A Pretrained Language Model for Scientific Text |
Iz Beltagy, Kyle Lo, Arman Cohan |
https://www.aclweb.org/anthology/D19-1371.pdf |
2019 |
EMNLP |
# optim-adam, optim-projection, train-mll, train-transfer, train-active, pre-elmo, pre-bert, adv-train, latent-vae, task-textclass, task-seqlab, task-lm, task-seq2seq, task-lexicon |
2 |
Adversarial Learning with Contextual Embeddings for Zero-resource Cross-lingual Classification and NER |
Phillip Keung, Yichao Lu, Vikas Bhardwaj |
https://www.aclweb.org/anthology/D19-1138.pdf |
2019 |
EMNLP |
# optim-adam, reg-dropout, arch-rnn, arch-cnn, arch-att, arch-selfatt, arch-transformer, pre-elmo, task-seq2seq |
0 |
Open Domain Web Keyphrase Extraction Beyond Language Modeling |
Lee Xiong, Chuan Hu, Chenyan Xiong, Daniel Campos, Arnold Overwijk |
https://www.aclweb.org/anthology/D19-1521.pdf |
2019 |
EMNLP |
# arch-lstm, arch-att, search-beam, pre-elmo, pre-bert, task-lm, task-seq2seq |
0 |
Generating Natural Anagrams: Towards Language Generation Under Hard Combinatorial Constraints |
Masaaki Nishino, Sho Takase, Tsutomu Hirao, Masaaki Nagata |
https://www.aclweb.org/anthology/D19-1674.pdf |
2019 |
EMNLP |
# arch-att, pre-elmo, pre-bert, task-lm, task-seq2seq |
0 |
Higher-order Comparisons of Sentence Encoder Representations |
Mostafa Abdou, Artur Kulmizev, Felix Hill, Daniel M. Low, Anders Søgaard |
https://www.aclweb.org/anthology/D19-1593.pdf |
2019 |
EMNLP |
# optim-adam, optim-projection, reg-dropout, train-mtl, train-mll, train-transfer, pool-mean, arch-lstm, arch-att, arch-selfatt, arch-bilinear, arch-transformer, pre-elmo, pre-bert, task-textclass, task-textpair, task-lm, task-seq2seq, task-relation, task-tree, task-lexicon |
8 |
75 Languages, 1 Model: Parsing Universal Dependencies Universally |
Dan Kondratyuk, Milan Straka |
https://www.aclweb.org/anthology/D19-1279.pdf |
2019 |
EMNLP |
# train-mtl, arch-lstm, arch-cnn, arch-att, pre-fasttext, pre-elmo, pre-bert, task-relation |
0 |
A Context-based Framework for Modeling the Role and Function of On-line Resource Citations in Scientific Literature |
He Zhao, Zhunchen Luo, Chong Feng, Anqing Zheng, Xiaopeng Liu |
https://www.aclweb.org/anthology/D19-1524.pdf |
2019 |
EMNLP |
# optim-sgd, optim-adam, optim-projection, arch-lstm, arch-att, arch-selfatt, arch-coverage, arch-transformer, pre-elmo, pre-bert, task-textclass, task-lm, task-cloze |
1 |
Visualizing and Understanding the Effectiveness of BERT |
Yaru Hao, Li Dong, Furu Wei, Ke Xu |
https://www.aclweb.org/anthology/D19-1424.pdf |
2019 |
EMNLP |
# optim-adam, optim-projection, init-glorot, reg-norm, train-mll, arch-lstm, arch-bilstm, arch-att, arch-selfatt, arch-bilinear, comb-ensemble, pre-elmo, pre-bert, task-lm, task-relation |
0 |
Specializing Word Embeddings (for Parsing) by Information Bottleneck |
Xiang Lisa Li, Jason Eisner |
https://www.aclweb.org/anthology/D19-1276.pdf |
2019 |
EMNLP |
# optim-sgd, optim-adam, reg-dropout, reg-decay, norm-gradient, arch-rnn, arch-lstm, arch-bilstm, arch-cnn, pre-glove, pre-elmo, pre-bert, struct-crf, task-seqlab, task-lm, meta-arch |
0 |
Improved Differentiable Architecture Search for Language Modeling and Named Entity Recognition |
Yufan Jiang, Chi Hu, Tong Xiao, Chunliang Zhang, Jingbo Zhu |
https://www.aclweb.org/anthology/D19-1367.pdf |
2019 |
EMNLP |
# optim-adam, reg-dropout, arch-lstm, arch-bilstm, arch-att, arch-coverage, arch-transformer, pre-glove, pre-elmo, pre-bert, adv-examp, adv-train, task-textpair, task-lm |
0 |
A Logic-Driven Framework for Consistency of Neural Models |
Tao Li, Vivek Gupta, Maitrey Mehta, Vivek Srikumar |
https://www.aclweb.org/anthology/D19-1405.pdf |
2019 |
EMNLP |
# optim-adam, reg-stopping, train-mtl, train-mll, train-transfer, arch-lstm, arch-coverage, pre-glove, pre-elmo, task-textclass, task-lm |
0 |
Multi-Domain Goal-Oriented Dialogues (MultiDoGO): Strategies toward Curating and Annotating Large Scale Dialogue Data |
Denis Peskov, Nancy Clarke, Jason Krone, Brigi Fodor, Yi Zhang, Adel Youssef, Mona Diab |
https://www.aclweb.org/anthology/D19-1460.pdf |
2019 |
EMNLP |
# optim-adam, optim-adadelta, optim-projection, init-glorot, reg-dropout, norm-layer, train-mtl, arch-lstm, arch-bilstm, arch-att, arch-selfatt, arch-residual, arch-bilinear, arch-coverage, arch-transformer, pre-glove, pre-elmo, pre-bert, task-textclass, task-seq2seq, task-relation, task-tree |
0 |
Syntax-Enhanced Self-Attention-Based Semantic Role Labeling |
Yue Zhang, Rui Wang, Luo Si |
https://www.aclweb.org/anthology/D19-1057.pdf |
2019 |
EMNLP |
# optim-adam, arch-rnn, arch-lstm, arch-att, arch-selfatt, arch-transformer, pre-elmo, pre-bert, task-spanlab |
0 |
Machine Reading Comprehension Using Structural Knowledge Graph-aware Network |
Delai Qiu, Yuanzhe Zhang, Xinwei Feng, Xiangwen Liao, Wenbin Jiang, Yajuan Lyu, Kang Liu, Jun Zhao |
https://www.aclweb.org/anthology/D19-1602.pdf |
2019 |
EMNLP |
# reg-dropout, reg-stopping, reg-patience, arch-lstm, arch-cnn, search-viterbi, pre-word2vec, pre-fasttext, pre-elmo, pre-bert |
0 |
HARE: a Flexible Highlighting Annotator for Ranking and Exploration |
Denis Newman-Griffis, Eric Fosler-Lussier |
https://www.aclweb.org/anthology/D19-3015.pdf |
2019 |
EMNLP |
# reg-dropout, arch-rnn, arch-lstm, arch-bilstm, arch-gru, arch-att, arch-gating, arch-transformer, pre-glove, pre-elmo, pre-bert, struct-crf, task-textclass, task-seqlab, task-seq2seq, meta-arch |
1 |
NeuronBlocks: Building Your NLP DNN Models Like Playing Lego |
Ming Gong, Linjun Shou, Wutao Lin, Zhijie Sang, Quanjia Yan, Ze Yang, Feixiang Cheng, Daxin Jiang |
https://www.aclweb.org/anthology/D19-3028.pdf |
2019 |
EMNLP |
# optim-adam, optim-projection, init-glorot, reg-dropout, arch-att, arch-selfatt, arch-memo, arch-coverage, arch-transformer, search-beam, pre-elmo, pre-bert, task-spanlab, task-seq2seq, task-relation, task-tree |
1 |
A Multi-Type Multi-Span Network for Reading Comprehension that Requires Discrete Reasoning |
Minghao Hu, Yuxing Peng, Zhen Huang, Dongsheng Li |
https://www.aclweb.org/anthology/D19-1170.pdf |
2019 |
EMNLP |
# optim-adam, reg-dropout, reg-labelsmooth, arch-rnn, arch-cnn, arch-att, arch-selfatt, arch-copy, arch-coverage, arch-subword, arch-transformer, search-beam, pre-elmo, pre-bert, latent-vae, task-seqlab, task-extractive, task-lm, task-seq2seq, task-cloze |
6 |
Text Summarization with Pretrained Encoders |
Yang Liu, Mirella Lapata |
https://www.aclweb.org/anthology/D19-1387.pdf |
2019 |
EMNLP |
# optim-sgd, optim-adam, reg-stopping, train-mll, arch-lstm, arch-att, arch-selfatt, arch-transformer, pre-elmo, task-spanlab, task-lm, task-seq2seq |
1 |
Retrofitting Contextualized Word Embeddings with Paraphrases |
Weijia Shi, Muhao Chen, Pei Zhou, Kai-Wei Chang |
https://www.aclweb.org/anthology/D19-1113.pdf |
2019 |
EMNLP |
# optim-adam, reg-dropout, reg-stopping, train-mtl, arch-lstm, arch-bilstm, arch-att, arch-memo, pre-glove, pre-elmo, pre-bert, task-textpair, task-spanlab, task-tree |
1 |
What’s Missing: A Knowledge Gap Guided Approach for Multi-hop Question Answering |
Tushar Khot, Ashish Sabharwal, Peter Clark |
https://www.aclweb.org/anthology/D19-1281.pdf |
2019 |
EMNLP |
# reg-dropout, arch-lstm, arch-att, pre-glove, pre-elmo, pre-bert, task-seqlab, task-lm, task-cloze |
1 |
BERT for Coreference Resolution: Baselines and Analysis |
Mandar Joshi, Omer Levy, Luke Zettlemoyer, Daniel Weld |
https://www.aclweb.org/anthology/D19-1588.pdf |
2019 |
EMNLP |
# optim-adam, optim-projection, arch-lstm, arch-bilstm, arch-att, arch-selfatt, arch-subword, comb-ensemble, search-beam, pre-word2vec, pre-glove, pre-elmo, adv-examp, task-textclass, task-textpair, task-spanlab, task-lm, task-seq2seq |
8 |
Universal Adversarial Triggers for Attacking and Analyzing NLP |
Eric Wallace, Shi Feng, Nikhil Kandpal, Matt Gardner, Sameer Singh |
https://www.aclweb.org/anthology/D19-1221.pdf |
2019 |
EMNLP |
# optim-projection, reg-dropout, reg-labelsmooth, arch-rnn, arch-gru, pre-elmo, task-seq2seq |
0 |
Deep Ordinal Regression for Pledge Specificity Prediction |
Shivashankar Subramanian, Trevor Cohn, Timothy Baldwin |
https://www.aclweb.org/anthology/D19-1182.pdf |
2019 |
EMNLP |
# train-mll, arch-rnn, arch-lstm, arch-bilstm, arch-gru, arch-cnn, arch-att, arch-memo, arch-transformer, pre-glove, pre-elmo, pre-bert, latent-vae, task-lm, task-seq2seq |
0 |
Next Sentence Prediction helps Implicit Discourse Relation Classification within and across Domains |
Wei Shi, Vera Demberg |
https://www.aclweb.org/anthology/D19-1586.pdf |
2019 |
EMNLP |
# arch-lstm, arch-bilstm, arch-coverage, pre-glove, pre-elmo, task-lm, task-seq2seq, task-alignment |
0 |
Recursive Context-Aware Lexical Simplification |
Sian Gooding, Ekaterina Kochmar |
https://www.aclweb.org/anthology/D19-1491.pdf |
2019 |
EMNLP |
# optim-projection, train-mll, train-transfer, arch-lstm, arch-att, arch-coverage, arch-subword, arch-transformer, comb-ensemble, pre-word2vec, pre-fasttext, pre-glove, pre-skipthought, pre-elmo, pre-bert, pre-use, adv-train, loss-cca, task-textclass, task-textpair, task-lm, task-seq2seq, task-cloze |
0 |
Multi-View Domain Adapted Sentence Embeddings for Low-Resource Unsupervised Duplicate Question Detection |
Nina Poerner, Hinrich Schütze |
https://www.aclweb.org/anthology/D19-1173.pdf |
2019 |
EMNLP |
# optim-adam, train-mll, arch-rnn, arch-lstm, arch-gru, arch-cnn, arch-memo, pre-skipthought, pre-elmo, pre-bert, task-textclass, task-textpair, task-lm, task-seq2seq |
0 |
Evaluation Benchmarks and Learning Criteria for Discourse-Aware Sentence Representations |
Mingda Chen, Zewei Chu, Kevin Gimpel |
https://www.aclweb.org/anthology/D19-1060.pdf |
2019 |
EMNLP |
# optim-adam, pool-max, pool-mean, arch-rnn, arch-lstm, arch-bilstm, arch-att, arch-bilinear, arch-subword, pre-word2vec, pre-elmo, task-lm, task-seq2seq, task-tree |
0 |
A Unified Neural Coherence Model |
Han Cheol Moon, Tasnim Mohiuddin, Shafiq Joty, Chi Xu |
https://www.aclweb.org/anthology/D19-1231.pdf |
2019 |
EMNLP |
# optim-sgd, reg-dropout, arch-lstm, arch-bilstm, arch-att, pre-word2vec, pre-glove, pre-elmo, pre-bert, struct-crf, latent-topic, task-tree |
0 |
Negative Focus Detection via Contextual Attention Mechanism |
Longxiang Shen, Bowei Zou, Yu Hong, Guodong Zhou, Qiaoming Zhu, AiTi Aw |
https://www.aclweb.org/anthology/D19-1230.pdf |
2019 |
NAACL |
# optim-sgd, optim-adam, optim-projection, reg-dropout, arch-rnn, arch-lstm, pre-elmo, loss-cca, loss-svd, task-seqlab, task-lm, task-seq2seq |
13 |
Understanding Learning Dynamics Of Language Models with SVCCA |
Naomi Saphra, Adam Lopez |
https://www.aclweb.org/anthology/N19-1329.pdf |
2019 |
NAACL |
# reg-dropout, train-mtl, arch-lstm, arch-cnn, arch-att, arch-selfatt, pre-elmo, task-lm, task-tree |
1 |
Neural Constituency Parsing of Speech Transcripts |
Paria Jamshid Lou, Yufei Wang, Mark Johnson |
https://www.aclweb.org/anthology/N19-1282.pdf |
2019 |
NAACL |
# optim-adam, reg-stopping, train-augment, arch-lstm, arch-att, arch-selfatt, arch-copy, arch-coverage, arch-subword, arch-transformer, search-beam, pre-glove, pre-elmo, task-textpair, task-lm, task-condlm, task-seq2seq, task-tree |
1 |
Improved Lexically Constrained Decoding for Translation and Monolingual Rewriting |
J. Edward Hu, Huda Khayrallah, Ryan Culkin, Patrick Xia, Tongfei Chen, Matt Post, Benjamin Van Durme |
https://www.aclweb.org/anthology/N19-1090.pdf |
2019 |
NAACL |
# optim-adam, optim-projection, reg-dropout, train-mll, train-transfer, arch-lstm, arch-bilstm, arch-att, arch-selfatt, comb-ensemble, pre-elmo, pre-bert, struct-crf, task-seqlab, task-lm, task-seq2seq, task-relation, task-tree, task-lexicon, task-alignment |
12 |
Cross-lingual Transfer Learning for Multilingual Task Oriented Dialog |
Sebastian Schuster, Sonal Gupta, Rushin Shah, Mike Lewis |
https://www.aclweb.org/anthology/N19-1380.pdf |
2019 |
NAACL |
# optim-sgd, reg-dropout, reg-worddropout, reg-patience, pool-max, pool-mean, arch-rnn, arch-lstm, arch-bilstm, arch-subword, pre-elmo, pre-bert, struct-crf, task-seqlab, task-lm |
27 |
Pooled Contextualized Embeddings for Named Entity Recognition |
Alan Akbik, Tanja Bergmann, Roland Vollgraf |
https://www.aclweb.org/anthology/N19-1078.pdf |
2019 |
NAACL |
# optim-adam, arch-cnn, arch-att, arch-coverage, pre-elmo, pre-bert, task-textclass, task-spanlab, task-lm, task-cloze |
19 |
BERT Post-Training for Review Reading Comprehension and Aspect-based Sentiment Analysis |
Hu Xu, Bing Liu, Lei Shu, Philip Yu |
https://www.aclweb.org/anthology/N19-1242.pdf |
2019 |
NAACL |
# optim-sgd, optim-adam, reg-dropout, train-mtl, arch-rnn, arch-lstm, arch-gru, arch-att, arch-coverage, arch-transformer, pre-glove, pre-elmo, task-lm |
3 |
Recursive Routing Networks: Learning to Compose Modules for Language Understanding |
Ignacio Cases, Clemens Rosenbaum, Matthew Riemer, Atticus Geiger, Tim Klinger, Alex Tamkin, Olivia Li, Sandhini Agarwal, Joshua D. Greene, Dan Jurafsky, Christopher Potts, Lauri Karttunen |
https://www.aclweb.org/anthology/N19-1365.pdf |
2019 |
NAACL |
# optim-adam, train-mtl, pool-max, arch-rnn, arch-lstm, arch-bilstm, arch-treelstm, arch-gnn, arch-cnn, arch-att, arch-selfatt, arch-transformer, pre-glove, pre-elmo, pre-bert, struct-crf, task-textclass, task-textpair, task-seqlab, task-lm, task-seq2seq |
17 |
Star-Transformer |
Qipeng Guo, Xipeng Qiu, Pengfei Liu, Yunfan Shao, Xiangyang Xue, Zheng Zhang |
https://www.aclweb.org/anthology/N19-1133.pdf |
2019 |
NAACL |
# optim-adam, reg-dropout, arch-rnn, arch-lstm, arch-bilstm, pre-glove, pre-elmo, pre-bert, struct-crf, task-seqlab |
3 |
GraphIE: A Graph-Based Framework for Information Extraction |
Yujie Qian, Enrico Santus, Zhijing Jin, Jiang Guo, Regina Barzilay |
https://www.aclweb.org/anthology/N19-1082.pdf |
2019 |
NAACL |
# optim-adam, train-mtl, train-mll, arch-att, arch-subword, arch-transformer, pre-word2vec, pre-fasttext, pre-glove, pre-elmo, pre-use, latent-vae, task-textclass, task-textpair, task-seq2seq |
2 |
Correlation Coefficients and Semantic Textual Similarity |
Vitalii Zhelezniak, Aleksandar Savkov, April Shen, Nils Hammerla |
https://www.aclweb.org/anthology/N19-1100.pdf |
2019 |
NAACL |
# reg-dropout, arch-lstm, arch-att, arch-selfatt, arch-memo, pre-elmo, pre-bert, task-lm, task-cloze |
16 |
Utilizing BERT for Aspect-Based Sentiment Analysis via Constructing Auxiliary Sentence |
Chi Sun, Luyao Huang, Xipeng Qiu |
https://www.aclweb.org/anthology/N19-1035.pdf |
2019 |
NAACL |
# train-transfer, arch-lstm, arch-bilstm, pre-word2vec, pre-glove, pre-elmo, pre-bert, struct-crf, adv-train, task-textclass, task-textpair, task-seqlab, task-lm |
1 |
Using Similarity Measures to Select Pretraining Data for NER |
Xiang Dai, Sarvnaz Karimi, Ben Hachey, Cecile Paris |
https://www.aclweb.org/anthology/N19-1149.pdf |
2019 |
NAACL |
# optim-adam, reg-patience, arch-rnn, arch-lstm, arch-bilstm, arch-att, arch-selfatt, arch-bilinear, arch-coverage, arch-subword, arch-transformer, pre-glove, pre-elmo, pre-bert, struct-crf, adv-train, task-textclass, task-textpair, task-seqlab, task-lm, task-seq2seq, task-relation |
51 |
Linguistic Knowledge and Transferability of Contextual Representations |
Nelson F. Liu, Matt Gardner, Yonatan Belinkov, Matthew E. Peters, Noah A. Smith |
https://www.aclweb.org/anthology/N19-1112.pdf |
2019 |
NAACL |
# optim-adadelta, reg-dropout, train-transfer, arch-rnn, arch-lstm, arch-bilstm, arch-gru, arch-bigru, arch-cnn, arch-att, pre-glove, pre-elmo, pre-bert, task-lm |
4 |
Structural Scaffolds for Citation Intent Classification in Scientific Publications |
Arman Cohan, Waleed Ammar, Madeleine van Zuylen, Field Cady |
https://www.aclweb.org/anthology/N19-1361.pdf |
2019 |
NAACL |
# optim-adam, arch-rnn, arch-lstm, arch-att, arch-coverage, pre-word2vec, pre-glove, pre-elmo, latent-vae, task-lm, task-seq2seq |
4 |
In Other News: a Bi-style Text-to-speech Model for Synthesizing Newscaster Voice with Limited Data |
Nishant Prateek, Mateusz Łajszczak, Roberto Barra-Chicote, Thomas Drugman, Jaime Lorenzo-Trueba, Thomas Merritt, Srikanth Ronanki, Trevor Wood |
https://www.aclweb.org/anthology/N19-2026.pdf |
2019 |
NAACL |
# optim-sgd, optim-adam, reg-dropout, reg-labelsmooth, norm-layer, arch-att, arch-selfatt, arch-subword, arch-transformer, pre-elmo, task-textclass, task-lm, task-seq2seq |
10 |
Pre-trained language model representations for language generation |
Sergey Edunov, Alexei Baevski, Michael Auli |
https://www.aclweb.org/anthology/N19-1409.pdf |
2019 |
NAACL |
# train-mll, arch-lstm, arch-subword, pre-fasttext, pre-elmo, loss-svd, task-lm, task-seq2seq, task-relation, task-alignment |
8 |
Context-Aware Cross-Lingual Mapping |
Hanan Aldarmaki, Mona Diab |
https://www.aclweb.org/anthology/N19-1391.pdf |
2019 |
NAACL |
# optim-adam, optim-adagrad, optim-adadelta, optim-projection, reg-dropout, reg-norm, train-mll, train-active, arch-lstm, arch-bilstm, arch-cnn, arch-att, arch-gating, arch-bilinear, arch-subword, comb-ensemble, pre-fasttext, pre-elmo, pre-bert, struct-crf, task-seqlab, task-lm, task-seq2seq, task-relation, task-lexicon |
7 |
Polyglot Contextual Representations Improve Crosslingual Transfer |
Phoebe Mulcaire, Jungo Kasai, Noah A. Smith |
https://www.aclweb.org/anthology/N19-1392.pdf |
2019 |
NAACL |
# optim-adam, reg-dropout, arch-lstm, arch-att, arch-gating, arch-subword, arch-transformer, pre-glove, pre-elmo, pre-bert, adv-train, task-relation |
3 |
Learning to Denoise Distantly-Labeled Data for Entity Typing |
Yasumasa Onoe, Greg Durrett |
https://www.aclweb.org/anthology/N19-1250.pdf |
2019 |
NAACL |
# init-glorot, train-mll, train-parallel, arch-rnn, arch-lstm, arch-bilstm, comb-ensemble, pre-elmo, task-relation |
2 |
Recursive Subtree Composition in LSTM-Based Dependency Parsing |
Miryam de Lhoneux, Miguel Ballesteros, Joakim Nivre |
https://www.aclweb.org/anthology/N19-1159.pdf |
2019 |
NAACL |
# optim-adam, optim-projection, reg-stopping, train-mtl, train-transfer, pool-max, arch-rnn, arch-lstm, arch-bilstm, arch-att, comb-ensemble, pre-elmo, pre-bert, task-textclass, task-textpair, task-lm, task-seq2seq, task-relation |
0 |
AutoSeM: Automatic Task Selection and Mixing in Multi-Task Learning |
Han Guo, Ramakanth Pasunuru, Mohit Bansal |
https://www.aclweb.org/anthology/N19-1355.pdf |
2019 |
NAACL |
# arch-rnn, arch-lstm, arch-bilstm, arch-treelstm, arch-att, arch-coverage, arch-transformer, pre-word2vec, pre-elmo, pre-bert, pre-use, task-textpair, task-seq2seq, task-tree |
4 |
Evaluating Coherence in Dialogue Systems using Entailment |
Nouha Dziri, Ehsan Kamalloo, Kory Mathewson, Osmar Zaiane |
https://www.aclweb.org/anthology/N19-1381.pdf |
2019 |
NAACL |
# optim-sgd, optim-projection, reg-dropout, reg-decay, norm-gradient, arch-rnn, arch-lstm, arch-bilstm, arch-att, arch-coverage, pre-glove, pre-elmo, pre-bert, struct-crf, latent-vae, task-seqlab, task-lm, task-seq2seq |
2 |
Knowledge-Augmented Language Model and Its Application to Unsupervised Named-Entity Recognition |
Angli Liu, Jingfei Du, Veselin Stoyanov |
https://www.aclweb.org/anthology/N19-1117.pdf |
2019 |
NAACL |
# optim-projection, reg-dropout, train-mtl, arch-lstm, arch-bilstm, arch-cnn, arch-att, search-beam, pre-elmo, struct-crf, adv-train, task-seqlab, task-relation |
10 |
A general framework for information extraction using dynamic span graphs |
Yi Luan, Dave Wadden, Luheng He, Amy Shah, Mari Ostendorf, Hannaneh Hajishirzi |
https://www.aclweb.org/anthology/N19-1308.pdf |
2019 |
NAACL |
# optim-adam, train-augment, pre-glove, pre-elmo, pre-bert, task-lm |
24 |
Gender Bias in Contextualized Word Embeddings |
Jieyu Zhao, Tianlu Wang, Mark Yatskar, Ryan Cotterell, Vicente Ordonez, Kai-Wei Chang |
https://www.aclweb.org/anthology/N19-1064.pdf |
2019 |
NAACL |
# optim-adam, optim-projection, reg-dropout, reg-stopping, norm-gradient, arch-lstm, arch-bilstm, arch-cnn, arch-att, pre-fasttext, pre-elmo, struct-crf, adv-examp, adv-train, task-textclass, task-lm, task-condlm |
3 |
Text Processing Like Humans Do: Visually Attacking and Shielding NLP Systems |
Steffen Eger, Gözde Gül Şahin, Andreas Rücklé, Ji-Ung Lee, Claudia Schulz, Mohsen Mesgar, Krishnkant Swarnkar, Edwin Simpson, Iryna Gurevych |
https://www.aclweb.org/anthology/N19-1165.pdf |
2019 |
NAACL |
# optim-adam, reg-dropout, arch-lstm, arch-bilstm, arch-cnn, arch-att, arch-bilinear, pre-glove, pre-elmo, pre-bert, task-relation, task-tree |
0 |
Decomposed Local Models for Coordinate Structure Parsing |
Hiroki Teranishi, Hiroyuki Shindo, Yuji Matsumoto |
https://www.aclweb.org/anthology/N19-1343.pdf |
2019 |
NAACL |
# train-mll, arch-att, pre-fasttext, pre-glove, pre-skipthought, pre-elmo, pre-bert, loss-svd, task-seq2seq |
0 |
Big BiRD: A Large, Fine-Grained, Bigram Relatedness Dataset for Examining Semantic Composition |
Shima Asaadi, Saif Mohammad, Svetlana Kiritchenko |
https://www.aclweb.org/anthology/N19-1050.pdf |
2019 |
NAACL |
# arch-lstm, arch-att, arch-coverage, pre-fasttext, pre-glove, pre-elmo, pre-use, task-textclass, task-relation, task-tree |
0 |
Outlier Detection for Improved Data Quality and Diversity in Dialog Systems |
Stefan Larson, Anish Mahendran, Andrew Lee, Jonathan K. Kummerfeld, Parker Hill, Michael A. Laurenzano, Johann Hauswald, Lingjia Tang, Jason Mars |
https://www.aclweb.org/anthology/N19-1051.pdf |
2019 |
NAACL |
# optim-sgd, optim-adam, reg-dropout, reg-patience, arch-rnn, arch-birnn, arch-gru, arch-bigru, arch-cnn, arch-att, arch-selfatt, arch-memo, arch-subword, pre-word2vec, pre-glove, pre-skipthought, pre-elmo, struct-crf, adv-train, latent-vae, task-textclass, task-lm |
6 |
Dialogue Act Classification with Context-Aware Self-Attention |
Vipul Raheja, Joel Tetreault |
https://www.aclweb.org/anthology/N19-1373.pdf |
2019 |
NAACL |
# optim-adam, arch-rnn, arch-lstm, arch-bilstm, arch-att, arch-selfatt, arch-bilinear, arch-subword, pre-elmo, pre-bert, task-lm, task-seq2seq, task-relation |
43 |
A Structural Probe for Finding Syntax in Word Representations |
John Hewitt, Christopher D. Manning |
https://www.aclweb.org/anthology/N19-1419.pdf |
2019 |
NAACL |
# optim-sgd, reg-dropout, train-mtl, arch-rnn, arch-lstm, arch-att, arch-selfatt, arch-memo, comb-ensemble, pre-glove, pre-elmo, pre-bert, struct-crf, nondif-reinforce, task-seq2seq, task-relation |
7 |
Better, Faster, Stronger Sequence Tagging Constituent Parsers |
David Vilares, Mostafa Abdou, Anders Søgaard |
https://www.aclweb.org/anthology/N19-1341.pdf |
2019 |
NAACL |
# optim-adam, reg-dropout, train-mtl, train-transfer, train-augment, arch-lstm, arch-bilstm, arch-gru, arch-att, arch-selfatt, arch-gating, arch-bilinear, pre-glove, pre-elmo, pre-bert, task-spanlab, task-lm, task-seq2seq |
4 |
Multi-task Learning with Sample Re-weighting for Machine Reading Comprehension |
Yichong Xu, Xiaodong Liu, Yelong Shen, Jingjing Liu, Jianfeng Gao |
https://www.aclweb.org/anthology/N19-1271.pdf |
2019 |
NAACL |
# optim-adam, arch-lstm, arch-bilstm, arch-att, arch-coverage, arch-transformer, search-beam, pre-glove, pre-elmo, pre-bert, struct-crf, adv-examp, task-textpair, task-lm, task-condlm, task-seq2seq, task-alignment |
9 |
PAWS: Paraphrase Adversaries from Word Scrambling |
Yuan Zhang, Jason Baldridge, Luheng He |
https://www.aclweb.org/anthology/N19-1131.pdf |
2019 |
NAACL |
# optim-adam, reg-dropout, arch-lstm, arch-att, arch-coverage, pre-glove, pre-elmo, struct-hmm, task-textpair, task-seq2seq |
3 |
Incorporating Context and External Knowledge for Pronoun Coreference Resolution |
Hongming Zhang, Yan Song, Yangqiu Song |
https://www.aclweb.org/anthology/N19-1093.pdf |
2019 |
NAACL |
# optim-sgd, train-mll, arch-rnn, arch-lstm, arch-att, arch-memo, arch-coverage, arch-subword, arch-transformer, pre-word2vec, pre-glove, pre-elmo, pre-bert, struct-crf, task-textclass, task-lm, task-seq2seq |
2 |
One Size Does Not Fit All: Comparing NMT Representations of Different Granularities |
Nadir Durrani, Fahim Dalvi, Hassan Sajjad, Yonatan Belinkov, Preslav Nakov |
https://www.aclweb.org/anthology/N19-1154.pdf |
2019 |
NAACL |
# optim-adam, optim-projection, reg-dropout, reg-stopping, train-mll, train-transfer, arch-lstm, arch-bilstm, arch-att, arch-bilinear, arch-subword, comb-ensemble, pre-fasttext, pre-elmo, pre-bert, loss-nce, task-textclass, task-lm, task-seq2seq, task-relation, task-lexicon |
12 |
Cross-Lingual Alignment of Contextual Word Embeddings, with Applications to Zero-shot Dependency Parsing |
Tal Schuster, Ori Ram, Regina Barzilay, Amir Globerson |
https://www.aclweb.org/anthology/N19-1162.pdf |
2019 |
NAACL |
# optim-adam, reg-dropout, arch-rnn, arch-lstm, arch-bilstm, arch-gru, arch-att, arch-memo, pre-elmo, task-spanlab, task-seq2seq |
7 |
BAG: Bi-directional Attention Entity Graph Convolutional Network for Multi-hop Reasoning Question Answering |
Yu Cao, Meng Fang, Dacheng Tao |
https://www.aclweb.org/anthology/N19-1032.pdf |
2019 |
NAACL |
# reg-dropout, reg-decay, train-mtl, arch-rnn, arch-att, arch-selfatt, arch-copy, arch-transformer, comb-ensemble, pre-glove, pre-elmo, pre-bert, task-lm, task-seq2seq, task-tree |
20 |
Improving Grammatical Error Correction via Pre-Training a Copy-Augmented Architecture with Unlabeled Data |
Wei Zhao, Liang Wang, Kewei Shen, Ruoyu Jia, Jingming Liu |
https://www.aclweb.org/anthology/N19-1014.pdf |
2019 |
NAACL |
# train-mll, arch-rnn, arch-lstm, arch-bilstm, comb-ensemble, pre-word2vec, pre-glove, pre-paravec, pre-skipthought, pre-elmo, task-seq2seq |
0 |
Learning Outside the Box: Discourse-level Features Improve Metaphor Identification |
Jesse Mu, Helen Yannakoudakis, Ekaterina Shutova |
https://www.aclweb.org/anthology/N19-1059.pdf |
2019 |
NAACL |
# optim-adam, reg-dropout, reg-decay, train-transfer, train-augment, arch-lstm, arch-bilstm, arch-att, arch-selfatt, arch-coverage, arch-transformer, comb-ensemble, pre-glove, pre-skipthought, pre-elmo, pre-bert, struct-crf, task-textclass, task-textpair, task-seqlab, task-spanlab, task-lm, task-seq2seq, task-cloze |
3209 |
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding |
Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova |
https://www.aclweb.org/anthology/N19-1423.pdf |
2019 |
NAACL |
# optim-adam, arch-lstm, arch-bilstm, arch-att, arch-selfatt, pre-glove, pre-elmo |
2 |
Ranking and Selecting Multi-Hop Knowledge Paths to Better Predict Human Needs |
Debjit Paul, Anette Frank |
https://www.aclweb.org/anthology/N19-1368.pdf |