| Year | Conf. | Topic | Cited | Paper | Authors | Url |
| --- | --- | --- | --- | --- | --- | --- |
| 2019 | ACL | # optim-adam, init-glorot, reg-dropout, reg-decay, train-mtl, arch-lstm, arch-bilstm, arch-att, task-condlm, task-seq2seq | 1 | Storyboarding of Recipes: Grounded Contextual Generation | Khyathi Chandu, Eric Nyberg, Alan W Black | https://www.aclweb.org/anthology/P19-1606.pdf |
| 2019 | ACL | # optim-adam, reg-dropout, reg-decay, norm-batch, norm-gradient, train-mtl, activ-relu, arch-rnn, arch-lstm, arch-bilstm, arch-cnn, arch-att, arch-coverage, arch-transformer, pre-fasttext, task-textclass, task-lm | 1 | #YouToo? Detection of Personal Recollections of Sexual Harassment on Social Media | Arijit Ghosh Chowdhury, Ramit Sawhney, Rajiv Ratn Shah, Debanjan Mahata | https://www.aclweb.org/anthology/P19-1241.pdf |
| 2019 | ACL | # optim-adam, reg-dropout, reg-decay, norm-layer, train-augment, arch-gnn, arch-att, arch-selfatt, arch-copy, arch-transformer, search-greedy, pre-glove, pre-bert, task-textclass, task-lm, task-seq2seq, task-tree, task-graph | 2 | Generating Logical Forms from Graph Representations of Text and Entities | Peter Shaw, Philip Massey, Angelica Chen, Francesco Piccinno, Yasemin Altun | https://www.aclweb.org/anthology/P19-1010.pdf |
| 2019 | ACL | # optim-adam, init-glorot, reg-decay, arch-gnn, arch-att, arch-selfatt, arch-memo, arch-coverage, pre-bert, task-spanlab, task-lm | 7 | Cognitive Graph for Multi-Hop Reading Comprehension at Scale | Ming Ding, Chang Zhou, Qibin Chen, Hongxia Yang, Jie Tang | https://www.aclweb.org/anthology/P19-1259.pdf |
| 2019 | ACL | # optim-adam, init-glorot, reg-decay, train-augment, arch-rnn, arch-att, arch-bilinear, arch-subword, task-textclass, task-lm, task-seq2seq, task-relation | 5 | Counterfactual Data Augmentation for Mitigating Gender Stereotypes in Languages with Rich Morphology | Ran Zmigrod, Sebastian J. Mielke, Hanna Wallach, Ryan Cotterell | https://www.aclweb.org/anthology/P19-1161.pdf |
| 2019 | ACL | # optim-adam, reg-dropout, reg-decay, pool-max, arch-rnn, arch-cnn, arch-coverage, search-beam, pre-glove, pre-skipthought, task-textpair, task-lm | 0 | A Cross-Domain Transferable Neural Coherence Model | Peng Xu, Hamidreza Saghir, Jin Sung Kang, Teng Long, Avishek Joey Bose, Yanshuai Cao, Jackie Chi Kit Cheung | https://www.aclweb.org/anthology/P19-1067.pdf |
| 2019 | ACL | # optim-adam, reg-decay, arch-lstm, arch-att, arch-coverage, comb-ensemble, pre-paravec, latent-topic, task-relation | 0 | Modeling Financial Analysts’ Decision Making via the Pragmatics and Semantics of Earnings Calls | Katherine Keith, Amanda Stent | https://www.aclweb.org/anthology/P19-1047.pdf |
| 2019 | ACL | # optim-adam, reg-dropout, reg-decay, train-transfer, arch-rnn, arch-lstm, arch-cnn, arch-att, arch-memo, comb-ensemble, task-textclass, task-lm, task-seq2seq | 0 | StRE: Self Attentive Edit Quality Prediction in Wikipedia | Soumya Sarkar, Bhanu Prakash Reddy, Sandipan Sikdar, Animesh Mukherjee | https://www.aclweb.org/anthology/P19-1387.pdf |
| 2019 | ACL | # optim-adam, reg-dropout, reg-decay, reg-labelsmooth, train-mll, train-transfer, arch-att, arch-selfatt, arch-transformer, comb-ensemble, search-beam, pre-elmo, pre-bert, task-textclass, task-textpair, task-lm, task-seq2seq, task-cloze | 1 | A Simple and Effective Approach to Automatic Post-Editing with Transfer Learning | Gonçalo M. Correia, André F. T. Martins | https://www.aclweb.org/anthology/P19-1292.pdf |
| 2019 | ACL | # optim-adam, reg-stopping, reg-patience, reg-decay, arch-att, arch-coverage, arch-transformer, comb-ensemble, pre-elmo, pre-bert, task-textpair, task-lm | 1 | GEAR: Graph-based Evidence Aggregating and Reasoning for Fact Verification | Jie Zhou, Xu Han, Cheng Yang, Zhiyuan Liu, Lifeng Wang, Changcheng Li, Maosong Sun | https://www.aclweb.org/anthology/P19-1085.pdf |
| 2019 | ACL | # optim-adagrad, reg-decay, train-transfer, arch-lstm, arch-bilstm, arch-att, arch-selfatt, pre-glove, adv-train, task-textpair, task-seqlab | 5 | OpenDialKG: Explainable Conversational Reasoning with Attention-based Walks over Knowledge Graphs | Seungwhan Moon, Pararth Shah, Anuj Kumar, Rajen Subba | https://www.aclweb.org/anthology/P19-1081.pdf |
| 2019 | ACL | # optim-adam, reg-dropout, reg-decay, train-mtl, arch-rnn, arch-lstm, arch-bilstm, arch-gru, arch-bigru, arch-cnn, arch-att, pre-fasttext, pre-glove, struct-crf, task-textclass, task-seqlab, task-lm, task-seq2seq, task-relation, task-alignment | 1 | Exploring Sequence-to-Sequence Learning in Aspect Term Extraction | Dehong Ma, Sujian Li, Fangzhao Wu, Xing Xie, Houfeng Wang | https://www.aclweb.org/anthology/P19-1344.pdf |
| 2019 | ACL | # reg-decay, arch-lstm, arch-att, arch-selfatt, comb-ensemble, pre-glove, pre-elmo, pre-bert, task-textclass, task-textpair, task-lm, task-seq2seq, task-tree | 8 | Explain Yourself! Leveraging Language Models for Commonsense Reasoning | Nazneen Fatema Rajani, Bryan McCann, Caiming Xiong, Richard Socher | https://www.aclweb.org/anthology/P19-1487.pdf |
| 2019 | ACL | # optim-sgd, reg-dropout, reg-worddropout, reg-norm, reg-decay, train-mtl, arch-rnn, arch-lstm, arch-transformer, task-lm, task-seq2seq | 2 | Improved Language Modeling by Decoding the Past | Siddhartha Brahma | https://www.aclweb.org/anthology/P19-1142.pdf |
| 2019 | EMNLP | # optim-adam, optim-projection, reg-dropout, reg-decay, train-mll, train-transfer, pool-max, arch-lstm, arch-bilstm, arch-cnn, arch-coverage, arch-transformer, struct-crf, task-seqlab | 1 | Low-Resource Name Tagging Learned with Weakly Labeled Data | Yixin Cao, Zikun Hu, Tat-seng Chua, Zhiyuan Liu, Heng Ji | https://www.aclweb.org/anthology/D19-1025.pdf |
| 2019 | EMNLP | # optim-adam, optim-projection, reg-patience, reg-decay, arch-att, arch-coverage, arch-subword, pre-bert, task-seqlab | 13 | Fine-Grained Analysis of Propaganda in News Article | Giovanni Da San Martino, Seunghak Yu, Alberto Barrón-Cedeño, Rostislav Petrov, Preslav Nakov | https://www.aclweb.org/anthology/D19-1565.pdf |
| 2019 | EMNLP | # optim-adam, optim-projection, reg-dropout, reg-stopping, reg-decay, reg-labelsmooth, arch-rnn, arch-att, arch-selfatt, arch-coverage, arch-subword, arch-transformer, task-lm, task-seq2seq, task-alignment | 0 | Jointly Learning to Align and Translate with Transformer Models | Sarthak Garg, Stephan Peitz, Udhyakumar Nallasamy, Matthias Paulik | https://www.aclweb.org/anthology/D19-1453.pdf |
| 2019 | EMNLP | # optim-sgd, reg-decay, train-transfer, arch-lstm, arch-subword, search-beam, task-seq2seq, task-alignment | 0 | HABLex: Human Annotated Bilingual Lexicons for Experiments in Machine Translation | Brian Thompson, Rebecca Knowles, Xuan Zhang, Huda Khayrallah, Kevin Duh, Philipp Koehn | https://www.aclweb.org/anthology/D19-1142.pdf |
| 2019 | EMNLP | # optim-adam, reg-dropout, reg-stopping, reg-norm, reg-decay, arch-lstm, arch-bilstm, arch-bilinear, pre-word2vec, pre-glove, pre-elmo, pre-bert, task-lm, task-seq2seq, task-relation | 1 | Designing and Interpreting Probes with Control Tasks | John Hewitt, Percy Liang | https://www.aclweb.org/anthology/D19-1275.pdf |
| 2019 | EMNLP | # optim-adam, reg-decay, train-transfer, arch-rnn, arch-att, arch-selfatt, arch-memo, arch-subword, arch-transformer, pre-word2vec, pre-glove, pre-elmo, pre-bert, loss-nce, task-textpair, task-alignment | 0 | A Gated Self-attention Memory Network for Answer Selection | Tuan Lai, Quan Hung Tran, Trung Bui, Daisuke Kihara | https://www.aclweb.org/anthology/D19-1610.pdf |
| 2019 | EMNLP | # optim-adam, reg-decay, norm-layer, train-mll, pool-mean, arch-att, arch-selfatt, arch-transformer, comb-ensemble, pre-bert, task-spanlab, task-lm, task-seq2seq | 0 | Cross-Lingual Machine Reading Comprehension | Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Shijin Wang, Guoping Hu | https://www.aclweb.org/anthology/D19-1169.pdf |
| 2019 | EMNLP | # optim-adam, reg-dropout, reg-decay, norm-layer, arch-rnn, arch-lstm, arch-att, arch-selfatt, arch-subword, arch-transformer, search-beam, pre-bert, task-textclass, task-seqlab, task-lm, task-seq2seq | 0 | Subword Language Model for Query Auto-Completion | Gyuwan Kim | https://www.aclweb.org/anthology/D19-1507.pdf |
| 2019 | EMNLP | # optim-adam, init-glorot, reg-dropout, reg-decay, train-mll, train-transfer, pool-max, arch-lstm, arch-att, arch-selfatt, arch-bilinear, arch-subword, arch-transformer, pre-elmo, pre-bert, struct-crf, task-textclass, task-textpair, task-seqlab, task-spanlab, task-lm, task-seq2seq, task-cloze, task-relation | 22 | Beto, Bentz, Becas: The Surprising Cross-Lingual Effectiveness of BERT | Shijie Wu, Mark Dredze | https://www.aclweb.org/anthology/D19-1077.pdf |
| 2019 | EMNLP | # optim-adam, reg-decay, train-augment, arch-att, comb-ensemble, pre-bert, task-textclass, task-textpair, task-spanlab, task-lm, task-relation | 0 | CFO: A Framework for Building Production NLP Systems | Rishav Chakravarti, Cezar Pendus, Andrzej Sakrajda, Anthony Ferritto, Lin Pan, Michael Glass, Vittorio Castelli, J William Murdock, Radu Florian, Salim Roukos, Avi Sil | https://www.aclweb.org/anthology/D19-3006.pdf |
| 2019 | EMNLP | # optim-adam, optim-projection, reg-dropout, reg-decay, arch-rnn, arch-lstm, arch-gru, arch-att, arch-memo, latent-vae, latent-topic, loss-svd, task-lm, task-seq2seq | 0 | Adaptive Parameterization for Neural Dialogue Generation | Hengyi Cai, Hongshen Chen, Cheng Zhang, Yonghao Song, Xiaofang Zhao, Dawei Yin | https://www.aclweb.org/anthology/D19-1188.pdf |
| 2019 | EMNLP | # optim-adam, init-glorot, reg-dropout, reg-decay, train-mtl, pool-max, arch-rnn, arch-lstm, arch-bilstm, arch-gnn, arch-att, pre-fasttext, pre-glove, task-textclass | 0 | Text Level Graph Neural Network for Text Classification | Lianzhe Huang, Dehong Ma, Sujian Li, Xiaodong Zhang, Houfeng Wang | https://www.aclweb.org/anthology/D19-1345.pdf |
| 2019 | EMNLP | # optim-adam, reg-decay, pool-max, arch-gcnn, arch-cnn, arch-att, pre-glove, task-textclass, task-condlm, task-seq2seq | 0 | Hierarchical Text Classification with Reinforced Label Assignment | Yuning Mao, Jingjing Tian, Jiawei Han, Xiang Ren | https://www.aclweb.org/anthology/D19-1042.pdf |
| 2019 | EMNLP | # optim-amsgrad, reg-dropout, reg-decay, train-transfer, arch-lstm, arch-bilstm, arch-cnn, arch-coverage, comb-ensemble, search-beam, task-relation | 0 | A Search-based Neural Model for Biomedical Nested and Overlapping Event Detection | Kurt Junshean Espinosa, Makoto Miwa, Sophia Ananiadou | https://www.aclweb.org/anthology/D19-1381.pdf |
| 2019 | EMNLP | # optim-adam, reg-dropout, reg-decay, train-mll, train-transfer, arch-lstm, arch-att, arch-subword, arch-transformer, search-beam, task-lm, task-seq2seq | 1 | The FLORES Evaluation Datasets for Low-Resource Machine Translation: Nepali–English and Sinhala–English | Francisco Guzmán, Peng-Jen Chen, Myle Ott, Juan Pino, Guillaume Lample, Philipp Koehn, Vishrav Chaudhary, Marc’Aurelio Ranzato | https://www.aclweb.org/anthology/D19-1632.pdf |
| 2019 | EMNLP | # optim-adam, reg-decay, arch-lstm, arch-att, arch-selfatt, arch-transformer, pre-bert, task-lm | 1 | Attending to Future Tokens for Bidirectional Sequence Generation | Carolin Lawrence, Bhushan Kotnis, Mathias Niepert | https://www.aclweb.org/anthology/D19-1001.pdf |
| 2019 | EMNLP | # optim-adam, reg-decay, arch-rnn, arch-gru, arch-att, pre-glove, task-seq2seq | 0 | Who Is Speaking to Whom? Learning to Identify Utterance Addressee in Multi-Party Conversations | Ran Le, Wenpeng Hu, Mingyue Shang, Zhenjun You, Lidong Bing, Dongyan Zhao, Rui Yan | https://www.aclweb.org/anthology/D19-1199.pdf |
| 2019 | EMNLP | # reg-dropout, reg-decay, reg-labelsmooth, train-mll, arch-rnn, arch-lstm, arch-att, arch-subword, arch-transformer, pre-bert, task-textclass, task-lm, task-seq2seq | 2 | MultiFiT: Efficient Multi-lingual Language Model Fine-tuning | Julian Eisenschlos, Sebastian Ruder, Piotr Czapla, Marcin Kardas, Sylvain Gugger, Jeremy Howard | https://www.aclweb.org/anthology/D19-1572.pdf |
| 2019 | EMNLP | # optim-adam, reg-dropout, reg-decay, train-mtl, arch-rnn, arch-lstm, arch-cnn, arch-att, pre-word2vec, task-textclass | 0 | Using Clinical Notes with Time Series Data for ICU Management | Swaraj Khadanga, Karan Aggarwal, Shafiq Joty, Jaideep Srivastava | https://www.aclweb.org/anthology/D19-1678.pdf |
| 2019 | EMNLP | # optim-adam, reg-dropout, reg-decay, arch-att, pre-elmo, loss-svd | 1 | An Attentive Fine-Grained Entity Typing Model with Latent Type Representation | Ying Lin, Heng Ji | https://www.aclweb.org/anthology/D19-1641.pdf |
| 2019 | EMNLP | # optim-adam, reg-decay, norm-gradient, train-transfer, arch-rnn, arch-lstm, arch-bilstm, arch-cnn, arch-att, arch-coverage, nondif-reinforce, task-extractive, task-spanlab, task-condlm, task-seq2seq | 0 | Reading Like HER: Human Reading Inspired Extractive Summarization | Ling Luo, Xiang Ao, Yan Song, Feiyang Pan, Min Yang, Qing He | https://www.aclweb.org/anthology/D19-1300.pdf |
| 2019 | EMNLP | # optim-adam, reg-decay, norm-gradient, arch-gnn, arch-att, arch-selfatt, arch-transformer, pre-bert, task-spanlab, task-tree | 0 | NumNet: Machine Reading Comprehension with Numerical Reasoning | Qiu Ran, Yankai Lin, Peng Li, Jie Zhou, Zhiyuan Liu | https://www.aclweb.org/anthology/D19-1251.pdf |
| 2019 | EMNLP | # optim-adam, reg-decay, arch-lstm, arch-cnn, pre-glove, task-textclass | 0 | Learning Only from Relevant Keywords and Unlabeled Documents | Nontawat Charoenphakdee, Jongyeong Lee, Yiping Jin, Dittaya Wanvarie, Masashi Sugiyama | https://www.aclweb.org/anthology/D19-1411.pdf |
| 2019 | EMNLP | # optim-sgd, reg-dropout, reg-decay, arch-rnn, arch-lstm, arch-bilstm, arch-att, task-lm, task-seq2seq | 3 | Language Modeling for Code-Switching: Evaluation, Integration of Monolingual Data, and Discriminative Training | Hila Gonen, Yoav Goldberg | https://www.aclweb.org/anthology/D19-1427.pdf |
| 2019 | EMNLP | # optim-sgd, optim-adam, reg-dropout, reg-decay, norm-gradient, arch-rnn, arch-lstm, arch-bilstm, arch-cnn, pre-glove, pre-elmo, pre-bert, struct-crf, task-seqlab, task-lm, meta-arch | 0 | Improved Differentiable Architecture Search for Language Modeling and Named Entity Recognition | Yufan Jiang, Chi Hu, Tong Xiao, Chunliang Zhang, Jingbo Zhu | https://www.aclweb.org/anthology/D19-1367.pdf |
| 2019 | EMNLP | # optim-adam, reg-dropout, reg-decay, norm-layer, train-mll, train-parallel, arch-rnn, arch-att, arch-subword, arch-transformer, search-greedy, search-beam, pre-bert, task-lm, task-seq2seq, task-cloze | 2 | Mask-Predict: Parallel Decoding of Conditional Masked Language Models | Marjan Ghazvininejad, Omer Levy, Yinhan Liu, Luke Zettlemoyer | https://www.aclweb.org/anthology/D19-1633.pdf |
| 2019 | NAACL | # optim-sgd, optim-adam, optim-adadelta, reg-decay, arch-lstm, arch-recnn, arch-treelstm, arch-att, arch-coverage, search-beam, pre-glove, struct-cfg, nondif-reinforce, latent-vae, task-textpair, task-lm, task-condlm | 5 | Cooperative Learning of Disjoint Syntax and Semantics | Serhii Havrylov, Germán Kruszewski, Armand Joulin | https://www.aclweb.org/anthology/N19-1115.pdf |
| 2019 | NAACL | # optim-adam, reg-dropout, reg-decay, train-transfer, arch-rnn, arch-att, arch-selfatt, arch-subword, arch-transformer, comb-ensemble, search-beam, task-spanlab, task-seq2seq | 0 | Online Distilling from Checkpoints for Neural Machine Translation | Hao-Ran Wei, Shujian Huang, Ran Wang, Xin-yu Dai, Jiajun Chen | https://www.aclweb.org/anthology/N19-1192.pdf |
| 2019 | NAACL | # optim-adam, reg-stopping, reg-decay, train-active, arch-lstm, pre-word2vec, latent-vae, task-textclass, task-seqlab | 0 | Modelling Instance-Level Annotator Reliability for Natural Language Labelling Tasks | Maolin Li, Arvid Fahlström Myrman, Tingting Mu, Sophia Ananiadou | https://www.aclweb.org/anthology/N19-1295.pdf |
| 2019 | NAACL | # optim-adam, reg-dropout, reg-decay, norm-batch, train-mtl, train-mll, train-transfer, arch-rnn, arch-lstm, arch-att, arch-memo, arch-subword, search-beam, task-lm, task-seq2seq | 23 | Pre-training on high-resource speech recognition improves low-resource speech-to-text translation | Sameer Bansal, Herman Kamper, Karen Livescu, Adam Lopez, Sharon Goldwater | https://www.aclweb.org/anthology/N19-1006.pdf |
| 2019 | NAACL | # optim-sgd, optim-projection, reg-dropout, reg-decay, norm-gradient, arch-rnn, arch-lstm, arch-bilstm, arch-att, arch-coverage, pre-glove, pre-elmo, pre-bert, struct-crf, latent-vae, task-seqlab, task-lm, task-seq2seq | 2 | Knowledge-Augmented Language Model and Its Application to Unsupervised Named-Entity Recognition | Angli Liu, Jingfei Du, Veselin Stoyanov | https://www.aclweb.org/anthology/N19-1117.pdf |
| 2019 | NAACL | # optim-adadelta, reg-decay, train-mtl, arch-rnn, arch-att, pre-word2vec, struct-crf, latent-vae, latent-topic, task-textclass, task-relation | 0 | A Variational Approach to Weakly Supervised Document-Level Multi-Aspect Sentiment Classification | Ziqian Zeng, Wenxuan Zhou, Xin Liu, Yangqiu Song | https://www.aclweb.org/anthology/N19-1036.pdf |
| 2019 | NAACL | # reg-dropout, reg-decay, train-mtl, arch-rnn, arch-att, arch-selfatt, arch-copy, arch-transformer, comb-ensemble, pre-glove, pre-elmo, pre-bert, task-lm, task-seq2seq, task-tree | 20 | Improving Grammatical Error Correction via Pre-Training a Copy-Augmented Architecture with Unlabeled Data | Wei Zhao, Liang Wang, Kewei Shen, Ruoyu Jia, Jingming Liu | https://www.aclweb.org/anthology/N19-1014.pdf |
| 2019 | NAACL | # optim-adam, reg-decay, train-augment, arch-lstm, arch-bilstm, arch-gru, arch-bigru, arch-att, arch-selfatt, comb-ensemble, pre-glove | 0 | On Knowledge Distillation from Complex Networks for Response Prediction | Siddhartha Arora, Mitesh M. Khapra, Harish G. Ramaswamy | https://www.aclweb.org/anthology/N19-1382.pdf |
| 2019 | NAACL | # optim-adam, reg-dropout, reg-decay, train-transfer, train-augment, arch-lstm, arch-bilstm, arch-att, arch-selfatt, arch-coverage, arch-transformer, comb-ensemble, pre-glove, pre-skipthought, pre-elmo, pre-bert, struct-crf, task-textclass, task-textpair, task-seqlab, task-spanlab, task-lm, task-seq2seq, task-cloze | 3209 | BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding | Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova | https://www.aclweb.org/anthology/N19-1423.pdf |