Year Conf. Topics Cited Paper Authors URL
2019 ACL # optim-adam, train-mll, arch-cnn, arch-att, arch-transformer, pre-skipthought, pre-bert, task-extractive, task-lm, task-seq2seq, task-cloze 0 Sentence Centrality Revisited for Unsupervised Summarization Hao Zheng, Mirella Lapata https://www.aclweb.org/anthology/P19-1628.pdf
2019 ACL # optim-adagrad, arch-rnn, arch-lstm, arch-gru, arch-bigru, arch-gnn, arch-att, search-beam, search-viterbi, pre-glove, pre-skipthought, pre-elmo, task-lm, task-relation 0 Multi-Relational Script Learning for Discourse Relations I-Ta Lee, Dan Goldwasser https://www.aclweb.org/anthology/P19-1413.pdf
2019 ACL # optim-adam, reg-dropout, train-mtl, train-mll, pool-max, pool-mean, arch-lstm, arch-bilstm, arch-gru, arch-bigru, arch-coverage, pre-glove, pre-skipthought, pre-elmo, pre-bert, adv-train, task-textpair, task-lm, task-cloze, task-relation 2 DisSent: Learning Sentence Representations from Explicit Discourse Relations Allen Nie, Erin Bennett, Noah Goodman https://www.aclweb.org/anthology/P19-1442.pdf
2019 ACL # optim-adam, optim-projection, arch-rnn, arch-cnn, arch-att, arch-memo, arch-transformer, comb-ensemble, pre-glove, pre-skipthought, task-lm, task-relation 0 Global Textual Relation Embedding for Relational Understanding Zhiyu Chen, Hanwen Zha, Honglei Liu, Wenhu Chen, Xifeng Yan, Yu Su https://www.aclweb.org/anthology/P19-1127.pdf
2019 ACL # optim-adam, optim-projection, norm-gradient, train-mll, train-transfer, activ-relu, pool-max, arch-rnn, arch-lstm, arch-subword, comb-ensemble, pre-fasttext, pre-skipthought, latent-vae, task-textpair, task-lm, task-seq2seq, task-cloze 0 Exploiting Invertible Decoders for Unsupervised Sentence Representation Learning Shuai Tang, Virginia R. de Sa https://www.aclweb.org/anthology/P19-1397.pdf
2019 ACL # optim-sgd, reg-dropout, pool-max, arch-rnn, arch-lstm, arch-cnn, arch-att, arch-coverage, comb-ensemble, pre-skipthought, pre-bert, task-seqlab 0 Tree LSTMs with Convolution Units to Predict Stance and Rumor Veracity in Social Media Conversations Sumeet Kumar, Kathleen Carley https://www.aclweb.org/anthology/P19-1498.pdf
2019 ACL # optim-adam, optim-amsgrad, optim-projection, reg-stopping, train-mtl, train-transfer, arch-lstm, arch-bilstm, arch-att, arch-coverage, arch-transformer, pre-skipthought, pre-elmo, pre-bert, task-textclass, task-textpair, task-spanlab, task-lm, task-seq2seq, task-relation 0 Can You Tell Me How to Get Past Sesame Street? Sentence-Level Pretraining Beyond Language Modeling Alex Wang, Jan Hula, Patrick Xia, Raghavendra Pappagari, R. Thomas McCoy, Roma Patel, Najoung Kim, Ian Tenney, Yinghui Huang, Katherin Yu, Shuning Jin, Berlin Chen, Benjamin Van Durme, Edouard Grave, Ellie Pavlick, Samuel R. Bowman https://www.aclweb.org/anthology/P19-1439.pdf
2019 ACL # optim-adam, optim-projection, arch-rnn, arch-lstm, arch-subword, arch-transformer, pre-fasttext, pre-skipthought, pre-bert, struct-cfg, task-textpair, task-lm, task-tree 5 Correlating Neural and Symbolic Representations of Language Grzegorz Chrupała, Afra Alishahi https://www.aclweb.org/anthology/P19-1283.pdf
2019 ACL # optim-adam, arch-lstm, arch-cnn, arch-att, arch-coverage, pre-glove, pre-skipthought, task-textclass 0 Encoding Social Information with Graph Convolutional Networks for Political Perspective Detection in News Media Chang Li, Dan Goldwasser https://www.aclweb.org/anthology/P19-1247.pdf
2019 ACL # optim-adam, optim-projection, init-glorot, pool-max, arch-lstm, arch-att, arch-coverage, pre-fasttext, pre-skipthought, latent-vae, loss-svd, task-textclass, task-textpair, task-lm, task-seq2seq 1 Learning Compressed Sentence Representations for On-Device Text Processing Dinghan Shen, Pengyu Cheng, Dhanasekar Sundararaman, Xinyuan Zhang, Qian Yang, Meng Tang, Asli Celikyilmaz, Lawrence Carin https://www.aclweb.org/anthology/P19-1011.pdf
2019 ACL # optim-adam, reg-dropout, arch-rnn, arch-lstm, arch-bilstm, arch-att, arch-transformer, search-viterbi, pre-skipthought, pre-bert, struct-crf, latent-vae, task-seqlab, task-lm, task-seq2seq 2 Extracting Symptoms and their Status from Clinical Conversations Nan Du, Kai Chen, Anjuli Kannan, Linh Tran, Yuhui Chen, Izhak Shafran https://www.aclweb.org/anthology/P19-1087.pdf
2019 ACL # optim-adam, reg-dropout, reg-decay, pool-max, arch-rnn, arch-cnn, arch-coverage, search-beam, pre-glove, pre-skipthought, task-textpair, task-lm 0 A Cross-Domain Transferable Neural Coherence Model Peng Xu, Hamidreza Saghir, Jin Sung Kang, Teng Long, Avishek Joey Bose, Yanshuai Cao, Jackie Chi Kit Cheung https://www.aclweb.org/anthology/P19-1067.pdf
2019 ACL # optim-adam, norm-layer, arch-rnn, arch-lstm, arch-gru, arch-cnn, arch-att, arch-transformer, pre-glove, pre-skipthought, pre-bert, task-textpair, task-lm, task-tree 0 Towards Lossless Encoding of Sentences Gabriele Prato, Mathieu Duchesneau, Sarath Chandar, Alain Tapp https://www.aclweb.org/anthology/P19-1153.pdf
2019 ACL # optim-adam, reg-dropout, train-mtl, train-mll, pool-max, arch-lstm, arch-bilstm, arch-gru, arch-cnn, arch-att, arch-coverage, arch-subword, pre-word2vec, pre-fasttext, pre-skipthought, pre-elmo, pre-bert, loss-svd, task-textclass, task-textpair, task-seqlab 2 Robust Representation Learning of Biomedical Names Minh C. Phan, Aixin Sun, Yi Tay https://www.aclweb.org/anthology/P19-1317.pdf
2019 ACL # optim-adagrad, reg-dropout, norm-layer, arch-rnn, arch-lstm, arch-bilstm, arch-gru, arch-cnn, arch-att, arch-gating, arch-memo, arch-transformer, pre-glove, pre-skipthought, task-textpair, task-lm, task-seq2seq 0 You Only Need Attention to Traverse Trees Mahtab Ahmed, Muhammad Rifayat Samee, Robert E. Mercer https://www.aclweb.org/anthology/P19-1030.pdf
2019 ACL # optim-adam, reg-dropout, reg-stopping, train-mtl, train-transfer, arch-lstm, pre-glove, pre-skipthought, pre-elmo, task-textclass, task-textpair, task-lm, task-seq2seq 0 Encouraging Paragraph Embeddings to Remember Sentence Identity Improves Classification Tu Vu, Mohit Iyyer https://www.aclweb.org/anthology/P19-1638.pdf
2019 ACL # optim-sgd, optim-adam, train-mll, train-transfer, pool-max, arch-rnn, arch-lstm, arch-att, arch-selfatt, arch-subword, pre-word2vec, pre-fasttext, pre-glove, pre-skipthought, pre-elmo, pre-bert, loss-svd, task-textclass, task-lm 0 Self-Attentive, Multi-Context One-Class Classification for Unsupervised Anomaly Detection on Text Lukas Ruff, Yury Zemlyanskiy, Robert Vandermeulen, Thomas Schnake, Marius Kloft https://www.aclweb.org/anthology/P19-1398.pdf
2019 ACL # optim-adam, reg-norm, train-mtl, train-mll, arch-lstm, arch-att, arch-selfatt, arch-coverage, arch-transformer, pre-glove, pre-skipthought, pre-elmo, pre-bert, loss-svd, task-textclass, task-textpair, task-lm 0 EigenSent: Spectral sentence embeddings using higher-order Dynamic Mode Decomposition Subhradeep Kayal, George Tsatsaronis https://www.aclweb.org/anthology/P19-1445.pdf
2019 EMNLP # train-mtl, train-transfer, arch-rnn, arch-att, arch-subword, arch-transformer, pre-skipthought, pre-bert, task-textclass, task-lm, task-seq2seq 1 Improving Neural Story Generation by Targeted Common Sense Grounding Huanru Henry Mao, Bodhisattwa Prasad Majumder, Julian McAuley, Garrison Cottrell https://www.aclweb.org/anthology/D19-1615.pdf
2019 EMNLP # pool-max, arch-rnn, arch-lstm, arch-bilstm, arch-gru, arch-att, arch-selfatt, arch-subword, arch-transformer, pre-skipthought, pre-bert, struct-crf, task-textclass, task-textpair, task-seqlab, task-lm, task-seq2seq, task-cloze 0 UER: An Open-Source Toolkit for Pre-training Models Zhe Zhao, Hui Chen, Jinbin Zhang, Xin Zhao, Tao Liu, Wei Lu, Xi Chen, Haotang Deng, Qi Ju, Xiaoyong Du https://www.aclweb.org/anthology/D19-3041.pdf
2019 EMNLP # pre-glove, pre-skipthought, pre-elmo, task-seq2seq, task-tree 0 Split or Merge: Which is Better for Unsupervised RST Parsing? Naoki Kobayashi, Tsutomu Hirao, Kengo Nakamura, Hidetaka Kamigaito, Manabu Okumura, Masaaki Nagata https://www.aclweb.org/anthology/D19-1587.pdf
2019 EMNLP # optim-sgd, optim-adam, optim-projection, train-augment, arch-rnn, arch-cnn, arch-att, comb-ensemble, pre-word2vec, pre-skipthought, pre-bert, task-textclass, task-textpair, task-seq2seq 0 Text Emotion Distribution Learning from Small Sample: A Meta-Learning Approach Zhenjie Zhao, Xiaojuan Ma https://www.aclweb.org/anthology/D19-1408.pdf
2019 EMNLP # optim-adam, optim-projection, arch-cnn, arch-att, arch-coverage, arch-transformer, pre-glove, pre-skipthought, pre-bert, task-spanlab, task-lm, task-relation, task-tree 0 Linking artificial and human neural representations of language Jon Gauthier, Roger Levy https://www.aclweb.org/anthology/D19-1050.pdf
2019 EMNLP # optim-adam, optim-projection, arch-lstm, arch-gru, arch-cnn, pre-word2vec, pre-skipthought, task-textpair, task-lm, task-seq2seq, task-lexicon 0 Incorporating Visual Semantics into Sentence Representations within a Grounded Space Patrick Bordes, Eloi Zablocki, Laure Soulier, Benjamin Piwowarski, Patrick Gallinari https://www.aclweb.org/anthology/D19-1064.pdf
2019 EMNLP # optim-adam, optim-projection, norm-layer, train-mtl, train-mll, arch-subword, pre-word2vec, pre-fasttext, pre-glove, pre-skipthought, pre-elmo, loss-svd, task-textpair, task-seq2seq 0 Parameter-free Sentence Embedding via Orthogonal Basis Ziyi Yang, Chenguang Zhu, Weizhu Chen https://www.aclweb.org/anthology/D19-1059.pdf
2019 EMNLP # optim-adam, optim-projection, reg-dropout, train-mtl, arch-rnn, arch-lstm, arch-bilstm, arch-cnn, arch-att, arch-selfatt, arch-bilinear, search-viterbi, pre-word2vec, pre-skipthought, struct-crf, task-seqlab, task-lm, task-seq2seq, task-relation 0 Learning to Infer Entities, Properties and their Relations from Clinical Conversations Nan Du, Mingqiu Wang, Linh Tran, Gang Lee, Izhak Shafran https://www.aclweb.org/anthology/D19-1503.pdf
2019 EMNLP # optim-adam, train-mtl, train-mll, train-transfer, arch-att, arch-coverage, arch-transformer, pre-skipthought, pre-bert, task-textclass, task-textpair, task-lm, task-seq2seq, meta-init 0 Investigating Meta-Learning Algorithms for Low-Resource Natural Language Understanding Tasks Zi-Yi Dou, Keyi Yu, Antonios Anastasopoulos https://www.aclweb.org/anthology/D19-1112.pdf
2019 EMNLP # optim-adam, train-transfer, arch-lstm, arch-att, arch-selfatt, comb-ensemble, pre-glove, pre-skipthought, pre-elmo, loss-nce, task-textclass, task-lm, task-cloze 0 Multi-Granularity Representations of Dialog Shikib Mehri, Maxine Eskenazi https://www.aclweb.org/anthology/D19-1184.pdf
2019 EMNLP # optim-projection, train-mtl, arch-cnn, arch-att, arch-coverage, arch-subword, arch-transformer, comb-ensemble, pre-word2vec, pre-fasttext, pre-glove, pre-skipthought, pre-elmo, pre-bert, task-textclass, task-textpair, task-spanlab, task-lm, task-cloze 9 Patient Knowledge Distillation for BERT Model Compression Siqi Sun, Yu Cheng, Zhe Gan, Jingjing Liu https://www.aclweb.org/anthology/D19-1441.pdf
2019 EMNLP # optim-sgd, optim-adam, reg-dropout, train-transfer, arch-rnn, arch-lstm, arch-att, pre-glove, pre-skipthought, pre-elmo, pre-use, adv-train, task-textpair, task-seqlab, task-lm, task-seq2seq 0 IMaT: Unsupervised Text Attribute Transfer via Iterative Matching and Translation Zhijing Jin, Di Jin, Jonas Mueller, Nicholas Matthews, Enrico Santus https://www.aclweb.org/anthology/D19-1306.pdf
2019 EMNLP # reg-dropout, pool-max, pre-word2vec, pre-skipthought, pre-elmo, pre-bert, task-textclass, task-textpair 1 Efficient Sentence Embedding using Discrete Cosine Transform Nada Almarwani, Hanan Aldarmaki, Mona Diab https://www.aclweb.org/anthology/D19-1380.pdf
2019 EMNLP # optim-adam, reg-dropout, train-mtl, train-transfer, pool-max, arch-rnn, arch-att, arch-selfatt, arch-coverage, arch-transformer, pre-skipthought, pre-elmo, pre-bert, task-textpair, task-spanlab, task-lm, task-seq2seq, task-cloze 0 Transfer Fine-Tuning: A BERT Case Study Yuki Arase, Jun’ichi Tsujii https://www.aclweb.org/anthology/D19-1542.pdf
2019 EMNLP # train-mll, arch-rnn, arch-lstm, arch-cnn, arch-att, arch-selfatt, arch-copy, pre-skipthought, task-seq2seq 0 A Modular Architecture for Unsupervised Sarcasm Generation Abhijit Mishra, Tarun Tater, Karthik Sankaranarayanan https://www.aclweb.org/anthology/D19-1636.pdf
2019 EMNLP # optim-adam, train-transfer, pool-max, arch-lstm, arch-bilstm, arch-att, arch-transformer, pre-glove, pre-skipthought, pre-bert, pre-use, loss-triplet, task-textpair 5 Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks Nils Reimers, Iryna Gurevych https://www.aclweb.org/anthology/D19-1410.pdf
2019 EMNLP # optim-projection, train-mll, train-transfer, arch-lstm, arch-att, arch-coverage, arch-subword, arch-transformer, comb-ensemble, pre-word2vec, pre-fasttext, pre-glove, pre-skipthought, pre-elmo, pre-bert, pre-use, adv-train, loss-cca, task-textclass, task-textpair, task-lm, task-seq2seq, task-cloze 0 Multi-View Domain Adapted Sentence Embeddings for Low-Resource Unsupervised Duplicate Question Detection Nina Poerner, Hinrich Schütze https://www.aclweb.org/anthology/D19-1173.pdf
2019 EMNLP # optim-adam, train-mll, arch-rnn, arch-lstm, arch-gru, arch-cnn, arch-memo, pre-skipthought, pre-elmo, pre-bert, task-textclass, task-textpair, task-lm, task-seq2seq 0 Evaluation Benchmarks and Learning Criteria for Discourse-Aware Sentence Representations Mingda Chen, Zewei Chu, Kevin Gimpel https://www.aclweb.org/anthology/D19-1060.pdf
2019 NAACL # optim-sgd, optim-adam, train-mtl, train-transfer, arch-lstm, arch-coverage, pre-glove, pre-skipthought, pre-bert, task-textpair, task-lm 3 Mining Discourse Markers for Unsupervised Sentence Representation Learning Damien Sileo, Tim Van De Cruys, Camille Pradel, Philippe Muller https://www.aclweb.org/anthology/N19-1351.pdf
2019 NAACL # optim-sgd, optim-adam, init-glorot, reg-dropout, train-transfer, arch-rnn, arch-lstm, arch-gru, arch-cnn, arch-att, arch-gating, arch-energy, search-viterbi, pre-word2vec, pre-skipthought, struct-hmm, struct-crf, adv-train, task-textclass, task-seq2seq 0 Adaptation of Hierarchical Structured Models for Speech Act Recognition in Asynchronous Conversation Tasnim Mohiuddin, Thanh-Tung Nguyen, Shafiq Joty https://www.aclweb.org/anthology/N19-1134.pdf
2019 NAACL # arch-rnn, arch-lstm, arch-cnn, arch-att, arch-coverage, comb-ensemble, pre-word2vec, pre-paravec, pre-skipthought, loss-svd, task-seq2seq 0 Automatic learner summary assessment for reading comprehension Menglin Xia, Ekaterina Kochmar, Ted Briscoe https://www.aclweb.org/anthology/N19-1261.pdf
2019 NAACL # train-mll, arch-att, pre-fasttext, pre-glove, pre-skipthought, pre-elmo, pre-bert, loss-svd, task-seq2seq 0 Big BiRD: A Large, Fine-Grained, Bigram Relatedness Dataset for Examining Semantic Composition Shima Asaadi, Saif Mohammad, Svetlana Kiritchenko https://www.aclweb.org/anthology/N19-1050.pdf
2019 NAACL # optim-sgd, optim-adam, reg-dropout, reg-patience, arch-rnn, arch-birnn, arch-gru, arch-bigru, arch-cnn, arch-att, arch-selfatt, arch-memo, arch-subword, pre-word2vec, pre-glove, pre-skipthought, pre-elmo, struct-crf, adv-train, latent-vae, task-textclass, task-lm 6 Dialogue Act Classification with Context-Aware Self-Attention Vipul Raheja, Joel Tetreault https://www.aclweb.org/anthology/N19-1373.pdf
2019 NAACL # train-mll, arch-rnn, arch-lstm, arch-bilstm, comb-ensemble, pre-word2vec, pre-glove, pre-paravec, pre-skipthought, pre-elmo, task-seq2seq 0 Learning Outside the Box: Discourse-level Features Improve Metaphor Identification Jesse Mu, Helen Yannakoudakis, Ekaterina Shutova https://www.aclweb.org/anthology/N19-1059.pdf
2019 NAACL # arch-att, arch-coverage, arch-subword, pre-fasttext, pre-glove, pre-skipthought, pre-use, task-textpair 0 Evaluating Composition Models for Verb Phrase Elliptical Sentence Embeddings Gijs Wijnholds, Mehrnoosh Sadrzadeh https://www.aclweb.org/anthology/N19-1023.pdf
2019 NAACL # optim-adam, reg-dropout, reg-decay, train-transfer, train-augment, arch-lstm, arch-bilstm, arch-att, arch-selfatt, arch-coverage, arch-transformer, comb-ensemble, pre-glove, pre-skipthought, pre-elmo, pre-bert, struct-crf, task-textclass, task-textpair, task-seqlab, task-spanlab, task-lm, task-seq2seq, task-cloze 3209 BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova https://www.aclweb.org/anthology/N19-1423.pdf