Year Conf. Topics Cited Paper Authors URL
2019 ACL # optim-adam, optim-projection, train-mll, pool-max, arch-lstm, arch-coverage, arch-subword, pre-fasttext, loss-cca, loss-svd, task-textpair, task-lm, task-seq2seq, task-lexicon 29 How to (Properly) Evaluate Cross-Lingual Word Embeddings: On Strong Baselines, Comparative Analyses, and Some Misconceptions Goran Glavaš, Robert Litschko, Sebastian Ruder, Ivan Vulić https://www.aclweb.org/anthology/P19-1070.pdf
2019 ACL # optim-projection, arch-coverage, pre-word2vec, pre-glove, pre-bert, latent-vae, latent-topic, loss-cca, loss-svd, task-textclass 0 Word2Sense: Sparse Interpretable Word Embeddings Abhishek Panigrahi, Harsha Vardhan Simhadri, Chiranjib Bhattacharyya https://www.aclweb.org/anthology/P19-1570.pdf
2019 ACL # arch-rnn, arch-lstm, arch-att, pre-glove, loss-cca, task-seq2seq 0 Exploring Author Context for Detecting Intended vs Perceived Sarcasm Silviu Oprea, Walid Magdy https://www.aclweb.org/anthology/P19-1275.pdf
2019 ACL # optim-adam, reg-dropout, norm-batch, norm-gradient, train-transfer, pool-mean, arch-lstm, arch-bilstm, arch-att, arch-memo, comb-ensemble, pre-elmo, loss-cca, task-lm, task-seq2seq, task-alignment 1 Multimodal and Multi-view Models for Emotion Recognition Gustavo Aguilar, Viktor Rozgic, Weiran Wang, Chao Wang https://www.aclweb.org/anthology/P19-1095.pdf
2019 ACL # init-glorot, reg-dropout, train-transfer, arch-att, pre-elmo, loss-cca, task-seq2seq 2 Fine-Grained Temporal Relation Extraction Siddharth Vashishtha, Benjamin Van Durme, Aaron Steven White https://www.aclweb.org/anthology/P19-1280.pdf
2019 EMNLP # train-mll, arch-transformer, comb-ensemble, pre-fasttext, pre-bert, adv-train, loss-cca, loss-svd, task-seq2seq, task-relation, task-lexicon 1 Weakly-Supervised Concept-based Adversarial Learning for Cross-lingual Word Embeddings Haozhou Wang, James Henderson, Paola Merlo https://www.aclweb.org/anthology/D19-1450.pdf
2019 EMNLP # optim-adam, optim-projection, train-mll, train-transfer, pool-mean, arch-att, arch-transformer, pre-fasttext, pre-bert, loss-cca, task-spanlab, task-lm, task-seq2seq, task-lexicon 0 Zero-shot Reading Comprehension by Cross-lingual Transfer Learning with Multi-lingual Language Representation Model Tsung-Yuan Hsu, Chi-Liang Liu, Hung-yi Lee https://www.aclweb.org/anthology/D19-1607.pdf
2019 EMNLP # optim-adam, init-glorot, reg-dropout, arch-rnn, arch-lstm, arch-gru, arch-att, arch-selfatt, arch-memo, arch-bilinear, pre-glove, struct-crf, loss-cca, task-seqlab, task-condlm, task-seq2seq, task-relation 0 Partners in Crime: Multi-view Sequential Inference for Movie Understanding Nikos Papasarantopoulos, Lea Frermann, Mirella Lapata, Shay B. Cohen https://www.aclweb.org/anthology/D19-1212.pdf
2019 EMNLP # optim-projection, arch-subword, pre-fasttext, loss-cca, loss-svd 0 Quantifying the Semantic Core of Gender Systems Adina Williams, Damian Blasi, Lawrence Wolf-Sonkin, Hanna Wallach, Ryan Cotterell https://www.aclweb.org/anthology/D19-1577.pdf
2019 EMNLP # optim-adam, train-mll, arch-rnn, arch-lstm, arch-bilstm, pre-glove, pre-bert, loss-cca, task-seq2seq 0 Dialog Intent Induction with Deep Multi-View Clustering Hugh Perkins, Yi Yang https://www.aclweb.org/anthology/D19-1413.pdf
2019 EMNLP # optim-adam, optim-projection, reg-dropout, reg-worddropout, arch-rnn, arch-lstm, arch-cnn, arch-att, arch-selfatt, arch-subword, arch-transformer, pre-bert, loss-cca, task-lm, task-seq2seq, task-cloze 1 The Bottom-up Evolution of Representations in the Transformer: A Study with Machine Translation and Language Modeling Objectives Elena Voita, Rico Sennrich, Ivan Titov https://www.aclweb.org/anthology/D19-1448.pdf
2019 EMNLP # optim-adam, reg-dropout, train-mtl, arch-lstm, arch-gru, arch-bigru, arch-cnn, arch-att, arch-selfatt, loss-cca, task-seq2seq 0 Context-aware Interactive Attention for Multi-modal Sentiment and Emotion Analysis Dushyant Singh Chauhan, Md Shad Akhtar, Asif Ekbal, Pushpak Bhattacharyya https://www.aclweb.org/anthology/D19-1566.pdf
2019 EMNLP # optim-adam, train-mll, train-transfer, arch-att, arch-subword, arch-transformer, pre-bert, loss-cca, loss-svd, task-lm, task-seq2seq 4 Investigating Multilingual NMT Representations at Scale Sneha Kudugunta, Ankur Bapna, Isaac Caswell, Orhan Firat https://www.aclweb.org/anthology/D19-1167.pdf
2019 EMNLP # optim-projection, train-mll, train-transfer, arch-lstm, arch-att, arch-coverage, arch-subword, arch-transformer, comb-ensemble, pre-word2vec, pre-fasttext, pre-glove, pre-skipthought, pre-elmo, pre-bert, pre-use, adv-train, loss-cca, task-textclass, task-textpair, task-lm, task-seq2seq, task-cloze 0 Multi-View Domain Adapted Sentence Embeddings for Low-Resource Unsupervised Duplicate Question Detection Nina Poerner, Hinrich Schütze https://www.aclweb.org/anthology/D19-1173.pdf
2019 NAACL # optim-sgd, optim-adam, optim-projection, reg-dropout, arch-rnn, arch-lstm, pre-elmo, loss-cca, loss-svd, task-seqlab, task-lm, task-seq2seq 13 Understanding Learning Dynamics Of Language Models with SVCCA Naomi Saphra, Adam Lopez https://www.aclweb.org/anthology/N19-1329.pdf
2019 NAACL # pool-mean, arch-lstm, arch-cnn, arch-att, arch-selfatt, comb-ensemble, pre-fasttext, pre-glove, struct-crf, loss-cca, loss-margin, task-textclass 1 Ranking-Based Autoencoder for Extreme Multi-label Classification Bingyu Wang, Li Chen, Wei Sun, Kechen Qin, Kefeng Li, Hui Zhou https://www.aclweb.org/anthology/N19-1289.pdf
2019 NAACL # optim-projection, reg-patience, train-mll, arch-lstm, arch-att, pre-fasttext, adv-train, loss-cca, loss-svd, task-seq2seq, task-relation, task-lexicon, task-alignment 3 Learning Unsupervised Multilingual Word Embeddings with Incremental Multilingual Hubs Geert Heyman, Bregt Verreet, Ivan Vulić, Marie-Francine Moens https://www.aclweb.org/anthology/N19-1188.pdf