Year Conf. # Topics Cited Paper Authors URL
2019 ACL # optim-adam, pool-max, arch-lstm, arch-gru, arch-att, arch-bilinear, pre-glove, pre-paravec, task-condlm, task-seq2seq 1 Improving Visual Question Answering by Referring to Generated Paragraph Captions Hyounghun Kim, Mohit Bansal https://www.aclweb.org/anthology/P19-1351.pdf
2019 ACL # optim-adam, reg-dropout, train-mtl, arch-lstm, arch-att, arch-selfatt, arch-transformer, pre-paravec, pre-bert, adv-examp, task-spanlab 2 Retrieve, Read, Rerank: Towards End-to-End Multi-Document Reading Comprehension Minghao Hu, Yuxing Peng, Zhen Huang, Dongsheng Li https://www.aclweb.org/anthology/P19-1221.pdf
2019 ACL # optim-adam, reg-decay, arch-lstm, arch-att, arch-coverage, comb-ensemble, pre-paravec, latent-topic, task-relation 0 Modeling Financial Analysts’ Decision Making via the Pragmatics and Semantics of Earnings Calls Katherine Keith, Amanda Stent https://www.aclweb.org/anthology/P19-1047.pdf
2019 ACL # optim-adam, reg-dropout, train-mtl, arch-lstm, arch-bilstm, arch-cnn, arch-att, pre-glove, pre-paravec, pre-bert, task-relation 1 Multi-Task Learning for Coherence Modeling Youmna Farag, Helen Yannakoudakis https://www.aclweb.org/anthology/P19-1060.pdf
2019 ACL # optim-adam, optim-adagrad, reg-dropout, reg-labelsmooth, norm-layer, pool-max, arch-rnn, arch-lstm, arch-att, arch-selfatt, arch-subword, arch-transformer, search-beam, pre-paravec, task-seq2seq 9 Hierarchical Transformers for Multi-Document Summarization Yang Liu, Mirella Lapata https://www.aclweb.org/anthology/P19-1500.pdf
2019 ACL # optim-adam, optim-projection, init-glorot, reg-dropout, train-mll, train-transfer, arch-rnn, arch-lstm, arch-bilstm, arch-att, arch-subword, pre-fasttext, pre-paravec, pre-bert, struct-crf, adv-train, task-textclass, task-seqlab, task-seq2seq, task-tree, task-lexicon, task-alignment 3 Multi-Source Cross-Lingual Model Transfer: Learning What to Share Xilun Chen, Ahmed Hassan Awadallah, Hany Hassan, Wei Wang, Claire Cardie https://www.aclweb.org/anthology/P19-1299.pdf
2019 EMNLP # optim-adam, optim-adagrad, norm-layer, pool-mean, arch-rnn, arch-lstm, arch-bilstm, arch-cnn, arch-att, arch-selfatt, arch-transformer, search-beam, pre-glove, pre-paravec, adv-train, latent-topic, task-textpair, task-extractive, task-spanlab, task-lm, task-condlm, task-seq2seq 0 Topic-Guided Coherence Modeling for Sentence Ordering by Preserving Global and Local Information Byungkook Oh, Seungmin Seo, Cheolheon Shin, Eunju Jo, Kyong-Ho Lee https://www.aclweb.org/anthology/D19-1232.pdf
2019 NAACL # arch-rnn, arch-lstm, arch-cnn, arch-att, arch-coverage, comb-ensemble, pre-word2vec, pre-paravec, pre-skipthought, loss-svd, task-seq2seq 0 Automatic learner summary assessment for reading comprehension Menglin Xia, Ekaterina Kochmar, Ted Briscoe https://www.aclweb.org/anthology/N19-1261.pdf
2019 NAACL # train-mll, arch-rnn, arch-lstm, arch-bilstm, comb-ensemble, pre-word2vec, pre-glove, pre-paravec, pre-skipthought, pre-elmo, task-seq2seq 0 Learning Outside the Box: Discourse-level Features Improve Metaphor Identification Jesse Mu, Helen Yannakoudakis, Ekaterina Shutova https://www.aclweb.org/anthology/N19-1059.pdf