| Year | Conf. | Topics | Cited | Paper | Authors | URL |
| --- | --- | --- | --- | --- | --- | --- |
| 2019 | ACL | optim-adam, train-mtl, train-transfer, arch-lstm, arch-att, arch-selfatt, arch-transformer, pre-glove, pre-bert, task-extractive, task-lm, task-context | 0 | Self-Supervised Learning for Contextualized Extractive Summarization | Hong Wang, Xin Wang, Wenhan Xiong, Mo Yu, Xiaoxiao Guo, Shiyu Chang, William Yang Wang | https://www.aclweb.org/anthology/P19-1214.pdf |
| 2019 | ACL | train-mll, pool-max, arch-lstm, arch-att, arch-memo, nondif-reinforce, adv-train, latent-vae, task-extractive, task-lm, task-seq2seq, task-cloze, task-context | 1 | Self-Supervised Dialogue Learning | Jiawei Wu, Xin Wang, William Yang Wang | https://www.aclweb.org/anthology/P19-1375.pdf |
| 2019 | ACL | optim-sgd, optim-adam, reg-dropout, pool-max, arch-rnn, arch-att, arch-transformer, pre-word2vec, task-context | 0 | DeepSentiPeer: Harnessing Sentiment in Review Texts to Recommend Peer Review Decisions | Tirthankar Ghosal, Rajeev Verma, Asif Ekbal, Pushpak Bhattacharyya | https://www.aclweb.org/anthology/P19-1106.pdf |
| 2019 | EMNLP | optim-sgd, optim-adam, optim-projection, reg-dropout, norm-layer, train-mll, arch-lstm, arch-att, arch-selfatt, arch-coverage, arch-subword, pre-elmo, pre-bert, struct-crf, loss-triplet, task-seqlab, task-lm, task-seq2seq, task-context | 14 | Cloze-driven Pretraining of Self-attention Networks | Alexei Baevski, Sergey Edunov, Yinhan Liu, Luke Zettlemoyer, Michael Auli | https://www.aclweb.org/anthology/D19-1539.pdf |