| Year | Conf. | Topic | Cited | Paper | Authors | Url |
| --- | --- | --- | --- | --- | --- | --- |
| 2019 | ACL | # arch-att, arch-copy, search-viterbi, pre-glove, task-extractive, task-seq2seq, task-relation | 2 | TalkSumm: A Dataset and Scalable Annotation Method for Scientific Paper Summarization Based on Conference Talks | Guy Lev, Michal Shmueli-Scheuer, Jonathan Herzig, Achiya Jerbi, David Konopnicki | https://www.aclweb.org/anthology/P19-1204.pdf |
| 2019 | ACL | # optim-adam, train-mll, arch-cnn, arch-att, arch-transformer, pre-skipthought, pre-bert, task-extractive, task-lm, task-seq2seq, task-cloze | 0 | Sentence Centrality Revisited for Unsupervised Summarization | Hao Zheng, Mirella Lapata | https://www.aclweb.org/anthology/P19-1628.pdf |
| 2019 | ACL | # optim-adam, train-mtl, train-transfer, arch-lstm, arch-att, arch-selfatt, arch-transformer, pre-glove, pre-bert, task-extractive, task-lm, task-context | 0 | Self-Supervised Learning for Contextualized Extractive Summarization | Hong Wang, Xin Wang, Wenhan Xiong, Mo Yu, Xiaoxiao Guo, Shiyu Chang, William Yang Wang | https://www.aclweb.org/anthology/P19-1214.pdf |
| 2019 | ACL | # train-mll, pool-max, arch-lstm, arch-att, arch-memo, nondif-reinforce, adv-train, latent-vae, task-extractive, task-lm, task-seq2seq, task-cloze, task-context | 1 | Self-Supervised Dialogue Learning | Jiawei Wu, Xin Wang, William Yang Wang | https://www.aclweb.org/anthology/P19-1375.pdf |
| 2019 | ACL | # optim-sgd, reg-dropout, norm-gradient, arch-rnn, arch-lstm, arch-att, search-beam, pre-elmo, latent-vae, task-extractive, task-lm, task-seq2seq | 1 | Simple Unsupervised Summarization by Contextual Matching | Jiawei Zhou, Alexander Rush | https://www.aclweb.org/anthology/P19-1503.pdf |
| 2019 | ACL | # optim-projection, arch-rnn, arch-gru, arch-att, arch-copy, arch-coverage, pre-glove, task-extractive, task-seq2seq | 1 | Keep Meeting Summaries on Topic: Abstractive Multi-Modal Meeting Summarization | Manling Li, Lingyu Zhang, Heng Ji, Richard J. Radke | https://www.aclweb.org/anthology/P19-1210.pdf |
| 2019 | ACL | # arch-rnn, arch-lstm, arch-bilstm, arch-att, arch-selfatt, arch-coverage, arch-transformer, task-textclass, task-extractive, task-lm, task-seq2seq | 4 | Multi-News: A Large-Scale Multi-Document Summarization Dataset and Abstractive Hierarchical Model | Alexander Fabbri, Irene Li, Tianwei She, Suyi Li, Dragomir Radev | https://www.aclweb.org/anthology/P19-1102.pdf |
| 2019 | ACL | # train-mll, pool-max, arch-lstm, arch-att, arch-copy, arch-coverage, comb-ensemble, task-textpair, task-extractive, task-seq2seq | 3 | Improving the Similarity Measure of Determinantal Point Processes for Extractive Multi-Document Summarization | Sangwoo Cho, Logan Lebanoff, Hassan Foroosh, Fei Liu | https://www.aclweb.org/anthology/P19-1098.pdf |
| 2019 | ACL | # optim-adam, arch-rnn, arch-lstm, arch-att, arch-selfatt, search-beam, pre-word2vec, pre-fasttext, pre-glove, pre-elmo, pre-bert, nondif-reinforce, task-extractive, task-lm, task-condlm, task-seq2seq, task-lexicon | 2 | Sentence Mover’s Similarity: Automatic Evaluation for Multi-Sentence Texts | Elizabeth Clark, Asli Celikyilmaz, Noah A. Smith | https://www.aclweb.org/anthology/P19-1264.pdf |
| 2019 | ACL | # optim-adam, train-mtl, arch-rnn, arch-birnn, arch-lstm, arch-att, arch-coverage, arch-transformer, comb-ensemble, search-beam, pre-bert, task-textpair, task-extractive, task-spanlab, task-lm, task-seq2seq | 4 | Answering while Summarizing: Multi-task Learning for Multi-hop QA with Evidence Extraction | Kosuke Nishida, Kyosuke Nishida, Masaaki Nagata, Atsushi Otsuka, Itsumi Saito, Hisako Asano, Junji Tomita | https://www.aclweb.org/anthology/P19-1225.pdf |
| 2019 | ACL | # optim-adam, init-glorot, reg-dropout, arch-rnn, arch-lstm, arch-cnn, arch-att, arch-selfatt, arch-memo, arch-copy, arch-coverage, arch-subword, arch-transformer, pre-word2vec, pre-elmo, pre-bert, struct-hmm, latent-vae, task-extractive, task-lm, task-seq2seq, task-cloze | 0 | HIBERT: Document Level Pre-training of Hierarchical Bidirectional Transformers for Document Summarization | Xingxing Zhang, Furu Wei, Ming Zhou | https://www.aclweb.org/anthology/P19-1499.pdf |
| 2019 | ACL | # optim-adam, reg-dropout, reg-labelsmooth, norm-layer, arch-rnn, arch-lstm, arch-bilstm, arch-gru, arch-att, arch-memo, arch-copy, arch-coverage, arch-subword, arch-transformer, task-extractive, task-seq2seq, task-tree | 1 | Keeping Notes: Conditional Natural Language Generation with a Scratchpad Encoder | Ryan Benmalek, Madian Khabsa, Suma Desu, Claire Cardie, Michele Banko | https://www.aclweb.org/anthology/P19-1407.pdf |
| 2019 | ACL | # optim-adam, reg-dropout, reg-patience, arch-rnn, arch-lstm, arch-gru, arch-att, arch-selfatt, pre-word2vec, task-textclass, task-extractive, task-lm, task-seq2seq | 11 | Is Attention Interpretable? | Sofia Serrano, Noah A. Smith | https://www.aclweb.org/anthology/P19-1282.pdf |
| 2019 | ACL | # optim-adam, norm-gradient, arch-rnn, arch-lstm, arch-cnn, arch-att, arch-copy, arch-bilinear, arch-coverage, pre-word2vec, task-extractive, task-seq2seq | 0 | This Email Could Save Your Life: Introducing the Task of Email Subject Line Generation | Rui Zhang, Joel Tetreault | https://www.aclweb.org/anthology/P19-1043.pdf |
| 2019 | ACL | # train-mtl, pool-mean, arch-rnn, arch-lstm, arch-att, arch-transformer, pre-word2vec, pre-glove, pre-bert, task-textclass, task-textpair, task-extractive, task-seq2seq | 3 | Searching for Effective Neural Extractive Summarization: What Works and What’s Next | Ming Zhong, Pengfei Liu, Danqing Wang, Xipeng Qiu, Xuanjing Huang | https://www.aclweb.org/anthology/P19-1100.pdf |
| 2019 | EMNLP | # optim-adam, pool-max, arch-rnn, arch-lstm, arch-bilstm, arch-cnn, search-beam, pre-elmo, task-extractive | 5 | Neural Extractive Text Summarization with Syntactic Compression | Jiacheng Xu, Greg Durrett | https://www.aclweb.org/anthology/D19-1324.pdf |
| 2019 | EMNLP | # train-transfer, arch-rnn, arch-att, comb-ensemble, pre-bert, task-extractive, task-lm, task-cloze | 0 | AMPERSAND: Argument Mining for PERSuAsive oNline Discussions | Tuhin Chakrabarty, Christopher Hidey, Smaranda Muresan, Kathy McKeown, Alyssa Hwang | https://www.aclweb.org/anthology/D19-1291.pdf |
| 2019 | EMNLP | # optim-adam, optim-adagrad, reg-dropout, arch-rnn, arch-birnn, arch-lstm, arch-gru, arch-cnn, arch-att, arch-bilinear, arch-coverage, search-beam, task-extractive, task-seq2seq | 0 | How to Write Summaries with Patterns? Learning towards Abstractive Summarization through Prototype Editing | Shen Gao, Xiuying Chen, Piji Li, Zhangming Chan, Dongyan Zhao, Rui Yan | https://www.aclweb.org/anthology/D19-1388.pdf |
| 2019 | EMNLP | # optim-adam, reg-dropout, arch-lstm, arch-att, arch-selfatt, arch-transformer, comb-ensemble, pre-bert, struct-crf, task-textclass, task-extractive, task-lm, task-seq2seq | 0 | Pretrained Language Models for Sequential Sentence Classification | Arman Cohan, Iz Beltagy, Daniel King, Bhavana Dalvi, Dan Weld | https://www.aclweb.org/anthology/D19-1383.pdf |
| 2019 | EMNLP | # optim-adagrad, arch-lstm, arch-bilstm, arch-att, arch-coverage, task-extractive, task-condlm, task-seq2seq | 0 | Attention Optimization for Abstractive Document Summarization | Min Gui, Junfeng Tian, Rui Wang, Zhenglu Yang | https://www.aclweb.org/anthology/D19-1117.pdf |
| 2019 | EMNLP | # optim-adam, optim-adagrad, norm-layer, pool-mean, arch-rnn, arch-lstm, arch-bilstm, arch-cnn, arch-att, arch-selfatt, arch-transformer, search-beam, pre-glove, pre-paravec, adv-train, latent-topic, task-textpair, task-extractive, task-spanlab, task-lm, task-condlm, task-seq2seq | 0 | Topic-Guided Coherence Modeling for Sentence Ordering by Preserving Global and Local Information | Byungkook Oh, Seungmin Seo, Cheolheon Shin, Eunju Jo, Kyong-Ho Lee | https://www.aclweb.org/anthology/D19-1232.pdf |
| 2019 | EMNLP | # optim-adam, arch-lstm, arch-bilstm, arch-att, arch-coverage, search-greedy, task-extractive, task-condlm, task-seq2seq | 1 | An Entity-Driven Framework for Abstractive Summarization | Eva Sharma, Luyang Huang, Zhe Hu, Lu Wang | https://www.aclweb.org/anthology/D19-1323.pdf |
| 2019 | EMNLP | # optim-adam, reg-dropout, pool-mean, arch-rnn, arch-lstm, arch-att, arch-coverage, arch-subword, nondif-reinforce, latent-vae, task-extractive, task-condlm, task-seq2seq | 0 | Select and Attend: Towards Controllable Content Selection in Text Generation | Xiaoyu Shen, Jun Suzuki, Kentaro Inui, Hui Su, Dietrich Klakow, Satoshi Sekine | https://www.aclweb.org/anthology/D19-1054.pdf |
| 2019 | EMNLP | # arch-rnn, arch-att, search-beam, latent-vae, task-extractive, task-lm, task-seq2seq | 1 | BottleSum: Unsupervised and Self-supervised Sentence Summarization using the Information Bottleneck Principle | Peter West, Ari Holtzman, Jan Buys, Yejin Choi | https://www.aclweb.org/anthology/D19-1389.pdf |
| 2019 | EMNLP | # optim-adam, norm-layer, arch-lstm, arch-bilstm, arch-att, arch-selfatt, arch-transformer, pre-word2vec, pre-elmo, pre-bert, task-textclass, task-textpair, task-extractive, task-spanlab, task-lm, task-seq2seq, task-cloze | 0 | Fine-tune BERT with Sparse Self-Attention Mechanism | Baiyun Cui, Yingming Li, Ming Chen, Zhongfei Zhang | https://www.aclweb.org/anthology/D19-1361.pdf |
| 2019 | EMNLP | # optim-adagrad, arch-rnn, arch-lstm, arch-att, arch-copy, pre-bert, nondif-reinforce, task-extractive, task-spanlab, task-lm, task-condlm, task-seq2seq | 1 | Answers Unite! Unsupervised Metrics for Reinforced Summarization Models | Thomas Scialom, Sylvain Lamprier, Benjamin Piwowarski, Jacopo Staiano | https://www.aclweb.org/anthology/D19-1320.pdf |
| 2019 | EMNLP | # optim-adam, reg-decay, norm-gradient, train-transfer, arch-rnn, arch-lstm, arch-bilstm, arch-cnn, arch-att, arch-coverage, nondif-reinforce, task-extractive, task-spanlab, task-condlm, task-seq2seq | 0 | Reading Like HER: Human Reading Inspired Extractive Summarization | Ling Luo, Xiang Ao, Yan Song, Feiyang Pan, Min Yang, Qing He | https://www.aclweb.org/anthology/D19-1300.pdf |
| 2019 | EMNLP | # reg-dropout, arch-rnn, arch-gru, arch-att, arch-selfatt, arch-coverage, search-beam, task-extractive, task-condlm, task-seq2seq | 0 | Set to Ordered Text: Generating Discharge Instructions from Medical Billing Codes | Litton J Kurisinkel, Nancy Chen | https://www.aclweb.org/anthology/D19-1638.pdf |
| 2019 | EMNLP | # optim-adam, pool-max, arch-rnn, arch-lstm, arch-att, arch-copy, arch-coverage, nondif-minrisk, nondif-reinforce, task-extractive, task-lm, task-seq2seq | 0 | Clickbait? Sensational Headline Generation with Auto-tuned Reinforcement Learning | Peng Xu, Chien-Sheng Wu, Andrea Madotto, Pascale Fung | https://www.aclweb.org/anthology/D19-1303.pdf |
| 2019 | EMNLP | # optim-projection, arch-rnn, arch-att, arch-coverage, comb-ensemble, pre-bert, task-extractive, task-seq2seq | 0 | Earlier Isn’t Always Better: Sub-aspect Analysis on Corpus and System Biases in Summarization | Taehee Jung, Dongyeop Kang, Lucas Mentch, Eduard Hovy | https://www.aclweb.org/anthology/D19-1327.pdf |
| 2019 | EMNLP | # optim-sgd, optim-adam, train-mll, train-active, arch-lstm, arch-bilstm, pre-word2vec, pre-elmo, pre-bert, task-textclass, task-textpair, task-extractive, task-condlm, task-seq2seq | 2 | MoverScore: Text Generation Evaluating with Contextualized Embeddings and Earth Mover Distance | Wei Zhao, Maxime Peyrard, Fei Liu, Yang Gao, Christian M. Meyer, Steffen Eger | https://www.aclweb.org/anthology/D19-1053.pdf |
| 2019 | EMNLP | # optim-adam, train-mtl, arch-rnn, arch-gru, arch-att, latent-topic, task-extractive, task-seq2seq | 0 | Subtopic-driven Multi-Document Summarization | Xin Zheng, Aixin Sun, Jing Li, Karthik Muthuswamy | https://www.aclweb.org/anthology/D19-1311.pdf |
| 2019 | EMNLP | # pre-glove, pre-bert, task-extractive, task-lm, task-condlm, task-seq2seq, task-tree | 1 | Counterfactual Story Reasoning and Generation | Lianhui Qin, Antoine Bosselut, Ari Holtzman, Chandra Bhagavatula, Elizabeth Clark, Yejin Choi | https://www.aclweb.org/anthology/D19-1509.pdf |
| 2019 | EMNLP | # optim-adam, arch-rnn, arch-cnn, arch-att, arch-memo, arch-copy, task-extractive, task-seq2seq | 4 | Neural Text Summarization: A Critical Evaluation | Wojciech Kryscinski, Nitish Shirish Keskar, Bryan McCann, Caiming Xiong, Richard Socher | https://www.aclweb.org/anthology/D19-1051.pdf |
| 2019 | EMNLP | # train-mtl, arch-rnn, arch-lstm, arch-gru, arch-cnn, pre-bert, task-extractive | 1 | Countering the Effects of Lead Bias in News Summarization via Multi-Stage Training and Auxiliary Losses | Matt Grenander, Yue Dong, Jackie Chi Kit Cheung, Annie Louis | https://www.aclweb.org/anthology/D19-1620.pdf |
| 2019 | EMNLP | # optim-adam, reg-dropout, reg-labelsmooth, arch-rnn, arch-cnn, arch-att, arch-selfatt, arch-copy, arch-coverage, arch-subword, arch-transformer, search-beam, pre-elmo, pre-bert, latent-vae, task-seqlab, task-extractive, task-lm, task-seq2seq, task-cloze | 6 | Text Summarization with Pretrained Encoders | Yang Liu, Mirella Lapata | https://www.aclweb.org/anthology/D19-1387.pdf |
| 2019 | EMNLP | # optim-adam, reg-dropout, reg-stopping, arch-rnn, arch-lstm, arch-gru, arch-cnn, arch-att, pre-glove, pre-bert, struct-crf, task-extractive, task-seq2seq, task-relation | 0 | Extractive Summarization of Long Documents by Combining Global and Local Context | Wen Xiao, Giuseppe Carenini | https://www.aclweb.org/anthology/D19-1298.pdf |
| 2019 | EMNLP | # optim-projection, arch-lstm, arch-gru, arch-bigru, arch-att, arch-selfatt, task-textclass, task-extractive, task-lm, task-condlm, task-seq2seq | 0 | From the Token to the Review: A Hierarchical Multimodal approach to Opinion Mining | Alexandre Garcia, Pierre Colombo, Florence d’Alché-Buc, Slim Essid, Chloé Clavel | https://www.aclweb.org/anthology/D19-1556.pdf |
| 2019 | EMNLP | # arch-rnn, arch-att, pre-bert, task-extractive, task-seq2seq | 1 | Deep Reinforcement Learning with Distributional Semantic Rewards for Abstractive Summarization | Siyao Li, Deren Lei, Pengda Qin, William Yang Wang | https://www.aclweb.org/anthology/D19-1623.pdf |
| 2019 | EMNLP | # optim-adam, arch-rnn, arch-lstm, arch-att, arch-copy, arch-coverage, search-beam, pre-glove, loss-nce, task-extractive, task-lm | 0 | Summary Cloze: A New Task for Content Selection in Topic-Focused Summarization | Daniel Deutsch, Dan Roth | https://www.aclweb.org/anthology/D19-1386.pdf |
| 2019 | NAACL | # optim-adam, train-mtl, arch-rnn, arch-lstm, arch-cnn, arch-att, latent-vae, task-seqlab, task-extractive, task-lm, task-seq2seq | 2 | Jointly Extracting and Compressing Documents with Summary State Representations | Afonso Mendes, Shashi Narayan, Sebastião Miranda, Zita Marinho, André F. T. Martins, Shay B. Cohen | https://www.aclweb.org/anthology/N19-1397.pdf |
| 2019 | NAACL | # optim-adam, init-glorot, norm-layer, arch-rnn, arch-lstm, arch-gru, arch-att, arch-selfatt, arch-transformer, task-extractive, task-seq2seq, task-relation | 4 | Single Document Summarization as Tree Induction | Yang Liu, Ivan Titov, Mirella Lapata | https://www.aclweb.org/anthology/N19-1173.pdf |
| 2019 | NAACL | # optim-adam, train-mtl, pool-max, arch-rnn, arch-lstm, arch-bilstm, arch-cnn, arch-att, arch-copy, arch-bilinear, arch-coverage, task-textpair, task-extractive, task-lm, task-seq2seq, task-relation | 6 | Guiding Extractive Summarization with Question-Answering Rewards | Kristjan Arumae, Fei Liu | https://www.aclweb.org/anthology/N19-1264.pdf |
| 2019 | NAACL | # optim-adam, optim-adagrad, arch-lstm, arch-att, arch-coverage, search-beam, task-extractive, task-seq2seq, task-tree | 4 | Question Answering as an Automatic Evaluation Metric for News Article Summarization | Matan Eyal, Tal Baumel, Michael Elhadad | https://www.aclweb.org/anthology/N19-1395.pdf |
| 2019 | NAACL | # optim-adam, init-glorot, reg-labelsmooth, norm-layer, pool-max, arch-rnn, arch-cnn, arch-att, arch-selfatt, arch-memo, pre-fasttext, latent-vae, task-extractive, task-seq2seq | 9 | Abstractive Summarization of Reddit Posts with Multi-level Memory Networks | Byeongchang Kim, Hyunwoo Kim, Gunhee Kim | https://www.aclweb.org/anthology/N19-1260.pdf |
| 2019 | NAACL | # optim-adam, arch-rnn, arch-att, latent-topic, loss-nce, task-extractive, task-seq2seq | 0 | News Article Teaser Tweets and How to Generate Them | Sanjeev Kumar Karn, Mark Buckley, Ulli Waltinger, Hinrich Schütze | https://www.aclweb.org/anthology/N19-1398.pdf |
| 2019 | NAACL | # optim-adam, init-glorot, reg-dropout, reg-stopping, reg-patience, reg-norm, arch-rnn, arch-lstm, arch-gru, arch-att, arch-copy, comb-ensemble, pre-glove, struct-crf, latent-topic, task-extractive | 0 | Glocal: Incorporating Global Information in Local Convolution for Keyphrase Extraction | Animesh Prasad, Min-Yen Kan | https://www.aclweb.org/anthology/N19-1182.pdf |