| Year | Conf. | Concept | Cited | Paper | Authors | URL |
| --- | --- | --- | --- | --- | --- | --- |
| 2019 | ICML | pre-unilm | 35 | Unified Language Model Pre-training for Natural Language Understanding and Generation | Li Dong, Nan Yang, Wenhui Wang, Furu Wei, Xiaodong Liu, Yu Wang, Jianfeng Gao, Ming Zhou, Hsiao-Wuen Hon | https://arxiv.org/pdf/1905.03197.pdf |
| 2019 | Arxiv | pre-gpt | 6 | Sample Efficient Text Summarization Using a Single Pre-Trained Transformer | Urvashi Khandelwal, Kevin Clark, Dan Jurafsky, Lukasz Kaiser | https://arxiv.org/pdf/1905.08836.pdf |
| 2019 | Arxiv | pre-bart | 5 | BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension | Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, Luke Zettlemoyer | https://arxiv.org/pdf/1910.13461.pdf |
| 2019 | ICML | pre-mass | 56 | MASS: Masked Sequence to Sequence Pre-training for Language Generation | Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, Tie-Yan Liu | https://arxiv.org/pdf/1905.02450.pdf |
| 2019 | Arxiv | pre-pegasus | 2 | PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization | Jingqing Zhang, Yao Zhao, Mohammad Saleh, Peter J. Liu | https://arxiv.org/pdf/1912.08777.pdf |
| 2020 | AAAI | gen-ext | 1 | Controlling the Amount of Verbatim Copying in Abstractive Summarization | Kaiqiang Song, Bingqing Wang, Zhe Feng, Liu Ren, Fei Liu | https://arxiv.org/pdf/1911.10390.pdf |
| 2019 | ACL | sup-sup | 1 | Sentence Centrality Revisited for Unsupervised Summarization | Hao Zheng, Mirella Lapata | https://arxiv.org/pdf/1906.03508.pdf |
| 2019 | EMNLP | gen-sci | 14 | SciBERT: A Pretrained Language Model for Scientific Text | Iz Beltagy, Kyle Lo, Arman Cohan | https://arxiv.org/pdf/1903.10676.pdf |
| 2019 | ACL | arch-rnn | 8 | HIBERT: Document Level Pre-training of Hierarchical Bidirectional Transformers for Document Summarization | Xingxing Zhang, Furu Wei, Ming Zhou | https://arxiv.org/pdf/1905.06566.pdf |
| 2019 | EMNLP | gen-abs | 1 | An Entity-Driven Framework for Abstractive Summarization | Eva Sharma, Luyang Huang, Zhe Hu, Lu Wang | https://arxiv.org/pdf/1909.02059.pdf |
| 2019 | EMNLP | gen-abs | 0 | The Feasibility of Embedding Based Automatic Evaluation for Single Document Summarization | Simeng Sun, Ani Nenkova | https://pdfs.semanticscholar.org/584b/9e321c8f82fed514f70b10313b5fd971b277.pdf |
| 2019 | EMNLP | arch-cnn | 0 | Summary Level Training of Sentence Rewriting for Abstractive Summarization | Sanghwan Bae, Taeuk Kim, Jihoon Kim, Sang-goo Lee | https://arxiv.org/pdf/1909.08752.pdf |
| 2019 | EMNLP | pre-elmo | 0 | Multi-Document Summarization with Determinantal Point Processes and Contextualized Representations | Sangwoo Cho, Chen Li, D. H. Yu, Hassan Foroosh, Fei Liu | https://arxiv.org/pdf/1910.11411.pdf |
| 2018 | AAAI | pre-bert | 42 | Generative Adversarial Network for Abstractive Text Summarization | Linqing Liu, Yao Lu, Min Yang, Qiang Qu, Jia Zhu, Hongyan Li | https://arxiv.org/pdf/1711.09357.pdf |
| 2019 | Arxiv | train-meta | 0 | Exploring Domain Shift in Extractive Text Summarization | Danqing Wang, Pengfei Liu, Ming Zhong, Jie Fu, Xipeng Qiu, Xuanjing Huang | https://arxiv.org/pdf/1908.11664.pdf |
| 2019 | ACL | gen-ext | 4 | Searching for Effective Neural Extractive Summarization: What Works and What's Next | Ming Zhong, Pengfei Liu, Danqing Wang, Xipeng Qiu, Xuanjing Huang | https://arxiv.org/pdf/1907.03491.pdf |
| 2019 | EMNLP | gen-ext, gen-abs, gen-2stage, arch-transformer | 12 | Text Summarization with Pretrained Encoders | Yang Liu, Mirella Lapata | https://arxiv.org/pdf/1908.08345.pdf |