Rethinking Generalization of Neural Models: A Named Entity Recognition Case Study

Jinlan Fu*, Pengfei Liu*, Qi Zhang, Xuanjing Huang


In this paper, we take the NER task as a testbed to analyze the generalization behavior of existing models from different perspectives, and we characterize the differences in their generalization abilities through the lens of our proposed measures, which guide us toward better model and training-method design. Experiments with in-depth analyses diagnose the bottlenecks of existing neural NER models in terms of breakdown performance analysis, annotation errors, dataset bias, and category relationships, suggesting directions for improvement. We have released the datasets (ReCoNLL, PLONER) for future research.
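The breakdown performance analysis mentioned above amounts to computing entity-level precision/recall/F1 separately for each entity category rather than a single aggregate score. Below is a minimal sketch of this idea, assuming BIO-tagged gold and predicted sequences; the function names (extract_entities, breakdown_f1) are illustrative and not from the authors' released code.

```python
# Hedged sketch of per-category (breakdown) entity-level F1 for NER.
# Assumes BIO tagging; function names are illustrative, not from the paper.
from collections import defaultdict

def extract_entities(tags):
    """Collect (category, start, end) spans from one BIO tag sequence."""
    entities, start, category = [], None, None
    for i, tag in enumerate(tags + ["O"]):          # sentinel flushes the last span
        if tag.startswith("B-") or tag == "O" or (tag.startswith("I-") and category != tag[2:]):
            if category is not None:
                entities.append((category, start, i))
            category, start = (tag[2:], i) if tag.startswith("B-") else (None, None)
        # an I- tag continuing the current span needs no action
    return entities

def breakdown_f1(gold_sentences, pred_sentences):
    """Per-category entity-level F1 over parallel lists of tag sequences."""
    tp, n_gold, n_pred = defaultdict(int), defaultdict(int), defaultdict(int)
    for gold, pred in zip(gold_sentences, pred_sentences):
        gold_ents = set(extract_entities(gold))
        pred_ents = set(extract_entities(pred))
        for cat, *_ in gold_ents:
            n_gold[cat] += 1
        for cat, *_ in pred_ents:
            n_pred[cat] += 1
        for cat, *_ in gold_ents & pred_ents:       # exact span + category match
            tp[cat] += 1
    scores = {}
    for cat in n_gold:
        p = tp[cat] / n_pred[cat] if n_pred[cat] else 0.0
        r = tp[cat] / n_gold[cat] if n_gold[cat] else 0.0
        scores[cat] = 2 * p * r / (p + r) if p + r else 0.0
    return scores

# Toy example: the PER entity is recovered, the LOC entity is mislabeled as ORG.
gold = [["B-PER", "I-PER", "O", "B-LOC"]]
pred = [["B-PER", "I-PER", "O", "B-ORG"]]
print(breakdown_f1(gold, pred))   # {'PER': 1.0, 'LOC': 0.0}
```

Reporting such per-category scores (rather than one overall F1) is what makes it possible to see where a model's generalization actually breaks down.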


Fig. Neural NER systems with different architectures and pre-trained knowledge, which we study in this paper.

Fig. Breakdown performance on the CoNLL dataset.

Fig. Experimental design for understanding cross-dataset generalization.

Fig. Probing Inter-category Relationships via Consistency.

Our Paper

Notable Conclusions

Dataset

Bonus: a NER Paper Search System