Title: Exploit Multilingual Language Model at Scale for ICD-10 Clinical Text Classification
Tags: Cross-lingual Classification, Deep Learning, ICD-10 Coding, Multilabel Text Classification, Multilingual Neural Language Model, Transformers and XLM
Abstract:
The automatic ICD-10 classification of medical documents is currently an unresolved issue, despite its crucial importance. The need for machine learning approaches devoted to this task contrasts with the scarcity of annotated resources, especially for languages other than English. Recent large-scale Transformer-based multilingual neural language models provide an innovative approach to cross-lingual Natural Language Processing tasks. In this paper, we present a preliminary evaluation of the Cross-lingual Language Model (XLM) architecture, a multilingual Transformer-based model recently presented in the literature, on the cross-lingual ICD-10 multilabel classification of short medical notes. Specifically, we analyse the performance obtained by fine-tuning the XLM model on English-language training data and using it to predict the ICD-10 codes of an Italian test set. The results show that the novel XLM multilingual neural language architecture is very promising and can be particularly useful for low-resource languages.
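As a rough illustration of the setup the abstract describes, the sketch below fine-tunes a pretrained XLM checkpoint on English notes with a multilabel (sigmoid/BCE) head and then applies it zero-shot to an Italian note, using the Hugging Face transformers library. The checkpoint name (xlm-mlm-17-1280), the label-set size, the toy examples, and the 0.5 decision threshold are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch: cross-lingual multilabel fine-tuning with XLM.
# Assumptions (not from the paper): checkpoint, NUM_CODES, toy data, threshold.
import torch
from torch.nn import BCEWithLogitsLoss
from transformers import XLMTokenizer, XLMForSequenceClassification

NUM_CODES = 50  # assumed size of the ICD-10 label set
MODEL_NAME = "xlm-mlm-17-1280"  # public XLM checkpoint covering English and Italian

tokenizer = XLMTokenizer.from_pretrained(MODEL_NAME)
model = XLMForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=NUM_CODES)

def encode(texts, lang):
    """Tokenize a batch and attach XLM language embeddings for `lang`."""
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    batch["langs"] = torch.full_like(batch["input_ids"], model.config.lang2id[lang])
    return batch

# Fine-tuning step on English notes: BCE loss over sigmoid logits (multilabel),
# computed manually instead of relying on the model's built-in loss.
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
en_batch = encode(["patient admitted with acute myocardial infarction"], lang="en")
labels = torch.zeros(1, NUM_CODES)
labels[0, 3] = 1.0  # toy multi-hot target for an assumed code index
loss = BCEWithLogitsLoss()(model(**en_batch).logits, labels)
loss.backward()
optimizer.step()

# Zero-shot prediction on an Italian note with the English-fine-tuned model.
model.eval()
with torch.no_grad():
    it_batch = encode(["paziente ricoverato per infarto miocardico acuto"], lang="it")
    probs = torch.sigmoid(model(**it_batch).logits)
predicted_codes = (probs > 0.5).nonzero(as_tuple=True)[1]  # label indices above threshold
```

Because the label embeddings and the encoder are shared across languages, no Italian training data is needed at any point; only the decision threshold would typically be tuned on a held-out set.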