Neural Signals Generate Clinical Notes in the Wild
Abstract
Generating clinical reports that summarize abnormal patterns, diagnostic findings, and clinical interpretations from long-term EEG recordings remains labor-intensive. We curate a large-scale clinical EEG dataset of 9,922 reports paired with approximately 11,000 hours of EEG recordings from 9,048 patients. On this dataset we develop CELM, the first clinical EEG-to-Language foundation model capable of summarizing long-duration, variable-length EEG recordings and performing end-to-end clinical report generation at multiple scales, including recording description, background activity, epileptiform abnormalities, events/seizures, and impressions. Experimental results show that, with patient history supervision, our method achieves 70%-95% average relative improvements in standard generation metrics (e.g., ROUGE-1 and METEOR), raising scores from 0.2-0.3 to 0.4-0.6. In the zero-shot setting without patient history, CELM attains generation scores in the range of 0.43-0.52, compared to baselines of 0.17-0.26. CELM integrates pretrained EEG foundation models with language models to enable scalable multimodal learning. We release our model and benchmark construction pipeline at https://github.com/Jathurshan0330/CELM.
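The generation metrics quoted above measure n-gram overlap between a generated report section and the clinician-written reference. As a rough illustration of what a ROUGE-1 score in the 0.4-0.6 range means, here is a minimal unigram-F1 sketch (this is an assumption-laden simplification, not the paper's evaluation code, which likely uses a standard package with stemming and tokenization options):

```python
from collections import Counter

def rouge1_f1(reference: str, candidate: str) -> float:
    """Unigram-overlap F1 between a reference and a candidate summary.

    Simplified: whitespace tokenization, lowercasing, no stemming.
    """
    ref = Counter(reference.lower().split())
    cand = Counter(candidate.lower().split())
    overlap = sum((ref & cand).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

# Hypothetical report-section pair, in the spirit of the paper's
# "background activity" section (not taken from the dataset):
ref = "background shows diffuse slowing consistent with encephalopathy"
hyp = "eeg background shows diffuse slowing"
print(round(rouge1_f1(ref, hyp), 3))
```

Scores are typically averaged per report section (description, background, epileptiform abnormalities, events/seizures, impressions) rather than over whole reports, so each section's text is compared against its reference counterpart.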