(with Carola Binder)
Central banks have increased their official communications. Previous literature measures the complexity, clarity, tone, and sentiment of these communications; less explored is the use of fact versus emotion in central bank communication. We test a new method for classifying factual versus emotional language, applying a pretrained transfer learning model fine-tuned with manually coded, task-specific, and domain-specific datasets. We find that the large language model outperforms traditional models on some occasions; however, the results depend on a number of modeling choices. We therefore caution researchers against relying solely on such models, even for tasks that appear similar. Our findings suggest that central bank communications are not only technically difficult but also subjectively difficult to understand.
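The sketch below illustrates the general kind of pipeline the abstract describes: fine-tuning a pretrained transformer on manually coded sentences to classify factual versus emotional language. The base model, label scheme, and example sentences are hypothetical placeholders, not the paper's actual data or configuration.

```python
# Hypothetical sketch: fine-tune a pretrained transformer to label sentences
# as "factual" (0) or "emotional" (1). Model name, labels, and example
# sentences are illustrative assumptions, not the authors' dataset.
import torch
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

MODEL_NAME = "distilbert-base-uncased"  # assumed general-purpose base model

# Toy manually coded examples standing in for a hand-labeled training set.
train = Dataset.from_dict({
    "text": [
        "The policy rate was raised by 25 basis points.",          # factual
        "Inflation stood at 3.1 percent in March.",                # factual
        "We are deeply concerned about surging price pressures.",  # emotional
        "Households are suffering from painful cost increases.",   # emotional
    ],
    "label": [0, 0, 1, 1],
})

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)

def tokenize(batch):
    # Pad to a fixed length so the default data collator can batch examples.
    return tokenizer(batch["text"], truncation=True,
                     padding="max_length", max_length=64)

train = train.map(tokenize, batched=True)

model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME,
                                                           num_labels=2)

args = TrainingArguments(output_dir="fact_vs_emotion",
                         num_train_epochs=3,
                         per_device_train_batch_size=4,
                         logging_steps=1)

Trainer(model=model, args=args, train_dataset=train).train()

# Classify a new central bank sentence with the fine-tuned model.
inputs = tokenizer("Unemployment remained at 4.2 percent.",
                   return_tensors="pt").to(model.device)
with torch.no_grad():
    pred = model(**inputs).logits.argmax(dim=-1).item()
print("emotional" if pred == 1 else "factual")
```

In practice, the task-specific and domain-specific fine-tuning data would be far larger and drawn from actual central bank communications, with held-out evaluation against the traditional baseline models.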
| Date | Attached files |
|---|---|
| 29 April 2024 | Baerg_Binder_2024 |