Original title: Keeping Users Engaged During Repeated Administration of the Same Questionnaire: Using Large Language Models to Reliably Diversify Questions
Authors: Hye Sun Yun, Mehdi Arjmand, Phillip Raymond Sherlock, Michael Paasche-Orlow, James W. Griffith, Timothy Bickmore
In research and healthcare, administering the same questionnaire over and over can fatigue respondents and degrade data quality. To tackle this, the team explores a new approach: using large language models (LLMs) to generate diverse versions of these questionnaires while keeping their measurement reliability intact. They tested the idea in a two-week study in which participants answered either a standard depression questionnaire or one of two LLM-generated variants. All versions measured depression consistently, indicating that the LLM-generated questionnaires were just as reliable as the original. But here’s the kicker: participants found the standard questionnaire more repetitive than the LLM-generated ones, suggesting the new versions held more interest while still gathering accurate data. It’s like jazzing up routine questions to keep respondents engaged without compromising the questionnaire’s effectiveness.
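The summary doesn’t spell out the authors’ exact prompting setup, but the core idea is easy to sketch: ask an LLM to reword each questionnaire item while preserving its clinical meaning, time frame, and response scale. Below is a minimal, hypothetical Python sketch assuming an OpenAI-style chat API; the model name, prompt wording, and example item are illustrative assumptions, not the paper’s actual protocol.

```python
# Hypothetical sketch: generating a reworded variant of a questionnaire item
# with an LLM while trying to preserve the construct being measured.
# Assumes the OpenAI Python SDK (v1+) with an API key in OPENAI_API_KEY;
# the prompt, model choice, and example item are illustrative, not from the paper.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "You rephrase mental-health questionnaire items. Keep the clinical meaning, "
    "time frame, and response scale identical; only vary the wording."
)

def diversify_item(item: str, model: str = "gpt-4o-mini") -> str:
    """Return one reworded variant of a single questionnaire item."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": f"Rephrase this item: {item}"},
        ],
        temperature=0.9,  # higher temperature encourages lexical diversity
    )
    return response.choices[0].message.content.strip()

if __name__ == "__main__":
    # Example item written in the style of a standard depression screener (illustrative).
    original = "Over the last two weeks, how often have you felt down, depressed, or hopeless?"
    print(diversify_item(original))
```

In practice, any reworded items would still need the kind of psychometric reliability check the study performed before being swapped in for the validated originals.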
Original article: https://arxiv.org/abs/2311.12707