The Effects of Expressing Empathy/Autonomy Support Using a COVID-19 Vaccination Chatbot: Experimental Study in a Sample of Belgian Adults
Date
2023-05
Authors
Wojciech Trzebiński
Toni Claessens
Jeska Buhmann
Aurélie De Waele
Greet Hendrickx
Pierre Van Damme
Walter Daelemans
Karolien Poels
Abstract
Background:
Chatbots are increasingly used to support COVID-19 vaccination programs. Their persuasiveness may depend on the conversation-related context.
Objective:
This study aims to investigate the moderating role of conversation quality and chatbot expertise cues in the effects of expressing empathy/autonomy support using COVID-19 vaccination chatbots.
Methods:
This experiment with 196 Dutch-speaking adults living in Belgium, who engaged in a conversation with a chatbot providing vaccination information, used a 2 (empathy/autonomy support expression: present vs absent) × 2 (chatbot expertise cues: expert endorser vs layperson endorser) between-subjects design. Chatbot conversation quality was assessed from the actual conversation logs. Perceived user autonomy (PUA), chatbot patronage intention (CPI), and vaccination intention shift (VIS) were measured after the conversation, coded from 1 to 5 (PUA and CPI) and from –5 to 5 (VIS).
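For illustration only, the minimal Python sketch below shows how a conversation-quality metric such as the conversation fallback rate (the share of chatbot fallback answers, used in the Results below) could be derived from conversation logs. The log format, field names, and exact fallback wording are assumptions, not the authors' pipeline.

# Minimal sketch (not the authors' pipeline): deriving a conversation-quality
# metric from chatbot logs. Log structure and field names are assumed.
from typing import Dict, List

FALLBACK_TEXT = "I do not understand"  # assumed literal fallback message

def conversation_fallback(turns: List[Dict[str, str]]) -> float:
    """Percentage of chatbot answers that are fallback responses."""
    bot_turns = [t for t in turns if t.get("speaker") == "bot"]
    if not bot_turns:
        return 0.0
    fallbacks = sum(1 for t in bot_turns if FALLBACK_TEXT in t.get("text", ""))
    return 100.0 * fallbacks / len(bot_turns)

# Hypothetical example log
log = [
    {"speaker": "user", "text": "Is the vaccine safe during pregnancy?"},
    {"speaker": "bot", "text": "Vaccination is recommended during pregnancy..."},
    {"speaker": "user", "text": "What about boosters next year?"},
    {"speaker": "bot", "text": "I do not understand. Could you rephrase?"},
]
print(conversation_fallback(log))  # 50.0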
Results:
There was a negative interaction effect of chatbot empathy/autonomy support expression and conversation fallback (CF; the percentage of chatbot answers “I do not understand” in a conversation) on PUA (PROCESS macro, model 1, B=–3.358, SE 1.235, t186=2.718, P=.007). Specifically, empathy/autonomy support expression had a more negative effect on PUA when CF was higher (conditional effect at the CF level of +1 SD: B=–0.405, SE 0.158, t186=2.564, P=.011; conditional effect nonsignificant at the mean CF level: B=–0.103, SE 0.113, t186=0.914, P=.36; and at the –1 SD level: B=0.031, SE 0.123, t186=0.252, P=.80). Moreover, the indirect effect of empathy/autonomy support expression on CPI via PUA was more negative when CF was higher (PROCESS macro, model 7, 5000 bootstrap samples, moderated mediation index=–3.676, BootSE 1.614, 95% CI –6.697 to –0.102; conditional indirect effect at the CF level of +1 SD: B=–0.443, BootSE 0.202, 95% CI –0.809 to –0.005; conditional indirect effect nonsignificant at the mean CF level: B=–0.113, BootSE 0.124, 95% CI –0.346 to 0.137; and at the –1 SD level: B=0.034, BootSE 0.132, 95% CI –0.224 to 0.305). Indirect effects of empathy/autonomy support expression on VIS via PUA were marginally more negative when CF was higher. No effects of chatbot expertise cues were found.
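The authors report these analyses with the PROCESS macro (models 1 and 7). As an illustration only, the Python sketch below approximates the same logic with statsmodels: an OLS interaction model for the moderation, conditional effects of empathy/autonomy support expression at the mean and ±1 SD of CF, and a bootstrap of the conditional indirect effect for the moderated mediation. The simulated data and the column names (empathy, cf, pua, cpi) are assumptions; this is not the authors' code or data.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in data (NOT the study data); n matches the reported sample size.
rng = np.random.default_rng(1)
n = 196
df = pd.DataFrame({
    "empathy": rng.integers(0, 2, n),   # 0 = expression absent, 1 = present
    "cf": rng.uniform(0, 0.5, n),       # conversation fallback proportion
})
df["pua"] = 3.5 - 2.0 * df["empathy"] * df["cf"] + rng.normal(0, 0.5, n)
df["cpi"] = 1.0 + 0.6 * df["pua"] + rng.normal(0, 0.5, n)

# Moderation (analogue of PROCESS model 1): empathy x CF interaction on PUA.
m1 = smf.ols("pua ~ empathy * cf", data=df).fit()
print(m1.params)

# Conditional effect of empathy at CF = mean and mean +/- 1 SD: b1 + b3 * CF.
b1, b3 = m1.params["empathy"], m1.params["empathy:cf"]
for w in (df["cf"].mean() - df["cf"].std(), df["cf"].mean(), df["cf"].mean() + df["cf"].std()):
    print(f"CF={w:.3f}: conditional effect of empathy = {b1 + b3 * w:.3f}")

# Moderated mediation (analogue of PROCESS model 7): bootstrap the conditional
# indirect effect of empathy on CPI via PUA, i.e., (a1 + a3 * CF) * b.
def conditional_indirect(sample, w):
    a = smf.ols("pua ~ empathy * cf", data=sample).fit()
    b = smf.ols("cpi ~ pua + empathy", data=sample).fit()
    return (a.params["empathy"] + a.params["empathy:cf"] * w) * b.params["pua"]

w_hi = df["cf"].mean() + df["cf"].std()
boot = [conditional_indirect(df.sample(n, replace=True), w_hi) for _ in range(1000)]  # paper used 5000
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"Conditional indirect effect at +1 SD of CF: 95% bootstrap CI [{lo:.3f}, {hi:.3f}]")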
Conclusions:
The findings suggest that expressing empathy/autonomy support through a chatbot may harm its evaluation and persuasiveness when the chatbot fails to answer its users’ questions. The paper adds to the literature on vaccination chatbots by exploring the conditional effects of chatbot empathy/autonomy support expression. The results can guide policy makers and chatbot developers involved in vaccination promotion in designing how chatbots express empathy and support for user autonomy.