M.A. Mathilde Bechdolf (formerly Dräger)
Current projects
Something Always Sticks – How Strategic Humbugging Causes Harm
Duration: 04.12.2025 to 31.12.2027
People frequently follow advice from sources they recognize as lacking expertise. This study examines whether individuals rely on information from sources they know to be uninformed, a behavior we call "humbugging", in a common pool resource game with a tipping point. Using a three-phase experimental design, we investigate how uninformed messages affect extraction decisions even when participants are explicitly told that the source lacks expertise. We vary the payoff alignment of experts and pseudo-experts and use AI voice synthesis to ensure uniform message presentation. The central test is whether a source known to be uninformed can still systematically influence behavior.
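The study's actual game parameters are not public; the following is a minimal Python sketch of the kind of environment described, a common pool resource game in which crossing a tipping point destroys the value of extraction. All numbers are illustrative assumptions.

```python
# Minimal sketch of a common pool resource game with a tipping point.
# Endowment, unit value, group size, and threshold are illustrative
# assumptions, not the study's actual parameters.

def payoffs(extractions, endowment=10, unit_value=2, threshold=30):
    """Each player keeps the unextracted endowment and earns unit_value
    per extracted unit; if total extraction crosses the tipping point,
    the resource collapses and all extraction earnings are lost."""
    collapsed = sum(extractions) > threshold
    return [(endowment - e) + (0 if collapsed else unit_value * e)
            for e in extractions]

print(payoffs([5, 5, 5, 5]))  # total 20 <= 30: [15, 15, 15, 15]
print(payoffs([9, 9, 9, 9]))  # total 36 > 30, collapse: [1, 1, 1, 1]
```

Below the threshold, extraction is individually profitable; past it, the group-level loss dominates, which is what makes advice from an uninformed source about how much to extract potentially harmful.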
Seen One, Seen Them All
Duration: 27.11.2025 to 31.12.2027
How does a single, statistically irrelevant piece of news about one group member affect cooperative behavior toward the entire group? This project examines whether individuals systematically over- or underweight positive and negative news about individual in-group and out-group members, and whether this leads to biased behavior toward the group as a whole.
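To see why a single signal is statistically near-irrelevant, consider a simple Bayesian benchmark (an illustration, not the study's model): under a Beta-Bernoulli belief about how cooperative group members are, one observation should barely move the posterior.

```python
# Illustrative Beta-Bernoulli benchmark, not the study's model: how far
# should one piece of news about one member move the group-level belief?

def posterior_mean(prior_a, prior_b, good_signals, bad_signals):
    """Mean of the Beta(prior_a + good, prior_b + bad) posterior."""
    return (prior_a + good_signals) / (
        prior_a + prior_b + good_signals + bad_signals)

print(posterior_mean(10, 10, 0, 0))  # prior belief: 0.500
print(posterior_mean(10, 10, 1, 0))  # one good signal: ~0.524
print(posterior_mean(10, 10, 0, 1))  # one bad signal:  ~0.476
```

A rational updater here shifts by roughly two percentage points in either direction; over- or underweighting means behavior toward the whole group moves far more than that.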
Giving a Voice – Increasing Individual Self-Expression to Enhance the Resilience to System Discontent
Duration: 01.09.2023 to 31.12.2026
Individuals in a group who repeatedly experience that their group's policy selection system does not decide in their favor may develop system discontent and system disbelief. System discontent reflects individual dissatisfaction with the decision-making process, while system disbelief captures the perception that the system does not benefit the group as a whole. In this experimental study, I investigate whether allowing individuals to express and explain their preferences affects the development of system discontent and system disbelief. I examine three group policy selection mechanisms, each combined with two communication modes: with and without voice. Decisions are made either by a single decision maker (Dictator), by AI (ChatGPT), or by Borda Count (automated). Voice reduces system discontent and system disbelief under the Dictator and Borda mechanisms, but not under AI. These findings offer insights for managers and policymakers on designing mechanisms that build resilience against system discontent and system disbelief.
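Of the three mechanisms, only Borda Count is a fully specified algorithm; a minimal sketch of an automated Borda aggregation follows (policy labels and rankings are illustrative, not the experiment's materials).

```python
from collections import defaultdict

def borda_winner(rankings):
    """rankings: one list per member, ordering policies from most to
    least preferred. With m policies, the top rank earns m-1 points,
    the last rank 0; ties at the top are broken arbitrarily by max()."""
    scores = defaultdict(int)
    for ranking in rankings:
        m = len(ranking)
        for position, policy in enumerate(ranking):
            scores[policy] += m - 1 - position
    return max(scores, key=scores.get), dict(scores)

# Illustrative three-member group ranking policies A, B, C:
rankings = [["A", "B", "C"], ["B", "A", "C"], ["B", "C", "A"]]
print(borda_winner(rankings))  # ('B', {'A': 3, 'B': 5, 'C': 1})
```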
Feeling Forgotten – The Rise of System Disbelief
Duration: 01.01.2023 to 31.12.2026
Individuals who repeatedly experience that their group's policy selection system does not decide in their favor may feel increasingly forgotten and develop system disbelief, the belief that the decision-making mechanism is unfavorable for one's group. We conduct a laboratory experiment examining the development of system discontent and system disbelief across four policy selection systems: Borda, Committee, Dictator, and Random. We additionally examine how positive and negative framing shapes these dynamics. Our results indicate that unfavorable outcomes lead to system discontent and system disbelief when subjects' involvement with the topic is high, with effects intensifying under negative framing. These findings suggest that positive framing can mitigate system discontent, system disbelief, and ensuing destructive behaviors.
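Borda aggregation is sketched above; for the two systems not yet shown, a hypothetical reading of Committee (a small committee's plurality choice, with composition and rule assumed here, not taken from the study) and Random (a uniform draw) might look as follows.

```python
import random

def committee_choice(rankings, size=3, rng=random):
    """Assumed committee rule, not the study's design: a random
    size-member committee picks the plurality winner among its
    members' top choices, with ties broken arbitrarily."""
    committee = rng.sample(rankings, size)
    tops = [ranking[0] for ranking in committee]
    return max(set(tops), key=tops.count)

def random_choice(policies, rng=random):
    """Random system: one policy drawn uniformly at random."""
    return rng.choice(policies)

rankings = [["A", "B", "C"], ["B", "A", "C"], ["B", "C", "A"], ["C", "B", "A"]]
print(committee_choice(rankings))
print(random_choice(["A", "B", "C"]))
```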