UW Interactive Data Lab
Yea-Seul Kim, Logan Walls, Peter Krafft, Jessica Hullman
When people view a visualization, do they update their prior beliefs in a manner consistent with Bayesian statistics?
Abstract
People naturally bring their prior beliefs to bear on how they interpret new information, yet few formal models account for the influence of users' prior beliefs in interactions with data presentations like visualizations. We demonstrate a Bayesian cognitive model for understanding how people interpret visualizations in light of prior beliefs, and show how this model provides a guide for improving visualization evaluation. In a first study, we show that applying a Bayesian cognition model to a simple visualization scenario indicates that people's judgments are consistent with the hypothesis that they are doing approximate Bayesian inference. In a second study, we evaluate how sensitive our observations of Bayesian behavior are to different techniques for eliciting people's subjective distributions, and to different datasets. We find that people do not behave consistently with Bayesian predictions for datasets with large sample sizes, and that this difference cannot be explained by elicitation technique. In a final study, we show how normative Bayesian inference can be used as an evaluation framework for visualizations, including visualizations of uncertainty.
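To make the idea of a normative Bayesian benchmark concrete, the Python sketch below shows one way such a comparison can be set up, assuming a conjugate Beta-Binomial model for beliefs about a population proportion. All numbers, and the choice of model, are illustrative assumptions for exposition, not the paper's exact elicitation procedure or analysis.

    import numpy as np
    from scipy import stats

    # Hypothetical inputs: a participant's elicited prior belief about a
    # population proportion, expressed as a Beta distribution, and a
    # visualized dataset showing k "successes" out of n observations.
    prior_alpha, prior_beta = 2.0, 8.0   # elicited prior: Beta(2, 8)
    k, n = 30, 100                       # data shown in the visualization

    # Normative Bayesian update: with a Beta prior and binomial data,
    # conjugacy gives the posterior Beta(alpha + k, beta + n - k).
    normative_posterior = stats.beta(prior_alpha + k, prior_beta + (n - k))

    # The participant's elicited posterior belief (hypothetical values),
    # i.e. their stated belief after viewing the visualization.
    elicited_posterior = stats.beta(28.0, 70.0)

    # Compare the elicited posterior to the normative benchmark, here by
    # the gap between posterior means (one simple summary statistic).
    print(f"normative mean: {normative_posterior.mean():.3f}")
    print(f"elicited mean:  {elicited_posterior.mean():.3f}")
    print(f"absolute gap:   {abs(normative_posterior.mean() - elicited_posterior.mean()):.3f}")

A small gap under a summary like this would be consistent with approximate Bayesian updating; a systematic gap, for instance at large sample sizes, would signal a departure from the normative prediction.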
Citation
Yea-Seul Kim, Logan Walls, Peter Krafft, Jessica Hullman
A Bayesian Cognition Approach to Improve Data Visualization
ACM Human Factors in Computing Systems (CHI), 2019