On the use and abuse of Bayesian modeling

Review: Jones, M., and Love, B. (2011, in press). “Bayesian Fundamentalism or Enlightenment? On the Explanatory Status and Theoretical Contributions of Bayesian Models of Cognition.” Behavioral and Brain Sciences.

In the world of cognitive psychology, there is a dizzying array of frameworks for building models. For example, to describe a given phenomenon, a researcher could choose to use a “connectionist” or a “Bayesian” model. To an outsider to the field, these choices might seem inconsequential: if a theory is ultimately about the nature of human thought, what difference does the mathematical “language” it is expressed in make? Isn’t the more important question whether a theory tells us something useful about the mind?

However, as it turns out, the choice of mathematical formalism often means quite a lot, since it can greatly change what one learns from the model and what the model means.

Over the last couple of years, there has been a large movement in the cognitive science community toward developing Bayesian models of cognition. To understand Bayesian probabilistic models and the controversies surrounding their use, it will help to understand a little more about what they are.

Bayesian probabilistic models allow scientists and statisticians to develop models of the “rational” inferences learners should make based on a set of observations, given a mathematically precise description of how the possible states of the world relate to those observations and what the learner’s prior beliefs are about those possible states. These are useful for two reasons:

  1. They can be used to solve ordinary statistical problems (e.g., does smoking cause cancer?)
  2. They can be used as ideal observer models, answering the question of how humans and animals “should” behave when asked to solve a particular cognitive problem.
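To make the ideal-observer idea concrete, here is a minimal toy sketch (not from the paper under review; the hypothesis space, prior, and data are invented for illustration): a learner infers a coin’s bias from a handful of flips by applying Bayes’ rule over a small set of candidate hypotheses.

```python
# Toy ideal-observer model: infer a coin's bias from observed flips.
# The hypothesis space, prior, and data below are illustrative
# choices, not taken from Jones and Love or any specific model.

hypotheses = [0.25, 0.5, 0.75]  # candidate values of P(heads)
prior = {h: 1 / len(hypotheses) for h in hypotheses}  # uniform prior

def likelihood(h, flips):
    """Probability of the observed flip sequence given bias h."""
    p = 1.0
    for flip in flips:
        p *= h if flip == "H" else (1 - h)
    return p

def posterior(prior, flips):
    """Bayes' rule: posterior is proportional to likelihood x prior."""
    unnorm = {h: likelihood(h, flips) * prior[h] for h in prior}
    z = sum(unnorm.values())
    return {h: v / z for h, v in unnorm.items()}

post = posterior(prior, ["H", "H", "T", "H"])
# After mostly-heads data, the "rational" learner shifts belief
# toward the heads-biased hypothesis (0.75).
```

The “rationality” claim in the literature amounts to saying that human responses track posteriors like `post` above, given the right hypothesis space and prior.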

Recently, a number of papers have proposed Bayesian models of various aspects of cognition and, given close fits to human data, argued that human behavior is therefore “rational”. This approach has generated outcry from some who feel it encourages a preoccupation with rationality and mathematical formalisms, diverting attention away from the interesting psychological questions of how these problems are solved by human and animal brains.

A recent paper in Behavioral and Brain Sciences by Matt Jones and Brad Love offers a comprehensive critique of a perspective they call “Bayesian Fundamentalism.” The central tenet of Bayesian Fundamentalism is the belief that human behavior can be explained entirely through rational analysis, given a correct probabilistic interpretation of the task environment. Under this view, there is no need to make reference to mechanistic explanations: since humans act rationally, a rational model will fully describe their behavior.

Jones and Love’s primary objections to this paradigm can be summarized as follows:

  • Without a careful study of the environment and cognitive challenges that put our ancestors under evolutionary pressure, it is impossible to accurately specify the assumptions that should be built into a model of a cognitive task. Therefore, the predictions of the models are highly unconstrained, and similarity to human behavior cannot be taken as evidence that humans behave rationally.
  • Theories of cognition that make no predictions at the algorithmic or implementational level are fundamentally unsatisfying, and many of the contributions of cognitive modeling to other fields have been in the form of mechanistic predictions.

The authors call for a turn toward “Bayesian Enlightenment,” in which the algorithmic and implementational aspects of probabilistic models are taken seriously as having psychological content.

We read a version of this paper in our recent lab meeting and our reactions were resolutely mixed. Some felt that the article did an excellent job pointing out theoretical excesses in the field, while others felt that it was overly dismissive of the usefulness of showing how a problem could be incorporated into a Bayesian framework.

One source of frustration that Jones and Love address effectively is the common conclusion among modelers that a good fit to human behavior by a Bayesian probabilistic model indicates that human behavior is in some sense “rational.” As the authors make clear, a model cannot be considered a “rational” account of a cognitive process without a thorough analysis of the natural environment and the cognitive challenges that our brains evolved to solve (a level of analysis completely missing from recent Bayesian analyses of cognition).

… the specter of the “Bayesian Fundamentalist” is a straw man. Who are these people? … What Bayesian wouldn’t welcome constraining data from neuroscience that supported or could bear directly on their model?

Another important issue they address is the apparent lack of clarity concerning the psychological content of Bayesian models. Since Bayes’ rule itself is trivial, the content of a Bayesian model rests almost entirely in the setting up of the hypothesis space and (often) the choice of an approximation algorithm. Bayesian theorists often attack process-level approaches (such as connectionist models or other process-level accounts) for making a large number of “arbitrary” assumptions. However, to the degree that assumptions about priors and the hypothesis space in a Bayesian model are also arbitrary (i.e., not set based on an analysis of the evolutionary environment), there is no real advantage to either approach. One way to ease this tension is to say that the key psychological contribution of Bayesian probabilistic models is their specification of the hypothesis space, prior, and approximation/optimization algorithm (Jones and Love advocate this approach as “Enlightened Bayes”).
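The point that the content lies in the prior rather than in Bayes’ rule can be shown with a tiny invented example: two models that share the same likelihood and the same data, but differ only in their (equally “arbitrary”) priors, end up favoring different hypotheses.

```python
# Two models with identical likelihoods and data but different priors
# disagree about which hypothesis is most probable -- the psychological
# content lives in the prior/hypothesis space, not in Bayes' rule.
# All numbers are invented for illustration.

data_likelihood = {"A": 0.6, "B": 0.4}  # P(data | hypothesis)

def posterior(prior):
    """Apply Bayes' rule with a shared likelihood and a given prior."""
    unnorm = {h: data_likelihood[h] * prior[h] for h in prior}
    z = sum(unnorm.values())
    return {h: v / z for h, v in unnorm.items()}

flat = posterior({"A": 0.5, "B": 0.5})    # one "arbitrary" prior
skewed = posterior({"A": 0.1, "B": 0.9})  # another "arbitrary" prior

# Same evidence, opposite conclusions: flat favors A, skewed favors B.
```

Absent an independent justification for one prior over the other (e.g., from an analysis of the natural environment), neither model’s fit to behavior licenses a claim of “rationality”.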

On the other hand, there was a real sense that the best Bayesian modelers, who in fact have greatly contributed to the widespread interest in these types of models, are interested in process-level models (e.g., Vul, Daw, Steyvers, Griffiths, Goodman, etc.). What Bayesian wouldn’t welcome neuroscientific data that supported or could bear directly on their model? One real risk from the article is confusing people about what current Bayesian models are actually about, by aligning them with this non-existent bogeyman. Oddly, everyone we could think of who might be a “Bayesian Fundamentalist” had also written compelling papers that Jones and Love would call “Enlightened Bayes”. Is this a paper stirring controversy with no real target?

Ultimately, though, this is a great paper for debate, and it will hopefully encourage everyone who works with cognitive models of any kind to think a little harder about what their models really mean. It’s also clearly written and might be a great place to start if you are interested in learning more about the value of cognitive models.

Melody Dye also has a fun post up about this article with a lot of colorful quotations.


Jones, M., and Love, B. (2011, in press). “Bayesian Fundamentalism or Enlightenment? On the Explanatory Status and Theoretical Contributions of Bayesian Models of Cognition.” Behavioral and Brain Sciences.

Cartoon by Daniele Quercia.

(This article was written with input and ideas from our lab)

  1. i hate bayesian fundamentalists too. who are they exactly?

  2. I am a Bayesian statistician who proposes normative Bayesian models for inference and prediction. I hate to step into a crossfire between Bayesian Fundamentalism and Enlightenment in psychology, but two points came to mind when reading your excellent discussion.

    First, Bayes theorem is “trivial” only because Thomas Bayes and Richard Price worked it out for us in the 18th century, and many great minds have elaborated it since then. I doubt that if we were alive at the time, any of us would have beaten Price and Bayes to publication.

    Second, Bayesian statisticians use the term “rational” in a very limited sense to describe a person’s preferences for a set of actions. If these preferences follow the von Neumann-Morgenstern-Savage axioms, then a mathematical psychologist (do they still exist?) could assign numbers to a utility function on the space of consequences and to a probability function on the space of events such that the subject’s preference orderings of actions correspond to the mathematical psychologist’s ordering based on (numerical) expected utility for those actions.

    “Irrational” preferences, such as intransitive orderings, violate these axioms. Consequently, there do not exist utility functions and probability measures that can reliably and consistently measure this subject’s preferences. The mathematical psychologist will not be able to confidently predict this subject’s behavior in any choice scenario. Normative Bayesians are not making a value judgment about this subject’s cognitive capacity; we merely note that this person falls outside our expertise. The good news for marketing managers and financial advisors is that this person makes an excellent target for “extracting” consumer surplus.

    A more relevant example for academic psychologists is that classical hypothesis testing violates the likelihood principle. Consequently, researchers who use p-values in decision making are irrational, and journals that publish their results are also irrational.

    Other uses of “rational” or “irrational” seem to be post hoc attempts to either defend or attack a person’s choice with value judgments that originate outside of the von Neumann-Morgenstern-Savage axioms. This use of “rational” is good fun for kibitzers, especially those with grant money and undergraduate subject pools.

    The downside is that normative Bayesians have been stigmatized for being small-minded or mechanistic for positions about “rationality” that they have never taken. It is hard to certify a person as rational without an extensive study of his or her preferences. Even then, one runs into major measurement error and model specification problems that can cause the rational/irrational result to break either way. Even the most careful study makes critical assumptions, such as time homogeneity of preferences or method invariance. Relaxing these assumptions can turn the irrational into rational.

    So what? Well, if I were King of Psychology (if I had one wish, it would not be that), I would decree that “rational” only refers to the axioms. Other value judgments should be called what they are. Oh, I would also reject all articles that rely on p-values. :)

    Peter Lenk