In today's world, the flow of information has swelled to a torrent. News stories, social media posts, advertisements - they all compete for our attention, each claiming to offer insight into the truth. But how can we safely navigate this sea of information? How do we distinguish fact from fiction, signal from noise? Understanding how our brains process information and make decisions may offer a useful framework for charting a safe passage.
The Brain as a Predictive Machine
Our brains are constantly making predictions about the world around us. These predictive processes go by many names, but here we will use the British philosopher Andy Clark’s term: predictive processing. It involves processing information to anticipate what comes next, allowing us to interpret our surroundings and act quickly in response to changing circumstances. Predictive processing is essential for our survival and our ability to navigate reality.1
Our brains are locked in silence and darkness within the confines of our skulls, much like the old thought experiment of a brain in a vat.2 They don’t experience the outside world directly. Instead, they have to translate the electrochemical signals generated by our sensory organs into a model of the world. This model is what we use to make predictions. If the predictions are accurate, we are not surprised. If things turn out better than expected, we experience what is called a positive reward prediction error. If they turn out worse than expected, we experience a negative reward prediction error.
Imagine a hunting dog that notices movement in some bushes. The dog goes to investigate and finds it was just the wind. The dog might be a little disappointed but not particularly surprised, resulting in minimal updating of its model. If the movement turns out to be a wounded bird, the dog experiences a positive surprise, or prediction error.3 This positive reward prediction error updates the dog’s model of the world: movement in the bushes could mean an easy meal. If it turns out to be a larger, fiercer dog in the bushes, the result is a negative reward prediction error and a sharp update: movement in a bush might be cause for extreme caution.
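The dog's learning can be sketched as a simple reward-prediction-error update rule, loosely in the style of the classic Rescorla-Wagner model. This is a toy illustration, not the essay's claim about how dog brains literally compute; the function name, learning rate, and reward values are all assumptions chosen for the example.

```python
# Toy sketch of reward prediction error (RPE) learning.
# All names and numbers are illustrative assumptions.

def update_expectation(expected, actual, learning_rate=0.3):
    """Nudge an expectation toward what actually happened."""
    prediction_error = actual - expected   # positive if better than expected
    return expected + learning_rate * prediction_error

# The dog's expected reward for "movement in the bushes":
expected = 0.1

# Wind (reward 0): small negative error, minimal model update.
expected = update_expectation(expected, 0.0)

# Wounded bird (reward 1): large positive error, big upward update --
# movement in the bushes now predicts an easy meal more strongly.
expected = update_expectation(expected, 1.0)

# Fierce dog (reward -1): large negative error, sharp downward update.
expected = update_expectation(expected, -1.0)
print(round(expected, 3))
```

Note that the size of each update tracks the size of the surprise: the wind barely moves the expectation, while the bird and the rival dog move it sharply.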
This process of modeling and making predictions is not constrained to physical environments. When we encounter a news story or a social media post, our brains try to fit it into our internal model. If the new information fits our existing model, it’s no surprise and we might accept it. If it seems like a nugget of information that helps our internal model make sense of the world, we might get a sensation similar to the dog finding the wounded bird. It updates our model, and we might share and spread that nugget, even if it isn’t true. When new information doesn’t fit our internal model of the world, or disconfirms something we identify with, we can experience a negative reward prediction error, which may lead us to avoid that source of information in the future.
Predictive Processing in Digital Environments
From an evolutionary perspective, predictive processing has served us quite well. Our brains’ complex information processing abilities enabled humans to be agile problem solvers, adapt to many different climates, and spread across the world as nomadic hunter-gatherers for the majority of human history (or, as we call most of this “human history”, prehistory). Now we are living in extremely information-rich digital environments, wherein our predictive processing skills are met with more of a challenge. Ideally, our predictive processing could still serve us quite well, but what might an ideal scenario look like?
Assuming one has adequate media literacy skills, including knowing how to trace claims back to their original sources, how to distinguish reliable sources from unreliable ones, how to fact-check, and how to keep a skeptical eye out for deepfakes, scams, disinformation, and the like, an ideal scenario would be a perfectly unbiased feed of news on any given topic. In a sense, media literacy skills tune our brains’ predictive processing to expect some level of mistakes and inaccuracies in news reporting, which is inevitable even with an unbiased feed. But that’s the catch: bias is unavoidable. Our internal models, and the internal models of the producers of information, guarantee bias. This is why it is important to step back and think about our thinking, and about the contexts in which we take in new information.
In the information age, not only must we recognize the limits of our knowledge, namely the near impossibility of taking in enough information on any one topic to develop a truly holistic perspective, we must also recognize the imperfections of how we source information, since any online news feed is bound to contain some level of biased reporting, even if that bias is just a matter of which news gets reported, in what order, and to what extent.
To get more concrete, let’s pretend you’re a bit more sophisticated than people who use social media as their main source of news.4 Instead, you use a news app, like the Apple News app. This is likely to reduce the amount of inaccurate and biased content you come across, but even so, most news apps use algorithms that draw on your past activity, what you’ve chosen to read and engage with, to decide what to suggest next. The problem is that our brains didn’t evolve to thrive in media environments that are continually tuned to satisfy our interests and biases.
The Relevance of Evolutionary Psychology
Brains evolved to process information from an environment that wasn’t changing to fit the organism’s interests; it simply was as it was. Past experiences served as pretty good predictors of what was really out there, enabling our prehistoric ancestors to form more or less accurate predictions for the practicalities of survival. When predictions failed them, they either paid the price or adapted and adjusted. Moving from the physical landscapes of survival to the personalized media landscapes of today throws a wrench in the gears of our brains’ predictive processing.
The predictive processing of our prehistoric ancestors was geared toward survival, with failed predictions carrying a cost, like a missed meal or an injury, either of which could be life-threatening. But in our modern media landscape, when someone is fed a stream of biased news that forms and perpetuates a particular perspective, that perspective is unlikely to be perturbed by occasional counterpoints and contradictory evidence, especially when there are no immediate or serious consequences for being wrong.
Breaking Free from Echo Chambers
One of the biggest challenges to critical thinking in our information age is the phenomenon of echo chambers online. Echo chambers are online spaces where people are exposed mostly to information that confirms their existing beliefs.5 This can lead to a reinforcement of biases and a decreased willingness to consider alternative viewpoints. Echo chambers occur when there is conformity and polarization in a network, which makes communities with similar beliefs cluster together and move away from those who differ from them.6 Social media algorithms that use engagement data to determine what content to show to users play a key role in creating echo chambers, forming feedback loops of content that conform to a user’s biases.
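The feedback loop described above can be simulated with a toy model: a recommender that boosts whatever a user engages with, serving a user who mostly clicks content matching their existing beliefs. This is purely illustrative; the topic names, probabilities, and weighting scheme are assumptions, not a description of any real platform's algorithm.

```python
import random

# Toy simulation of an engagement-driven feedback loop.
# All names and numbers are illustrative assumptions.
random.seed(42)

topics = ["viewpoint_a", "viewpoint_b"]
weights = {t: 1.0 for t in topics}   # recommender's score per topic

def pick_story():
    """Show a story in proportion to accumulated engagement weights."""
    return random.choices(topics, weights=[weights[t] for t in topics])[0]

def user_engages(story):
    """A user who mostly clicks content matching their existing beliefs."""
    return random.random() < (0.9 if story == "viewpoint_a" else 0.1)

for _ in range(500):
    story = pick_story()
    if user_engages(story):
        weights[story] += 1.0        # engagement boosts future exposure

share_a = weights["viewpoint_a"] / sum(weights.values())
print(f"share of feed weight on viewpoint_a: {share_a:.2f}")
```

Even though both viewpoints start with equal weight, the loop between biased clicking and engagement-based ranking quickly tilts the feed toward the viewpoint the user already holds, which is the echo-chamber dynamic in miniature.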
Echo chambers constantly reinforce our model of the world by exposing us only to supporting evidence. Because an echo chamber doesn’t show us content that contradicts our views or biases, we don’t receive negative prediction errors, so we never see that anything is wrong with our internal model of the world. This can become a real problem.
If you find yourself unlucky enough to be in an echo chamber of disinformation and propaganda that convinces you that Russia’s invasion of Ukraine was a justified offensive and not wrong, you’re likely to ignore the prediction error signals coming from most other sources that tell you that the Russian invasion of Ukraine was unjust and wrong. So long as you continue to engage in the echo chamber, your brain's predictive processing functions will continue to lead you to believe information that conforms to the claims being propagated and recycled within the echo chamber and it will be difficult for you to break free from this feedback loop.
So, how do we adapt to our digital environments? How do we avoid the pull of echo chambers and properly tune the predictive processing powers of our brains to accurate and reliable information? Well, there’s no silver bullet solution, and not because we can’t manufacture silver bullets; it’s because werewolves don’t exist. Most real-world problems are much more complex than mythical creatures, but they can have real-world solutions. In addition to media literacy skills, a deeper understanding of predictive processing, and of Bayesian reasoning in particular, can go a long way toward revising our internal models and better equipping ourselves for the voyage through the rough seas of our modern information environment.
Up Next…
We will dive deeper into predictive processing and introduce Bayesian reasoning in our next essay.
In the meantime, if you’d like to learn more about some of the media literacy skills we’ve alluded to in this essay, check out our What Works to Build Mental Immunity series.
Did you find any errors in this essay?
If so, please send us a message to let us know.
Predictive Processing, ScienceDirect; see also Predictive Coding, Wikipedia.
See also “Active Inference,” which builds upon predictive processing, emphasizing the fact that we are not just passive observers of the world but that we actively interact with the world, even if that interaction is just a matter of what we direct our attention to. Here’s an academic review about active inference: Active inference as a theory of sentient behavior by Giovanni Pezzulo, Thomas Parr, and Karl Friston.
Perhaps we will write more about active inference in our next essay.
Brain in a vat, Wikipedia
The word “surprise” can almost always be used interchangeably with “prediction error.”
Namely, Gen Z. See How Gen Z gets its news, from Axios
Diaz Ruiz, C., & Nilsson, T. (2023). Disinformation and Echo Chambers: How Disinformation Circulates on Social Media Through Identity-Driven Controversies. Journal of Public Policy & Marketing, 42(1), 18-35. https://doi.org/10.1177/07439156221103852
See the subsection on Social Media in Using Psychology to Understand and Fight Health Misinformation, an APA Consensus Statement