Anchoring Biases
Our brains rely on a lot of shortcuts and heuristics to keep us from being overwhelmed.
Those shortcuts can sometimes lead us astray, triggering subconscious biases.
Having subscribed to numerous newsletters recently and gained access to new tools like ChatGPT (among many others), my biggest recent lesson has been learning to watch for bias in what I read and consume.
If you simply hear a number or a fact about something, true or not, relevant or not, your brain will latch on to it and skew your subsequent thinking, whether you realize it or not. That is the anchoring effect.
This is something I do all the time when I open a newsletter in my Gmail account. My daily routine involves reading newsletters, clicking links, downloading files, and looking at charts. On Sunday mornings I catch up on some or all of my newsletters, cherry-picking my favourite articles as I go.
Occasionally, however, I catch myself holding inaccurate beliefs about differences that don't actually exist.
It's weird. As an example, say one article was written by an Englishman and another by an Australian; more often than not, I will lean toward the one written by the Englishman.
The truth is that biases like these prevail in pretty much everyone, and they can be harmful if not acknowledged and confronted.
There are a lot of books on this subject. Some studies cover weaponized information and disinformation, while others focus on cognitive psychology or behavioral economics; even classical economics has been examined for built-in bias. A good starting point is Daniel Kahneman's classic 'Thinking, Fast and Slow', a valuable resource for understanding cognitive psychology and behavioral economics.
Yet, with new ChatGPT-like tools, my approach to consuming information has changed, and for the better.
While I wouldn't say it has helped me check my biases when it comes to reading newsletters, I will say it has definitely helped me when it comes to reading academic papers and scientific research in general.
As a result, I can now download papers, extract insights, compare those insights with ones from similar papers, and follow citations to find related work.
Even though I interact with the system in natural language, which is incredible, I can aggregate large amounts of text in a more logical, almost programmatic manner.
I have found this very helpful for breaking down complex subjects and fact-checking both the content and my own biases about it. It has also helped me extract information about the culture of a particular academic institution and look for correlations between the research itself and the institution the authors belong to.
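To make the "compare insights across papers" step concrete, here is a minimal sketch of the kind of programmatic aggregation I mean. It is not what ChatGPT does internally: it assumes papers are already plain text, pulls out rough "insight" sentences by cue words, and compares two papers with a naive Jaccard overlap of their vocabularies. All function names and cue words here are hypothetical illustrations.

```python
import re
from typing import List, Set

def tokenize(text: str) -> Set[str]:
    # Lowercase word tokens; a stand-in for real NLP preprocessing.
    return set(re.findall(r"[a-z]+", text.lower()))

def extract_insights(text: str, cue_words=("show", "find", "conclude")) -> List[str]:
    # Keep sentences containing cue words -- a crude proxy for "insights".
    sentences = re.split(r"(?<=[.!?])\s+", text)
    return [s for s in sentences if any(c in s.lower() for c in cue_words)]

def overlap(paper_a: str, paper_b: str) -> float:
    # Jaccard similarity of the two papers' vocabularies.
    ta, tb = tokenize(paper_a), tokenize(paper_b)
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

paper_a = "We show that anchoring skews estimates. We conclude the bias is robust."
paper_b = "We find that anchoring skews numeric estimates in surveys."

print(extract_insights(paper_a))
print(round(overlap(paper_a, paper_b), 2))
```

A real pipeline would swap the cue-word heuristic for an LLM prompt and the Jaccard score for embedding similarity, but the shape of the workflow is the same: extract, then compare.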
Anyway, this post is just a thought experiment. In light of the latest advancements in Google Search and Bard, I wondered this morning whether these tools offer a chance to address problems associated with bias. Would a tool like Google's Bard have been helpful during the pandemic?