When we work with our clients to create data strategies or custom software, we often find ourselves facilitating and prioritizing a multitude of viewpoints, addressing skepticism, and building consensus. In this context, we have found that understanding one variable, more than others, is central to navigating the complexities of stakeholder positions: professional bias.
Biases are natural tendencies in our thinking that may slant our decision making toward certain conclusions without critical evaluation. At best, they serve as an unconscious shortcut for processing information quickly; at worst, they lead to logical fallacies or forms of prejudice.
Professional bias is a condition wherein we view the world through the particulars of our own job. Since our job (and the education that went into it) makes up a significant portion of our life experience, it makes sense that our broader worldview would be greatly influenced by it.
When confronted with a problem, our natural instinct is to fall back on the methodologies we were trained to follow in order to evaluate possible solutions. For example, depending on your profession, you may look at an unused plot of land and treat it as a design problem, a safety hazard, a marketing opportunity, or a way to make money in the future. Each of these viewpoints may be simultaneously correct, but one of them may need to be prioritized above all others. The challenge then becomes getting the holders of the other viewpoints to buy in and participate.
However, what if a professional is confronted with an innovative concept that can potentially transform their enterprise or industry? What happens when they are confronted with data that contradicts their own professional upbringing? Or technology that may disrupt, in whole or in part, the need for some of their expertise?
This is where professional bias can take the form of change-resistant thinking that keeps transformational or innovative ambitions at bay. In their book, The Future of the Professions, Richard and Daniel Susskind describe three kinds of bias that often emerge when a professional is confronted with new technology:
1. Status Quo Bias
The “status quo bias” refers to the tendency of a professional to prefer “continuing to do things as they are done today.” This bias may lead a professional to assert that their field is immune from change (what the Susskinds refer to as “immutable”). In the building industry, it is easy to see the status quo bias assert itself in phrases akin to “this doesn’t address the reality of making a building today” or “this solution doesn’t account for [insert fringe design case here].”
At best, the biased professional will adopt a technology purely as a means to enhance the status quo (using 3D BIM tools to produce 2D documents slightly faster, for example). At worst, they will dismiss a technology wholesale with very little critical evaluation. In an organization, this may result in uneven adoption of a technology that is embraced by some but resisted by others. If you have ever overseen a construction business’s transition from CAD to BIM, you have probably experienced some form of this status quo bias, and you are probably still confronting it today!
2. Technology Myopia
“Technology myopia” refers to a professional’s inability to imagine future systems, platforms, and tools as being “radically more powerful than those of today.” It may also refer to the inability to recognize that the skills of a technology’s early adopters may quickly become mainstream. This bias manifests itself as shortsighted thinking.
For example, early on in my explorations with computational tools, I would sometimes find myself confronted by a manager who would quickly dismiss my efforts as being too niche. “It’ll never go beyond a few special projects,” they would often tell me. Now, less than eight years later, that same company is making concerted efforts to hire computational designers for all of its design studios.
3. AI Fallacy
The third bias that the authors highlight is set in the context of the current hype surrounding artificial intelligence. The “AI Fallacy” is the tendency for a professional to 1) wrongly categorize AI (and affiliated technologies) as an attempt to replicate human thinking processes, and 2) conclude that, because a computer cannot think like a human, it will never outperform a human expert.
In the building industry, I have often heard this type of bias present itself in discussions surrounding the use of genetic algorithms for design optimization or in applications of machine learning to predict building performance. “These tools can’t undertake the same tasks or solve problems like an experienced architect,” is a refrain I have heard many times.
But to equate these capabilities directly with human intelligence is, in fact, the fallacy. The aim of these technologies is not to copy expert thinking. Rather, it is to exploit new capabilities only the technology can provide, such as processing amounts of complex design data that would be impossible for a human.
Overcoming these biases is sometimes the greatest challenge we encounter as supporters of industry transformation. As such, many of our strategic approaches involve finding ways to facilitate discussions that allow stakeholders to openly confront their biases and make more informed judgments about the future of their work. Specifically, we have found that giving seasoned professionals some hands-on experience with new tools and data can go a long way toward exposing the potential of new systems.
For example, one of my personal favorite activities is creating workshop scenarios that allow skeptics to gain direct hands-on experience with the technology, regardless of whether they will ultimately be end users. A curated computation workshop for project managers, for instance, goes a long way toward alleviating the fear of the unknown, demonstrating the value-add potential of emerging technologies, and shifting the debate toward future opportunities.
After all, as the Susskinds articulate very well, “The least likely future of all is that nothing much will change.”