This is a time of high uncertainty. Charles Schwab’s chief investment strategist Liz Ann Sonders tells Yahoo Finance’s Robert Powell:
“It’s overused to talk about uncertainty and the impact it has on markets, but I think this is a uniquely high amount of uncertainty.”
This can be considered from all manner of angles, but I’m interested in the technological impact. One example: I recently learned about Amazon’s massive supercomputer (Project Rainier), to be used by Anthropic. AWS vice president of compute and networking services Dave Brown tells the Wall Street Journal that this should come online in 2025 and may be the world’s largest for training AI.
In a report from the MIT SMR-BCG Artificial Intelligence and Business Strategy Global Executive Study and Research Project, an estimated 91% of leaders at surveyed organizations expect generative AI to be a core element of strategy, across at least some units, within the next three years.
But to what extent is GenAI just the current shiny thing, dazzling us with predictive capabilities, with greater impact coming from the next AI-driven business innovation?
Things are moving fast, and this puts managers in the highly uncomfortable position of having to plan for an AI transformation in business without knowing exactly where it’s going, or even what it means right now.
At PTP, we have our own proprietary AI offerings, in addition to providing AI talent and supporting ongoing AI transformation for our clients, which puts me squarely in the eye of this storm.
Today I want to consider how we can not only live with such uncertainty but even learn to thrive in a world of ever more unknowns.
About AI Uncertainty: The Unknown Unknowns
I hear terminology from the so-called “uncertainty matrix” thrown around a lot today, sometimes credited to Donald Rumsfeld, who in turn credited NASA’s William Graham. It breaks our awareness and understanding into categories like this:
- Known Knowns: Proven facts that we also understand. (We see them and know them.)
- Known Unknowns: Those things we know exist but don’t yet fully understand (or a problem arena we know is coming, even if we don’t have the details yet).
- Unknown Unknowns: The things we don’t yet know exist, so obviously can’t yet understand (the lurking problems out there we can’t fully anticipate).
A recent piece in Wired by David Spiegelhalter adds another, I think prudent, category:
- Unknown Knowns: Assumptions we take as true, often made without thinking, which may in fact be wrong.
With AI in business operations in flux, and experts in almost every field disagreeing about impacts and futures, we may all currently be wrestling with all these unknowns.
How soon is AGI coming (or is it at all)? How safe is our data? How wild will hallucinations be, and how soon can they be tamed? When do I invest, and where?
And what are some practical, everyday AI use cases for managers? How do we truly maximize the return on investment?
And of course: what’s next?
A great piece by Oguz A. Acar and Bob Bastian in the Harvard Business Review breaks this AI uncertainty down into three types:
1. State Uncertainty: Where we lack certainty about AI’s current capabilities and future potential. Here we get conflicting takes and flawed benchmarks, making it difficult to predict how well AI can serve our own specific goals.
2. Effect Uncertainty: Beyond capability, what do we know about the real financial impacts and the speed of adoption? In such an R&D phase, with new models and systems arriving monthly, this can be very difficult to track with any certainty, at any scale.
3. Response Uncertainty: How do we go about leveraging AI for business growth? When should a company move and on what systems? And is the focus automation or amplification, new capabilities or cost-cutting? Should you develop custom solutions or adopt existing ones?
Navigating AI Uncertainty
This uncertainty around adapting to AI change isn’t going away anytime soon, so in general, fighting against it is likely futile. To the extent possible, I believe in facing this variable future with a prudent and humble excitement.
Easier said than done, but this begins by preparing for worst-case scenarios, leaving ample space for unknown unknowns. With such contingencies accounted for, it’s easier to strike a balance between the kinds of radical optimism and pessimism that have been dominating AI headlines over the past several years.
Learning (see below) is essential.
Also key is acknowledging our ignorance and challenging the assumptions we’re making.
Acar and Bastian break this into discrete, complementary categories:
- Thinking: Internal and reflective, this is about studying the past and building awareness from your own experience. It means identifying where you succeeded and, perhaps more importantly, where you failed and why. This not only helps prevent repeating the same mistakes but is also a process of searching for unknown knowns: the assumptions we may be making on flawed grounds.
- Seeing: External and reflective, this is about anticipating the unknown unknowns. The MIT SMR-BCG study gives the example of the Estée Lauder Companies (ELC), which uses AI to spot faster-than-ever shifts in fashion trends by scanning social media at scale. This anticipation is critical for reading markets, and for AI uncertainty management in general.
- Doing: Active and internal, this refers to getting your hands dirty, or putting the rubber to the road. Trial and error is essential when facing uncertainty. Experiments, of course, must be safely sandboxed, moved to manageable pilots, backed by clear metrics for success, and evaluated properly to ensure effective learning. Using the results to refine and scale initiatives can help tame the unknowns with greater understanding.
- Shaping: Finally, active and external. This is where you develop your strategy and align it with your goals. Going further, rather than just being reactive, we must be proactive, working to shape the trajectory of AI within our fields. If you can, identify strategic AI partners to help you stay ahead of the curve, helping you shape while bringing the other areas up to speed.
AI Uncertainty and the Opportunities
At PTP, AI assists us with the very uncertainty it brings about. It helps us attain and maintain knowledge, process our data, and administer solutions.
Of course, this all begins with exploration.
The MIT SMR-BCG study gives this a label: “augmented learning.” And they found that only 15% of the companies they surveyed are making use of it now.
Still, they identified several tangible benefits:
- These companies are 1.6 times more likely to effectively manage unexpected environmental and company-specific uncertainties (including regulatory, workforce, and technological).
- They are more than twice as likely to manage talent-related disruptions.
- They are 60–80% more likely to manage unknown external uncertainties.
Simply put: they are far better prepared to handle the unknown unknowns, across the map.
The study also points to organizational learning: learning through experiments, tolerating failure, supporting employees who bring their own ideas, learning from failed projects, and gathering and sharing information among employees.
While this is distinct from AI-driven learning (using AI to lead new learning, analyze performance, create new solutions, and give employees hands-on experience), the two work best hand-in-hand.
[Take a look at this PTP Report on innovation, which also profiles additional steps in this direction.]
Most companies, according to the study, currently do neither. And while they found that nearly all organizations (99%) profited from AI in some capacity, combining these two greatly increased revenue gains from AI integration.
In other words: harness innovation and failures, learn continuously, and share those lessons.
And do so with the help of AI.
Conclusion
Uncertainty is here to stay, and things are changing fast. One of the key challenges for leaders is knowing when, and how, to move, while simultaneously managing AI risks.
But this isn’t like many prior technological breakthroughs: the unknowns are greater, and the change is more persistent and larger in scale.
Embracing this in a shrewd way is key—considering the worst-case scenarios alongside the best—and this requires a clear focus on what, and how, we’re learning.
Just as it is already doing with its own inefficiencies, data shortages, and error correction, AI can help us here, if we harness it to learn and prepare for the variable future ahead.
References
Amazon Announces Supercomputer, New Server Powered by Homegrown AI Chips, Wall Street Journal
Learning to Manage Uncertainty, With AI, MIT Sloan Management Review
A Toolkit to Help You Manage Uncertainty Around AI, Harvard Business Review
An Uncertain Future Requires Uncertain Prediction Skills, Wired