I first became interested in artificial intelligence (AI) in the early 1990s, when I had an opportunity to work on the development of an expert system. Although that project was a failure, it got me interested in AI and I have kept an eye on the field ever since. Over the years “corporate America” played with various AI technologies, in particular knowledge management (KM) systems, but nothing much came of it. In the early 2010s AI started to take off: IBM Watson was applied to more than just playing Jeopardy, and voice assistants such as Alexa and Siri started to gain traction. Then in 2022 things got very interesting with the release of AI-based art generation, music generation, and chat systems. Organizations such as Microsoft, Google, Amazon, IBM, and Meta are now ramping up their already considerable investments in AI in an effort to gain market leadership in bringing AI technologies to their customers.
AI Functionality is Increasing Exponentially
In short, we’re seeing an exponential increase in the functionality offered by AI that is applicable to business. The following chart summarizes this growth rate, making what I hope is a reasonable estimate for 2023. The coloured area on the graph represents the extent of the functionality, or perhaps behaviour is a better term, that can be addressed by AI-based solutions. It’s being measured on two scales:
- Cognition. This is a measure of the amount of thinking, including experience, that a person requires to perform an activity. An AI-based solution addresses the need for cognition through functionality implemented by its model. The AI itself isn’t cognitive (at least not yet, and that’s years away).
- Complexity. This is a combination of the intricacy and variability of an activity. Of the two factors, variability tends to trump intricacy. An activity can be very intricate or difficult, but if it’s performed consistently it can be fairly straightforward to automate. A relatively simple activity with significant variability will be much harder to implement due to the need to determine which variation is applicable for your context. And of course a very intricate activity with significant variability tends to be problematic.
There is a Wide Range of Functionality That AI Doesn’t Implement… Yet
A critical observation about the chart is that it’s still mostly white space. On the surface this appears to be good news, and it is, but don’t fool yourself. The threat lies in the fact that the ability of AI to implement various activities within your organization is increasing exponentially, which implies that much of this white space will disappear over the next few years. A frog in a pan of water won’t notice that the water is gradually heating up, and will remain in the pan until it is boiled to death. Similarly, the fear is that organizations will use AI-based technologies to whittle away at their business processes, gradually reducing their employment needs to the point where you’re no longer needed.
AI Will Be Used to Implement Your Business Processes a Bit at a Time
AI-based functionality will be deployed throughout your organization a bit at a time, and frankly it’s happening now whether you notice it or not. This happens in multiple ways:
- Vendors are adding AI-based functionality to their existing offerings. The big news in AI right now is how vendors are rushing to embed AI into their products. Examples include Copilot in MS Office (similar to the Copilot functionality in GitHub) and Bard in Google Search. Intelligent assistants such as Bixby, Siri, Alexa, and Cortana are also very interesting.
- Individuals will use AI platforms to augment their work. We’re seeing people use platforms such as ChatGPT, Copy.ai, and Anyword to write initial versions of their work, including marketing copy, emails, portions of articles, and other text-based artifacts. Similarly, people are using platforms such as DALL-E and Midjourney to produce images. Disclaimer: I use DALL-E for much of the artwork on this site.
- Your organization will add AI-based functionality to existing systems. Just as vendors are adding AI to their existing products, your organization will add AI to its existing systems to better automate business processes. It will also automate operational decisions via AI, something that is often critical for addressing regulatory concerns. And of course AI will be applied to extend and augment existing functionality.
- Your organization will develop or purchase new AI-based applications. Organizations will choose to implement new AI-based solutions to support their new strategies.
For the most part, none of these strategies is a job killer on its own. But taken together they add up, and as a result AI will eat away, a bit at a time, at the amount of work that needs to be done by people.
Challenges With This Chart
I want to be clear that this chart isn’t perfect for several reasons:
- The lines are approximations. The chart communicates a concept, it’s not meant to represent exact measures. Furthermore, 2023 is an outright guess given that we’re not even half-way through the year yet.
- The lines aren’t that smooth in reality. Instead the lines should be ragged given that the areas really represent the sum total of functionality on offer by a multitude of organizations, each one of which is focused on a small part of the overall whole.
- It depends on the business domain. There may be more AI work going on in the financial industry than in the mining industry, for example. Or, as we’ve seen recently, the domain of artwork generation is much further along than technical diagram generation.
- It misses empathy. A future, perhaps near-future, version of this chart might add a third axis for the empathy that people require to perform an activity. I’d love to hear your thoughts on this.
There’s a Reason Why AI Thought Leaders Are Afraid
On March 22, 2023 the Future of Life Institute published an open letter requesting a six-month pause on experiments with generative AI until we better understand the implications of what we’re doing. Thought leaders within the AI community are rightfully concerned about generative AI and its societal implications. The letter states:
“Contemporary AI systems are now becoming human-competitive at general tasks,[3] and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop non-human minds that might eventually outnumber, outsmart, obsolete and replace us? Shall we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. This confidence must be well justified and increase with the magnitude of a system’s potential effects. OpenAI’s recent statement regarding artificial general intelligence, states that “At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models.” We agree. That point is now.”
Although they list several concerns, one of them is the automation of jobs. Will all jobs be automated away? I doubt it. Will many jobs be lost? Yes, I believe that AI will automate away many jobs, piece by piece.
This letter was written by people who have been working in AI for years, and in many cases decades. These are people who have deep experience in building AI-based technologies and in using them. They are also people who have been thinking very deeply for a long time about the ethical and societal implications of AI. These are people who know this topic well, and they’re afraid. We are well advised to pay attention.
8 Comments
Mikhail
Scott, what would you say about the formation of a common language across project management and AI? Does it seem to you that the latest PMBOK guide has come closer, in its language, to Eric Evans’ ideas?
Scott Ambler
It’s an interesting idea. An “AI glossary for PMs” would likely be useful. That in turn would open up the idea of “[DOMAIN] Glossary for PMs” in general. For example, if it makes sense for AI, why not for IT? Or Building Construction? Or ethics? Or…
Mikhail
I looked at the frequency with which terms are used in different editions of the PMBOK. Even though quite a few of the standard’s authors participated in editions 5 through 7, the language has definitely changed. For example, the word “area” has clearly given way to “domain”. In general, the PMBOK-7 language has picked up many terms from IT, perhaps due to agile. But it also brings project management closer to AI.
Mikhail
One more observation:
From a mathematical point of view, the concept of “value” is a universal function for evaluating something. This is very convenient from the perspective of the expected utility hypothesis and game theory. In PMBOK-7 there is a clear increase in the frequency of this term and its relative, “outcomes”: 678 occurrences (1.49% of the text) vs. 250 occurrences (0.48%) in PMBOK-6.
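For readers who want to check numbers like these themselves, the percentages above are just a term’s occurrence count divided by the document’s total word count. The sketch below shows the calculation; the PMBOK word totals used here are my own back-of-the-envelope assumptions, reverse-engineered from the quoted percentages, not figures from the standard itself.

```python
def relative_frequency(count: int, total_words: int) -> float:
    """Return a term's share of a document's words, as a percentage."""
    return 100.0 * count / total_words

# Illustrative totals only: these word counts are assumptions
# back-calculated from the percentages quoted in the comment above.
pmbok7_total_words = 45_500  # roughly 678 / 1.49%
pmbok6_total_words = 52_000  # roughly 250 / 0.48%

print(round(relative_frequency(678, pmbok7_total_words), 2))  # prints 1.49
print(round(relative_frequency(250, pmbok6_total_words), 2))  # prints 0.48
```

Note that matching both percentages requires different total word counts for the two editions, which is plausible given that PMBOK-7 is a substantially restructured document.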
Scott Ambler
This is definitely due to the increased focus on supporting agile ways of thinking (WoT) and ways of working (WoW) in new editions of the PMBoK. I suspect you’ll see this become even more obvious in the next edition.
Mikhail
Yes, it’s obvious. But besides this, there is another effect, which I noted above: the language of management is getting closer to the languages of programming, in particular to object-oriented design and domain-driven design (DDD).
WoW is a great concept, and an intuitive one.
Will Disciplined Agile be included in the next edition of the PMBoK?
Scott Ambler
I hope that DA will be included in the next release of the PMBoK. But that will be the decision of the team writing it.