Lately, the business blogs and magazines are all frothing at the mouth about how Artificial Intelligence (AI) is going to transform business. For executives and decision makers, systems that automate business processes are becoming more widely available, and applicable to new tasks, every day. Unlike previous revolutions in automation that automated "menial" work, this revolution is automating tasks that previously required employees with specialized education and training. From here on I will use the terms "AI" and "automation" interchangeably, referring specifically to the automation of business processes.
While there are notable exceptions, executives traditionally do not come from highly technical backgrounds. They often defer judgment on IT investment to subordinates or, especially in smaller organizations, outsource IT and rely on the advice of their IT contractor. In deciding whether to use cloud-based or on-site servers for email, taking a contractor's advice may not be in the best interests of the organization, but it probably isn't a mission-critical decision. In deciding whether to adopt an AI system, it very easily could be.
While the evolution of IT in the workplace has been dizzyingly fast from the perspective of an executive working over the last few decades, the basic processes businesses rely on from day-to-day were fairly similar over the same span of time. Businesses use email to communicate, they track revenue and expenditures with accounting software, they create documents, etc. Automation using AI systems stands to change business processes by orders of magnitude, and over a much shorter period of time.
The inclination at the executive level is to see the "AI" buzzword and say, "everyone's talking about AI, let's do that." It's true, automation can replace many highly paid workers with few; it can accelerate processes that take large amounts of time; and it can be used to analyze organizations and to eliminate inefficiencies. You'll note that all of those things can be used as euphemisms for eliminating jobs.
Within any given organization, it is almost guaranteed that deploying AI systems to replace manual processes will lead to downsizing of the workforce, but for the first time since the Industrial Revolution these will be highly skilled workers. Furthermore, once an AI tool with a specific application becomes available it won't simply be replacing those jobs within your organization, it will be replacing those jobs across the entire economy. Once one organization adopts such a tool, other organizations will have to adopt similar methods to remain competitive. While the rational incentive for an individual leader will be to adopt the technology as soon as it becomes available, the aggregate effect could be catastrophic.
When a corporation deploys a technology that puts many low-skilled workers out of work, it can ruin a local economy. When an entire economy deploys a technology that renders an entire class of highly skilled workers obsolete worldwide, it could very feasibly destroy the global economy. That's the broader scope of the dilemma, beyond what any individual executive has the power to control. But since most firms rely on the overall economy for their continued existence, executives should be aware that the short-term gain of deploying AI tools could easily be eclipsed by the long-term destruction they cause. This is territory likely to be considered and regulated by nation states fairly quickly; the term describing the intersection of AI and public policy is "AI alignment," which refers to aligning the direction of AI with the larger needs of society.
Beyond automating repetitive complex tasks, AI can also be used to analyze processes for redundancy and inefficiency. There are a few challenges this presents to any given executive.
First, there is the cognitive dissonance that comes from learning that one's underlying assumptions may be wholly inaccurate. Such analysis may reveal that much of what an executive believes to be going on in their organization is in fact not the case. Executives are often none too fond of being told they are wrong, and even less so when they are told they are grossly wrong.
Second, such analysis may reveal uncomfortable truths about the organization itself. Highly popular employees who are believed to be effective may be shown to be useless or counterproductive, even criminal. Again, the cognitive dissonance of being shown through analysis that a particular employee or organizational unit is dysfunctional, contrary to what they "know" to be true, may be difficult for many executives to handle.
Both of these examples are likely to cause cognitive dissonance in leaders, who are trained specifically to make difficult decisions with confidence, because they represent direct evidence that the executive is not performing well in their role. This is unlikely to be received well by any leader, and it is highly likely that their assumptions will vary widely from the insights they are shown. Most executives are not nearly as effective as they believe themselves to be, and they don't like being reminded of it.
The most important thing for any executive before embarking on this process is to accept that they are going to be shown things they don't want to see. They must accept that once presented with this information they will be forced to address the necessary changes. They must accept that this could mean terminating loyal employees they have worked with for years or even decades. They must, by any means necessary, set aside their ego and realize this is going to be a painful process.
Another factor to consider is that the companies developing AI tools for licensed use could very easily supplant the firms that bankroll their early success. Once an AI tool becomes an industry standard for a specific task, it's very easy to see the company that owns it eliminating the middleman and taking over the industry. The IT sector has already shown a propensity to absorb other industries; I see no reason why this range of tools would be any different.
Companies developing AI tools and marketing them to business clients are not in the habit of advising their potential clients of these factors. Clients want solutions; they want "guaranteed results"; they want magic bullets. In the absence of much technical expertise, they are flying blind, relying on salespeople to make decisions that could greatly impact their enterprise. AI does have the potential to revolutionize many business processes, but executives must weigh the potential for catastrophic outcomes before adopting these systems.