
The Economics of Artificial Intelligence


Originally Submitted by Brian T. Clark, University of Chicago - Booth School of Business

Information contained here is shared as originally published in 2017


Abstract

Artificial intelligence is playing an increasingly large role in the automation of traditionally human tasks. It has been discussed in many public works, but our understanding of the economics surrounding the application of these technologies is in its infancy. The impact of automation on knowledge work will be significant, and the creation of a Cognitive Assembly Line (a digital representation of a knowledge process) will allow us to automate work that previously required significant human effort.


A series of discrete tasks, arbitrarily segmented by humans, can be separated into a “Cognitive Assembly Line” and automated in a way that simulates human cognition. The resulting AI pipelines will quickly and ruthlessly transform knowledge and intelligence work, and the corresponding impact that parties should expect as machines take over the creation of the “intelligence widget” will be pronounced. The primary economic discussion points will center on the fixed cost/variable cost automation point, similar to what the manufacturing industry experienced in the mid-to-late 20th century. Lastly, I will comment briefly on potential long-term impacts of artificial intelligence.


After reviewing the prevailing viewpoints on the impact of job automation (or “routinization”), I will introduce two key ideas that will enable the mass implementation of AI: (1) how the Cognitive Assembly Line makes AI technically possible, and (2) how cost structures make AI economically possible.


This paper will not serve as a background on the history of routinization and automation, nor the contemporary technologies available and applications thereof; please see seminal work like The Future of Employment for this context[1].


Introduction and Background


At its core, artificial intelligence is an aggregation of automated steps. In a talk at the 2017 SaaStr Annual Conference on January 25, Greylock Partners investor Sarah Guo called AI “enabling technology[2].” AI, like any new technology, will create new efficiencies, eliminate some jobs, and create new ones. The economic impact of implementing AI (discussed below) is even more important: the technology will be introduced incrementally at each step in a process where it is economically feasible, rather than holistically (admittedly, when an algorithm can eliminate multiple steps in a process, it will replace large parts of a process as well).


Additionally, a combination of cloud services, user design, big data, increased processing power, robotics, sensors, GPUs, and a variety of other technologies forms the building blocks that allow higher-order technology like artificial intelligence to take hold (this is likely why AI, which has existed in some form for nearly 50 years, has only recently accelerated in adoption).


If algorithms can sort, organize, and restructure a set of data and apply it to a customer’s use case, firms will incrementally replace lower levels of human intelligence with algorithms. Citadel’s Ken Griffin has noted that the defining companies of the next decade will be those that turn raw data into useful information. If the organization of data into ordered sets represents useful information (i.e., the composition of words into sentences) and the organization of information into complete ideas represents knowledge (i.e., the organization of sentences into paragraphs and pages), then using this knowledge set to solve a human problem looks similar to human intelligence. In professional disciplines, this is known as advice, counsel, or recommendation. Data scientists embed knowledge into predictive models that can sometimes mimic this level of intelligence.


In 1959, Arthur Samuel defined machine learning, a form of AI, as a “Field of Study that gives computers the ability to learn without being explicitly programmed.[3]” This technology underpins much of the initial stages in the automation of knowledge.


In order to illustrate the power of machine learning in automation, consider the construction of an igloo. Each block must be carefully and specifically crafted at just the right point to maintain the structural integrity of the larger system. Intelligence is similar: each word chosen has a semantic, syntactical, and structural meaning that is crafted to fit into the structural integrity of the sentence. Previously, each “rule” that transformed a snippet of data into intelligence would require a line of code, just as each block of ice in an igloo must be carved and placed. Machine learning and AI now allow us to build the entire igloo in one step. In other words, the minimum construction size (or the size of the step) has increased greatly.
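To make this contrast concrete, consider the toy text-classification sketch below (in Python). The data, categories, and patterns are hypothetical; the point is only that the rule-based version requires one hand-carved “block” per pattern, while the learned version induces its rules from examples in a single step.

```python
# Rule-based approach: one hand-written rule per pattern, like carving
# each block of ice individually.
def classify_by_rules(text: str) -> str:
    if "refund" in text or "charge" in text:
        return "billing"
    if "crash" in text or "error" in text:
        return "technical"
    return "other"  # every new pattern requires another hand-written rule

# Machine-learning approach: the "igloo" is built in one step from examples.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

texts = ["please refund my last charge",
         "the app crashed with an error",
         "how do I update my address"]
labels = ["billing", "technical", "other"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(texts, labels)  # the rules are induced, not written by hand

print(classify_by_rules("need a refund for this charge"))   # -> billing
print(model.predict(["need a refund for this charge"])[0])  # -> billing
```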


Human knowledge is not unique; it can be replicated by machines. While it may often seem far too complicated and high-dimensional for algorithms to mimic, the sophistication of our mathematical and computational tools is accelerating: we can now automate more complicated and unstructured problems. Knowledge work in areas like law, accounting, and medicine is increasingly susceptible to automation. The increasing complexity of our algorithms and the decreasing cost of providing these functions are a recipe for significant disruption. The conclusion is obvious: knowledge is a widget, and it can be automated.


The Automation of Jobs


The introduction of technologies like conversational agents, machine learning, and other forms of artificial intelligence has revived concerns over mass unemployment and job disruption. A 2013 paper by Frey and Osborne estimated that “47 percent of total US employment is at risk[4].” Websites like https://willrobotstakemyjob.com/ (based on the Frey and Osborne paper above) display the probability of holistic job disruption. Conversely, a 2016 comparative analysis on the risk of automation from the Organization for Economic Cooperation and Development (“OECD”) concluded that, on average across the OECD, “9% of jobs are automatable[5].”


That analysis used a “task-based approach” to determining full-job automation. A related study by McKinsey put the number at 45%[6], while the World Bank estimated that up to 57% of jobs could be automated over the next decade[7]. In a thorough review of automation and job loss in the industrial robot market, Acemoglu and Restrepo found “large and robust negative effects of robots on employment and wages across commuting zones[8].” The report noted that for industrial robots, “one more robot per thousand workers reduces the employment to population ratio by about 0.18 – 0.34 percentage points…” The impact on knowledge workers may be more significant, as industrial automation required both physical and mental effort, while knowledge work requires only mental effort.


Our analysis focuses on the incremental time savings of human work once computers automate this knowledge work. Whether this is analyzed from a “task-based” approach or “whole-job automation” is somewhat irrelevant. Instead, we should analyze the introduction of AI like any other technology: based on (1) increased productivity and (2) the corresponding reduction in the cost of labor to produce a similar output after automation.


Understanding how to mathematically “create” knowledge will allow us to incrementally improve it over time. Even now, AI assembly lines are digital: a collection of data points and processes, connected together and transformed via a pipeline into a useful output, intelligence. This is accomplished via a new form of assembly line that replaces cognitive thought.


Introduction to the Cognitive Assembly Line


The productivity gains captured by the move from human assembly lines to automation in the manufacturing industry are a useful case study. On December 1, 1913, Henry Ford installed the first modern assembly line, reducing the time to build a car from 12 hours to 2.5 hours, a reduction of nearly 80%[9]. Cars now take around 17-18 hours to build from beginning to end[10], but given the enormous increase in quality and complexity, this is still a rather impressive figure. Digital services and intelligence widgets will follow the same path.


Figure 1: The Cognitive Assembly Line



Ford “broke the Model T’s assembly into 84 discrete steps” and “trained each of his workers to do just one[11].” He later “built machines that could stamp out parts automatically,” as he optimized each of the 84 steps in his assembly line.


The industry evolved materially over the next 100 years. Looking now into a modern car factory, “BMW has been aiming to make 30% more vehicles with the same number of workers while trying to reduce production costs per vehicle by raising economies of scale in components, drive systems and modules[12].” As reported by the Independent Mail (part of USA Today), BMW production facilities are now an astounding “98% automated”[13]. Facilities with the economies of scale necessary to produce these goods were able to effectively shift variable costs (human hours per car manufactured) to fixed costs (the research, development, and implementation of manufacturing robots).

In the next few years, this will take hold in professional services in the form of “intelligence widgets.” This new type of service/good will be built exactly as cars are: via a series of automated steps that transform data, through computationally solvable problems, into a clear output available for purchase by consumers. The rapid evolution of intelligence widgets differs from that of the manufacturing industry in one key area: the “machines” available are now algorithms, freely distributed via the internet, meaning lower research and development costs and more immediate implementation. Intelligence widgets require a new type of assembly line: the “Cognitive Assembly Line.”


Job automation, or routinization, is another way of describing classic operations problems, and AI pipelines are no different. Intelligence will be built and automated in exactly the same manner as a physical “widget,” though the evolution in knowledge automation will likely be faster.


The Cognitive Assembly Line represents a systematic approach to modeling human-level intelligence via a series of discrete steps. Modeling a series of routine steps necessary to create our desired output is a function of (1) the number of stepwise tasks to produce an output, (2) the complexity of a task, and (3) the maximum sophistication of an algorithm to automate that task.


[Mathematical Analysis removed for Simplicity - Please contact me for more information - Brian@quantitativejustice.com]


The recent boom in machine learning, deep learning, and other forms of artificial intelligence has increased the value of Smax. Because Cn and Smax are inversely related, we can, perhaps not surprisingly, conclude that the more sophisticated the algorithm, the smaller the number of steps necessary to automate an intelligent process.


For example, the automation of a human-level task 15 years ago may have required an “n-value” of 37,000, perhaps due to the dimensionality of the data present. As we reduce our “n-value,” complexity rises, which pushes the equilibrium value above 1. However, the invention of more sophisticated algorithms pushes our equilibrium value back down. The shorter the cognitive assembly line for a given process, the more sophisticated the algorithms must be.


This is why machine learning is so powerful: it shortens the cognitive assembly line to an economically feasible number of steps. It is far easier to write code for 30 steps than it is to write code for 37,000 steps.
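As a rough illustration of this relationship (the formal definitions are omitted above, so the numeric scales below are assumptions, not the paper's), the Intelligence Quotient of a step can be sketched as complexity divided by algorithmic sophistication, with values at or below 1 indicating technical feasibility:

```python
# A hedged sketch of the Intelligence Quotient (IQ) relationship described
# above: IQ = task complexity / algorithm sophistication (S_max).
# All numeric scales here are hypothetical.

def intelligence_quotient(complexity: float, s_max: float) -> float:
    return complexity / s_max

# One monolithic, high-dimensional step is infeasible for a weak algorithm...
print(intelligence_quotient(complexity=8.0, s_max=2.0))   # 4.0 -> infeasible

# ...a more sophisticated algorithm makes the same step feasible...
print(intelligence_quotient(complexity=8.0, s_max=10.0))  # 0.8 -> feasible

# ...and decomposing it into many simpler steps is feasible even with the
# weak algorithm, at the price of a longer assembly line.
steps = [intelligence_quotient(complexity=0.5, s_max=2.0)] * 16
print(all(iq <= 1 for iq in steps))                       # True
```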


Accordingly, AI will be implemented incrementally and on the margins for every n-step operational function in a cognitive assembly line. Work will be focused on transposing a human thought process into a digital n-step process roadmap. Defining the process will be far more difficult than solving the process, and data design and data process architects will be professions in high demand in the years to come.
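A minimal sketch of such a roadmap, with hypothetical steps drawn from a document-review workflow, might chain discrete transformations in order, recording which steps are automated and which still belong to a human:

```python
# A toy Cognitive Assembly Line: a human workflow transposed into an
# ordered n-step digital roadmap. Step names and logic are hypothetical.
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Step:
    name: str
    run: Callable[[Any], Any]
    automated: bool  # False: a human still executes this step

def extract_clauses(doc: str) -> list:
    return [c.strip() for c in doc.split(".") if c.strip()]

def flag_risks(clauses: list) -> list:
    return [c for c in clauses if "risk" in c]  # stand-in classifier

def draft_memo(flags: list) -> str:
    return "Flagged clauses: " + "; ".join(flags)

pipeline = [
    Step("extract", extract_clauses, automated=True),
    Step("classify", flag_risks, automated=True),
    Step("summarize", draft_memo, automated=False),  # human-reviewed for now
]

data: Any = "standard clause. unusual risk clause. boilerplate."
for step in pipeline:  # each step consumes the prior step's output
    data = step.run(data)
print(data)  # -> Flagged clauses: unusual risk clause
```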

Economics of Artificial Intelligence


If a task is automatable, the remaining question is whether it is economically viable. Frey and Osborne utilize a manual and holistic approach, surveying 70 occupations and asking experts in machine learning to assign a binary label of “1” or “0” depending on whether something is automatable (they exempt jobs with engineering “bottlenecks,” those related to perception and manipulation, especially in “unstructured situations[16]”). Conversely, the OECD report analyzes automation with a “task-based” approach, an unbundled method for deconstructing a job into a series of unique steps, before comparing it to the survey results from Frey and Osborne to arrive at a different automation risk. In July of 2017, Oxford University published yet another study utilizing surveys of AI experts as to when they thought jobs would be automated[17].


Given that these are all survey-based approaches, each presents some bias due to the reliance on data from human survey results. This top-down analysis carries an inherent analytical bias rooted in the knowledge and imagination of the professionals surveyed. As those in this field know, predicting future occurrences not represented in existing training data is notoriously difficult, if not impossible at this juncture. Here, the use of experts to predict whether a job is automatable based on their own beliefs (or training set, if you will) is ironic. The most fundamental flaw is the neglect of any economic analysis when predicting whether a job will be disrupted by automation.


Instead, we must focus on the individual functions that make up each step in a routine process (a deconstruction of a “task”) and build our automation model from the ground up. This approach connects the small picture to the big picture, much like other emergent phenomena (“an emergent behavior or emergent property can appear when a number of simple entities {agents} operate in an environment, forming more complex behaviors as a collective.”[18]) Artificial intelligence emerges from a series of simply automatable knowledge tasks built from the ground up, but resembles a system of complicated intelligent outcomes and significant complexity when viewed from the top down. This is what the Cognitive Assembly Line connects.


Whether a task can be automated is a function of (1) technical capability, and (2) economic viability. The calculation of technical capability is identified above: the complexity of the task, divided by the sophistication of the algorithm. The economic viability of each step will ultimately drive commercial usage and adoption.


The assembly line is the production process, and can be optimized by analyzing the fixed cost and variable cost tradeoffs in the pipeline.


Automation will occur when the fixed costs of automating a step in the cognitive assembly line are outweighed by the variable costs of the existing process over a fixed period of time. The relevant costs are (1) the fixed cost necessary to create an intelligent model capable of the level of sophistication needed, and (2) the variable costs necessary to retrain, maintain, and update the required models[19].


Each step in the cognitive assembly line will be analyzed like any operations problem: automation will occur when the Fixed Costs over the life cycle of the machine (algorithm) are lower than the variable cost (hourly wage rate) of the marginal worker for the same production level. These variables are as follows:


[Mathematical Analysis removed for Simplicity - Please contact me for more information - Brian@quantitativejustice.com]
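In place of the omitted equation, here is a hedged sketch of the break-even test just described; the dollar figures and time horizon are hypothetical, not drawn from the paper:

```python
# A minimal break-even test for one step of the cognitive assembly line:
# automate when the algorithm's life-cycle costs (fixed build cost plus
# recurring retrain/maintain/update costs) fall below the wage bill of the
# marginal worker for the same output. All numbers are hypothetical.

def should_automate(fixed_cost: float,            # build the model once
                    upkeep_per_year: float,       # retrain, maintain, update
                    wage_per_hour: float,
                    hours_per_year: float,
                    horizon_years: float) -> bool:
    cost_automated = fixed_cost + upkeep_per_year * horizon_years
    cost_human = wage_per_hour * hours_per_year * horizon_years
    return cost_automated < cost_human

# A step consuming 500 human-hours/year at $60/hr vs. a $90k model with
# $10k/yr upkeep breaks even inside a 5-year horizon:
print(should_automate(90_000, 10_000, 60, 500, horizon_years=5))  # True
```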


Digital Process Automation (Routinization)


Automation is not a technological choice, but an economic choice. While the Cognitive Assembly Line defines when a firm is technologically capable of automating a step, the economics are what govern actual adoption, and such calculations are only warranted when the Intelligence Quotient is at or below “1.” Anything higher indicates technical impossibility, and no economic analysis should be conducted.


The marginal product of labor will remain constant (or slightly diminish due to fatigue) when delivering knowledge services, while the high fixed costs and low variable costs of an automated process for building knowledge indicate increasing returns to capital. This elasticity of capital with respect to productivity increases total economic productivity (though admittedly, returns to labor will continue to decrease in such automated solutions). Those who can invest capital and design superior systems will build competitive advantages quickly and effectively. The key decision point lies in determining when to automate.


The chart below represents the marginal intelligence levels of labor needed for a specific task. The vertical distance between each data point and zero represents the marginal economic opportunity for AI. This distance is also indicative of the fixed-cost/variable-cost tradeoff of automation: while on the margin it is always cheaper to automate, there may be instances where it is cheaper for humans to execute a task given the fixed costs required to implement automation. The routinization of all forms of tasks will allow us to shift up and down the marginal cost of intelligence curve based on whether it is appropriate to employ human labor (tasks with very high fixed costs and very low recurrence) or automation (anything else).


Figure 2: The Marginal Cost of Intelligence Curve



[Mathematical Analysis removed for Simplicity - Please contact me for more information - Brian@quantitativejustice.com]
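Again in place of the omitted analysis, the tradeoff behind Figure 2 can be toy-modeled: frequent tasks with modest automation fixed costs fall to algorithms, while rare tasks with high fixed costs stay with humans. Every number below is invented for illustration:

```python
# Hypothetical tasks: (name, automation fixed cost, human cost/run, runs/yr)
tasks = [
    ("classify invoices",   20_000,  15, 10_000),
    ("draft rare opinion", 500_000, 400,      3),
]

HORIZON_YEARS = 5  # evaluation window for the fixed/variable tradeoff

for name, fixed, per_run, runs in tasks:
    human = per_run * runs * HORIZON_YEARS
    automated = fixed  # per-run cost of the algorithm assumed near zero
    choice = "automate" if automated < human else "human"
    print(f"{name}: human=${human:,} vs automated=${automated:,} -> {choice}")
# classify invoices: human=$750,000 vs automated=$20,000 -> automate
# draft rare opinion: human=$6,000 vs automated=$500,000 -> human
```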


The Economics of Artificial Intelligence: Observations


The above equation represents a new approach to analyzing the cost structures of artificial intelligence, and it is rooted in classic operations and process cost analysis. It combines the practical question of technical capability with the economic factors governing adoption (for example, if a step is technically possible but still very expensive, it may be broken down into smaller steps regardless of its technical feasibility).


There are a variety of exogenous factors that will influence the efficacy of any automated process.


Firstly, domain expertise is paramount. A process can only be automated if it is properly structured and designed, and the implementation of digital processes requires a flexible and on-point data model. Data model design will become the purview of domain experts working in close proximity with data engineers to digitally and faithfully recreate an analog process. Because of this, data models built faithfully from a specific domain will become the new competitive advantage. Already, investment companies and publications alike are referring to the benefits of AI as “intelligence moats[23],” or sustainable competitive advantages based on AI. The new data model is why: it drives the transformation from big data to intelligence, it learns to become more effective over time, and it results in increasing contribution margins and rapid, substantial economies of scale.


Long-Term Impacts of Artificial Intelligence


The conclusion above that marginal costs of intelligence are approaching zero will have significant implications. In competitive marketplaces, intelligence will become ubiquitous and approach equilibrium as prices decrease. Given that one individual with an efficient algorithm could distribute intelligence throughout the world via the internet, it may well be that “intelligence” as we know it will approach a cost of zero.


Firms should be wary of entering markets where close substitutes exist. Markets will increasingly be horizontally competitive rather than vertically competitive. Firms will not compete in different sectors (financial services, consumer goods, etc.), but rather substantively as a result of their data models and automation (legal, finance, risk management, accounting, etc.). This paradigm shift will be the result of the cost of transferring into new markets approaching zero.


For example, if a firm automates the manufacturing of a computer hardware machine, the fixed costs required to move into the manufacturing of a microprocessor would be the same as for any other company. In knowledge automation, if a firm automates accounting analysis for a computer hardware company, there is little to no fixed cost to move into automated accounting analysis for the microprocessor company. The horizontal applicability of AI will reduce the transfer cost to new verticals and create a new form of dominant AI firm: those that work across industries to automate entire disciplines.


The speed and power of disruption will increase over time, and the implementation of AI will disrupt skilled workers. The increase in the economy's overall total factor productivity will be unevenly distributed, as knowledge workers will increasingly be displaced by intelligent processes. Most importantly, the rate at which those workers can be retrained is far lower than the rate at which new intelligence can be designed and implemented.


Remaining Questions


There is a great deal of work left to be done in order to fully build out the statistical methods for automating knowledge work. First, a standardized approach to determining the complexity of a “step” in a process must be developed[24]. Second, a uniform method for determining algorithmic sophistication must be implemented. While it is possible to use “time complexity” analysis, a baseline must be determined and normalized in order to create usable forms of Smax. Lastly, the approach above must be tested rigorously against a variety of disciplines to ensure a standardized scale for the Intelligence Quotient of a step.


Conclusion


When analyzing a process, AI will be adopted if (1) the technical capability exists (i.e., the Intelligence Quotient required is less than one), and (2) the economics induce action (the time-dependent variable costs of the existing process outweigh the fixed costs of automation).


As the complexity of our algorithms increases, they will increasingly encroach on disciplines previously considered immune to automation. We have seen a potential new way to analyze the processes and economics that result from the use of artificial intelligence. As barriers to entry fall, whether through the release of open-source software and algorithms, the decreasing costs of big data and cloud computing, or the proliferation of information generation, new processes and systems will disrupt industries and institutions previously thought untouchable.


This paradigm shift will result in more people with access to superior professional services than ever in history, democratizing intelligence in industries like law, medicine, accounting, and other knowledge-based services. Production will be changed forever, but consumers will be the key beneficiaries.


Citations

[1] http://www.oxfordmartin.ox.ac.uk/downloads/academic/The_Future_of_Employment.pdf
[2] https://www.slideshare.net/sarahguo/saastr-2017-aienabled-saas-4-models-for-ml-as-competitive-advantage
[3]
[4] http://www.oxfordmartin.ox.ac.uk/downloads/academic/The_Future_of_Employment.pdf
[5] The Risk of Automation for Jobs in OECD Countries: A Comparative Analysis. DELSA/ELSA/WD/SEM(2016)15.
[6] http://www.mckinsey.com/business-functions/digital-mckinsey/our-insights/where-machines-could-replace-humans-and-where-they-cant-yet
[7] http://www.g20-insights.org/wp-content/uploads/2017/03/1899-2.pdf
[8] Acemoglu, Daron and Restrepo, Pascual. Robots and Jobs: Evidence from US Labor Markets. 2017.
[9] http://www.history.com/this-day-in-history/fords-assembly-line-starts-rolling
[10] http://www.toyota.co.jp/en/kids/faq/b/01/06/
[11] http://www.history.com/this-day-in-history/fords-assembly-line-starts-rolling
[12] http://time.com/3912739/see-inside-bmws-largest-manufacturing-plant-in-south-carolina/
[13] http://www.independentmail.com/story/news/local/2017/06/23/robotics-part/394124001/
[14] Complexity is simply a function of the statistical dimensionality of the n-step.
[15] Examples of n-step forms of automation include prediction (numerical or text transformation), summarization, classification, and other forms of probabilistic interpretation.
[16] http://www.ifuturo.org/sites/default/files/docs/automation.pdf
[17] http://amp.weforum.org/agenda/2017/07/how-long-before-a-robot-takes-your-job-here-s-when-ai-experts-think-it-will-happen
[18] https://en.wikipedia.org/wiki/Emergence
[19] This equation implies that unsupervised learning methods are cheaper by the amount of training data they no longer require, thereby decreasing variable cost even further.
[20] This is common practice among internet giants like Facebook and Google.
[21] This is why humans are sometimes still involved in manufacturing processes, i.e., when a machine is too expensive to install and a human is kept “in-the-loop.”
[22] Intn = (ARCn + AICn + AICn + APCn + Time (ARMn + ETn))
[23] https://news.greylock.com/the-new-moats-53f61aeac2d9
[24] A variety of approaches to measuring complexity for business processes already exist.
