The trinity standing before AI advancement

AI and machine learning (ML) in their existing states have powerful capabilities to optimize processes and produce efficiency gains. Applications of AI are already integrated into everyday life, from analyzing health data to transporting people and cargo. Yet businesses are currently far from welcoming AI into their operations with open arms. A trinity of regulation, human capabilities and ethical considerations must be navigated if this is to change significantly by 2030.

First, a definition. ML is a branch of artificial intelligence in which systems carry out a specific task by learning from patterns, with minimal human intervention. AI, meanwhile, is typically spoken of in terms closer to artificial general intelligence: the level at which systems can perform any task a human can, with no intervention. This is not expected to occur until well after 2030, if at all. Rather, the AI of today can train itself, but only within a defined scope and for specific tasks, such as facial recognition.

Currently, there are four approaches to ML, which form a spectrum from traditional ML to the beginnings of AI (a brief code sketch follows the list):

  • Supervised: supervised learning concerns itself with pattern matching and recognition, prediction, and automation.
  • Transfer: transfer learning resembles supervised learning but applies these capabilities across related problems.
  • Reinforcement: reinforcement learning is goal-oriented, continual learning AI, as seen in game-playing AIs.
  • Unsupervised: unsupervised learning resides in the realm of information synthesis, such as deepfakes and automated text creation.
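As a rough illustration of this spectrum (not drawn from the article itself), the short Python sketch below contrasts the supervised and unsupervised ends using scikit-learn; the dataset, models and parameters are assumptions chosen purely for demonstration.

```python
# Minimal sketch of the two ends of the ML spectrum described above.
# Dataset, model choices and parameters are illustrative assumptions only.

from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

digits = load_digits()  # labelled images of handwritten digits (0-9)
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0
)

# Supervised: learn a mapping from inputs to known labels
# (pattern recognition and prediction).
clf = LogisticRegression(max_iter=1000)
clf.fit(X_train, y_train)
print("Supervised accuracy:", clf.score(X_test, y_test))

# Unsupervised: group the same inputs without using the labels at all
# (structure discovery rather than prediction).
clusters = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(digits.data)
print("Unsupervised cluster sizes:", [int((clusters == k).sum()) for k in range(10)])
```

The same data serves both ends of the spectrum: the classifier learns from the labels it is given, while the clustering step discovers structure in the images without ever seeing those labels.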

As new complementary technologies evolve, AI will continue to deliver gains in performance and accuracy. The capability for significant disruption to society will be there, but the uptake of the technology may not be.

As of today, the US and China have surpassed 50% penetration of AI in industries such as drug development, where ML excels at pattern recognition and at optimizing the design process1. However, nearly 60% of organizations have no plan to introduce AI or ML capabilities into their operations, while only 28% have a plan for the next 12-36 months2.

This may be due in part to regulation varying in scope between industries, leading to equally varied uptake. In the US, for example, the FDA has very stringent requirements around the application of AI in healthcare treatment, slowing adoption in that segment, although it is noteworthy that the FDA has sharply increased its approval of medical algorithms over the past two years. Meanwhile, other markets lag for a variety of reasons, including expertise gaps, development costs, and trust in AI.

A multi-trillion-dollar business

Despite the slow start, rapid adoption of AI and ML across all segments is expected over the coming years. The disruptive potential of AI has earned it a forecast added value of $15 trillion to the world economy by 20304, and China, predicted to become the world’s largest economy, has set its sights on being a world leader in AI by 2030.

Yet these developments depend on gaining expertise, accommodating regulations and winning over public perception. On the one hand, the availability of data scientists, machine learning experts and business stakeholders able to understand, articulate and execute novel applications of the technology will to a large degree determine the extent to which AI is successfully implemented in business operations. On the other hand, even with the right talent, the ethical implications of AI replacing humans, of privacy and of algorithmic bias must be navigated, and legal and social expertise will be needed to understand both the positive and negative impacts of AI.

But perhaps the most immediate limiting factor is regulation. Just as regulation of the web has pulled some markets to adopt other regions’ rules in order to participate, as occurred with GDPR, regulation of AI and ML may limit adoption in certain markets while allowing it to flourish in others. One significant regulatory path underway is the EU’s High-Level Expert Group on Artificial Intelligence (AI HLEG), which earlier this year released its Ethics Guidelines for Trustworthy AI,5 covering oversight, robustness, privacy, transparency, bias, accountability, and the societal and environmental impacts of AI. AI HLEG has placed a human-centered approach at the core of its guidance for AI in Europe, though what this means in practice for development and deployment has yet to be demonstrated.

One thing is clear: AI is advancing. It is human regulation, our ethical compass and our technical capabilities that will determine how fast.

Contributors

Main author: Chris Pelsor

Editor: Tiffany Hildre
