Five Tips for Skating Along the Maturity Curve to Everyday AI


There is barely a living room or a boardroom in the GCC where AI is not intensely discussed daily. Lately, the discussion has morphed from “Should we use it?” to “How do we use it?”. This shift will inevitably set many organisations on a path to a new era of Everyday AI, in which maturity is maximised and AI becomes part of the organisation’s lifeblood.

From education and initial steps to scaling up and transforming the enterprise, care must be taken at every stage. The maturity path begins with exploring options and use cases to see where AI might add value. We then move on to experimentation, building awareness through quick wins and less ambitious projects. By evaluating these early projects, we establish a baseline for judging future ones.

Now we are in a position to expand AI usage across the organisation and accelerate time to value for all business functions. As such, we will set about embedding AI in all processes and operations. When we reach this stage, we will have delivered Everyday AI, where every employee, from the C-suite on down, not only uses AI regularly but also thinks in terms of AI when encountering an operational challenge. It sounds simple, but, in truth, it is easy to become derailed. Here are five tips to ensure you do not.

Cost optimisation

There are many ways to avoid overspending, in both money and time, when moving through each AI maturity layer. First, code snippets and cleaned datasets can be reused. At an early stage, program leaders should explore data catalogues (library-like repositories built around findability), data as a product (in which datasets carry concepts such as security, clarity, accuracy, and discoverability with them), and data mesh, which adds domain-specific metadata so that each business unit can more easily find the data that matters most to it.

We can also optimise costs by ensuring different teams are not duplicating work. Here, we return to the concept of reuse. The right governance and management must be in place to promote information sharing if such labour duplication is to be avoided.  

The right data for the right use case

Cost optimisation can continue in specific areas, such as use case identification and uniting the right data with the right project. The ideal AI tools will offer capabilities to clean data and improve its quality. They will offer the means to label data appropriately and ensure the right level of access and connectivity. One approach that is increasing in popularity is that of “business translators” — professionals skilled in translating business requirements into data requirements. They are also capable of managing data workstreams and acting as intermediaries between data teams and business teams. With one foot in each world, the business translator can accelerate AI maturity.

Replication and scalability in MLOps

If the long-term goal of becoming an Everyday AI enterprise is to be realised, project teams must consider scalability and cost of ownership. AI platforms have become adept in these areas by providing pre-built frameworks for moving models into production and accommodating MLOps strategies. They cover the entire data pipeline and model maintenance and offer a range of tools to apply governance standards to the AI program, including those that monitor model decay. 

It is important to remember that one of the main reasons AI programs fail is that they do not account for model drift. ML models are built on data; if that data changes, the patterns the model learned no longer hold, and its predictions degrade. MLOps, and the tools that support it, help ensure the business impact remains positive. Teams must keep a tight rein on versioning and measure the business outcomes of newer models to ensure scaling is happening in the right direction.
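To make the drift idea concrete, here is a minimal sketch of one widely used drift score, the Population Stability Index (PSI), which compares the distribution of a feature at training time with what the model sees in production. This is an illustrative example, not any particular platform’s implementation; the function and variable names are our own, and the 0.2 alert threshold is a common rule of thumb rather than a universal standard.

```python
import math
import random

def psi(baseline, current, bins=10):
    """Population Stability Index: compares a feature's training-time
    distribution (baseline) with its production distribution (current).
    Higher values mean the data has shifted further from training."""
    lo, hi = min(baseline), max(baseline)
    # Bin edges derived from the baseline's observed range
    edges = [lo + (hi - lo) * i / bins for i in range(1, bins)]

    def proportions(values):
        counts = [0] * bins
        for v in values:
            counts[sum(v > e for e in edges)] += 1
        # Small floor avoids division by zero and log(0) in empty bins
        return [max(c / len(values), 1e-4) for c in counts]

    p, q = proportions(baseline), proportions(current)
    return sum((a - b) * math.log(a / b) for a, b in zip(p, q))

random.seed(42)
train = [random.gauss(0.0, 1.0) for _ in range(5000)]
stable = [random.gauss(0.0, 1.0) for _ in range(5000)]   # same distribution
shifted = [random.gauss(0.8, 1.0) for _ in range(5000)]  # simulated drift

print(f"stable:  {psi(train, stable):.3f}")   # near zero: no action needed
print(f"shifted: {psi(train, shifted):.3f}")  # large: investigate or retrain
```

In practice, an MLOps pipeline would compute a score like this per feature on a schedule and raise an alert, or trigger retraining, when it crosses an agreed threshold (values above roughly 0.2 are conventionally treated as significant drift).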

Robust governance

AI governance is not the same as data governance. When business and technology leaders visualise Everyday AI, they see an enterprise where everyone is “clued in”. But how does this square with regulatory compliance? Nowadays, especially given the intense scrutiny of AI, we must move beyond simple security, quality, and architecture (data governance) issues to include ML model drift and responsible AI. 

Data governance is a subset of AI governance. If this point is realised too late in the AI maturity cycle, it becomes more expensive to rectify. Still, assuming true AI governance is built in from the outset, organisations can execute end-to-end model management at scale, driven by risk-adjusted, compliant value delivery. 

Mitigate change management pitfalls

So much is involved in moving up the AI maturity curve. What is the right operating model? What is the appropriate budget? What talent do we need, and how do we attract, retain, and upskill it? It is an iterative journey that does not happen in a day. There will be resistance from those who think their current tools and teams work efficiently enough. To ease this friction, businesses can implement training that imparts soft skills (sensitising all stakeholders to the positions of others) and hard skills (allowing non-technical leads to control the initiation and delivery of AI projects).

Extracting value

When it reaches Everyday AI, the business has a toolbox of value-generating techniques that can be applied at will. The maturity associated with Everyday AI means decision-makers already know which tool works best for each type of business problem, so the route to a solution is faster than if the enterprise were tackling the challenge in isolation from any ongoing AI strategy.

For AI to add value, organisations have to live it, breathe it, and own it. Establish a foundation for reuse, know your data, and ensure that everyone who may encounter a challenge is well versed in the past successes of others. Learn lessons, govern well, and grow.