Enterprise AI Isn’t A Magic Bullet

Artificial Intelligence (AI) is a fundamental organisational asset for rebooting business in today’s world, helping companies find more efficiency, business opportunity, and speed-to-value, says Sid Bhatia, Regional Vice President – Middle East & Turkey, Dataiku. In this freewheeling interview, Bhatia discusses everything from bias in algorithms and the technologies critical to implementing responsible, end-to-end machine learning, to Responsible AI and the challenges of AI governance in the Middle East.

Excerpts from the interview: 

Companies increasingly rely on AI to help them make decisions using data. But is Everyday AI the future?

Everyday AI is the future. But not everyone has the same level of technical expertise, and it can be easy to be intimidated by the processes and seemingly never-ending nuances of Data Science. Getting bogged down in necessary but tedious or complex tasks in which you don’t have expertise can consume valuable time. However, there are vendors on the market that facilitate the use of pre-built components and automation wherever possible, streamline work processes, and provide consistent management and governance across teams and projects, creating transparent, repeatable, and scalable AI and analytics programs.

How can enterprises overcome the fear of data and AI?

Our brains tell us to fear the “black box” of AI, but our brains often act more like “black boxes” themselves. When a business leader makes a decision based on intuition, people tend not to blink twice. Yet when a decision is based on AI, people tend to feel less comfortable. Much like the processes of the human brain, AI is becoming more transparent. The “black box” of AI is exaggerated because of our instinctive fear of the unfamiliar; we glorify seemingly “aha” moments in ourselves but demonise them in machines.

The main concern with AI is undetected bias in the algorithm, but bias comes from the data that the machine is given, and we can see and analyse that data. It’s much easier to minimise the bias in this data than to unravel all the past experiences and potential biases of a human.
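
To make this concrete, here is a minimal sketch (a generic illustration in pandas, not a Dataiku feature) of what analysing training data for bias can look like; the file, column names, and threshold are hypothetical.

```python
import pandas as pd

def outcome_rates_by_group(df: pd.DataFrame, group_col: str, label_col: str) -> pd.Series:
    """Positive-outcome rate for each group in the data."""
    return df.groupby(group_col)[label_col].mean()

def has_large_disparity(rates: pd.Series, max_gap: float = 0.1) -> bool:
    """True if the gap between the best- and worst-treated groups exceeds max_gap."""
    return (rates.max() - rates.min()) > max_gap

# Hypothetical loan-application data with a group column and a binary 'approved' label.
df = pd.read_csv("applications.csv")
rates = outcome_rates_by_group(df, "applicant_group", "approved")
print(rates)
if has_large_disparity(rates):
    print("Warning: approval rates differ notably across groups; review the data before training.")
```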

Furthermore, many interfaces nowadays have built-in checkpoints which keep you in the loop by allowing you to assess the validity of each step in model construction.

We have many established data science interfaces in the Middle East. How is Dataiku different? 

Dataiku supports agility in organisations’ data efforts via collaborative, elastic, and responsible AI — all at enterprise scale. At its core, we believe that in order to stay relevant, companies need to harness Enterprise AI as a widespread organisational asset instead of siloing it into a specific team or role. 

To make this vision of Enterprise AI a reality, Dataiku provides one simple UI for the entire data pipeline, from data preparation and exploration to machine learning model building, deployment, monitoring, and everything in between. 

It’s built from the ground up to support usability in every step of the data pipeline and across all profiles — from data scientist and cloud architect to the analyst. Point-and-click features allow those on the business side and non-coders to explore data and apply AutoML in a visual interface. At the same time, robust coding features — including interactive Python, R, and SQL notebooks, the ability to create reusable components and environments, and much more — make data scientists and other coders first-class citizens as well.

Share a use case of a successful machine learning and AI deployment in an enterprise.

Airlines are a good example of organisations that have implemented AI solutions that empower them to take a truly data-driven approach to their daily operations. Some examples of these use cases include the following:

Ticket pricing optimisation 

Flight ticket prices are based on multiple factors such as oil prices, flight distance, time of purchase, competition, seasonality, and more. Some of these parameters change daily, which means that companies need to continuously adapt ticket prices to these changes. Thanks to AI and ML, companies can not only analyse past data but also predict demand based on multiple indicators. In addition, they can increase long-term sales revenue by incorporating a more balanced flight booking system.
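
For illustration, a demand forecast of this kind could be sketched with scikit-learn as below; the file and feature names are hypothetical, and a production pricing system would be considerably more involved.

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

# Hypothetical historical bookings: one row per flight and sale date, with the
# pricing factors mentioned above and the number of seats sold as the target.
df = pd.read_csv("historical_bookings.csv")
features = ["oil_price", "flight_distance_km", "days_to_departure",
            "competitor_min_price", "month", "day_of_week"]
X, y = df[features], df["seats_sold"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
model = GradientBoostingRegressor(random_state=42).fit(X_train, y_train)

print("MAE on held-out flights:", mean_absolute_error(y_test, model.predict(X_test)))
# The predicted demand can then feed a pricing rule, e.g. raising fares when the
# forecast exceeds the remaining seat capacity for a flight.
```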

Crew management 

Airline crew managers have to manage complex employee networks, including pilots, flight attendants, and engineers. A number of factors affect day-to-day crew management, such as availability, credibility, certifications, and qualifications. Rescheduling any crew member can be a cumbersome task. However, by implementing an AI-based crew rostering system, airlines can optimise and partially automate the process, reducing costs and errors and making full use of crew members’ potential.
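
As a simplified illustration of the optimisation behind rostering, the sketch below assigns crew to flights with SciPy’s linear_sum_assignment; real rostering systems handle many more constraints, and this cost matrix is hypothetical.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical cost matrix: rows are crew members, columns are flights.
# A low cost means the crew member is available and qualified; a very high
# cost effectively rules out an infeasible pairing (e.g. a missing certification).
INFEASIBLE = 1_000.0
cost = np.array([
    [2.0, 5.0, INFEASIBLE],   # crew member 0
    [3.0, INFEASIBLE, 4.0],   # crew member 1
    [INFEASIBLE, 1.0, 6.0],   # crew member 2
])

crew_idx, flight_idx = linear_sum_assignment(cost)
for c, f in zip(crew_idx, flight_idx):
    print(f"Assign crew {c} to flight {f} (cost {cost[c, f]})")
```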

Customer service 

By using AI, companies can optimise their operational and labour costs at the same time. AI-based tools can provide information on future flights, assist with check-in requests, and resolve basic customer queries.

Predictive maintenance 

Aircraft maintenance is a tough task, and if done incorrectly, it can cost a fortune. Thanks to AI and machine learning, companies can now predict potential aircraft failures before they actually happen, with higher accuracy. The use of AI with predictive maintenance analytics can lead to a systematic approach to how and when aircraft maintenance should be completed. Nowadays, airline maintenance teams are dealing with huge amounts of data produced by newer aircraft, and the need to generate quick insights and implement accurate predictive models is greater than ever. Centralised data platforms can facilitate better, more efficient data governance and effectively manage model lifecycles.
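
As a simplified sketch of the modelling behind predictive maintenance, the example below builds rolling-window sensor features and trains a classifier to flag likely failures; the sensor columns, label, and 30-day horizon are hypothetical.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Hypothetical per-flight sensor log with a label marking whether the component
# failed within the following 30 days.
df = pd.read_csv("engine_sensor_log.csv").sort_values(["aircraft_id", "flight_date"])

# Rolling averages summarise each aircraft's recent sensor behaviour.
for col in ["exhaust_gas_temp", "vibration", "oil_pressure"]:
    df[f"{col}_roll_mean"] = (
        df.groupby("aircraft_id")[col]
          .transform(lambda s: s.rolling(10, min_periods=1).mean())
    )

features = [c for c in df.columns if c.endswith("_roll_mean")]
X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["failure_within_30d"], test_size=0.2, random_state=0
)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```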

What challenges do you encounter in collaboration and governance with machine learning and AI in the Middle East?

Going from producing one machine learning model a year to thousands is well within the average company’s reach, and operationalisation has made it possible for a single model to impact millions of decisions (as well as people). Of course, no business plans to do irresponsible AI. But on the other hand, most aren’t doing anything to explicitly ensure they are responsible either, and therein lies the problem.

In practical terms, responsible AI matters because for some industries (financial services, healthcare, human resources, etc.), it’s a legal requirement and under growing scrutiny from regulators. Even where white-box solutions, interpretability, and demonstrable efforts to eliminate bias aren’t strictly required, they’re good business for anyone because they lower risk.

But responsible AI is also important because it’s what will make organisations’ AI systems stand the test of time and ensure that they are not shattered by inevitable future developments in the space (like legislation, but also new technologies). 

Collaboration and governance are key topics here in the Middle East. 

Here are some recommendations to have a Responsible AI strategy for your organisation:  

Accountability: Ensuring that models are designed and behave in ways aligned with their purpose. Accountability comes down to eliminating potential bias and making AI human-centred. Introducing unintended biases into models that spiral into PR disasters is a huge risk for enterprises that don’t take responsible AI into account.

Sustainability: Establishing the continued reliability of AI-augmented processes in both their operation and their execution. Sustainability includes introducing, at a minimum:

  • Model maintenance (a minimal drift-monitoring sketch follows this list)
  • Elasticity that allows for the incorporation of the latest technologies
  • Processes to recreate and reuse work to increase efficiency
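
To make model maintenance concrete, here is a minimal, illustrative drift check using the population stability index (PSI); the feature values and the 0.2 threshold are assumptions for the sketch, not a Dataiku feature.

```python
import numpy as np

def population_stability_index(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Compare a feature's distribution at training time vs. in production."""
    edges = np.linspace(min(expected.min(), actual.min()),
                        max(expected.max(), actual.max()), bins + 1)
    e_pct = np.clip(np.histogram(expected, edges)[0] / len(expected), 1e-6, None)
    a_pct = np.clip(np.histogram(actual, edges)[0] / len(actual), 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

# Hypothetical feature values captured at training time vs. scored in production last week.
rng = np.random.default_rng(0)
train_values = rng.normal(0.0, 1.0, 10_000)
live_values = rng.normal(0.3, 1.1, 2_000)

psi = population_stability_index(train_values, live_values)
if psi > 0.2:  # a commonly used rule of thumb for significant drift
    print(f"PSI = {psi:.2f}: the input distribution has shifted; consider retraining the model.")
```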

Governability: Centrally controlling, managing, and auditing the Enterprise AI effort.

Today’s enterprise is plagued by shadow IT: for years, different departments have invested in all kinds of different technologies and have accessed and used data in their own ways, to the point that even IT teams today don’t have a centralised view of who is using what and how. This issue becomes dangerously magnified as AI efforts scale.

If ML is the future of AI, what technologies are critical to implementing responsible, end-to-end machine learning?

Succeeding in AI initiatives, now and in the future, requires fostering a culture of data creativity at the individual level. However, organisations also need to find a way to harness that individual data creativity for collective company progress and purpose.

The future of scaling AI lies in bringing together all the skills in the organisation. A centralised project workspace allows everyone to contribute their unique talents to each project and to help each other. For example, data analysts or knowledge workers can prepare data using their domain knowledge, and data scientists can then develop models using this data. Each person applies their unique knowledge to create a higher-quality output, faster.

Data scientists create project bundles that IT operators can easily test and roll into production environments. Data scientists create and share visualisations and applications that empower business stakeholders to understand data and make better decisions.

The data science platforms marketplace is complex. What advice do you give prospective customers when they’re looking for their data platform?

In 2021, one of the biggest questions is: how could organisations even think about making an investment in AI right now? The reality is that many businesses are easily disrupted by big change, and arming the business to deal with these kinds of changes and to face the challenges ahead via Enterprise AI makes sense.

AI is no longer a nice-to-have or something that mad scientists experiment with in isolated teams; it’s a fundamental organisational asset to reboot business in today’s world, finding more efficiency, business opportunity, and speed-to-value.

Enterprise AI isn’t a magic bullet and it won’t make businesses immune to the world’s changes, but good AI implementation can be harnessed to improve organisations’ ability to adapt to the world around them.

I have three recommendations for prospective customers when they’re looking for their data platform:

Democratisation of Data Science: Today, democratisation of Data Science across the enterprise, with tools that put data into the hands of the many and not just the elite few (like data scientists or even analysts), means that companies are using more data in more ways than ever before. And that’s super valuable; in fact, the businesses that have seen the most success in using data to drive the business take this approach. Data democratisation is the path forward to eventually enabling AI services. The idea is deeply intertwined with the concepts of collaboration as well as self-service analytics.

Collaboration is the key to Scaling AI: Collaboration is about making AI more widespread and relevant through access to a wider population within the enterprise. Part of the reason the term comes up so often (but also why there is a lack of specificity around its exact definition) is that it actually has two distinct parts.

Responsible use of AI: The use of data across roles and industry has become (and will continue to become) increasingly restricted. But that doesn’t have to mean a pause or paralysis in data use; it simply means tighter processes around the use of data in the enterprise and a sense of responsibility down to the individual.

These same principles that guide careful use of data in a privacy-concerned world as well as responsible AI will also ensure that AI processes and products are interpretable. While there is no process or technology that can ensure ethical outcomes when it comes to AI, training people to do responsible AI is the way to start.

Recently, Dataiku made an important announcement about its relationship with Snowflake. Please elaborate.

On June 9, we announced an integration with Snowflake’s Snowpark and Java user-defined functions (UDFs), a new developer experience for Snowflake.

With the Java UDF integration, Dataiku users can now take better advantage of Snowflake for more operations: visual data preparation users can now push down the computation of more visual recipes to the Snowflake engine, and scoring can also take place directly in Snowflake. Both areas can then take advantage of the elastic scale of Snowflake, so organisations can operate on larger datasets, work faster, and only pay for what they need. Pushdown also minimises data movement so data stays in Snowflake, which enhances data security and relieves compliance concerns.
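
As an illustration of the pushdown idea (the general pattern, not Dataiku’s internal integration), the sketch below scores records entirely inside Snowflake by calling a Java UDF from SQL through the Python connector; the table, the SCORE_CHURN UDF, and the connection details are hypothetical.

```python
import snowflake.connector

# Connection details are placeholders; the point is that the scoring SQL runs
# inside Snowflake, so the raw data never leaves the warehouse.
conn = snowflake.connector.connect(
    account="my_account", user="my_user", password="...",
    warehouse="ANALYTICS_WH", database="SALES", schema="PUBLIC",
)
try:
    cur = conn.cursor()
    # SCORE_CHURN is assumed to be a Java UDF already registered in Snowflake
    # that wraps a trained model.
    cur.execute("""
        CREATE OR REPLACE TABLE SCORED_CUSTOMERS AS
        SELECT customer_id,
               SCORE_CHURN(age, tenure_months, monthly_spend) AS churn_score
        FROM CUSTOMERS
    """)
    cur.execute("SELECT COUNT(*) FROM SCORED_CUSTOMERS")
    print("Rows scored in-database:", cur.fetchone()[0])
finally:
    conn.close()
```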

With Snowpark, users will also see an increase in functionality and more access for developers — so that more developers in the enterprise can build services to interact with Snowflake services, while making the most of Dataiku’s platform for advanced analytics at scale.

Dataiku recently raised $400 million. Tell us about that. 

In early August, we announced $400 million in Series E investment led by Tiger Global, with participation from several existing investors. This capital, which brings the company’s valuation to $4.6 billion, will power Dataiku’s mission to systemise the use of data for exceptional business results. 

We are on a mission to enable Middle East organisations to use data by removing the friction surrounding data access, cleaning, modelling, and deployment. This funding will further allow us to continue supporting agility in organisations’ data efforts via collaborative, elastic, and responsible AI, all at enterprise scale.