Sandeep Dutta, Chief Practice Officer at Fractal, shares his insights on the challenges of harnessing data and analytics, predictive modelling and forecasting, and the interpretability and explainability of AI models.
“Companies need to look at AI, engineering and design together to power the right solutions. The right data engineering platform then runs it. And it has to incorporate human behaviour and design thinking to make the solution successful,” says Sandeep Dutta, Chief Practice Officer at Fractal.
Fractal Analytics is an AI company that empowers global Fortune 500 companies to make informed decisions through AI and analytics, scalable data engineering, and human-centred design.
Excerpts from the interview:
What challenges do you see in harnessing data and analytics for businesses in this region, and how do you address them?
Data and analytics for business in the Middle East region are evolving quickly. Over the past few years, many organisations have established mature data analytics practices. However, there are some challenges:
- Data privacy and security: Regulations in the region impact how personal data can be collected and used, and complying with them while leveraging data for business purposes is complex. Navigating the different data protection and industry-specific regulations requires a thorough understanding of the legal landscape.
- Cultivating a data-driven culture: This requires a mindset shift and sustained change-management effort. Emphasising data-informed decision-making and incorporating analytics into the various business functions are key.
- Ethical deployment: Deploying data and analytics solutions ethically, considering issues like bias and transparency, is also a key challenge; a Responsible AI framework helps address it.
To overcome these challenges, organisations need a comprehensive strategy: investing in data governance, establishing partnerships, and seeking expert guidance. This will help them unlock the full potential of data and analytics.
How do you approach the ethical considerations surrounding data collection, privacy, and usage in the context of data and analytics projects?
We strongly advocate that organisations adopt a Responsible AI framework and consciously incorporate it into the development of AI and analytics solutions. Various frameworks are available; I think which one you choose matters less than following it deliberately and turning it into a practical toolkit for users.
AI is a general-purpose tool. How do you see it altering the future of work? How can human resources prepare for this shift?
AI, and Gen AI models in particular, is showing promising early business results. Research suggests that productivity, augmentation, upskilling and efficiency will be the key themes altering the future of work:
- 15% of all worker tasks in the US could be completed much faster at the same level of quality (Productivity)
- 50% of human tasks could be accelerated with access to tools built using foundation models (Augmentation)
- 3x faster experience curve (Upskilling)
- 35% productivity increase for lowest skill quantile (Efficiency)
This shows that Gen AI won’t replace AI models but will augment AI use cases. We firmly believe that AI and humanity will work together in the future. Humans with AI will outperform humans who do not use AI, and companies that use AI will significantly outperform those that don’t.
AI is already transforming customer interaction, content generation, creative work, knowledge management, and computer coding and testing, to name a few areas. We believe the coming year will bring many new use cases impacting every key industry, which will also shift the tasks that humans focus on.
To prepare for this shift, humans will have to:
- Become better at what they do; AI will deliver a basic level of quality, but genuine human expertise will be difficult to replace.
- Enhance core human skills that AI can’t deliver: creativity, empathy, critical thinking, inventiveness, and innovation.
- Leverage AI to augment the tasks that humans perform today.
- Learn and upskill as required and be ready for constant unlearning and relearning.
- Anticipate and prepare for new job roles emerging from an AI-led world.
What advice would you give data leaders to best pilot new solutions and get maximum business value from the investment?
Companies must look at AI, engineering and design together to power the right solutions. The right data engineering platform then runs it, and it has to incorporate human behaviour and design thinking to make the solution successful.
What key factors contribute to building a strong data-driven culture within an organisation, and how do you navigate any barriers to implementation?
Culture has to start from the top. Organisation design, and where the chief data officer, for example, sits within the organisation, really determines how seriously people view data-led decision-making.
It has to start with the leadership and extend to other important elements: setting the right KPIs, training the organisation, ensuring the right tools are available, and creating data champions. The list goes on, but I think the most important factor is for leadership to show intent by embedding data in their own decision-making.
How does Fractal address the interpretability and explainability of its AI models, especially in industries where decision-making transparency is crucial?
Our teams address the interpretability and explainability of AI models through a comprehensive approach encompassing several techniques and methodologies. By focusing on three main aspects – data explainability (XAI), model XAI, and business XAI – Fractal ensures that decision-making transparency is achieved, particularly in industries where it is crucial.
- Utilising interpretable models: We use models that are inherently interpretable and easy for humans to understand. Selecting interpretable models makes the decision-making process more transparent, as stakeholders can comprehend the reasoning behind the predictions.
- Applying post-hoc interpretability techniques: Where black-box models are used, we apply post-hoc interpretability techniques to explain the model’s predictions and decisions. Techniques such as Partial Dependence Plots, Accumulated Local Effects (ALE), Information Value, and Sensitivity Checks enable stakeholders to understand how the model arrived at specific outcomes (see the first sketch after this list).
- Business XAI and quantile-based analysis: We understand the importance of business XAI and use techniques like split-and-compare quantile plots to explain how the model’s predictions impact business objectives. This approach provides granular risk assessment and customisable binning, allowing stakeholders to make more effective decisions based on the model’s outputs (see the second sketch after this list).
- Counterfactuals: We also utilise counterfactual explanations, which provide causal insights by describing “what if” scenarios. This helps stakeholders understand the influence of certain variables on the model’s predictions and offers a clearer picture of causality (see the third sketch after this list).
- Error analysis and refinement: We employ error analysis and visualise errors in our AI models by collecting and labelling diverse, representative data and identifying the patterns and features that contribute to model errors. By addressing issues such as bias, noise, or missing data, Fractal refines the model to improve its accuracy and transparency.
- Model agnostic post-deployment analysis: A salient feature of our approach is that it is model agnostic, which means the techniques for interpretability and explainability can be applied to various AI models. This ensures that post-deployment analysis remains consistent and transparent across models used in different industries.
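To make the post-hoc techniques above concrete, here is a minimal partial dependence sketch using scikit-learn. The synthetic dataset, the gradient-boosting model, and the chosen features are illustrative assumptions for this write-up, not Fractal’s actual pipeline.

```python
# Minimal partial dependence sketch (illustrative assumptions throughout):
# a synthetic dataset and a gradient-boosted "black box" stand in for a
# real business model.
import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import PartialDependenceDisplay

# Synthetic stand-in for a real business dataset.
X, y = make_classification(n_samples=1000, n_features=5, random_state=0)

# A black-box model whose internals are hard to read directly.
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# Partial dependence shows each feature's average effect on the model's
# predictions, marginalising over the remaining features.
PartialDependenceDisplay.from_estimator(model, X, features=[0, 1, 2])
plt.tight_layout()
plt.show()
```

ALE plots, also mentioned above, serve a similar purpose but handle correlated features more gracefully.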
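The split-and-compare quantile idea can likewise be sketched in a few lines: bin model scores into quantiles and compare the observed outcome rate in each bin. The synthetic scores and the choice of deciles here are assumptions; the binning is the “customisable” part mentioned above.

```python
# Quantile-based sketch (synthetic data; decile binning is one assumed
# choice of the customisable binning described above).
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
scores = rng.uniform(size=2000)             # stand-in model scores
actuals = rng.uniform(size=2000) < scores   # outcomes correlated with score

df = pd.DataFrame({"score": scores, "actual": actuals})
df["decile"] = pd.qcut(df["score"], 10, labels=False)

# Compare the mean predicted score against the observed outcome rate per
# decile; large gaps flag score ranges where the model misleads the business.
summary = df.groupby("decile").agg(mean_score=("score", "mean"),
                                   outcome_rate=("actual", "mean"))
print(summary)
```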
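Finally, a toy counterfactual search, assuming a simple numeric feature space: starting from one instance, nudge a single feature until the prediction flips. Real deployments would typically use a dedicated library such as DiCE or Alibi, but the sketch shows the “what if” logic.

```python
# Toy counterfactual search (illustrative only): increase one feature of a
# single instance until the model's predicted class flips.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=4, random_state=0)
model = LogisticRegression().fit(X, y)

def simple_counterfactual(x, feature, step=0.1, max_steps=200):
    """Nudge `feature` upward until the predicted class changes, if ever."""
    original = model.predict(x.reshape(1, -1))[0]
    candidate = x.copy()
    for _ in range(max_steps):
        candidate[feature] += step
        if model.predict(candidate.reshape(1, -1))[0] != original:
            return candidate  # the "what if" instance that flips the label
    return None  # no flip found within the search budget

x0 = X[0]
cf = simple_counterfactual(x0, feature=2)
if cf is not None:
    print(f"Raising feature 2 from {x0[2]:.2f} to {cf[2]:.2f} flips the prediction.")
```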