If It Works, Great. Else, We “Fail Fast”

Velocity UAE Interview with Ines Ashton - Director of Advanced Analytics - Mars

Datatechvibe spoke to Ines Ashton, Director of Advanced Analytics at Mars, about scaling AI projects past proof of concept.

“Mars has a very strong culture of innovation, so this really helps unblock AI research at Mars. We use an agile approach to experiment and build POCs,” said Ines Ashton, Director of Advanced Analytics at Mars. She speaks about how data leaders can scale ML projects past proof of concept, including creating a common language, addressing financial blockers and overcoming data challenges.

Excerpts from the interview:

What advice would you give data leaders to scale AI projects past proof-of-concept?

In my opinion, there are three blockers to scaling AI and ML.

  • Creating a common language: It is worth beginning this conversation by pointing out that there is a difference between “true” AI, a machine that can pass the Turing Test (the AI entity holds a conversation with a human behind a closed door and the human is unaware), and the current (2023) definition, in which AI is an umbrella term for computers performing tasks commonly associated with humans by leveraging simple or sophisticated mathematical models.

    Also, it is worth noting that when the word AI is used, fewer than 0.1% of cases refer to “true” AI; the rest use the umbrella term. So, to avoid unnecessary confusion, we need to encourage common definitions. This is not easy, especially in a large organisation such as Mars, but you can start small: an immediate team, then a D&A community, then a business community, and so on. For example, in my team, we never use the word AI, as we acknowledge its true meaning, so you are more likely to hear us use the terms ML or Deep Learning, or even reference specific model names, rather than the umbrella term.

    In cases where definition alignment is not possible, I recommend avoiding all complex terminology and instead using specific examples that illustrate the topic in a way that is easy to understand. For example, at Mars, we have an X-ray image classification model that helps radiologists identify 53 thorax/abdominal X-ray findings in cats and dogs.

Removing financial blockers impacting development or scaling: At Mars, I use three methods to deal with this issue:

  • Fail fast: Mars has a very strong culture of innovation, which really helps unblock AI research. We use an agile approach to experiment and build POCs; if it works, great, else we “fail fast”.
  • Strength in numbers: Mars is also very large, so it can spread costs; a small benefit repeated millions of times (e.g. the X-ray model) becomes significant. In other words, every ML model has huge potential at Mars, so cost is less of a blocker.
  • Bolt-on: At Mars, we “bolt on” machine learning during the standard upgrade process. As we have been digitising the workspace, we have used this opportunity to build models on top of existing data sets, which means the cost is shared; ML is not a standalone, expensive deliverable.
Overcoming data challenges that might arise during build or scaling: A few techniques that I use with my team are:
  • Understand the business problem and assess whether the available data has any limitations. Only progress if the data limitation is understood or can be mitigated, and ensure that the data does not hold a known bias.
  • Use proxy data for missing data points. We don’t live in a perfect world, so don’t strive for perfection but for good enough; proxy data can play a very powerful role in this space.
  • Be clear about what conclusions you can or can NOT draw from the data. AI is not a miracle worker, so make sure your stakeholders understand what business decisions can be drawn from the data.
  • Don’t cut corners; do extensive testing of the model at go-live.
  • Don’t hang your hat on one number; put ranges/confidence intervals on the results.
  • Sometimes waiting, collecting more data and then running the models is the best option. If you have only one chance to convince the business, don’t waste it by using flawed data.
  • Apply human oversight: does the outcome pass the “does this make sense in the real world?” test?
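The “put ranges/confidence intervals on the results” advice above can be sketched with a simple percentile bootstrap. This is a minimal, hypothetical illustration (the metric and data here are invented for the example, not from Mars):

```python
import random

def bootstrap_ci(values, stat=lambda xs: sum(xs) / len(xs),
                 n_resamples=2000, alpha=0.05, seed=42):
    """Percentile-bootstrap confidence interval for a statistic (default: mean)."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    # Resample with replacement, compute the statistic each time, then sort.
    stats = sorted(
        stat([rng.choice(values) for _ in range(len(values))])
        for _ in range(n_resamples)
    )
    lo = stats[int((alpha / 2) * n_resamples)]
    hi = stats[int((1 - alpha / 2) * n_resamples) - 1]
    return lo, hi

# Illustrative: per-image correctness of a classifier on a holdout set (1 = correct),
# i.e. an 87% point accuracy on 100 samples.
outcomes = [1] * 87 + [0] * 13
low, high = bootstrap_ci(outcomes)
print(f"accuracy = {sum(outcomes) / len(outcomes):.2f}, 95% CI = ({low:.2f}, {high:.2f})")
```

Reporting the interval rather than the single 0.87 figure makes it clear how much the number could move on a different sample, which is the point of not hanging your hat on one number.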

In summary, to scale AI projects past proof of concept, data leaders must start by using a common language, address the financial blockers and anticipate the data obstacles that come with scaling. By doing so, data leaders can ensure that their AI projects deliver value to the organisation and are adopted across the enterprise.

What advice would you give data leaders when it comes to investing in a technology stack and choosing partners?

At Mars, IT oversees the technology stack, so I have not spent too much time reflecting on this topic. However, my advice is to pick a tech stack and stick with it for a minimum of three to five years. At Mars, there are some disjointed legacy systems, which can make data orchestration and data analysis challenging. In addition, upskilling your workforce onto a new system more frequently than every three to five years becomes very hard after you reach a certain size.

When it comes to choosing a partner, my advice is:

  • Choose partners who understand the business: Look for partners who have experience working in your industry and who understand your business needs. Partners with a deep understanding of the business can help identify the right technology stack and provide valuable insights into using data to drive business value. For example, I work specifically in the Supply Chain space, and many vendors say they can do it, but very few have hands-on experience.
  • Treat your vendors as extensions of your team: To succeed, you need to work as one team; you are both in it together. Onboard your vendors extensively in the business and take them with you into critical meetings. Treating them as an extension of your team builds long-lasting relationships founded on respect and responsibility.

How can business leaders ensure trust in predictive analytics to prepare to build future resilience?

To ensure trust in predictive analytics and prepare for future resilience, business leaders should consider the following steps:

  • Define clear objectives: Define clear objectives for predictive analytics initiatives and ensure that they align with the organisation’s strategic goals. Objectives should be specific, measurable, achievable, relevant, and time-bound (SMART).
  • Data is your currency: Predictive analytics is only as good as the data it uses. You should ensure that the data used for predictive analytics is of high quality, accurate, and reliable. Data should be regularly audited and monitored for quality and completeness.
  • Be transparent: Document the methodology, assumptions, and limitations used in the analysis. This can help build trust among stakeholders and increase their confidence in the results.
  • Co-create with stakeholders: Involve stakeholders in the predictive analytics process to ensure that they understand how the analysis is being conducted and to gather feedback on the results. This can help build trust and increase the adoption of the results.

  • Continuously improve the models: Predictive analytics models should be continuously evaluated and refined to ensure they remain accurate and relevant. New data becomes available every day, so a model can always be improved. Use an MLOps team to ensure the models are supported and operating at their full potential.

For more information and registration, visit Velocity UAE.