Artificial intelligence systems take inputs in the form of images, audio, text and more, which makes filtering, handling and detecting malicious inputs and behaviours far more complicated. Cybersecurity is one of the top priorities of companies worldwide. The jump in the number of AI security papers from just 617 in 2018 to over 1,500 in 2020 (an increase of almost 143 per cent, as per an Adversa report) is a testament to the growing importance of securing AI systems.
Microsoft has recently released Counterfit – a tool to test the security of AI systems – as an open-source project. The tech giant’s latest move could be a major step towards building a robust AI ecosystem.
The tool comes in handy for penetration testing and red teaming AI systems. It ships with published attack algorithms preloaded, letting security teams evaluate models against known adversarial techniques out of the box. Furthermore, security professionals can drive Counterfit from existing offensive tools using its target interface and built-in cmd2 scripting engine.
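To see why a target interface matters, here is a minimal sketch of the idea – wrapping any model behind a uniform interface so generic attack algorithms can query it. The names (`Target`, `predict_fn`, the toy model) are illustrative assumptions, not Counterfit’s actual API.

```python
import numpy as np

# Hypothetical sketch (not Counterfit's real API): a uniform "target"
# wrapper lets the same attack algorithms query any model, regardless
# of how that model is implemented or hosted.
class Target:
    """Wraps a model so an attack algorithm can request predictions."""
    def __init__(self, predict_fn, input_shape):
        self.predict_fn = predict_fn
        self.input_shape = input_shape

    def predict(self, x):
        # Returns per-class scores for a batch of inputs.
        return self.predict_fn(x)

# Toy stand-in model: scores class 1 when the mean pixel is positive.
def toy_model(batch):
    means = batch.reshape(len(batch), -1).mean(axis=1)
    class1 = (means > 0).astype(float)
    return np.stack([1.0 - class1, class1], axis=1)

target = Target(toy_model, input_shape=(8, 8))
scores = target.predict(np.zeros((4, 8, 8)))
print(scores.shape)  # (4, 2)
```

An attack algorithm written against `Target.predict` never needs to know whether the model behind it is a local toy function, a PyTorch network, or a remote API.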
‘This tool was born out of our own need to assess Microsoft’s AI systems for vulnerabilities to proactively secure AI services, in accordance with Microsoft’s responsible AI principles and Responsible AI Strategy in Engineering (RAISE) initiative’, the Microsoft blog post read.
The tool aims to make attacks publicly available to the security community for prompt corrective actions while also providing an interface for building, managing, and launching attacks on models. Moreover, it uses terminology and workflows similar to Metasploit and PowerShell Empire – offensive tools already in wide use. Microsoft suggests using Counterfit in conjunction with the Adversarial ML Threat Matrix, an ATT&CK-style framework developed by MITRE and Microsoft to help security analysts detect threats to AI systems.
Counterfit can be used to scan AI models. For comprehensive vulnerability coverage of an AI model, security professionals may run attacks with the default settings, set random parameters, or customise them individually. Organisations can use the tool to probe for weaknesses, and fixing the vulnerabilities it surfaces hardens models against future attacks. Counterfit also has logging capabilities for recording attacks against a target model. Insights gained from this data could help data scientists and engineers better understand failure modes in AI systems.
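The default-versus-random scanning and logging workflow described above can be sketched roughly as follows. This is an illustrative stand-in, not Counterfit’s code: `run_attack`, `DEFAULT_EPSILON` and the success rule are all hypothetical.

```python
import json
import random

# Hypothetical stand-in "attack": pretend larger perturbation budgets
# succeed, so the log shows a mix of outcomes.
def run_attack(epsilon):
    return epsilon >= 0.1

DEFAULT_EPSILON = 0.1  # assumed default parameter for the sketch
log = []

random.seed(42)  # fixed seed so repeated scans are reproducible
for trial in range(5):
    # First run with the default, then randomise the parameter.
    eps = DEFAULT_EPSILON if trial == 0 else round(random.uniform(0.01, 0.3), 2)
    log.append({"trial": trial, "epsilon": eps, "success": run_attack(eps)})

# The recorded log is what analysts would later mine for failure modes.
print(json.dumps(log, indent=2))
```

Each entry pairs the attack parameters with the outcome, which is the raw material for the failure-mode analysis the article mentions.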
Earlier, Microsoft carried out a study titled ‘Adversarial Machine Learning – Industry Perspectives’ to better understand the security of ML systems. Tech giants such as Google, Amazon, Microsoft and Tesla have invested heavily in ML systems, the study noted.
‘Through interviews spanning 28 organisations, we found that most ML engineers and incident responders are unequipped with tactical and strategic tools to secure, detect and respond to adversarial attacks on industry-grade ML systems’, said Microsoft.
AI researchers discovered that adding small black and white stickers to stop signs could fool computer vision algorithms. The study showed that even the most advanced deep neural networks are susceptible to failure from small perturbations in the input, which can lead to dangerous consequences.
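The effect is easy to demonstrate on a toy scale. The sketch below uses a simple linear classifier as a stand-in for a deep network (an assumption for brevity) and applies a fast-gradient-sign-style perturbation: each ‘pixel’ is nudged slightly against the gradient of the predicted class’s score.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=64)            # toy model weights over 64 "pixels"

def predict(x):
    # Linear classifier standing in for a deep network.
    return 1 if x @ w > 0 else 0

x = w / np.linalg.norm(w)          # an input the model confidently calls class 1
print(predict(x))                  # 1

# FGSM-style step: move each pixel a small, bounded amount against the
# gradient of the class-1 score (for a linear model the gradient is w).
eps = 0.3
x_adv = x - eps * np.sign(w)
print(predict(x_adv))              # the label flips to 0
```

Every coordinate changes by at most `eps`, yet the prediction flips – the same qualitative failure the stop-sign stickers exploit in real vision models.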
With AI-powered machines increasingly replacing humans in multiple roles, the security of AI systems is of utmost importance for reliability, safety and fairness. AI and ML systems are typically built from a mix of open-source libraries and code written by people who are not security experts. Furthermore, there are no industry-accepted best practices for developing secure AI algorithms. Counterfit is the need of the hour, especially in the wake of Solorigate and rising cybersecurity breaches.