The EU AI Act is poised to impact the use of specific AI tools by enterprise businesses, particularly those employed in decision-making processes.
According to a recent report, 62% of marketers believe that generative AI will enhance human creativity, augmenting distinctly human qualities such as intuition, emotion, and contextual understanding.
As the world increasingly relies on generative AI for nearly every innovation, the tech industry has been anxiously waiting for the other shoe to drop. Enter the European Union AI Act.
The Act is poised to impact the utilisation of specific AI tools by enterprise businesses, particularly those employed in decision-making processes across various domains such as housing, cybersecurity, workforce management, and advertising.
What does the Act entail?
The Parliament and Council negotiators have reached a provisional agreement on the EU AI Act (European Union Artificial Intelligence Act). The regulation aims to protect citizens’ fundamental rights, democracy, and environmental sustainability from high-risk AI innovations.
The EU AI Act establishes obligations for AI models based on their potential risks and level of impact. A few of these are outlined below:
Prohibited AI Applications
Recognising the potential threat to citizens’ rights and democracy posed by certain AI models, the co-legislators have agreed to ban the following AI applications:
- Using biometric categorisation systems that leverage sensitive characteristics, such as political, religious, philosophical beliefs, sexual orientation, race, etc.
- Scraping untargeted facial images from the internet or CCTV footage to create facial recognition databases
- Installing emotion recognition systems in the workplace and educational institutions
- Incorporating social scoring systems based on social behaviour or personal characteristics
- Leveraging AI systems that manipulate human behaviour to circumvent free will, exploiting people’s vulnerabilities due to age, disability, or social or economic situation.
Obligations for General AI Systems
General-purpose AI (GPAI) models will have to adhere to the transparency requirements initially proposed by the European Parliament, such as drawing up technical documentation, complying with EU copyright law, and disseminating detailed summaries of the content used for training.
High-impact GPAI models with systemic risk will face stricter obligations, including model evaluations, reporting of serious incidents, cybersecurity safeguards, and reporting on energy efficiency.
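For teams triaging their own AI systems against these tiers, the structure above can be sketched as a simple classification helper. This is an illustrative sketch only, not legal guidance; the category names, function signature, and decision logic are assumptions for demonstration.

```python
# Illustrative sketch of the EU AI Act's risk tiers described above.
# All names and logic here are assumptions for demonstration -- not legal advice.

# Prohibited practices, per the ban list above (labels are hypothetical).
PROHIBITED_USES = {
    "biometric_categorisation_sensitive",  # sensitive-trait biometric categorisation
    "untargeted_face_scraping",            # scraping faces for recognition databases
    "workplace_emotion_recognition",       # emotion recognition at work or school
    "social_scoring",                      # social scoring systems
    "behavioural_manipulation",            # exploiting people's vulnerabilities
}

def classify_risk_tier(use_case: str, is_gpai: bool = False,
                       systemic_risk: bool = False) -> str:
    """Return a rough compliance tier for an AI system (illustrative only)."""
    if use_case in PROHIBITED_USES:
        return "prohibited"
    if is_gpai and systemic_risk:
        # evaluations, serious-incident reporting, cybersecurity, energy efficiency
        return "gpai-systemic-risk"
    if is_gpai:
        # technical documentation, copyright compliance, training-content summaries
        return "gpai-transparency"
    return "standard"

print(classify_risk_tier("social_scoring"))              # prohibited
print(classify_risk_tier("ad_targeting", is_gpai=True))  # gpai-transparency
```

In practice, a real assessment would hinge on the Act’s final legal text and context-specific factors, not a lookup table; the sketch only mirrors how the tiers nest.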
How are the global economies and businesses reacting to the EU AI Act?
Ian Liddicoat, Chief Technology Officer and Head of Data Science at Adludio, said: “The formulation of regulations in the EU has been caught short by the stellar growth in LLM models trained on vast amounts of data. Policymakers did not anticipate the number of use cases, the useability or the sign-up rate.”
“Tech companies will resist this if it is seen to stifle innovation. Suppose the regulations restrict or limit the application of foundational models. In that case, the EU will push the ability to manage such regulation into the hands of the US, where many of the largest tech companies interested in AI are domiciled.”
There is a need for policy and an ability to manage the inherent security risks in LLMs and AGI models. But if it is draconian, not thought through, or seen to stifle innovation – which would lessen the appetite for investment in the EU – it will likely backfire, he concluded.
While other countries, including Britain, Japan, China, and the USA, are set to implement AI regulations following the newly agreed EU AI Act, the Indian government has not framed any specific law to regulate AI development.
Pierre Pinna, founder and CEO of IPFConline Digital Innovations, also expressed his thoughts in a thread, saying, “Regarding the regulation of open-source AI models, the EU AI Act legislation has finally included restrictions for foundation AI models, but has granted broad exemptions to open-source ones, which are developed using code freely available to all developers so that they can modify the native models to create their own, and also improve the original models. It may sound like a good idea, but it just “sounds like” a good idea; it’s a false good idea!”
Bernd Greifeneder, SVP & CTO at Dynatrace, said, “In the search for greater transparency around general purpose AI, it will be important to highlight how these can be combined with other, more explainable forms of AI – such as causal and predictive. These types of AI are fundamentally different from general-purpose models as they do not follow a probabilistic approach. Instead, they are built for precision, using graph-based and statistical models that leverage domain-specific, contextualised data. This makes them better suited to specialised use cases and more resistant to hallucination and bias. They are also built to be transparent, so users can trust the answers they generate to drive reliable automation rather than that insight being hidden within the ‘black box’ of the AI. The EU AI Act will get off to a great start if it can clarify these key differences between AI models.”
In conclusion, the EU AI Act emerges as a pivotal force shaping the landscape for technology and business, demanding a delicate balance between regulatory control and innovation. As industry experts voice concerns over potential stifling effects, its full impact on both sectors remains to be seen, with transformative changes and challenges ahead.