AI Risks We Should Know About 

Humans have long viewed artificial intelligence (AI) with a degree of trepidation. Most popular depictions of AI involve killer robots or all-seeing, all-knowing systems bent on destroying humanity. These sentiments also pervade the news media, which tends to greet AI breakthroughs with more alarm than measured analysis. The real issue, however, is whether these overly dramatic, dystopian visions divert attention from the more nuanced, yet equally dangerous, risks posed by the misuse of AI applications that are already available today.

AI affects all areas of our lives, including how we work, what we consume, and what we buy. From automating routine office tasks to tackling urgent challenges like climate change and hunger, AI technologies continue to disrupt our world. AI is also already having negative effects, from wrongful arrests in the US caused by faulty facial recognition to the mass surveillance of China’s Uighur population. At a time when companies, governments, and data scientists are focused on pushing the limits of what’s possible, they may not notice the social problems their breakthroughs cause until it’s too late.

It is therefore time to be more intentional about how we use and develop AI, and to integrate ethical and social impact considerations into the development process from the outset. We also need to realise that even seemingly benign algorithms and models can be put to harmful use, even if we are a long way from Terminator-style AI threats.

How Deepfakes Sow Discord and Doubt

Deepfakes are artificial images, audio, and videos that appear real. They are typically created with machine learning methods such as generative adversarial networks (GANs). Such “synthetic” media can now be produced with sophisticated tools that are easily accessible even to non-experts. Malicious actors use this content to damage reputations and commit fraud, and there are no doubt other harmful applications.

Deepfakes pose two risks: that fake content will mislead viewers into believing fabricated events or statements are real, and that their growing prevalence will undermine trust in legitimate sources of information. Although detection tools exist today, deepfake creators have shown they can learn from these defences and adapt quickly. Given how rapidly social media propagates fraudulent information, even unsophisticated fake content can cause substantial damage.
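
To make this cat-and-mouse dynamic concrete, below is a minimal sketch of how a frame-level deepfake detector might be structured. It is illustrative only: the tiny convolutional network, the 128x128 frame size, and the dummy input batch are all assumptions, and real detectors add face cropping, data augmentation, and temporal cues across frames.

```python
# Minimal frame-level deepfake detector sketch (PyTorch).
# Illustrative only: assumes video frames are extracted elsewhere and
# labelled real (0) or fake (1); production systems are far more elaborate.
import torch
import torch.nn as nn

class FrameClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                 # 128x128 -> 64x64
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                 # 64x64 -> 32x32
            nn.AdaptiveAvgPool2d(1),         # global average pooling
        )
        self.head = nn.Linear(32, 2)         # logits: [real, fake]

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = FrameClassifier()
frames = torch.randn(4, 3, 128, 128)         # dummy batch of RGB frames
probs = model(frames).softmax(dim=1)
print(probs)                                 # per-frame real/fake probabilities
```

The arms race described above applies directly here: a classifier like this can be trained against known forgery methods, but forgers can in turn train against the detector, which is why detection alone is not a complete defence.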

Deepfakes are a prime example of how AI technology can have subtly insidious effects on society. Recently, the UAE National Program for Artificial Intelligence launched the Deepfake Guide to raise public awareness of both the harmful and beneficial uses of deepfake technologies, their effect on social wellbeing, and ways to address the challenges posed by harmful applications of the technology.

Large Language Models and Disinformation

Large language models are an example of an AI technology developed with benign intentions that still deserves careful consideration from a social impact perspective. To generate human-like text, these models use deep learning techniques trained on patterns in vast datasets, often scraped from the internet. OpenAI’s latest model, GPT-3, boasts 175 billion parameters, ten times more than any previous comparable model. Drawing on this massive knowledge base, GPT-3 can generate almost any kind of text with minimal human input, including short stories, email replies, and technical documents.
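
The prompt-completion workflow behind this capability is strikingly simple. The sketch below illustrates it with GPT-2, GPT-3’s openly available predecessor, since GPT-3 itself sits behind OpenAI’s restricted API; the prompt and generation settings are arbitrary examples.

```python
# Generating text from a short prompt with an open language model
# (GPT-2 via Hugging Face Transformers, as a stand-in for GPT-3).
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # ~124M parameters
prompt = "Dear customer, thank you for your email. Regarding your order,"
outputs = generator(prompt, max_length=60, num_return_sequences=1)
print(outputs[0]["generated_text"])
```

Swapping in a larger model changes little more than the model name, and the same few lines could just as easily produce a plausible news paragraph or social media post, which is exactly what makes misuse at scale so cheap.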

However, there are serious potential downsides. GPT-3, like its predecessors, can produce sexist, racist, and otherwise discriminatory text because it is trained on internet content. Furthermore, in a world where trolls already influence public opinion, large language models like GPT-3 could fuel an atmosphere of division and misinformation online. Because of this potential for misuse, OpenAI restricted access to GPT-3, first to selected researchers and later, through an exclusive licence, to Microsoft. Google unveiled a trillion-parameter model earlier this year, and OpenAI acknowledges that open source projects are on track to recreate GPT-3 soon.

Toward Ethical, Socially Beneficial AI

Even though AI is a long way from the nightmare sci-fi scenarios, that does not mean we can ignore the real social risks it poses today. By collaborating with stakeholder groups, researchers and industry leaders can develop procedures for identifying and mitigating potential risks. This technology holds enormous potential benefits for society, but realising them requires that we develop and deploy it carefully and responsibly.

AI’s risks go well beyond purely technical aspects, so mitigation efforts must also go beyond purely technical methods. It is essential to establish norms and shared practices around systems like GPT-3 and deepfake models, such as standardised impact assessments and external review periods. In parallel, industry can strengthen countermeasures, such as the detection tools Facebook developed for its Deepfake Detection Challenge and Microsoft’s Video Authenticator. Ongoing public education campaigns about AI will also be needed to make people aware of how it can be misused.