
Immediate risks of AI

Jake Harfield
30 Jun 2022 00:00:00 | Update: 30 Jun 2022 00:45:51

Artificial intelligence is disrupting and revolutionizing almost every industry. With advancing technology, it has the potential to improve so many aspects of life drastically.

But, it isn’t without risk.

And with so many experts warning of the potential dangers of AI, we should probably pay attention. On the other hand, many claim these views are alarmist and that there is no immediate danger from AI.

So, are concerns about artificial intelligence alarmist or not? This article covers the main risks of artificial intelligence and the technology currently available in each of these areas.

AI is growing more sophisticated by the day, with risks ranging from the mild (job disruption, for example) to the catastrophic (existential threats). The level of risk posed by AI is so heavily debated because there is a general lack of understanding of, and consensus about, AI technology.

It is generally thought that AI can be dangerous in two ways: either the AI is programmed to do something malicious, or the AI is programmed to do something beneficial but does something destructive while achieving its goal.

These risks are amplified by the sophistication of AI software. The classic hypothetical is the "paper clip maximizer" thought experiment, in which a superintelligent AI is programmed to maximize the number of paper clips in the world. If it is sufficiently intelligent, it could consume the entire world's resources in pursuit of that goal.

But, we don’t need to consider superintelligent AI to see that there are dangers already associated with our use of AI. So, what are some of the immediate risks we face from AI?

From mass-production factories to self-serve checkouts to self-driving cars, automation has been underway for decades, and the process is accelerating. A 2019 Brookings Institution study found that 36 million American jobs could be at high risk of automation in the coming years.

In 2020, the UK government commissioned a report on Artificial Intelligence and UK National Security, which highlighted the necessity of AI in the UK's cybersecurity defenses to detect and mitigate threats that demand a faster response than human decision-making allows.

The hope is that as AI-driven security threats rise, AI-driven prevention measures will keep pace. Unless we can develop measures to protect ourselves, we risk running a never-ending race against bad actors.

Autonomous weapons—weapons controlled by AI systems rather than human input—already exist and have for quite some time. Hundreds of tech experts have urged the UN to develop a way to protect humanity from the risks involved in autonomous weapons.

Militaries worldwide already have access to various AI-controlled or semi-autonomous weapon systems, such as military drones. With facial recognition software, a drone can track and follow a specific individual.

What happens when we start allowing AI algorithms to make life and death decisions without any human input?

It's also possible to customize consumer technology (like drones) to fly autonomously and perform various tasks. In the wrong hands, this kind of capability could threaten individuals' day-to-day security.

Face-synthesis software (more commonly known as deepfake technology) is becoming ever harder to distinguish from reality.

The danger of deepfakes is already affecting celebrities and world leaders, and it's only a matter of time before this trickles down to ordinary people. For instance, scammers are already blackmailing victims with deepfake videos created from something as simple and accessible as a Facebook profile picture.

And that’s not the only risk. AI can recreate and edit photos, compose text, clone voices, and automatically produce highly targeted advertising. We have already seen how some of these dangers impact society.

As artificial intelligence increases in sophistication and capability, many positive advances are being made. But unfortunately, powerful new technology is always at the risk of being misused. These risks affect almost every facet of our daily lives, from privacy to political security to job automation.

The first step in mitigating the risks of artificial intelligence is deciding where we want AI to be used and where it should be discouraged. From there, greater research into, and debate about, AI systems and their uses will help prevent them from being misused.
