How might AI regulations evolve internationally?
Artificial intelligence (AI) enables computers to learn and make decisions with minimal human intervention. It simulates human intelligence and can perform tasks previously reserved for humans. Stronger forms of AI can teach themselves to understand and emulate intellectual tasks that humans perform.
Statista reports that the AI market, currently valued at roughly $100 billion, is expected to grow twentyfold by the end of the decade, reaching nearly $2 trillion.
AI ubiquity is fast becoming a reality: approximately 77% of devices in use today incorporate some form of artificial intelligence.
With this nearing ubiquity come many questions. The territory can be murky, because such pervasiveness can be invasive and, in the worst cases, harmful. As such, expect AI to become stringently regulated on the global stage as it grows in prevalence and use cases.
Crucial advantages of AI
There’s a reason for AI’s growing ubiquity. Namely, it’s an exceptionally useful tool that makes our lives easier. Below are some of the primary advantages these technologies offer.
Limiting human error
AI provides a level of accuracy and precision for specific tasks that humans plainly can’t match. These technologies make decisions based on previously gathered information, processed by purpose-built algorithms. When properly programmed, AI performs its learned and assigned tasks with virtually no errors.
One example is how robotic surgery procedures improve patient safety.
Productivity that never fades
Humans need rest. In fact, studies indicate we’re only productive for about three to four hours daily.
AI doesn’t need any breaks to keep performing at peak efficiency, thinking faster than we do and handling multiple tasks simultaneously and effectively. Artificial intelligence-based tools don’t get bogged down by repetitious work; such is the unflappable strength of their algorithms.
We see this level of productivity with online chatbots, which provide 24/7 support through AI and natural language processing.
AI spearheads innovations in many industries, such as helping doctors detect breast cancer in patients earlier than ever.
The above advantages only scratch the surface but have been highlighted to show why AI is making such monumental waves.
Why must AI be regulated?
The benefits of AI discussed above might make it seem like we’ve discovered a benevolent innovation that will take society to new, utopian places. Unfortunately, we live in a grey-shaded world, and there’s another side of the coin.
With great power comes an even greater potential to do wrong. Thus, AI must be regulated stringently, and here are a few reasons why:
AI can be employed for malicious purposes
The following malicious actions can be bolstered via AI:
- Infrastructure and framework attacks
- Adversarial machine learning
AI is jaw-droppingly powerful in many ways. Thus, it presents a significant threat when it gets into the wrong hands.
A question of ethics
Existing digital laws (such as ones pertaining to user and data privacy) should also apply to AI.
Regulations should revolve around mandated robust cybersecurity capabilities for anyone using AI and handling data-heavy algorithms.
Tech companies should also remove biases and discrimination from AI algorithms when the technology deals with sensitive issues. This means ensuring that no biases are introduced deliberately and that naturally occurring ones are eliminated when flagged, so they aren’t reproduced at scale.
Regulating ethical AI facets will foster transparency, trust, and accountability amongst developers, users, and stakeholders.
Protecting human rights and safety
Many AI-related safety risks exist, such as generative AI being used to create malicious deepfakes or spread misinformation.
AI can also be used to help build dangerous weaponry, such as a dirty bomb. There are autonomous weapons and AI-driven warfare to consider as well.
Regulations are needed to prevent these potential threats to our safety. AI should not be weaponised to put us in harm's way.
Preventing negative societal and economic impacts
There’s a strong possibility that AI could displace 800 million jobs by 2030. This could lead to decades of despair for many. A plan should be implemented to protect those who might be replaced by AI (e.g., through subsidised training).
Our economy might also face threats from AI-driven monopolies. Catching up would be nearly impossible due to network effects: the industry heavyweights will hold all the data, stifling competition.
The EU has set an example for global AI regulation
The EU has put forth its first regulatory act to get a handle on AI’s growing ubiquity.
The core of the EU’s AI Act revolves around risk. Undoubtedly, many related programs present little risk (if any), but assessment is still needed.
Some crucial takeaways from the AI Act include:
- Behavioural manipulation, social scoring, remote biometrics, and other forms of AI presenting unacceptable risks will be banned.
- AI systems linked to areas such as education and law enforcement will be classified as high-risk. They’ll be stringently assessed before reaching the market and throughout their lifecycle.
- Minimal transparency requirements and compliance will apply to limited-risk AI systems. These regulations will enable users to make informed decisions.
The EU’s proposed regulations shed specific light on generative AI.
Anyone using such tools will have to disclose that their content is AI-generated. They’ll also need to establish safeguards against generating illegal content. Lastly, summaries of the copyrighted data used for training must be published.
Companies that don’t comply may face fines of up to 6% of their annual turnover. They might also lose the right to operate in the EU.
Expect these laws to be firmly entrenched by 2024.
Will the rest of the world follow suit?
The EU is very much leading the charge on the world stage for AI regulation. While the US is doing its best to keep in lockstep, having introduced a bill this past June, it’s moving at a comparatively glacial pace, and there are no signs stateside that anything will be passed in the near future.
Due to the US’s slow crawl on regulation, the EU will likely be the one that sets the global standard for how AI is regulated. This mirrors how the EU set the bar with its 2018 General Data Protection Regulation (GDPR).
These regulations will likely be a wide-scale boon: they’ll provide extra impetus for AI to be harnessed for its positives while mitigating the negatives, much to the betterment of society.