Welcome to Neural’s beginner’s guide to AI. This long-running series should provide you with a very basic understanding of what AI is, what it can do, and how it works.
In addition to the article you’re currently reading, the guide contains articles on (in order published) neural networks, computer vision, natural language processing, algorithms, artificial general intelligence, the difference between video game AI and real AI, the difference between human and machine intelligence, and ethics.
In this edition of the guide, we’ll take a glance at global AI policy.
The US, China, Russia, and Europe each approach artificial intelligence development and regulation differently. In the coming years it will be important for everyone to understand what those differences can mean for our safety and privacy.
Yesterday
Artificial intelligence has traditionally been lumped in with other technologies when it comes to policy and regulation.
That worked well in the days when algorithm-based tech was mostly used for data processing and crunching numbers. But the deep learning explosion that began around 2014 changed everything.
In the years since, we’ve seen the inception and mass adoption of privacy-smashing technologies such as virtual assistants, facial recognition, and online trackers.
Just a decade ago our biggest privacy concerns, as citizens, involved worrying about the government tracking us through our cell phone signals or snooping on our email.
Today, we know that AI trackers are following our every move online. Cameras record everything we do in public, even in our own neighborhoods, and there were at least 40 million smart speakers sold in Q4 of 2020 alone.
Today
Regulators and government entities around the world are trying to catch up to the technology and implement policies that make sense for their particular brand of governance.
In the US, there’s little in the way of regulation. In fact, the US government is highly invested in many AI technologies the global community considers problematic. It develops lethal autonomous weapons (LAWS), its policies allow law enforcement officers to use facial recognition and internet crawlers without oversight, and there are no rules or laws prohibiting “snake oil” predictive AI services.
In Russia, the official policy is one of democratizing AI research by pooling data. A preview of the nation’s first AI policy draft indicates Russia plans to develop tools that allow its citizens to control and anonymize their own data.
However, the Russian government has also been connected to adversarial AI ops targeting governments and civilians around the globe. It’s difficult to discern what rules Russia’s private sector will face when it comes to privacy and AI.
And, to the best of our knowledge, there’s no declassified data on Russia’s military policies when it comes to the use of AI. The best we can do is speculate based on past reports and statements made by the country’s current leader, Vladimir Putin.
Putin, speaking to Russian students in 2017, said “whoever becomes the leader in this sphere will become the ruler of the world.”
China, on the other hand, has been relatively transparent about its AI programs. In 2017 China released the world’s first robust AI policy plan incorporating modern deep learning technologies and predicted future machine learning tech.
The PRC intends to be the global leader in AI technology by 2030. Its program to achieve this goal includes massive investments from the private sector, academia, and the government.
US military leaders believe China’s military policies concerning AI are aimed at the development of LAWS that don’t require a human in the loop.
Europe’s vision for AI policy is a bit different. Where the US, China, and Russia appear focused on the military and economic-competition aspects of AI, the EU is defining and crafting policies that put privacy and citizen safety at the forefront.
In this respect, the EU currently seeks to limit facial recognition and other data-gathering technologies and to ensure citizens are explicitly informed when a product or service records their information.
The future
Predicting the future of AI policy is a tricky matter. Not only do we have to take into account how each nation currently approaches development and regulation, but we have to try to imagine how AI technology itself will advance in each country.
Let’s start with the EU:
- Some experts feel the human-centric approach to AI policy that Europe is taking is the example the rest of the world should follow. When it comes to AI tech, privacy is analogous to safety.
- But other experts fear the EU is leaving itself wide open to exploitation by adversaries with no regard for obeying its regulations.
In Russia, of course, things are different:
- Russia’s focus on becoming a world leader in AI doesn’t go through big tech or academia, but through the advancement of military technologies – arguably, the only relevant domain it’s globally competitive in.
- Iron-fisted rule stifles private-sector development, so it would make sense if Russia kept extremely lax privacy laws in place concerning how the private sector handles the general public. And there’s no reason to believe the Russian government will enact any official policy protecting citizen privacy.
Moving to China, the future’s a bit easier to predict:
- China’s all-in on surveillance. Every aspect of Chinese life, for citizens, is affected by intrusive AI systems including a social credit scoring system, ubiquitous facial and emotional recognition, and complete digital monitoring.
- There’s little reason to believe China will change its privacy laws, stop engaging in government-sponsored AI IP theft, or cease its reported production of LAWS technology.
And that just brings us to the US:
- Due to a lack of clear policy, the US exists somewhere between China and Russia when it comes to unilateral AI regulation. Unless the long-threatened big tech breakup happens, we can assume Facebook, Google, Amazon, and Microsoft will continue to dictate US policy with their wallets.
- AI regulation is a completely partisan issue being handled by a divided US Congress. Until that partisanship eases, we can expect US AI policy beyond the private sector to begin and end with lobbyists and the defense budget.
At the end of the day, it’s impossible to make strong predictions because politicians around the globe are still generally ignorant when it comes to the reality of modern AI and the most-likely scenarios for the future.
Technology policy is often a reactionary discipline: countries tend to regulate things only after they’ve proven problematic. And we don’t know what major events or breakthroughs could prompt radical policy change for any given nation.
In 2021, the field of artificial intelligence is at an inflection point. We’re between eurekas, waiting on autonomy to come of age, and hoping that our world leaders can come to a safe accord concerning LAWS and international privacy regulations.