The AI myth Western lawmakers get wrong

This story was originally published in The Algorithm, our weekly newsletter on AI. Sign up here to get stories like this in your inbox first.

While the US and the EU differ over how exactly to regulate the technology, lawmakers seem to agree on one thing: the West needs to ban AI-powered social scoring.

Social scoring, as they understand it, is a practice in which authoritarian governments, most notably China, rate people’s trustworthiness and punish them for undesirable behaviors, such as stealing or failing to repay loans. Essentially, it’s seen as a dystopian superscore assigned to every citizen.

The EU is currently negotiating a new law, called the AI Act, which would ban member states, and perhaps even private companies, from implementing such a system.

The problem is, “it actually forbids thin air,” says Vincent Brussee, an analyst at the Mercator Institute for China Studies, a German think tank.

In 2014, China announced a six-year plan to build a system that would reward actions that build trust in society and penalize the opposite. Eight years on, a draft law was recently released that attempts to codify past social credit pilots and guide future implementation.

There have been some contentious local experiments, such as one in the small city of Rongcheng in 2013, which gave every resident a starting personal credit score of 1,000 that could be raised or lowered depending on how their actions were judged. People are now able to opt out, and the local government has removed some controversial criteria.

But these experiments have not caught on elsewhere, and they do not apply to the entire Chinese population. There is no nationwide, all-seeing social credit system with algorithms that rank people.

As my colleague Zeyi Yang explained, “the truth is that this terrible system does not exist and the central government does not seem to have much appetite to build it.”

What has actually been rolled out is mostly pretty low-tech. Zeyi writes, “it’s a mix of attempts to regulate the financial lending industry, get government agencies to share data with each other, and promote government-sanctioned moral values.”

Kendra Schaefer, a partner at Trivium China, a Beijing-based research consultancy that produced a report on the subject for the US government, could not find a single case in China where data collection led to automated sanctions without human intervention. In Rongcheng, the South China Morning Post found that human “information collectors” would roam the town, writing down people’s misbehavior with pen and paper.

The legend originates from a pilot program called Sesame Credit, developed by the Chinese tech company Alibaba. Brussee says it was an attempt to assess people’s creditworthiness using customer data at a time when most Chinese people didn’t have credit cards. The effort became conflated with the social credit system as a whole in what Brussee describes as a “game of Chinese whispers.” And the misunderstanding took on a life of its own.

Ironically, while US and European politicians portray this as a problem stemming from authoritarian regimes, systems that rank and penalize people already exist in the West. Algorithms designed to automate decisions are being rolled out en masse and used to deny people housing, jobs, and basic services.

In Amsterdam, for example, authorities have used an algorithm to rank young people from disadvantaged neighborhoods according to their likelihood of becoming delinquent. They claim the goal is to prevent crime and to deliver better, more targeted support.

But human rights groups argue that it actually increases stigma and discrimination. Young people who end up on this list are stopped by police more often, receive home visits from authorities, and face tighter scrutiny from school and social workers.

It’s easy to take a stand against a dystopian algorithm that doesn’t really exist. But as lawmakers in both the EU and the US try to forge a shared understanding of AI governance, they would do better to look closer to home. Americans don’t even have a federal privacy law that would offer basic protections against algorithmic decision-making.

There is also a serious need for honest, thorough audits of how governments, authorities, and companies use AI to make decisions about our lives. They may not like what they find, but that only makes looking all the more important.

Deeper Learning

A bot watching 70,000 hours of Minecraft could unlock AI’s next big thing

Research firm OpenAI has built an AI that watched 70,000 hours of videos of people playing Minecraft and can now play the game better than any AI before it. It’s a breakthrough for a powerful new technique called imitation learning, which could be used to train machines to carry out a wide range of tasks by first watching humans do them. It also raises the possibility that sites like YouTube could be a vast and untapped source of training data.

Why it matters: Imitation learning can be used to train AI to control robot arms, drive cars, or navigate websites. Some people, such as Meta’s chief AI scientist, Yann LeCun, think that watching videos will eventually help us train AIs with human-level intelligence. Read Will Douglas Heaven’s story here.
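At its core, imitation learning of this kind is supervised learning on human demonstrations: a model is trained to predict the action a human took in a given situation. Here is a minimal behavioral-cloning sketch in Python with PyTorch that illustrates the idea. The network shape, the feature and action sizes, and the toy batch are all illustrative assumptions, not OpenAI’s actual pipeline (which also had to infer the actions behind unlabeled video).

```python
# Minimal behavioral-cloning sketch: the core of imitation learning.
# Assumes a dataset of (observation, human_action) pairs; all sizes
# and names here are hypothetical, for illustration only.
import torch
import torch.nn as nn

OBS_DIM, N_ACTIONS = 512, 20  # hypothetical feature and action counts

# A small policy network mapping observations to action logits.
policy = nn.Sequential(
    nn.Linear(OBS_DIM, 256),
    nn.ReLU(),
    nn.Linear(256, N_ACTIONS),
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

def train_step(observations, human_actions):
    """One supervised step: predict the action the human demonstrator took."""
    logits = policy(observations)          # (batch, N_ACTIONS)
    loss = loss_fn(logits, human_actions)  # penalize mismatches with the human
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# A fake batch standing in for labeled video frames:
obs = torch.randn(32, OBS_DIM)
acts = torch.randint(0, N_ACTIONS, (32,))
print(train_step(obs, acts))
```

The hard part in practice is not this training loop but getting the action labels at all, which is why OpenAI’s use of raw gameplay video was notable.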

Bits and Bytes

Meta’s game-playing AI can form and break alliances like a human

Diplomacy is a popular strategy game in which seven players compete for control of Europe by moving pieces around a map. The game requires players to talk to one another and to detect when others are bluffing. Meta’s new AI, called Cicero, managed to trick humans in order to win.

It’s a big step toward AI that can help with complex problems, such as planning routes around heavy traffic and negotiating contracts. But I won’t lie: it’s also an unnerving thought that an AI can deceive humans so successfully. (MIT Technology Review)

We may run out of data to train AI language programs

The trend of building ever-larger AI models means we need ever-larger datasets to train them. The trouble is, we could run out of available data by 2026, according to a paper by researchers from Epoch, an AI research and forecasting organization. That should prompt the AI community to find ways to do more with existing resources. (MIT Technology Review)

Stable Diffusion 2.0 is out

Stable Diffusion, the open-source text-to-image AI, has been given a big face-lift, and its output looks far more stylish and realistic than before. It can even do hands. The pace of Stable Diffusion’s development is breathtaking: its first version launched only in August. We are likely to see even more progress in generative AI in the year ahead.
