AI learns to create itself: since humans have had a hard time building intelligent machines, maybe we should let AI take care of it itself


Rui Wang, an artificial intelligence researcher at Uber, likes to leave the Paired Open-Ended Trailblazer (POET) software he helped develop running on his laptop overnight. POET is a kind of training tool for virtual robots. So far, they haven’t learned to do much. These AI agents don’t play games, spot signs of cancer, or fold proteins. They try to cross a crude cartoon landscape of fences and ravines without falling.

But it’s not what the robots learn that is exciting, it’s how they learn. POET generates the obstacle courses, assesses the robots’ abilities, and assigns them their next challenge, all without human intervention. Step by step, the robots get better by trial and error.
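To make that loop concrete, here is a minimal, hypothetical Python sketch of the kind of cycle described above: courses are mutated, agents are scored on them, and only challenges that are neither trivial nor impossible are kept. The function names, scoring rule, and constants are illustrative assumptions, not Uber’s actual POET code.

```python
import random

# Hypothetical sketch of a POET-style loop: obstacle courses and agents
# co-evolve without human intervention. Names, scoring, and constants are
# illustrative, not Uber's actual implementation.

def evaluate(agent_skill: float, difficulty: float) -> float:
    """Score an agent on a course: higher skill relative to difficulty scores higher."""
    return max(0.0, min(1.0, 0.5 + agent_skill - difficulty))

def poet_like_loop(generations: int = 30) -> list:
    # Each entry pairs an agent's skill with the difficulty of its current course.
    population = [(0.1, 0.1)]
    for _ in range(generations):
        next_population = []
        for skill, difficulty in population:
            # Trial and error: the agent improves a little on its current course.
            skill += 0.05 * evaluate(skill, difficulty)
            # Environment generation: mutate the course and keep the mutation only
            # if it is neither trivially easy nor impossibly hard for this agent.
            candidate = difficulty + random.uniform(-0.02, 0.08)
            if 0.2 < evaluate(skill, candidate) < 0.8:
                difficulty = candidate
            next_population.append((skill, difficulty))
            # Occasionally branch off a new agent-course pair, so the set of
            # challenges keeps growing open-endedly.
            if random.random() < 0.1 and len(next_population) < 10:
                next_population.append((skill, difficulty + 0.05))
        population = next_population
    return population

if __name__ == "__main__":
    for skill, difficulty in poet_like_loop():
        print(f"agent skill={skill:.2f}, course difficulty={difficulty:.2f}")
```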

“At some point it can jump off a cliff like a kung fu master,” Wang explains. “Every day I walk into my office, open my computer, and I don’t know what to expect.” It may seem elementary for now, but for Wang and a handful of other researchers, POET hints at a revolutionary new way to create super-intelligent machines: getting AI to build itself.

Wang’s former colleague Jeff Clune is one of the strongest supporters of this idea. Clune has been working on it for years, first at the University of Wyoming and then at Uber AI Labs, where he collaborated with Wang and others. He now splits his time between the University of British Columbia and OpenAI, and enjoys the backing of one of the best AI labs in the world.

Clune believes that the attempt to create truly intelligent AI is the most ambitious scientific endeavor in human history. Today, seventy years after serious AI efforts began, we are still a long way from creating machines as intelligent as humans, let alone more intelligent. Clune thinks POET might point to a shortcut. “We have to break free from our chains and get out of the way,” he says.

If Clune is right, using AI to create AI could one day be an important step on the road to artificial general intelligence (AGI): machines that can outdo humans. In the shorter term, the technique could also help us discover other kinds of intelligence: non-human intelligences capable of finding solutions in unexpected ways, which might complement our own intelligence rather than replace it.

Clune’s ambitious vision is not backed only by OpenAI’s investment. The history of AI is full of examples where human-designed solutions have given way to ones learned by machines. Take computer vision: a decade ago, the big breakthrough in image recognition came when existing hand-crafted systems were replaced by self-learning ones. The same is true of many of AI’s successes.

One of the fascinating aspects of AI, and of machine learning in particular, is its ability to find solutions that humans have not. An often cited example is AlphaGo (and its successor AlphaZero), which beat the best of humanity at the ancient game of Go using seemingly alien strategies.

In 2016, after a three-hour match that commentators considered tight, Lee Sedol, the professional considered the best international player of the 2000s, gave in to the program’s onslaught. After hundreds of years of study by human masters, AI had found solutions no one had thought of.

Clune is currently working with an OpenAI team that in 2018 developed bots that learned to play hide-and-seek in a virtual environment. These AIs started with simple goals and simple tools to achieve them: one pair had to find the other, which could hide behind movable obstacles. Yet when these bots were set loose to learn, they quickly found ways to exploit their surroundings that the researchers had not anticipated.

They exploited loopholes in the simulated physics of their virtual world to jump over walls and even pass through them. This kind of unexpected emergent behavior suggests that AI could find technical solutions humans would not think of on their own, inventing new types of algorithms or more efficient neural networks, or even abandoning existing AI techniques altogether.

First you need to build a brain, then teach it. But machine brains don’t learn the way ours do. Our brains are great at adapting to new environments and new tasks. Today’s AI can solve problems under certain conditions but fails when those conditions change, even slightly. This rigidity hinders the search for a more generalizable AI that would be useful across a wide range of scenarios, which would be a big step towards true intelligence.

For Jane Wang, a researcher at DeepMind in London, the best way to make AI more flexible is to get it to learn that trait itself. In other words, she wants to build an AI that not only learns specific tasks, but learns to learn those tasks in ways that can be adapted to new situations.

For years, researchers have been trying to make AI more adaptable. Wang thinks that letting AI solve this problem on its own avoids the guesswork of a hand-crafted approach: “We can’t expect to find the right answer right away.” She hopes that, in doing so, we will also learn more about how the brain works. “There’s still so much we don’t understand about how humans and animals learn,” she says. There are two main approaches to automatically generating learning algorithms, but both start from an existing neural network and use AI to teach it.

The first approach, invented independently and at about the same time by Wang and her colleagues at DeepMind and by a team at OpenAI, uses recurrent neural networks. This type of network can be trained so that the activations of its neurons, loosely analogous to the firing of neurons in biological brains, encode any kind of algorithm. DeepMind and OpenAI took advantage of this to train recurrent neural networks to generate reinforcement learning algorithms, which tell an AI how to behave in order to achieve given goals.

The result is that the DeepMind and OpenAI systems do not learn an algorithm that solves one specific challenge, such as image recognition; they learn a learning algorithm that can be applied to multiple tasks and adapt over time. It’s like the old adage about learning to fish: while a hand-crafted algorithm can learn a particular task, these AIs are meant to learn how to learn for themselves. And some of them perform better than human-designed ones.
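As a rough illustration of what it means for a network’s activations to encode a learning algorithm, the sketch below (my own simplification, with made-up dimensions and untrained weights) shows the structure such a recurrent policy takes: at each step it receives the observation plus its previous action and reward, and its hidden state, rather than its weights, is what adapts within a task. The outer training loop that would make those weights useful is omitted.

```python
import numpy as np

# Minimal sketch (my own illustration, not DeepMind's or OpenAI's code) of the
# recurrent meta-learning idea: the network's weights are fixed at test time,
# and the *hidden state* acts as the learned reinforcement-learning algorithm,
# adapting behaviour as previous actions and rewards are fed back in as inputs.

rng = np.random.default_rng(0)
OBS, ACTIONS, HIDDEN = 4, 2, 16

# Untrained weights; in the real approach an outer training loop (omitted here)
# tunes them so the hidden-state dynamics implement a good learning rule.
W_in = rng.normal(0, 0.1, (HIDDEN, OBS + ACTIONS + 1))  # obs + prev action + prev reward
W_h = rng.normal(0, 0.1, (HIDDEN, HIDDEN))
W_out = rng.normal(0, 0.1, (ACTIONS, HIDDEN))

def step(hidden, obs, prev_action, prev_reward):
    """One recurrent step: returns the new hidden state and action probabilities."""
    prev_a = np.eye(ACTIONS)[prev_action]
    x = np.concatenate([obs, prev_a, [prev_reward]])
    hidden = np.tanh(W_in @ x + W_h @ hidden)
    logits = W_out @ hidden
    probs = np.exp(logits - logits.max())
    return hidden, probs / probs.sum()

# Within a single task, the hidden state accumulates experience across trials:
hidden = np.zeros(HIDDEN)
action, reward = 0, 0.0
for trial in range(5):
    obs = rng.normal(size=OBS)                  # stand-in for an environment observation
    hidden, probs = step(hidden, obs, action, reward)
    action = int(rng.choice(ACTIONS, p=probs))
    reward = 1.0 if action == 0 else 0.0        # toy reward signal
    print(f"trial {trial}: action={action} reward={reward}")
```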

The second approach comes from Chelsea Finn, of the University of California, Berkeley, and her colleagues. Called model-agnostic meta-learning, or MAML, it trains a model using two machine learning processes, one nested inside the other.

Here’s how it works:

MAML’s inner process is trained on data and then tested, as usual. But then the outer model takes the performance of the inner model (how well it identifies images, for example) and uses it to learn how to adjust that model’s learning algorithm to improve performance. It’s like having a school inspector supervising a group of teachers, each offering different learning techniques. The inspector checks which techniques help students achieve the best results and adjusts them accordingly.
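A minimal sketch of those two nested loops, assuming a toy one-dimensional regression problem and the first-order simplification of MAML (the learning rates, task distribution, and model are illustrative choices of mine, not Finn and colleagues’ implementation):

```python
import numpy as np

# Sketch of the two nested learning loops described above, in the spirit of
# MAML but reduced to its first-order variant on a toy 1-D regression problem.

rng = np.random.default_rng(1)
INNER_LR, OUTER_LR = 0.1, 0.01

def sample_task():
    """A 'task' is fitting y = a*x for a task-specific slope a."""
    return rng.uniform(-2.0, 2.0)

def make_data(slope, n=20):
    x = rng.uniform(-1.0, 1.0, size=n)
    return x, slope * x

def grad(w, x, y):
    """Gradient of the mean squared error of the model y_hat = w*x."""
    return np.mean(2.0 * (w * x - y) * x)

w_meta = 5.0   # deliberately poor starting point for the shared initialisation
for _ in range(2000):
    slope = sample_task()
    x_train, y_train = make_data(slope)
    x_test, y_test = make_data(slope)

    # Inner loop (the "teacher"): adapt quickly to this task from w_meta.
    w_task = w_meta - INNER_LR * grad(w_meta, x_train, y_train)

    # Outer loop (the "inspector"): judge the adapted model on held-out data
    # from the same task and nudge the shared initialisation accordingly
    # (first-order approximation: the gradient is taken at w_task).
    w_meta -= OUTER_LR * grad(w_task, x_test, y_test)

print(f"meta-learned initialisation: {w_meta:.3f}")
```

After training, the initialisation ends up where a single inner gradient step adapts well to any slope in the task distribution, which is the point of the nested setup.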

Using these approaches, researchers are building AI that is more robust, more general, and able to learn faster with less data. For example, Finn wants a robot that has learned to walk on flat ground to be able, with minimal additional training, to walk on a slope, walk on grass, or carry a load.

Last year, Clune and his colleagues extended Finn’s technique to design an algorithm that learns using fewer neurons so that it does not erase everything it previously learned, a major unsolved problem in machine learning known as catastrophic forgetting. A trained model that uses fewer neurons, called a “sparse” model, has more unused neurons it can dedicate to new tasks during training, which means fewer “used” neurons get overwritten.
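A toy sketch of why sparsity helps, assuming hand-chosen disjoint masks rather than anything learned (the tasks, masks, and training loop below are illustrative assumptions, not Clune’s method): each task trains only its own subset of weights, so learning task B leaves task A’s behavior untouched.

```python
import numpy as np

# Toy illustration of sparsity vs. catastrophic forgetting: when each task
# trains only its own sparse subset of weights, learning a new task cannot
# overwrite what an earlier task stored.

rng = np.random.default_rng(2)
N = 8
weights = np.zeros(N)

# Each task "uses few neurons": disjoint binary masks over the weight vector.
mask_a = np.array([1, 1, 1, 1, 0, 0, 0, 0], dtype=float)
mask_b = 1.0 - mask_a

def predict(x, mask):
    # The inputs are shared; the mask decides which weights a task may use.
    return float((weights * mask) @ x)

def train(target, mask, steps=300, lr=0.1):
    """Plain gradient descent on squared error, restricted to masked weights."""
    global weights
    for _ in range(steps):
        x = rng.normal(size=N)
        error = predict(x, mask) - target(x)
        weights -= lr * error * (x * mask)   # gradient flows only through the mask

def task_a(x):
    return 2.0 * x[0] - x[1]     # task A's ground truth

def task_b(x):
    return -3.0 * x[4] + x[5]    # task B's ground truth

train(task_a, mask_a)
probe = rng.normal(size=N)
before = predict(probe, mask_a)

train(task_b, mask_b)            # learn a new task afterwards
after = predict(probe, mask_a)

print(f"task A prediction before/after learning task B: {before:.3f} / {after:.3f}")
```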

Clune found that challenging his AI to learn more than one task led it to come up with its own version of a sparse model that performed better than human-designed ones. If we are going to let AI create and teach itself, it should also create its own training environments: the schools, textbooks, and lesson plans.

Over the past year, we’ve seen a number of projects in which AI has been trained on auto-generated data. Facial recognition systems are trained on AI-generated faces, for example. AIs also learn by training each other. In one recent example, two robotic arms worked together, one learning to set increasingly difficult block-stacking challenges so that the other could practice grasping objects.

If AI does start generating intelligence on its own, there is no guarantee it will be human-like. Rather than us teaching machines to think like humans, machines could end up teaching humans new ways of thinking.

And you?

What is your opinion on the subject?

See also:

Blockchain, cybersecurity, cloud, machine learning and DevOps are among the most requested technology skills in 2022, according to a CompTIA report

Footage shows an AI-driven robotic tank detonating cars, reigniting fears of the proliferation of lethal autonomous weapons

The transcript used as evidence of the sentience of Google’s LaMDA AI was edited to make it easier to read, according to notes from the engineer dismissed by Google

Google engineer was fired after claiming that Google’s AI LaMDA chatbot has become sentient and expresses thoughts and feelings equivalent to those of a human child

