What Google’s AI-designed chip tells us about the nature of intelligence

This article is part of our reviews of AI research papers, a series of posts that explore the latest findings in artificial intelligence.

In a paper published in the peer-reviewed scientific journal Nature last week, scientists at Google Brain introduced a deep reinforcement learning technique for floorplanning, the process of arranging the placement of different components of computer chips.
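To make the framing concrete, here is a minimal, hedged sketch of how floorplanning can be cast as a reinforcement learning problem: an agent places one block at a time on a coarse grid and is rewarded for keeping connected blocks close together. Everything in it (the `Block` and `FloorplanEnv` names, the toy netlist, the wirelength-based reward) is a hypothetical simplification for illustration, not the formulation used in the Nature paper.

```python
# Toy sketch: sequential block placement as an RL environment.
# Assumptions: a tiny hand-made netlist, Manhattan wirelength as the only
# reward signal, and a random policy standing in for a learned policy network.

import random
from dataclasses import dataclass, field

@dataclass
class Block:
    name: str
    connected_to: list = field(default_factory=list)  # names of connected blocks

class FloorplanEnv:
    """Place blocks one at a time on a grid_size x grid_size grid."""

    def __init__(self, blocks, grid_size=8):
        self.blocks = blocks
        self.grid_size = grid_size
        self.reset()

    def reset(self):
        self.placements = {}   # block name -> (row, col)
        self.next_index = 0    # index of the next block to place
        return self.next_index

    def legal_actions(self):
        occupied = set(self.placements.values())
        return [(r, c) for r in range(self.grid_size)
                       for c in range(self.grid_size)
                       if (r, c) not in occupied]

    def step(self, action):
        block = self.blocks[self.next_index]
        self.placements[block.name] = action
        self.next_index += 1
        done = self.next_index == len(self.blocks)
        # Reward only at the end of an episode: negative total wirelength,
        # a crude stand-in for the real power/performance/area metrics.
        reward = -self._wirelength() if done else 0.0
        return self.next_index, reward, done

    def _wirelength(self):
        total = 0
        for block in self.blocks:
            r1, c1 = self.placements[block.name]
            for other in block.connected_to:
                if other in self.placements:
                    r2, c2 = self.placements[other]
                    total += abs(r1 - r2) + abs(c1 - c2)
        return total / 2  # each connection is counted from both endpoints

# Random-policy rollout over a hypothetical four-block netlist.
netlist = [
    Block("cpu", ["cache", "io"]),
    Block("cache", ["cpu", "dram_ctrl"]),
    Block("dram_ctrl", ["cache"]),
    Block("io", ["cpu"]),
]
env = FloorplanEnv(netlist)
env.reset()
done = False
while not done:
    action = random.choice(env.legal_actions())
    _, reward, done = env.step(action)
print("placements:", env.placements)
print("final reward (negative wirelength):", reward)
```

A trained agent would replace the random choice with a policy network that scores candidate grid cells, which is the general idea behind learning-based placement; the specifics of Google's network architecture and reward are described in the paper itself.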

The researchers managed to use the reinforcement learning technique to design the next generation of Tensor Processing Units, Google’s specialized artificial intelligence processors.

The use of software in chip design is not new. But according to the Google researchers, the new reinforcement learning model “automatically generates chip floorplans that are superior or comparable to those produced by humans in all key metrics, including power consumption, performance and chip area.” And it does it in a fraction of the time it would take a human to do so.

The AI’s superiority over human designers has drawn a lot of attention. One media outlet described it as “artificial intelligence software that can design computer chips faster than humans can” and wrote that “a chip that would take humans months to design can be dreamed up by [Google’s] new AI in less than six hours.”

Another outlet wrote, “The virtuous cycle of AI designing chips for AI looks like it’s only just getting started.”

But what amazed me while reading the paper was not the intricacy of the AI system used to design computer chips, but the synergies between human and artificial intelligence.

Analogies, intuitions, and rewards
