The speed of innovation increases when new knowledge or new technologies are themselves used to discover the next round of new technologies. A canonical example is computer processor design, where engineers use the latest processors to help them design and optimize the next generation of processors. This feedback is essentially what enables “Moore’s law”: the observation that transistor counts, and with them computer capability, increase exponentially over time. (This is how today’s smartphones came to be a hundred times more powerful than desktop computers from 20 years ago.)
By contrast, if we were still designing the latest processors with paper and pencil (as was necessary before computers existed), we would expect computer capability to increase only linearly, since improvements would arrive at roughly the fixed rate achievable by engineers back then.
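To make that contrast concrete, here is a toy calculation (all numbers are illustrative, chosen to keep the arithmetic clean, not measured from real hardware): a capability that doubles every two years reaches roughly 1,000x after 20 years, while one improving at a fixed rate manages only about 20x.

```python
# Toy comparison of compounding vs. linear improvement.
# All numbers are illustrative assumptions, not historical measurements.

YEARS = 20
DOUBLING_PERIOD = 2   # assumed: capability doubles every 2 years
LINEAR_STEP = 1.0     # assumed: fixed improvement per year without feedback

compounding = 1.0     # relative capability, starting at 1x
linear = 1.0

for year in range(1, YEARS + 1):
    compounding *= 2 ** (1 / DOUBLING_PERIOD)  # each generation's tools design the next
    linear += LINEAR_STEP                      # same fixed rate as the first generation

print(f"After {YEARS} years: compounding = {compounding:.0f}x, linear = {linear:.0f}x")
# -> compounding = 1024x, linear = 21x
```

The gap only widens from there: the compounding curve gains more in its final two years than the linear curve gains in all twenty.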
Most of the recent press and hype about “AI” — which really means machine learning with deep neural networks — focuses on direct applications such as self-driving cars and workplace automation. But I think a much more profound possibility lies in the ability of deep learning to increase the speed of innovation itself.
This is not a vague notion about “intelligence” or even a discussion about the extent to which computers can replace humans. Rather, it’s a specific capability that’s well suited to at least some types of scientific research. As David Rotman describes one such application in Technology Review:
Human researchers can explore only a tiny slice of what is possible. It’s estimated that there are as many as 10⁶⁰ potentially drug-like molecules—more than the number of atoms in the solar system. But traversing seemingly unlimited possibilities is what machine learning is good at. Trained on large databases of existing molecules and their properties, the programs can explore all possible related molecules.
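Rotman doesn’t detail the methods involved, but the general pattern he describes is surrogate-model screening: train a model on molecules with known properties, then use it to cheaply rank a far larger pool of candidates. A minimal sketch of that pattern, in which featurize() is a hypothetical stand-in for real molecular fingerprints and the “measured” properties are simulated, might look like this:

```python
# Minimal sketch of ML-guided candidate screening; not any specific lab's pipeline.
# Real systems would use chemical fingerprints (e.g. via RDKit) and richer models;
# featurize() below is a hypothetical placeholder.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

def featurize(n_molecules: int, n_features: int = 64) -> np.ndarray:
    """Placeholder featurizer: random vectors standing in for molecular descriptors."""
    return rng.random((n_molecules, n_features))

# "Known" molecules with measured properties (simulated here for the sketch).
X_known = featurize(500)
y_known = X_known @ rng.random(64)  # pretend the property depends on the features

# Train a surrogate model on the known data.
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_known, y_known)

# Score a much larger candidate pool cheaply, instead of synthesizing everything.
X_candidates = featurize(100_000)
scores = model.predict(X_candidates)
top = np.argsort(scores)[-10:][::-1]  # 10 most promising candidates for lab follow-up
print("Top candidate indices:", top)
```

The point is the asymmetry: the model scores 100,000 candidates in seconds, while synthesizing and testing even one molecule in the lab can take days, so even a rough surrogate narrows the search enormously.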
This by itself is not a revolution in chemistry; it’s a tool like any other. But increases in the speed of innovation build on each other. An advance aided by machine learning could very well lead to faster computer processors, which in turn support even more complex machine learning, and the cycle continues.
Rotman also makes a compelling point about the compounding effects of faster research in the context of business and academia:
It takes an average of 15 to 20 years to come up with a new material, says Tonio Buonassisi, a mechanical engineer at MIT who is working with a team of scientists in Singapore to speed up the process. That’s far too long for most businesses. It’s impractical even for many academic groups. Who wants to spend years on a material that may or may not work? This is why venture-backed startups, which have generated much of the innovation in software and even biotech, have long given up on clean tech: venture capitalists generally need a return within seven years or sooner.
“A 10x acceleration [in the speed of materials discovery] is not only possible, it is necessary,” says Buonassisi, who runs a photovoltaic research lab at MIT. His goal, and that of a loosely connected network of fellow scientists, is to use AI and machine learning to get that 15-to-20-year time frame down to around two to five years by attacking the various bottlenecks in the lab, automating as much of the process as possible.
In other words, if the time needed for materials discovery could be brought below that roughly five-year threshold, it could kick off an explosion in investment, because the payoffs would finally align with human time scales.
Futurists like Ray Kurzweil have been writing about this type of acceleration for decades. But Rotman’s article resonated with me as an antidote to the more common narratives about “AI” as a vague long-term utopia/dystopia or a narrow short-term technological advance. Far more interesting to me is how it fits into the broader story of accelerating scientific advancement.