Since Alan Turing first raised the question “Can machines think?” in his pioneering 1950 paper “Computing Machinery and Intelligence”, the development of AI has not been smooth, and the field has yet to achieve its goal of general artificial intelligence.
Even so, remarkable progress has been made: IBM’s Deep Blue defeated the world’s best chess player, autonomous vehicles were born, and Google DeepMind’s AlphaGo defeated the world’s best Go player. Today’s achievements represent the best research and development of the past 65 years. It is also worth noting that this period included a well-documented “AI winter”, which almost completely overturned people’s early expectations of AI. One of the factors that led to that winter was the gap between hype and actual fundamental progress.
In the past few years, there has been speculation that another AI winter may be coming. What factors might trigger an AI ice age?
The periodic fluctuation of AI
An “AI winter” refers to a period in which public interest in AI declines, along with commercial and academic investment in the technology.
AI initially developed rapidly in the 1950s and 1960s. Although many advances were made, most of them remained academic. At the beginning of the 1970s, enthusiasm for AI began to fade, and this dark period lasted until about 1980. During that first AI winter, work devoted to giving machines human-like intelligence began to lose funding.
In the summer of 1956, a group of mathematicians and computer scientists took over the top floor of the building that housed Dartmouth College’s mathematics department.
Over eight weeks, they jointly imagined a new field of research. John McCarthy, then a young professor at Dartmouth, coined the term “artificial intelligence” while drafting the proposal for the workshop. He believed the workshop should explore the hypothesis that “every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it”.
At that meeting, researchers sketched out the AI we know today, and it gave birth to the first camp of AI scientists. “Symbolism” is an approach to simulating intelligence based on logical reasoning, also known as logicism, the psychological school, or the computer school. Its principles are chiefly the physical symbol system hypothesis and the principle of bounded rationality, and it held the leading position in AI research for a long time.
Expert systems built on this approach reached their peak in the 1980s.
In the years after the conference, “connectionism” attributed human intelligence to the workings of the brain, emphasizing that intelligence is generated by large numbers of simple units through dense interconnection and parallel computation. It starts from the neuron and proceeds to neural network models and brain models, opening another path for the development of artificial intelligence.
For a long time, these two approaches were considered mutually exclusive, and each side believed it was on the path to general artificial intelligence. Looking back on the decades since that meeting, we can see that the hopes of AI researchers have often been dashed, yet these setbacks have not stopped them from advancing AI.
Today, although AI is bringing revolutionary changes to industry and may upend the global labor market, many experts still wonder whether today’s AI applications have reached their limit. As Charles Choi described in “Seven Revealing Ways AIs Fail”, the weaknesses of today’s deep learning systems are becoming increasingly obvious.
However, researchers are not pessimistic about the future of AI. We may well face another AI winter in the near future, but that may also be the moment when inspired AI engineers finally lead us into the eternal summer of machine thinking.
An article titled “AI Winter is Coming” by Filip Piekniewski, an expert in computer vision and artificial intelligence, sparked heated discussion online. The article mainly criticizes the hype around deep learning, arguing that the technology is far from revolutionary and is facing development bottlenecks.
Major companies’ interest in AI is, in fact, cooling, and another AI winter may be coming.
Will an AI winter come?
Since 1993, the field of artificial intelligence has made increasingly remarkable progress. In 1997, IBM’s Deep Blue became the first computer chess player to defeat the reigning world chess champion, Garry Kasparov.
In 2005, a Stanford autonomous vehicle won the DARPA Grand Challenge by driving itself 131 miles along an unrehearsed desert route. In early 2016, Google DeepMind’s AlphaGo defeated one of the world’s best Go players.
Image source: DARPA Grand Challenge 2005
In the past twenty years, everything has changed.
In particular, the vigorous growth of the Internet has given the AI industry enough images, audio, video, and other data to train neural networks and deploy them widely. However, the continued success of deep learning has depended on adding ever more layers to neural networks and spending ever more GPU time training them.
An analysis by the AI research company OpenAI shows that the computing power required to train the largest AI systems doubled roughly every two years until 2012, and has since doubled every 3-4 months. As Neil C. Thompson and his colleagues wrote in “Deep Learning’s Diminishing Returns”, many researchers worry that AI’s computational demands are on an unsustainable track.
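To make the scale of that growth concrete, here is a minimal sketch in Python of how a fixed doubling period compounds. The 24-month and 3.4-month doubling times are the figures reported in OpenAI’s analysis; the six-year horizon is simply an illustrative assumption, not a number from the article.

```python
# Rough illustration of how training-compute requirements compound
# under a fixed doubling period. Figures are illustrative only.

def growth_factor(years: float, doubling_months: float) -> float:
    """Total growth after `years` if compute doubles every `doubling_months` months."""
    return 2 ** (years * 12 / doubling_months)

horizon = 6  # years; an arbitrary example horizon

pre_2012 = growth_factor(horizon, 24)    # ~2-year doubling (Moore's-Law era)
post_2012 = growth_factor(horizon, 3.4)  # ~3.4-month doubling (OpenAI's estimate)

print(f"{horizon} years at a 24-month doubling time:  {pre_2012:,.0f}x")
print(f"{horizon} years at a 3.4-month doubling time: {post_2012:,.0f}x")
# The second figure is several orders of magnitude larger than the first,
# which is why researchers describe the current trajectory as unsustainable.
```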
A common problem in early AI research was a severe shortage of computing power: progress was limited by hardware rather than by human intelligence or ability.
Over the past 25 years, artificial intelligence has advanced largely on the back of dramatic improvements in computing power.
However, data and algorithms keep growing: the world now adds roughly 20 ZB of new data every year, and demand for AI computing power grows about tenfold every year, far faster than the performance-doubling cycle of Moore’s Law.
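A back-of-the-envelope sketch, again in Python, shows how quickly such a gap would widen if demand really grew tenfold per year while hardware performance only doubled every two years; the five-year horizon is an arbitrary illustration, not a figure from the article.

```python
# Back-of-the-envelope comparison: AI compute demand growing 10x per year
# versus hardware performance doubling every two years (Moore's Law).
# All quantities are normalized to 1.0 in year 0 and are illustrative only.

for year in range(1, 6):
    demand = 10 ** year           # demand: 10x growth per year
    supply = 2 ** (year / 2)      # supply: doubling every two years
    gap = demand / supply
    print(f"year {year}: demand {demand:>10,.0f}x | "
          f"supply {supply:6.1f}x | gap {gap:>12,.0f}x")

# After five years, demand outpaces supply by a factor of roughly 18,000 --
# the kind of divergence that makes the current trajectory look
# unsustainable without new hardware approaches.
```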
We are approaching the theoretical physical limit of the number of transistors that can be installed on a chip.
Intel, for example, has slowed the pace at which it introduces new chip manufacturing processes, because it has become difficult to keep shrinking transistors cost-effectively. In short, the end of Moore’s Law is approaching.
Image source: Ray Kurzweil, DFJ
There are short-term solutions that will keep computing power growing and thereby sustain progress in artificial intelligence.
For example, in mid-2017 Google announced that it had developed a specialized AI chip called the “Cloud TPU”, optimized for training and executing deep neural networks.
Amazon has developed its own chip for Alexa, its AI personal assistant.
At the same time, many startups are trying to adapt chip designs to specialized AI applications. However, these are only short-term solutions.
What happens when we have exhausted the optimizations available to conventional chip design? Will we see another AI winter? The answer is probably yes, unless quantum computing surpasses classical computing and offers a more solid foundation.
But as of today, no quantum computer has achieved “quantum supremacy” in a way that makes it more efficient than conventional computers. If we reach the limits of conventional computing power before true quantum supremacy arrives, I’m afraid another AI winter lies ahead.
The problems AI researchers are trying to solve are becoming more and more complex, driving us toward Alan Turing’s vision of artificial intelligence, but much remains to be done. And without the help of quantum computing, we may never realize the full potential of AI.
No one can say for certain whether an AI winter is coming. But it is important to be aware of the potential risks and to watch for the signs, so that we are prepared if it does arrive.