Oh, I see!
Deep Learning itself was a rebranding of Neural Networks, which had failed to live up to their promise in the late 50s (or to be more precise, the Perceptrons). We were promised an “embryo of a computer” https://www.cs.ucf.edu/~lboloni/Teaching/CAP5636_Fall2023/homeworks/Reading%201%20-%20Perceptron-NYTimes-1958-07-13.pdf that would soon become a sci-fi robot. To be fair, having a large machine “learn” to distinguish between photos of men and women without having to code the rules into it was really impressive.
After all the unfulfilled promises, funding (from DARPA) dried up, and even going near the subject would get you burned.
Later on, people came up with the idea of expert systems: if you can just encode all expert knowledge into a decision tree, why would anyone need experts in the first place? Turns out you still do, because knowledge needs to be updated, and in some cases not all the information is available to make a simple yes/no decision by following a branch of questions.
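To make the limitation concrete, here is a toy sketch of what such a system boils down to (the scenario and rules are entirely my own invention, not from any real expert system):

```python
# A toy "expert system" for diagnosing why a computer won't start.
# The knowledge is frozen into the branches: updating it means editing
# the code, and a question the tree never asks can never influence
# the answer -- which is exactly why the experts stayed employed.
def diagnose(powers_on: bool, beeps: bool, screen_lights_up: bool) -> str:
    if not powers_on:
        return "check the power supply"
    if beeps:
        return "reseat the RAM"
    if not screen_lights_up:
        return "check the video cable"
    return "consult an actual expert"
```

If the machine fails in a way the tree's author never anticipated, the system has nothing useful to say.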
After that, people needed ways to work with massive amounts of data automatically (for instance, to find spam mail), so the field of machine learning was born. But no one called it AI because of the bad reputation. Yet these methods provided a lot of value and helped companies like Google rise to power.
The closest thing we called AI in the 2000s was the GPS navigation system, because it even talked! Nowadays we know they are just graph algorithms with some really basic speech synthesis.
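Those navigation systems boil down to shortest-path search over a road graph. A minimal sketch using Dijkstra's algorithm (the toy graph and node names are made up for illustration):

```python
import heapq

def shortest_path(graph, start, goal):
    """Dijkstra's algorithm: graph maps node -> list of (neighbor, distance)."""
    # Priority queue of (distance so far, node, path taken)
    queue = [(0, start, [start])]
    visited = set()
    while queue:
        dist, node, path = heapq.heappop(queue)
        if node == goal:
            return dist, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, weight in graph.get(node, []):
            if neighbor not in visited:
                heapq.heappush(queue, (dist + weight, neighbor, path + [neighbor]))
    return float("inf"), []

roads = {
    "A": [("B", 5), ("C", 2)],
    "C": [("B", 1), ("D", 7)],
    "B": [("D", 3)],
}
# shortest_path(roads, "A", "D") -> (6, ["A", "C", "B", "D"])
```

Real systems use heavily optimized variants (A*, contraction hierarchies), but there is no "intelligence" in the sci-fi sense anywhere in the loop.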
Nevertheless, working on “AI” was still frowned upon. DeepMind’s Demis Hassabis often talks about his experience of also studying neuroscience just to be able to work on AI. His teachers at MIT would roll their eyes, thinking he was crazy and just wasting his time. In the end, he had to work on video games instead, because video game AI was far ahead of academia: https://www.youtube.com/watch?v=GtNvEna6bxc
In the end, he was right. So was Geoff Hinton, who never gave up working on Neural Networks, despite everyone thinking they were a dead end. We all saw what happened in the late 50s and early 60s, and even though we had the maths figured out to train deeper models by the 80s, no one dared to touch them. They were expensive and slow, we didn’t have enough data or compute, and you could get similar results with far less complicated algorithms. In the end, he was also right to stick with it, although rebranding it to Deep Learning was probably the right choice (though nowadays Wide Learning would probably be more accurate). When people saw how effective these models were, beating 30+ years of carefully hand-crafted computer vision solutions on ImageNet, everyone started to jump ship, which kicked off the current AI boom we are in.
I am not sure whether another AI winter is coming, similar to what we had in the 60s and 80s, because Machine Learning still provides value and people enjoy interacting with LLMs as if they were definitely not Scarlett Johansson. It is entirely possible that they will become far less hyped and we will stop calling the tech AI. But industries already rely on a lot of these technologies, and they will rely on them more in the future.
In fact, I think the next big thing will be robotics. So even if LLMs run out of steam, a humanoid robot folding your laundry will continue generating hype.
I can’t find the article on the history of AI winters that used to be my go-to, but I will link it if I find it.
Update: I can’t find it anywhere; it might have become part of a book: The First AI Winter (1974–1980) — Making Things Think: How AI and Deep Learning Power the Products We Use.
Update2: this post is an oversimplification; there have also been failed hype cycles based on chatbots before. For instance, the failure of machine translation during the Cold War, and the more recent (less than 10 years ago) failure of recurrent-network-based chatbots (which were supposed to take over Facebook Messenger for IVR tasks).