Looking Back at Artificial Intelligence



  • Good from experience – and well developed

    I have already written here on the blog about my enthusiasm for artificial intelligence (AI) and what I personally associate with it. In simple terms, the technology rests on four pillars: compute, data, algorithms and humans. Its capabilities have grown to such an extent that even critics have to admit that AI will profoundly change our everyday lives. I would go further: AI will improve them. Just think of the opportunities opening up in healthcare, the automotive sector and manufacturing. Yet we are only at the beginning of a rapid development. Looking back, we can see who paved the way.

    Born in a conference room

    “We propose that a 2 month, 10 man study of artificial intelligence be carried out during the summer of 1956 at Dartmouth College.” With these words began the project proposal that the US logician and computer scientist John McCarthy and three colleagues submitted to potential funders in 1955. The Dartmouth Conference is considered the birth of artificial intelligence. The term AI describes the attempt to program computers so that they can work on problems independently and make human-like decisions. The “intelligence” of the machine arises, as with humans, through learning: to learn, the machine derives its decisions from collected data. One branch of machine learning is Deep Learning, which is based on artificial neural networks. These were already a focal point of the Dartmouth Conference, and important preparatory work on them had been done beforehand.

     

    The hypothetical nerve cells go back to the logician Walter Pitts and the neurophysiologist Warren McCulloch, who showed in 1943 how such simplified neurons could be wired together into networks that compute logical functions. The threshold logic they used is similar to that of today’s neural networks. Marvin Minsky picked up these findings and built a neurocomputer called SNARC (Stochastic Neural Analog Reinforcement Calculator), a machine of around 40 artificial neurons built from vacuum tubes and held together by a web of cables. In 1958, after Dartmouth, Frank Rosenblatt expanded the concept of artificial neurons into an adaptive network, the perceptron. Twelve years later, Seppo Linnainmaa published the backpropagation algorithm, with which a neural network learns independently and efficiently. Geoffrey Hinton and his colleagues put it into practice in 1986: their neural network recognized simple image properties faster than other systems. The principle that the machine learns from experience is still valid today.
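
    To make these ideas concrete, here is a minimal Python sketch of a threshold neuron in the spirit of McCulloch and Pitts, together with the perceptron rule of nudging the weights after each mistake. The weights, inputs, learning rate and the AND example are illustrative assumptions, not a reconstruction of the historical machines.

    ```python
    # Minimal sketch: a McCulloch-Pitts-style threshold neuron and the
    # perceptron learning rule. All weights, inputs and the learning rate
    # are illustrative choices, not values from the historical systems.

    def threshold_neuron(inputs, weights, threshold):
        """Fire (output 1) if the weighted sum of the inputs reaches the threshold."""
        weighted_sum = sum(x * w for x, w in zip(inputs, weights))
        return 1 if weighted_sum >= threshold else 0

    def train_perceptron(samples, labels, learning_rate=0.1, epochs=20):
        """Rosenblatt-style learning: adjust the weights whenever a prediction is wrong."""
        weights = [0.0] * len(samples[0])
        bias = 0.0
        for _ in range(epochs):
            for x, target in zip(samples, labels):
                prediction = threshold_neuron(x, weights, -bias)
                error = target - prediction
                # Learning from experience: move the weights toward the correct answer.
                weights = [w + learning_rate * error * xi for w, xi in zip(weights, x)]
                bias += learning_rate * error
        return weights, bias

    # Toy example: learn the logical AND function from its truth table.
    samples = [(0, 0), (0, 1), (1, 0), (1, 1)]
    labels = [0, 0, 0, 1]
    weights, bias = train_perceptron(samples, labels)
    print([threshold_neuron(x, weights, -bias) for x in samples])  # expected: [0, 0, 0, 1]
    ```

    Because AND is linearly separable, the loop settles on a working set of weights after a few passes, which is exactly the “learning from experience” the paragraph above describes.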

    Cooling and resurgent interest

    The progress of artificially intelligent software systems, however, fell short of expectations. Storage systems did not have the necessary capacity and I/O performance, so the large amounts of data needed to train the systems were not available. In addition, computers were not fast enough for the complex calculations. At the beginning of the 1980s, interest in machine learning waned and the much-quoted “AI winter” began. Only the IBM system Deep Blue, which defeated the chess world champion Garry Kasparov in a six-game match in 1997, made the headlines. Deep Blue belonged to the expert systems, which work without neural networks.

     

    The groundwork for the hype that persists today was laid by Yann LeCun. The French computer scientist, who now heads AI research at Facebook, presented the first Convolutional Neural Networks (CNNs) in 1989. A CNN treats every part of an image in the same way: instead of one large neural network that looks at the whole image at once, a much smaller network analyzes only a small patch at a time and is moved step by step across the entire image.
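
    The sliding-window idea can be sketched in a few lines of Python. The 3×3 filter and the tiny 5×5 image below are hypothetical values chosen for illustration; they are not LeCun’s original architecture, only the basic mechanism of applying the same small set of weights at every position.

    ```python
    # Minimal sketch of the convolution idea: one small set of weights (a filter)
    # is slid step by step over the whole image, so every region is analyzed by
    # the same small network. The 3x3 filter and the toy 5x5 image are purely
    # illustrative, hand-picked values.

    def convolve2d(image, kernel):
        """Slide the kernel over the image and record one response per position."""
        kh, kw = len(kernel), len(kernel[0])
        ih, iw = len(image), len(image[0])
        output = []
        for row in range(ih - kh + 1):
            responses = []
            for col in range(iw - kw + 1):
                # The same weights are applied at every position (weight sharing).
                response = sum(
                    image[row + i][col + j] * kernel[i][j]
                    for i in range(kh)
                    for j in range(kw)
                )
                responses.append(response)
            output.append(responses)
        return output

    # Toy example: a 5x5 image with a vertical edge, and a simple vertical-edge filter.
    image = [
        [0, 0, 1, 1, 1],
        [0, 0, 1, 1, 1],
        [0, 0, 1, 1, 1],
        [0, 0, 1, 1, 1],
        [0, 0, 1, 1, 1],
    ]
    kernel = [
        [-1, 0, 1],
        [-1, 0, 1],
        [-1, 0, 1],
    ]
    for row in convolve2d(image, kernel):
        print(row)  # large responses where the edge sits under the filter
    ```

    Because the same weights are reused at every position, the network needs far fewer parameters than one connected to the whole image at once, and a feature learned in one corner is recognized anywhere in the picture.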

    Off into the depths and into the boom

    Another foundation was laid by the research group of the German computer scientist Jürgen Schmidhuber, which advanced Recurrent Neural Networks (RNNs) in 1991. With them, machines learned tasks that they had not mastered before. In a recurrent network, the model neurons are not only connected from layer to layer; their outputs are also fed back as inputs at the next time step, which gives the network a memory of the sequence it is processing. Recurrent neural networks are used in speech recognition, handwriting recognition and machine translation. In 1997, Jürgen Schmidhuber, together with the German computer scientist Sepp Hochreiter, extended the approach with the “long short-term memory” (LSTM) technique. For Schmidhuber, Recurrent Neural Networks with LSTM are the quintessential deep learning networks. Deep Learning has celebrated significant successes since 2016 and is responsible for the current boom in artificial intelligence, a boom that was prepared long ago, back in the AI winter.
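
    Here is a minimal sketch of that feedback loop, assuming a plain RNN cell with a tanh activation; the gating machinery of an LSTM is deliberately omitted. The weights and the short input sequence are random, illustrative values.

    ```python
    import math
    import random

    # Minimal sketch of a recurrent step: the hidden state from the previous
    # time step is fed back in together with the new input, so the network
    # carries a memory of the sequence. Weights and inputs are random,
    # illustrative values; an LSTM adds gating on top of this basic loop.

    random.seed(0)
    INPUT_SIZE, HIDDEN_SIZE = 2, 3

    # Illustrative random weights: input-to-hidden and hidden-to-hidden.
    w_in = [[random.uniform(-0.5, 0.5) for _ in range(INPUT_SIZE)] for _ in range(HIDDEN_SIZE)]
    w_rec = [[random.uniform(-0.5, 0.5) for _ in range(HIDDEN_SIZE)] for _ in range(HIDDEN_SIZE)]

    def rnn_step(x, h_prev):
        """One time step: mix the new input with the previous hidden state."""
        h_new = []
        for i in range(HIDDEN_SIZE):
            total = sum(w_in[i][j] * x[j] for j in range(INPUT_SIZE))
            total += sum(w_rec[i][j] * h_prev[j] for j in range(HIDDEN_SIZE))
            h_new.append(math.tanh(total))
        return h_new

    # Process a short sequence; the hidden state links each step to the next.
    sequence = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
    h = [0.0] * HIDDEN_SIZE
    for x in sequence:
        h = rnn_step(x, h)
        print([round(value, 3) for value in h])
    ```

    Because the hidden state is carried from one step to the next, the final output depends on the whole sequence rather than on the last input alone, which is what makes recurrent networks suitable for speech, handwriting and translation.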




    https://blog.netapp.com/looking-back-at-ai/

