Understanding the Concepts of Artificial Intelligence (AI)



  • A short dive into the depths of neural networks

    In artificial intelligence (AI) there is sometimes a confusion of terms that can easily be avoided. AI in the narrower sense (also called “Strong AI”) aims to develop machines that act as intelligently as humans – an academic vision that is still out of reach. Instead, let’s focus on “Weak AI”, and above all on Machine Learning (ML). Most AI developers would describe their specialty as ML; others may have Reasoning, Natural Language Processing (NLP) or Planning (automated planning) in their email signature. Like ML, these terms can be understood as subareas of AI. How closely the technologies are interwoven is illustrated by the widely acclaimed Google Translate: since the end of 2016, the system has achieved surprisingly good results for some of its 103 supported languages because it translates entire sentences context-sensitively. This is due to the use of neural networks, which have also improved speech recognition in general.

    Neural networks play in their own league

    Artificial neural networks, which I have already covered in a previous post, are the heart of ML systems. They are a mathematical abstraction of information processing, modeled loosely on how it takes place in the brain. A neuron is modeled as a function (an algorithm) with inputs, parameters and an output. Photos, text, videos or audio files serve as the data input. During learning, the parameters are changed: by modifying the weights, the model learns what is important and what is not, independently recognizes patterns and thus delivers increasingly better results. After the learning phase, the system can also evaluate elements it has never seen before.
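
    To make the idea of a neuron as a function with inputs, parameters and an output more concrete, here is a minimal Python sketch of a single neuron with a toy learning loop. The sigmoid activation, the learning rate and the example numbers are my own illustrative assumptions, not details from the post.

        import numpy as np

        # Minimal single-neuron sketch: output = activation(weights . inputs + bias).
        # "Learning" means nudging the weights so the output moves toward a target value.
        def neuron(inputs, weights, bias):
            return 1.0 / (1.0 + np.exp(-(np.dot(weights, inputs) + bias)))  # sigmoid activation

        rng = np.random.default_rng(0)
        weights, bias = rng.normal(size=3), 0.0
        x, target = np.array([0.5, -1.2, 3.0]), 1.0      # one made-up training example

        for _ in range(100):
            y = neuron(x, weights, bias)
            grad = (y - target) * y * (1.0 - y)          # gradient of the squared error
            weights -= 0.5 * grad * x                    # modifying the weighting = learning
            bias -= 0.5 * grad

        print(round(neuron(x, weights, bias), 3))        # now close to the target of 1.0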


    In contrast to neural networks, expert systems – another subarea of AI – do not teach themselves anything. They process large amounts of data and are connected to databases or data lakes; for data access, experts usually have to program filters by hand. What such systems can nevertheless achieve was demonstrated, for example, in chess: in 1997, the Deep Blue system developed by IBM defeated the legendary world chess champion Garry Kasparov over six games. In the far more complex board game Go, neural networks had to be used: only thanks to self-learning methods was the AlphaGo system able to beat Lee Sedol, probably the world’s best Go player, four to one over five games in March 2016.
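
    The contrast can also be sketched in code: an expert system’s “knowledge” consists of filters or rules that a human expert has programmed, and nothing is learned from the data. The toy chess rules and thresholds below are invented purely for illustration.

        # Minimal sketch of an expert-system-style filter: hand-written rules instead of
        # learned parameters. The rules and thresholds are invented for illustration.
        def classify_position(material_balance, king_safety):
            """Toy chess evaluation in the spirit of a rule-based system."""
            if material_balance > 3:
                return "winning"
            if material_balance < -3:
                return "losing"
            if king_safety < 0:
                return "risky"
            return "balanced"

        print(classify_position(material_balance=4, king_safety=1))   # -> winning
        print(classify_position(material_balance=0, king_safety=-2))  # -> risky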

    From Machine Learning to Deep Learning

    In the past, ML systems worked with an upstream feature-recognition step known as feature engineering. If the task was to recognize a face, the system first searched for indispensable features such as eyes, nose or mouth. Today’s Deep Learning (DL), to which the German computer scientist Jürgen Schmidhuber, among others, contributed, opens up completely new dimensions. Such a neural network consists of several layers of artificial neurons. On its way from input to output, a query passes through a usually very simple operation in each layer, for example the application of a filter. The neurons may well identify image features, as in classic feature engineering. However, the model is largely left to its own devices: it decides for itself which kinds of elements it is best to analyze or extract in order to predict the content of the image as well as possible. These layers give the neural networks their greater depth.
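
    A rough Python sketch of this layered structure, under the assumption of simple fully connected layers with a ReLU nonlinearity (a real image model would typically use convolutional filters instead); the layer sizes are arbitrary choices of mine.

        import numpy as np

        # Minimal sketch of a "deep" network: several layers, each applying a very simple
        # operation (a linear map followed by a nonlinearity).
        rng = np.random.default_rng(1)
        layer_sizes = [64, 32, 16, 2]          # input -> two hidden layers -> output
        weights = [rng.normal(scale=0.1, size=(m, n))
                   for n, m in zip(layer_sizes[:-1], layer_sizes[1:])]

        def forward(x):
            for w in weights[:-1]:
                x = np.maximum(0.0, w @ x)     # one simple operation per layer (ReLU)
            return weights[-1] @ x             # final layer produces the prediction

        x = rng.normal(size=64)                # stand-in for the extracted image data
        print(forward(x))                      # two output values, e.g. class scores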


    Powerful computers for ML are available today, but until recently the bottleneck was reading and writing data (I/O) from storage media. With the introduction of the NVMe mass-storage interface and 100 Gigabit Ethernet in 2017/2018, this hurdle fell as well. NVIDIA and NetApp have demonstrated what is now possible by combining a DGX supercomputer with the AFF A800 all-flash storage system as a “converged infrastructure”. The figures for the ONTAP AI Proven Architecture solution are impressive: alongside a latency of less than 500 microseconds, users can achieve a throughput of up to 25 GB/s, which allows a 24-node cluster to analyze more than 60,000 training images per second (ResNet-50 with Tensor Cores). The solution can be called an option, an innovation or a vision – not everyone interprets these terms the same way, and that is a good thing.
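
    A rough plausibility check of those figures takes only a few lines; the average size assumed here for a compressed training image is my own assumption, not a number from the post.

        # Back-of-envelope check: does 60,000 images/s fit within 25 GB/s of throughput?
        # The ~115 KB average size of a compressed training image is an assumption.
        images_per_second = 60_000
        avg_image_bytes = 115 * 1024
        required_throughput_gb = images_per_second * avg_image_bytes / 1e9
        print(f"~{required_throughput_gb:.1f} GB/s needed")   # roughly 7 GB/s, well below 25 GB/s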

    The post Understanding the Concepts of Artificial Intelligence (AI) appeared first on NetApp Blog.



    https://blog.netapp.com/understanding-the-concepts-of-ai/


© Lightnetics 2024