AI & Machine Learning
Artificial intelligence and machine learning
The title of this lernOS guide is "Artificial Intelligence". In this introductory chapter, we want to bring some order to the terminology we encounter and, at the same time, clarify what this guide deals with: the part of AI applications labelled "Machine Learning" and "Generative AI". Artificial intelligence as a whole, however, encompasses many more specialisations that we are confronted with in everyday life but will not cover here: voice assistants such as Siri and Alexa, automatic translation such as Google Translate, facial recognition for unlocking mobile phones, or personalised recommendation systems based on our previous consumer behaviour, to name but a few.
In this guide, we limit ourselves to the AI applications that citizens and employees are likely to come into direct contact with in their everyday lives and also use themselves.
This infographic presents a hierarchical and chronological categorisation of the key stages in the development of artificial intelligence. From the first mention of the term around 1956 to Generative AI in 2021, it can be read as a journey from theoretical concepts to practical applications of increasing depth and complexity. Since the mid-20th century, the landscape has shifted from simple, rule-based algorithms to complex learning systems capable of taking on human-like tasks.
Artificial intelligence
The term artificial intelligence (AI) was first used around 1956. AI was the fundamental concept for the development of "intelligent" machines. The beginnings were characterised by the desire to create machines that could imitate basic human intelligence processes. Early AI systems were able to perform simple tasks such as solving logic puzzles or playing chess; the focus was on programming specific rules that enabled machines to carry out certain tasks. AI is the most comprehensive term and today denotes the entire field of computer science that aims to create intelligent machines capable of mimicking or even surpassing human intelligence. It is about systems that can perform tasks normally requiring human thought, such as visual perception, speech recognition and decision-making. This ranges from simple programmed processes to complex systems that can learn and adapt. Think of AI as the outermost circle, the umbrella term under which the more specialised concepts and applications can be subsumed.
Machine Learning
Machine learning (ML) has, since 1997, denoted a specific area within AI whose goal is to enable machines to learn from data. ML marks the transition from rigid, rule-based AI to adaptive systems. With its introduction, significant progress was made: machine learning is a more specific discipline that enables machines to improve over time in order to make better decisions or predictions. It encompasses a variety of techniques that enable computers to recognise patterns in data and to use these insights for future tasks.
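The idea of "learning from data" can be made concrete with a deliberately tiny sketch (not part of the guide, purely an illustration): instead of programming a rule by hand, we let the computer estimate the parameters of a straight line from example points and then use them for a prediction.

```python
# Toy illustration of "learning from data": fitting a straight line
# y = a*x + b to example points by ordinary least squares.
def fit_line(xs, ys):
    """Estimate slope a and intercept b from the data points."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope: covariance(x, y) divided by variance(x)
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x
    return a, b

# "Training data": points that happen to lie on the line y = 2x + 1
xs = [0, 1, 2, 3]
ys = [1, 3, 5, 7]
a, b = fit_line(xs, ys)
print(a, b)        # the learned parameters
print(a * 10 + b)  # prediction for an unseen input x = 10
```

The "learning" here is trivial, but the pattern is the same as in real ML systems: parameters are derived from data rather than written down as explicit rules.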
Deep Learning
Deep learning marks a breakthrough in the ability of machines to process and learn from unstructured data such as images and human language. The technology is inspired by the way the human brain works. Here, layers of neural networks are used to process large amounts of data, recognise complex patterns in data and make decisions.
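As a hand-made illustration of the layer idea (not from the guide; the weights below are chosen by hand rather than learned): stacking two layers of simple "neurons" lets a network represent the XOR pattern, which a single layer cannot.

```python
# Toy illustration of layered processing in a neural network.
# The weights are hand-picked, not learned, to show how stacked
# layers can represent a pattern (XOR) that one layer alone cannot.
def step(x):
    """Minimal activation function: the neuron fires (1) above 0."""
    return 1 if x > 0 else 0

def xor_network(x1, x2):
    # Hidden layer: two "neurons" detecting intermediate features.
    h_or  = step(x1 + x2 - 0.5)   # active if at least one input is 1
    h_and = step(x1 + x2 - 1.5)   # active only if both inputs are 1
    # Output layer combines the hidden features into the result.
    return step(h_or - h_and - 0.5)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_network(a, b))
```

Deep learning systems follow the same principle with many more layers and with weights that are learned from data instead of set by hand.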
Generative AI
Generative AI represents the current pinnacle of AI development based on deep learning. It goes beyond simply recognising patterns and can generate new content. It is able to create new written, visual and audio content based on specifications or existing data. Generative AI can also generate content that was not yet present in the training data for the model, such as pieces of music, works of art or texts that are almost indistinguishable from human creations.
Large Language Models & Diffusion Models
Within Generative AI, Large Language Models (LLMs), such as the well-known GPT (Generative Pre-trained Transformer from OpenAI), have proven to be crucial. These models specialise in understanding and generating human language and have attracted a lot of attention and widespread use due to their ability to produce coherent and relevant text. LLMs have enabled new applications in translation, summarisation and code generation. Another specialisation within Generative AI is Diffusion Models (DM). These models represent an innovation in image generation and are capable of producing high-quality images that are almost indistinguishable from real ones. They expand the possibilities in image synthesis and offer new tools for designers and creatives.
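To give a feel for what "generating language" means at its core, here is a radically simplified toy sketch (not from the guide; real LLMs use Transformer networks with billions of learned parameters): a model that merely counts which word most often follows another in a tiny corpus and predicts accordingly.

```python
# Toy illustration of next-word prediction, the core task behind
# language models, reduced to simple bigram counting.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept".split()

# "Training": count which word follows which (bigrams).
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    """Return the most frequent successor of `word` in the corpus."""
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" twice, "mat" once
```

An LLM does conceptually the same thing, predicting the next token from the preceding context, but it learns far richer statistical patterns from vastly larger training data.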
Each of these steps expanded the possibilities of AI and shifted the focus from rigid, rule-based approaches to adaptive, self-learning systems that can deal with a variety of data and demonstrate human-like creativity. AI is the foundation, machine learning is the method by which systems learn from data, deep learning is a sophisticated technique that utilises deep neural networks, and generative AI is the pinnacle of innovation that enables new creations to emerge. Each level builds on the knowledge and techniques of the previous one.
Important milestones in artificial intelligence
The history of artificial intelligence goes back to the 1950s. The following table gives you an overview of the most important milestones:
Year | Milestone |
---|---|
1950 | Alan Turing proposes the Turing Test (originally the Imitation Game) to assess whether a machine exhibits intelligent behaviour. |
1956 | The Dartmouth Workshop marks the birth of artificial intelligence as a field of research. |
1959 | Allen Newell and Herbert A. Simon develop the General Problem Solver (GPS), following their earlier Logic Theorist (1956), the first AI program. |
1966 | Joseph Weizenbaum develops ELIZA, which enables communication between humans and machines in natural language. |
1967 | Dendral, a rule-based system for chemical analysis and a major early AI achievement, is developed. |
1969 | Shakey the Robot is the first mobile robot that can reason logically and solve problems. |
1970s | Expert systems with manually created rules are developed. |
1973 | The AI winter begins due to inflated expectations and unfulfilled goals in AI research. |
1980s | Expert systems gain in popularity. They use rules to imitate human expertise in narrow domains. |
1997 | The Long Short-Term Memory (LSTM) architecture is published, an important algorithm for machine learning on sequential data. |
1997 | IBM Deep Blue defeats world chess champion Garry Kasparov, demonstrating the potential of AI. |
2011 | IBM Watson wins the game show Jeopardy! and demonstrates the natural language processing of AI. |
2011 | Apple's voice assistant Siri comes onto the market. |
2012 | Geoffrey Hinton's deep learning techniques revive interest in neural networks. |
2014 | Google DeepMind develops a neural network that learns to play video games. |
2016 | AlphaGo from DeepMind defeats Go world champion Lee Sedol and demonstrates the strategic capabilities of AI. |
2017 | The deep learning architecture Transformer is proposed, which requires less training time than previous architectures (RNN, LSTM). |
2021 | The term Foundation Model is first used by the Center for Research on Foundation Models (CRFM) at the Stanford Institute for Human-Centered Artificial Intelligence (HAI). |
2021 | DALL-E, a generative AI model that creates images from text descriptions, is published. |
2021 | Ameca, a humanoid robot developed by Engineered Arts, is presented. It is primarily intended as a platform for developing robotics technologies for human-robot interaction; the interaction can be driven either by GPT-3 or by human telepresence. |
2022 | The chatbot ChatGPT, which uses the Large Language Model GPT-3.5, is published. |