From your search engine to the software you use every day, everything seems to be crammed with "intelligent AI" features these days. The question is: are these features really useful? Do they add anything, or are they just a sales pitch to make you think your software is future-proof? In this article, I dive into the reality behind the hype: what can AI actually do, and what can't it?
To explain this properly, we need to talk about the type of AI that is driving a new generation of tools and AI features. This type of AI is called generative AI, and many people know it mainly from ChatGPT.
ChatGPT, like other generative tools that can create images, seemed to come out of nowhere. Suddenly you could talk to an AI, ask questions and even have it write texts that almost seemed human.
In reality, it was an interplay of three technical developments that led to the rise of generative AI: better algorithms, increased computing power and the availability of huge data sets. Together, these brought tools like DALL-E and ChatGPT to life.
The availability of massive amounts of data is one of the three main reasons why AI tools are now breaking through. It is no coincidence that AI works especially well with images and language: for those applications, a huge amount of data is available. The fact that we as humans provide feedback on the output – by adjusting text or by indicating whether the output is good or not – makes AI applications better and better.
Without large data sets, AI quickly loses its power. Small and context-specific tasks – things we do effortlessly thanks to knowledge and experience – do not work well. You don't teach an AI to drive a car in a day, something you can teach a 17-year-old. That 17-year-old will also learn to react to situations they have never encountered before. A cyclist crossing the street or a child running into the road is easily recognized by a human as a dangerous situation to which you must adapt as a driver. An AI, however, needs lots of examples of those specific situations and will get stuck on new situations of which it has not seen thousands, if not millions, of examples.
AI is powerful in the specific domains it is trained for and thus limited in the versatility needed to be truly useful in everyday life.
A bigger problem with AI is that it does not understand the content of the tasks it performs.
Generative AI models are statistically predictive models that learn to recognize patterns in huge amounts of data. They predict, simply put, the next word or pixel. That can often be impressive, but they don’t understand what is being asked of them. All they do is generate an answer that satisfies you.
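To make "predicting the next word" concrete, here is a deliberately tiny sketch: a bigram model that generates text purely from word-frequency counts. This is not how a real large language model works internally (those use neural networks over vastly more data), but it illustrates the same principle – there is no understanding anywhere, only statistics over patterns.

```python
from collections import Counter, defaultdict

# A toy corpus; a real model is trained on billions of words.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which.
nexts = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    nexts[a][b] += 1

def predict_next(word):
    # Pick the most frequent follower. The model has no idea what a
    # "cat" or a "mat" is -- it only knows counts.
    followers = nexts.get(word)
    return followers.most_common(1)[0][0] if followers else None

print(predict_next("the"))  # "cat" -- it follows "the" most often above
```

Scale this idea up by many orders of magnitude and the output starts to look impressively human, yet the mechanism remains prediction, not comprehension.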
An AI can recognize a cat or draw a cat, but it has no idea what a cat is. To an AI, cats are nothing more than patterns of data. That an AI looks at the world this way may seem harmless, but it has major consequences.
Generative AI, in other words, is not grounded in logic. And without logic, such an AI is incapable of understanding.
An AI without understanding cannot make logical connections that are obvious to us. Ask an AI to solve a simple moral dilemma, and you regularly get an answer that is incoherent or even dangerous. An AI can also proclaim total falsehoods and present them as facts. This is also known as “hallucinating.”
This lack of understanding is exactly why AI can produce good texts but cannot make complex decisions independently. Moreover, because an AI does not understand what you ask, it is very difficult to convert your questions into concrete actions.
Turning questions into actions is currently the holy grail. If that works, an AI can actually do things for you and perform broad tasks. A term used for this is Large Action Models (LAMs): AI systems that not only process language, but independently attach actions to it. Think of an assistant that not only answers, but actually takes care of things for you.
Imagine an AI that turns the task “Plan the ideal week-long trip for my family in the next vacation” into booked cottages, activities and train tickets that match your family’s personal preferences. That sounds great, but practice is tricky.
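To show why this is tricky, here is a hypothetical, heavily simplified sketch of the LAM idea: mapping a natural-language request onto concrete actions. Everything in it (the function names `book_cottage` and `book_train`, the keyword matching) is made up for illustration; a real system would need genuine understanding where this toy only matches words.

```python
# Stand-in "actions" -- in a real system these would call booking APIs.
def book_cottage(region):
    return f"cottage booked in {region}"

def book_train(destination):
    return f"train tickets booked to {destination}"

def plan_trip(request):
    # The hard, unsolved part is this mapping: turning free-form
    # language into the right sequence of actions. Keyword matching
    # breaks the moment a request is phrased even slightly differently.
    actions = []
    if "cottage" in request:
        actions.append(book_cottage("the Ardennes"))
    if "train" in request:
        actions.append(book_train("Brussels"))
    return actions

print(plan_trip("book a cottage and train tickets for our vacation"))
```

The gap between this brittle dispatcher and an assistant that reliably books the right trip for your family is exactly the gap between pattern matching and understanding.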
LAMs require not only an AI that understands what you ask, but also an AI that can make decisions independently. While this is a wonderful concept, it is virtually unrealizable technically, regardless of the privacy concerns involved.
Because developers are aware of this lack of feasibility, other concepts are currently being devised to train AI to perform actions.
You experience the problems with converting questions into actions especially with the new AI assistant on Android phones: Gemini. As soon as you ask Gemini to do something for you, you are regularly told that "she is a language model and cannot perform actions." My request that she not repeat this every time I ask her something doesn't seem to have caught on yet. I hear it several times daily. It feels like she doesn't understand me 😉.
However, there is a language that computers do understand very well that AI can use to turn words into actions.
Computer code is the language that translates words into action. I find code generation one of the most fascinating applications of AI, especially because of the potential implication that you can go from language to action through code. However, it is important to consider how mature this technology is at the moment. Google reported in October 2024 that AI now writes 25% of its code. At the same time, it published a paper showing that programmers who use AI tools are not significantly more efficient. In addition, AI tools often produce more errors and insecure code. So it remains the programmer – the human – who is responsible for the quality of the code.
If the code quality of AI were significantly better – and the code it creates more secure – I could, as a digital native with relatively little coding knowledge, write programs that turn ideas into action. I could then build specific action models myself, for specific questions, using AI. However, I can already do this today with so-called no-code or low-code applications. Here too, unfortunately, I expect AI to bring major improvements, but no major breakthrough in the short term.
So the current generation of AI tools has many limitations. That doesn't mean it doesn't add anything. Claude.ai checks this piece for language and spelling, and does so considerably better and faster than I do. For language processing and language-based processes, AI is extremely useful. Think of summarizing meetings, analyzing large amounts of text and creating overviews of complex documents. These are all tasks where AI tools can greatly support you in your daily work. But the key word here is tool. The AI tool, not AI in general.
The tool analogy can help you immensely in distinguishing hype from reality.
"If all you have is a hammer, you tend to see every problem as a nail" is an old piece of wisdom that is particularly relevant when it comes to AI. It is tempting to see every new AI tool as the solution to all your problems. The reality is that an AI tool or feature, like a hammer, is simply a tool – powerful for specific tasks, but not for everything.
In my daily work, I use several AI tools, each for its own specific purpose. Perplexity helps me with research, Claude.ai with finding clear formulations for complex ideas and checking my writing. But the real work – thinking, writing, making choices – remains human work. The tools just make the process smoother and more efficient.
So the question is not whether to use AI, but which specific tool best suits your particular need. Some tasks require a hammer, others require a screwdriver. Above all, however, there are many tasks that do not require AI at all.
Pierce the hype, choose your tools with care, and use AI where it actually adds value. Now that you have an even better understanding of what AI can and cannot do, this becomes much easier. Because in the end, it’s not the tools that do the work, but the humans who wield them. And AI doesn’t change that at all.