Myths and Misunderstandings
Do not let anyone sell you illusions.
Many companies claim to have their own AI.
In many cases, these companies simply connect to language models from well-known AI providers.
Even the interface itself is often provided by those external providers.
As soon as a company that supposedly has its own AI bills usage in tokens, you should pay close attention.
This is often a sign that external providers are involved. A third-party provider also means that your data may be logged and is therefore not fully secure.
Sentiment analysis can also be an indication that a third-party system such as ChatGPT is being used, unless the provider can demonstrate that it has a genuinely trained AI of its own.
Companies claim to train their “own” AI.
In most cases, no actual AI is being trained because, as described above, these companies do not have a language model of their own but rely on third-party providers.
Usually, a RAG system is simply supplied with data.
RAG stands for Retrieval-Augmented Generation and is often operated together with a vector database as an add-on to an AI system.
This allows semantic analysis of user input only to a limited extent, and even in this case data is still sent to third-party providers.
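To make the "a RAG system is simply supplied with data" point concrete, here is a minimal sketch of the retrieve-and-augment loop. The documents, the query, and the bag-of-words "embedding" are invented for illustration; real systems use a learned embedding model and a dedicated vector database, and the augmented prompt in step 3 is typically still sent to an external language model.

```python
import math
from collections import Counter

def embed(text):
    """Toy bag-of-words 'embedding'; real systems use a learned embedding model."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two Counter vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# 1. "Feeding the system with data" usually just means indexing documents.
documents = [
    "Support contracts cover two systems and include updates.",
    "Administrator training takes place on site and lasts two days.",
    "Bug reports are handled by the second-level support team.",
]
index = [(doc, embed(doc)) for doc in documents]

# 2. At query time, retrieve the document most similar to the user input ...
query = "We need administrator training for the existing system."
query_vec = embed(query)
best_doc, _ = max(index, key=lambda item: cosine(query_vec, item[1]))

# 3. ... and prepend it to the prompt sent to the (often external) LLM.
prompt = f"Context: {best_doc}\n\nQuestion: {query}"
print(best_doc)
```

The "semantic" part of the analysis is nothing more than this similarity lookup, which is why it works only to a limited extent.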
What is AI, actually?
There are several ways to explain artificial intelligence (AI).
The most understandable description is that AI is a computer-based system capable of imitating human abilities.
These abilities include creativity, organization or planning, logical thinking and learning.
In addition, an AI-supported technical system can perceive its environment, interact with it and solve problems based on the information available.
This brings us to a core problem: many users assume that AI can do and know everything and can take both work and thinking off their hands.
But that is not the case. This misconception is understandable because, in everyday use, AI often appears mysterious and all-knowing: you enter information, and the AI always produces an answer. How exactly that answer is produced, however, cannot always be traced. In this sense, AI is comparable to a black box.
An AI used within a company should therefore always be seen as support for employees and as a way to improve internal processes.
Interaction between humans and AI is essential for this.
An AI can do anything and knows everything...
AI is not a universal problem solver and reaches its limits in certain tasks.
For example, AI is not suitable for reliably generating a ticket from a recorded voice message.
There are many reasons for this. The caller may forget to mention their phone number and/or email address.
Even if this information is recorded, the caller may have spoken so unclearly that the AI reproduces the number or email address incorrectly.
Finally, AI cannot reliably recognize proper names or handle different spellings of a name correctly in all cases.
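The unreliability of extracting contact data from a transcript can be illustrated with a small sketch. The transcripts and the phone-number pattern below are invented examples; a real pipeline would first run a speech-to-text model, which is exactly where the mishearing happens.

```python
import re

# Invented example transcripts of the same voice message.
clean   = "Please call me back at 0221 4711-42, thanks."
garbled = "Please call me back at 0221 4717-42, thanks."  # '1' misheard as '7'
no_info = "Please call me back as soon as possible, thanks."

# Simplified phone-number pattern for illustration only.
PHONE = re.compile(r"\b\d{3,5}[ \-/]?\d{2,6}(?:-\d{1,4})?\b")

for transcript in (clean, garbled, no_info):
    match = PHONE.search(transcript)
    print(match.group(0) if match else "no phone number found")
```

A single misheard digit still yields a perfectly plausible-looking phone number, and the system has no way of knowing it is wrong; missing information simply yields nothing at all.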
Qualification of tickets
AI can be helpful when qualifying incoming emails.
Let us assume an email with the following text:
“Dear Sir or Madam,
We have an urgent matter.
The end of the year is approaching and we need to use up our annual budget.
Therefore, we need a quotation for support contracts for two additional systems in our company environment.
We also need administrator training for the existing system.
Oh, and we believe we have found a bug. Could we schedule an appointment for that as well?
Kind regards
Margarete Wusterhausen”
There are several requests in this single message:
- The request for quotations
- A request concerning an existing support contract
- A suspected or actual bug
- A request for an appointment
A RAG-based system that has been fed all previously received emails can only decide on the basis of keywords; it cannot truly learn from the volume of emails or make independent decisions, because RAG is not AI. RAG-based systems should therefore be seen as little more than a somewhat more flexible mail filter.
This means that such a system may classify the email as urgent and assign it a high priority simply because of certain words.
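Sketched as code, such a keyword-driven filter might look like this. The keyword lists and category names are invented for illustration; the point is that the single word "urgent" drives the priority, regardless of the relaxed tone of the rest of the message.

```python
EMAIL = """Dear Sir or Madam,
We have an urgent matter.
The end of the year is approaching and we need to use up our annual budget.
Therefore, we need a quotation for support contracts for two additional systems.
We also need administrator training for the existing system.
Oh, and we believe we have found a bug. Could we schedule an appointment for that as well?
Kind regards
Margarete Wusterhausen"""

# Invented keyword rules; a real mail filter would have many more.
PRIORITY_KEYWORDS = {"urgent", "immediately", "asap", "outage"}
CATEGORY_KEYWORDS = {
    "sales":    {"quotation", "budget", "contract", "contracts"},
    "support":  {"bug", "error", "crash"},
    "training": {"training"},
}

# Crude tokenization: strip the most common punctuation, then split.
words = set(EMAIL.lower().replace(",", " ").replace(".", " ").split())

priority = "high" if words & PRIORITY_KEYWORDS else "normal"
categories = [name for name, kw in CATEGORY_KEYWORDS.items() if words & kw]

print(priority)    # driven by the keyword "urgent" alone
print(categories)  # several categories match one single email
```

Note that the filter also matches three categories at once and has no basis for deciding which queue the resulting ticket belongs in.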
An AI capable of performing sentiment analysis and making more flexible decisions may recognize that the tone of the email does not actually indicate an urgent problem.
But for that, an actual AI is required. If that AI is provided by a third party, do you really want your confidential emails to be sent there?
Only a truly independent AI provides real data security.
So how would the AI classify it?
For an AI, it is almost impossible to make a completely clear-cut decision here.
The AI will probably classify the email as important because the sender explicitly indicates this in the text.
Even if the AI recognizes that it is not truly urgent, it still cannot clearly decide which exact category or queue the ticket should be assigned to.