Artificial intelligence has become an inseparable part of our daily lives. These days many people turn not to another person but to a chatbot for answers, and instead of digging through the internet or drafting texts themselves, they hand the job over to a large language model. But what actually happens inside the “brain” of artificial intelligence when it replies to us? This topic is the subject of numerous international research projects. In Hungary, it is also a key focus area of the NRDI Office’s HU-rizon Programme.
The programme, which supports international collaborations of leading Hungarian research centres, gives special emphasis to digital transformation and AI research. Its first showcase event will take place in Szeged, where local researchers will present international projects led by colleagues from the University of Szeged. Among them is the RAItHMA project, headed by Professor Márk Jelasity, head of the Department of Artificial Intelligence at the University of Szeged and researcher at the Centre of Excellence for Interdisciplinary Research, Development and Innovation (IKIKK). As he points out, despite using AI every day, we still know surprisingly little about how it actually works…
What are the most important questions in AI research today?
Large language models and other generative systems such as ChatGPT, Gemini or image generators are being used more and more widely. But the truth is that nobody really understands how they work. We can see that in certain tasks these systems perform remarkably well, yet in some specific situations they behave in very strange ways. It may sound shocking, but the odd reality is that we have very little idea of what actually goes on inside them – of what is really happening in their “heads”.
Why is that?
These models are not built by experts arranging knowledge into a clear, interpretable structure and teaching the system from it. Instead, they are trained on vast amounts of data. Language models are so enormous that their scale defies human comprehension. The largest already have a trillion parameters and are trained on practically the entire content of the internet. But the task they are set is not to learn meaningful concepts. It is simply to continue a given document, to predict the next word in a sequence. What we do not know is whether they actually understand what they are saying or on what basis they produce their answers. There are many signs suggesting they may not interpret or understand the world in the same way we do – or indeed that they interpret it at all.
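For readers who want to see what that training objective looks like in practice, here is a deliberately tiny sketch (a toy vocabulary and a two-layer stand-in for a real model; every name and size below is invented for illustration). The only thing the network is trained to do is predict the next token from the ones before it:

```python
import torch
import torch.nn.functional as F

# Toy vocabulary and a tiny "language model": an embedding layer
# followed by a linear projection back to vocabulary logits.
vocab_size, embed_dim = 50, 16
embedding = torch.nn.Embedding(vocab_size, embed_dim)
to_logits = torch.nn.Linear(embed_dim, vocab_size)

# A token sequence; the model's only task is to predict token t+1 from token t.
tokens = torch.randint(0, vocab_size, (1, 10))   # (batch, sequence)
inputs, targets = tokens[:, :-1], tokens[:, 1:]  # shift by one position

logits = to_logits(embedding(inputs))            # (batch, seq-1, vocab)
loss = F.cross_entropy(logits.reshape(-1, vocab_size), targets.reshape(-1))
print(f"next-token loss: {loss.item():.3f}")
```

Scaled up to a trillion parameters and the text of the entire internet, this same next-word loss is essentially the whole training signal – nothing in it explicitly asks the model to form concepts.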
So the goal of your project is to understand how AI sees the world?
On one level, yes: we want to extract the knowledge embedded in these systems. We have a methodology, based on our earlier work, that seems promising. From the internal representations of these models we can build various dictionaries that describe, for example, how the system assembles the meaning of words from different components.
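The interview does not spell out the technique, but one widely used way to build such dictionaries is sparse dictionary learning over a model’s hidden activations: each internal representation is decomposed into a sparse combination of learned “concept” directions. The sketch below is only schematic – random numbers stand in for real activations, and all shapes are assumptions:

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

# Hypothetical stand-in for hidden activations collected from a language
# model: 200 activation vectors of dimension 64 (real models use thousands).
rng = np.random.default_rng(0)
activations = rng.normal(size=(200, 64))

# Learn an overcomplete dictionary: each activation vector is approximated
# as a sparse combination of learned "concept" directions.
learner = DictionaryLearning(n_components=128,
                             transform_algorithm="lasso_lars",
                             transform_alpha=0.1, random_state=0)
codes = learner.fit_transform(activations)   # sparse codes, shape (200, 128)

# Each row of learner.components_ is a candidate concept direction; the
# sparse code says which concepts combine to form a given representation.
print(codes.shape, learner.components_.shape)
print("avg active concepts per vector:", (np.abs(codes) > 1e-6).sum(1).mean())
```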
On a higher level, we are studying the relationships between concepts, and then going a step further to see what kind of world model emerges inside the AI. Humans use world models to solve mental tasks: we create a mental model of the problem and then manipulate it to reach a solution. What is unclear is whether large language models have anything comparable. Do they know where they started, where they are and where they are going? Or are they simply applying basic rules, with the right answer emerging more by chance than by design?
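One common way researchers test for such an internal world model is a linear probe: if a simple classifier can read a piece of world state (say, whose turn it is in a board game) straight off the hidden activations, that is evidence the model represents that state. A schematic sketch, with synthetic data standing in for real model activations:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in: hidden states from a model, plus a world-state label
# we suspect the model tracks internally.
rng = np.random.default_rng(1)
hidden = rng.normal(size=(1000, 64))
labels = (hidden[:, :8].sum(axis=1) > 0).astype(int)  # pretend signal

X_tr, X_te, y_tr, y_te = train_test_split(hidden, labels, random_state=0)
probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# High held-out accuracy suggests the world-state information is linearly
# encoded in the representations.
print(f"probe accuracy: {probe.score(X_te, y_te):.2f}")
```

High probe accuracy is suggestive rather than conclusive: the information may be present in the activations without the model actually using it to reach its answers.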
Since this is such a fundamental question, I imagine many research groups around the world are trying to find an answer. The aim of the HU-rizon Programme is to support outstanding Hungarian researchers in gaining leadership experience in international collaborations and consortia. Are you investigating this topic as part of an international partnership as well?
Yes, many groups are working on this, and it is far from an easy task. Within HU-rizon we are collaborating with colleagues at Rutgers University and Nanyang Technological University in Singapore to extract and more precisely define the knowledge that large language models actually acquire. We are exploring whether these AI systems have a world model at all, and if so, how many – since in different contexts they might be using entirely separate, independent models. If such models exist, our task is to make them understandable to humans in concrete cases: to be able to tell users that, according to its model, this is what the world looks like. Armed with such insight, people will be able to work with AI far more effectively.
So in principle, everyone who uses AI could benefit from what you discover?
Exactly. We all know the experience: you ask ChatGPT a question and it gives you an answer, but you have no idea how much you can trust it. If, for instance, a doctor asks it to diagnose an illness based on a test result, it will provide one. If the doctor then asks what the diagnosis was based on, the model will also give an explanation. The problem is that this explanation usually has nothing to do with the reasoning that actually produced the answer. It is therefore crucial to find ways of validating such explanations – only then can we obtain reliable and meaningful responses.
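One simple family of validation tests is counterfactual: if an explanation claims a particular input drove the answer, then editing that input should change the answer; if it does not, the explanation cannot be faithful to the model’s actual reasoning. The sketch below uses an entirely made-up toy model and feature names purely to illustrate the idea:

```python
# Schematic faithfulness check (hypothetical model and features): if an
# explanation claims feature "crp" drove the diagnosis, editing that
# feature should flip the model's answer; if it doesn't, the explanation
# is suspect.
def faithfulness_check(model, case, cited_feature, altered_value):
    original = model(case)
    edited = dict(case, **{cited_feature: altered_value})
    return original != model(edited)  # True: the explanation survived the test

# Toy model standing in for an AI diagnostic assistant.
toy_model = lambda case: "infection" if case["crp"] > 10 else "healthy"
case = {"crp": 42, "age": 55}
print(faithfulness_check(toy_model, case, "crp", 3))   # True
print(faithfulness_check(toy_model, case, "age", 20))  # False: age was not used
```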
And for anyone interested in hearing reliable, meaningful and engaging answers about HU-rizon projects in Szeged or about the programme that supports outstanding Hungarian researchers, more information will be available on 25 September at the first stop of the HU-rizon Roadshow.