Explaining AI for better decision making
With her focus on human-centered AI, Nava Tintarev takes a distinctive approach to studying explainable artificial intelligence. ‘If we design software, we need to think about who is using it on the other end.’ After having studied and worked in Sweden, Australia, England, Scotland, and Spain, Tintarev came to the Netherlands in 2017. Since 2020, she has been a full professor at Maastricht University, where she became Research Director in 2023.
Why did you decide to study computer science?
‘I have always been interested in computers, ever since my father introduced me to them when I was about 10 years old. As a child, I liked programming in BASIC a lot. Besides being intellectually stimulating, I also found it very rewarding to be able to make something that runs on its own.’
And how did you end up in the field of explainable AI?
‘That started during my PhD research in Aberdeen. I am interested both in people and in computers – during my bachelor’s in computer science, I also did a minor in psychology. My master’s project was in computational linguistics, automatically generating reports for people who had difficulty reading. For my PhD, I wanted to go abroad, and I found a nice position in Aberdeen, working with Judith Masthoff on using natural language generation to provide explanations for recommender systems. At that time, large language models were in their infancy, and the group in Aberdeen was one of the few places working in natural language generation.’
Was it always your dream to become an academic?
‘No. Originally, I was not planning to become a scientist at all. I even worked in industry for a short period of time, which I also liked a lot. But over the years, I noticed that I kept wanting to answer scientific questions. I also really like the flexibility of an academic career, both in terms of location and time: you do not have to be in an office at 8 am each day, as long as the quality of your work is good. And I have always been keen to travel and learn from other cultures.
After obtaining my PhD, I wanted to experience what working in industry was like. I moved to Spain, where I got a job as an R&D Engineer at Telefónica Research. Since that company was very research-oriented, aiming for publications and patents, it was a good fit. But a couple of months in, my former group leader in Scotland informed me that a postdoc position would open up as a follow-up to one of the projects he had worked on, aimed at developing a system that generates personal narratives for children with complex communication needs. After careful consideration, I decided my heart was in science and helping people, so I moved back to Scotland.’
How has your research evolved over the years?
‘That postdoc project was aimed at helping pre-verbal children with special needs tell their parents how their day at school had been. In our research, we investigated the best ways to support them. We built a system that generates the story of their day from several sources: their daily schedule, QR codes that their teachers scanned with a mobile phone to log additional activities or locations, voice recordings, and embellishments provided by the children themselves to comment on how they liked something or someone.
In that project, I learned a lot. In the first place about user experiences and interfaces; if you are able to come up with an interface that works for kids with special needs, it most likely is a good interface for everyone. And second, about how to manage a research project. This was my first time supervising a team of five people and being the lead programmer as well.
Whereas this project used natural language processing to generate a story in the form of text, over time I moved more toward graphical interfaces. In 2016, I spent some time in California in a group with expertise in interactive communication methods, changing my research focus toward using multimodal and interactive interfaces to provide explanations.
Around 2017, I started working on viewpoint biases, a topic I am still working on today. My ambition is to help people become aware of their own possible biases, and show them that there are also other ways to look at a certain topic.
Nava Tintarev is a Full Professor in Explainable AI at Maastricht University in the Department of Advanced Computing Sciences (DACS), where she is the Director of Research. Tintarev received a magister degree in Computer Science from Uppsala University and a PhD degree in the same field from the University of Aberdeen. During her bachelor’s, she spent a year in Australia, taking undergraduate courses in Psychology.
We also look at HR and advertising, taking the view of different stakeholders. For job seekers, it is interesting to get an explanation of which jobs might fit them, and why an AI algorithm comes up with specific suggestions. For recruiters, it is insightful to understand why an algorithm comes up with a specific top ten of candidates for the job. Besides helping both the job seeker and the recruiter make better decisions to end up with the right fit, we also investigate what happens with fairness. Typically, a company assesses a hiring process as ‘good’ when it hires people who resemble employees who did well in the past. But perhaps there are even better candidates available. That requires recruiters to be aware of biases that are reinforced by a selection algorithm.’
How do you tackle such research questions?
‘My team is at the intersection of human-computer interaction and AI, taking a multidisciplinary approach. We typically develop a task and present users with a mock-up of the system. In some cases, we build a limited-functionality version of the software, but we can also provide them with a piece of paper describing the workflow of the system. Then we use qualitative methods like structured interviews or think-aloud protocols to investigate how the users would use such a system and where they might get stuck. Other times, we build a system and measure users’ perceptions and behavior.’
What drives you in your scientific work?
‘Ultimately, I aim to contribute to better decision support. Take the aforementioned case of HR: I want the advice of the AI system to result in the best candidate for the job, through a fair and unbiased recruitment process. What I find fascinating is that although computers do not store information the way people do, it can be aligned with how people convey, understand, or use information. At the moment, I am happy with my group as it is. I do not need to grow an empire. Topic-wise, I would like to work on longer-term evaluations of decision-making based on our explanations. Another interesting direction we are currently exploring is explanations for videos. When videos are generated or summarized by an AI system, what biases play a role? Take the case of a video with multiple speakers: does the summary pay equal attention to all of them? And if not, how can we show that to an editor?’
From January 2026, you will join the IPN Board. Are you looking forward to that?
‘When you progress in your career, the added value of publishing another paper diminishes. Much to my own surprise, I also turned out to enjoy developing research strategy. So, besides being a Research Director here in Maastricht, about one and a half years ago I also joined NWO’s Round Table Computer Science.
These are challenging times, both in a geopolitical and in a financial sense. Now is the time to come together as a field, also in Europe. Through my membership of the IPN Board, I want to contribute to a more unified, and also more internationally oriented, computer science community in the Netherlands.’