From games to algorithms
As a young boy, Bart Jansen was determined to become a computer-game programmer. But as soon as he was introduced to algorithmics, he discovered his true destiny: just as a classmate had foreseen in primary school, he was going to be a scientist. He now works as an Associate Professor at Eindhoven University of Technology.
Why did you decide to enroll in a computer science program?
‘When I was a kid, I was convinced that I would become a programmer of computer games. Back then, a lot of people worked office jobs during the day and tried to break into the gaming industry at night. In their spare time, they would work on extensions to existing games to build up a portfolio. When I was about 13 years old, I joined an international group of some ten people working on a Star Wars-themed shooter. That was a lot of fun. We had someone painting the backgrounds, someone doing level design, and I took care of the programming.
When the time came to choose a study program, I decided that I only needed to get my diploma in computer science and I would be off to my future in the gaming industry. Since Utrecht University was the only university in the Netherlands offering a master’s program in game design, I enrolled there.
During my studies, however, I became intrigued by the courses on algorithmics taught by Hans Bodlaender. I discovered that the reason I liked programming computer games was, in fact, that it required you to come up with ways to be very efficient with computing power. That is when I decided to move all the game design courses into my elective space and pursue a master’s in algorithmics instead.’
Given that you aimed to become a game developer, becoming a scientist was not always the plan?
‘Initially not, no. But looking back, all the signs were there: independence, curiosity, high grades. Back in primary school, a classmate once crafted me an encyclopedia as a Sinterklaas present, with a poem stating that I would certainly become a professor later in life because I knew so much.
The fact that I ended up pursuing a PhD after obtaining my master’s degree was the result of the easiest job interview ever. While I was doing my graduation project with Hans Bodlaender, he was writing a proposal to study kernelization. He told me it would be helpful if he could mention that he already had a good candidate for the PhD position, and asked whether I would mind having my name included in the proposal. When the proposal was awarded, he asked me if I was still interested. And the rest is history.’
What type of research have you been conducting over the years?
‘The common thread in my research is simplifying the inputs to algorithmic problems before solving them. Such a preprocessing step can make the overall process of finding a solution much faster. For example, you can filter out parts of the data that provably do not affect the ultimate solution. This was the focus of my PhD project. An early result I am still proud of deals with the NP-complete Minimum Vertex Cover problem, in which you have to find the smallest set of nodes of a network that covers all its connections. I developed a fast preprocessing algorithm that can be used to shrink the network without changing the answer. The size of the preprocessed instance is guaranteed to be upper-bounded in terms of a measure of the tree-likeness of the input network. This result showed that preprocessing allows Minimum Vertex Cover to be solved quickly on large but tree-like networks.
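To give a flavor of this kind of answer-preserving preprocessing, here is a minimal sketch of a classic textbook reduction for Vertex Cover from parameterized complexity (the so-called Buss rules). Note that this is an illustrative standard example, not the tree-likeness-based algorithm described above; the function name and interface are hypothetical.

```python
# Illustrative sketch: classic "Buss" reduction rules for Vertex Cover,
# parameterized by the target cover size k. This is a textbook example
# of answer-preserving preprocessing, not Jansen's own algorithm.

def buss_kernel(edges, k):
    """Shrink the instance without changing whether a vertex cover of
    size <= k exists. Returns (reduced_edges, remaining_budget, forced),
    where `forced` are vertices that must be in every such cover, or
    None if no cover of size <= k can exist."""
    edges = {frozenset(e) for e in edges}
    forced = set()
    changed = True
    while changed:
        changed = False
        # Recompute degrees on the current edge set.
        deg = {}
        for e in edges:
            for v in e:
                deg[v] = deg.get(v, 0) + 1
        # Rule: a vertex whose degree exceeds the remaining budget must
        # be in every small cover -- otherwise all its neighbors would
        # have to be picked, which already exceeds the budget.
        for v, d in deg.items():
            if d > k - len(forced):
                forced.add(v)
                edges = {e for e in edges if v not in e}
                changed = True
                break
    # After the rules apply no more, max degree <= budget, so a graph
    # with a cover of size <= budget has at most budget^2 edges.
    budget = k - len(forced)
    if budget < 0 or len(edges) > budget * budget:
        return None
    return edges, budget, forced
```

For instance, in a star network the high-degree center is forced into the cover and the whole instance collapses, while a triangle with budget 1 is correctly reported as unsolvable.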
In my subsequent Veni project, I looked at the possibility of splitting the initial input network into smaller pieces, solving these in parallel, and then combining the partial solutions into a final answer. This approach was inspired by Alan Turing’s view on reductions between computational problems, and is therefore called Turing kernelization.
The ERC Starting Grant ReduceSearch, which I worked on from 2018 to 2023, refined the view of what makes a good preprocessing step. Here, we did not aim to reduce the size of the input, but focused on simplifying its structure. Analyzing which parameters drive the combinatorial explosion in the search for a solution can yield significant, provable reductions in overall running time. To this end, we developed methods to identify parts of the solution already during the preprocessing phase, for example, and to split the search space into parts with bounded interaction that can be solved almost independently.’
As of 2024, you have become a representative of IPN’s new Special Interest Group Algorithmics. Why did you want to be involved in this initiative?
‘Many other subdisciplines of computer science were already represented by a Special Interest Group, but for algorithmics there was no such thing yet. Since IPN increasingly consults the SIGs when developing its activities, our field felt the need to join forces as well. During one of the staff meetings here in Eindhoven, the question came up of who would want to join the first board of this organization. It seemed a good opportunity for me to broaden my network in the Netherlands and to expand the managerial experience I am building through my membership of the Eindhoven Young Academy of Engineering.
The Special Interest Group was launched only recently, and we are currently organizing our first events. What I find important is to stress that Large Language Models do not solve all problems in computer science. We need smart algorithms with provable correctness and performance guarantees to be able to trust the results of our computations. One of the challenges for algorithmics as a field is that there are no dedicated calls for proposals. When it comes to funding, we depend on general programs like the Open Competition and NWO’s Talent scheme.’
What is your personal ambition for the coming years?
‘In terms of my research, I want to reach a rigorous mathematical understanding of the structural limitations that make real-life inputs easier to solve than adversarial scenarios. Quite often, we can find a solution far faster than worst-case analysis predicts. I want to be able to provide a rigorous description of the structure of an input to an NP-complete problem, determine which parameters govern its complexity, and use these insights to design algorithms that are provably fast on real-life inputs.
Apart from research, I am also very passionate about education. One of the topics closest to my heart is how best to deal with artificial intelligence in education, something we are currently discussing in the Eindhoven Young Academy of Engineering. For example, what does it mean to give students homework assignments when they can use a Large Language Model to solve them? It is naïve to think that we simply need to come up with harder assignments. Homework has a didactic purpose: as a teacher, you use it to build an understanding of specific concepts. Assignments that cannot yet be solved by artificial intelligence often go beyond what you want to teach your students. I think we should be honest with our students and explain why they should work on such assignments without the help of AI.
In this day and age, it is imperative to find your added value as a teacher. Personally, I always start by getting to know the group, presenting students with an online quiz about their values and hobbies. I also share personal information about what I like to do outside of work, to build mutual trust. And I make an effort to present the teaching material in an interesting way, by providing some background on how and in what context a new algorithm was discovered, and why it was such a breakthrough at the time. Infecting others with our own fascination and enthusiasm: that is where we as humans will outperform AI every time.’
Photo: Bram Saeys