My research primarily focuses on the development of novel deep learning architectures and algorithms. While I enjoy working with various types of data, my main interest lies in image processing.

I am deeply engaged in interdisciplinary machine learning research. In my current position at KU Leuven, I work in computational aesthetics, an interdisciplinary field of research dedicated to understanding image aesthetics. My work involves developing deep learning approaches to better understand aesthetic preferences in images. Using deep neural networks, I aim to uncover insights into the roles of low-, mid-, and high-level features in determining aesthetic preferences. Furthermore, I am investigating how neural networks perceive the world, with the aim of enhancing their robustness.

Another aspect of my research is the exploration of attention mechanisms in neural networks, inspired by the human visual attention system. I have been seeking new approaches to improve existing attention mechanisms and to reduce their computational complexity.
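For context, the sketch below shows standard scaled dot-product attention, the core operation of the Transformer. It is a generic illustration rather than the specific mechanisms I am developing, and it makes explicit why the cost grows quadratically with sequence length.

```python
import torch
import torch.nn.functional as F

def scaled_dot_product_attention(q, k, v):
    """Standard scaled dot-product attention.

    q, k, v: tensors of shape (batch, seq_len, d_model).
    The (seq_len x seq_len) score matrix is what makes the
    computation quadratic in sequence length.
    """
    d_k = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d_k ** 0.5  # (batch, seq, seq)
    weights = F.softmax(scores, dim=-1)            # attention distribution
    return weights @ v                             # weighted sum of values

# toy self-attention on random tensors
x = torch.randn(2, 16, 64)
out = scaled_dot_product_attention(x, x, x)
print(out.shape)  # torch.Size([2, 16, 64])
```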

Research Interests

I have collaborated on a review paper that explains the main neural network architectures behind generative models, ranging from convolutional neural networks to cutting-edge diffusion models. Our paper also highlights how art and computer science interact.

I have designed a hyper-autoencoder architecture, in which a secondary hypernetwork generates the weights for the encoder and decoder layers of the primary autoencoder; a sketch of the idea follows below. Additionally, I have developed a semi-supervised learning model that combines convolutional neural networks and autoencoders with the hypernetwork.
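The following is only a minimal sketch of the hypernetwork idea, under simplifying assumptions that differ from the actual architecture: a single linear encoder and decoder layer, and a hypothetical learned embedding as the hypernetwork input.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HyperAutoencoder(nn.Module):
    """Toy hyper-autoencoder: a hypernetwork generates the weights of a
    one-layer linear encoder and decoder from a learned embedding."""

    def __init__(self, in_dim=784, latent_dim=32, embed_dim=16):
        super().__init__()
        # learned embedding that conditions the hypernetwork (a placeholder choice)
        self.embedding = nn.Parameter(torch.randn(embed_dim))
        # hypernetwork output holds all encoder + decoder weights and biases
        n_params = 2 * in_dim * latent_dim + latent_dim + in_dim
        self.hypernet = nn.Sequential(
            nn.Linear(embed_dim, 256), nn.ReLU(), nn.Linear(256, n_params)
        )
        self.in_dim, self.latent_dim = in_dim, latent_dim

    def forward(self, x):
        p = self.hypernet(self.embedding)
        i, l = self.in_dim, self.latent_dim
        w_enc = p[: i * l].view(l, i)                 # encoder weight matrix
        w_dec = p[i * l : 2 * i * l].view(i, l)       # decoder weight matrix
        b_enc = p[2 * i * l : 2 * i * l + l]          # encoder bias
        b_dec = p[2 * i * l + l :]                    # decoder bias
        z = F.relu(F.linear(x, w_enc, b_enc))         # encode with generated weights
        return F.linear(z, w_dec, b_dec)              # decode with generated weights

x = torch.randn(8, 784)
recon = HyperAutoencoder()(x)
print(recon.shape)  # torch.Size([8, 784])
```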

One of my interests lies in the optimization algorithms used to train neural networks. I enjoy examining how different optimizers behave during training and weighing the pros and cons of each.
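As a rough illustration of the kind of comparison I mean, the toy script below trains the same small network with SGD (with momentum) and with Adam on synthetic data; the model, data, and hyperparameters are arbitrary placeholders rather than settings from any particular study.

```python
import torch
import torch.nn as nn

# synthetic regression data
torch.manual_seed(0)
x = torch.randn(256, 10)
y = x @ torch.randn(10, 1) + 0.1 * torch.randn(256, 1)

def train(optimizer_cls, **opt_kwargs):
    """Train the same small MLP with a given optimizer and return the final loss."""
    model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
    opt = optimizer_cls(model.parameters(), **opt_kwargs)
    loss_fn = nn.MSELoss()
    for _ in range(200):
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
    return loss.item()

print("SGD :", train(torch.optim.SGD, lr=0.01, momentum=0.9))
print("Adam:", train(torch.optim.Adam, lr=0.01))
```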

Since 2021, as a postdoctoral researcher at the GestaltReVision Lab, I have been working in the field of computational aesthetics, an interdisciplinary area focused on the automatic assessment of image aesthetics, with the aim of identifying the key elements that influence aesthetic judgements in images. I have developed a multi-task convolutional neural network for predicting image aesthetics and have also applied various other machine learning models, complemented by explainable AI (XAI) techniques.
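The sketch below only illustrates the general multi-task pattern, a shared backbone with separate output heads; the backbone, the choice of a score-regression head plus an attribute head, and all layer sizes are placeholders rather than my published architecture.

```python
import torch
import torch.nn as nn

class MultiTaskAestheticsCNN(nn.Module):
    """Toy multi-task CNN: a shared convolutional backbone feeding two heads,
    e.g. an aesthetic-score regressor and an auxiliary attribute classifier."""

    def __init__(self, n_attributes=5):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.score_head = nn.Linear(64, 1)            # aesthetic score (regression)
        self.attr_head = nn.Linear(64, n_attributes)  # auxiliary attribute logits

    def forward(self, images):
        features = self.backbone(images)
        return self.score_head(features), self.attr_head(features)

images = torch.randn(4, 3, 224, 224)
score, attrs = MultiTaskAestheticsCNN()(images)
print(score.shape, attrs.shape)  # torch.Size([4, 1]) torch.Size([4, 5])
```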

I enjoy developing neural networks for other research areas as well, such as nuclear physics. In these projects, we apply machine learning models, including deep neural networks, to predict the binding energies of atomic nuclei, and we again make use of explainable AI (XAI) techniques.
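As a generic illustration of the regression setup (not our actual models or data pipeline), a small multilayer perceptron can map proton and neutron numbers to a binding energy; the few data points below are approximate values used only for illustration.

```python
import torch
import torch.nn as nn

# illustrative (Z, N) -> binding energy (MeV) pairs; approximate values only
inputs = torch.tensor([[8., 8.], [20., 20.], [26., 30.], [50., 70.]])
targets = torch.tensor([[127.6], [342.1], [492.3], [1020.5]])

# small MLP regressor from (Z, N) to binding energy
model = nn.Sequential(
    nn.Linear(2, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(1000):
    opt.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()
    opt.step()

# predicted binding energy for an unseen (Z, N) pair
print(model(torch.tensor([[28., 30.]])))
```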

I have been working on attention mechanisms in neural networks and have published a review paper on this topic. The review traces their development and explains the core attention mechanisms, from the initial milestones to the Transformer.