"Much to learn, you still have."
-Yoda
"Much to learn, you still have."
-Yoda
I have a background in statistics, and my research focuses on machine learning, particularly the development of novel deep learning architectures and algorithms. While I work with diverse types of data, my main interest lies in computer vision.
During my PhD, I developed neural networks for computer vision, including work on optimization and computational attention mechanisms inspired by the human visual attention system. My research explores ways to improve attention mechanisms while reducing their computational complexity.
I have since extended this foundation into vision science, aesthetics, art, and even nuclear physics, and I am deeply engaged in interdisciplinary machine learning research. As a postdoctoral researcher at KU Leuven (2021-2024), I worked on computational aesthetics, developing machine learning approaches to better understand aesthetic preferences in images. This research also engaged with art history, artistic image assessment, and AI-generated art.
Currently, I am investigating the intersection of AI and human perception, with a focus on how humans perceive the world compared to machines. This work draws on the Gestalt laws of perceptual grouping, and I have published research exploring Closure in neural networks.
Overall, I investigate how neural networks "see" the world, aiming to enhance their interpretability, creativity, and robustness.
The human brain has an inherent ability to fill in gaps and perceive figures as complete wholes, even when parts are missing or fragmented. I work on this phenomenon, known in psychology as Closure, studying how this ability differs between humans and machines with the goal of improving computer vision models. Focusing on the Gestalt laws of perceptual grouping, I investigate how humans and AI perceive images and whether deep neural networks exhibit similar perceptual behaviors (a toy sketch of this kind of comparison follows the publications below).
Y. Zhang, D. Soydaner, L. Koßmann, F. Behrad, J. Wagemans (2025). Finding Closure: A Closer Look at the Gestalt Law of Closure in Convolutional Neural Networks, Computational Brain & Behavior, 1-13.
Y. Zhang, D. Soydaner, F. Behrad, L. Koßmann, J. Wagemans (2024). Investigating the Gestalt Principle of Closure in Deep Convolutional Neural Networks, European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN), 9-11 October, Bruges, Belgium.
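As a purely illustrative sketch (not the experimental code from these papers), the snippet below draws a complete and a fragmented triangle, a classic Closure stimulus pair, and compares the feature embeddings that a pretrained ResNet-50 assigns to them. The stimulus geometry and choice of backbone are my own assumptions for the example.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image, ImageDraw

# Draw a triangle, either complete or fragmented (corner segments only),
# a classic stimulus pair for probing the Gestalt law of Closure.
def draw_triangle(fragmented, size=224):
    img = Image.new("RGB", (size, size), "white")
    d = ImageDraw.Draw(img)
    pts = [(112, 30), (30, 190), (194, 190)]
    for a, b in zip(pts, pts[1:] + pts[:1]):
        if fragmented:
            # Keep only short segments near each corner, leaving gaps.
            mid = ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)
            d.line([a, ((a[0] + mid[0]) / 2, (a[1] + mid[1]) / 2)], fill="black", width=4)
            d.line([((b[0] + mid[0]) / 2, (b[1] + mid[1]) / 2), b], fill="black", width=4)
        else:
            d.line([a, b], fill="black", width=4)
    return img

# A pretrained backbone stands in for the networks studied in the papers.
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()
feats = torch.nn.Sequential(*list(model.children())[:-1])  # drop the classifier head

prep = T.Compose([T.ToTensor(),
                  T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])])

with torch.no_grad():
    e = [feats(prep(draw_triangle(f)).unsqueeze(0)).flatten(1) for f in (False, True)]
similarity = torch.nn.functional.cosine_similarity(e[0], e[1]).item()
print(f"complete vs. fragmented embedding similarity: {similarity:.3f}")
```

A high similarity would suggest the network treats the fragmented figure much like the whole one; the actual studies use carefully controlled stimuli and analyses rather than this single comparison.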
AI is increasingly present in daily life, and art is no exception, from museum installations to machine-driven aesthetics. I focus on how neural networks generate, evaluate, and manipulate artworks, including aspects such as quality and aesthetics, and I work on generative AI, exploring how machine learning engages with artistic concepts. My work highlights the many ways in which art and computer science intersect.
O. Strafforello, D. Soydaner, M. Willems, A-S. Maerten, S. De Winter (2025). Have Large Vision-Language Models Mastered Art History?, 4th International Workshop on Fine Art Pattern Extraction and Recognition (FAPER), Workshop at the 23rd International Conference on Image Analysis and Processing (ICIAP).
O. Strafforello*, G. M. Odriozola*, F. Behrad*, L-W. Chen*, A-S. Maerten*, D. Soydaner*, J. Wagemans (2024). BackFlip: The Impact of Local and Global Data Augmentations on Artistic Image Aesthetic Assessment, Vision for Art (VISART), Workshop at the European Conference on Computer Vision (ECCV), 30 September, Milan, Italy, arXiv preprint arXiv:2408.14173.
A-S. Maerten, D. Soydaner (2023). From Paintbrush to Pixel: A Review of Deep Neural Networks in AI-Generated Art, arXiv preprint arXiv:2302.10913.
Fundamental topics in deep learning have been the core of my research since the start of my PhD. I am fascinated by optimization, neural network architectures, and computational attention mechanisms, with a primary focus on computer vision.
My comparison of optimization algorithms for deep learning is among my most cited papers and has become a widely referenced resource in the field. My review of computational attention mechanisms traces the field from early efforts to implement attention in neural networks to recent advances such as the Transformer; a minimal sketch of the core attention operation follows the publications below.
D. Soydaner (2022). Attention Mechanisms in Neural Networks: Where It Comes and Where It Goes, Neural Computing and Applications, 34(16), 13371-13385.
D. Soydaner (2020). Hyper Autoencoders, Neural Processing Letters, 52, 1395-1413.
D. Soydaner (2020). A Comparison of Optimization Algorithms for Deep Learning, International Journal of Pattern Recognition and Artificial Intelligence, 34(13), 2052013.
D. Soydaner (2019). Rolling in the Deep Convolutional Neural Networks, International Journal of Intelligent Systems and Applications in Engineering, 7(4), 222-226.
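For readers unfamiliar with the mechanism covered in the review, here is a minimal NumPy sketch of scaled dot-product attention, the operation at the heart of the Transformer. It is a generic textbook formulation, not code from any of the papers above.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.swapaxes(-2, -1) / np.sqrt(d_k)    # pairwise query-key similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # numerically stable row-wise softmax
    return weights @ V                                # weighted sum of value vectors

# Example: 4 query tokens attending over 6 key/value tokens of width 8.
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=s) for s in [(4, 8), (6, 8), (6, 8)])
print(scaled_dot_product_attention(Q, K, V).shape)    # (4, 8)
```

The division by sqrt(d_k) keeps the dot products from growing with the key dimension, which would otherwise push the softmax into regions with vanishing gradients.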
My work in computational aesthetics combines psychology, vision science, and machine learning to investigate aesthetic preferences in images. This interdisciplinary field focuses on automatically assessing image aesthetics and identifying the factors that shape aesthetic judgments. I developed a multi-task convolutional neural network to predict image aesthetics and applied a range of machine learning models, complemented by explainable AI (XAI) techniques. I introduced the first application of SHAP in this domain, improving interpretability and revealing how individual attributes contribute to aesthetic predictions (a toy version of this workflow is sketched after the publications below).
D. Soydaner, J. Wagemans (2024). Unveiling the Factors of Aesthetic Preferences with Explainable AI, British Journal of Psychology, 1-35.
D. Soydaner, J. Wagemans (2024). Multi-Task Convolutional Neural Network for Image Aesthetic Assessment, IEEE Access, 12, 4716-4729.
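A rough sketch of the SHAP workflow follows. The data are synthetic and the attribute names are placeholders, not the attributes or models of the published studies; the point is only how SHAP attributes a model's predicted rating to its input features.

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

# Toy stand-in data: image attributes -> aesthetic rating.
rng = np.random.default_rng(1)
attrs = ["brightness", "contrast", "colorfulness", "symmetry"]  # placeholder names
X = rng.uniform(0, 1, size=(500, len(attrs)))
y = 2.0 * X[:, 3] + 0.5 * X[:, 2] - 0.3 * X[:, 0] + rng.normal(0, 0.1, 500)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes SHAP values for tree ensembles, attributing
# each individual prediction to the contributions of the input attributes.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:5])
for name, contrib in zip(attrs, shap_values[0]):
    print(f"{name:>12}: {contrib:+.3f}")
```

For the first image, the printed values sum (together with the expected model output) to its predicted rating, making explicit which attributes pushed the prediction up or down.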
In collaboration with physicists, I apply machine learning models, including deep neural networks, to predict the binding energies of atomic nuclei. We also integrate explainable AI (XAI) techniques to gain deeper insight into model predictions. Our recent publication was selected as an Editors' Suggestion in Physical Review C (2024). This line of work shows how machine learning can complement traditional models of nuclear structure; an illustrative sketch follows the publications below.
E. Yüksel, D. Soydaner, H. Bahtiyar (2024). Nuclear Mass Predictions Using Machine Learning Models, Physical Review C, 109(6), 064322.
H. Bahtiyar, D. Soydaner, E. Yüksel (2022). Application of Multilayer Perceptron with Data Augmentation in Nuclear Physics, Applied Soft Computing, 128, 109470.
H. Bahtiyar, D. Soydaner, E. Yüksel (2021). Nuclear Binding Energy Predictions Using Neural Networks: Application of the Multilayer Perceptron, International Journal of Modern Physics E, 30(03), 2150017.
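To illustrate the basic setup, here is a minimal sketch of a multilayer perceptron regressing a binding-energy-like quantity on proton and neutron numbers (Z, N). The published studies train on evaluated mass tables; here random numbers stand in, so the inputs, target, and architecture are all placeholders.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Synthetic stand-in data: (Z, N) pairs with a placeholder target in MeV.
rng = np.random.default_rng(42)
Z = rng.integers(20, 100, size=300)
N = rng.integers(20, 150, size=300)
X = np.column_stack([Z, N])
y = 8.0 + rng.normal(0, 0.5, size=300)   # placeholder binding energy per nucleon

# Standardizing inputs before an MLP is standard practice for stable training.
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0),
)
model.fit(X, y)
print(model.predict([[50, 70]]))  # prediction for a hypothetical nucleus
```

On real mass-table data, the same pipeline shape (features derived from Z and N, a scaler, and an MLP) is a common starting point before moving to deeper networks, data augmentation, and XAI analyses of the trained model.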