Haluk Ziya Zorluoğlu Successfully Presented His Master's Thesis

Friday, 31 October 2025

MS Thesis Presentation

Haluk Ziya Zorluoğlu

Intelligent Systems Lab. 

Electrical & Electronics Engineering

Date: 31.10.2025, Time: 14:00

Location: Fourier Classroom – Kare Blok, North Campus

UNSUPERVISED, CONTINUAL and ACTIVE OBJECTS' LEARNING by MOBILE ROBOTS 

Abstract:

This thesis is concerned with autonomous object cognition in mobile robots. A key issue in open-world environments is that new objects emerge frequently, and labeled data are therefore unavailable. These conditions call for unsupervised and continual learning methods that enable autonomous object cognition. Moreover, the inherent mobility of robots enables active learning, allowing them to explore and gather diverse sensory information to improve object understanding. This thesis proposes a method that integrates unsupervised, continual, and active learning to address these challenges, enabling mobile robots to autonomously acquire, refine, and maintain object representations in open-world environments. We assume that the robot is able to segment each incoming RGB-D frame, track the resulting segments across frames, and accumulate the segments associated with each tracked object. We also assume that the robot is able to navigate as it finds necessary. First, object representation is considered, and an extensive evaluation of existing models is conducted to identify those achieving the best performance. Second, existing one-class classifiers are evaluated with respect to learning and recognition performance. Third, unsupervised and incremental object learning is addressed via an approach in which an object memory is developed. Finally, active object learning is considered by having the robot be selective about the data it collects in order to learn new objects. All the proposed methods are tested extensively both in the Gazebo simulation environment and in real-world scenarios with a mobile robot. This work contributes to the advancement of open-world robotic perception by enabling adaptive, label-free object understanding in new environments.
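To give a flavor of the object-memory idea described in the abstract, the following is a minimal, purely illustrative sketch of unsupervised incremental object learning. It is not the thesis's actual method: the `ObjectMemory` class, the cosine-similarity matching, and the `threshold` parameter are all assumptions made here for illustration. Each known object is summarized by a running-mean feature prototype; an observed segment whose best similarity falls below the threshold is enrolled as a new object, so learning proceeds without labels.

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

class ObjectMemory:
    """Illustrative unsupervised, incremental object memory.

    Each stored object is a running-mean prototype of the features
    assigned to it; features too dissimilar to every prototype are
    enrolled as new objects (a stand-in for a one-class decision).
    """

    def __init__(self, threshold=0.9):
        self.threshold = threshold  # assumed similarity cutoff
        self.prototypes = []        # mean feature vector per object
        self.counts = []            # number of observations per object

    def observe(self, feature):
        """Return the object id for this feature, creating one if novel."""
        if self.prototypes:
            sims = [cosine(feature, p) for p in self.prototypes]
            best = max(range(len(sims)), key=sims.__getitem__)
            if sims[best] >= self.threshold:
                # Known object: update its running-mean prototype.
                n = self.counts[best]
                self.prototypes[best] = [
                    (n * p + f) / (n + 1)
                    for p, f in zip(self.prototypes[best], feature)
                ]
                self.counts[best] = n + 1
                return best
        # Novel object: enroll a new prototype.
        self.prototypes.append(list(feature))
        self.counts.append(1)
        return len(self.prototypes) - 1
```

In use, similar features map to the same object id while a dissimilar one creates a new entry, e.g. `mem.observe([1.0, 0.0])` and `mem.observe([0.99, 0.05])` return the same id, while `mem.observe([0.0, 1.0])` returns a new one. The thesis instead evaluates dedicated one-class classifiers and learned object representations for these roles.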