Novel Distributional Reinforcement and Ensemble Learning Algorithms.

dc.centro: Escuela de Ingenierías Industriales
dc.contributor.advisor: Hendrix, Eligius María Theodorus
dc.contributor.advisor: Nowak, Ivo
dc.contributor.author: Aziz, Vanya
dc.date.accessioned: 2025-07-10T11:15:01Z
dc.date.available: 2025-07-10T11:15:01Z
dc.date.created: 2025
dc.date.issued: 2025
dc.date.submitted: 2025-06-11
dc.departamento: Ingeniería Mecánica, Térmica y de Fluidos
dc.description.abstract: This dissertation focuses on Deep Reinforcement Learning (DRL), a neural-network-based approach for solving Markov Decision Processes in high-dimensional spaces with unknown transition dynamics. The main contribution of this thesis is the development of a novel state-of-the-art distributional reinforcement learning algorithm within the maximum-entropy Actor-Critic framework. This algorithm, termed "Cramér-based Distributional Soft Actor-Critic" (C-DSAC), demonstrates superior performance to other RL algorithms, especially in environments with high-dimensional spaces and complex dynamics. Its performance is shown to be partly rooted in a phenomenon arising in Cramér-metric-based Distributional Reinforcement Learning, referred to as confidence-driven model updates: the value function approximator is updated more conservatively when confidence in its estimates is low. Theoretical justifications for the algorithm are provided, demonstrating its convergence in the policy evaluation setting and, under widely accepted mild assumptions, in the control setting as well.

Beyond foundational algorithmic research, this thesis contributes to the practical application of RL in robotics. Given the crucial role of multi-joint robotic systems in modern production technology, an RL meta-algorithm called "Reinforcement Learning - Inverse Kinematics" (RL-IK) is devised. This approach enhances the applicability of reinforcement learning to robotic control tasks by significantly accelerating convergence to near-optimal policies compared to standard RL methods. An essential prerequisite for real-world RL applications in control systems is machine perception for state identification. To address challenges in this field, the thesis explores novel Supervised Learning (SL) approaches, validated on image classification tasks, with a focus on ensemble learning strategies.
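As context for the "Cramér-metric-based" formulation named in the abstract, the following is a minimal, hypothetical Python sketch of the Cramér (L2) distance between two categorical return distributions on a common, equally spaced support. It illustrates the metric itself, not the thesis's C-DSAC implementation; the function name, atom grid, and toy distributions are illustrative assumptions.

import numpy as np

def cramer_distance(p, q, atoms):
    # Cramér (L2) distance between two categorical distributions sharing an
    # equally spaced support `atoms`; illustrative helper, not from the thesis.
    delta_z = atoms[1] - atoms[0]            # spacing between support atoms
    cdf_gap = np.cumsum(p) - np.cumsum(q)    # pointwise difference of the two CDFs
    return np.sqrt(delta_z * np.sum(cdf_gap ** 2))

# Toy example: two return distributions over five atoms in [-2, 2]
atoms = np.linspace(-2.0, 2.0, 5)
p = np.array([0.10, 0.20, 0.40, 0.20, 0.10])
q = np.array([0.05, 0.15, 0.30, 0.30, 0.20])
print(cramer_distance(p, q, atoms))

In the distributional RL literature the Cramér distance is often preferred over the Wasserstein metric because it admits unbiased sample gradients; whether that is the motivation behind C-DSAC is not stated in this record.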
dc.identifier.uri: https://hdl.handle.net/10630/39287
dc.language.iso: eng
dc.publisher: UMA Editorial
dc.rights: Attribution-NonCommercial-NoDerivatives 4.0 International
dc.rights.accessRights: open access
dc.rights.uri: http://creativecommons.org/licenses/by-nc-nd/4.0/
dc.subject: Robotics - Doctoral theses
dc.subject: Linear programming
dc.subject: Machine learning (Artificial intelligence)
dc.subject: Neural networks (Computer science)
dc.subject.other: Distributional Reinforcement Learning
dc.subject.other: Soft Actor-Critic
dc.subject.other: Robotics
dc.subject.other: Linear Programming
dc.subject.other: Ensemble
dc.title: Novel Distributional Reinforcement and Ensemble Learning Algorithms.
dc.type: doctoral thesis
dspace.entity.type: Publication
relation.isAdvisorOfPublication: 0c3992b1-f2f1-4f53-a186-1dbf6d6cef5a

Files

Original bundle:
TD_AZIZ_Vanya.pdf (3.47 MB, Adobe Portable Document Format)