Multi-Agent Reinforcement Learning Approaches for Distributed Job-Shop Scheduling Problems

Please use this identifier to cite or link to this item:
https://osnadocs.ub.uni-osnabrueck.de/handle/urn:nbn:de:gbv:700-2009081216
Full metadata record
dc.contributor.advisor: Prof. Dr. Martin Riedmiller
dc.creator: Gabel, Thomas
dc.date.accessioned: 2010-01-30T14:54:39Z
dc.date.available: 2010-01-30T14:54:39Z
dc.date.issued: 2009-08-10T12:21:15Z
dc.date.submitted: 2009-08-10T12:21:15Z
dc.identifier.uri: https://osnadocs.ub.uni-osnabrueck.de/handle/urn:nbn:de:gbv:700-2009081216
dc.description.abstract: Decentralized decision-making is an active research topic in artificial intelligence. In a distributed system, a number of individually acting agents coexist. If they strive to accomplish a common goal, establishing coordinated cooperation between the agents is of utmost importance. With this in mind, our focus is on multi-agent reinforcement learning (RL) methods, which allow for automatically acquiring cooperative policies based solely on a specification of the desired joint behavior of the whole system.

The decentralization of the control and observation of the system among independent agents, however, has a significant impact on problem complexity. We therefore address the intricacy of learning and acting in multi-agent systems by two complementary approaches. First, we identify a subclass of general decentralized decision-making problems that features regularities in the way the agents interact with one another. We show that the complexity of optimally solving an instance from this subclass is provably lower than that of solving a general one. Although restricting attention to such subclasses places the problem in a lower complexity class, the computational complexity may still be so high that solving it optimally is infeasible. Hence, our second goal is to develop techniques capable of quickly obtaining approximate solutions in the vicinity of the optimum. To this end, we develop and utilize various model-free reinforcement learning approaches.

Many real-world applications are well suited to being formulated in terms of spatially or functionally distributed entities. Job-shop scheduling represents one such application. We interpret job-shop scheduling problems as distributed sequential decision-making problems, employ the multi-agent RL algorithms we propose for solving such problems, and evaluate the performance of our learning approaches on a variety of established scheduling benchmark problems. (A toy sketch of this agent-per-machine interpretation follows the metadata record below.)
dc.language.iso: eng
dc.subject: reinforcement learning
dc.subject: multi-agent systems
dc.subject: decentralized control
dc.subject: job-shop scheduling
dc.subject: neural networks
dc.subject: DEC-MDP
dc.subject: multi-agent learning
dc.subject.ddc: 004 - Computer science
dc.title: Multi-Agent Reinforcement Learning Approaches for Distributed Job-Shop Scheduling Problems
dc.type: Dissertation or habilitation [doctoralThesis]
thesis.location: Osnabrück
thesis.institution: Universität
thesis.type: Dissertation [thesis.doctoral]
thesis.date: 2009-06-26T12:00:00Z
elib.elibid: 925
elib.marc.edt: fangmeier
elib.dct.accessRights: a
elib.dct.created: 2009-08-03T15:51:38Z
elib.dct.modified: 2009-08-10T12:21:15Z
dc.contributor.referee: Prof. Dr. Hector Munoz-Avila
dc.subject.dnb: 28 - Computer science, data processing
dc.subject.ccs: I.2.11 - Distributed Artificial Intelligence
vCard.ORG: FB6
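
The following is a minimal, self-contained Python sketch of the agent-per-machine interpretation mentioned in the abstract: each machine acts as an independent tabular Q-learning agent that chooses which waiting operation to dispatch next, and all agents are rewarded with the negative makespan of the completed schedule. The toy instance, state encoding, and Monte-Carlo update are assumptions made for this illustration; they are not the algorithms developed in the thesis.

    # Illustrative sketch only: machines as independent tabular Q-learning
    # agents for a toy job-shop instance. Instance data, state encoding, and
    # reward shaping are assumptions for this example, not the thesis's method.
    import random
    from collections import defaultdict

    # Each job is an ordered list of (machine, duration) operations.
    JOBS = [
        [(0, 3), (1, 2)],  # job 0: machine 0 for 3 steps, then machine 1 for 2
        [(1, 4), (0, 1)],  # job 1
        [(0, 2), (1, 3)],  # job 2
    ]
    N_MACHINES = 2

    def run_episode(q, epsilon=0.1):
        """Simulate one schedule; return its makespan and each agent's trace."""
        next_op = [0] * len(JOBS)      # index of each job's next operation
        busy_until = [0] * N_MACHINES  # time at which each machine becomes free
        job_ready = [0] * len(JOBS)    # time at which a job's next op may start
        traces = [[] for _ in range(N_MACHINES)]
        t = 0
        while any(op < len(job) for op, job in zip(next_op, JOBS)):
            for m in range(N_MACHINES):
                if busy_until[m] > t:
                    continue  # machine still processing
                waiting = [j for j in range(len(JOBS))
                           if next_op[j] < len(JOBS[j])
                           and JOBS[j][next_op[j]][0] == m
                           and job_ready[j] <= t]
                if not waiting:
                    continue
                state = frozenset(waiting)  # purely local view of the queue
                if random.random() < epsilon:
                    job = random.choice(waiting)  # explore
                else:
                    job = max(waiting, key=lambda j: q[m][(state, j)])  # exploit
                traces[m].append((state, job))
                duration = JOBS[job][next_op[job]][1]
                busy_until[m] = t + duration
                job_ready[job] = t + duration
                next_op[job] += 1
            t += 1
        return max(busy_until), traces

    def train(episodes=2000, alpha=0.1):
        """Independent learners, joint reward: the negative episode makespan."""
        q = [defaultdict(float) for _ in range(N_MACHINES)]
        best = float("inf")
        for _ in range(episodes):
            makespan, traces = run_episode(q)
            best = min(best, makespan)
            for m, trace in enumerate(traces):
                for state, action in trace:
                    # Monte-Carlo update toward the shared episode return.
                    q[m][(state, action)] += alpha * (-makespan - q[m][(state, action)])
        return best

    if __name__ == "__main__":
        print("best makespan found:", train())

Even this small sketch exhibits the coordination problem the thesis studies: each agent observes only its own queue, yet the quality of the joint schedule, and hence every agent's reward, depends on all dispatch decisions together.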
Appears in Collections: FB06 - E-Dissertationen

Files in This Item:
File                   Description           Size     Format
E-Diss925_thesis.pdf   Presentation format   2.76 MB  Adobe PDF


Items in the osnaDocs repository are protected by copyright, with all rights reserved, unless otherwise indicated.