Multi-Agent Learning of Strategies in Abstract Argumentation Mechanisms

dc.Location: 2009 T 58.6 N46
dc.Supervisor: Dr Iyad Rahwan & Dr Sherief Abdallah
dc.contributor.author: Nemer, Rama
dc.date.accessioned: 2013-03-07T16:34:23Z
dc.date.available: 2013-03-07T16:34:23Z
dc.date.issued: 2009-01
dc.description.abstract: Argumentation has been studied extensively in the field of Artificial Intelligence; however, we know very little about its strategic aspects. This thesis aims to contribute to this general problem by examining the behavior of adaptive, self-interested agents in a multi-agent environment over repeated encounters, using game-theoretic techniques. I extended an existing simulation tool to implement argumentation games and used it to run repeated-game experiments combining characteristic argumentation games, adapted from the literature, with types of adaptive agents under different conditions. The theme used was that of a court setting in which a judge listens to arguments from different agents. Once all arguments have been presented, the judge must make a ruling, i.e. decide which arguments are valid and hence which agents win by presenting them. Agents are assumed to be self-interested and adaptive, so they may have conflicting preferences about which arguments they want the judge to accept, and they can learn different strategies in order to achieve goals that reflect those preferences. The results indicate that the agents use a multitude of different strategies to influence the judge and maximize their payoff, thereby revealing different combinations of arguments with different frequencies, depending on the Nash equilibria of the game, the dominance of the pure strategies, and the Pareto efficiency of the pure strategies in a game. These properties depend on aspects inherent in the argumentation game. While truth revelation was a dominant strategy in some games, interestingly, in other cases the agents were able to gain a payoff higher than that of all the individual Nash equilibria by playing strategies involving combinations of the Nash equilibria. As for the effect of the learning algorithm on the choice of strategy, the results confirm that WPL is biased toward mixed strategies while GIGA converges faster to pure-strategy Nash equilibria. The importance of this kind of work lies in the fact that it combines two aspects of multi-agent systems that have been quite separate to date: argumentation protocols and multi-agent learning in games.
dc.identifier.other: 20050099
dc.identifier.uri: http://bspace.buid.ac.ae/handle/1234/57
dc.language.iso: en
dc.publisher: The British University in Dubai (BUiD)
dc.subject: argumentation
dc.subject: nash equilibria
dc.subject: pareto efficiency
dc.title: Multi-Agent Learning of Strategies in Abstract Argumentation Mechanisms
dc.type: Dissertation
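
Illustrative note (not part of the thesis record): the kind of setup the abstract describes can be sketched as a toy two-agent argument-revelation game. Each agent decides whether to reveal its argument, a judge accepts every revealed argument that is not defeated by another revealed argument, and an agent is rewarded when its argument is accepted. The argument names, the attack relation, and the simple stochastic policy-gradient learner below are all assumptions standing in for the thesis's richer games and its WPL/GIGA learners; this is a minimal sketch, not the thesis's actual code.

# Toy argument-revelation game with naive self-interested learners.
import random

ATTACKS = {("a2", "a1")}  # assumed attack relation: agent 2's argument defeats agent 1's

def judge(revealed):
    # Accept revealed arguments that have no revealed attacker
    # (adequate for this acyclic toy framework).
    return {arg for arg in revealed
            if not any((att, arg) in ATTACKS for att in revealed)}

def payoffs(reveal1, reveal2):
    # Each agent is paid when its own argument survives the judge's ruling.
    revealed = set()
    if reveal1:
        revealed.add("a1")
    if reveal2:
        revealed.add("a2")
    accepted = judge(revealed)
    return ("a1" in accepted), ("a2" in accepted)

def train(episodes=5000, lr=0.01):
    p1 = p2 = 0.5  # each agent's probability of revealing its argument
    for _ in range(episodes):
        r1, r2 = random.random() < p1, random.random() < p2
        u1, u2 = payoffs(r1, r2)
        # Nudge the reveal probability toward whichever choice just paid off.
        p1 += lr * (1 if (r1 and u1) else (-1 if (r1 and not u1) else 0))
        p2 += lr * (1 if (r2 and u2) else (-1 if (r2 and not u2) else 0))
        p1 = min(max(p1, 0.01), 0.99)
        p2 = min(max(p2, 0.01), 0.99)
    return p1, p2

if __name__ == "__main__":
    print(train())  # agent 2 should end up almost always revealing a2

In the games the thesis studies, analogous updates can settle on pure-strategy Nash equilibria or on mixed strategies depending on the equilibrium structure, dominance, and Pareto efficiency of the game, which is what the reported experiments compare across WPL and GIGA.
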
Files
Original bundle
  Name: 20050099.pdf
  Size: 2.03 MB
  Format: Adobe Portable Document Format
  Description: Full Text
License bundle
  Name: license.txt
  Size: 1.71 KB
  Description: Item-specific license agreed upon to submission