Multi-Agent Learning of Strategies in Abstract Argumentation Mechanisms

Date
2009-01
Publisher
The British University in Dubai (BUiD)
Abstract
Argumentation has been studied extensively in the field of Artificial Intelligence; however, we know very little about its strategic aspects. This thesis aims to contribute to this general problem by examining the behavior of adaptive, self-interested agents in a multi-agent environment over repeated encounters, using game-theoretic techniques. I extended an existing simulation tool to implement argumentation games and used it to run repeated-game experiments with combinations of characteristic argumentation games, adapted from the literature, and types of adaptive agents under different conditions. The theme used was that of a court setting in which a judge listens to arguments from different agents. Once all arguments have been presented, the judge must make a ruling, i.e., decide which arguments are valid and hence which agents win by presenting them. Agents are assumed to be self-interested and adaptive, so they may have conflicting preferences about which arguments they want the judge to accept, and they can learn different strategies in order to achieve goals that reflect those preferences. The results indicate that the agents use a multitude of different strategies to influence the judge and maximize their payoff, thereby revealing different combinations of arguments with different frequencies, depending on the Nash equilibria of the game, the dominance of the pure strategies, and the Pareto efficiency of the pure strategies in a game. These are dependent on aspects inherent in the argumentation game. While truth revelation was a dominant strategy in some games, interestingly, in other cases the agents were able to gain a payoff higher than that of all the individual Nash equilibria by playing strategies involving combinations of the Nash equilibria. As for the effect of the learning algorithm on the choice of strategy, the results confirm that WPL is biased toward mixed strategies while GIGA converges faster to pure-strategy Nash equilibria. The importance of this kind of work lies in the fact that it combines two aspects of multi-agent systems that have been quite separate to date: argumentation protocols and multi-agent learning in games.
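The abstract refers to GIGA (Generalized Infinitesimal Gradient Ascent), which updates a mixed strategy by gradient ascent on expected payoff with a diminishing step size, followed by projection back onto the probability simplex. The sketch below is not taken from the thesis; it is a minimal illustration of GIGA-style self-play in a hypothetical two-action game whose payoff matrices, action labels, and step-size schedule are all assumed. In this particular game, revealing the second argument is a dominant strategy for both agents, so both learners converge to the pure-strategy Nash equilibrium, matching the "fast convergence to pure strategies" behavior attributed to GIGA.

    import numpy as np

    def project_simplex(v):
        # Euclidean projection of a vector onto the probability simplex.
        u = np.sort(v)[::-1]
        css = np.cumsum(u)
        rho = np.nonzero(u + (1.0 - css) / np.arange(1, len(v) + 1) > 0)[0][-1]
        lam = (1.0 - css[rho]) / (rho + 1)
        return np.maximum(v + lam, 0.0)

    # Hypothetical 2x2 "argumentation game" (payoff values are assumed):
    # action 0 = withhold an argument, action 1 = reveal it to the judge.
    A = np.array([[3.0, 0.0], [5.0, 1.0]])   # row agent's payoffs
    B = np.array([[3.0, 5.0], [0.0, 1.0]])   # column agent's payoffs

    x = np.array([0.5, 0.5])   # row agent's mixed strategy
    y = np.array([0.5, 0.5])   # column agent's mixed strategy

    for t in range(1, 5001):
        eta = 1.0 / np.sqrt(t)        # diminishing step size, as in GIGA's analysis
        gx = A @ y                    # gradient of row agent's expected payoff w.r.t. x
        gy = B.T @ x                  # gradient of column agent's expected payoff w.r.t. y
        x = project_simplex(x + eta * gx)
        y = project_simplex(y + eta * gy)

    print(x, y)   # both strategies converge toward the pure equilibrium (reveal, reveal)

WPL (Weighted Policy Learner), by contrast, scales each gradient component by the current probability of the corresponding action (or by one minus it, depending on the gradient's sign), which slows movement near the simplex boundary and is what biases it toward mixed strategies.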
Keywords
argumentation, Nash equilibria, Pareto efficiency