Social Norms for Self-Policing Multi-agent Systems and Virtual Societies

Details

Author:
Daniel Villatoro Segura
Publisher:
CSIC
ISBN:
9788400096014
Publication Date:
2013
Format:
PDF
Adobe DRM
Printing allowed
Copy/Paste not allowed
Number of devices allowed: unlimited
€4,84
Abstract:

Social norms are one of the mechanisms by which decentralized societies achieve coordination amongst individuals. Such norms are conflict-resolution strategies that emerge from the interactions of the population rather than from a centralized entity dictating agent protocols. One of the most important characteristics of social norms is that they are imposed by the members of the society, who are themselves responsible for fulfilling and defending them. By allowing agents to manage (impose, abide by and defend) social norms, societies achieve a higher degree of freedom, as no authority is needed to supervise every interaction amongst agents. In this thesis we approach social norms as a malleable concept, understanding norms as dynamic and dependent on environmental situations and on agents' goals. By equipping agents with the mechanisms needed to handle this concept of norm, we have obtained an agent architecture able to self-police its behavior according to the social and environmental circumstances in which it is located.

First of all, we have grounded the difference between conventions and essential norms from a game-theoretical perspective. This distinction is essential because the two favor coordination in games with different characteristics. With respect to conventions, we have analyzed the search space of convention emergence when approached through social learning. This exploration led us to discover the existence of Self-Reinforcing Structures that delay the emergence of global conventions. In order to dissolve these structures and accelerate the emergence of conventions, we have designed socially inspired mechanisms (rewiring and observation) that agents can use by accessing only local information. The use of these social instruments represents a robust solution to the problem of convention emergence, especially in complex networks (such as scale-free networks).

For essential norms, on the other hand, we have focused on the Emergence of Cooperation problem, as it contains the characteristics of any essential-norm scenario. In this type of game there is a conflict between the self-interest of the individual and the interest of the group, with the social norm fixing a cooperative strategy. In this thesis we study different decentralized mechanisms by which cooperation emerges and is maintained. An initial set of experiments on Distributed Punishment led us to discover that certain types of punishment have a stronger effect on decision making than a pure cost-benefit calculation. Based on this result, we hypothesize that punishment (a utility detriment) has a weaker effect on the cooperation rates of the population than sanction (a utility detriment combined with normative elicitation). This hypothesis has been integrated into the developed agent architecture (EMIL-I-A). We validate the hypothesis by performing experiments with human subjects and observing that the architecture behaves consistently with human subjects in similar scenarios (representing the emergence of cooperation). We have exploited this architecture, proving its efficiency in different in-silico scenarios and varying a number of important parameters that are unfeasible to reproduce in laboratory experiments with human subjects (because of economic and time constraints).
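As a concrete illustration of the game-theoretical distinction drawn above, the sketch below contrasts a pure coordination game (the convention case) with a Prisoner's-Dilemma-like game (the essential-norm case). The payoff values and the helper function are illustrative assumptions, not material from the thesis; the point is only that either convention is self-enforcing once adopted, whereas in the dilemma defection dominates individually even though mutual cooperation is better for the group, which is why an essential norm prescribing cooperation needs punishment or sanction to be sustained.

```python
# Minimal sketch (illustrative payoffs, not from the thesis) contrasting a
# convention scenario with an essential-norm scenario.
# Row player's payoff is listed first, column player's second.

# Coordination game: two competing conventions, "left" and "right".
# Agreeing on either action is a stable outcome, so any convention works.
coordination = {
    ("left", "left"):   (1, 1),
    ("left", "right"):  (0, 0),
    ("right", "left"):  (0, 0),
    ("right", "right"): (1, 1),
}

# Prisoner's-Dilemma-like game: defection pays individually, but mutual
# cooperation is better for the group, so a norm prescribing cooperation
# conflicts with self-interest.
dilemma = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def best_response(game, opponent_action):
    """Row player's best reply to a fixed column action."""
    actions = {row for (row, _) in game}
    return max(actions, key=lambda a: game[(a, opponent_action)][0])

# In the coordination game the best reply is simply to match the opponent...
print(best_response(coordination, "left"), best_response(coordination, "right"))
# ...whereas in the dilemma "defect" is the best reply to everything,
# which is exactly the conflict an essential norm has to resolve.
print(best_response(dilemma, "cooperate"), best_response(dilemma, "defect"))
```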
Finally, we have implemented an Internalization module, which allows agents to reduce their computation costs by linking norm compliance with their own goals. Internalization is the process by which an agent abides by the norms without taking into consideration the punishments or sanctions associated with defection. Using agent-based simulation, we show how Internalization has been implemented in the EMIL-I-A architecture, obtaining efficient performance from our agents without sacrificing their adaptation skills.
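The following toy model is a hypothetical sketch of the distinction between punishment, sanction, and internalization described above; it is not the EMIL-I-A implementation, and the class name, the salience update, the internalization threshold, and all numeric values are assumptions chosen only for illustration. It shows how a sanction can change future decision making (by raising norm salience) while a pure punishment only imposes a cost, and how an internalized norm is followed without computing expected penalties at all.

```python
# Illustrative sketch (not EMIL-I-A): punishment vs. sanction vs. internalization.
from dataclasses import dataclass

@dataclass
class NormativeAgent:
    norm_salience: float = 0.2   # perceived importance of the cooperation norm (assumed)
    internalized: bool = False   # once True, compliance skips the cost calculus

    def receive_punishment(self, utility_loss: float) -> None:
        # pure punishment: a material cost only, no normative content
        self.last_loss = utility_loss

    def receive_sanction(self, utility_loss: float) -> None:
        # sanction: material cost plus an explicit normative message,
        # which (in this sketch) increases norm salience
        self.last_loss = utility_loss
        self.norm_salience = min(1.0, self.norm_salience + 0.2)
        if self.norm_salience > 0.9:
            self.internalized = True   # toy internalization trigger (assumed)

    def decides_to_cooperate(self, gain_from_defection: float,
                             expected_penalty: float) -> bool:
        if self.internalized:
            # internalized norm: comply without weighing penalties,
            # saving the deliberation cost
            return True
        # otherwise weigh material incentives against normative drive
        normative_drive = self.norm_salience * gain_from_defection
        return expected_penalty + normative_drive >= gain_from_defection

agent = NormativeAgent()
print(agent.decides_to_cooperate(gain_from_defection=5.0, expected_penalty=2.0))  # False
for _ in range(4):
    agent.receive_sanction(utility_loss=1.0)
print(agent.internalized, agent.decides_to_cooperate(5.0, 0.0))  # True True
```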