WG2 Workshop in Paris

Working Group on Norms

PROGRAMME
14 December 2010
16:30 - 18:00

16:30-16:45 Samhar Mahmood, “Metanorms, Learning and Topologies in Norm Emergence”
16:45-17:00 Stephen Cranefield, “Towards virtual worlds as environments for norm emergence in mixed human/agent populations”
17:00-17:15 Natalia Criado, “Towards a Normative BDI Architecture for Norm Compliance”
17:15-17:30 Giulia Andrighetto and Daniel Villatoro, “Simulating the relative effects of punishment and sanction in the achievement of cooperation”
17:30-18:00 Discussion

- Metanorms, Learning and Topologies in Norm Emergence
Samhar Mahmood (Department of Informatics, King’s College, London)

Abstract: Norms are a valuable mechanism for establishing coherent cooperative behaviour in decentralised systems in which no central authority exists. One of the most influential formulations of norm emergence was proposed by Axelrod, who defined a model of norms and metanorms that enables norm establishment in populations of self-interested individuals. However, our analysis of the model has shown that norms collapse in the long term. In addition, Axelrod's assumption of a fully connected network of agents is far removed from the structure of real computational systems. In response, we have developed alternative techniques that use reinforcement learning over topological structures to improve the chances of norm emergence in real systems.
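
To make the setting concrete, the following minimal Python sketch (not the authors' code) replays an Axelrod-style metanorms round on a ring topology. The payoff constants follow Axelrod's original game, while the ring neighbourhood and the uniform observation rule are illustrative assumptions standing in for the topologies and learning studied in the talk.

    # Illustrative sketch only: an Axelrod-style metanorms round on a ring.
    # Payoff constants follow Axelrod (1986); the ring neighbourhood and the
    # observation rule are assumptions, not the authors' model.
    import random

    T, H = 3.0, -1.0    # temptation to defect, harm to each observer
    P, E = -9.0, -2.0   # punishment received, enforcement cost paid by punisher
    N, ROUNDS = 20, 100

    class Agent:
        def __init__(self):
            self.boldness = random.random()      # propensity to defect
            self.vengefulness = random.random()  # propensity to punish
            self.score = 0.0

    agents = [Agent() for _ in range(N)]
    neighbours = {i: [(i - 1) % N, (i + 1) % N] for i in range(N)}

    for _ in range(ROUNDS):
        for i, a in enumerate(agents):
            if a.boldness > random.random():     # defect when boldness beats the odds of being seen
                a.score += T
                for j in neighbours[i]:
                    b = agents[j]
                    b.score += H
                    if random.random() < b.vengefulness:   # norm: punish the defector
                        a.score += P
                        b.score += E
                    else:                                   # metanorm: punish the non-punisher
                        for k in neighbours[j]:
                            c = agents[k]
                            if k != i and random.random() < c.vengefulness:
                                b.score += P
                                c.score += E

Axelrod closed the loop by evolving boldness and vengefulness from these scores; the talk instead considers reinforcement-learning updates over the same quantities, a step deliberately omitted from this sketch.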

- Towards a Normative BDI Architecture for Norm Compliance
Natalia Criado (DSIC, Universidad Politecnica de Valencia, Spain)

Abstract: Norms have been employed as a coordination mechanism for open MAS. However, to become effective, norms must be recognized as norms by the agents, and agents must be able to accept norms while maintaining their autonomy. Traditional BDI agent architectures, nevertheless, represent only beliefs, desires and intentions. In this presentation, I will describe an extension of the BDI proposal that allows agents to make pragmatic, autonomous decisions in the presence of norms. Specifically, the multi-context BDI agent architecture has been extended with a recognition context and a normative context, allowing agents to acquire norms from their environment and to consider them in their decision-making processes. In particular, coherence theory is employed as the criterion for determining norm compliance.
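
The multi-context architecture itself is not reproduced here, but a hypothetical Python sketch can illustrate the basic idea of using a coherence-style score to arbitrate between desires and recognised norms; every name and the scoring rule below are invented for illustration and are not the talk's actual interfaces.

    # Hypothetical sketch: arbitrating norm compliance with a coherence-style
    # score. Class names, fields and the scoring rule are illustrative
    # assumptions, not the interfaces of the architecture described in the talk.
    from dataclasses import dataclass, field

    @dataclass
    class Norm:
        forbidden_action: str
        penalty: float                 # expected cost of violating the norm

    @dataclass
    class NormativeBDIAgent:
        desires: dict = field(default_factory=dict)  # candidate action -> utility
        norms: list = field(default_factory=list)    # norms recognised so far

        def recognise(self, norm: Norm) -> None:
            """Recognition context: adopt a norm observed in the environment."""
            self.norms.append(norm)

        def coherence(self, action: str) -> float:
            """Normative context: utility minus expected normative cost."""
            gain = self.desires.get(action, 0.0)
            cost = sum(n.penalty for n in self.norms if n.forbidden_action == action)
            return gain - cost

        def choose(self) -> str:
            """Intend the candidate action with the highest coherence."""
            return max(self.desires, key=self.coherence)

    agent = NormativeBDIAgent(desires={"speed": 5.0, "drive_slowly": 3.0})
    agent.recognise(Norm(forbidden_action="speed", penalty=4.0))
    print(agent.choose())   # "drive_slowly": the acquired norm outweighs the gain from speeding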

- Simulating the relative effects of punishment and sanction in the achievement of cooperation
Giulia Andrighetto and Daniel Villatoro (EUI and ISTC, Italy; IIIA - Artificial Intelligence Research Institute CSIC, Spain)

Abstract: As specified by Axelrod in his seminal work “An Evolutionary Approach to Norms”, punishment is a key mechanism by which a self-regulated society achieves the necessary social control and imposes certain norms. In this paper, we distinguish between punishment and sanction, focusing on the specific ways in which these two mechanisms favor the emergence of cooperation and the spreading of social norms within a social system. To this end, we have developed a normative agent able to recognize defectors and to impose on them either punishment or sanction, and have implemented a proof-of-concept simulation model to test our hypotheses.
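
As a rough illustration of the distinction the talk draws, the sketch below treats punishment as a material cost alone and sanction as the same cost accompanied by an explicit normative signal; the cost values and the salience bookkeeping are assumptions for illustration, not the authors' model.

    # Illustrative sketch of the punishment/sanction distinction. Costs and the
    # "norm salience" bookkeeping are assumptions, not the authors' model.
    class Defector:
        def __init__(self):
            self.payoff = 0.0
            self.norm_salience = 0.0   # how clearly the agent recognises the norm

        def receive_punishment(self, cost: float) -> None:
            # Punishment: a material cost only; the violated norm stays implicit.
            self.payoff -= cost

        def receive_sanction(self, cost: float, norm: str) -> None:
            # Sanction: the same material cost plus an explicit normative
            # message, which raises the salience of the violated norm.
            self.payoff -= cost
            self.norm_salience += 1.0
            print(f"norm signalled to defector: {norm}")

    class NormativeEnforcer:
        def react(self, defector: Defector, use_sanction: bool) -> None:
            if use_sanction:
                defector.receive_sanction(cost=2.0, norm="cooperate with your group")
            else:
                defector.receive_punishment(cost=2.0)

    a, b = Defector(), Defector()
    NormativeEnforcer().react(a, use_sanction=False)
    NormativeEnforcer().react(b, use_sanction=True)
    # Both defectors pay the same material cost, but only b's norm salience rises,
    # which is the channel through which sanctions are hypothesised to spread norms.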

- Towards virtual worlds as environments for norm emergence in mixed human/agent populations

Stephen Cranefield (University of Otago)

Abstract: Techniques for designing and developing normative multi-agent systems are an active topic of research in the MAS community. However, most research assumes that the norms are provided by human designers, and the study of mechanisms for norm emergence is at an early stage. One difficulty in this area is a lack of common scenarios for experimentation. Another is a bootstrapping problem: in order for agents to learn norms, there must already be norms present in the environment, or agents must be provided with the means to invent new norms and act on them. The researcher is then open to criticism for designing agents to learn norms that have been pre-engineered into the environment.

In this talk I will propose the use of virtual worlds such as Second Life to build environments for studies of norm emergence in multi-agent systems (as opposed to the related area of learning the norms that already exist amongst human societies in those worlds). Building closed simulation environments in which both agent- and human-controlled avatars can interact via simple APIs (for agents) and head-up displays (for humans) would help to establish common scenarios for experimentation and to solve the bootstrapping problem: the humans can inject norms into the environment. It is hoped that subsequent discussion will help to determine the requirements for such a simulation environment in terms of simple but flexible support for agent sensing, acting, signalling, learning, etc., and for population-level processes such as evolution.
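
As a purely hypothetical illustration of what a "simple API" for agent-controlled avatars might look like, the sketch below shows an agent that learns which actions to avoid from normative signals injected by human participants; every name here is invented and does not correspond to any existing Second Life or testbed interface.

    # Hypothetical sketch only: every name below is invented for illustration
    # and does not correspond to an existing Second Life or testbed API.
    from dataclasses import dataclass
    from typing import List, Protocol, Set

    @dataclass
    class Observation:
        nearby_avatars: List[str]   # avatars within sensing range
        signals: List[str]          # normative signals broadcast by others

    class AvatarAPI(Protocol):
        def sense(self) -> Observation: ...
        def act(self, action: str) -> None: ...      # e.g. "wait_in_queue"
        def signal(self, message: str) -> None: ...  # broadcast a normative signal

    class NormLearningAgent:
        """Learns, from observed signals, which actions the population disapproves of."""

        def __init__(self, api: AvatarAPI):
            self.api = api
            self.disapproved: Set[str] = set()

        def step(self) -> None:
            obs = self.api.sense()
            # Treat any "dont:<action>" signal as a norm injected by human
            # participants, addressing the bootstrapping problem raised above.
            for s in obs.signals:
                if s.startswith("dont:"):
                    self.disapproved.add(s[len("dont:"):])
            action = "push_in_queue" if "push_in_queue" not in self.disapproved else "wait_in_queue"
            self.api.act(action)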
