
Final Year Project Report

AF-OpenSim
Niall Deasy

A thesis submitted in part fulfilment of the degree of BA/BSc (Hons) in Computer Science

Supervisor: Dr Rem Collier
Moderator: Dr Mauro Dragone

UCD School of Computer Science and Informatics
College of Engineering, Mathematical and Physical Sciences
University College Dublin
May 5, 2011

0.1 Acknowledgements

I would like to thank everybody who has helped me during this project. In particular I wish to thank my supervisor, Dr Rem Collier, whose expertise in multi-agent systems and Agent Factory was invaluable to the project's success. His genuine interest and support in assisting me throughout this project was greatly appreciated. I would also like to thank Dr Mauro Dragone for assisting me in the early stages of the project, and supporting me in the first few vital steps. Finally I would like to thank Dr Eleni Mangina for her constant support throughout this final year. She was always there for anybody who needed her support or advice, which is rare and much appreciated.


Table of Contents

0.1 Acknowledgements
0.2 Project Specification
0.3 Abstract

1 Introduction

2 Background Research
   2.1 Introduction
   2.2 A brief history of Multi Agent Systems (MAS)
   2.3 Agent Factory (AF)
   2.4 Environment Interface Standard (EIS)
   2.5 OpenSim
   2.6 OpenMetaverse
   2.7 XStream

3 Core Architecture
   3.1 Rebuilding Foundations
   3.2 OpenMetaverse & XStream
   3.3 Proposed Architecture
   3.4 Communications Layer
   3.5 Sensor Manager
   3.6 Actions Manager
   3.7 Interactive Objects
   3.8 GUI

4 Agents and OpenSim
   4.1 EIS
   4.2 EIS & AgentFactory

5 Evaluation
   5.1 The Scenario
   5.2 Implementation
   5.3 Results

6 Conclusion
   6.1 Future Work

0.2 Project Specification

The objective of this project is to enable autonomous virtual characters in virtual environments such as Second Life. Second Life is an online 3D virtual world which offers excellent opportunities to create interactive simulations for various purposes, thanks to its inbuilt physics, programmability and collaborative features. One possible application of the target software is to enable ICT designers to implement actual Virtual Reality scenarios of Ambient Assisted Living (AAL) in domestic settings. For instance, such a system may be used to verify interaction designs and test AAL products in a simulated environment populated by simulated users. Agent Factory, a Java-based agent tool for rational decision making developed in UCD, will be used to inject goal-oriented behavior into the avatars. Both these avatars and the virtual simulated environment will be based on Open Simulator, often referred to as OpenSim, an open source server platform compatible with Second Life that can be accessed through a variety of clients, on multiple protocols. In particular, the proposed project will improve the design of, and extend, a pre-existing OpenSim text-based interface, OpenSim4OpenCog (OS4OC). OS4OC is a C# program which opens up an interactive console that can be used to instruct the avatar. While OS4OC supports a list of rudimentary actions (e.g. jump, sit, crouch, move, say...), further sensing and acting capabilities are needed to enable more sophisticated physical and social interaction (e.g. accounting for object manipulation, deictic gestures, facial expressions...).

Mandatory:
1. Familiarize with OpenSimulator and OpenSim4OpenCog.
2. Write a Java client to impart instructions to the avatar, access the information originated from the virtual world, and maintain a representation (world model) of the avatar's surroundings.
3. Create an extensible set of C# classes operating between CogBot and OpenSim. These classes should extend OS4OC's offerings, and should be tailored to the purpose of Java agents by handling a messaging protocol (based on TCP/IP) with the Java client.

Discretionary:
1. Integrate the Java client with Agent Factory, by using one of its standard interface capabilities to populate the agents' belief model and interact with their reasoning apparatus.

Exceptional:
1. Build a model of a home (including furniture etc.) relying as much as possible on available 3D models and mirroring a real AAL test-bed.
2. Implement a set of activities, such as make tea (switch on kettle, take milk...) or watch TV, driven by agents' plans formulated in Agent Factory.


0.3 Abstract

One of the greatest issues in system design and specification is predicting whether the system will function as expected. This project is particularly concerned with the testing of Ambient Assisted Living (AAL) scenarios in domestic settings. AAL scenarios are sensor systems which integrate into a domestic environment. Their function is to assist the occupant in their everyday life within that environment. AAL scenarios can range from AAL designed for special needs to AAL designed for more productive and easy living [10]. One possible solution to this problem may exist in virtual testing environments. Virtual environments have become increasingly popular over recent years as a means of testing such systems. One such virtual environment is OpenSimulator (OpenSim), a completely open source virtual simulator server, maintained and developed by an open source community. Using a virtual environment such as OpenSim, in conjunction with an interpreter (OpenMetaverse or OS4OC), this paper aims to develop a system which can tackle such problems. This paper will also describe in detail the technologies used in the process of designing this system, as well as the issues encountered.


Chapter 1: Introduction

Over the past few decades, computer technologies have advanced at an exponential rate. Today computer chips are affordable and in abundance. In recent years computer systems have developed new and interesting abilities to integrate themselves into our everyday lives. This is mostly credited to sensor systems, which now exist in almost every form known to mankind. We live in a society where computers can now see, hear and smell, even beyond our own sensing abilities. These sensing systems can now be seen in cars, mobile phones, laptops and even our own homes. Take the modest house alarm, a system which relies completely on its sensors to detect the presence of intruders. This simple sensor system has been around for over a decade now, and demand for it keeps growing.

There is huge potential in these sensing systems, particularly in the area of assisted living environments. Ambient Assisted Living (AAL) is a programme funded by many European countries, whose primary goal is to provide a living environment designed to accommodate the elderly in the comfort of their own homes [10]. There are many issues faced by AAL systems, such as reliability, limitations and adaptability. Given the dynamic nature of domestic environments, AAL systems need to be able to adapt while still retaining their ability to function properly after doing so. There exist many methods of implementing adaptable systems; however, their output can be difficult to predict. One possible solution to this problem is to test the system with random and diverse scenarios. In reality, such tests can be costly and time consuming. The main concerns for many startup projects are: how much will the equipment cost? What specifications will we need? And, most important of all, will it work?

This paper describes a virtual environment system which allows for systems to be tested within it. It is particularly aimed towards systems which require user interaction, such as evacuation plans or sensor system testing. The simulator used in this project is called OpenSim, which is based on the popular Second Life platform. There are also two other components to this system, which run in conjunction with the OpenSim virtual environment. These are the Java client, which autonomously or manually imparts instructions to the avatar, and a layer which lies between the Java client and the simulator, providing a form of communication between the two. The Java client should also be able to accommodate an autonomous agent system, in particular UCD's own Agent Factory.

The requirements of the project are simple. The final system should meet the following requirements:

1. Enable autonomous virtual characters in a virtual environment.
2. The virtual environment should be customizable and interactive.
3. The overall system should perform well in real time.


1.0.1 Report Structure

This report is split into six primary chapters: Introduction, Background Research, Core Architecture, Agents and OpenSim, Evaluation and Conclusion. This structure aims to describe the project from its analysis through its design and finally its implementation, thereby providing a fluid transition from concept to construct.

Background Research will provide a detailed insight into the technologies used in this project, as well as the research undertaken in selecting them. It will begin with a brief introduction to the history of Multi-Agent Systems, with reference to their origin in DAI. Following this, Agent Factory will be introduced as a valuable platform which allows for the development and deployment of such Multi-Agent Systems. The OpenSim virtual environment will be introduced as a plausible means for hosting the project's MAS. This will be followed by both the obsolete (OS4OC) and current (OpenMetaverse) technologies that can be used to connect to OpenSim.

Core Architecture will describe the design process of the project at its highest level. In particular, this chapter will discuss the reasoning behind the project's decision to build a new architecture, using OpenMetaverse, instead of extending its preceding system, which used OS4OC. Following this, a new architecture will be proposed, which focuses primarily on using XStream and TCP as a link between OpenMetaverse and Agent Factory. Sending complex data structures over a TCP stream will then be discussed in relation to linking OpenMetaverse with Agent Factory. In order to keep this report as brief as possible, this chapter will consist of both the design and implementation aspects of the Core Architecture.

The following chapter, Agents and OpenSim, will describe the integration of the architecture discussed in the previous chapter with an EIS environment interface. This will begin with detailed descriptions of both the actions and perceptions which were integrated into this environment, using the Core Architecture. The chapter will then proceed to discuss the integration of this EIS environment into Agent Factory, including a brief description of setting up a sample scenario.

The Evaluation chapter will detail a test AAL scenario, which is designed to show the final system's ability to accommodate such a scenario. This chapter will also attempt to incorporate into this scenario any extra features which the project has developed into its final system. The Agent Programming Language used to implement the scenario will be described in detail. Finally, this chapter will conclude with an analysis of the expected and actual results of the scenario's execution.

Finally, this report will wrap up with its conclusion, which also includes a future work section.


Chapter 2: Background Research

2.1 Introduction

This chapter will focus primarily on the technologies researched and used in this project. This project is largely based on Multi Agent Systems, and will therefore start by introducing the concept of Multi Agent Systems as well as their related technologies. These will include standards such as FIPA and EIS, as well as an agent platform, namely Agent Factory, which aims to support both. This chapter will also discuss various means for exchanging data between different programming languages, with particular emphasis on the XStream project.

2.2 A brief history of Multi Agent Systems (MAS)

The evolution of Multi Agent Systems can be traced back to its predecessor, Distributed Artificial Intelligence (DAI), which in turn is a subset of Artificial Intelligence. To begin to understand how agent systems work, it must first be defined what is meant by an agent. There exist many different variations which attempt to define what is meant by an agent. The most generalized definition describes an automated system entity which performs actions based on its surrounding environment. However, it is Wooldridge and Jennings' (W&J) definition of weak and strong notions of agency that is the most recognized [18]. According to W&J, agents can be defined through two definitions:

1. The weak notion of agency
2. The strong notion of agency

The weak notion of agency is a definition proposed by W&J which attempts to define agents in their simplest form. This definition describes agents as computer-based hardware or software systems which are autonomous, reactive, pro-active and also have certain social abilities. Perhaps the most vital point of this definition is that agents should have the ability to set their own goals and achieve those goals through their own decisions. W&J's definition also maintains that agents need not be mobile, which by definition extends an agent's use beyond mobile systems. W&J also maintain a strong notion of agency, which further refines the weak notion. This definition describes agents as having a mental state, which typically consists of beliefs, goals, obligations, knowledge and preferences, amongst other mental traits normally associated with humans. MAS views agents as having three fundamental characteristics [19]:

1. Autonomy: agents should have at least a minimum level of autonomy.
2. Local views: the system as a whole is regarded as too complex for one single agent to conceive. Therefore, an agent's view is restricted to a local subset of the global system view.

3. Decentralization: there must be no central control agent, which would lead to a monolithic system.

A key function of agents is their autonomous capabilities, from revising their own goals to sharing knowledge of their environment with other agents. Agents within a MAS are said to be social agents if they have the capability to share beliefs and perceptions of their local environment. However, as we can see in our own physical world, there exist boundaries in communications where languages are not the same. As different agent systems emerged, there was an increasing interest in establishing a standard which would allow interoperability of these agent systems. The Foundation for Intelligent Physical Agents, or FIPA, set out to establish a set of standards which would promote the interoperability of different agent systems [20]. The Agent Communication Language (ACL) was one such standard proposed by FIPA. Two of the most successful ACLs are FIPA-ACL and the Knowledge Query and Manipulation Language (KQML). Both of these standards make extensive use of Searle's Speech Act Theory (SAT), which theorizes that human utterances are spoken with the result of an entity acting or reacting to that utterance [21]. Essentially, SAT viewed human utterances as actions which physically change the state of the world. For many agent systems, communication between agents is a primary function, and should be defined in a clear and effective form. Searle's Speech Act Theory held quite an important role in the development of agent systems, as it provided a clear breakdown of the types of communication utterances, as well as their effect. SAT derives its three core definitions from John L. Austin's doctrine of Locutionary, Illocutionary and Perlocutionary acts [22], where:

Locutionary acts define a well-structured utterance which has a substantial meaning. These acts can range from describing an object to asking a question. For example, "That candle is lit" is a well-defined locutionary act.

Illocutionary acts are locutionary acts which have the intention of causing a desired effect/action. For example, "May I light the candle?" is an illocutionary act which has the desired effect of the person at whom the utterance is directed responding with a confirming answer, i.e., yes or no.

Perlocutionary acts are acts which are performed as a result of saying, or not saying, something. These acts range from persuasive to inspiring acts. For example, "The candle has gone out." is a perlocutionary act which may have the effect of a hearer reacting to the statement by relighting the candle.

In essence, these speech acts give FIPA's two ACLs clear and concise meaning, which promotes effective and justifiable communications between agents within a multi agent system. While DAI systems focus primarily on how multiple artificial intelligence systems can work together across a distributed system, MAS specializes in autonomous, self-organized agents.

2.3 Agent Factory (AF)

Agent Factory is an open-source project whose primary purpose is to assist the development of multi-agent systems [2]. Agent Factory is composed of several platforms, tools and languages, and comes in two formats: Agent Factory Standard Edition (AFSE) and Agent Factory Micro Edition (AFME). These two editions allow Agent Factory to be tailored for both regular and mobile platforms.

Since this project is specifically tailored for desktop and server deployment, AFSE was chosen as the edition of Agent Factory to be used with this system. AFSE is a modular and extensible framework which allows for multi-agent systems to be deployed in a supporting environment [10]. One of the main purposes of Agent Factory is to provide an interface which is compliant with FIPA, therefore allowing for a wide range of compatibility with other agent systems. This can be seen in the AFSE Common Language Framework. This framework consists of a collection of libraries which allows a wide and diverse range of Agent Programming Languages (APLs) to be used. AFSE is composed of three primary features: a Run-Time Environment, a Common Language Framework, and EIS compatibility.

2.3.1 Run-Time Environment (RTE)

The Run-Time Environment is the most critical function within Agent Factory, as it provides support for the interoperability of different agent platforms [3]. It achieves this by providing the core software required by agent-based applications, which includes several agent platforms. The RTE effectively integrates these specialized agent platforms through a common communication channel. Figure 2.1 shows two different agent platforms, where agents communicate through a shared communication channel. The agents are represented by purple circles, and communications between agents, both local and cross-platform, are represented by dotted lines. Communications are transparent in this case, and the agents do not need to know how to communicate with agents existing on a different platform. Each platform may require certain services in order to support its agents; this is done through dedicated platform services which exist locally between the platform and the communication channel.

Figure 2.1: AgentFactory Run-Time Environment


In addition to providing transparent communications between different agent platforms, as well as support for multiple platforms, the RTE also provides a few key services which assist in the deployment and maintenance of such platforms. These include the Agent Management Service, which provides runtime support for agents (creating, terminating, suspending, resuming), as well as the Local Message Transport Service, which provides a means for cross-platform communications. The RTE is therefore an essential component of Agent Factory, as it provides transparent cross-platform support at run-time.

2.3.2 Common Language Framework (CLF)

The Common Language Framework is another essential component of Agent Factory, as it provides support for many FIPA-compliant agent programming languages and architectures. The CLF uses its own JavaCC-based compiler to check outline grammar and templates, as well as providing a configurable debugger [4]. There currently exist three main supported APLs: the Agent Factory Agent Programming Language (AFAPL), AF-AgentSpeak and AF-TeleoReactive. Many of these APLs base their structure on the Beliefs, Desires, Intentions (BDI) model.

1. Beliefs denote what the agent believes about its environment. For example, Bel(location,home) represents a belief stating that the agent believes it is at home. In short, beliefs are used to represent the state of an agent's local environment. Beliefs are usually stored within a database called a belief set; however, different systems may use different forms of belief storage.

2. Desires are used by agents to denote what they would like to achieve in the future, i.e., desires represent the agent's motivations/goals. Goals represent active beliefs, which should not conflict with other goals, i.e., an agent should not have a goal of becoming a doctor if it also has a goal of becoming a software engineer.

3. Intentions denote the agent's deliberative state, i.e., what the agent has chosen to do. In many systems, intentions denote what the agent has planned to do. A plan is a set of actions which an agent has formulated in order to achieve a certain goal.

4. Events are used to update an agent's belief set, resulting from a trigger. These triggers may exist internally or externally, i.e., the agent may have its own internal triggers such as sleep, or triggers may result from a change in the agent's environment.

AF-AgentSpeak

AF-AgentSpeak (AF-AS) is a specialization of an extended version of Rao's AgentSpeak(L) language, implemented through Agent Factory [16]. The Jason-based language was initially developed as a demonstrative tool, which aimed to show how Agent Factory can be used to efficiently implement existing APLs using the CLF. AF-AS includes a reuse model which allows for inheritance, abstract plans and agents, as well as overriding plans. The name of the file must reflect the designated agent name within the file, i.e., the file test.aspeak must include #agent test within its declarative statement. This is due to AF-AS's ability to extend and inherit agents, where AgentSpeak files must be clearly defined and easily locatable. Beliefs in AF-AS are simple and take the form of grounded predicate formulae. For example, if an agent sees a ball, the belief see(ball) will be generated. Once this belief is generated, a creation event, denoted by a + symbol, is raised, i.e., +see(ball). These events only last for one agent cycle and are automatically removed via the removal event symbol -, i.e., -see(ball).


Plans in AF-AS are composed of a set of rules, and are triggered by an associated event within a specified context. Rules may also use variables through the ? symbol, which is used to define all variables. AF-AS is also capable of handling simple if-else statements within its plans, similar to those seen in Java. AF-AS also supports printing to the console through the commands .print() and .println(). The following example highlights most of these discussed features:

    #agent helloworld
    #extends simpleAgent

    module eis -> com.agentfactory.eis.EISControlModule;

    +initialized : name(?name) <-
        .println("Hello World from " + ?name),
        eis.perform(lookAround());

    +see(?type) : true <-
        ?typeCopy = ?type,
        .println("I can see an object of type " + ?typeCopy);

This helloworld agent extends an already existing simpleAgent, and implements an EIS module. Modules are necessary to perform actions which have been predefined within that module. When this agent initializes, it prints out a hello world message along with its name and then calls for an action lookAround() to be performed.

AFAPL

AFAPL was the original language specifically designed for Agent Factory. It was later adapted in accordance with the Common Language Framework [15]. AFAPL is composed of a set of commitment rules which define situations where the agent should act/react. These rules formulate the basis of agents within AFAPL and allow the agent to work towards its goals. AFAPL is also modeled on the BDI model, and is composed primarily of Beliefs, Goals, Plans and Commitment Rules. AFAPL also supports plan structures, which can be used to define extra functions based on a precondition and a postcondition. The following code demonstrates a plan which is used to print out a statement to the console whenever the say action is performed:

    state(initialized) <-
        !say(hello),
        !say(goodbye);

    plan sayPlan(?x) {
        pre true;
        post say(?x);
        body {
            .println("You said " + ?x);
        };
    }

which results in the following output:

    You said hello
    You said goodbye

AF-TeleoReactive

AF-TeleoReactive (AF-TR) is another language written for Agent Factory, derived from Nils Nilsson's Teleo-Reactive model while adhering to Agent Factory's Common Language Framework [14]. AF-TR was designed to function in conjunction with a dynamic environment, while maintaining and processing the actions of its autonomous agents. A nice feature of AF-TR is its ability to reuse agent code, in a manner similar to that seen in object-oriented programming languages. It achieves this through the use of its #extends keyword within the agent definition. AF-TR provides simple commands within its language, similar to those seen in AFAPL and AF-AS. However, the general structure of functions is slightly different, and reflects a more structured functional decomposition. AF-TR uses functions to define an agent's actions. Functions can be composed of both production rules and parameters. Production rules always take the form Condition -> Action. For example, the following snippet defines an agent named SpeakingAgent which is capable of speaking:

    #agent SpeakingAgent
    #extends simpleAgent

    function main {
        initialized(true) -> say("Im Ready");
    };

    function say(?sayThis) {
        true -> .println("I said " + ?sayThis)
    };

Here we can see one production rule per function. The main function is run first, and states that if the agent is initialized then it should perform the action say. The second function is a triggered event, which gets called when the action say is performed by the agent, and prints the result to the console. The first line of the script is the declarative statement, which declares the agent file as SpeakingAgent as well as extending another AF-TR file named simpleAgent. Consequently, agents implemented through this file also inherit all of the functions and traits of the simpleAgent.

2.4 Environment Interface Standard (EIS)

In order to address the growing issue of interoperability between various APLs, a set of standards was established which came to be known as the Environment Interface Standard, or EIS. EIS modelled its standards on popular existing APIs, while at the same time maintaining an interface which is as generic as possible. This approach was taken in order to facilitate the adoption of EIS by existing APLs, by providing a set of standards which are similar in style.

2.4.1 Agent-Entities-Relation

The agent-entities-relation refers to the manner in which EIS views an agent's interaction with its environment [6]. EIS regards agents as separate from their environment, where agents interact with their environment through an assigned entity. An entity can be seen as the agent's avatar, which allows the agent to indirectly interact with its environment through the use of sensors and actuators. This process is handled by the Environment Interface (EI), which may be adapted to the APL's specifications. When designing the Environment Interface, EIS decided to allow this agent-entities-relation to be configured within the EI. Consequently, the EI requires that both agents and entities be preconfigured by populating sets of identifiers for each. The agent-entities-relation can then be configured by creating associations between agent and entity identifiers. This setup accommodates the diversity within Environment Interfaces, as it allows for any combination of agent-entity relations: one-to-one, one-to-many, and many-to-one. For example, one EI may have multiple agents all sharing control over one entity (Figure 2.2: Agents C & D), e.g., multiple marines controlling one submarine. On the other hand, one agent may require control over several entities (Figure 2.2: Agent B), e.g., a central control system operating several traffic lights. Or, in the simplest scenario, each agent may control a single entity (Figure 2.2: Agent A), e.g., a competitive virtual football game. The following figure shows a sample Environment Interface configuration which includes each of these three possible relations.

Figure 2.2: Agent-Entity-Relation

2.4.2 AF integration

Agent Factory saw EIS as a valuable commodity, as it provides standards which define how agent platforms and architectures can connect to an environment interface. Such standards are the key to allowing different agent platforms to share the same environment interface, as well as allowing different agent platforms to be benchmarked against each other. In order to successfully integrate EIS into Agent Factory's architecture, several components were designed to facilitate the integration. These include a Platform Service, a set of modules, a Manager Agent and various Run Configurations [5]. The link which connects EIS to Agent Factory is provided by the platform service itself. In order to interact with EIS environments, CLF-based agents utilize one of two purpose-built modules, namely the Management and Control APIs. The Management API is responsible for the creation, suspension, resumption and termination [16] of agents existing on that platform. This is implemented through the Agent Management Service (AMS), which represents a core platform service implemented on all agent platforms, as required by FIPA. The Control API is likewise used to manage agents. It is responsible for allowing the creation of agent-entity associations; setting up the API and linking it with the associated EISService, i.e., setup(?serviceId); registering an agent with the environment, i.e., registerAgent(Agent); and enabling entities to perform actions through the associated Environment Interface, i.e., perform(?Action). When created, agents use the Control API to link to their associated entity, and subsequently use the API to retrieve the entity's sensory data and to perform actions. In addition to this, a default manager agent is provided that creates agents for each free entity in the environment. Finally, a set of Run Configurations is also maintained to assist the debugging and deployment of EIS applications.
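To make this flow concrete, the following Java sketch walks through the sequence just described. The EISControl interface and its method names mirror the calls named above (setup, registerAgent, perform), but the signatures and the logging implementation are illustrative assumptions, not the actual Agent Factory or EIS source.

    import java.util.HashMap;
    import java.util.Map;

    // Illustrative sketch only: names mirror the calls described in the
    // text; they are assumptions, not the real Agent Factory/EIS classes.
    interface EISControl {
        void setup(String serviceId);                 // link to an EISService
        void registerAgent(String agentName);         // register with the environment
        void associateEntity(String agentName, String entityName);
        void perform(String agentName, String action); // act through the entity
    }

    class LoggingEISControl implements EISControl {
        private final Map<String, String> agentToEntity = new HashMap<>();

        public void setup(String serviceId) {
            System.out.println("linked to service " + serviceId);
        }
        public void registerAgent(String agentName) {
            System.out.println("registered agent " + agentName);
        }
        public void associateEntity(String agentName, String entityName) {
            agentToEntity.put(agentName, entityName);
        }
        public void perform(String agentName, String action) {
            System.out.println(agentToEntity.get(agentName) + " performs " + action);
        }
    }

    public class EISFlow {
        public static void main(String[] args) {
            EISControl control = new LoggingEISControl();
            control.setup("opensimService");
            control.registerAgent("bob");
            control.associateEntity("bob", "avatarBob"); // one-to-one relation
            control.perform("bob", "lookAround()");
        }
    }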

2.5 OpenSim

An essential component of this system is the virtual environment. It is important to this project that such a system be easily customizable and extendable. It would also be quite beneficial if the virtual environment had a large support base and a well maintained API. OpenSim is an open-source 3D virtual environment server which is based on Second Life [7]. Second Life is a virtual environment where people can interact with other people through avatars. Users use programs called viewers to interact with the virtual environment through their avatar. OpenSim implements the same messaging protocols as Second Life, which allows a Second Life viewer to be used to view an OpenSim virtual environment. However, OpenSim is much more open than Second Life, as its primary goal is to create a virtual environment which can be moulded and adapted as necessary. Objects within OpenSim are known as prims, which can come in various shapes and forms. One of OpenSim's strong points is its social interaction features: in OpenSim you can make friends, join groups and interact with other avatars. It also provides support for multiple physics engines, which enables a grid to choose whichever physics engine suits it best. OpenSim servers can be run in two different modes: standalone or grid mode. The first mode, standalone, is the easiest to set up and runs the simulation on one system. This means that standalone mode is restricted in the number of users it can accommodate. Grid mode, however, allows a simulation to be spread across multiple systems, thereby increasing the scalability of a virtual environment and allowing for a much greater user capacity. One of the great features of OpenSim is its HyperGrid system. The HyperGrid allows multiple OpenSim servers to connect to each other, much like the structure of the internet. In this way, OpenSim is potentially an infinite virtual environment. The HyperGrid works by keeping a reference to all of the connected servers and allowing a user to teleport between different grids.


2.6 OpenMetaverse

The system which was already implemented before this project consisted of the OpenSim virtual environment, with OS4OC as the interpreter. However, this setup was found to be unreliable and bug-prone. One of the suggested reasons for this is that OS4OC is not very well maintained, and has remained in the early stages of development. As a result, this project decided to find an alternative interpreter for OpenSim. It was discovered that such a program existed within the architecture of OS4OC itself, namely OpenMetaverse. OpenMetaverse is an open source set of libraries which has been primarily designed to access OpenSim's core functionality [1]. It allows us to log in to an OpenSim simulator, impart instructions to an avatar, and access the avatar's surroundings. Like OS4OC, OpenMetaverse is .NET based. OpenMetaverse is simple and reliable, and allows systems to be easily built on top of it.

2.6.1 Architecture

OpenMetaverse is composed of three main components: OpenMetaverse-Types, OpenMetaverse-Structured-Data and OpenMetaverse-Core. OpenMetaverse-Types is a set of common types required for 3D space manipulation, and also includes a set of types necessary for communications between client and server nodes. OpenMetaverse-Structured-Data consists of functions for interpreting and translating objects to and from OpenSim's serialization format. Perhaps the most important object within OpenMetaverse, with regard to this project, is the GridClient object. The GridClient provides access to an avatar's sensors, as well as its actions. The GridClient is composed of several manager instances. These managers are responsible for obtaining data relevant to their allocated area, as well as implementing commands on behalf of the grid client. The GridClient is composed of over 17 of these managers, and more may be added as OpenMetaverse extends its feature set. The Grid Manager is used to access information about the grid to which the client is connected. This includes the local time, a list of map items, the position of the sun and the height of the water. The Object Manager maintains a list of all of the prims (objects) within a set radius of the avatar, and also allows prims to be edited by the avatar, given that it has permission to do so. The Avatar Manager, namely Self, maintains the avatar's interactions with its environment. This manager consists of several actions, which are derived from two underlying managers, Movement and Actions. There currently exist a few issues with the movement of the avatar, such as infinite movement. For example, calling an action to move an avatar forward will result in that avatar moving until stop is called. It is currently not possible to assign a stop position or maximum movement distance to an avatar. This may be an issue for certain applications which require precise movements, and may be amplified by network delays, as OpenMetaverse connects to the simulator over a network stream.


2.7 XStream

As discussed before, this project is composed of several individual sub-systems, most of which are built in different languages. Communication protocols between these systems are therefore an essential factor in this project, and it is vital that they handle large quantities of data efficiently. This is why this project chose the reliable TCP over UDP as its communication layer. What is needed then is an efficient method for serializing objects. One such library for serializing objects to and from XML is the open-source XStream [9]. XML (Extensible Markup Language) is a set of rules which allow objects of any form to be translated into a machine-readable form [11]. It makes extensive use of tags which define the objects they enclose. Using XStream, it is possible to transfer objects from one system to another. Another key benefit of XStream is that it is available as both .NET and Java libraries. XStream is also designed to serialize and deserialize objects efficiently and quickly. XStream's strength lies in its ability to translate complex objects to and from XML with little configuration. However, this strength coexists with a weakness. For example, if a person object were being transferred from OpenMetaverse to the Java client, both services would have to maintain a local person class identical in structure. This is essential to XStream, as it would otherwise not know how to rebuild the object from the XML stream. However, after testing, it was determined that the functionality of the objects could differ, as long as the core data structures of the objects were the same.
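As a brief illustration, the following Java sketch shows the round trip XStream performs. The Person class is a stand-in for the project's world-model objects; the same XML can be consumed by the .NET port, provided an identically structured class exists on that side.

    import com.thoughtworks.xstream.XStream;

    public class XStreamRoundTrip {
        // Stand-in for a shared world-model class; an identically structured
        // class must exist on the .NET side for deserialization to work.
        public static class Person {
            String name;
            int age;
            public Person(String name, int age) { this.name = name; this.age = age; }
        }

        public static void main(String[] args) {
            XStream xstream = new XStream();
            xstream.allowTypes(new Class[]{Person.class}); // required by recent XStream releases
            // The alias is the tag name agreed between client and server,
            // so each side may name its local class whatever it likes.
            xstream.alias("person", Person.class);

            String xml = xstream.toXML(new Person("Alice", 30));
            System.out.println(xml);
            // <person>
            //   <name>Alice</name>
            //   <age>30</age>
            // </person>

            Person copy = (Person) xstream.fromXML(xml);
            System.out.println(copy.name); // Alice
        }
    }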

Figure 2.3: XStream Example


Chapter 3: Core Architecture

In this chapter, a brief overview of the project system will be given as it developed through its various stages, from the pre-existing system to the final system. Implementation details will be kept brief and concise where possible, with specifics described in the following sections.

3.1 Rebuilding Foundations

The initial goal of this project was to build an extensive set of communication protocols on top of a pre-existing system. The existing system was composed of three core subsystems: OpenSim, OpenSim4OpenCog (OS4OC), and a simple AgentSpeak agent within Agent Factory. These communication protocols should allow a Java client to impart instructions to an avatar within OpenSim, as well as maintain a complex world model. To connect OS4OC and Agent Factory, the existing system implemented a simple TCP channel carrying XML. This channel was used to transfer actions from Agent Factory to OS4OC, which would then be decoded by OS4OC, and the relevant action would be called from the grid client. The grid client then forwards the action request directly to the OpenSim server. This system was built into an existing OS4OC program, which consisted of a simple console-based interface. This interface allowed the user to issue commands, including world descriptors such as "describe all" and actions such as "move forward".

This task began with testing within the OS4OC environment itself; however, it quickly became apparent that OS4OC was not a stable platform. Random crashes, core library errors and performance issues were just some of the issues encountered with OS4OC. The main issue was that OS4OC had been developed specifically towards another AI platform called OpenCog, and was still in development. These issues with the most critical component of the system, the OpenSim interpreter, led the project to consider finding an alternative solution.

Research into the problematic component itself, OS4OC, and in particular how it connects to OpenSim, revealed that OS4OC utilizes a small set of libraries called OpenMetaverse, which are specifically tailored for connecting to OpenSim. Using this set of libraries, one can control an avatar's movements as well as access real-time data about its surroundings. This discovery also allowed the newest and, most importantly, most stable release of OpenMetaverse to be deployed. Further research was also done into the possible existence of a Java equivalent to OpenMetaverse, as such a library would substantially benefit the system by removing the need for cross-platform translation. One such project did exist, called libsecondlife-j [17], an attempt to port the existing OpenMetaverse platform to Java. Unfortunately, this project was over four years old and inactive, and was consequently near impossible to set up due to outdated library dependencies. It was decided that implementing the project using OpenMetaverse would be much less time-consuming, as well as much more likely to result in a stable and effective system.


3.2 OpenMetaverse & XStream

OpenMetaverse's stability and its large, active community made it the clear choice as the OpenSim interpreter. However, one issue remained: how to integrate OpenMetaverse, a C# based system, with Agent Factory, a Java based system. What was needed was the simplest and most extendable means of integrating these two vital components. This began by researching how the preceding system achieved such an integration, as its interpreter, OS4OC, was also C# based: through TCP and XML, both of which are common components of Java and C#. That system built up a set of protocols from scratch, with complex XML parsers existing on both platforms in order to serialize/deserialize the data. Before attempting to write custom XML parsers to translate data and requests to and from XML, it was decided to research whether an easier alternative existed. Several possibilities were explored, including Remote Procedure Calls (RPC) and TCP/UDP communication mechanisms. Research into RPC revealed that setups involving different programming languages could be quite difficult and time consuming. Since the functions involved were mainly actions of a simple nature, such as Move and Say, it was decided that these methods could be invoked through simple XML structures. This concept was further realized by the discovery of a popular XML parser called XStream. XStream allows complex data structures of all types to be parsed to and from XML. Using XStream, the system could transfer both sensor objects, i.e., real world data such as a house, and action objects, without the need to construct a complex XML parsing system to serialize/deserialize the objects. This was a huge benefit to the progress of the project, as it allowed the project to concentrate on the data structure of the objects being transmitted, without worrying about their complexity or how to parse them to XML. Having one form of communication for both method invocation and object retrieval allowed the project to focus on developing and maintaining one communication system. As a result, XStream can be seen as the system's primary marshalling service, and was essential in allowing the project to develop quickly and effectively, as well as allowing the project to be extended easily in the future.

3.3 Proposed Architecture

Having established a means for connecting to OpenMetaverse from Java, the next step was to devise an architecture which would best complement the capabilities of both OpenMetaverse and XStream. This process began by developing a core communication layer which would allow transparent access to OpenMetaverse. This layer was named the Communication Layer, and is composed of TCP streams using XML to transfer data between nodes. This layer also provides plug-in functionality, where any number of components can plug in to the communication layer, gaining access to OpenMetaverse and consequently to OpenSim itself. The communication layer is further broken down into two sub-layers, namely the actions and sensor layers. The reason for dividing the communication layer into these two components is to provide a simple decomposition of OpenMetaverse's functionality. Each of these three communication layers is managed by an associated management service. These management services are responsible for implementing and maintaining communications, while at the same time providing extra services related to that layer.


Figure 3.1: Proposed Architecture

Since the communication layer is essentially a wrapping service for the sensor and actions layers, the communication manager's only responsibility is to initialize and maintain these two layers. The Actions and Sensor Managers differ in functionality depending on whether they exist as server or client instances. For example, an Actions Manager existing on the client side has the simple task of forwarding actions to the OpenMetaverse server, whereas the Actions Manager existing on the server side is responsible for carrying out those actions, as well as ensuring concurrency and other related issues are handled. On both the client and server, the Sensor Manager is responsible for maintaining an up-to-date world representation at all times. These communication layers rely on the capabilities of XStream to transfer data. XStream instances exist at the point where the communication managers connect to the communication layer, providing a means for serializing and deserializing objects and data. In this case, XStream can be seen as a universal plug, which allows the communication manager to plug into the communication layer regardless of its core language, i.e., Java or C#. XStream achieves this through its alias associations, which associate an agreed name for a data type with a reference to the local representation of that type, e.g., Alias(String, typeOf(string)).

This architecture is designed to be able to run across two machines. This was primarily due to the requirements of OpenMetaverse and OpenSim, i.e., a Windows environment. Since the communication layer is implemented through TCP, the architecture can be spread across two machines with static IPs. Consequently, the system was divided into two components, a server and a client. The server wraps OpenSim and OpenMetaverse together, as they are the only two components which require a Windows platform. This allows the client to be platform independent, which is also aided by the fact that Agent Factory is built in Java, which strives to be platform independent. This architecture, shown in Figure 3.1, comprises four primary components: the EIS environment, the GUI, OpenMetaverse and the OpenSim virtual environment. The three components EIS, GUI and OpenMetaverse are connected by one shared communication layer. This allows both the EIS environment and the GUI to impart instructions to avatars, as well as retrieve world representations, through OpenMetaverse. OpenMetaverse directly connects to OpenSim through its grid clients, as mentioned in Section 2.6. This architecture is designed to be run with the server existing on one machine and the client existing on another. This is primarily due to performance issues which may occur on some machines. However, the system is perfectly capable of running on one machine, where that machine has the necessary resources to do so.

3.4 Communications Layer

In the previous section, the communication layer was introduced as a means to connect Agent Factory to OpenSim through a combination of XStream, TCP and OpenMetaverse. The client/server architecture was also introduced as a means to allow for platform independent clients, as well as spreading out the workload of the overall system. This section will go into further detail on the mechanisms behind this communication layer, focusing primarily on how the layer is implemented on both the server and the client, as well as the types of data that are sent across the various communication streams. The sketch below illustrates the client side of this mechanism.
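In the following Java sketch, an action object is serialized with XStream and written onto the TCP stream, where the C# server deserializes it with its own XStream instance. The port number, the alias name and the SayAction class are illustrative assumptions, not the project's actual wire format.

    import com.thoughtworks.xstream.XStream;
    import java.io.OutputStreamWriter;
    import java.io.Writer;
    import java.net.Socket;

    public class CommunicationLayerClient {
        public static void main(String[] args) throws Exception {
            XStream xstream = new XStream();
            xstream.alias("action", SayAction.class); // tag name agreed with the server

            try (Socket socket = new Socket("127.0.0.1", 9000);
                 Writer out = new OutputStreamWriter(socket.getOutputStream(), "UTF-8")) {
                // Serialize the action and push it onto the TCP stream; the
                // server rebuilds it from the XML with a matching class.
                out.write(xstream.toXML(new SayAction("avatar-uuid", "Hello, OpenSim")));
                out.flush();
            }
        }
    }

    class SayAction {
        String avatarUUID; // which avatar should speak
        String message;
        SayAction(String avatarUUID, String message) {
            this.avatarUUID = avatarUUID;
            this.message = message;
        }
    }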

3.5 Sensor Manager

3.5.1 OpenMetaverse

On the server side, the Sensor Manager is responsible for maintaining and updating its world model every 200 ms. It does this by translating the data available from OpenMetaverse into a custom world model object, which can then be interpreted by the Java client. During a world update, the Sensor Manager will choose the best candidate, i.e., a grid client, from the list of currently logged in grid clients in order to retrieve world data from the OpenSim model. The main issue which arose from implementing this system was communication cost. For example, the Sensor Manager should not send the world model to the client if that model has not changed since the last time it was sent. To achieve this, every object within the world model, including the world model itself, was made comparable to other instances of the same type. This allowed the Sensor Manager to determine when the state of the world has changed, and consequently to send the world model only in that situation (a sketch of this update loop is given at the end of this subsection).

The server-side Sensor Manager is also responsible for managing the world's sensor objects. In particular, this manager allows these sensor objects to be created and manipulated in real time. It achieves this by rebuilding sensor objects during every world update. Since world updates only occur when something changes within the OpenSim environment, such as an avatar moving or a prim being created, sensors are only updated when necessary. In order to determine when a user has changed a sensor script, which is located within the prim's description, further world comparators were added to detect relevant changes to sensors. The script interpreter is located within the Sensor object itself, which is in turn an extension of a prim object. This allows the sensor object to be built within any primitive object, as its script is located within the prim's description. The sensor object works by comparing its position to that of all the avatars within the grid. If an avatar's position is within the sensing range, the sensor populates one of two lists of agent names with that agent's name. These lists are used to determine whether an avatar within the range of the sensor is moving or not moving. This is particularly useful for testing certain AAL environments, such as determining the last known location where the occupant was active. Due to the cost-effective way in which these sensors are updated, there can be a high number of sensors within a virtual environment, the only limit being the number of avatars, as the sensors need to check the position of every avatar during an update.
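The following Java sketch illustrates the update loop described above (the real server component is C#, but the structure is the same). WorldModel and the two helper methods are illustrative stand-ins for the project's actual classes.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Objects;

    public class SensorManagerLoop implements Runnable {
        private WorldModel lastSent;

        public void run() {
            while (!Thread.currentThread().isInterrupted()) {
                WorldModel current = buildWorldModel(); // translate OpenMetaverse data
                // Every world object overrides equals(), so one comparison of
                // the whole model detects any change in the environment.
                if (!Objects.equals(current, lastSent)) {
                    sendToClient(current);
                    lastSent = current;
                }
                try {
                    Thread.sleep(200); // 200 ms update interval
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        }

        private WorldModel buildWorldModel() { return new WorldModel(); }
        private void sendToClient(WorldModel model) { /* serialize via XStream over TCP */ }
    }

    class WorldModel {
        List<String> prims = new ArrayList<>(); // simplified world content

        @Override public boolean equals(Object o) {
            return o instanceof WorldModel && prims.equals(((WorldModel) o).prims);
        }
        @Override public int hashCode() { return prims.hashCode(); }
    }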

3.5.2 AgentFactory & EIS

On the client side, the Sensor Manager plays a more complex role. Here, the Sensor Manager must maintain a dynamic and up-to-date world representation, while at the same time providing extra functionality based on that data set. One such function is the manager's ability to generate a sub-world model based on an agent's location and a limited range from that location. For example, an avatar standing in a complex and vastly populated world may only be interested in a limited number of objects within a certain distance of its position. This can greatly reduce an agent's belief set, which is primarily based on its perception of its local world. The Sensor Manager also provides multiple methods for identifying both avatars and objects, through retrieval mechanisms which can take either a name or a UUID. This allows the EIS implementation to keep its action set simple, as actions which are associated with objects or avatars can take any supported identification. However, as many objects are not named by default, identification defaults to UUIDs unless otherwise requested. The following sketch illustrates the sub-world query.
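In the Java sketch below, the full world model is filtered down to the objects within a given range of the agent's position. Vector3 and WorldObject are illustrative stand-ins for the project's world-model types.

    import java.util.ArrayList;
    import java.util.List;

    public class SubWorldFilter {
        public static List<WorldObject> nearbyObjects(List<WorldObject> world,
                                                      Vector3 agentPos,
                                                      double range) {
            List<WorldObject> result = new ArrayList<>();
            for (WorldObject obj : world) {
                if (agentPos.distanceTo(obj.position) <= range) {
                    result.add(obj); // keep only objects the agent can perceive
                }
            }
            return result;
        }
    }

    class Vector3 {
        final double x, y, z;
        Vector3(double x, double y, double z) { this.x = x; this.y = y; this.z = z; }
        double distanceTo(Vector3 o) {
            double dx = x - o.x, dy = y - o.y, dz = z - o.z;
            return Math.sqrt(dx * dx + dy * dy + dz * dz);
        }
    }

    class WorldObject {
        Vector3 position;
        WorldObject(Vector3 position) { this.position = position; }
    }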

3.5.3 World Objects

World objects are a key component of the sensor layer, as they provide a means for interpreting OpenSim world data into a customized and simplified form. World objects come in many forms, and have been designed to best represent a typical OpenSim scenario. The highest form of world object is the World object itself. This object can be seen as a container for all other world objects, such as Prims, Agents, Avatars, Nodes, Sensors and Useables. When a client makes a request to the server through the sensor manager, the server's sensor manager will reply with an up-to-date World object. In the first implementation, Sensor and Useable objects were extensions of Prim objects, the idea being that both Sensors and Useables are represented physically by Prim objects within the OpenSim environment. It therefore seemed logical that such objects should extend the object they are based upon. However, when testing this system there was a noticeable increase in the throughput of the sensor layer, which was detected through the GUI. It was quickly discovered that this was due to Prim data being duplicated where Prims had been implemented as Sensors or Useables: the world model was storing Prim data within its list of Prims as well as within its lists of Sensors and/or Useables. Consequently, Prim data was removed from Useable and Sensor objects by making them their own objects, unrelated to a Prim. Instead, the Prim UUID is stored within the object, which allows the related Prim object to be recalled from the list of Prims within the World object, as sketched below.
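The following Java sketch outlines this revised layout: Sensor and Useable no longer extend Prim; each holds the UUID of its backing prim, so prim data is stored exactly once, in the world's prim list. All class and field names are illustrative stand-ins.

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;
    import java.util.UUID;

    class World {
        Map<UUID, Prim> prims = new HashMap<>();
        List<Sensor> sensors = new ArrayList<>();
        List<Useable> useables = new ArrayList<>();

        // Recall the prim that physically represents a sensor in OpenSim.
        Prim primFor(Sensor sensor) { return prims.get(sensor.primUUID); }
    }

    class Prim { UUID uuid; String name; }
    class Sensor { UUID primUUID; double range; }
    class Useable { UUID primUUID; List<String> states; }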


Figure 3.2: World Objects

3.6 Actions Manager

3.6.1 OpenMetaverse

On the server side, there are two core processes running, both of which are responsible for listening and executing actions accordingly. The listener thread maintains a constant communications with the client, and is responsible for simply adding the received action to a queue of actions. The execution thread is responsible for actually carrying out the actions one by one, removing them from the queue as it works its way through them. The manager was designed in this way because of the bottleneck of the single TCP connection. For example, processing an action can take any amount of time, and may often be costly in nature. However, adding an action to a queue is simple and cost eective. This is why the listener only adds the action to the queue, and then goes back to listening for more incoming actions, allowing actions to be received by the server in a durative speed of up to 100ms intervals. As discussed before, XStream allows for objects to dier in functionality, a trait of which can be seen to be used within the server-side actions. Here, actions are based around a core function execute. This function is common to all actions, which makes the job of the executing thread much easier, since it just needs to call one generic function to execute all actions. The content of this execute function diers per type of action. Some actions require constant processing until their completion, which may take from milliseconds to minutes depending on the type of action. In particular, the movement action requires constant monitoring until its completion. This is due to the nature of movement protocols within OpenMetaverse itself. OpenMetaverse only supports vector-based movements, which means that one can only instruct an avatar to move in a certain direction at a certain speed, and not to a certain point. The problem here is determining when to stop an avatar from moving once it has reached its destination. To achieve this, movement actions were executed as a thread, which would stop after reaching one of the following conditions:


1. Reached its destination

2. Moved further than its estimated distance

3. Has stopped moving (e.g. walking into a wall)

4. Has timed out

Consequently, it was decided to add a new variable to the avatar object which indicates whether that avatar is moving. In addition to this, a function was defined within the client-side server manager which populates a list of objects that are beside that avatar. Using these two mechanisms in conjunction with the EIS belief set, it is possible for an agent to determine whether or not it has reached its destination. This type of system allows the minimum amount of logic to be maintained on the server side, while at the same time allowing agents to define their own complex movement protocols. Despite this, enabling autonomous server-side actions resulted in an unforeseen bug, where new movement actions became entangled with previous, still-active movement actions. This resulted in confused multi-directional movements and a complete loss of movement control for the avatar concerned. It was clear that certain actions needed to be stopped before new ones could start. This was achieved by storing the threads in an array, which is constantly checked for both finished threads and conflicting threads, either of which are terminated and removed from the list.
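The following Java sketch illustrates how such a self-terminating movement thread might check the four conditions above. The Avatar interface, the one-unit arrival tolerance and the 30-second timeout are all illustrative assumptions rather than the project's actual code (which runs server-side on OpenMetaverse).

    // Minimal sketch of a self-terminating movement thread; all names
    // and thresholds here are assumptions for illustration.
    interface Avatar {
        double getX();
        double getY();
        boolean isMoving();
        void stop();
    }

    class MoveThread extends Thread {
        private final Avatar avatar;
        private final double destX, destY, estimatedDistance;
        private static final long TIMEOUT_MS = 30000;

        MoveThread(Avatar avatar, double destX, double destY, double estimatedDistance) {
            this.avatar = avatar;
            this.destX = destX;
            this.destY = destY;
            this.estimatedDistance = estimatedDistance;
        }

        public void run() {
            long start = System.currentTimeMillis();
            double travelled = 0, lastX = avatar.getX(), lastY = avatar.getY();
            while (true) {
                travelled += Math.hypot(avatar.getX() - lastX, avatar.getY() - lastY);
                lastX = avatar.getX();
                lastY = avatar.getY();
                boolean arrived  = Math.hypot(destX - lastX, destY - lastY) < 1.0;  // 1. reached destination
                boolean overshot = travelled > estimatedDistance;                   // 2. moved further than estimated
                boolean stuck    = travelled > 0 && !avatar.isMoving();             // 3. stopped moving, e.g. hit a wall
                boolean timedOut = System.currentTimeMillis() - start > TIMEOUT_MS; // 4. timed out
                if (arrived || overshot || stuck || timedOut) {
                    avatar.stop();
                    return;
                }
                try { Thread.sleep(100); } catch (InterruptedException e) { return; }
            }
        }
    }

Keeping each movement in its own thread is what makes the conflict check described above necessary: two live MoveThreads steering the same avatar would produce exactly the multi-directional confusion observed.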

3.6.2 AgentFactory & EIS

On the client side, actions exist as simple objects, which are sent to the OpenMetaverse server, where they are interpreted and executed. Since the introduction of multiple-agent support, all actions are associated with an agent UUID, which allows the server to determine which avatar should carry out the action. Action objects follow an object-oriented design: the parent action consists simply of an agent UUID and a TYPE value. The UUID is used by the server to determine which avatar should carry out the action, while the TYPE value tells the server which of the numerous action types it has received. One fact was known about the incoming actions on the server side: the objects would all be extensions of a parent Action object. Tests were therefore carried out to determine whether polymorphism was maintained, such that an object descended from the Action object could be cast to an Action object while retaining its data. These tests proved that descendants of the Action object could in fact be converted to and from their parent type without loss of data. This also meant that incoming objects could first be cast to their parent type, Action, in order to determine their actual type, such as Move, and could then be cast to that appropriate type. The Action Manager is primarily responsible for ensuring that actions are delivered to the server, even during intense bursts of action calls. This is achieved by queueing actions and sending them to the server on a first-come-first-served basis every 100ms. This small delay allows other client processes to consume valuable resources, while at the same time reducing the overall CPU usage.
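To illustrate this first-come-first-served dispatch, the following Java sketch shows one way such a queueing loop might be structured. The ActionDispatcher name, the use of Object as the action type and the sendToServer placeholder are illustrative assumptions, not the project's actual code.

    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    // Sketch of the client-side dispatcher: actions are queued as they
    // arrive and sent to the server one at a time, every 100 ms.
    class ActionDispatcher extends Thread {
        private final BlockingQueue<Object> queue = new LinkedBlockingQueue<Object>();

        // Called by agent code; cheap, so callers are never blocked for long.
        void submit(Object action) {
            queue.offer(action);
        }

        public void run() {
            while (!isInterrupted()) {
                try {
                    Object action = queue.take();   // first come, first served
                    sendToServer(action);
                    Thread.sleep(100);              // yield resources between sends
                } catch (InterruptedException e) {
                    return;
                }
            }
        }

        private void sendToServer(Object action) {
            // Placeholder: in the real system this would serialize the action
            // with XStream and write it over the single TCP connection.
        }
    }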


3.6.3 Action Objects

Action objects were designed primarily for use within the actions layer. Action objects can be viewed as packaged instructions, which are interpreted and executed by the server. In this way, action objects can be seen as a form of remote procedure call, designed from the ground up using XStream and TCP. The actions implemented through these action objects relate directly to the avatars which are logged in and controlled by that server. Such actions range from instructing the avatar to move to a certain point, to instructing the avatar to interact with an object. Since these actions are directly related to a specific avatar, it was decided to associate each action object with an avatar by means of a UUID variable. As previously discussed, the UUID is OpenSim's way of identifying each world entity, with each entity assigned a unique id. Consequently, actions were constructed by hierarchical decomposition, where the root action contains the avatar's UUID. This ensures that all actions can be associated with an avatar. In addition to this UUID variable, a Type variable was added to the parent action object, which denotes what type of action it is. A total of five avatar actions have been implemented through this system. The simplest of these are the three actions Sit, Stand and Say. The Sit and Stand actions instruct the avatar to sit and stand, while the Say action broadcasts a message within the OpenSim environment and is used primarily for debugging purposes. Both the Sit and Stand actions are effectively empty action objects, as they implement no variables beyond those of the parent action object. The more complex actions are Move and Use. Rather than implementing a number of different types of movement action, such as Stop, MoveToAvatar and MoveToObject, one generic Move action was constructed using the trait common to all movements, i.e., moving to a point. The Use action can be used to set an object's state to one of the object's available states; for example, an avatar may wish to change the state of a lightbulb from off to on. The following figure shows this hierarchical decomposition of Action objects.

Figure 3.3: Action Objects
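To make this hierarchical decomposition concrete, the following Java sketch shows a minimal Action/Move pair together with the XStream round trip and parent-type cast described above. The class layout and field names are assumptions for illustration; only the XStream calls (toXML and fromXML) are the library's real API.

    import com.thoughtworks.xstream.XStream;
    import java.util.UUID;

    // Illustrative action hierarchy: the root carries the avatar UUID and a
    // type tag; subclasses add only the fields their action needs.
    class Action {
        UUID avatarUUID;
        String type;
    }

    class Move extends Action {
        float x, y, z;   // destination point
    }

    public class ActionRoundTrip {
        public static void main(String[] args) {
            Move move = new Move();
            move.avatarUUID = UUID.randomUUID();
            move.type = "Move";
            move.x = 128; move.y = 128; move.z = 21;

            XStream xstream = new XStream();
            String xml = xstream.toXML(move);       // serialize for the wire

            // Deserialize as the parent type first, inspect the tag, then
            // down-cast: polymorphism survives the XML round trip.
            Action received = (Action) xstream.fromXML(xml);
            if ("Move".equals(received.type)) {
                Move m = (Move) received;
                System.out.println("Move to " + m.x + "," + m.y + "," + m.z);
            }
        }
    }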


3.7 Interactive Objects

It was felt that object interaction would be a useful feature to add to the system. Since the primary goal of this project is to implement an AAL scenario within OpenSim, one such interaction could be implementing sensors as objects. These objects would detect nearby avatars, and could consequently be used to implement custom AAL scenarios. Another useful object behavior would be objects that react to an avatar's input, e.g., an avatar turning on a light switch. Consequently, two types of object behavior were implemented into the existing system, and these will now be discussed further.

3.7.1 Sensor Objects

Ambient Assisted Living scenarios benefit hugely from the use of autonomous sensors. In particular, such sensors allow us to provide ubiquitous environments, which allow the inhabitant to live comfortably within their own home. Since the only existing form of sensing was through an agent's own sensing abilities, it was decided that a new form of sensing should be implemented. These sensors should mimic real-life sensors in as many ways as possible. Ideally, it should be possible to create them from within the OpenSim virtual environment itself, i.e., a user should be able to create and manipulate a sensor object from an OpenSim viewer interface. The obvious choice here is OpenSim's prim objects, which allow for all of these manipulations and more. These sensors are defined by the following functionality:

1. Maximum range for the sensor

2. Custom message when triggered

3. Maintain a list of moving avatars within range

4. Maintain a list of stopped avatars within range

These traits require a form of configuration by the user/creator of the sensor object. Consequently, it was decided to use a custom script engine, which reads a set of parameters from the object's description field and creates a sensor object from those parameters. When a user creates an object within a viewer, such as the Hippo OpenSim viewer, the user is presented with the option to give that object a name and a description, as well as many other defining characteristics. This system uses that field as a script container. When the system detects that a user has entered a sensor script into the description, it immediately constructs a sensor object, which is then propagated to the world model. For example, the following script can be used to send the message "hello world" whenever an avatar comes within 10 meters of the object:

<type=sensor;range=10;print=hello world;>

This script can be placed anywhere within the description text and is designed to be minimal, as the description text only allows for one line of text (see figure 3.4). The two tags < and > define the beginning and end of the script, with each internal statement separated by a semicolon. Statements are defined in the format Name=Value. On the Java client side, sensors provide a list of the avatars which have activated them at that time. The key parameter here is the type parameter, which in this case is set to sensor. The reason for having a type parameter is that there can be more than one type of object behavior, i.e., Use-able objects.

Figure 3.4: Making a scripted prim (Hippo OpenSim viewer)
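A parser for such scripts can be very small. The following Java sketch shows one plausible way to extract the Name=Value pairs from a description field; the class and method names are illustrative assumptions rather than the project's actual parser.

    import java.util.HashMap;
    import java.util.Map;

    // Sketch of a parser for description-field scripts such as
    // <type=sensor;range=10;print=hello world;>.
    public class DescriptionScript {
        public static Map<String, String> parse(String description) {
            Map<String, String> params = new HashMap<String, String>();
            int open = description.indexOf('<');
            int close = description.indexOf('>', open);
            if (open < 0 || close < 0) {
                return params;                      // no script present
            }
            String body = description.substring(open + 1, close);
            for (String statement : body.split(";")) {
                String[] pair = statement.split("=", 2);
                if (pair.length == 2) {
                    params.put(pair[0].trim(), pair[1].trim());
                }
            }
            return params;
        }

        public static void main(String[] args) {
            Map<String, String> p = parse("<type=sensor;range=10;print=hello world;>");
            System.out.println(p.get("type") + " / " + p.get("range") + " / " + p.get("print"));
            // prints: sensor / 10 / hello world
        }
    }

Because the script may appear anywhere in the description, the parser simply locates the first <...> span and ignores everything outside it, which keeps the one-line description field usable for ordinary text as well.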

3.7.2 Use-able Objects

Many agent scenarios require various forms of interaction with their environment. This project aimed to provide a generic means of adding custom reactive behaviors to objects, which would be activated through various agent actions. It was decided to achieve this in the same way that objects were transformed into sensor objects, i.e., through scripts within the object's description field. There were several factors to consider during this development process, such as how the objects could react, and how the agent would know that they had reacted. It was decided to use a multiple-state system, where the objects take on various states such as On or Off. Once the agent interacts with the object through its Use action, the object changes its state. For example, an agent using a light switch may result in that light switch changing its state from on to off. These objects were denoted Use-ables, and are composed of the following properties:

1. Maximum distance an avatar can be from the object in order to use it

2. Last time the object was used

3. Last avatar to use the object (Name and UUID)

4. Current state of the object (e.g. On)

5. List of available states (e.g. On, Off, Idle)

The useable's states are defined by the states variable, which can list one or more states separated by commas. For example, the following script can be used to define a traffic light prim:

< type=useable; range=5; states=green,orange,red,blinking-red; >

Both Useable objects and Sensor objects react visually when their state changes. Sensors glow when a moving avatar is detected within their range, while Useable objects glow when an avatar acts upon them, or when their state changes. This function was implemented primarily for debugging purposes, allowing the user to know when a sensor or useable has been set up correctly.
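The following Java sketch illustrates the state machine implied by these properties: use() cycles to the next available state, while setState() selects one directly. The class layout is an assumption for illustration, not the project's actual code.

    // Sketch of a Useable's state machine.
    class Useable {
        String[] states;    // e.g. {"green", "orange", "red", "blinking-red"}
        int current = 0;

        Useable(String... states) {
            this.states = states;
        }

        // The Use action: advance to the next available state, wrapping around.
        String use() {
            current = (current + 1) % states.length;
            return states[current];
        }

        // The SetState action: select a state by its index in the states list.
        String setState(int stateNumber) {
            current = stateNumber;
            return states[current];
        }
    }

The wrap-around in use() means an agent can keep issuing Use actions to walk through every state, which is why a separate SetState action is useful for jumping straight to a known state.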

3.8 GUI

One of the earlier goals of the project was to develop a simple interface which would allow the user to impart instructions to an avatar, while at the same time visualizing the avatar's local world. The GUI was originally designed for testing the first stage of the system, i.e., the Communication Layer and OpenMetaverse. That early system was only capable of maintaining control over one agent at a time, which is reflected in the interface of the GUI itself. The GUI was designed before the communication managers, maintaining access to the OpenMetaverse server through its own mechanisms. These mechanisms were later developed into the Sensor and Action managers, which existed independently of the GUI. This led to the creation of the Communication Manager, which packages these two managers into one independent system. The GUI allows simple interaction with the avatar, as well as maintaining a 2D map of the virtual environment. The 2D map preserves the scale of objects with regard to their width and length. Clicking on the map sets a waypoint, indicated by a red dot, which the avatar will automatically head towards. Through this interaction, paths, indicated by red lines, can also be set by adding more waypoints. Clicking the large Stop button below the map clears all waypoints and instructs the avatar to stop moving. In addition to visualizing the world from an agent's perspective, the GUI allows the client to change servers at run time, and it played a vital role in visualizing the communications throughput of the sensor manager. The GUI also provides a simple text-based interface, which can be used to impart instructions to the avatar, describe objects in greater detail and, most importantly, call for the OpenMetaverse server to reset. This GUI was a key contributor to the development of a system which is both stable and efficient. Its ability to visualize data in real time, with negligible effect on the performance of the machine, made it a valuable asset to the project.

Figure 3.5: GUI


Chapter 4: Agents and OpenSim

4.1 EIS

In order to allow various CLF-compliant Agent Programming Languages to access OpenSim, the communication manager was integrated into an EIS environment. To integrate the Client Communication Manager with EIS, a few extra mechanisms were implemented on top of the communication manager. This section discusses these fundamental processes.

4.1.1 Perceptions

The Sensor Manager has been extended to include mechanisms for populating beliefs based on queries to its world model. Such functions return a vector of beliefs based on an agent's location and a given distance from that location. One such function returns a set consisting of only one type of belief, see(UUID). This function essentially provides a list of references to the objects and avatars within the agent's view range, which reduces the agent's initial belief set to the bare essentials. Should the agent wish to generate further beliefs based on these see beliefs, it can do so through its describe(UUID) action. This action uses a similar function within the Sensor Manager, the difference being the complexity of the percepts returned. The agent's belief set is composed of several different beliefs, consisting of both world and personal beliefs. To further support the agent's concept of movement, several beliefs are populated based on an entity's movement, such as moving(UUID) and stopped(UUID), which denote whether that entity is moving or not. In the case of the agent itself, several self beliefs are populated, which represent the agent's beliefs about itself:

selfPosition(X, Y, Z) : The agent's current position within OpenSim.

selfName(Name) : The name of the avatar with which the agent is associated.

selfState(moving|stopped) : Indicates whether the agent's associated avatar is currently moving or stopped.

selfUUID(UUID) : The UUID of the avatar associated with the agent.

Another concept integrated into this EIS environment interface is spatial awareness. The idea is to introduce beliefs into the agent's belief set which make it more aware of objects within certain distances of its position. Consequently, beliefs are populated which indicate what objects the agent is beside, i.e., within a distance of three OpenSim units (1-2 meters). This belief is denoted beside(UUID). Any other objects that the agent sees are simply represented by the see(UUID) belief. The agent can expand upon these see(UUID) beliefs by issuing the special action describe(UUID). This describe action results in a new set of beliefs being added to the agent's belief set based on that UUID. Beliefs resulting from a describe(UUID) action can include the following:

description(UUID, Description) : Associates an entity's* UUID with its description.

moving(UUID) : Indicates that the entity with this UUID is moving.

stopped(UUID) : Indicates that the entity with this UUID is not moving.

name(UUID, Name) : Associates an entity's UUID with its name.

type(UUID, Type) : Associates an entity's UUID with its type.

position(UUID, X, Y, Z) : Associates an entity's UUID with its position.

see(UUID) : Indicates that the entity with this UUID is within the agent's visibility range.

state(UUID, State) : Associates a Useable's UUID with its state.

*Where an entity can represent any physical object within OpenSim, such as an avatar or a prim.

It was decided that beliefs relating to sensor objects should be maintained by a single agent. This was essentially due to the cost of generating beliefs: there may be numerous sensors, which could impact performance if all agents were required to maintain beliefs about each sensor object. Consequently, an agent named Sensor Agent is the only agent which has access to beliefs about sensors. It is up to the developer to determine how that Sensor Agent should handle those percepts, such as choosing which agents to share those beliefs with.
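As an illustration, the sketch below shows how such beliefs could be expressed as EIS percepts, assuming the eis.iilang classes of EIS 0.3 (Percept, Identifier, Numeral). The describe method and its parameters are hypothetical stand-ins for the Sensor Manager's real world-model queries.

    import eis.iilang.Identifier;
    import eis.iilang.Numeral;
    import eis.iilang.Percept;
    import java.util.LinkedList;
    import java.util.List;

    // Sketch of belief generation in EIS terms: world-model entries
    // become eis.iilang Percepts.
    public class PerceptBuilder {
        public static List<Percept> describe(String uuid, String name, String type,
                                             double x, double y, double z, boolean moving) {
            List<Percept> percepts = new LinkedList<Percept>();
            percepts.add(new Percept("name", new Identifier(uuid), new Identifier(name)));
            percepts.add(new Percept("type", new Identifier(uuid), new Identifier(type)));
            percepts.add(new Percept("position", new Identifier(uuid),
                    new Numeral(x), new Numeral(y), new Numeral(z)));
            percepts.add(new Percept(moving ? "moving" : "stopped", new Identifier(uuid)));
            return percepts;
        }
    }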

4.1.2 Actions

The set of agent actions implemented involves numerous avatar movement actions as well as interaction actions. For movement, a complex but simple-to-use action was implemented, namely moveto. This action takes the UUID of an object, an agent, or any other physical entity which exists within the OpenSim environment. It is essentially a universal method which simplifies and reduces the agent's set of movement actions to one single action. This action relies on the server to determine where and when to stop the avatar, i.e., when it has reached its destination. However, should this method fail, the agent can stop itself through a stop action. Other actions include actions which make the avatar sit and stand up, as well as actions designed to allow the agent to interact with certain objects through the use(UUID) and setstate(UUID, StateNumber) commands. The use command simply changes the object's state to the next available state, e.g., from On to Off, whereas setstate allows more control by letting the agent explicitly define which state the object should be set to; e.g., setstate(UUID, 0) would set the object's state to its first state, which in the previous example would be equal to On. Consequently, the setstate action requires the agent to have prior knowledge of the various states of that object. Below is a list of all of the commands available to an EIS agent through this environment.

describe(UUID) : This action can be imparted by an agent who wishes to populate a full set of beliefs based on the prim, avatar, agent, useable or sensor object associated with that UUID. The describe action can be seen as a global describe action for all world objects.

moveto(UUID) : The moveto action works in a similar fashion to the describe action, in that it works for any world object associated with a UUID. Consequently, this action can be used by the agent to move to any entity that it perceives.

movetopoint(X, Y, Z) : This action allows more control over the movement of the agent's associated avatar by allowing the agent to specifically define the coordinates to which it wishes to go.

stop : This action instructs the agent's associated avatar to stop moving, and cancels all of its movements.

say(SayThis) : This action can be used to broadcast messages to all of the other logged-in avatars within that OpenSim grid. This method is not used for agent communications; instead it is used for debugging purposes where visual messages are required.

sit : This action instructs the avatar associated with the agent to sit down on the prim or ground that it is standing on, if it is not already sitting.

stand : This action instructs the avatar associated with the agent to stand up, if it is not already standing.

use(UUID) : This action is used to change the state of a Useable prim to its next available state. For example, a Useable of type Lamp, with states On and Off, whose current state is On, will transition to the state Off once this action is called by the agent.

setstate(UUID, StateNumber) : This action allows specific control over the use action, allowing the agent to set the state of a Useable prim directly by defining which element within the Useable's states array to set as its current state. Taking the previous example of the Lamp, the agent could instead set the state of the object to 0 if it wanted to turn the light on, or to 1 if it wanted to turn the light off.
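On the environment side, each of these commands arrives as an EIS action that must be translated into one of the client's action objects. The following Java sketch illustrates this unpacking, assuming the eis.iilang.Action class of EIS 0.3; the ActionRouter class and the translation targets are hypothetical.

    import eis.iilang.Action;
    import eis.iilang.Identifier;

    // Sketch of how an EIS action such as use(UUID) might be unpacked
    // before being translated into a client-side action object.
    public class ActionRouter {
        public static void route(Action action) {
            String name = action.getName();
            if ("use".equals(name)) {
                Identifier id = (Identifier) action.getParameters().get(0);
                String uuid = id.getValue();
                // ... build and queue the client-side Use action for this UUID
            } else if ("stop".equals(name)) {
                // ... build and queue a Stop action
            }
            // the remaining commands (moveto, movetopoint, sit, stand, say,
            // setstate, describe) follow the same pattern
        }
    }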

4.2 EIS & AgentFactory

EIS was integrated into Agent Factory in order to allow various CLF-compliant Agent Programming Languages (APLs) to be developed within its environment. EIS essentially allows Agent Factory to expose environment interfaces which can be used by many different APLs. Where this project is concerned, EIS enables the developed environment interface to be used by any CLF-compliant APL. In other words, any CLF APL can access OpenSim through this project's EIS environment interface.

4.2.1 Connecting

Connecting to EIS through AgentFactory is simple, and can be achieved through a simple main Java class. This class is used to associate agents with their APL files. This is done by mapping agent names to their associated APL files and passing those mappings to AgentFactory's EISDebugConfiguration class. This class also takes in the environment jar, which in this case is eisOpenSim.jar.


The following code executes two agents, agent_1 and sensor_agent, which are defined by their APL files agent.aspeak and sensor.aspeak respectively.

    import java.util.HashMap;
    import java.util.Map;

    Map<String, String> designs = new HashMap<String, String>();
    designs.put("agent_1", "agent.aspeak");
    designs.put("sensor_agent", "sensor.aspeak");
    new EISDebugConfiguration("testing", designs, "eisOpenSim.jar").configure();

4.2.2 Moving

The following code is from an AF-AgentSpeak file, which instructs the agent to move towards an avatar whose name is Niall Deasy, when it perceives that avatar. In order to control the agent a module is required, which in this case is com.agentfactory.eis.EISControlModule.

module eis -> com.agentfactory.eis.EISControlModule;

+see(?UUID, Niall_Deasy, Avatar) : true <-
    eis.perform(moveto(?UUID));

4.2.3 Describing

The following code is from an AF-AgentSpeak file, which instructs the agent to describe each entity that it perceives, and to print that entity's description once the resulting beliefs have been populated. As before, the agent is controlled through the com.agentfactory.eis.EISControlModule module.

module eis -> com.agentfactory.eis.EISControlModule;

+see(?UUID, ?name, ?type) : true <-
    eis.perform(describe(?UUID));

+described(?UUID, ?name, ?type) : description(?UUID, ?description) <-
    .println(?name + "'s description is: " + ?description);


Chapter 5: Evaluation

In order to properly evaluate the outcome of the final project system, a scenario was set up to best test the features of the system. This scenario is implemented through Agent Factory's AgentSpeak language, and involves two agents, with each agent controlling one avatar. A third avatar may also be used to view the simulation through an OpenSim viewer, such as the Hippo OpenSim viewer.

5.1 The Scenario

The scenario aims to reflect the system's overall capabilities while showing how the system can be used to implement a simple AAL scenario. It involves two physical agents, Robot and Occupant, and one non-physical agent, Sensor Agent. Occupant represents our AAL occupant, whose job involves moving about and using objects. The Robot agent is responsible for checking up on the occupant if it believes that the occupant may be in trouble. The Sensor Agent is responsible for relaying sensor data to the Robot agent, which uses that data to determine whether the occupant has stopped moving. The scenario itself consists of a single room, as this eliminates the need to implement complex movement algorithms into either agent's movement abilities. There are two sensors, one at each far end of the room, each of which has its range set so as to cover its half of the room. At one end of the room there is a Useable object, namely a television. Visually, this object remains the same in either state; however, it is capable of glowing when activated, i.e., when its state is changed from on to off, or from off to on. The Robot agent remains in the back left corner of the room when it is not checking up on the occupant. The occupant spends most of its time in front of the television, usually sitting on the ground.

Figure 5.1: Scenario



5.2 Implementation

The scene was set up to reflect that of Figure 5.1 in as much detail as possible. The Occupant agent is quite simple in nature: when started, it goes to the television and turns it on, then sits down in front of the television to watch it. While this is happening, the Sensor Agent monitors the sensors to determine when the occupant has stopped moving for over 10 seconds. In real-life AAL scenarios, this time would differ largely depending on other factors, such as whether the occupant is in bed, etc. However, for testing purposes, 10 seconds suffices. When the occupant has not moved for 10 seconds, the Sensor Agent instructs the Robot to go to the occupant, i.e., moveto(Occupant). The idea here is that the occupant reacts by telling the robot whether or not it is ok. This is achieved through the beside(?UUID, ?Name, ?Type) belief, which is populated when an object or agent is beside the agent in question. Consequently, the two possible responses are "I'm ok" and "Help me", either of which is sent from the Occupant agent to the Robot agent through Agent Factory's inter-agent communication mechanisms.
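The 10-second check at the heart of this scenario amounts to a simple inactivity timer. The following Java sketch shows the idea; the class, the polling style and the threshold constant are illustrative assumptions rather than the scenario's actual AgentSpeak implementation.

    // Sketch of the inactivity check behind the scenario: if no sensor has
    // reported movement for 10 seconds, the occupant is assumed to be still.
    class InactivityMonitor {
        private static final long THRESHOLD_MS = 10000;
        private long lastMovementMs = System.currentTimeMillis();

        // Called whenever a sensor reports the occupant moving within its range.
        void movementDetected() {
            lastMovementMs = System.currentTimeMillis();
        }

        // Polled by the Sensor Agent: true once 10 s pass without movement.
        boolean occupantInactive() {
            return System.currentTimeMillis() - lastMovementMs > THRESHOLD_MS;
        }
    }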

(a) Robot Intervention

(b) TV Interaction

Figure 5.2: OpenSim (a), (b)

5.3 Results

The scenario was run 5 times, each run producing a successful result. The sensors successfully determined when the occupant stopped moving, which allowed the Sensor Agent to instruct the Robot to check on the Occupant. The Robot successfully moved right up to the Occupant each time, allowing the Occupant to determine that the Robot was beside it and to reply each time. The Occupant was also able to turn on the television successfully each time. This scenario, although very simple, shows that an AAL scenario is possible through this system.


Chapter 6: Conclusion

Ambient Assisted Living scenarios are designed to allow the occupant to live in the comfort of their own home, while at the same time maintaining the safety and security that is usually only available from specially trained caretakers. These scenarios come in many forms, the most common of which relies on incorporating sensor-based AAL scenarios directly into the occupant's own home. Since all homes differ in so many ways, it is important to be able to test such scenarios in a cost-effective manner. This project set out to provide such a method, by allowing AAL scenarios to be implemented within a virtual environment. The final system, namely AgentFactory-OpenSim (AF-OpenSim), is designed with efficiency, platform independence, and ease of use in mind. Using OpenSim as its virtual environment, the system allows almost any conceivable scenario to be built quickly and effectively. OpenSim also allows multiple avatars to interact with the environment simultaneously, which makes it ideal for testing real-world applications. Scenarios can be built within any of the numerous OpenSim viewers, which also allow a scenario to be viewed in real time. Having successfully built sensory behavior directly into primitive objects, AF-OpenSim also allows sensors to be created and manipulated within these viewers. This mechanism also allows sensory objects to be exported and imported between OpenSim servers simply by means of the primitive object itself.

6.1 Future Work

AF-OpenSim does not take advantage of some valuable aspects of OpenSim, such as object manipulation and estate management. However, there is strong confidence that such features could easily be integrated into AF-OpenSim, as the system has been designed to allow such extensions. Many steps have been taken to make AF-OpenSim a valuable project, in particular its ability to be partially platform independent. Numerous concepts were drawn up to achieve such a system based on the available resources and technology, and it was eventually decided that the server-client based approach was the best option. Nevertheless, it is believed that a stable port of OpenMetaverse to Java is possible, as it has been attempted before, and that having such a port would eliminate the need for two sets of communication protocols, i.e., client to OpenMetaverse and OpenMetaverse to OpenSim. It is hoped that this project may lead to such a port, which would benefit not only this system, but other projects also. Although several mechanisms have been implemented in order to keep communications to a minimum, such as not sending duplicate world objects, possible improvements remain which could reduce communications even further. One such improvement could be implemented quite easily, as it relies on object comparators, which have already been implemented by this project. This improvement would involve sending only partial world data, i.e., only those objects which have changed. This would greatly improve the throughput of the communications, especially for complex and highly populated worlds. Currently, AF-OpenSim is designed primarily to function as a one-to-one system, where there


can exist one server and one client. Although it is possible to connect more than one client concurrently, by altering the agent-entity relationships within the EIS environment, it is not recommended. This is largely due to the bottleneck of the server, which can only handle one request at a time. Having said this, such a system could be achieved by allowing the clients to agree a configuration with the server, such as what port to connect to and which agents are available for use. A system such as this would allow a super server to be maintained, which could be responsible for maintaining an access point for a community of clients. It should also be possible to implement a distributed server system, which would allow a single access point to multiple servers while spreading the workload evenly across those servers. This project has taken on its goal of integrating multi-agent systems into a virtual environment with the belief that a well-designed system, built to allow further development, is the key to achieving a successful and highly capable system. It is for this reason that the choice was made to start from the beginning, instead of using the antecedent system, which proved to be less than reliable. It is also believed that systems which strive to achieve such goals are key to encouraging existing APLs to take the vital step of making the transition to FIPA-compliant standards. With attractive projects such as AF-OpenSim being developed through FIPA standards, it is believed that the wealth of such projects will prove to be an invaluable resource. It is this project's belief that it has achieved a fundamental milestone in providing such a valuable commodity to the FIPA community.
