Theoretical Introduction: From Finite-State Machines and Logic Programming to an Agent

The main definition of our BDI (Belief-Desire-Intention) framework follows Russell & Norvig, Artificial Intelligence: A Modern Approach, with the structure of goal-based agents:

An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through effectors (Chapter 2.1, page 31)

We further follow Michael Wooldridge, An Introduction to MultiAgent Systems (chapter 2.1, page 26), which defines:

  • Reactivity: Intelligent agents are able to perceive their environment, and respond in a timely fashion to changes that occur in it in order to satisfy their design objectives
  • Proactiveness: Intelligent agents are able to exhibit goal-directed behaviour by taking the initiative in order to satisfy their design objectives
  • Social ability: Intelligent agents are capable of interacting with other agents (and possibly humans) in order to satisfy their design objectives

The technical execution structure of the agent uses the concepts of the PRS (Procedural Reasoning System) and the dMARS (Distributed Multi-Agent Reasoning System) architecture, so we define the agent as a finite-state machine in a logic programming language with the following definition:

  • the initial state is optionally defined by the initial goal
  • a state is the agent's current set of beliefs while no cycle is running
  • a transition is the execution of a plan (with the instantiation of a goal) and is limited by the plan condition
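The definition above can be sketched in code: a state is the current belief set, and a transition fires when a plan's trigger matches an active goal and the plan condition holds. This is a minimal Python sketch of the idea, not the framework's implementation; all names (Plan, cycle) are hypothetical.

```python
from dataclasses import dataclass
from typing import Callable, FrozenSet, List

Beliefs = FrozenSet[str]  # a state of the finite-state machine: a set of beliefs

@dataclass
class Plan:
    trigger: str                             # goal that instantiates the plan
    condition: Callable[[Beliefs], bool]     # plan condition limiting the transition
    body: Callable[[Beliefs], Beliefs]       # plan execution: beliefs -> new beliefs

def cycle(beliefs: Beliefs, goals: List[str], plans: List[Plan]) -> Beliefs:
    """One transition of the state machine: execute every plan whose trigger
    matches an active goal and whose condition holds in the current state."""
    for goal in goals:
        for plan in plans:
            if plan.trigger == goal and plan.condition(beliefs):
                beliefs = plan.body(beliefs)
    return beliefs

# Usage: a plan that only fires while the belief "dark" holds
plans = [Plan("!light",
              lambda b: "dark" in b,
              lambda b: frozenset((b - {"dark"}) | {"light"}))]
print(cycle(frozenset({"dark"}), ["!light"], plans))
```

Because several goals can be active at once, a single call to `cycle` may execute several plans, which matches the parallel-transition behaviour described next.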

In general, however, with parallel execution of plans there can be many active transitions in one cycle. We also follow the definition of M. Wooldridge, stating that

a multi-agent system is inherently multithreaded, in that each agent is assumed to have at least one thread of control (chapter 2.2, page 30)

Basic Behaviour

This basic example shows the main functionality of the structure. We define three plans without a condition and an initial goal. The initial goal (main) calls the two other plans first and second within the next cycle. The first plan calls itself in the following cycles (a loop structure), and the second plan calls the initial goal plan again. The first plan is executed once in each cycle, because the trigger !first and the plan +!first match. Note: the plans first and main (or second) run in parallel.

+!main <- !first; !second.
+!first <- !first.
+!second <- !main.
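The goal flow of these three plans can be traced with a small simulation, a hypothetical Python sketch rather than the framework itself: goals raised in one cycle trigger their plans in the next cycle, and matching plans run in parallel.

```python
# Plans from the example: each trigger maps to the goals its plan raises.
plans = {
    "!main":   ["!first", "!second"],
    "!first":  ["!first"],
    "!second": ["!main"],
}

def run(initial_goal: str, cycles: int) -> list:
    """Trace which goals are active in each cycle; goals raised in one
    cycle instantiate their plans in the following cycle."""
    active = [initial_goal]
    trace = []
    for _ in range(cycles):
        trace.append(sorted(active))
        active = [raised for goal in active for raised in plans[goal]]
    return trace

for i, goals in enumerate(run("!main", 4)):
    print(f"cycle {i}: {goals}")
```

The trace shows the infinite alternation described below: after the initial cycle, !first re-triggers itself in every cycle while !second and !main keep re-activating each other.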

The state machine of this agent, which executes empty plans, is shown in the following picture.


The picture shows the static model of the agent: the states represent sets of beliefs which are created during runtime, and the transitions are the instantiation of the goals and the execution of the plans. The initial state is defined by the initial goal.
Based on this static model, the runtime model shows the execution structure of the state machine. The animation shows the continuous execution of the agent on each cycle. In this case the agent runs infinitely: it switches between the main state and the first and second states, where the latter two run in parallel (animated finite-state machine).

Action Behaviour

Actions are one of the most helpful structures in agent programming. A definition of an action is:

An action is a function $f : \mathbb{X}^n \rightarrow \mathbb{B}$, where $\mathbb{X}$ is any input data type and $\mathbb{B}$ is the binary set $\{\text{true}, \text{false}\}$, which is executed independently and directly within the current agent / plan context. An action can immediately change the environment or the internal structure of the agent.
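This definition can be read as: an action takes any inputs and returns a boolean that tells the plan whether it succeeded. A hedged Python sketch of the idea, loosely modelled on the generic/print action of the example below; the function and variable names here are illustrative only, not the framework's API.

```python
def print_action(agent_beliefs: set, *args) -> bool:
    """An action f : X^n -> B: prints its arguments and reports success.
    An action may also change the agent's internal structure immediately,
    here modelled as adding a belief to the agent's belief set."""
    print(*args)
    agent_beliefs.add("printed")   # immediate change of the agent's internal state
    return True                    # true = the action succeeded, the plan continues

beliefs = set()
ok = print_action(beliefs, "execute main-goal")
```

The boolean result is what lets the plan treat a failed action as a failed transition.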

From a technical point of view, an action is a method which is called inside the current plan. The method is defined by the IAction interface. All actions [1] implement this interface, which uses the IExecution interface for all executable structures.
Based on the first illustrated finite-state machine, we will show the action structure. In short: actions are executed on the transitions. In the first state machine the transitions contain only the achievement goals. An achievement goal is also an action, which executes a plan.


+!main <- generic/print("execute main-goal"); !first; generic/print("achievement-goal in main: first"); !second; generic/print("achievement-goal in main: second").

+!first <- generic/print("execute first-goal"); !first; generic/print("achievement-goal in first: first").

+!second <- generic/print("execute second-goal"); !main; generic/print("achievement-goal in second: main").

  1. see the IAction interface for a detailed description