MultiAgent Architecture and an Example - PowerPoint PPT Presentation

Transcript and Presenter's Notes


1
MultiAgent Architecture and an Example
2
  • Ana Lilia Laureano-Cruces
  • e-mail: clc@correo.azc.uam.mx
  • http://delfosis.uam.mx/ana/AnaLilia.html
  • Universidad Autónoma Metropolitana Azcapotzalco
    - MEXICO

3
Distributed Artificial Intelligence
  • Distributed problem solving
  • MultiAgent systems

4
Distributed problem solving
  • Cooperating modules or nodes
  • The knowledge about the problem and the
    development of the solution are distributed

5
MultiAgent Systems
  • Coordinated intelligent behaviour among a
    collection of autonomous agents
  • Knowledge
  • Goals
  • Skills
  • Planning
  • Reasoning about the coordination between agents

6
Contents
  • Basic ideas
  • Introduction (Control Theory and Cognitive
    Psychology)
  • MultiAgent Systems
  • An expert decision application
  • Conclusions

7
Basic Ideas
  • The intelligence of the majority of traditional
    problem-solving algorithms is incorporated by the
    designer.
  • As a result, they are predictable and do not
    allow for unexpected results.
  • This type of system is repetitive, and always
    yields the same output for a given set of input
    data.
  • Modifying such code is normally a very
    complicated task.

8
Basic Ideas
  • The resolution methods based on the association
    of agents are conceived to exhibit emergent
    behavior rather than a predictable one.
  • It is possible to create new agents to take care
    of situations that were not taken into
    consideration during the original design, without
    the need to modify existing agents.
  • The basic idea is to conceive the solution as a
    set of constraints to be satisfied rather than
    as the result of a search process.

9
Basic Ideas
  • By creating a society of agents, it is possible
    for each one of them to be in charge of a subset
    of constraints.
  • In this manner, the global problem is solved
    through a series of negotiations or an
    intervention hierarchy between agents, rather
    than through searching.
  • Each agent may represent different conflicting
    interests, which should be followed carefully.
  • If at the end of the iteration an adequate
    solution is not reached, a constraint has not
    been taken into account, and an agent that
    considers it should be introduced (a minimal
    sketch of this idea follows below).
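
A minimal sketch (not part of the original slides) of this constraint-agent idea: each agent owns a subset of constraints over a shared assignment and proposes local repairs until every agent is satisfied. The constraints, variable names and repair rules are hypothetical.

    # Hypothetical sketch: each agent owns a subset of constraints over a shared
    # assignment and proposes local repairs; iteration stops when all are satisfied.
    class ConstraintAgent:
        def __init__(self, name, constraint, repair):
            self.name = name
            self.constraint = constraint   # callable: assignment -> bool
            self.repair = repair           # callable: assignment -> assignment

        def negotiate(self, assignment):
            # If my constraint is violated, propose a locally repaired assignment.
            return assignment if self.constraint(assignment) else self.repair(assignment)

    def solve(agents, assignment, max_rounds=100):
        for _ in range(max_rounds):
            if all(a.constraint(assignment) for a in agents):
                return assignment        # every subset of constraints is satisfied
            for a in agents:
                assignment = a.negotiate(assignment)
        return None                      # a missing constraint (and agent) may be needed

    # Toy usage: x must be at least 5 (agent A) and even (agent B).
    agents = [
        ConstraintAgent("A", lambda s: s["x"] >= 5, lambda s: {**s, "x": s["x"] + 1}),
        ConstraintAgent("B", lambda s: s["x"] % 2 == 0, lambda s: {**s, "x": s["x"] + 1}),
    ]
    print(solve(agents, {"x": 0}))       # e.g. {'x': 6}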

10
The nature of AI problems
  • There are two classes of AI problems:
  • Classic problems (related to optimization).
  • Everyday problems of human beings.
  • The central idea is to find a solution that,
    without being optimal, satisfies our requirements.

11
When we think of MultiAgent Systems to solve a
problem we must take into account some ideas ...
  • In spite of its complexity, any problem can be
    decomposed into tractable parts.
  • The relationship between its parts is weak, that
    is, increasing complexity does not affect the
    interaction between them.
  • The specification of the problem and the control
    are distributed among all the agents.

12
When we think of MultiAgent Systems to solve a
problem we must take into account some ideas ...
  • An individual agent is not interested in the
    global problem it is solving.
  • The result of the interaction of agents provides
    the solution that is being sought.
  • This perspective is that of distributed AI.

13
When we think of MultiAgent Systems to solve a
problem we must take into account some ideas ...
  • What is the difference between the classical and
    agent strategies?
  • S(p1, p2, ..., pk).
  • S = p1 x p2 x ... x pk
  • S = p1 x p2

14
When we think of MultiAgent Systems to solve a
problem we must take into account some ideas ...
  • The problem is distributed.
  • Each agent represents a relevant entity for the
    problem to be solved, and has an individual
    behavior.
  • When interacting with the others and with their
    environment, each agent follows its own strategy.
  • Within this context, solutions emerge.

15
The origins
  • Control Theory vs. Cognitive Psychology
  • Control Theory
  • Cognitive Psychology
  • Classical AI Planning systems

16
Philosophical roots
  • Origins in the 18th century.
  • The foundation of modern control theory was laid
    by James Watt.
  • Mechanical feedback to control steam engines.
  • Cybernetics tried to unify the phenomena of
    control and communication observed in animals and
    machines into a common mathematical model.

17
Agents
  • This term is used to characterize very different
    kinds of systems, starting from primitive
    biological systems.
  • Biological: ants, bees.
  • Mobile robots and airplanes.
  • Systems that simulate or describe whole human
    societies or organizations, such as
  • shipping companies
  • industrial enterprises

18
A black box agent model
[Figure: black-box agent model — perception and
communication as inputs, mapped by a function f to
outputs]
19
An agent is internally described through a
function f
  • f is a function which takes perception and
    received messages as input and generates output
    in terms of performing actions and sending
    messages.
  • The mapping f itself is not directly controlled
    by an external authority: the agent is autonomous
    (a minimal sketch of this interface follows).
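
A minimal sketch (an illustrative assumption, not the slides' own code) of this black-box view: f maps a perception and the received messages to actions and outgoing messages. The percept field "obstacle" and the ping/pong message protocol are hypothetical.

    # Illustrative only: an agent as an autonomous mapping f.
    # f : (perception, messages_in) -> (actions, messages_out)
    def f(perception, messages_in):
        actions, messages_out = [], []
        if perception.get("obstacle"):        # hypothetical percept
            actions.append("turn")
        if "ping" in messages_in:             # hypothetical message protocol
            messages_out.append("pong")
        return actions, messages_out

    # The mapping is not driven by an external authority: the agent alone
    # decides its outputs from what it perceives and receives.
    print(f({"obstacle": True}, ["ping"]))    # (['turn'], ['pong'])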

20
This general view of an agent allows its
modelling through
  • Biological models
  • Knowledge-based models (this kind of model can be
    defined by mental states)
  • What makes these models drastically different is
  • the nature of the function f, which determines
    the agent's behaviour.

21
Cognitive Psychology
  • Control theory investigates the agent-world
    relationship from a machine oriented perspective.
  • The question of how goals and intentions of a
    human agent emerge and how they finally lead to
    the execution of actions that change the state of
    the world, is the subject of cognitive
    psychology, particularly of motivation theory.

22
From Motivation to Action
[Figure: flow from motivation, through the formation
of intentions and a decision, to the initiation of
action and the action itself, yielding the resulting
motivation tendency]
23
Motivational Theory
  • Motivation theory is centered around the problem
    of finding out why an agent performs a certain
    action or exhibits a certain behaviour. This
    covers the transition from motivation to action,
    in which two subprocesses that define the two
    basic directions of motivation theory are
    involved.

24
  • Formation of intentions: how intentions are
    generated from a set of latent motivation
    tendencies.
  • Volition and action: how the actions of a person
    emerge from his or her intentions.
  • The investigation of the reasons, motivation,
    activation, control and duration of human
    behavior goes back at least to Plato and
    Aristotle. They defined it along three categories:
    cognition, emotion and motivation.

25
  • The main determinant of motivation was situated
    in the human personality: a human being is a
    rational creature with free will.
  • In AI, the human needs and goals have been
    structured in a hierarchical way.

26
  • Darwin shifted the focus of motivation research
    from a person-centered to a situation-centered
    perspective.
  • He established a duality between the human and
    animal behaviors.
  • As a consequence, it was found that many of the
    models corresponding to animal behavior are also
    valid for humans.

27
  • Another consequence of Darwin's theory is that
    human intelligence was viewed as a product of
    evolution rather than a fundamental quality
    given to humans exclusively by some higher
    authority.
  • Thus, intelligence and learning became a subject
    of systematic and empirical research.

28
  • In the case of AI, hybrid architectures have been
    developed to combine both paradigms
    (person-centered and situation-centered).
  • Dynamic theory of action (DTA). (Kurt Lewin
    1890-1947).

29
Dynamic Theory of Action
  • It is a model explaining the dynamics of change
    of motivation over time.
  • The model starts from a set of behavioral
    tendencies which can be compared to the possible
    goals of a person.

30
Dynamic Theory of Action
  • For every point in time t and for each behavioral
    tendency b, the theory determines a resultant
    action tendency.
  • That is, how strong b is at time t.
  • The maximal tendency is called the dominant
    action tendency at time t.

31
  • The inputs for the DTA are an instant t in the
    stream of behavior and an action tendency, which
    is given by
  • a motive (person-centered)
  • an incentive (situation-centered)
  • The dynamics of the DTA is described by means of
    four basic forces:
  • instigator
  • consummator
  • inhibitor
  • resistant force

32
  • The output of the DTA is the resulting action
    tendency for b at time t, which is computed as a
    function of the four forces listed above.
  • This work is related to Maes' theory (agents can
    have goals), to the BDI architecture, and to the
    selection control of the mechanism that exhibits
    the pedagogical agents' behaviors.

33
From the point of view of a computer scientist ...
  • How can motives and situations be represented and
    recognized?
  • How can the influence of motives and situations
    on the basic forces In, Co, Ini, and Re be put
    into a computational model?
  • Can we reduce an agent to a finite set of
    potential behavioral tendencies?

34
Classical AI planning systems
  • Planning systems are seen as
  • a world state,
  • a goal state, and
  • a set of operators.
  • Planning can be viewed as a search in a state
    space, and the execution of a plan results in
    some goal of the agent being achieved (a toy
    sketch of such a planner follows).
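
A toy sketch (under assumed STRIPS-like operators that are not in the slides) of planning as state-space search: states are sets of propositions, each operator has preconditions, an add list and a delete list, and a breadth-first search returns the operator sequence that reaches the goal.

    from collections import deque

    # Toy operators, each as (preconditions, add list, delete list); illustrative only.
    OPS = {
        "pick_up":  ({"on_table", "hand_empty"}, {"holding"}, {"on_table", "hand_empty"}),
        "put_down": ({"holding"}, {"on_table", "hand_empty"}, {"holding"}),
    }

    def plan(initial, goal):
        frontier, seen = deque([(frozenset(initial), [])]), set()
        while frontier:
            state, steps = frontier.popleft()
            if goal <= state:                     # every goal proposition holds
                return steps
            for name, (pre, add, delete) in OPS.items():
                if pre <= state:                  # operator is applicable
                    nxt = frozenset((state - delete) | add)
                    if nxt not in seen:
                        seen.add(nxt)
                        frontier.append((nxt, steps + [name]))
        return None

    print(plan({"on_table", "hand_empty"}, {"holding"}))   # ['pick_up']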

35
The analogy with agent theory
  • The agent has a symbolic representation of the
    world.
  • The state of the world is described by a set of
    propositions that are valid in the world.
  • The effects of the agent's actions on the
    environment are also described by a set of
    operators and the resulting world state.

36
Reactive-Agents Architectures
  • The design of these architectures is strongly
    influenced by behavioral psychology.
  • Brooks, Chapman and Agre, Kaelbling, Maes,
    Ferber, Arkin
  • These kinds of agents are known as
  • behaviour-based,
  • situated, or
  • reactive

37
Reactive Agents
  • The action-selection dynamics for this type of
    system emerge in response to two basic aspects:
  • the conditions of the environment
  • the internal objectives of each agent
  • Their main characteristics are
  • dynamic interaction with the environment
  • internal mechanisms that allow working with
    limited resources and incomplete information

38
  • The design of reactive architectures is partially
    guided by Simon's hypothesis:
  • the complexity of an agent's behavior can be a
    reflection of its operating environment rather
    than of a complex design.

39
  • Brooks holds that the world itself is the best
    model for reasoning,
  • ... and that reactive systems should be built on
    perception and action (the essence of
    intelligence).
  • Once the essences of being and reacting are
    available, the solutions to the problems of
    behavior, language, expert knowledge and its
    application, and reasoning become simple.

40
Functionality Vs. Behavior
  • From a functional perspective, classical AI views
    an intelligent system as a set of independent
    information processors.
  • The subsumption architecture provides an
    activity-oriented decomposition: in this way a
    set of activity (behavior) producers can be
    identified.
  • The behaviors work in parallel, and are tied to
    the real world through perceptions and actions
    (a minimal sketch follows).
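
A minimal sketch (with hypothetical layers and percepts, not Brooks' actual code) of a subsumption-style decomposition: behavior producers are checked in priority order, and a higher layer subsumes the layers below it when it fires.

    # Illustrative subsumption-style control: each behavior maps a perception to
    # an action or None; a higher-priority behavior suppresses those below it.
    def avoid(p):     # higher layer: safety
        return "turn_away" if p.get("obstacle_near") else None

    def wander(p):    # lower layer: default activity
        return "move_forward"

    LAYERS = [avoid, wander]        # ordered from highest to lowest priority

    def act(perception):
        for behavior in LAYERS:
            action = behavior(perception)
            if action is not None:
                return action       # this layer subsumes the ones below

    print(act({"obstacle_near": True}))   # turn_away
    print(act({}))                        # move_forward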

41
  • An instigator is a force that strengthens the
    action tendency for b at time t.
  • A consummator weakens the instigating force for b
    over time. This force is only active while the
    behavioral tendency b is active.
  • An inhibitor is a force that inhibits the action
    tendency for b at time t.
  • A resistant force weakens the inhibitory force
    over time (a numerical sketch of these dynamics
    follows).
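
One possible discrete-time reading of these four forces for a single behavioral tendency b, offered only as a sketch: the update rule, the coefficients and the idea of tracking a separate inhibitory tendency are assumptions for illustration, not the theory's exact equations.

    # Hypothetical discrete-time sketch of the four forces acting on one
    # behavioral tendency b; coefficients and update rule are illustrative.
    def simulate(steps=5, instigation=1.0, consummation=0.3,
                 inhibition=0.4, resistance=0.5):
        T = 0.0   # action tendency for b
        N = 0.0   # inhibitory tendency acting against b
        for t in range(steps):
            active = (T - N) > 0                  # b is expressed only if it dominates
            T += instigation - (consummation * T if active else 0.0)
            N += inhibition - resistance * N      # the resistant force weakens inhibition
            print(f"t={t}  resulting tendency={T - N:.2f}")

    simulate()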

42
Present situation of Geothermics in Mexico
  • Up to the present, geothermal resources in Mexico
    are utilized to produce electrical energy.
  • Some geothermal resources are utilized for
    different purposes:
  • Tourist
  • Therapeutic
  • Use of the separated waters or the waste heat for
    industrial purposes in Mexican geothermal fields.

43
  • However, exploration and development activities
    are focused on the use of geothermal resources.
  • The universities and the CFE (Comisión Federal de
    Electricidad)

44
  • The regional geothermal assessment of Mexico was
    completed in 1987,
  • when 92% of the whole territory had been covered.
  • The remaining 8% has no geothermal potential
    because of its tectonically stable location.

45
By 1987 ...
  • 545 thermal localities had been identified, which
    grouped around 1,380 individual hot points,
    including
  • Hot springs
  • Shallow hot-water wells
  • Hot soils
  • Fumaroles, etc.

46
  • By 1990, 42 geothermal zones had been located.
  • In those zones, pre-feasibility studies
    (geology, fluid geochemistry and geophysics) had
    been conducted at varying stages.

47
  • From 1990 to 1994, detailed geological studies
    were made in the following geothermal zones:
  • Las Tres Vírgenes (Baja California Sur)
  • Hydrology
  • Tectonics
  • Stratigraphy
  • Volcanology

48
  • El Ceboruco-San Pedro (Nayarit)
  • Hydrology
  • Tectonics
  • Volcanology

49
Geothermal fields and geothermal zones under
exploration in Mexico
50
Drilling Activities
  • Currently there are 68 geothermal wells,
    representing 104,859 drilled meters.
  • E.g., in the Los Humeros geothermal field two
    deep wells were drilled.
  • Up to the present, 356 deep wells have been
    drilled in Mexico for the electrical use of
    geothermal resources. These wells add up to a
    total of 715,090 drilled meters.

51
  • Currently Mexico has an installed geothermal
    electric capacity of 753 MWe.
  • It represents 7% of the overall electricity
    production of the country.

52
An example
  • One of the objectives of artificial intelligence
    is the development of systems that ease or
    increase the level of comfort in the daily life
    of humans. Such is the case for tasks that
    require permanent focus on the input data of
    convergent methods, or for systems that help in
    the decision-making involved in costly
    processes.

53
An example
  • In this example we propose a design of the
    expert's decision-making process through the use
    of a cognitive model, and of fuzzy sets to model
    the agent's reactive-deliberative process (a
    small fuzzy-set sketch follows).
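
A small sketch of how a fuzzy set could enter such a reactive-deliberative decision; the linguistic term ("the temperature gradient is small"), the triangular membership function and the rule are illustrative assumptions, not the expert's actual model.

    # Illustrative fuzzy-set sketch, not the expert's actual model.
    def triangular(x, a, b, c):
        """Membership degree of x in the triangular fuzzy set defined by (a, b, c)."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x < b else (c - x) / (c - b)

    # Hypothetical linguistic term: the temperature gradient (degC/hour) is "small".
    def gradient_small(grad):
        return triangular(grad, 0.0, 0.5, 2.0)

    # Hypothetical rule: IF the gradient is small THEN the temperature has
    # stabilized, to the degree given by the membership value.
    for g in (0.2, 1.0, 3.0):
        print(f"gradient={g}  stabilized_degree={gradient_small(g):.2f}")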

54
  • The software system helps the human expert in the
    estimation of static formation temperatures.
  • Furthermore, we will present an example based on
    a behavior developed from an expert in the field
    of geothermal sciences.

55
  • An attempt to estimate formation temperatures
    from logged temperatures was solved with this
    methodology, based on a reactive decision model.

56
Adaptive Behavior
  • Autonomy is also known as adaptive behavior: the
    capacity to adjust oneself to the conditions of
    the environment.
  • It is the essence of intelligence, and it is the
    animal's ability to cope continuously with a
    complex, dynamic and unpredictable world.

57
  • This ability is seen in terms of the flexibility
    to adjust the behavioral repertoire to the
    contingencies at any time, as a product of the
    interaction with the environment.

58
When we use agents to simulate an adaptive
behavior
  • Agents can be developed from two perspectives:
  • knowledge acquisition and automatic learning
  • the domain expertise is codified from a human
    expert
  • In our case study we design the adaptive
    behavior taking into account the human expertise

59
The design of the representation of a dynamic
environment
  • can follow two approaches:
  • traditional AI considered that the success of an
    intelligent system is closely related to the
    degree to which the problem domain can be
    treated as a microworld abstraction (symbolic
    processing approaches) that is, at the same time,
    disconnected from the real world.

60
  • There exists another group whose design is
    usually bottom-up; it is an ethological design
    and bears in mind the fundamental steps of animal
    behavior (subsymbolic). These approaches also
    emphasize symbol grounding, where various
    behavior modules of an agent interact with the
    environment to produce complex behavior.

61
  • However, this group concedes that achieving
    human-level artificial intelligence might require
    the integration of the two approaches.
  • In our case study, referring to the control of a
    simulator, the behavior agent has to be connected
    to the simulator, which represents a dynamic
    environment, modelling the domain expertise into
    the adaptive process. In this case it represents
    a symbol-grounding representation.

62
Agents
  • Agents continuously perform three functions:
  • perception of the dynamic conditions of the
    environment
  • actions that can change the environment's
    conditions
  • reasoning for interpreting perceptions, solving
    problems, making inferences and taking actions
63
Agents
  • Conceptually, perception provides input data for
    the reasoning process, and the reasoning process
    guides the action.
  • In some cases the perception can guide the action
    directly (a sketch of this loop follows).
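
A minimal sketch of this perceive-reason-act cycle, including the case in which perception guides the action directly (a reflex path); the percepts, thresholds and action names are hypothetical.

    # Illustrative perceive-reason-act loop; percepts and rules are hypothetical.
    def perceive(environment):
        return {"alarm": environment.get("alarm", False),
                "reading": environment.get("reading", 0.0)}

    def reason(percept):
        # Deliberation: interpret the perception and choose an action.
        return "log_reading" if percept["reading"] < 10.0 else "raise_flag"

    def agent_step(environment):
        percept = perceive(environment)
        if percept["alarm"]:
            return "shut_down"      # perception guides the action directly
        return reason(percept)      # otherwise reasoning guides the action

    print(agent_step({"alarm": True}))      # shut_down
    print(agent_step({"reading": 12.5}))    # raise_flag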

64
  • One of the problems in the design of these agents
    is to establish a decision-making process over
    subjective domains.
  • Natural environments exhibit a great deal of
    structure that a properly designed agent can
    depend upon and even actively exploit.
  • Strictly speaking, what is required to achieve an
    adaptive behavior is a structural congruence
    between the internal dynamic mechanisms of an
    agent and the dynamics of the external
    environment.

65
  • As long as this compatibility exists, both the
    environment and the unit act as mutual sources of
    disturbance, release and alteration of conditions.
  • In this case there are two non-autonomous
    dynamical systems:

66
  • The agent (the human expert) and the
    environment (the simulator). The design of these
    systems can be seen as a control problem.

67
A control problem
  • has two sub-problems:
  • state estimation, consisting of the evaluation
    of the environment (perception) and the
    controller's input
  • regulation, consisting of finding an adequate
    response to the environment's state (action)

68
The controller consists of
  • a function (f) that estimates the environment's
    state
  • a function that regulates the environment's
    response (a minimal sketch follows)
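
A minimal sketch of these two parts of the controller: a function that estimates the environment's state from a noisy measurement, and a function that regulates the response. The exponential smoothing, the proportional rule and all parameters are assumptions for illustration.

    # Illustrative controller split into state estimation and regulation;
    # the smoothing estimator and the proportional rule are assumptions.
    def make_controller(setpoint, alpha=0.5, gain=0.8):
        estimate = {"x": 0.0}

        def estimate_state(measurement):
            # State estimation: smooth the noisy perception of the environment.
            estimate["x"] = alpha * measurement + (1 - alpha) * estimate["x"]
            return estimate["x"]

        def regulate(state):
            # Regulation: an action proportional to the deviation from the goal.
            return gain * (setpoint - state)

        def controller(measurement):
            return regulate(estimate_state(measurement))

        return controller

    control = make_controller(setpoint=100.0)
    for m in (90.0, 95.0, 99.0):
        print(f"measurement={m}  action={control(m):.2f}")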

69
From the perspective of AI
  • the agent has the ability to recognize certain
    classes of situations, which give rise to
    objectives, and thus to develop actions that lead
    to the achievement of these objectives

70
  • Most environments are too complex to be described
    by differential equations.
  • The behavior of a shipping company or an airport,
    or cognitive processes involving expertise, needs
    some kind of symbolic model.
  • Classical control theory cannot deal with
    incomplete information regarding the environment
    in a successful way.

71
  • In the case of agents, heuristics are used.
  • Their use implies a basic difference, because the
    function f can be implemented through
    differential equations or through symbolic
    reasoning.

72
  • A model comprising an agent and its environment
    implies the existence of two dynamical systems
    with convergent dynamics; that is, the values of
    their state variables do not diverge to infinity,
    but eventually converge to a limit set (a tiny
    worked example follows).
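
A tiny worked example of convergent dynamics under an assumed linear map (the map and the numbers are only illustrative): the state variable does not diverge, it approaches the limit set consisting of the fixed point x* = 2.

    # Illustrative convergent dynamics: x_{t+1} = 0.5 * x_t + 1 is a contraction
    # (|0.5| < 1), so every trajectory converges to the fixed point x* = 2.
    x = 10.0
    for t in range(8):
        x = 0.5 * x + 1.0
    print(round(x, 3))   # 2.031, already close to the limit set {2}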

73
  • Figure 1 shows the dynamical systems and the
    variables of our case study. The WELLBORE DATA is
    included in the symbolic model, and these
    variables make the human expert (the autonomous
    agent) reason. In this example, some of the input
    variables used by the human expert remain
    constant (the mass flow rate during loss of
    circulation, and the porosity).

74
Dynamic systems that constitute the environment
and the autonomous agent
75
Diagram of the data used to obtain the existing
temperatures
76
Mental model of the expert's decision
77
Dependency of agents
78
Logged (TReg) and simulated (TSim) temperatures
for the test well. The resulting formation
temperatures (TMod) are also shown
79
Conclusions
  • Due to its usefulness and broad applicability,
    many areas of computer science have rapidly
    adopted this simple and powerful concept.
  • In AI, the introduction of agents is partially
    due to the difficulties found when we try to
    solve problems considering the features of the
    external world, or when the agent is involved in
    a problem-solving process.

80
Conclusions
  • The solutions to address these problems can be
    limited and inflexible if there is not a good
    perception of the external world features.
  • As a response to this difficulty, the agents
    receive inputs from the environment through
    devices that allow them to perceive the world.
  • In response to these inputs, they develop actions
    causing effects on the environment.

81
Conclusions
  • In our example we established two agents:
  • an autonomous one
  • a non-autonomous one
  • This implies a distributed solution to the
    problem, which consists of finding the existing
    temperatures.

82
Conclusions
  • These characteristics provide the system with the
    properties of robustness and answer quality.
  • The basic reactive-behavior design of the agent
    was carried out through situated activity, that
    is, focused on the agent's actions and,
    therefore, on its basic behaviors according to
    the situation, moment and environment.

83
Conclusions
  • It is fundamental to find the specific
    perceptions that will cause a certain action in
    the present environment.
  • To achieve this, a cognitive model that
    represents the expert's decision was developed.
  • This model allows the consideration of the
    different situations that can occur in the
    environment, so as to achieve an emergent
    response of the system.

84
Conclusions
  • The behavior has been formalized taking into
    account all the control variables of the process:
  • a) goal type,
  • b) knowledge type, and
  • c) perception and action of each agent.
  • This formalization provides an interaction
    between agents with a well-defined interface that
    guarantees a congruent behavior of the multi-agent
    system (environment-agents or agents-agents).

85
Conclusions
  • The temperature behavior in the geothermal well
    has been successfully modeled, since the
    difference between simulated and logged
    temperatures is within the tolerance of human
    perception.

86
Conclusions
  • Finally, this work is an example of a design
    technique proposed for the development of
    multi-agent systems with reactive
    characteristics. It shows the simplicity (with
    respect to previous works) that has been achieved
    through the development of software that
    controls a dynamic process involving many
    variables.