Transcript and Presenter's Notes

Title: Knowledge Acquisition


1
CS 785, Fall 2001
Knowledge Acquisition and Problem Solving
Introduction
George Tecuci, tecuci@cs.gmu.edu, http://lalab.gmu.edu/
Learning Agents Laboratory, Department of Computer Science, George Mason University
2
Overview
1. Course objective and class introduction
2. Artificial Intelligence and intelligent agents
3. Sample intelligent agent presentation and demo
4. Agent development: Knowledge acquisition and
problem solving
5. Overview of the course
3
1. Course Objective
Present the principles and major methods of
knowledge acquisition for the development of
knowledge bases and problem-solving agents. Major
topics include an overview of knowledge
engineering, general problem-solving methods,
ontology design and development, modeling of the
problem-solving process, learning strategies, and
rule learning and refinement. The course will
emphasize the most recent advances in this area,
such as knowledge reuse, agent teaching and
learning, knowledge acquisition directly from
subject matter experts, and mixed-initiative
knowledge base development. It will also discuss
open issues and frontier research. The students
will acquire hands-on experience with a complex,
state-of-the-art methodology and tool for the
end-to-end development of knowledge-based
problem-solving agents.
4
2. Artificial Intelligence and intelligent agents
What is Artificial Intelligence?
What is an intelligent agent?
Characteristic features of intelligent agents
Sample tasks for intelligent agents
5
What is Artificial Intelligence?
Artificial Intelligence is the science and
engineering concerned with the theory and
practice of developing systems that exhibit the
characteristics we associate with intelligence in
human behavior: perception, natural language
processing, reasoning, planning and problem
solving, learning and adaptation, etc.
6
Central goals of Artificial Intelligence
Understand the principles that make intelligence
possible (in humans, animals, and artificial
agents)
Develop intelligent machines or agents (whether
or not they operate as humans do)
Formalize knowledge and mechanize reasoning
in all areas of human endeavor
Make working with computers as easy as
working with people
Develop human-machine systems that exploit the
complementarity of human and automated
reasoning
7
What is an intelligent agent?
  • An intelligent agent is a system that
  • perceives its environment (which may be the
    physical world, a user via a graphical user
    interface, a collection of other agents, the
    Internet, or other complex environment)
  • reasons to interpret perceptions, draw
    inferences, solve problems, and determine
    actions; and
  • acts upon that environment to realize a set of
    goals or tasks for which it was designed.

[Diagram: the intelligent agent receives input from the user/environment through sensors and acts upon it through effectors (output).]
8
What is an intelligent agent? (cont.)
Humans, with multiple, conflicting drives,
multiple senses, multiple possible actions, and
complex, sophisticated control structures, are at
the highest end of being an agent. At the low
end of being an agent is a thermostat. It
continuously senses the room temperature,
starting or stopping the heating system whenever
the current temperature is out of a pre-defined
range. The intelligent agents we are concerned
with are in between: clearly not as capable as
humans, but significantly more capable than a
thermostat.
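The thermostat's sense-act cycle is simple enough to sketch directly; the class name and temperature thresholds below are illustrative, not part of the presentation.

```python
# A minimal "low end" agent, as the slide describes: a thermostat that
# senses the temperature and switches the heater when it leaves a range.

class Thermostat:
    def __init__(self, low: float, high: float):
        self.low, self.high = low, high
        self.heating = False

    def step(self, temperature: float) -> bool:
        """Sense the temperature and act: return the heater's on/off state."""
        if temperature < self.low:
            self.heating = True       # too cold: start heating
        elif temperature > self.high:
            self.heating = False      # too warm: stop heating
        return self.heating           # inside the range: keep current state

t = Thermostat(low=18.0, high=22.0)
print(t.step(15.0))  # True  (starts heating)
print(t.step(20.0))  # True  (stays on inside the range)
print(t.step(23.0))  # False (stops heating)
```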
9
What is an intelligent agent? (cont.)
An intelligent agent interacts with a human or
some other agents via some kind of
agent-communication language and may not blindly
obey commands, but may have the ability to modify
requests, ask clarification questions, or even
refuse to satisfy certain requests. It can
accept high-level requests indicating what the
user wants and can decide how to satisfy each
request with some degree of independence or
autonomy, exhibiting goal-directed behavior and
dynamically choosing which actions to take, and
in what sequence.
10
What an intelligent agent can do
  • An intelligent agent can
  • collaborate with its user to improve the
    accomplishment of his or her tasks
  • carry out tasks on the user's behalf and, in so
    doing, employ some knowledge of the user's goals
    or desires
  • monitor events or procedures for the user
  • advise the user on how to perform a task
  • train or teach the user
  • help different users collaborate.

11
Characteristic features of intelligent agents
Knowledge representation and reasoning
Transparency and explanations
Ability to communicate
Use of huge amounts of knowledge
Exploration of huge search spaces
Use of heuristics
Reasoning with incomplete or conflicting data
Ability to learn and adapt
12
Knowledge representation and reasoning
An intelligent agent contains an internal
representation of its external application
domain, where relevant elements of the
application domain (objects, relations, classes,
laws, actions) are represented as symbolic
expressions. This mapping allows the agent to
reason about the application domain by performing
reasoning processes in the domain model, and
transferring the conclusions back into the
application domain.
[Diagram: the ontology and rules form a model of the domain that represents the application domain.]

If an object is on top of another object that is
itself on top of a third object, then the first
object is on top of the third object.
RULE: ∀x,y,z ∈ OBJECT, (ON x y) ∧ (ON y z) → (ON x z)
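The transitivity rule on this slide can be sketched as forward chaining over ON-facts; encoding facts as ("ON", x, y) triples is an assumption made for illustration.

```python
# Forward chaining with the slide's rule:
# from (ON x y) and (ON y z), derive (ON x z), until no new facts appear.

def transitive_closure(facts: set) -> set:
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        new = {("ON", x, z)
               for (_, x, y1) in derived
               for (_, y2, z) in derived
               if y1 == y2}
        if not new <= derived:       # any genuinely new conclusions?
            derived |= new
            changed = True
    return derived

facts = {("ON", "book", "box"), ("ON", "box", "table")}
closure = transitive_closure(facts)
print(("ON", "book", "table") in closure)  # True
```

This is the mapping the slide describes: reasoning happens in the domain model (the triples), and the conclusion transfers back to the application domain.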
13
Separation of knowledge from control
The problem solving engine implements a general
method of interpreting the input problem based on
the knowledge from the knowledge base.

[Diagram: the intelligent agent comprises a problem solving engine and a knowledge base (ontology; rules/cases/methods); input arrives from the user/environment through sensors and output is produced through effectors.]

The knowledge base contains data structures that
represent the objects from the application
domain, general laws governing them, actions that
can be performed with them, etc.
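The separation of knowledge from control on this slide can be sketched as follows; the class names and the encoding of rules as (condition, action) function pairs are illustrative assumptions, not the presentation's actual architecture.

```python
# A generic engine that consults a knowledge base it does not hard-code:
# swapping the KB changes the agent's domain without touching the engine.

class KnowledgeBase:
    def __init__(self, rules):
        self.rules = rules            # list of (condition, action) pairs

class ProblemSolvingEngine:
    """General, domain-independent control: try each rule against the input."""
    def solve(self, kb: KnowledgeBase, problem):
        for condition, action in kb.rules:
            if condition(problem):
                return action(problem)
        return None

# A toy medical KB; a different KB would give a different agent.
kb = KnowledgeBase(rules=[
    (lambda p: p["temp"] > 38.0, lambda p: "suspect fever"),
    (lambda p: True,             lambda p: "no finding"),
])
engine = ProblemSolvingEngine()
print(engine.solve(kb, {"temp": 39.2}))  # suspect fever
```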
14
Transparency and explanations
  • The knowledge possessed by the agent and its
    reasoning processes should be understandable to
    humans.
  • The agent should have the ability to give
    explanations of its behavior, what decisions it
    is making and why.
  • Without transparency it would be very difficult
    to accept, for instance, a medical diagnosis
    performed by an intelligent agent.
  • The need for transparency shows that the main
    goal of artificial intelligence is to enhance
    human capabilities and not to replace human
    activity.

15
Ability to communicate
  • An agent should be able to communicate with its
    users or other agents.
  • The communication language should be as natural
    to the human users as possible. Ideally, it
    should be free natural language.
  • The problem of natural language understanding and
    generation is very difficult due to the ambiguity
    of words and sentences, the paraphrases, ellipses
    and references which are used in human
    communication.

16
Ambiguity of natural language
Words and sentences have multiple meanings
17
Other difficulties with natural language
processing
18
Use of huge amounts of knowledge
  • In order to solve "real-world" problems, an
    intelligent agent needs a huge amount of domain
    knowledge in its memory (knowledge base).
  • Example of human-agent dialog:
  • User: The toolbox is locked.
  • Agent: The key is in the drawer.
  • In order to understand such sentences and to
    respond adequately, the agent needs to have a lot
    of knowledge about the user, including the goals
    the user might want to achieve.

19
Use of huge amounts of knowledge (example)
User: The toolbox is locked.
Agent (reasoning): Why is he telling me this? I
already know that the box is locked. I know he
needs to get in. Perhaps he is telling me because
he believes I can help. To get in requires a key.
He knows it and he knows I know it. The key is in
the drawer. If he knew this, he would not tell me
that the toolbox is locked. So he must not
realize it. To make him know it, I can tell him.
I am supposed to help him.
Agent: The key is in the drawer.
20
Exploration of huge search spaces
An intelligent agent usually needs to search huge
spaces in order to find solutions to problems.
Example 1: A search agent on the Internet.
Example 2: A checkers-playing agent.
21
Exploration of huge search spaces illustration
Determining the best move with minimax
[Diagram: a minimax game tree; backed-up values identify the move leading to a win.]
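Minimax itself fits in a few lines; the tiny game tree below is invented for illustration, with leaf values as payoffs for the maximizing player.

```python
# Minimax: back up leaf values through alternating max and min levels.

def minimax(node, maximizing: bool):
    if isinstance(node, (int, float)):   # terminal position: its payoff
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# Two moves for us, each answered by two opponent replies.
tree = [[3, 5], [2, 9]]
print(minimax(tree, maximizing=True))  # 3
```

The first move is preferred because the opponent minimizes within each branch (3 vs. 2), and we pick the branch with the larger guaranteed value.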
22
Exploration of huge search spaces illustration
The tree of possibilities is far too large to be
fully generated and searched backward from the
terminal nodes for an optimal move.
Size of the search space
A complete game tree for checkers has been
estimated as having 10^40 nonterminal nodes. If
one assumes that these nodes could be generated
at a rate of 3 billion per second, the generation
of the whole tree would still require around 10^21
centuries! Checkers is far simpler than chess
which, in turn, is generally far simpler than
business competitions or military games.
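The slide's estimate checks out with a few lines of arithmetic; the only added assumption is the seconds-per-century conversion.

```python
# Checking the slide's claim: 10^40 nodes at 3 billion nodes/second.
nodes = 10**40
rate = 3e9                                       # nodes per second
seconds = nodes / rate                           # ~3.3e30 seconds
seconds_per_century = 100 * 365.25 * 24 * 3600   # ~3.16e9 seconds
centuries = seconds / seconds_per_century
print(f"{centuries:.1e}")                        # about 1.1e+21 centuries
```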
23
Use of heuristics
Intelligent agents generally attack problems for
which no algorithm is known or feasible, problems
that require heuristic methods.
  • A heuristic is a rule of thumb, strategy, trick,
    simplification, or any other kind of device which
    drastically limits the search for solutions in
    large problem spaces.
  • Heuristics do not guarantee optimal solutions. In
    fact they do not guarantee any solution at all.
  • A useful heuristic is one that offers solutions
    which are good enough most of the time.

24
Use of heuristics illustration
Heuristic function for board position evaluation:
w1·f1 + w2·f2 + w3·f3 + ..., where the wi are
real-valued weights and the fi are board features
(e.g. center control, total mobility, relative
exchange advantage).
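The evaluation function is a plain weighted sum; the weights and feature values below are invented for illustration.

```python
# Linear board evaluation, as on the slide: sum of weight * feature.

def evaluate(weights, features):
    """Weighted sum of board features; higher is better for our side."""
    return sum(w * f for w, f in zip(weights, features))

weights  = [1.0, 0.5, 2.0]   # center control, mobility, exchange advantage
features = [3.0, 8.0, -1.0]  # measured on some hypothetical position
print(evaluate(weights, features))  # 5.0
```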
25
Reasoning with incomplete data
The ability to provide some solution even if not
all the data relevant to the problem is available
at the time a solution is required.
  • Example
  • The reasoning of a physician in an intensive care
    unit.
  • Planning a military course of action.

If the EKG test results are not available, but
the patient is suffering chest pains, I might
still suspect a heart problem.
26
Reasoning with conflicting data
The ability to take into account data items that
are more or less in contradiction with one
another (conflicting data or data corrupted by
errors).
Example The reasoning of a military
intelligence analyst that has to cope with the
deception actions of the enemy.
27
Ability to learn
The ability to improve its competence and
efficiency.
  • An agent is improving its competence if it learns
    to solve a broader class of problems, and to make
    fewer mistakes in problem solving.
  • An agent is improving its efficiency if it learns
    to solve more efficiently (for instance, by using
    less time or space resources) the problems from
    its area of competence.

28
Illustration concept learning
Learn the concept of an ill cell by comparing
examples of ill cells with examples of healthy
cells, and by creating a generalized description
of the similarities between the ill cells.
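One very simple form of this generalization keeps only the attribute values shared by all positive examples; the cell attributes below are invented for illustration and are not the presentation's actual learning method.

```python
# Generalize over positive examples of "ill cell": keep the
# attribute-value pairs common to every example.

def generalize(examples):
    """Intersection of attribute-value pairs across all examples."""
    common = dict(examples[0])
    for ex in examples[1:]:
        common = {k: v for k, v in common.items() if ex.get(k) == v}
    return common

ill_cells = [
    {"nucleus": "dark", "shape": "irregular", "size": "large"},
    {"nucleus": "dark", "shape": "irregular", "size": "small"},
]
concept = generalize(ill_cells)
print(concept)  # {'nucleus': 'dark', 'shape': 'irregular'}
```

The learned description (dark nucleus, irregular shape) can then classify new cells, even ones whose size was never observed.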
29
Ability to learn: classification
The learned concept is used to diagnose other
cells
This is an example of reasoning with incomplete
information.
30
Extended agent architecture
The learning engine implements methods for
extending and refining the knowledge in the
knowledge base.
Intelligent Agent
Problem Solving Engine
Input/
Sensors
Learning Engine
User/ Environment
Output/
Ontology Rules/Cases/Methods
Knowledge Base
Effectors
31
Sample tasks for intelligent agents
32
Sample tasks for intelligent agents (cont.)
33
Sample tasks for intelligent agents (cont.)
34
Sample tasks for intelligent agents (cont.)
Any useful task: Information fusion. Information
assurance. Travel planning. Email management.
35
3. Sample intelligent agent: presentation and demo
Agent task: Course of action critiquing
Knowledge representation
Problem solving
Demo
Why are intelligent agents important?
36
Critiquing
Critiquing means expressing judgments about
something according to certain standards.
Example: Critique various aspects of a military
Course of Action, such as its viability (its
suitability, feasibility, acceptability and
completeness), its correctness (which considers
the array of forces, the scheme of maneuver, and
the command and control), and its strengths and
weaknesses with respect to the principles of war
and the tenets of army operations.
37
Sample agent: Course of Action critiquer
Source: Challenge problem for DARPA's High
Performance Knowledge Base (HPKB) program
(FY97-99).
Background: A military course of action (COA) is
a preliminary outline of a plan for how a
military unit might attempt to accomplish a
mission. After receiving orders to plan for a
mission, a commander and staff analyze the
mission, conceive and evaluate potential COAs,
select a COA, and prepare a detailed plan to
accomplish the mission based on the selected COA.
The general practice is for the staff to generate
several COAs for a mission, and then to compare
those COAs based on many factors including the
situation, the commander's guidance, the
principles of war, and the tenets of army
operations. The commander makes the final
decision on which COA will be used to generate
his or her plan based on the recommendations of
the staff and his or her own experience with the
same factors considered by the staff.
Agent task: Identify strengths and weaknesses in
a COA, based on the principles of war and the
tenets of army operations.
38
COA Example: the sketch
A graphical depiction of the preliminary plan. It
includes enough of the high-level structure and
maneuver aspects of the plan to show how the
actions of each unit fit together to accomplish
the overall purpose.
39
COA Example: the statement
Explains what the units will do to accomplish the
assigned mission.
40
COA critiquing task
Answer each of the following questions.
The principles of war provide general guidance
for the conduct of war at the strategic,
operational and tactical levels. The tenets of
army operations describe characteristics of
successful operations.
41
The Principle of Surprise (from FM100-5)
Strike the enemy at a time or place or in a
manner for which he is unprepared. Surprise can
decisively shift the balance of combat power. By
seeking surprise, forces can achieve success well
out of proportion to the effort expended. Rapid
advances in surveillance technology and mass
communication make it increasingly difficult to
mask or cloak large-scale marshaling or movement
of personnel and equipment. The enemy need not be
taken completely by surprise but only become
aware too late to react effectively. Factors
contributing to surprise include speed, effective
intelligence, deception, application of
unexpected combat power, operations security
(OPSEC), and variations in tactics and methods of
operation. Surprise can be in tempo, size of
force, direction or location of main effort, and
timing. Deception can aid the probability of
achieving surprise.
42
Knowledge representation: object ontology
The ontology defines the objects from an
application domain.
43
Knowledge representation: problem solving rules

Rule RASWCER-001
IF the task to accomplish is
  ASSESS-SECURITY-WRT-COUNTERING-ENEMY-RECONNAISSANCE
  FOR-COA ?O1
Question: Is an enemy recon unit present in ?O1?
Answer: Yes, the enemy unit ?O2 is performing
  the action ?O3, which is a reconnaissance action.
Condition:
  ?O1 IS COA-SPECIFICATION-MICROTHEORY
  ?O2 IS MODERN-MILITARY-UNIT--DEPLOYABLE
      SOVEREIGN-ALLEGIANCE-OF-ORG ?O4
      TASK ?O3
  ?O3 IS INTELLIGENCE-COLLECTION--MILITARY-TASK
  ?O4 IS RED--SIDE
THEN accomplish the task
  ASSESS-SECURITY-WHEN-ENEMY-RECON-IS-PRESENT
  FOR-COA ?O1
  FOR-UNIT ?O2
  FOR-RECON-ACTION ?O3

A rule is an ontology-based representation of an
elementary problem solving process.
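A sketch of how a task-reduction rule like RASWCER-001 might be applied: bind the rule's variables (?O1 through ?O4) against facts and emit the reduced task. The encoding of facts as triples and the matcher itself are assumptions for illustration, not the presentation's actual representation.

```python
# Match the condition of a RASWCER-001-style rule against a fact base
# of (relation, subject, object) triples, returning the reduced tasks.

def match_raswcer(coa, facts):
    """Find (unit, recon_action) bindings satisfying the rule's condition."""
    results = []
    if ("IS", coa, "COA-SPECIFICATION-MICROTHEORY") not in facts:
        return results                      # ?O1 must be a COA specification
    for (rel, o2, o3) in facts:             # candidate TASK ?O2 ?O3 facts
        if rel != "TASK":
            continue
        if (("IS", o2, "MODERN-MILITARY-UNIT--DEPLOYABLE") in facts
                and ("IS", o3, "INTELLIGENCE-COLLECTION--MILITARY-TASK") in facts
                # ?O2's allegiance ?O4 must be a RED--SIDE
                and any(r == "SOVEREIGN-ALLEGIANCE-OF-ORG" and u == o2
                        and ("IS", s, "RED--SIDE") in facts
                        for (r, u, s) in facts)):
            results.append(("ASSESS-SECURITY-WHEN-ENEMY-RECON-IS-PRESENT",
                            coa, o2, o3))
    return results

facts = {
    ("IS", "COA411", "COA-SPECIFICATION-MICROTHEORY"),
    ("IS", "RED-CSOP1", "MODERN-MILITARY-UNIT--DEPLOYABLE"),
    ("IS", "SCREEN1", "INTELLIGENCE-COLLECTION--MILITARY-TASK"),
    ("IS", "RED-SIDE-1", "RED--SIDE"),
    ("SOVEREIGN-ALLEGIANCE-OF-ORG", "RED-CSOP1", "RED-SIDE-1"),
    ("TASK", "RED-CSOP1", "SCREEN1"),
}
print(match_raswcer("COA411", facts))
```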
44
Illustration of the problem solving process
Assess COA wrt Principle of Surprise for-coa COA411
Does the COA assign appropriate surprise and
deception actions?
I consider enemy recon.
Is an enemy reconnaissance unit present?
Yes, RED-CSOP1, which is performing the
reconnaissance action SCREEN1.
Is the enemy reconnaissance unit destroyed?
Yes, RED-CSOP1 is destroyed by DESTROY1.
45
COA critiquing demo
46
Why are intelligent agents important?
Humans have limitations that agents may alleviate
(e.g. memory for details that isn't affected
by stress, fatigue or time constraints).
Humans and agents could engage in
mixed-initiative problem solving that takes
advantage of their complementary strengths and
reasoning styles.
47
Why are intelligent agents important? (cont.)
The evolution of information technology makes
intelligent agents essential components of our
future systems and organizations.
Our future computers and most of the other
systems and tools will gradually become
intelligent agents.
We have to be able to deal with intelligent
agents either as users, or as developers, or as
both.
48
Intelligent agents: Conclusion
Intelligent agents are systems which can perform
tasks requiring knowledge and heuristic methods.
Intelligent agents are helpful, enabling us to do
our tasks better.
Intelligent agents are necessary to cope with the
increasing challenges of the information society.
49
Recommended reading
G. Tecuci, Building Intelligent Agents, Academic
Press, 1998, pp. 1-12.
Tecuci G., Boicu M., Bowman M., and Marcu D.,
with a commentary by Murray Burke, "An Innovative
Application from the DARPA Knowledge Bases
Programs: Rapid Development of a High Performance
Knowledge Base for Course of Action Critiquing,"
invited paper for the special IAAI issue of AI
Magazine, Vol. 22, No. 2, Summer 2001, pp. 43-61.
http://lalab.gmu.edu/publications/data/2001/COA-critiquer.pdf