Artificial intelligence - PowerPoint PPT Presentation
Transcript and Presenter's Notes

Title: Artificial intelligence


1
Artificial intelligence
2
Information
  • Instructor: Johnson Thomas
  • 325 North Hall
  • Tel: 918 594 8503
  • e-mail: jpt_at_cs.okstate.edu
  • Office hours:
  • Monday 5-7pm
  • Tuesday 2-4pm
  • Alternates between Tulsa and Stillwater

3
Information
  • TA: Zang, Yihong
  • E-mail: zangyihong_at_hotmail.com
  • Office hours: to be decided

4
Information
  • Book:
  • Artificial Intelligence: A Modern Approach
  • Russell and Norvig
  • Prentice Hall

5
Information
  • Practical x 3: 45 marks (3 x 15)
  • Homework x 3: 30 marks (3 x 10)
  • Mid-Term: 30 marks
  • Final Examination: 35 marks

6
Artificial intelligence
7
introduction
  • What is AI?
  • Many definitions
  • Thought processes and reasoning
  • Behavior
  • Thought and behavior like humans
  • Ideal thought and behavior: rationality
  • A system is rational if it does the right thing
    given what it knows

8
Introduction
  • Systems that think like humans
  • Computers with minds
  • Automation of activities that we associate with
    human thinking, e.g. decision-making, problem
    solving, learning, ...
  • Systems that act like humans
  • Machines that perform functions that require
    intelligence when performed by people
  • Make computers do things at which, at the moment,
    people are better

9
Introduction
  • Systems that think rationally
  • The study of mental faculties through the use of
    computational models
  • The study of the computations that make it
    possible to perceive, reason, and act
  • Systems that act rationally
  • Study of the design of intelligent agents
  • Concerned with intelligent behavior in artifacts

10
Introduction
  • Acting humanly: the Turing test
  • The computer passes the test if a human
    interrogator, after posing some written
    questions, cannot tell whether the written
    responses come from a person or from a computer.

11
Introduction
  • Therefore, the computer would need to possess the
    following
  • Natural language processing
  • Communication
  • Knowledge representation
  • Store knowledge
  • Automated reasoning
  • Use stored information to answer questions and
    draw new conclusions
  • Machine learning
  • Adaptation and pattern recognition/extrapolation

12
Introduction
  • Turing test: no physical interaction
  • Total Turing test
  • Interrogator can test the subject's perceptual
    abilities
  • Need computer vision to perceive objects
  • Interrogator can pass physical objects
  • Need robotics to manipulate objects and move
    about

13
Introduction
  • Thinking humanly: cognitive modeling
  • Understand the workings of human minds
  • Once we have a theory of the mind, we can express
    the theory as a computer program
  • Cognitive science brings together computer
    models from AI and experimental techniques from
    psychology

14
Introduction
  • Thinking rationally: laws of thought
  • Logic
  • Difficult in practice
  • Knowledge can be informal

15
Introduction
  • Acting rationally: rational agents
  • Agent: something that acts
  • More than just a program
  • Can perceive its environment
  • Can adapt to change
  • Operates under autonomous control
  • Persists over a prolonged time
  • Can take on another's goals
  • Rational agent: acts to achieve the best outcome
    possible

16
History
  • Subjects which have contributed to AI
  • Philosophy: 428 BC-present
  • Mathematics: c. 800-present
  • Economics: 1776-present
  • Neuroscience: 1861-present
  • Psychology: 1879-present
  • Computer engineering: 1940-present
  • Control theory and cybernetics: 1948-present
  • Linguistics: 1957-present

17
History
  • First AI work - McCulloch and Pitts (1943)
  • Model of artificial neurons
  • Based on
  • Physiology and functions of the brain
  • Analysis of propositional logic
  • Turing's theory of computation
  • Each neuron is on or off
  • A neuron switches on in response to stimulation
    by a sufficient number of neighboring neurons

18
History
  • Hebb demonstrated an updating rule for modifying
    the connection strengths between neurons
  • Minsky and Edmonds built the first neural network
    computer (1950)
  • Turing's 1950 article "Computing Machinery and
    Intelligence" introduced the Turing test,
    machine learning, genetic algorithms, and
    reinforcement learning

19
History
  • Birthplace of AI: meeting organized by McCarthy
    at Dartmouth College in 1956
  • Some early programs
  • Logic Theorist (LT) for proving theorems
  • General Problem Solver (GPS) imitates human
    problem solving protocols
  • Physical symbol system hypothesis: any system
    exhibiting intelligence must operate by
    manipulating data structures composed of symbols

20
History
  • Geometry theorem prover (1959)
  • Program for checkers
  • McCarthy (1958)
  • Defined Lisp
  • Invented time sharing
  • Advice Taker: first program to embody
    general knowledge of the world
  • Minsky: a number of microworld projects that
    require intelligence
  • Blocks world: rearrange blocks in a certain way,
    using a robot hand that can pick up one block at
    a time

21
History
  • Vision project
  • Vision and constraint propagation
  • Learning theory
  • Natural language understanding
  • Planning
  • Perceptrons

22
History
  • Use of domain-specific knowledge that allows
    larger reasoning steps
  • Dendral program: inferring molecular structure
    from the information provided by a mass
    spectrometer
  • Expert systems for medical diagnosis: MYCIN
  • No theoretical framework
  • Knowledge acquired by interviewing experts
  • Rules reflected the uncertainty associated with
    medical knowledge

23
History
  • First commercial expert system: R1 (DEC)
  • For configuring computers
  • Every major U.S. corporation had expert systems
  • Japanese Fifth Generation project
  • US Microelectronics and Computer Technology
    Corporation (MCC)
  • Britain's Alvey programme

24
History
  • Hopfield: statistical approaches for neural
    networks
  • Hinton: neural net models of memory
  • So-called connectionist models (as opposed to
    the symbolic models promoted earlier)

25
History
  • AI is now a science based on rigorous theorems
    or hard experimental evidence

26
Applications
  • Autonomous planning and scheduling
  • NASA's Remote Agent program
  • Controls scheduling of operations for a
    spacecraft
  • Plans are generated from high-level goals
    specified from the ground
  • Monitors operation of the spacecraft as the plans
    are executed
  • Game playing
  • IBM's Deep Blue defeated the world chess champion

27
Applications
  • Autonomous control
  • ALVINN: computer vision system to steer a car
  • Diagnosis
  • Medical diagnosis programs based on probabilistic
    analysis
  • Logistics planning
  • Logistics planning and scheduling for
    transportation in the Gulf War

28
Applications
  • Robotics
  • Language understanding and problem solving

29
Intelligent agents
30
Agents
  • Agent: anything that can be viewed as perceiving
    its environment through sensors and acting upon
    that environment through actuators
  • An agent is a function from percept histories to
    actions
  • f : P* → A
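The mapping f : P* → A can be sketched as a small Python function (an illustration with hypothetical names, not code from the course): the agent is any callable that takes the entire percept sequence observed so far and returns an action.

```python
# Sketch: an agent as a function from percept histories to actions.
# The percept encoding (location, status) is an illustrative assumption.

def agent(percept_history):
    """f : P* -> A, mapping the full percept sequence to an action."""
    # This particular policy inspects only the most recent percept,
    # but the signature allows it to use the whole history.
    location, status = percept_history[-1]
    if status == "Dirty":
        return "Suck"
    return "Right" if location == "A" else "Left"

history = [("A", "Dirty"), ("A", "Clean")]
action = agent(history)  # decided from the latest percept, ("A", "Clean")
```

The point of the signature is that the agent may, in principle, condition on everything it has ever perceived, even if a given policy chooses not to.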

31
Agents
  • Human agent
  • Ears, eyes for sensors
  • Hands, legs for actuators
  • Robotic agent
  • Cameras for sensors
  • Motors for actuators
  • Software agents
  • Keystrokes, file contents, network packets as
    sensory inputs
  • Displaying on the screen, writing files, sending
    network packets as actions

32
[Diagram: the agent receives percepts from the environment through its sensors and acts on the environment through its actuators]
33
Agents
  • Percept: refers to the agent's perceptual inputs
    at any given instant
  • Percept sequence: complete history of everything
    the agent has perceived
  • An agent's choice of action at any given instant
    can depend on the entire percept sequence
    observed to date
  • Agent function: maps any given percept sequence
    to an action. This function describes the
    behavior of an agent

34
Agents
  • Agent function: an abstract mathematical
    description
  • Agent program: an implementation

35
Agents
  • Vacuum cleaner world
  • Two locations: squares A and B
  • Vacuum agent perceives which square it is in and
    whether there is dirt in the square
  • Vacuum agent can choose to move left, move right,
    suck dirt, or do nothing
  • Agent function: if the current square is dirty, then
    suck; otherwise move to the other square
  • Tabulation of the agent function

36
[Diagram: the two-square vacuum world, squares A and B]
37
Agents
  • Partial tabulation of simple agent function
  • Percept sequence action
  • A, clean right
  • A, dirty suck
  • B, clean left
  • B, dirty suck
  • A, clean, A, clean right
  • A, clean, A, dirty suck
  • A, clean, A, clean, A, clean right
  • A, clean, A, clean, A, dirty suck
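This tabulation can be held as a lookup table keyed by percept sequences; a minimal sketch, assuming a tuple encoding of sequences (an assumption of this illustration, not from the slides):

```python
# Partial tabulation of the vacuum-world agent function:
# each key is a percept sequence, each value the prescribed action.
agent_table = {
    (("A", "Clean"),): "Right",
    (("A", "Dirty"),): "Suck",
    (("B", "Clean"),): "Left",
    (("B", "Dirty"),): "Suck",
    (("A", "Clean"), ("A", "Clean")): "Right",
    (("A", "Clean"), ("A", "Dirty")): "Suck",
}

# Look up the action for a one-step and a two-step percept history.
a1 = agent_table[(("A", "Dirty"),)]                  # "Suck"
a2 = agent_table[(("A", "Clean"), ("A", "Dirty"))]   # "Suck"
```

The tabulation can only ever be partial: percept sequences are unbounded, so a complete table would be infinite.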

38
Agents
  • Rational agent: does the right thing, that is,
    every entry in the table for the agent function
    is filled out correctly
  • Performance measure: criterion for success of an
    agent's behavior
  • The agent produces a sequence of actions based on
    the percepts it receives
  • This sequence of actions causes the environment
    to go through a sequence of states
  • If the sequence is desirable, the agent has
    performed well

39
Agents
  • Vacuum cleaner agent: measure performance by the
    amount of dirt cleaned up in a single eight-hour
    shift
  • Clean, dump the dirt back, clean again, dump
    again, ... maximizes this measure
  • A more suitable performance measure rewards the
    agent for having a clean floor
  • Design performance measures according to what one
    wants in the environment, not according to how
    one thinks the agent should behave

40
Agents
  • Rationality
  • What is rational at any given time depends on
  • The performance measure that defines the
    criterion of success
  • The agent's prior knowledge of the environment
  • The actions that the agent can perform
  • The agent's percept sequence to date

41
Agents
  • Definition
  • For each possible percept sequence, a rational
    agent should select an action that is expected to
    maximize its performance measure, given the
    evidence provided by the percept sequence and
    whatever built-in knowledge the agent has

42
Agents
  • Vacuum cleaner agent cleans the square if it is
    dirty and moves to the other square if it is not
  • Is this a rational agent?

43
Agents
  • What is its performance measure?
  • Performance measure: awards one point for each
    clean square at each time step over a lifetime of
    1000 time steps
  • What is known about the environment?
  • Geography of the environment is known a priori.
    Dirt distribution and initial location of the
    agent are not known. Clean squares stay clean.
    Sucking cleans the current square. Left and Right
    move the agent

44
Agents
  • What sensors and actuators does it have?
  • Available actions: Left, Right, Suck, NoOp
  • Agent perceives its location and whether that
    location contains dirt
  • Under these conditions, the agent is rational

45
Agents
  • Note
  • Rationality maximizes expected performance
  • Perfection maximizes actual performance
  • A rational agent
  • Gathers information
  • Learns as much as possible from what it perceives

46
Agents
  • Task of computing the agent function
  • When the agent is being designed, some of the
    computation is done by its designers
  • When it is deliberating on its next action, the
    agent does more computation
  • As the agent learns from experience, it does more
    computation

47
Agents
  • Autonomy: the agent should learn what it can
    to compensate for partial or incorrect prior
    knowledge
  • A vacuum cleaning agent that learns to foresee
    where and when additional dirt will appear will
    do better than one that does not
  • The agent will have some initial knowledge as
    well as an ability to learn. Initially it will
    make mistakes.

48
Agents
  • Specifying the task environment
  • Vacuum cleaning agent
  • Performance measure, Environment, Actuators,
    Sensors (PEAS): together these specify the task
    environment
  • Automated taxi driver
  • Performance measure: getting to the correct
    destination; minimizing fuel consumption, wear
    and tear, trip time, cost, and traffic law
    violations; maximizing safety, comfort, profits
  • Environment: variety of roads, traffic,
    pedestrians, road works, etc.; interaction with
    the passengers; snow

49
Agents
  • Actuators: accelerator, steering, braking;
    output to a display screen or voice synthesizer;
    communication with other vehicles
  • Sensors: controllable cameras, speedometer,
    odometer, accelerometer, engine and other system
    sensors, GPS, sensors to detect distances to
    other cars and obstacles, keyboard or microphone
    for the passenger to request a destination

50
Agents
  • Properties of task environments
  • Fully observable vs partially observable
  • Fully observable: if the agent's sensors give it
    access to the complete state of the environment
    at each point in time
  • Partially observable: because of noisy and
    inaccurate sensors, or because parts of the state
    are missing from the sensor data

51
Agents
  • Deterministic vs stochastic
  • Deterministic: if the next state of the
    environment is completely determined by the
    current state and the action executed by the
    agent
  • Otherwise stochastic: in taxi driving one cannot
    predict the traffic
  • Episodic vs sequential
  • Episodic: the agent's experience is divided into
    atomic episodes. An episode consists of the agent
    perceiving and then performing a single action.
    The next episode does not depend on the actions
    taken in previous episodes
  • Sequential: the current decision can affect
    future decisions

52
Agents
  • Static vs dynamic
  • Dynamic: if the environment can change while the
    agent is deliberating
  • Static otherwise
  • Discrete vs continuous
  • Time
  • Percepts
  • Actions
  • Single agent vs multiagent
  • Aim is to maximize performance
  • Competitive multiagent environment
  • Cooperative multiagent environment
  • Communication in multiagent environments

53
Agents
  • Crossword puzzle
  • Fully observable, deterministic, sequential,
    static, discrete, single agent
  • Medical diagnosis
  • Partially observable, stochastic, sequential,
    dynamic, continuous, single agent
  • Can be multiagent: deals with patients, etc.

54
Agents
  • Structure
  • So far we have talked about behavior
  • Agent program: implements the agent function
    mapping percepts to actions
  • This program will run on some sort of computing
    device with physical sensors and actuators (the
    architecture)
  • agent = architecture + program

55
Agents
  • Skeleton of the program
  • Input: current percept from the sensors
  • Output: action to the actuators
  • The agent function takes the entire percept
    history as input

56
Agents
  • Four basic kinds of agent programs
  • Simple reflex agents
  • Model-based reflex agents
  • Goal-based agents
  • Utility-based agents
  • Can use a table-lookup approach, but this is not
    practical

57
Agents
  • function TABLE-DRIVEN-AGENT(percept) returns an
    action
  • static: percepts, a sequence, initially empty
  • table, a table of actions, indexed by percept
    sequences
  • append percept to the end of percepts
  • action ← LOOKUP(percepts, table)
  • return action
  • This approach is doomed to failure: the table for
    any non-trivial agent would be impossibly large
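The TABLE-DRIVEN-AGENT pseudocode might be rendered in Python roughly as follows (a sketch; the closure stands in for the pseudocode's static variables, and the table entries are hypothetical):

```python
def make_table_driven_agent(table):
    """Build an agent that keeps the growing percept sequence and
    looks the whole sequence up in a table of actions."""
    percepts = []  # the pseudocode's static 'percepts', initially empty

    def agent(percept):
        percepts.append(percept)           # append percept to percepts
        return table.get(tuple(percepts))  # action <- LOOKUP(percepts, table)

    return agent

# Tiny vacuum-world table (illustrative entries only).
table = {
    (("A", "Dirty"),): "Suck",
    (("A", "Dirty"), ("A", "Clean")): "Right",
}
agent = make_table_driven_agent(table)
first = agent(("A", "Dirty"))   # looks up the length-1 sequence
second = agent(("A", "Clean"))  # looks up the length-2 sequence
```

The sketch also makes the failure mode concrete: the table needs one entry for every percept sequence the agent could ever observe, so its size grows without bound as the lifetime lengthens.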

58
Agents
  • Simple reflex agents
  • Simplest kind of agent
  • Select actions on the basis of the current
    percept, ignoring the rest of the percept history
  • Example: vacuum cleaning agent
  • Decision based on current location and whether
    the location contains dirt

59
Agents
  • Code for the vacuum cleaning agent
  • function REFLEX-VACUUM-AGENT([location,
    status]) returns an action
  • if status = Dirty then return Suck
  • else if location = A then return Right
  • else if location = B then return Left
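The pseudocode translates almost line for line into Python (a sketch; the string constants are this illustration's encoding, not from the slides):

```python
def reflex_vacuum_agent(location, status):
    """Simple reflex agent for the two-square vacuum world."""
    if status == "Dirty":      # if status = Dirty then return Suck
        return "Suck"
    elif location == "A":      # else if location = A then return Right
        return "Right"
    elif location == "B":      # else if location = B then return Left
        return "Left"

# The decision uses only the current percept, never the history.
action = reflex_vacuum_agent("A", "Dirty")
```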

60
Agents
  • Condition-action rule
  • Example
  • if car-in-front-is-braking then initiate-braking

61
Agents
[Diagram: simple reflex agent. Percepts arrive from the environment via sensors and are interpreted as "What the world is like now"; condition-action rules determine "What action I should do now", and actions go back to the environment via actuators]
62
Agents
  • A more general and flexible approach
  • Build a general-purpose interpreter for
    condition-action rules
  • Create rule sets for specific task environments

63
Agents
  • function SIMPLE-REFLEX-AGENT(percept) returns an
    action
  • static: rules, a set of condition-action rules
  • state ← INTERPRET-INPUT(percept)
  • rule ← RULE-MATCH(state, rules)
  • action ← RULE-ACTION[rule]
  • return action

64
Agents
  • state ? INTERPRET-INPUT(percept)
  • This function generates an abstracted description
    of the current state from the percept
  • rule ? RULE-MATCH(state, rule)
  • This function returns the first row in the set of
    rules that matches the given state description
  • action ? RULE-ACTIONrule
  • Action defined
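One way to realize this interpreter in Python is to represent each condition-action rule as a predicate paired with an action (a sketch; this rule representation is an assumption of the illustration):

```python
def interpret_input(percept):
    # Generate an abstracted state description from the percept;
    # in this toy example the percept already is the state.
    return percept

def rule_match(state, rules):
    # Return the first rule whose condition holds in the state.
    for rule in rules:
        condition, action = rule
        if condition(state):
            return rule
    return None

def simple_reflex_agent(percept, rules):
    state = interpret_input(percept)   # state <- INTERPRET-INPUT(percept)
    rule = rule_match(state, rules)    # rule <- RULE-MATCH(state, rules)
    return rule[1] if rule else None   # action <- RULE-ACTION[rule]

# Rule set for the braking example from the slides.
rules = [
    (lambda s: s.get("car_in_front_is_braking", False), "initiate-braking"),
]
action = simple_reflex_agent({"car_in_front_is_braking": True}, rules)
```

The interpreter is generic; only the rule set changes between task environments, which is exactly the flexibility the slides advertise.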

65
Agents
  • The current decision can be made on the basis of
    the current percept alone only if the environment
    is fully observable
  • Suppose the vacuum agent is deprived of its
    location sensor and has only a dirt sensor
  • 2 possible percepts: [Dirty], [Clean]
  • It can Suck in response to [Dirty]
  • What about [Clean]?
  • Moving Left fails (forever) if it happens to
    start in square A
  • Moving Right fails (forever) if it happens to
    start in square B

66
Agents
  • Will only work if the environment is fully
    observable; otherwise infinite loops may occur
  • The agent can escape from infinite loops if it
    can randomize its actions

67
Agents
  • Model-based reflex agents
  • Partial observability: the agent must keep track
    of the part of the world it cannot see now
  • That is, the agent maintains internal state that
    depends on the percept history and thereby
    reflects at least some of the unobserved aspects
    of the current state

68
Agents
  • Example: when changing lanes, the agent needs to
    keep track of where the other cars are if it
    cannot see them all at once

69
Agents
  • Updating internal state information as time goes
    by requires two kinds of knowledge
  • Need some information about how the world
    evolves independently of the agent
  • An overtaking car generally will be closer than
    it was a moment ago
  • Need some information about how the agent's own
    actions affect the world
  • When the agent turns the steering wheel
    clockwise, the car turns to the right
  • This knowledge about how the world works is
    called a model of the world
  • An agent that uses such a model is a model-based
    agent

70
Agents
71
Agents
  • function REFLEX-AGENT-WITH-STATE(percept) returns
    an action
  • static: rules, a set of condition-action rules
  • state, a description of the current world state
  • action, the most recent action
  • state ← UPDATE-STATE(state, action, percept)
  • rule ← RULE-MATCH(state, rules)
  • action ← RULE-ACTION[rule]
  • return action

72
Agents
  • state ← UPDATE-STATE(state, action, percept)
  • Responsible for creating the new internal state
    description
  • Interprets the new percept in the light of
    existing knowledge about the state
  • Uses information about how the world evolves to
    keep track of the unseen parts of the world
  • Knows what the agent's actions do to the state of
    the world
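REFLEX-AGENT-WITH-STATE could be sketched in Python as below; the world model passed in as `update_state` is a deliberately trivial placeholder (an assumption of this illustration) where a real agent would encode how the world evolves and what its actions do.

```python
def make_model_based_agent(rules, update_state, initial_state=None):
    """Reflex agent that maintains internal state across percepts."""
    memory = {"state": initial_state, "action": None}  # static variables

    def agent(percept):
        # state <- UPDATE-STATE(state, action, percept):
        # fold the new percept and the last action into the internal state.
        memory["state"] = update_state(memory["state"], memory["action"], percept)
        for condition, action in rules:   # rule <- RULE-MATCH(state, rules)
            if condition(memory["state"]):
                memory["action"] = action
                return action             # action <- RULE-ACTION[rule]
        memory["action"] = None
        return None

    return agent

# Toy world model: the internal state is simply the latest percept.
agent = make_model_based_agent(
    rules=[(lambda s: s == "Dirty", "Suck")],
    update_state=lambda state, action, percept: percept,
)
first = agent("Dirty")   # internal state becomes "Dirty"
second = agent("Clean")  # internal state becomes "Clean"
```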

73
Agents
  • Knowing about the current state of the
    environment is not always enough to decide what
    to do
  • The agent needs some sort of goal information as
    well
  • The agent program combines this with information
    about the results of possible actions (the same
    information as was used to update internal state
    in the reflex agent) to choose the action

74
Agents
75
Agents
  • Searching and planning are needed to find action
    sequences that achieve the agent's goals

76
Agents
  • Certain goals can be reached in different ways
  • Some ways are better: they have a higher utility
  • Utility function: maps a (sequence of) state(s)
    onto a real number
  • Describes the associated degree of happiness
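Utility-based action selection can be sketched as maximizing a real-valued function over predicted outcome states (all names, weights, and numbers here are hypothetical):

```python
def utility(state):
    """Map a state onto a real number: its degree of 'happiness'.
    Toy tradeoff: shorter trips and safer driving are both preferred."""
    return -state["trip_time"] + 10 * state["safety"]

def best_action(actions, predicted_result):
    """Choose the action whose predicted resulting state has the
    highest utility."""
    return max(actions, key=lambda a: utility(predicted_result(a)))

# Hypothetical predicted outcomes for a lane-choice decision.
outcomes = {
    "fast_lane": {"trip_time": 10, "safety": 0.6},  # utility -4.0
    "slow_lane": {"trip_time": 15, "safety": 0.9},  # utility -6.0
}
choice = best_action(list(outcomes), lambda a: outcomes[a])
```

The weights inside `utility` are where the speed-versus-safety tradeoff mentioned on the next slide would be specified.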

77
Agents
  • Utility improves on goals
  • Selecting between conflicting goals
  • Speed and safety: the utility function specifies
    the appropriate tradeoff
  • Selecting appropriately between several goals
    based on likelihood of success

78
Agents
79
Agents
  • All the previous agent programs describe methods
    for selecting actions
  • Yet they do not explain the origin of these
    programs
  • Learning mechanisms can be used to perform this
    task
  • Teach agents instead of instructing them
  • An advantage is the robustness of the program
    toward initially unknown environments

80
Agents
81
Agents
  • Learning agent: 4 components
  • Learning element: introduces improvements in the
    performance element
  • Critic: provides feedback on the agent's
    performance based on a fixed performance standard
  • Performance element: selects actions based on
    percepts
  • Corresponds to the previous agent programs
  • Problem generator: suggests actions that will
    lead to new and informative experiences
  • Exploration vs. exploitation

82
Agents
  • Automated taxi agent
  • Performance element: the collection of knowledge
    and procedures the taxi has for selecting its
    driving actions
  • The taxi drives using this performance element
  • Critic: observes the world and passes information
    to the learning element
  • The taxi makes a quick left turn across three
    lanes of traffic; the critic observes abuse from
    other drivers
  • The learning element formulates a rule saying
    this was a bad action
  • The performance element is modified by installing
    the new rule
  • The problem generator identifies certain areas of
    behavior in need of improvement and suggests
    experiments