Innovative Problem-Solving Techniques in Clinical Decision Making

1
  • Innovative Problem-Solving Techniques in
    Clinical Decision Making
  • 1. Genetic Algorithms
  • 2. Artificial Neural Networks
  • 3. Fuzzy Logic Techniques
  • 4. Inexact Rules

2
  • Genetic Algorithm
  • An algorithm is a set of instructions that are
    repeated to solve a problem.
  • Genetic refers to behavior of algorithms that is
    similar to the biological process of evolution.
  • The basic goal of a genetic algorithm is to
    develop a system that demonstrates
    self-organization and adaptation on the sole
    basis of exposure to its environment, similar to
    biological organisms.
  • A genetic algorithm can be viewed as a type of
    machine learning for automatically solving
    complex problems.
  • Genetic algorithms provide an efficient,
    domain-independent search for solutions across a
    broad set of applications.

3
  • Genetic Algorithm
  • Example
  • Your opponent writes down a six-digit binary
    code, e.g., 001010.
  • You are supposed to guess it as quickly as
    possible.
  • You can ask your opponent how many digits of
    your guess are correct.
  • Random Method
  • There are 64 possible strings.
  • On average, random guessing would examine about
    half of them (roughly 32 guesses) before finding
    the code; see the sketch below.
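  • A minimal Python sketch of this estimate
    (illustrative only; the function name and run
    count are our own):

    import random

    # Estimate how many random guesses are needed, on average, to hit a
    # fixed 6-digit binary code when guessing without repetition.
    def random_search_trials(target="001010", runs=10000):
        candidates = [format(i, "06b") for i in range(64)]  # all 64 strings
        total = 0
        for _ in range(runs):
            random.shuffle(candidates)
            total += candidates.index(target) + 1  # guesses until the code appears
        return total / runs

    print(random_search_trials())  # about 32.5, i.e., half of the 64 strings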

4
  • Genetic Algorithm
  • Step 1 Select 4 random strings (target 001010)
  • A → 110100 (score 1)
  • B → 111101 (score 1)
  • C → 011011 (score 4)
  • D → 101100 (score 3)
  • Step 2 Since none of the strings is correct,
    continue
  • Delete A and B (lowest scores)
  • Call C and D parents
  • Step 3 Mate the parents' genes through
    crossover. This is done by splitting the strings.
  • C 01 | 1011
  • D 10 | 1100

5
  • Genetic Algorithm
  • Now cross over the first two digits of C with
    the last four of D
  • E 011100 (score 3)
  • F 101011 (score 4)
  • Use a different split
  • C 0110 | 11
  • D 1011 | 00
  • G 011000 (score 4)
  • H 101111 (score 3)
  • Repeat step 2 and select the best couple: G and
    F (both score 4)
  • G 0 | 11 | 000
  • F 1 | 01 | 011

No better so far: the best score is still 4.
6
  • Genetic Algorithm
  • G 0 | 11 | 000 (score 4)    target 001010
  • F 1 | 01 | 011 (score 4)
  • Crossing at the first split gives
  • I 111000 (score 3)
  • J 001011 (score 5)
  • Try one more time, using the second split
  • F 101 | 011
  • G 011 | 000
  • K 101000 (score 4)
  • L 011011 (score 4)
  • Now repeat with J and K
  • J 00101 | 1 → M 001010 (score 6)
  • K 10100 | 0 → N 101001 (score 3)
  • Solution after 13 tries. A runnable sketch of
    this procedure follows.
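  • A minimal Python sketch of this procedure,
    assuming score = number of matching digits, the
    two best strings kept as parents, and
    single-point crossover; a small mutation step
    (not in the slides) is added so the search
    cannot stall:

    import random

    TARGET = "001010"

    def score(s):
        # Number of positions where the guess matches the hidden code.
        return sum(a == b for a, b in zip(s, TARGET))

    def crossover(p1, p2, point):
        # Single-point crossover: split both parents and swap tails.
        return p1[:point] + p2[point:], p2[:point] + p1[point:]

    def mutate(s, rate=0.05):
        # The slides omit mutation; a small bit-flip rate keeps the search
        # from stalling once the parents become identical.
        return "".join("10"[int(c)] if random.random() < rate else c for c in s)

    # Step 1: select 4 random strings.
    population = ["".join(random.choice("01") for _ in range(6)) for _ in range(4)]
    guesses = len(population)

    while max(score(s) for s in population) < 6:
        # Step 2: none correct yet, so keep the two best strings as parents.
        parents = sorted(population, key=score, reverse=True)[:2]
        # Step 3: mate the parents' genes through crossover at a random split.
        c1, c2 = crossover(parents[0], parents[1], random.randint(1, 5))
        population = parents + [mutate(c1), mutate(c2)]
        guesses += 2

    print("Found", TARGET, "after", guesses, "guesses")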

7
Flow Diagram for Genetic Algorithm Process
8
  • Neural Networks
  • ANN is a model that emulates a biological
    neural network.
  • The artificial neurons receive inputs through
    dendrites and pass signals to other neurons
    through axons.
  • An ANN is composed of artificial neurons; these
    are the processing elements (PEs). Each neuron
    receives inputs, processes them, and delivers a
    single output, as shown on the next slide.

9
(No Transcript)
10
  • Neural Networks
  • The inputs can be raw data or outputs from other
    processing elements.
  • The outputs can be the final product, or they
    can be inputs to other neurons.
  • The Network
  • Each ANN is composed of a collection of neurons
    that are grouped in layers.
  • A typical structure is shown on the next page.
  • The processing of information is massively
    parallel, as in our brain.

11
(No Transcript)
12
  • Processing of Information
  • Inputs
  • Each input corresponds to a single attribute.
  • For example, in diagnosing a disease, each
    symptom could represent an input to one node.
  • An input could also be an image of skin texture
    if we are looking for cancer cells.
  • Outputs
  • The outputs of the network represent the
    solution to a problem.
  • For diagnosis of a disease, the answer could be
    yes or no.
  • Weights
  • A key element of an ANN is its weights.
  • A weight expresses the relative strength of the
    data entering from the various connections that
    transfer data from the input point to the output
    point.

13
(Figure: input x1 multiplied by connection weight W13, contributing x1W13 to the receiving node.)
14
  • Processing of Information
  • Summation Function
  • Finds the weighted sum of all input elements
    entering the PE.
  • With several neurons, the output at the jth
    neuron is Yj = Σ xi wij (summed over all
    inputs i).

15
  • Processing of Information
  • Transformation Function
  • The summation function computes the internal
    cumulative signal value. There is an activation
    level of the neuron. Based on the cumulative
    value of the signal received, the neuron may or
    may not produce an output.
  • The relationship between the activation level
    and the output of the neuron may be linear or
    non-linear.
  • The selection of the specific function
    determines the network's operation.
  • One popular function is the sigmoid,
    YT = 1 / (1 + e^-Y), where YT is the transformed
    value of Y.

16
  • Processing of Information
  • Transformation Function
  • The purpose of the transformation function is to
    modify the output level to a reasonable value
    (between 0 and 1). This transformation is done
    before the output reaches the next level.
  • Example
  • x1 = 3, w1 = 0.2
  • x2 = 1, w2 = 0.4 → the PE computes Y = 1.2
  • x3 = 2, w3 = 0.1 and outputs YT = f(Y)
  • You can also use a simple threshold value. A
    small sketch of this computation follows the
    figure.

(Figure: inputs X1, X2, X3 with weights w1, w2, w3 feed the PE, which computes Y and outputs YT.)
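  • A small Python sketch of this computation; the
    threshold value of 1.0 is an assumption for
    illustration:

    import math

    def summation(xs, ws):
        # Summation function: weighted sum of all inputs entering the PE.
        return sum(x * w for x, w in zip(xs, ws))

    def sigmoid(y):
        # Transformation function: squashes Y into the range (0, 1).
        return 1 / (1 + math.exp(-y))

    # Values from the slide: x1=3, x2=1, x3=2 with weights 0.2, 0.4, 0.1.
    Y = summation([3, 1, 2], [0.2, 0.4, 0.1])  # 0.6 + 0.4 + 0.2 = 1.2
    YT = sigmoid(Y)                            # about 0.77
    thresholded = 1 if Y >= 1.0 else 0         # simple threshold alternative
    print(Y, round(YT, 2), thresholded)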
17
  • Processing of Information
  • Learning
  • An ANN learns from its experience. The usual
    process of learning involves three tasks
  • Compute output(s).
  • Compare outputs with desired results and feed
    back the error.
  • Adjust the weights and repeat the process.
  • The learning process starts by setting the
    weights by some rule (or randomly). The
    difference between the actual output (y) and the
    desired output (z) is called the error (delta).
  • The objective is to minimize delta (to zero).
    The reduction in delta is done by changing the
    weights.

18
  • The key is to change the weights in the right
    direction, so as to reduce delta.
  • There are various algorithms, but they will not
    be discussed here.
  • The procedure for developing NN applications is
  • 1. Collect Data.
  • 2. Separate the data into Training and Test Sets.
  • 3. Define a Network Structure.
  • 4. Select a Learning Algorithm.
  • 5. Transform Data to Network Inputs.
  • 6. Start Training and Revise Weights until the
    Error Criterion is Satisfied.
  • 7. Stop and Test the results with Test data.
  • 8. Implementation: Use the Network for New
    Cases. (A minimal sketch of steps 4-7 appears
    below.)
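  • A minimal Python sketch of steps 4-7 for a
    single processing element, using the
    compute/compare/adjust cycle described on the
    previous slide (the toy training set and
    learning rate are our own):

    import math
    import random

    def train(data, epochs=2000, lr=0.5):
        # Start with random weights, then repeatedly compute the output,
        # compare it with the desired output z, and adjust the weights to
        # reduce the error (delta).
        n = len(data[0][0])
        weights = [random.uniform(-1, 1) for _ in range(n)]
        for _ in range(epochs):
            for xs, z in data:
                y = 1 / (1 + math.exp(-sum(x * w for x, w in zip(xs, weights))))
                delta = z - y
                weights = [w + lr * delta * x for w, x in zip(weights, xs)]
        return weights

    # Toy training set: the output should follow the first input.
    training = [([1, 0], 1), ([0, 1], 0), ([1, 1], 1)]
    w = train(training)
    for xs, z in training:
        y = 1 / (1 + math.exp(-sum(x * wi for x, wi in zip(xs, w))))
        print(xs, z, round(y, 2))  # trained outputs approach the targets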

19
  • Developing NN Applications
  • An important step is the selection of the
    network structure. The available network
    structures are
  • Associative Memory Systems
  • Refers to the ability to recall complete
    situations from partial information. Such
    systems correlate input data with information
    stored in memory.
  • Information can be recalled even from incomplete
    or noisy inputs.
  • Associative memory systems can detect
    similarities between new inputs and stored input
    patterns.
  • Hidden Layer Systems
  • Complex practical applications require one or
    more hidden layers between the inputs and
    outputs, and a correspondingly large number of
    weights.

20
  • Developing NN Applications
  • Hidden Layer
  • Using more than three layers is rare.
  • The amount of computation involved is enormous.
  • Double Layered Networks
  • This structure does not require knowledge of the
    precise number of classes in the training data.
  • Instead, it uses a feed-forward and
    feed-backward approach to adjust parameters/
    weights as data are analyzed, establishing the
    arbitrary (required) number of categories that
    represent the data presented to the system.

21
  • Developing NN Applications
  • Learning Types
  • Supervised Uses a set of inputs for which the
    desired output results/classes are known. The
    difference between the desired and actual output
    is used to calculate adjustments to the weights
    of the NN structure.
  • Unsupervised Only input stimuli (parameters)
    are presented to the network. The network is
    self-organizing, that is, it organizes itself
    internally, so that each hidden processing
    element and its weights respond appropriately to
    a different set of input stimuli.
  • No knowledge is supplied about the
    classification of outputs. However, the number
    of categories into which the network classifies
    the inputs can be controlled by varying certain
    parameters in the model. In any case, a human
    expert must examine the final classifications to
    assign meaning and assess the usefulness of the
    results.

22
  • Developing NN Applications
  • Back propagation
  • It is the most widely used learning algorithm
    and a very popular technique that is relatively
    easy to implement. It requires training data for
    conditioning the network before using it for
    processing other data.
  • A back-propagation network includes at least one
    hidden layer.
  • The approach is considered a feed-forward
    approach.
  • Limitations
  • NNs do not do well at tasks that are not done
    well by people.
  • They lack an explanation facility.
  • Training time can be excessive.

23
  • FUZZY LOGIC
  • Some AI programs exploit the technique of
    inexact (approximate) reasoning.
  • The thinking behind this approach is that
    decision making is not always as clear as black
    or white, true or false. There can be gray
    areas, or maybe.
  • This approach may be very appropriate in medical
    diagnostics, where symptoms are often fuzzy.
  • It provides flexibility -- it makes allowances
    for the unexpected. You can shift your strategy
    whenever necessary.
  • It gives you options -- if you are confronted
    with a number of possibilities, you will need to
    consider them all, then go for the one with the
    highest likelihood/possibility.
  • Then, using facts and intuition (highly likely
    or very good), you can make an educated guess.

24
  • FUZZY LOGIC
  • It is more forgiving -- when you are required to
    make black-or-white decisions, you cannot afford
    to be wrong, because if you are wrong, you lose
    completely.
  • Fuzzy logic is very considerate.
  • If you figure something is 80% gray, but it
    turns out to be 90%, you are not going to be
    penalized very much.
  • Example Describe a tall person -- when can you
    call a person tall? Proportion who voted for
    each height
  • 5'10" - 0.05
  • 5'11" - 0.10
  • 6'0" - 0.60
  • 6'1" - 0.15
  • 6'2" - 0.10

25
  • FUZZY LOGIC
  • For a 6' person, in probability theory you can
    say that there is a 75% chance that he is tall.
  • Is he tall?
  • In fuzzy logic, we say Jack's degree of
    membership within the set of tall people is 0.75
  • <Jack, 0.75> ∈ Tall, or mTALL(Jack) = 0.75
  • This can be expressed in a knowledge-based
    system as Jack is tall (CF 0.75).
  • In contrast to certainty factors, which include
    two values (degree of belief and disbelief),
    fuzzy sets use a spectrum of possible values.
  • However, fuzzy logic is complex to use and
    requires more computing power.

26
  • FUZZY LOGIC
  • Fuzzy logic is an alternative to two-valued
    logic and probability theory, and uses concepts
    from set theory.
  • Here, probability theory is replaced with
    possibility theory.
  • Fuzzy truth values deal with the likelihood or
    certainty that a fact or rule is true.
  • The truth or membership values are indicated by
    a range of 0 - 1, with 1 going towards absolute
    truth.
  • Expressing inexact concepts, such as large or
    old, in fuzzy logic is straightforward.
  • For example, if Helen is 75 years old, we may
    assign Helen is old the truth value of 0.95.
    This implies Helen is a member of the set of old
    people
  • mOLD(Helen) = 0.95

27
  • This means that there is a 95% likelihood that
    Helen is old.
  • We can simultaneously assign
  • mYOUNG(Helen) = 0.3, i.e., Helen is young is
    assigned a possibility or likelihood of 0.3.
  • In addition to defining the basic notion of
    uncertainty, fuzzy logic provides operators for
    combining uncertain information, such as AND,
    OR, NOT, EX-OR.
  • We define some terms for fuzzy logic
  • A set A is empty if and only if mA(x) = 0
  • A = B iff mA(x) = mB(x)
  • m(cmpl A) = 1 - mA
  • A is a subset of B iff mB(x) ≥ mA(x), for all x
  • C is the union of sets A and B
    if mC(x) = max[mA(x), mB(x)]

28
  • C is the intersection of sets A and B
    if mC(x) = min[mA(x), mB(x)]
  • Example mS(x) = 0.9, mT(x) = 0.8; this implies
    that the union of the two sets has the value
    mC(x) = max[mS(x), mT(x)] = 0.9.
  • In fuzzy logic we define HEDGES such as very or
    less. You can also define somewhat, rather, sort
    of, etc.
  • very and less are defined by squaring the
    membership value, while others can be defined by
    the square root.
  • Expert systems are constructed using facts and
    rules.
  • Example
  • m(stock is high-tech) = 0.9
  • m(stock is in demand) = 0.6
  • m(stock is a new issue) = 0.8
  • m(stock is heavily traded) = 0.3

29
  • RULE 1
  • IF X is high-tech
  • AND X is in demand
  • THEN X is volatile
  • RULE 2
  • IF X is a very new issue
  • OR X is not heavily traded
  • THEN X is volatile
  • FUZZY Reasoning using very
  • 1. m(stock is a very new issue) = 0.8 x 0.8
    = 0.64
  • 2. m(stock is not heavily traded) = 1 - 0.3
    = 0.7
  • 3. m(X is volatile) = min(0.9, 0.6) = 0.6,
    using RULE 1
  • m(X is volatile) = max(0.7, 0.64) = 0.7,
    using RULE 2
  • You can combine RULE 1 and RULE 2 using
    combination rules. (A small sketch of this
    reasoning follows.)
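  • A short Python sketch of this reasoning, using
    min for AND, max for OR, 1 - m for NOT, and
    squaring for the hedge very:

    # Membership values for the stock facts from the previous slide.
    m_high_tech = 0.9
    m_in_demand = 0.6
    m_new_issue = 0.8
    m_heavily_traded = 0.3

    def very(v):
        # Hedge "very": square the membership value.
        return v ** 2

    def f_not(v):
        # Fuzzy NOT: complement of the membership value.
        return 1 - v

    # RULE 1: IF X is high-tech AND X is in demand THEN X is volatile
    rule1 = min(m_high_tech, m_in_demand)                    # min(0.9, 0.6) = 0.6
    # RULE 2: IF X is a very new issue OR X is not heavily traded
    #         THEN X is volatile
    rule2 = max(very(m_new_issue), f_not(m_heavily_traded))  # max(0.64, 0.7) = 0.7

    print(rule1, rule2)  # 0.6 0.7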

30
  • Inexact Rules
  • Theories of inexact reasoning are useful for
    expressing and propagating uncertain information.
  • To use them in practical inference tasks, you
    should utilize them within an inference engine.
  • You can apply them using backward chaining.
  • You use certainty factors in the range of 0 -
    100.
  • Inexact inference finds certainty factors (CFs)
    for goals by matching them with the fact base
    and with the conclusions of other rules (if
    needed).
  • CF(conclusion) = CF(premise) x rule
    certainty / 100
  • Example
  • RULE 1 (CF 70)
  • Investor-1 should invest in X

31
  • IF Broker-A recommends X, AND
  • Broker-B recommends X, AND
  • NOT X is overpriced
  • RULE 2 (CF 50)
  • Investor-1 should invest in X
  • IF Broker-A recommends X, AND
  • X is undervalued
  • RULE 3 (CF 100)
  • X is undervalued
  • IF X is not overpriced
  • Facts (given)
  • Broker-A recommends Gold (CF 75)
  • Broker-A recommends Silver (CF 50)

32
  • Broker-B recommends Silver (CF 90)
  • Gold is undervalued (CF 80)
  • Silver is overpriced (CF 90)
  • Decide what the inference engine will do to find
    the certainty factor for the goal - should he
    invest in Gold?
  • Apply Rule 1: Broker-B does not recommend Gold,
    so the rule does not apply; the CF from Rule 1
    is 0.
  • Now try the second rule. The first fact has a CF
    of 75 and the second one 80. Therefore, the CF
    of the premise is min(75, 80) = 75, and the CF
    from Rule 2 is 75 x 50/100 = 37.5.
  • Combine Rule 1 and Rule 2
  • Combine(0, 37.5) = 37.5 (using CFR1 + CFR2 -
    CFR1 x CFR2/100)
  • So the CF for investing in Gold is 37.5.
  • Check for Silver: the CF is 11.65. (A sketch of
    this chain of reasoning follows.)
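  • A Python sketch of this chain of reasoning (the
    helper names are our own):

    def cf_not(cf):
        # Certainty factor of a negated fact on the 0 - 100 scale.
        return 100 - cf

    def rule_cf(premise_cfs, rule_certainty):
        # CF(conclusion) = CF(premise) x rule certainty / 100, where the
        # CF of an ANDed premise is the minimum of its parts.
        return min(premise_cfs) * rule_certainty / 100

    def combine(cf1, cf2):
        # Combine the evidence of two rules for the same conclusion.
        return cf1 + cf2 - cf1 * cf2 / 100

    # Gold: Rule 1 does not apply (Broker-B does not recommend Gold).
    gold_rule1 = 0
    gold_rule2 = rule_cf([75, 80], 50)        # min(75, 80) x 50/100 = 37.5
    print(combine(gold_rule1, gold_rule2))    # 37.5

    # Silver: NOT overpriced has CF 100 - 90 = 10.
    not_overpriced = cf_not(90)                           # 10
    silver_rule1 = rule_cf([50, 90, not_overpriced], 70)  # 7.0
    undervalued = rule_cf([not_overpriced], 100)          # Rule 3 gives 10
    silver_rule2 = rule_cf([50, undervalued], 50)         # 5.0
    print(combine(silver_rule1, silver_rule2))            # 11.65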