Physics as Computing - PowerPoint PPT Presentation
Transcript and Presenter's Notes

Title: Physics as Computing


1
Physics as Computing
  • Dr. Michael P. Frank, Dept. of Electrical &
    Computer Eng., FAMU-FSU College of Engineering

Quantum Computation for Physical Modeling
Workshop (QCPM 04), Martha's Vineyard, Wednesday,
September 15, 2004
2
Abstract
  • Studying the physical limits of computing
    encourages us to think about physics in
    computational terms.
  • Viewing physics as a computer directly gives us
    limits on the computing capabilities of any
    machine that's embedded within our physical
    world.
  • We want to understand what various physical
    quantities mean in a computational context.
  • Some answers so far:
  • Entropy = Unknown/incompressible information
  • Action = Amount of computational work
  • Energy = Rate of computing activity
  • Generalized temperature = Clock frequency
    (activity per bit)
  • Momentum = Motional computation per unit
    distance

Today's topic
3
Entropy as Information
  • A bit of history
  • Most of the credit for originating this concept
    really should go to Ludwig Boltzmann.
  • He (not Shannon) first characterized the entropy
    of a system as the expected log-improbability of
    its state, H = −∑i pi log pi.
  • He also discussed combinatorial reasons for its
    increase in his famous H-theorem
  • Shannon brought Boltzmann's entropy to the
    attention of communication engineers
  • And he taught us how to interpret Boltzmann's
    entropy as "unknown information," in a
    communication-theory context.
  • von Neumann generalized Boltzmann entropy to
    quantum mixed states
  • That is, the S = −Tr ρ ln ρ expression that we
    all know and love
  • Jaynes clarified how the von Neumann entropy of a
    system can increase over time
  • Either when the Hamiltonian itself is unknown, or
    when we trace out entangled subsystems
  • Zurek suggested adding algorithmically
    incompressible information to the part of
    physical information that we consider to be
    entropy
  • I will discuss a variation on this theme.
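
Not part of the original deck: a minimal Python sketch of the two formulas above, computing the Boltzmann/Shannon entropy of a distribution and the von Neumann entropy of a density matrix. The example distribution and ρ are made up for illustration.

    import numpy as np

    def shannon_entropy(p, base=2):
        """H = -sum_i p_i log p_i, skipping zero-probability terms."""
        p = np.asarray(p, dtype=float)
        p = p[p > 0]
        return float(-np.sum(p * np.log(p)) / np.log(base))

    def von_neumann_entropy(rho, base=2):
        """S = -Tr(rho ln rho), computed via the eigenvalues of rho."""
        return shannon_entropy(np.linalg.eigvalsh(rho), base=base)

    print(shannon_entropy([0.5, 0.5]))               # 1.0 bit: one unknown bit
    print(von_neumann_entropy(np.diag([0.5, 0.5])))  # 1.0 bit: maximally mixed qubit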

4
Why go beyond the statistical definition of
entropy?
  • We may argue the statistical concept of entropy
    is incomplete,
    because it doesn't even begin to break down the
    ontology-epistemology barrier
  • In the statistical view, a knower (such as
    ourselves) must always be invoked to supply a
    state of knowledge (probability distribution)
  • But we typically treat the knower as being
    fundamentally separate from the physical system
    itself.
  • However, in reality, we ourselves are part of the
    physical system that is our universe
  • Thus, a complete understanding of entropy must
    also address what knowledge means, physically

5
Small Physical Knowers
  • Of course, humans are extremely large complex
    physical systems, and to physically characterize
    our states of knowledge is a very long way off
  • However, we can hope to characterize the
    knowledge of simpler systems.
  • Computer engineers find that in practice, it can
    be very meaningful and useful to ascribe
    epistemological states even to extremely simple
    systems.
  • E.g., digital systems and their component
    subsystems.
  • When analyzing digital systems,
  • we constantly say things like, "At such-and-such
    time, component A knows such-and-such information
    about the state of component B"
  • Means essentially that there is a specific
    correlation between the states of A and B.
  • For nano-scale digital devices, we can strive to
    exactly characterize their logical states in
    mathematical physics terms.
  • Thus we ought to be able to say exactly what it
    means for one component to know some information
    about another.

6
What we'd like to say
  • Want to formalize arguments such as the
    following:
  • "Component A doesn't know the state of component
    B, so the physical information in B is entropy to
    component A. Component A can't destroy the
    entropy in B, due to the 2nd law of
    thermodynamics, and therefore A can't reset B to
    a standard state without expelling B's entropy to
    the environment."
  • I want all of these to be mathematically
    well-defined and physically meaningful
    statements, and I want the argument to be
    formally provable!
  • One motivation: A lot of head-in-the-sand
    technologists are still in a state of denial
    about Landauer's principle!
  • Oblivious erasure of non-entropy information
    turns it into entropy.
  • We need to be able to prove it to them with
    simple, undeniable, clear and correct arguments!
  • To get reversible/quantum computing more traction
    in industry.

7
Insufficiency of Statistical Entropy for Physical
Knowers
  • Unfortunately for this kind of program
  • If the ordinary statistical definition of entropy
    is used,
  • together with a knower that is fully defined as
    an actual physical system, then
  • The 2nd law of thermodynamics no longer holds!
  • Note the unknown information in a system can be
    reduced
  • Simply let the knower system perform a
    (coherent, reversible) measurement of the target
    system, to gain knowledge about the state of the
    target system!
  • The entropy of the target system (from the
    knower's perspective) is then reduced.
  • The 2nd law says there must be a corresponding
    increase in entropy somewhere, but where?
This is the essence of Maxwell's Demon paradox.

8
Entropy in knowledge?
  • Resolution suggested by Bennett:
  • The demon's knowledge of the result of his
    measurement can itself be considered to
    constitute one form of entropy!
  • It must be expelled into the environment in order
    to reset his state.
  • But, what if we imagine ourselves in the demon's
    shoes?
  • Clearly, the demon's knowledge of the measurement
    result itself constitutes known information,
    from his own perspective!
  • I.e., the demon's own subjective posterior
    probability distribution that he would (or
    should) assess over the possible values of his
    knowledge of the result, after he has already
    obtained this knowledge, will be entirely
    concentrated on the actual outcome.
  • The statistical entropy of this distribution is
    zero!
  • So, here we have a type of entropy that is
    present in our own knowledge itself, and is not
    unknown information!
  • Needed: A way to make sense of this, and to
    mathematically quantify this entropy of
    knowledge.

9
Quantifying the Entropy of Knowledge, Approach 1
  • The traditional position says: In order to
    properly define the entropy in the demon's state
    of knowledge, we must always pop up to the
    meta-perspective from which we are describing the
    whole physical situation.
  • We ourselves always implicitly possess some
    probability distribution over the states of the
    joint demon-target system.
  • We should just take the statistical entropy of
    that distribution.
  • Problem: This approach doesn't face up to the
    fact that we are physical systems too!
  • It doesn't offer any self-consistent way that
    physical systems themselves can ever play the
    role of a knower!
  • I.e., describe other systems, assess subjective
    probability distributions over their state,
    modify those distributions via measurements, etc.
  • This contradicts our own personal physical
    experience,
  • as well as what we expect that quantum computers
    performing coherent measurements of other systems
    ought to be able to do

10
Approach 2
  • The entropy inherent in some known information is
    the smallest size to which this information can
    be compressed.
  • But of course, this depends on the coding system.
  • Zurek suggests, use Kolmogorov complexity. (Size
    of shortest generating program.)
  • But there are two problems with doing that:
  • It's only well-defined up to an additive
    constant.
  • That is, modulo a choice of universal programming
    language.
  • It's uncomputable!
  • What else might we try?

11
Approach 3 (We Suggest)
  • We propose: The entropy content of some known
    piece of information is its compressed size
    according to whatever encoding would yield the
    smallest expected compressed size, a priori.
  • That is, taking the expectation over all the
    possible patterns of information before the
    actual one was obtained.
  • This is nice, because the expected value of
    posterior entropy then closely matches the
    ordinary statistical entropy of the prior
    distribution.
  • Even exactly, in special cases, or in the limit
    of many repetitions
  • Due to a simple application of Shannon's
    channel-capacity theorem.
  • We can then show that the 2nd law gets obeyed on
    average.
  • But, from whose a priori probability distribution
    is this expectation value of compressed size to
    be obtained?

(Expected length of the codeword ci encoding
information pattern i.)
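
Not from the deck: a minimal sketch (ours) of this proposal, using a Huffman code as the a-priori-optimal prefix encoder. For the dyadic prior below, the expected codeword length equals the prior's Shannon entropy exactly, matching the "even exactly, in special cases" remark above.

    import heapq
    from math import log2

    def huffman_lengths(probs):
        """Codeword length for each symbol in a Huffman code built from probs."""
        # Heap items: (probability, unique counter, list of (symbol, depth)).
        heap = [(p, i, [(i, 0)]) for i, p in enumerate(probs)]
        heapq.heapify(heap)
        n = len(heap)
        while len(heap) > 1:
            p1, _, s1 = heapq.heappop(heap)
            p2, _, s2 = heapq.heappop(heap)
            merged = [(sym, d + 1) for sym, d in s1 + s2]
            heapq.heappush(heap, (p1 + p2, n, merged))
            n += 1
        return dict(heap[0][2])

    prior = [0.5, 0.25, 0.125, 0.125]   # hypothetical prior over patterns
    lengths = huffman_lengths(prior)
    expected = sum(p * lengths[i] for i, p in enumerate(prior))
    H = -sum(p * log2(p) for p in prior)
    print(expected, H)   # 1.75 1.75: expected compressed size = prior entropy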
12
Who picks the compressor?
  • Two possible answers to this
  • Use our probability distribution when we
    originally describe and analyze the hypothetical
    situation from outside.
  • Although this is a bit distasteful, since here we
    are resorting to the meta-perspective again,
    which we were trying to avoid
  • However, at least we do manage to sidestep the
    paradox
  • Or, we can use the demon's own a priori
    assessment of the probabilities
  • That is, essentially, let him pick his own
    compression system, however he wants!
  • The entropy of knowledge is then defined in a
    relative way, as the smallest size that a given
    entity with that knowledge would or could
    compress that knowledge to,
    given a specification of its capabilities,
    together with any of its previous decisions &
    commitments as to the compression strategy it
    would use.

13
A Simple Example
  • Suppose we have a separable two-qubit system ab,
  • Where qubit a initially contains 1 bit of
    entropy
  • I.e., described by density operator ρa =
    p0|0⟩⟨0| + p1|1⟩⟨1| (with p0 = p1 = ½)
  • while qubit b is in a pure state (say |0⟩)
  • Its density operator (if we care) is ρb =
    |0⟩⟨0|.
  • Now, suppose we do a CNOT(a,b).
  • Can view this process as a "measurement" of qubit
    a by qubit b.
  • Qubit b could be considered a subsystem of some
    quantum knower
  • Assuming the observer knows that this process has
    occurred,
  • We can say that he now knows the state of a!
  • Since the state of a is now correlated with a
    part of b's own state.
  • I.e., from b's personal subjective point of
    view, bit a is no longer an unknown bit
  • But it is still entropy, because the expected
    compressed size of an encoding of this data is
    still 1 bit!
  • This becomes clearer in a larger example

[Figure: before the CNOT, ρa = p0|0⟩⟨0| + p1|1⟩⟨1| and
ρb = |0⟩⟨0|; after the CNOT, ρab = p0|00⟩⟨00| + p1|11⟩⟨11|.]
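
Not from the deck: a density-matrix sketch (ours) of this example, assuming p0 = p1 = ½. The CNOT correlates b with a, so b's reduced state becomes mixed, while the total von Neumann entropy stays at 1 bit throughout.

    import numpy as np

    p0, p1 = 0.5, 0.5                       # assumed values for a's mixture
    rho_a = np.diag([p0, p1])               # qubit a: 1 bit of entropy
    rho_b = np.diag([1.0, 0.0])             # qubit b: pure state |0>
    rho_ab = np.kron(rho_a, rho_b)

    # CNOT with a as control, b as target (basis order |ab>: 00,01,10,11).
    CNOT = np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]], dtype=float)
    rho_ab = CNOT @ rho_ab @ CNOT.T         # -> p0|00><00| + p1|11><11|

    def S(rho):                             # von Neumann entropy, in bits
        ev = np.linalg.eigvalsh(rho)
        ev = ev[ev > 1e-12]
        return float(-np.sum(ev * np.log2(ev)))

    print(S(rho_ab))                        # 1.0: total entropy unchanged
    # b's reduced state is now also mixed: b "knows" a, but the correlated
    # bit is still 1 bit of (incompressible) entropy.
    rho_b_after = rho_ab.reshape(2, 2, 2, 2).trace(axis1=0, axis2=2)
    print(S(rho_b_after))                   # 1.0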
14
Slightly Larger Example
  • Suppose system A initially contains 8 random
    qubits a0–a7, with a uniform distribution over
    their values
  • A thus contains 8 bits of entropy.
  • And system B initially contains a large number of
    empty qubits b0, b1, …
  • B contains 0 entropy initially
  • Now, say we do CNOT(ai, bi) for i = 0 to 3
  • B now knows the values of a0,…,a3.
  • The information in A that is unknown by B is now
    only the 4 other bits a4–a7.
  • But, the AB system also contains an additional 4
    bits of information about A (shared between A and
    B) which (though known by B) is (we expect) still
    incompressible by B
  • I.e., the encoding that offers the minimum
    expected length (prior to learning a0–a3) still
    has an expected length of 4 bits!
  • A second CNOT(bi, ai) can allow B to reversibly
    clear the entropy from system A.
  • Note this is a Maxwell's Demon type of scenario.
  • Entropy isn't lost because the incompressible
    information in B is still entropy!
  • From an outside observer's perspective, the
    amount of unknown information remains the same in
    all these situations
  • But from an inside perspective, entropy can flow
    (reversibly) from known to unknown and back, as
    in the bit-level sketch below
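
Not from the deck: the slide's scenario as classical bit operations (ours), with XOR playing the role of CNOT. The first round of CNOTs copies a0–a3 into B; the second round reversibly clears them out of A.

    import random

    random.seed(1)                                 # arbitrary, for reproducibility
    A = [random.randint(0, 1) for _ in range(8)]   # 8 random bits: 8 bits entropy
    B = [0] * 8                                    # demon's empty register

    # B (reversibly) measures A: CNOT(a_i -> b_i) copies bits for i = 0..3.
    for i in range(4):
        B[i] ^= A[i]

    # CNOT(b_i -> a_i) reversibly clears the measured bits out of A.
    for i in range(4):
        A[i] ^= B[i]

    print(A)   # [0, 0, 0, 0, x4, x5, x6, x7]: 4 bits of entropy left in A
    print(B)   # [x0, x1, x2, x3, 0, 0, 0, 0]: 4 incompressible bits now in B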

15
Entropy Conversion
[Figure, left and center: target system A holds qubits
a0–a7 with values x0–x7, all 8 bits unknown to demon
system B, whose qubits b0–b7 are all 0. After
CNOT(a0-3 → b0-3), B (reversibly) measures A: 4 bits of
A are known to B (correlation) and 4 remain unknown; B
now holds x0–x3, i.e., 4 bits of knowledge (8 bits all
together, compressible to 4 bits).]
  • In all stages, there remain 8 total bits of
    entropy.
  • All 8 are unknown to us in our
    meta-perspective.
  • But some may be known to subsystem B!
  • Still call them entropy for B if we don't
    expect B can compress them away

[Figure, right: after CNOT(b0-3 → a0-3), B (reversibly)
controls A: A now holds 0 0 0 0 x4–x7, with 4 bits
unknown to B; B still holds x0–x3, now 4 incompressible
bits in B's internal state of knowledge.]
16
Are we done?
  • I.e., have we arrived at a satisfactory
    generalization of the entropy concept?
  • Perhaps not quite, because
  • We've been vague about how to define the
    compression system that the knower would use.
  • Or in other words, the knower's prior
    distribution.
  • We havent yet provided an operational definition
    (that can be replicably verified by a third
    party) of the meaning of
  • The entropy of a physical system A, as assessed
    by another physical system (the knower) B.
  • However, there might be no way to do better

17
One Possible Conclusion
  • Perhaps the entropy of a particular piece of
    known information can only be defined relative to
    a given description system.
  • Where by description system I mean a bijection
    between compressed & decompressed
    informational objects ci ↔ di
  • Most usefully, the map should be computable.
  • This is not really any worse than the situation
    with standard statistical entropy, where it is
    only defined relative to a given state of
    knowledge, in the form of a probability
    distribution over states of the system.
  • The existence of optimal compression systems for
    given probability distributions strengthens the
    connection.
  • In fact, we can also infer a probability
    distribution from the description system, in
    cases of optimal description systems
  • We could consider a description system, rather
    than a probability distribution, to be the
    fundamental starting point for any discussion of
    entropy.
  • Perhaps we should not be disappointed

18
The Entropy Game
  • A thought experiment to illustrate why
  • Suppose C wants to know B's entropy for another
    system A.
  • Classical protocol:
  • B performs any desired …
20
The Entropy Game
  • A game (or adversarial protocol) between two
    players (A and B) that can be used to
    operationally define the entropy content of a
    given target physical system X.
  • X should have a well-defined state space,
    with N states; total information content Itot =
    log N.
  • Basic idea: A must use X (reversibly) as a
    storage medium for data provided by B.
  • The entropy of X is defined as its total
    info. content, minus the expected logarithm of
    the number of messages that A can reliably store
    in and retrieve from it.
  • Rules of the game:
  • A and B start out unentangled with each other
    (and with X).
  • A publishes his own exact initial classical
    state A0 in a public record.
  • B can probe A to make sure he is telling the
    truth.
  • Meanwhile, B prepares in secret any string W = W0
    of any number n of bits.
  • B passes his string W to A. A may observe its
    length n.
  • A may then carry out any fixed quantum algorithm
    Q1 operating on the closed joint system (A,X,W),
    under the condition:
  • The final state must leave (A,X,W) unentangled,
    A = A0, and W = 0^n.
  • B is allowed to probe A and W to verify that
    A = A0 and W = 0^n.
  • Finally, A carries out another fixed quantum
    algorithm Q2, returning again to his initial
    state A0, and supposedly restoring W to its
    initial state.
  • A returns W to B; B is allowed to check W and A
    again to verify that these conditions are
    satisfied.

Iterate till convergence.
Definition: The entropy of system X is Itot minus
the maximum over A's strategies (starting states
A0, and algorithms Q1, Q2) of the expectation
value (over states of X) of the minimum over B's
strategies (sequences of strings) of the average
length of those strings that are exactly
returned by A (in step 8) with zero probability
of error.
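
Not from the deck: a toy classical instance of the game, under a simplifying assumption of ours that X is an n-bit register of which k bits are scrambled and unusable by A. A can then reliably round-trip exactly 2^(n−k) distinct messages through X, so the game's definition reports entropy k.

    from math import log2

    n, k = 8, 3                          # hypothetical register size, scrambled bits
    I_tot = n                            # total information content: log2(2**n) bits
    reliable_messages = 2 ** (n - k)     # messages A can store & retrieve exactly
    entropy = I_tot - log2(reliable_messages)
    print(entropy)                       # 3.0 bits: the scrambled part of X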
21
Intuitions behind the Game
  • A wants to show that X has a low entropy (high
    available storage capacity, or extropy).
  • He will choose an encoding of strings W in X's
    state that is as efficient as possible.
  • A chooses his strategy without knowledge of what
    strings B will provide
  • The coding scheme must thus be very general.
  • Meanwhile, B wants to show that X has a high
    entropy (low capacity).
  • B will …

22
Explaining Entropy Increase
  • When the Hamiltonian of a closed system is
    exactly known,
  • The statistical (von Neumann) entropy of the
    system's density operator is exactly conserved.
  • I.e., there is no entropy increase.
  • In the traditional statistical view of entropy,
  • Entropy can only increase in one of the following
    situations:
  • (a) The Hamiltonian is not precisely known, or
  • (b) The system is not closed
  • Entropy can leak into the system from an outside
    environment
  • (c) We estimate entropy by tracing over entangled
    subsystems
  • Take reduced density operators of individual
    subsystems
  • And pretend the entropy is additive
  • However, in the …

23
Energy as Computing
  • Some history of the idea
  • Earliest hints can be seen in the original Planck
    E = hν relation for light.
  • That is, an oscillation with a period of τ
    requires an energy of at least h/τ.
  • Also suggestive is the energy-time uncertainty
    principle ΔEΔt ≥ ℏ/2.
  • Relates average energy uncertainty ΔE to minimum
    time intervals Δt.
  • Margolus & Levitin, Physica D 120:188-195 (1998).
  • Prove that a state of average energy E above the
    ground state takes at least time Δt = h/4E to
    evolve to an orthogonal one.
  • Or (N−1)h/2NE, for a cycle of N mutually
    orthogonal states.
  • Lloyd, Nature 406:1047-1054, 31 Aug. 2000.
  • Uses that to calculate the maximum performance of
    a 1 kg "ultimate laptop."
  • Levitin, Toffoli, Walton, quant-ph/0210076.
  • Investigate minimum time to perform a CNOT &
    phase rotation, given E.
  • Giovannetti, Lloyd, Maccone, Phys. Rev. A 67,
    052109 (2003), quant-ph/0210197; also see
    quant-ph/0303085.
  • Tighter limits on time to reduce fidelity to a
    given level, taking into account both E and ΔE,
    amount of entanglement, and number of interaction
    terms.
  • These kinds of results prompt us to ask:
  • Is there some valid sense in which we can say
    that energy is computing?
  • And if so, what is it, exactly?
  • We'll see this also relates to action as
    computation.
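
Not from the deck: plugging numbers (ours) into the bound just cited. The Margolus-Levitin time h/4E, applied to E = mc² for 1 kg, reproduces the roughly 5×10^50 ops/second figure from Lloyd's ultimate-laptop paper.

    h = 6.62607015e-34               # Planck constant, J*s
    c = 2.99792458e8                 # speed of light, m/s

    def min_flip_time(E):
        """Margolus-Levitin: minimum time to evolve to an orthogonal state."""
        return h / (4 * E)

    E = 1.0 * c**2                   # all the mass-energy of 1 kg: E = mc^2
    print(1 / min_flip_time(E))      # ~5.4e50 orthogonal transitions per second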

24
A Simple Example
  • Consider a constant Hamiltonian with energy
    eigenstates |G⟩ and |E⟩, with eigenvalues 0, E.
  • That is, H|G⟩ = 0, H|E⟩ = E|E⟩. E.g., H = ℏΩsz
    (plus a constant offset).
  • Consider the initial state |ψ(0)⟩ =
    (|G⟩ + |E⟩)·2^(−1/2).
  • cE phase-rotates at rate ωE = E/ℏ.
  • In time h/2E, it rotates by Δφ = π.
  • The area swept out by cE(t) is
  • aE = ½π(|cE|²) = π/4.
  • This is just ½ of a circle with radius rE =
    2^(−1/2).
  • Meanwhile, cG is stationary.
  • Sweeps out zero area.
  • Total area a = π/4.

[Figure: the complex plane (axes 1 and i); cE sweeps a
half-disc of radius r = 2^(−1/2) through Δφ = π, shaded
area a = π/4, while cG stays fixed.]
25
Let's Look at Another Basis
  • Define a new basis |0⟩, |1⟩ with
    |0⟩ = (|G⟩ + |E⟩)·2^(−1/2), |1⟩ = (|G⟩ − |E⟩)·2^(−1/2)
  • Use the same initial state |ψ(0)⟩ = |0⟩.
  • Note the final state is |1⟩.
  • Coefficients c0(t) and c1(t) trace out the
    path shown to the right
  • Note that the total area in this new basis is
    still π/4!
  • Area of a circle of radius ½.
  • Hmm, is this true for any basis? Yes!

[Figure: trajectories of c0 and c1 in the complex
plane; total shaded area a = π/4.]
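
Not from the deck: a numerical check (ours) of this basis-independence claim, integrating the swept area a = ∑j ∫ ½ Im(cj* dcj) along the trajectory in both bases. Both sums come out ≈ π/4.

    import numpy as np

    hbar, E = 1.0, 1.0
    w = E / hbar                           # phase-rotation rate of |E>
    t = np.linspace(0, np.pi / w, 200001)  # one half-rotation: time h/2E

    def swept_area(coeff_trajectories):
        """Total area a = sum_j of the integral of (1/2) Im(c_j^* dc_j)."""
        a = 0.0
        for c in coeff_trajectories:
            a += 0.5 * np.sum(np.imag(np.conj(c[:-1]) * np.diff(c)))
        return a

    cG = np.full_like(t, 1 / np.sqrt(2), dtype=complex)  # stationary
    cE = np.exp(1j * w * t) / np.sqrt(2)                 # rotating at rate w
    print(swept_area([cG, cE]))          # ~0.7854 = pi/4 (energy basis)

    c0 = (cG + cE) / np.sqrt(2)          # |0> = (|G> + |E>)/sqrt(2)
    c1 = (cG - cE) / np.sqrt(2)          # |1> = (|G> - |E>)/sqrt(2)
    print(swept_area([c0, c1]))          # ~0.7854 = pi/4 (same in new basis)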
26
Action: Some Terminology
  • A physical action is, most generally, the
    integral of an energy over time.
  • Or, along some temporalizable path.
  • Typical units of action: h or ℏ.
  • Correspond to angles of 1 circle and 1 radian,
    respectively.
  • Normally, the word action is reserved to refer
    specifically to the action of the Lagrangian L.
  • This is the action in Hamilton's least-action
    principle.
  • However, note that we can broaden our usage a bit
    and equally well speak of the action of any
    quantity that has units of energy.
  • E.g., the action of the Hamiltonian H = L + pv =
    L + p²/m.
  • Warning: I will use the word action in this
    more generalized sense!

27
Action as Computation
  • We will argue: Action is computation.
  • That is, an amount of action corresponds exactly
    to an amount of physical quantum-computational
    work.
  • Defined in an appropriate sense.
  • The type of action corresponds to the type of
    computational work performed, e.g.,
  • Action of the Hamiltonian = All computational
    work.
  • Action of the Lagrangian = Internal
    computational work.
  • Action of pv = Motional computational work
  • We will show exactly what we mean by all this,
    mathematically

28
Action of the Hamiltonian
  • Consider now the action A (eq. (1) below) of any
    time-dependent Hamiltonian operator H(t).
  • Note that A is an Hermitian observable as well.
  • The H determines state-vector dynamics via the
    usual Schrödinger relation dψ/dt = iHψ/ℏ.
  • For our purposes, we are adopting the opposite of
    the usual (but arbitrary) sign convention in this
    equation.
  • This leads to the time-evolution operator (2)
    below
  • Given H(t) → A(t0,t) → U(t0,t), any initial
    vector v(t0) yields a complete state trajectory
    v(t) = U(t,t0) v(t0).

(1) A(t0,t) = ∫ from t0 to t of H(t′) dt′
(2) U(t0,t) = T exp(iA(t0,t)/ℏ)
    (time-ordered exponential; just exp(iA/ℏ) for constant H)
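
Not from the deck: a minimal sketch (ours) of eqs. (1)-(2) for a constant H, using the slide's sign convention; the matrix, energy, and times below are arbitrary.

    import numpy as np

    hbar = 1.0
    E = 2.0
    H = np.diag([0.0, E])            # constant H: H|G> = 0, H|E> = E|E>
    t0, t = 0.0, 1.5

    A = H * (t - t0)                 # eq. (1) reduces to H*(t - t0) for constant H
    # eq. (2): U = exp(iA/hbar); H is diagonal here, so exponentiate entrywise.
    U = np.diag(np.exp(1j * np.diag(A) / hbar))

    v0 = np.array([1.0, 1.0]) / np.sqrt(2)   # initial state (|G> + |E>)/sqrt(2)
    v = U @ v0                               # trajectory endpoint v(t) = U v0
    work = np.real(np.conj(v0) @ A @ v0)     # <A>_v0 = p_E * E * (t - t0)
    print(work)                              # 1.5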
29
Some Nice Identities for A
  • Consider applying a specific operator A itself to
    any initial state v0.
  • For any observable A, we'll use shorthand like
    ⟨A⟩v0 = ⟨v0|A|v0⟩.
  • It is easy to show that ⟨A⟩v0 is equal to all of
    the following:
  • The quantum-average net phase-angle accumulation
    of the coefficients ci of v's components in H's
    energy eigenbasis vi, weighted by the component
    probabilities (3).
  • The line integral, along v's trajectory, of the
    magnitude of the imaginary part of the inner
    product ⟨v | v + dv⟩ between adjacent states
    (4).
  • Exactly twice the net area a swept out in the
    complex plane (relative to the complex origin) by
    v's coefficients cj, in any basis vj.
  • We will prove this.
  • Note that the value of ⟨A⟩v0 therefore depends only
    on the specific trajectory v(t) that is taken by
    v0 itself,
  • and not on any other properties of the complete
    Hamiltonian that was used to implement that
    trajectory!
  • For example, it doesn't depend on the energy ⟨H⟩u
    assigned to other states u that are orthogonal to
    v.

(3) ⟨A⟩v0 = ℏ ∑i |ci|² Δφi
(4) ⟨A⟩v0 = ℏ ∫ |Im⟨v|dv⟩| along v's trajectory
30
Area swept out in energy basis
  • For a constant Hamiltonian,
  • By a coefficient ci of an energy basis vector vi.
  • If ri = |ci| = 1, the area swept out is ½ of the
    accumulated phase angle.
  • For ri < 1, note the area is this times ri².
  • Sum over i: ½ (avg. phase angle accumulated) = ½
    (action of Hamiltonian).
[Figure: circular arc of radius ri swept out by the
coefficient ci.]
31
In other bases
  • Both the phase and magnitude of each coefficient
    will change, in general
  • - The area swept out is no longer just a
    corresponding fraction of a circular disc.
  • It's not immediately obvious that
    the sum of the areas swept out by all the
    cj's will still be the same in the new
    basis.
  • We'll show that indeed it is.

32
Basis-Independence of a
  • Note that each cj(t) trajectory is just a sum of
    circular motions
  • Namely, a linear superposition of the ci(t)
    motions
  • Since each circular component motion is
    continuous and differentiable, so is their sum.
  • The trajectory is everywhere a smooth curve.
  • No sharp, undifferentiable corners.
  • Thus, in the limit of arbitrarily short time
    intervals, the path can always be treated as
    linear.
  • Area daj approaches ½ the parallelogram area rj
    rj′ sin dθ = ½ (cj × cj′)
  • Cross product of complex numbers considered as
    vectors
  • Use a handy complex identity: a*b = (a·b) + i(a×b)
    (with * denoting complex conjugation)
  • Implies that daj = ½ Im[cj* cj′]
  • So, da = ½ Im⟨v|v′⟩.
  • So da is basis-independent, since the inner
    product ⟨v|v′⟩ is!

33
Computational Work of a Hamiltonian applied to a
system
  • Suppose we're given a time-dependent Hamiltonian
    H(t), a specific initial state v, and a time
    interval (t0, t)
  • We can of course compute the operator A(t0,t)
    from H.
  • We'll call ⟨A⟩v the computational work performed
    according to the specific action operator A (or
    by H acting from t0 to t) on the initial state
    v.
  • Later we will see some reasons why this
    identification makes sense.
  • For now, take it as a definition of what we mean
    by computational work
  • If we are given only a set V of possible initial
    vectors,
  • The (maximum, minimum) work of A (or H from t0 to
    t) is (5)
  • If we had a prob. dist. over V (or equiv., a
    mixed state ρ),
  • we could instead discuss the expected work (6) of
    A acting on V

(5) Amax(V) = max over v∈V of ⟨A⟩v;  Amin(V) = min over v∈V of ⟨A⟩v
(6) ⟨A⟩V = E over v∈V of [⟨A⟩v] = Tr(ρA)
34
Computational Effort to Cause a Desired Change
  • If we are interested in taking v0 to v1, and we
    have a set 𝒜 of available action operators A
    (implied, perhaps, by a set of available
    Hamiltonians H(t))
  • we define the minimum work or effort to get
    from v0 to v1, (7)
  • Maximizing over 𝒜 isn't very meaningful, since it
    may often yield ∞.
  • And if we have a desired unitary transform U that
    we wish to perform on any of a set of vectors V,
    given a set 𝒜 of available action operators,
  • Then we can define the minimum (over 𝒜)
    worst-case (over V) work to perform U, or
    worst-case effort to do U (8).
  • Similarly, we can discuss the best-case effort to
    do U,
  • or (if we have vector probabilities) the minimum
    (over 𝒜) expected (over V) work to do U, or
    expected effort to do U (9).

(7) effort(v0 → v1) = min over A∈𝒜 with UA v0 = v1 of ⟨A⟩v0
(8) worst-case effort(U) = min over A∈𝒜 with UA = U of (max over v∈V of ⟨A⟩v)
(9) expected effort(U) = min over A∈𝒜 with UA = U of (E over v∈V of ⟨A⟩v)
35
The Justification for All This
  • Why do we insist on referring to these concepts
    as "computational work" or "computational
    effort"?
  • One could imagine other possible terms, such as
    "amount of change," "physical effort," "the
    original action of the Hamiltonian," etc.
  • What is so gosh-darned computational about this
    concept?
  • Answer: We can use these concepts to quantify the
    size or difficulty of, say, quantum
    logic-gate operations.
  • And by extension, classical reversible operations
    embedded in quantum operations
  • And by extension, classical irreversible Boolean
    ops, embedded within classical reversible gates
    with disposable ancillas
  • As well as larger computations composed from such
    primitives.
  • The difficulty of a given computational op
    (considered as a unitary U) is given by its
    effort (minimized work over 𝒜)
  • We can meaningfully discuss an operation's
    minimum, maximum, or expected effort over a given
    space of possible input states.

36
But, you say, "Hamiltonian energy is only defined
up to an additive constant"
  • Still, the effort of a given U can be a
    well-defined (and non-negative) quantity, IF
  • We adopt an appropriate and meaningful zero of
    energy!
  • One useful convention
  • Define the least eigenvalue (ground state energy)
    of H to be 0.
  • This ensures that energies are always positive.
  • However, we might want to do something different
    than this in some cases
  • E.g., if the ground-state energy varies, and it
    includes energy that had to be explicitly
    transferred in from another subsystem
  • Another possible convention
  • We could count total gravitating mass-energy
  • Anyway, let's agree, at least, to just always
    make sure that all energies are positive, OK?
  • Then the action is always positive, and we don't
    have to worry about trying to make sense of a
    negative amount of computational work.

37
Energy as Computing
  • Given that Action is computation,
  • That is, amount of computation,
  • where the suffix -ation denotes a noun,
  • i.e., the act itself,
  • What, now, is energy?
  • Answer: Energy is computing.
  • By which I mean, rate of computing activity.
  • The suffix -ing denotes a verb,
  • the (temporal) carrying out of an action
  • This should be clear, since H(t) = dA/dt
  • Thus the Hamiltonian energy of any given state is
    the rate at which computational work is being (or
    would be) performed on (or by, if you prefer)
    that state.

38
Applications of the Concept
  • How is all this useful?
  • It lets us calculate time/energy tradeoffs for
    performing operations of interest.
  • It can help us find (or define) lower bounds on
    the number of operations of a given type needed
    to carry out a desired computation.
  • It can tell us that a given implementation of
    some computation is optimal.

39
Time/Energy Tradeoffs
  • Suppose you determine that the effort of a
    desired v1 → v2 or U(V) (given the available
    actions 𝒜) is A.
  • For a multi-element state set V, this could be a
    minimum, maximum, or expected effort
  • And, suppose the energy that is available to
    invest in the system in question is at most E.
  • This then tells you directly that the
    minimum/maximum/expected (resp.) time to perform
    the desired transformation will be t ≥ A/E.
  • To achieve equality might require varying the
    energy of the state over time, if the optimal
    available H(t) says to do so
  • Conversely, suppose we wish to perform a
    transformation in at most time t.
  • This then immediately sets a scale-factor for the
    magnitude of the energy E that must be devoted to
    the system in carrying out the optimal
    Hamiltonian trajectory H(t); i.e., E ≥ A/t.
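
Not from the deck: a numeric illustration (ours) of t ≥ A/E, using the worst-case NOT-gate effort A = h/2 derived on the following slides and an assumed energy budget of 1 eV.

    h = 6.62607015e-34               # Planck constant, J*s
    eV = 1.602176634e-19             # one electron-volt in joules

    A = h / 2                        # worst-case effort of a NOT (see slide 42)
    E = 1.0 * eV                     # assumed energy devoted to the qubit
    print(A / E)                     # minimum time t >= A/E: ~2.1e-15 s per NOT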

40
Single-Qubit Gate Scenario
  • Let's first look at 2-state (1-qubit) systems.
  • Later we'll consider larger systems.
  • Let U be any unitary operator in U(2).
  • I.e., any arbitrary 1-qubit quantum logic gate.
  • Let the vector set V consist of the sphere of
    all unit vectors in the Hilbert space ℋ2.
  • Given this scenario, the minimum effort to do any
    U is always 0 (just let v be an eigenvector of
    U), and is therefore uninteresting.
  • Instead we'll consider the maximum effort.
  • What about our space 𝒜 of available action
    operators?
  • Suppose for now, for simplicity, that all
    time-dependent Hermitian operators on ℋ2 are
    available as Hamiltonians.
  • Really we only need the time-independent ones,
    however.
  • Thus, 𝒜 consists of all (constant) Hermitian
    operators.

41
Analysis of Maximum Effort
  • The maximum effort to do U (in this scenario)
    arises from considering a geodesic trajectory
    in U(2).
  • All the worst-case state vectors just follow the
    most direct path along the unit sphere in
    Hilbert space to get to their destinations.
  • Other vectors go along for the ride on the
    necessary rotation.
  • The optimal unitary trajectory U(t0,t) then
    amounts to a continuous rotation of the Bloch
    sphere around a certain axis in 3-space
  • where the poles of the rotation axis are the
    eigenvectors of U.
  • Also, there's a simultaneous (commuting) global
    phase-rotation.
  • If we also adopt the convention that the
    ground-state energy of H is defined to be 0,
  • Then the global phase-rotation factor goes away,
  • And we are left with a total effort A that turns
    out to be exactly equal to ℏθ, where 0 ≤ θ ≤ π is
    simply the (minimum) required angle of
    Bloch-sphere rotation to implement the given U.

42
Some Special Cases
  • Pauli operators X, Y, Z (including X = NOT), as
    well as the Hadamard gate:
  • Bloch sphere rotation angle π (rads)
  • Maximum effort h/2
  • Square root of NOT; also the phase gate (square
    root of Z):
  • Rotation angle π/2, effort h/4.
  • π/8 gate (square root of the phase gate):
  • Rotation angle π/4, effort h/8.

43
Fidelity and Infidelity
  • The fidelity between pure states u, v is defined
    as F(u,v) = |⟨u|v⟩|.
  • So, F² is the probability of conflating the two.
  • Define the infidelity between u, v as
    I(u,v) = (1 − F²)^(1/2).
  • Thus, I² = 1 − F² is the probability that if
    state u is measured in a basis that includes v as
    a basis vector, it will project to a basis state
    other than v.
  • Infidelity is thus a distance metric between
    states

44
Effort Required for Infidelity
  • Guess what, a Bloch-sphere rotation by an angle
    of θ gives a maximum (over V) infidelity of
    I(θ) = sin(θ/2).
  • Meanwhile, the minimum fidelity is cos(θ/2)
  • You'll notice that F² + I² = 1, as probabilities
    should.
  • Therefore, achieving an infidelity of I requires
    performing a U whose maximum effort is at least
    A = 2ℏ arcsin(I).
  • However, the specific initial states that
    actually achieve this infidelity under the
    optimal rotation are Bloch equator states
  • Equal superpositions of high and low energy
    eigenstates
  • They perform a quantum-average amount of
    computational work that is only half of the
    maximum effort.
  • Thus, the actual work required for an infidelity
    of I is only half of the maximum effort, or W =
    A/2 = ℏ arcsin(I).
  • And so, a specific state that carries out an
    amount of computational work W ≤ πℏ/2 can achieve
    an infidelity of at most I = sin(W/ℏ), while
    maintaining a fidelity of at least F = cos(W/ℏ)
  • A nice simple relation! Especially if we let ℏ = 1
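
Not from the deck: the slide's relation as two tiny helper functions (our names), working in units where ℏ = 1.

    import numpy as np

    hbar = 1.0

    def work_for_infidelity(I):
        """Computational work needed to reach infidelity I: W = hbar*arcsin(I)."""
        return hbar * np.arcsin(I)

    def infidelity_from_work(W):
        """Best achievable infidelity from work W <= pi*hbar/2: I = sin(W/hbar)."""
        return np.sin(W / hbar)

    print(work_for_infidelity(1.0))          # pi/2: work for a full bit-flip
    print(infidelity_from_work(np.pi / 4))   # ~0.707; fidelity is also ~0.707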

45
Multi-qubit Gates
  • Some multi-qubit gates are easy to analyze
  • E.g., controlled-U gates that perform a unitary
    U on one qubit only when all of the other qubits
    are 1
  • If the space of Hamiltonians is truly totally
    unconstrained, then (it seems) the effort of
    these will match that of the corresponding 1-bit
    gates.
  • However, in reality we don't have such
    fine-tailored Hamiltonians readily available.
  • A more thorough analysis would analyze the effort
    in terms of a Hamiltonian that's expressible as a
    sum of realistically-available, 1- and 2-qubit
    controllable interaction terms.
  • We haven't tried to do this yet

46
Conclusion
  • We can define a clear and stable measure of the
    length of any continuous state trajectory in
    Hilbert space. (Call it computational work.)
  • It's simply given by the action of the
    Hamiltonian.
  • It has a nice geometric interpretation as well.
  • From this, we can accordingly define the size
    (or effort) of any unitary transformation.
  • As the worst-case (or average-case) path length,
    minimized over the available Hamiltonians.
  • We can begin to quantify the effort required for
    various quantum gates of interest
  • From this, we can compute lower bounds on the
    time to implement them for states of given energy.

47
Temperature as Clock Speed
48
Momentum as Motional Computation per unit
Distance
  • For a system moving at velocity v:
  • Let β = v/c (dimensionless velocity)
  • Let γ̄ = (1 − β²)^(1/2) (reciprocal of the
    relativistic gamma)
  • Split up the Hamiltonian H into:
  • An L = H − pv (Lagrangian, internal) part
  • and an M = pv (Motional) part.
  • At velocity v = 0, let H0 = E0 (rest mass-energy)
  • Then we find that:
  • L = γ̄E0 …