A Review of Polynomial Representations and Arithmetic in Computer Algebra Systems

1
A Review of Polynomial Representations and
Arithmetic in Computer Algebra Systems
  • Richard Fateman
  • University of California
  • Berkeley

2
Polynomials force basic decisions in the design
of a CAS
  • Computationally, nearly every applied-math
    object is either a polynomial or a ratio of
    polynomials
  • Polynomial representation(s) can make or break a
    system: compactness, speed, generality
  • Examples: Maple's "object too big" error;
    SMP's only-double-float coefficients
  • A wasteful representation can move a computation
    from RAM speed to disk speed (Poisson series)

3
Polynomial Representation
  • Flexibility vs. efficiency
  • Most rigid: fixed array of floating-point numbers
  • Among the most flexible: hash table keyed by
    exponents, with arbitrary coefficients.
  • Example: (cos(z) + sqrt(2)) x^3y^4z^5
  • key is (3,4,5), entry is (+ (cos z) (expt 2 1/2))
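The flexible end of the spectrum is easy to sketch in a few lines of Python. This is an illustration of the idea, not any particular system's internal format; the Lisp-style entry is just the symbolic coefficient carried along verbatim.

```python
# Sparse polynomial as a hash table: exponent tuple -> coefficient.
# The coefficient may be anything, here a Lisp-style expression
# standing in for cos(z) + sqrt(2).
p = {(3, 4, 5): "(+ (cos z) (expt 2 1/2))"}   # (cos z + sqrt 2) x^3 y^4 z^5

q = {(100, 301): 34}                          # 34 x^100 y^301

def add_term(poly, expts, coeff):
    """Add coeff * x^expts to poly in place, deleting terms that cancel."""
    c = poly.get(expts, 0) + coeff
    if c == 0:
        poly.pop(expts, None)    # never store explicit zeros
    else:
        poly[expts] = c
    return poly

add_term(q, (100, 301), -34)     # the term cancels: q is the zero polynomial
```

Note how introducing a new variable costs nothing here: the exponent tuple simply grows a slot, which is exactly the flexibility the rigid dense array gives up.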

4
A divergence in character of the data
  • Dense polynomials (1 or more vars)
  • 3 + 4x + 5x^2 + 0x^3 + 9x^4
  • 1 + 2x + 3y + 4xy + 5x^2 + 0y^2 + 7x^2y +
    8xy^2 + 9x^2y^2
  • Sparse (1 or more vars)
  • x^100 + 3x^10 + 43
  • 34x^100y^301
  • a + b + c + d + e

5
Dense Representation
  • Set the number of variables v, say v = 3 for x, y, z
  • Set the maximum degree d of monomials in each
    variable (or d = max of all degrees)
  • All coefficients represented, even if 0.
  • Size of such a polynomial is (d+1)^v times the
    maximum size of a coefficient
  • Multidimensional arrays look good.

6
Dense representation is almost the same as bignums..
  • Set the number of variables to 1, and call the
    variable 10 (i.e., radix-10 integers)
  • All coefficients are in the set {0, ..., 9}
  • Carry is an additional operation.

7
Sparse Representation
  • New variables are usually easy to introduce
  • Many variables may not occur more than once
  • Only some set S of non-zero coefficients is
    represented
  • Linked lists, hash tables, explicit storage of
    exponents.

8
This is not a clean-cut distinction
  • When does a sparse polynomial "fill in"?
  • Consider powering:
  • 1 + x^5 + x^11 is sparse
  • Raise it to the 5th power: x^55 + 5x^49 + 5x^44 +
    10x^43 + 20x^38 + 10x^37 + 10x^33 +
    30x^32 + 5x^31 + 30x^27 + 20x^26 +
    x^25 + 10x^22 + 30x^21 + 5x^20 + 20x^16 +
    10x^15 + 5x^11 + 10x^10 + 5x^5 + 1
  • This is looking rather dense.
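The fill-in is easy to reproduce with a toy sparse multiplier (a sketch; the dict-of-exponents format is just for illustration):

```python
from collections import defaultdict

def mul(p, q):
    """Classical sparse multiply on exponent -> coefficient dicts."""
    r = defaultdict(int)
    for e1, c1 in p.items():
        for e2, c2 in q.items():
            r[e1 + e2] += c1 * c2
    return {e: c for e, c in r.items() if c != 0}

p = {0: 1, 5: 1, 11: 1}      # 1 + x^5 + x^11: only 3 terms
p5 = p
for _ in range(4):           # raise to the 5th power by repeated multiply
    p5 = mul(p5, p)

print(len(p5))               # 21 of a possible 56 terms: filling in fast
```

Three terms in, twenty-one terms out: a representation tuned for sparseness starts paying dense-representation costs.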

9
Similarly for multivariate
  • When does a sparse polynomial "fill in"?
  • Consider powering:
  • a + b + c is completely sparse
  • Cube it: c^3 + 3bc^2 + 3ac^2 + 3b^2c + 6abc +
    3a^2c + b^3 + 3ab^2 + 3a^2b + a^3
  • This is looking completely dense.

10
How many terms in a power?
  • Dense:
  • size (d+1)^v raised to the power n becomes (nd+1)^v.
  • Sparse:
  • assume a completely sparse t-term polynomial p =
    x1 + x2 + ... + xt.
  • Size(p^n) is binomial(t+n-1, n).
  • Proof: if t = 2 we have the binomial theorem.
  • If t > 2, rewrite p = xt + (poly with 1 fewer
    term)
  • and use induction.
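The sparse count can be checked directly (a sketch; distinct variables x1..xt are encoded as unit exponent tuples so no exponents ever collide):

```python
from math import comb
from collections import defaultdict

def mul(p, q):
    """Sparse multiply; keys are exponent tuples, one slot per variable."""
    r = defaultdict(int)
    for e1, c1 in p.items():
        for e2, c2 in q.items():
            r[tuple(a + b for a, b in zip(e1, e2))] += c1 * c2
    return dict(r)

def power(p, n):
    r = p
    for _ in range(n - 1):
        r = mul(r, p)
    return r

t, n = 5, 4                       # p = x1 + x2 + x3 + x4 + x5
p = {tuple(int(i == j) for j in range(t)): 1 for i in range(t)}
assert len(power(p, n)) == comb(t + n - 1, n)   # binomial(8, 4) = 70 terms
```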

11
A digression on asymptotic analysis of algorithms
  • Real data is not asymptotically large
  • Many operations are performed on modest-sized inputs
  • Counting the operations may not be the right measure
  • Are the coefficient operations constant cost?
  • Is arithmetic dominated by storage allocation in
    small cases?
  • Benchmarking helps, as does careful counting
  • Most of the asymptotically fastest algorithms are
    not actually used in CAS

12
Adding polynomials
  • Uninteresting problem, generally
  • Merge the two inputs
  • Combine terms that need to be combined
  • Part of multiplication: partial sums
  • What if the terms are not produced in sorted
    order? O(n) becomes O(n log n), or, if done naively
    (e.g., insert each term as generated), perhaps
    O(n^2). UGH.
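The O(n) merge of two sorted term lists can be sketched as follows (terms as (exponent, coefficient) pairs, descending by exponent; an illustration only):

```python
def add_sorted(p, q):
    """Merge two term lists sorted by decreasing exponent.
    Runs in O(len(p) + len(q)); equal exponents combine, zeros drop out."""
    out, i, j = [], 0, 0
    while i < len(p) and j < len(q):
        (ep, cp), (eq, cq) = p[i], q[j]
        if ep > eq:
            out.append(p[i]); i += 1
        elif ep < eq:
            out.append(q[j]); j += 1
        else:                          # equal exponents: combine the terms
            if cp + cq != 0:
                out.append((ep, cp + cq))
            i += 1; j += 1
    return out + p[i:] + q[j:]

# (x^3 + 2x + 1) + (x^2 - 2x) = x^3 + x^2 + 1
print(add_sorted([(3, 1), (1, 2), (0, 1)], [(2, 1), (1, -2)]))
```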

13
Multiplying polynomials - I
  • The way you learned in high school
  • M = Size(p)·Size(q) coefficient multiplies.
  • How many adds? Terms combine exactly when their
    products do not appear as separate terms in the
    answer: the number of adds is
    Size(p)·Size(q) - Size(p·q).
  • Comment on the use of the above:
  • we have formulas for the size of p, the size of p^2, ...
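The operation counts can be verified with an instrumented high-school multiply (a toy sketch on exponent -> coefficient dicts):

```python
def mul_counting(p, q):
    """Classical multiply, counting the coefficient operations."""
    r, mults, adds = {}, 0, 0
    for e1, c1 in p.items():
        for e2, c2 in q.items():
            mults += 1
            e = e1 + e2
            if e in r:
                adds += 1          # an add happens only when terms combine
                r[e] += c1 * c2
            else:
                r[e] = c1 * c2
    return r, mults, adds

p = {0: 1, 1: 1}                   # 1 + x
q = {0: 1, 1: 2, 2: 1}             # 1 + 2x + x^2
r, mults, adds = mul_counting(p, q)
assert mults == len(p) * len(q)            # Size(p)*Size(q) = 6
assert adds == len(p) * len(q) - len(r)    # 6 - 4 = 2
```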

14
Multiplying polynomials - II
  • Karatsuba's algorithm
  • Applies to polynomials f and g if each has 2 or more terms
  • Split each: if f and g are of degree d, consider
  • f = f1·x^(d/2) + f0
  • g = g1·x^(d/2) + g0
  • Note that f1, f0, g1, g0 are polys of degree d/2.
  • Note there are a bunch of nagging details, but it
    still works.
  • Compute A = f1·g1, C = f0·g0 (recursively!)
  • Compute D = (f1+f0)·(g1+g0) (recursively!)
  • Compute B = D - A - C
  • Return A·x^d + B·x^(d/2) + C (no further mults).
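One way to sketch the recursion on dense coefficient lists; padding to a power-of-two length (one of the "nagging details") keeps the halves splitting evenly. An illustration, not a tuned implementation:

```python
def kmul(f, g):
    """Karatsuba on dense coefficient lists (index = exponent)."""
    n = max(len(f), len(g))
    if n <= 1:
        return [f[0] * g[0]] if f and g else []
    m = 1
    while m < n:                           # pad to a power of two
        m *= 2
    f = f + [0] * (m - len(f))
    g = g + [0] * (m - len(g))
    h = m // 2
    f0, f1 = f[:h], f[h:]                  # f = f1*x^h + f0
    g0, g1 = g[:h], g[h:]
    A = kmul(f1, g1)                       # only three recursive
    C = kmul(f0, g0)                       # multiplies, not four
    D = kmul([a + b for a, b in zip(f0, f1)],
             [a + b for a, b in zip(g0, g1)])
    B = [d - a - c for d, a, c in zip(D, A, C)]
    out = [0] * (2 * m - 1)                # result = A*x^(2h) + B*x^h + C
    for i, c in enumerate(C):
        out[i] += c
    for i, c in enumerate(B):
        out[i + h] += c
    for i, c in enumerate(A):
        out[i + 2 * h] += c
    return out

print(kmul([1, 1], [1, 1]))                # [1, 2, 1], i.e. (1 + x)^2
```

When an input is padded, the result carries trailing zero coefficients, which a real system would trim.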

15
Multiplying polynomials - II
  • Karatsubas algorithm
  • Cost in multiplications: the cost of multiplying
    polys of size 2r is cost(2r) = 3·cost(r), so
    cost(s) = s^log2(3) ≈ s^1.585. Looking good.
  • Cost in adds: about 5.6·s^1.585
  • Cost (adds + mults): 6.6·s^1.585
  • Classical (adds + mults) is 2s^2

16
Karatsuba wins for degree > 18
17
Analysis potentially irrelevant
  • Consider that BOTH inputs must be of the same
    degree, or time is wasted
  • If the inputs are sparse, forming the sums f1+f0
    and g1+g0 probably makes the inputs denser
  • A cost improvement by a factor of 3 is only
    reached at size 256.
  • Coefficient operations are no longer really
    constant time.

18
Multiplication III: evaluation
  • Consider H(x) = F(x)·G(x), all polys in x
  • H(0) = F(0)·G(0)
  • H(1) = F(1)·G(1), etc.
  • If degree(F) + degree(G) = 12, then degree(H) is
    12. We can completely determine, by
    interpolation, a polynomial of degree 12 from 13
    values H(0), ..., H(12). Thus we compute the
    product H = F·G.
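The whole evaluate/interpolate scheme can be sketched with exact rational arithmetic and Lagrange interpolation (a toy illustration of the idea, not how a production system would do it):

```python
from fractions import Fraction

def horner(coeffs, x):
    """Evaluate sum(coeffs[i] * x^i) with d adds and d multiplies."""
    acc = 0
    for c in reversed(coeffs):
        acc = acc * x + c
    return acc

def mul_by_interpolation(f, g):
    """Multiply dense coefficient lists (index = exponent) by
    evaluating at deg(H)+1 points and interpolating exactly."""
    n = len(f) + len(g) - 1                   # number of sample points
    xs = list(range(n))
    ys = [horner(f, x) * horner(g, x) for x in xs]
    h = [Fraction(0)] * n
    for j in range(n):                        # Lagrange basis poly for xs[j]
        basis, denom = [Fraction(1)], 1
        for k in range(n):
            if k == j:
                continue
            denom *= xs[j] - xs[k]
            new = [Fraction(0)] * (len(basis) + 1)
            for i, b in enumerate(basis):     # basis *= (x - xs[k])
                new[i] -= xs[k] * b
                new[i + 1] += b
            basis = new
        for i, b in enumerate(basis):
            h[i] += ys[j] * b / denom
    return [int(c) for c in h]

print(mul_by_interpolation([1, 1], [1, 2, 1]))   # (1+x)^3 -> [1, 3, 3, 1]
```

The O(d^2) interpolation loop is exactly the cost term the next slide complains about; the FFT idea replaces both the evaluations and this loop.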

19
What does evaluation cost?
  • Horner's rule evaluates a degree-d polynomial in
    d adds and d multiplies. Assume for simplicity
    that degree(F) = degree(G) = d, so H is of degree
    2d.
  • We need (2d+1) evals of F and (2d+1) evals of G,
    each of cost d: (2d)(2d+1) operations.
  • We need (2d+1) multiplies of the values.
  • We need to compute the interpolating polynomial,
    which takes O(d^2).
  • Cost seems to be (2d+2)(2d+1) + O(d^2), so it is
    up above classical high-school multiplication.

20
Can we evaluate/interpolate faster?
  • We can choose any n points to evaluate at:
    instead of 0, 1, ... we can choose w, w^2,
    w^3, ..., where w = an nth root of unity in some
    arithmetic domain.
  • We can then use the fast Fourier transform to do
    these evaluations not in time n^2 but n·log(n).

21
About the FFT
  • Major digression; see the notes for the way it can
    be explained in a FINITE FIELD.
  • Evaluation: you multiply a vector of
    coefficients by a special matrix F. The form of
    the matrix makes it possible to do the
    multiplication very fast.
  • Interpolation: you multiply by the inverse of F.
    Also fast.
  • Even so, there is a start-up overhead:
    translating the problem as given into the right
    domain, translating back, rounding up the size.
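A minimal radix-2 FFT over a finite field can be sketched as follows; the prime 17 and the primitive 8th root of unity 9 are chosen here purely for illustration:

```python
def ntt(a, w, p):
    """Radix-2 FFT over F_p: evaluate the poly with coefficients a
    at w^0, w^1, ..., w^(n-1), where w is a primitive n-th root of
    unity mod p and n = len(a) is a power of two.  O(n log n)."""
    n = len(a)
    if n == 1:
        return a
    even = ntt(a[0::2], w * w % p, p)      # the two auxiliary polynomials:
    odd = ntt(a[1::2], w * w % p, p)       # f(x) = even(x^2) + x*odd(x^2)
    out, wk = [0] * n, 1
    for k in range(n // 2):
        t = wk * odd[k] % p
        out[k] = (even[k] + t) % p
        out[k + n // 2] = (even[k] - t) % p    # uses w^(n/2) = -1
        wk = wk * w % p
    return out

p, w = 17, 9                               # 9 is a primitive 8th root mod 17
f = [1, 2, 1, 0, 0, 0, 0, 0]               # (1 + x)^2, padded to length 8
vals = ntt(f, w, p)
# Each entry really is f evaluated at the corresponding power of w:
assert vals[1] == (1 + 2 * 9 + 9 * 9) % 17
```

Padding (1 + x)^2 out to length 8 is the "rounding up the size" overhead mentioned above, and working mod 17 is the "translating into the right domain".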

22
How to compute powers of a polynomial: RMUL
  • RMUL (repeated multiplication)
  • P^n = P·P^(n-1)
  • Algorithm: set ans = 1;
  • for i = 1 to n do ans = P·ans;
  • return ans.

23
How to compute powers of a polynomial: RSQ
  • RSQ (repeated squaring)
  • P^(2n) = (P^n)·(P^n), computing P^n once.
  • P^(2n+1) = P·P^(2n): reduce to the previous case.
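The two schemes can be sketched generically; in this toy sketch integers stand in for polynomials so the multiplication counts are easy to check:

```python
def rmul(p, n, mul):
    """RMUL: multiply by p repeatedly; n-1 multiplications."""
    ans = p
    for _ in range(n - 1):
        ans = mul(ans, p)
    return ans

def rsq(p, n, mul):
    """RSQ: square when the exponent is even, multiply by p when odd."""
    if n == 1:
        return p
    if n % 2 == 0:
        half = rsq(p, n // 2, mul)
        return mul(half, half)
    return mul(p, rsq(p, n - 1, mul))

count = [0]
def counting_mul(a, b):
    count[0] += 1
    return a * b

count[0] = 0
assert rmul(3, 16, counting_mul) == 3 ** 16 and count[0] == 15
count[0] = 0
assert rsq(3, 16, counting_mul) == 3 ** 16 and count[0] == 4
```

Fewer multiplications does not settle the question: as the later slides show, RSQ's multiplications are between much larger operands, which is why RMUL can still win.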

24
How to compute powers of a polynomial: BINOM
  • BINOM (binomial expansion)
  • P a monomial: just fiddle with the exponents.
  • P = (p1 + p2 + ... + pd) = (a + b)^n. Compute
    sum(binomial(n,i)·a^i·b^(n-i), i = 0..n).
  • Computing b^2 by BINOM, b^3 by RMUL, etc.
  • Alternative: split P into two nearly-equal
    pieces.
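BINOM can be sketched on the same sparse dicts; the split of P into pieces a and b below is my own choice for the example, not prescribed by the slide:

```python
from math import comb

def mul(p, q):
    """Classical sparse multiply on exponent -> coefficient dicts."""
    r = {}
    for e1, c1 in p.items():
        for e2, c2 in q.items():
            r[e1 + e2] = r.get(e1 + e2, 0) + c1 * c2
    return r

def add_scaled(p, q, s):
    """p + s*q, dropping terms that cancel."""
    r = dict(p)
    for e, c in q.items():
        r[e] = r.get(e, 0) + s * c
    return {e: c for e, c in r.items() if c != 0}

def binom_power(a, b, n):
    """BINOM: (a + b)^n = sum(C(n,i) * a^i * b^(n-i), i = 0..n),
    with the sub-powers built up one multiplication at a time."""
    apow, bpow = [{0: 1}], [{0: 1}]
    for _ in range(n):
        apow.append(mul(apow[-1], a))
        bpow.append(mul(bpow[-1], b))
    total = {}
    for i in range(n + 1):
        total = add_scaled(total, mul(apow[i], bpow[n - i]), comb(n, i))
    return total

# Split 1 + x^5 + x^11 as a = 1 + x^5 and b = x^11:
p5 = binom_power({0: 1, 5: 1}, {11: 1}, 5)
print(len(p5))     # 21 terms, matching the direct 5th power
```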

25
Comparing RMUL and RSQ
  • Using conventional n^2 multiplication, let
    h = size(p). RMUL computes p·p, p·p^2, ...
  • Cost is size(p)·size(p) + size(p)·size(p^2) + ...
  • For dense degree d with v variables:
  • (d+1)^v · sum((id+1)^v, i = 1..n-1) <
    (d+1)^(2v)·n^(v+1)/(v+1).

26
Comparing RMUL and RSQ
  • Using conventional n^2 multiplication, let's
    assume n = 2^j.
  • Cost is size(p)^2 + size(p^2)^2 + ... +
    size(p^(2^(j-1)))^2
  • For dense degree d with v variables:
  • sum((2^i·d+1)^(2v), i = 1..j) <
    (n(d+1))^(2v)·4^v/(4^v-1)
  • This is O((nd)^(2v)), not the O(n^(v+1)·d^(2v)) of
    RMUL, so RSQ loses. If we used Karatsuba
    multiplication, the cost is O((nd)^(1.585v)),
    which is potentially better.

27
There's more stuff to analyze
  • RSQ vs. RMUL vs. BINOM on sparse polynomials
  • Given P^n, is it faster to compute P^(2n) by
    squaring, or by multiplying by P another n
    times?

28
FFT in a finite field
  • Start with evaluating a polynomial A(x) at a
    point x

29
Rewrite the evaluation
  • Horner's rule

30
Rewrite the evaluation
  • Matrix multiplication

31
Rewrite the evaluation
  • Preprocessing (requires real arith)

32
Generalize the task
  • Evaluate at many points

33
Primitive Roots of Unity
34
The Fourier Transform

35
This is what we need
  • The Fourier transform
  • Do it Fast and it is an FFT

36
Usual notation

37
Define two auxiliary polynomials
38
So that
39
Putting the pieces together

40
Simplifications noted

41
Re-stated

42
Even more succinctly
  • Here is the Fourier Transform again..

43
How much time does this take?
44
What about the inverse?

45
Why is this the inverse?

46
An example computing a power
47
An example computing a power
48