Title: Single-Frame Super-Resolution
1. Single-Frame Super-Resolution
- Qin Gu
- Wenshan Yu
- Hao Tian
- Andrey Belokrylov
- 1/30/06
2. Introduction
- Super-resolution is the problem of generating a high-resolution (HR) image from one or more low-resolution (LR) images.
3. Motivation
- A number of real-world applications:
- A common application occurs when we want to increase the resolution of an image while enlarging it in digital imaging software (such as Adobe Photoshop).
- To save storage space and communication bandwidth (and hence download time).
- Another application arises in the restoration of old, historic photographs, enlarging them with increased resolution for display purposes.
4. Interpolation
[Figure: LR image, true HR image, and the interpolated result; the interpolation is blurred!]
5. Our method
- Most methods of super-resolution are based on multiple low-resolution images of the same scene (which means the same photo must be taken many times for each evaluation).
- Our method generates a high-resolution image from a single low-resolution image, with the help of a set of common training images.
6. Training Set
[Figure: training set of low/high-resolution image pairs.]
7. Our project
- Our project implements two kinds of super-resolution algorithms:
- Super-Resolution Through Neighbor Embedding (manifold learning, LLE)
- Learning Low-Level Vision (an example-based algorithm)
- Both of them are based on learning from training examples.
- Finally, we will compare the performance of these two algorithms for different images and training examples.
8. Overlap patches
- For the low-resolution images, we use 3×3 patches with an overlap of one or two pixels between adjacent patches, as sketched below.
[Figure: a 3×3 patch grid.]
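A minimal numpy sketch of the patch decomposition (the deck gives no code, so the function and variable names here are illustrative):

```python
import numpy as np

def extract_patches(img, patch=3, overlap=1):
    """Slice a 2-D image into patch x patch blocks whose origins are
    (patch - overlap) pixels apart, so adjacent patches share
    `overlap` rows/columns of pixels."""
    step = patch - overlap
    patches, coords = [], []
    for i in range(0, img.shape[0] - patch + 1, step):
        for j in range(0, img.shape[1] - patch + 1, step):
            patches.append(img[i:i + patch, j:j + patch])
            coords.append((i, j))
    return np.array(patches), coords
```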
9. Overlap patches
10. Super-Resolution Through Neighbor Embedding
- 1. For each patch x in image X_t:
- (a) Find the set N_q of K nearest neighbors in X_s.
- (b) Compute the reconstruction weights of the neighbors that minimize the error of reconstructing x.
- (c) Compute the high-resolution embedding y, using the appropriate high-resolution features of the K nearest neighbors and the reconstruction weights.
- 2. Construct the target high-resolution image Y_t by enforcing local compatibility and smoothness constraints between the adjacent patches obtained in step 1(c). (A sketch of step 1 follows.)
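A minimal numpy sketch of step 1 for a single patch, assuming standard LLE-style reconstruction weights; the names (`neighbor_embedding`, `lr_feats`, `hr_patches`) and the regularizer `reg` are our illustrative choices, not from the paper:

```python
import numpy as np

def neighbor_embedding(x, lr_feats, hr_patches, K=5, reg=1e-4):
    """Step 1 for one LR patch.
    x          -- feature vector of the input LR patch
    lr_feats   -- (M, d) array of LR training feature vectors
    hr_patches -- (M, p, p) array of the paired HR training patches
    """
    # (a) K nearest neighbors of x among the LR training features
    d = np.linalg.norm(lr_feats - x, axis=1)
    nn = np.argsort(d)[:K]
    # (b) weights w minimizing ||x - sum_q w_q x_q||^2 with sum_q w_q = 1
    Z = lr_feats[nn] - x                    # neighbors shifted to the query
    G = Z @ Z.T                             # local Gram matrix
    G += reg * np.trace(G) * np.eye(K)      # regularize: G can be singular
    w = np.linalg.solve(G, np.ones(K))
    w /= w.sum()                            # enforce the sum-to-one constraint
    # (c) HR embedding: apply the same weights to the HR neighbor patches
    return np.tensordot(w, hr_patches[nn], axes=1)
```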
11. Example (K = 5)
[Figure: a 3×3 input patch matched to its five nearest training-set patches with weights w1 ... w5, reconstructing a 9×9 high-resolution patch.]
12. Feature vector
- Each patch is represented as a feature vector in an N-dimensional feature space.
- Features: intensity, gradient, etc. (one possible realization is sketched below).
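The deck does not specify the features beyond "intensity, gradient, etc.", so this sketch assumes raw intensities plus first-order gradients:

```python
import numpy as np

def patch_features(patch):
    """Concatenate raw intensities with first-order gradients into one
    flat N-dimensional feature vector for a (3x3) patch."""
    gy, gx = np.gradient(patch.astype(float))  # vertical, horizontal gradients
    return np.concatenate([patch.ravel(), gx.ravel(), gy.ravel()])
```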
13. Results
[Figure: the LR image, the true HR image, our result, and the interpolated result, compared side by side.]
14. Purpose
- To get a high-resolution image from the input low-resolution image.
- Way: use a set of training examples.
15. Single-image super-resolution
- Applications:
- 1. Enlarging a digital image (in software).
- 2. Viewing an image clicked on a web page.
16. Related previous work
- Simple resolution-enhancement methods:
- 1. Smoothing: Gaussian, Wiener, and median filters.
- 2. Interpolation: 1) bicubic interpolation; 2) cubic interpolation.
17. Problem formulation
- 1. Input (X): a low-resolution image (50×50).
- 2. Target (Y): a high-resolution image.
18. Training set
19. Training set (continued)
20. Get the patch
- Separate the image into patches; each patch is 3×3.
- The image needs to be extended (padded) by one column and one row.
21. Get the patch in the training set
- The distance from the input patch $I$ to a training patch $A$: $a_i = \left( \sum \big( I(i,j) - A(i_1, j_1) \big)^2 \right)^{1/2}$, i.e., the Euclidean distance between the two patches.
22. Get the patch in the training set
[Figure: the input low-resolution patch and its five nearest neighbors.]
23. Get the weight in the training set
- The weights: $b_i = a_i \big/ \sum_i a_i, \quad i = 1, 2, 3, 4, 5$
24. Get the initial image
- For every patch: $Y = b_1 A_1 + b_2 A_2 + b_3 A_3 + b_4 A_4 + b_5 A_5$, where $A_i$ are the high-resolution patches paired with the five neighbors (see the sketch below).
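Slides 21-24 combined into one numpy sketch. Note that the deck normalizes the raw distances directly ($b_i = a_i / \sum a_i$); a common alternative weights neighbors inversely by distance, but we follow the slide. Names are illustrative:

```python
import numpy as np

def blend_five_neighbors(x, train_lr, train_hr, K=5):
    """Distances a_i to the K nearest LR training patches, weights
    b_i = a_i / sum(a_i) as written on slide 23, and the initial HR
    patch Y = sum_i b_i * A_i from slide 24."""
    a = np.sqrt(((train_lr - x.ravel()) ** 2).sum(axis=1))  # distances a_i
    nn = np.argsort(a)[:K]                                  # five nearest neighbors
    b = a[nn] / a[nn].sum()                                 # weights b_i
    return np.tensordot(b, train_hr[nn], axes=1)            # Y = sum_i b_i A_i
```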
25. Overlap
- For the overlap, we use 3N×3N patches in the high-resolution image, with a 1N×1N overlap between adjacent patches.
- Take the average value over the region where adjacent patches overlap (a sketch follows).
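A sketch of the averaging step (shapes and names assumed):

```python
import numpy as np

def stitch_patches(patches, coords, out_shape):
    """Paste HR patches at their (row, col) origins and average the
    pixels wherever adjacent patches overlap."""
    acc = np.zeros(out_shape)
    cnt = np.zeros(out_shape)
    for patch, (i, j) in zip(patches, coords):
        p = patch.shape[0]
        acc[i:i + p, j:j + p] += patch
        cnt[i:i + p, j:j + p] += 1
    return acc / np.maximum(cnt, 1)  # average where patches overlap
```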
26. Basic framework
- Given image data y, we want to estimate the underlying scene x.
- We use the posterior probability.
- We seek the MAP estimate.
- We make the Markov assumption. (The equations, reconstructed below.)
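The slide's equations did not survive extraction; following the Learning Low-Level Vision formulation it cites, they are presumably:

```latex
% Posterior over the scene x given image data y, and its MAP estimate
P(x \mid y) \;\propto\; P(y \mid x)\, P(x),
\qquad
\hat{x}_{\mathrm{MAP}} = \arg\max_{x} P(x \mid y)

% Markov assumption: the joint factorizes into pairwise scene
% compatibilities \Psi and scene-image compatibilities \Phi
P(x, y) = \frac{1}{Z} \prod_{(i,j)} \Psi(x_i, x_j) \prod_{i} \Phi(x_i, y_i)
```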
27. Markov network with loops
- $\Phi(x_i, y_i)$: scene-image compatibility; $\Psi(x_i, x_j)$: scene-scene compatibility.
- Knowing $x_j$ implies knowing $y_j$.
- Knowing $x_j$ gives information about nearby $x$'s.
28. Markov network without loops
[Figure: example of a loop-free network.]
29. Representation of Φ and Ψ
- At each node we collect a set of 10 or 20 scene candidates.
- We want to find, in each column, the scene candidate which best explains the image patch and is compatible with its neighbors.
30. Two main assumptions
- High frequencies are independent of the low frequencies (only the mid frequencies matter): $P(H \mid M, L) = P(H \mid M)$.
- Image frequencies are independent of image contrast.
31. Training set
- Downsample the image, then upsample it again with spline interpolation.
- The difference between the two gives the high frequencies.
- Eliminate the low frequencies of the interpolated image with a high-pass filter (next slide).
- Normalize the patches by their mean absolute value (contrast normalization). (A sketch of the pipeline follows.)
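A minimal scipy sketch of this preprocessing pipeline. The deck gives no parameters, so the filter choices (cubic-spline zoom, Gaussian high-pass, 7×7 local mean-absolute normalization) are assumptions:

```python
import numpy as np
from scipy import ndimage

def training_pair(hr_img, factor=2):
    """Build one mid-/high-frequency training pair in the spirit of
    slide 31: downsample, spline-interpolate back up, take the
    difference as the missing high frequencies, high-pass the
    interpolated image to keep only mid frequencies, and contrast
    normalize both by the local mean absolute value."""
    lr = ndimage.zoom(hr_img, 1.0 / factor, order=3)   # downsample
    up = ndimage.zoom(lr, factor, order=3)             # spline interpolation
    up = up[:hr_img.shape[0], :hr_img.shape[1]]        # guard rounding
    high = hr_img[:up.shape[0], :up.shape[1]] - up     # missing high freqs
    mid = up - ndimage.gaussian_filter(up, sigma=2)    # high-pass: drop lows
    norm = ndimage.uniform_filter(np.abs(mid), size=7) + 1e-3
    return mid / norm, high / norm                     # contrast-normalized pair
```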
32. Prediction and RMS error
[Figure: original image; interpolated image (RMS = 11.3); final image after 1 iteration (RMS = 10.8) and after 4 iterations (RMS = 6.8); the high frequencies we need to predict; the recovered high frequencies after 1 and after 4 iterations; difference images.]
33. Different training sets
[Figure: results with different training sets, with the original (ground truth) for comparison.]
34. Motivation for improvement
- The fact that belief propagation converged to a solution of the Markov network so quickly (typically 3-4 iterations) led us to believe that a more straightforward and time-efficient approach can be used in practice.
- Premise: it must produce comparable results.
35. Basic idea (one-pass)
- Goal: maximize two compatibilities:
- similarity compatibility (sc)
- neighboring compatibility (nc)
- Markov: find a set of candidate patches with the highest sc, then predict the best one by iterative belief propagation.
- One-pass: directly find the one patch with the best sc and nc in a single operation.
36. Assumption
- Basis of one-pass: raster-scan processing, which means we only need to compute the neighboring compatibility against the previously decided high-resolution patches (left and top), as in the sketch below.
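A simplified numpy sketch of the one-pass selection. A faithful implementation compares full overlap regions against both the left and top neighbors; the left-border term below is enough to show the single-operation idea (names and the `alpha` default are ours):

```python
import numpy as np

def one_pass_sr(lr_patches, train_lr, train_hr, alpha=0.1):
    """Scan patches in raster order and pick, for each, the single
    training patch that best matches the LR data AND the already
    decided HR patch to its left."""
    chosen = []
    for x in lr_patches:                          # raster-scan order
        d = ((train_lr - x) ** 2).sum(axis=1)     # similarity compatibility
        if chosen:                                # neighboring compatibility:
            left = chosen[-1][:, -1]              # right border of previous patch
            d += alpha * ((train_hr[:, :, 0] - left) ** 2).sum(axis=1)
        chosen.append(train_hr[np.argmin(d)])     # best sc + nc in one operation
    return chosen
```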
37. Concatenation
- 1. Combine the two parts and then search for the most similar patch in the training set.
- 2. Change the storage structure of the training set to the same concatenated model.
38. Control balance
- The parameter α controls the trade-off between matching the low-resolution patch data and finding a high-resolution patch that is compatible with its neighbors (M = low patch size, N = high patch size).
- The data term runs over the pixels in the low-resolution patch; the compatibility term runs over the pixels in the borders of previously decided patches. (One reconstruction of the formula follows.)
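The slide's formula was lost in extraction; one consistent reading, stated as an assumption, is a concatenated match cost of the form:

```latex
% d(k): match cost of training pair k; alpha balances the M x M
% low-res data term against the high-res border compatibility term
d(k) = \underbrace{\sum_{p \in \text{LR patch}} \big(x(p) - x_k(p)\big)^2}_{M \times M \text{ pixels}}
\;+\; \alpha \underbrace{\sum_{p \in \text{borders}} \big(y_k(p) - y_{\text{prev}}(p)\big)^2}_{\text{border pixels of the } N \times N \text{ HR patch}}
```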
39. Illustrative example
[Figure: training set, input image, and output image.]
40. Exploring the best training patch
- Similarity function:
- Euclidean distance: $\sqrt{(a - a_1)^2 + (b - b_1)^2 + \cdots}$
- Manhattan distance: $|a - a_1| + |b - b_1| + \cdots$
- Searching algorithm (how to search the training space):
- Brute-force search: best effect, but very time-consuming because of the huge space (more than 10,000 patches).
- Mean-based search: divide the training set into groups, compare the mean of the current low patch against the mean of each group, then search only the group with the most similar mean (a sketch follows the figure below).
[Figure: mean-based search vs. brute-force search results.]
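A sketch of the mean-based search; the group count and quantile bucketing are our assumptions, since the deck only says to divide the training set into groups by mean:

```python
import numpy as np

def mean_grouped_search(x, train_lr, n_groups=32):
    """Bucket training patches by mean intensity, then brute-force
    only inside the bucket whose mean range contains the query's mean.
    Much faster than scanning the full set, at some cost in quality."""
    means = train_lr.mean(axis=1)
    edges = np.quantile(means, np.linspace(0, 1, n_groups + 1))
    bins = np.clip(np.digitize(means, edges[1:-1]), 0, n_groups - 1)
    g = np.clip(np.digitize(x.mean(), edges[1:-1]), 0, n_groups - 1)
    idx = np.where(bins == g)[0]                 # candidates in matching group
    d = ((train_lr[idx] - x) ** 2).sum(axis=1)   # brute force within the group
    return idx[np.argmin(d)]                     # index of the best training patch
```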
41. LLE vs. the example-based algorithm
- Both of them look for one or a set of the most similar patches in the training set.
- LLE restores the high-resolution patch by computing a weighted combination of the selected patches.
- The example-based algorithm finds the best one of the selected patches and takes it directly as the output high-resolution patch.
[Figure: output of the example-based algorithm vs. output of LLE.]
42. Training set limitation
- It might seem that to enlarge an image of one feature (for example, a cat) we would need a training set that contained images of other cats. However, this isn't the case.
- Although the training set doesn't have to be very similar to the image to be enlarged, it should be in the same image class, such as text or color images.
43. Thanks
- Qin Gu
- Wenshan Yu
- Hao Tian
- Andrey Belokrylov