Transcript and Presenter's Notes

Title: How Digital Imaging Works


1
How Digital Imaging Works

Outline
1. Digital (electronic) vs. traditional (film, analog) cameras
2. Digital sensors (which replace photographic film)
   2.1 CCD imagers
   2.2 CMOS imagers
   2.3 Digital quantization / sampling (and resolution)
   2.4 Capturing color
3. LCD display and digital storage (which replaces photographic film, e.g. SmartMedia cards, CompactFlash cards and Memory Sticks)
Summary
2
  • 1. Digital (electronic) vs. traditional (film, analog) cameras
  • Conventional cameras depend entirely on chemical processes (for sensing and storage) and mechanical processes (for controls).
  • Digital cameras have much more electronics: electronics for sensing, storage, display and processing/control (a computer).
  • Digital cameras allow you to do quite a few things that traditional cameras cannot. For example:
  • Since digital cameras do not use film, you do not need to wait for film to be processed before viewing the images.
  • You can choose which pictures you want to keep, deleting bad shots to make room for new ones.

3
Digital (electronic) vs. traditional (film, analog) cameras (continued)
  • The media that digital images are stored on have large capacities, are erasable/reusable (equivalent to many rolls of film), are durable, and do not degrade physically or chemically over time.
  • You can attach descriptions, dates and times to digital images to help you organize them into folders that work like digital photo albums.
  • Digital images can be enhanced, altered, reproduced, inserted into creative projects, and shared over the Internet.

However, the resolution of digital images may not be as good as that of film, especially for enlarged images taken by low-cost digital cameras. Resolution has been improving over time. Also, viewing images on a digital camera's screen consumes a lot of battery power, although the battery can be recharged.
4
2. The Digital Sensors
Instead of film, a digital camera has a sensor that converts light into electrical charges. The image sensor employed by most digital cameras is a charge-coupled device (CCD). Some cameras use complementary metal oxide semiconductor (CMOS) technology instead. Both CCD and CMOS image sensors convert light into electrons. Both are manufactured in a silicon foundry, and the equipment used is similar, but alternative manufacturing processes and device architectures make the imagers quite different in both capability and performance. For example:
  • CCD sensors create high-quality, low-noise images. CMOS sensors are generally more susceptible to noise.
  • CCD sensors have been mass-produced for a longer period of time, so they are more mature. They tend to have higher-quality pixels, and more of them.

5
  • Because each pixel on a CMOS sensor has several transistors located next to it, the light sensitivity of a CMOS chip is lower: many of the photons hit the transistors instead of the photodiode.
  • CMOS sensors traditionally consume little power. CCDs, on the other hand, use a process that consumes lots of power, as much as 100 times more than an equivalent CMOS sensor.
A little more detail on the technologies of CCD and CMOS imagers and their future potential is provided in the following. Then we will discuss their resolutions and look at how the camera adds color to their images.

6
2.1 CCD imagers
Developed in the 1970s and 1980s specifically for
imaging applications, CCD technology and
fabrication processes were optimized for the
best possible optical properties and image
quality. The technology continues to improve and
is still the choice in applications where image
quality is the primary requirement or market
share factor.
[Figure: A CCD sensor]
A CCD comprises photosites, typically arranged
in an X-Y matrix of rows and columns. Each
photosite, in turn, comprises a photodiode and
an adjacent charge holding region, which is
shielded from light. The photodiode converts
light (photons) into charge (electrons). The
number of electrons collected is proportional to
the light intensity. Typically, light is
collected over the entire imager simultaneously
and then transferred to the adjacent charge
transfer cells within the columns.
[Figure: Interline transfer CCD]
7
Next, the charge is read out: each row of data is moved to a separate horizontal charge transfer register. Charge packets for each row are then read out serially and sensed by a charge-to-voltage conversion and amplifier section (see the figure below). This architecture produces a low-noise, high-performance imager. That optimization, however, makes integrating other electronics onto the silicon impractical. In addition, operating the CCD requires the application of several clock signals, clock levels, and bias voltages, complicating system integration and increasing power consumption, overall system size, and cost.
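To make the readout order concrete, here is a minimal Python sketch (not from the slides; the array size and conversion gain are illustrative assumptions) that mimics how a CCD delivers its signal row by row through a single serial register and output amplifier:

```python
import numpy as np

def ccd_readout(charge):
    """Mimic CCD readout: shift each row into a horizontal register,
    then sense its charge packets serially at one output amplifier."""
    rows, cols = charge.shape
    voltages = []
    for r in range(rows):                        # vertical shift: next row down
        horizontal_register = charge[r].copy()   # row enters the serial register
        for c in range(cols):                    # serial shift toward the output node
            electrons = horizontal_register[c]
            voltages.append(electrons * 5e-6)    # charge-to-voltage gain (illustrative)
    return np.array(voltages).reshape(rows, cols)

# Photon arrival is well modeled as Poisson; lam is the mean electron count.
charge = np.random.poisson(lam=200, size=(4, 6)).astype(float)
print(ccd_readout(charge))
```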
8
[Figure: CMOS and CCD sensor architectures]
9
2.2 CMOS imagers
A CMOS imager, on the other hand, is made with
standard silicon processes in high-volume
foundries. Peripheral electronics, such as
digital logic, clock drivers, or
analog-to-digital converters, can be readily
integrated with the same fabrication process.
CMOS imagers can also benefit from process and
material improvements made in mainstream
semiconductor technology.
[Figure: A CMOS image sensor]
To achieve these benefits, the CMOS sensor's architecture is arranged more like a memory cell or flat-panel display. Each photosite contains a photodiode that converts light to electrons, a charge-to-voltage conversion section, an amplifier section, and reset and select transistors. Overlaying the entire sensor is a grid of metal interconnects that apply timing and readout signals, and an array of column output signal interconnects. The column lines connect to a set of decode and readout (multiplexing) electronics arranged by column outside of the pixel array. This architecture allows the signals from the entire array, from subsections, or even from a single pixel to be read out by a simple X-Y addressing technique, something a CCD can't do.
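The practical payoff of that architecture is random access. A small Python sketch of X-Y addressing (illustrative, with made-up voltages), reading a sub-window or a single pixel directly:

```python
import numpy as np

# Each CMOS photosite already holds a voltage (conversion happens in-pixel).
pixel_voltages = np.random.rand(8, 8)

def read_window(pixels, r0, r1, c0, c1):
    """Read any rectangular subsection via row-select lines and the
    column multiplexer; a CCD must instead clock out whole rows serially."""
    return pixels[r0:r1, c0:c1]

print(read_window(pixel_voltages, 2, 5, 3, 6))  # a 3x3 sub-window
print(read_window(pixel_voltages, 4, 5, 4, 5))  # a single pixel
```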
10
  • The biggest opportunities for CMOS sensors lie in new product categories for which they are uniquely suited. Keys to their success are:
  • Lower power usage
  • Integration of additional circuitry on-chip
  • Lower system cost
  • Such features make CMOS sensors ideal for mobile, multifunction products like Kodak's mc3 or imaging attachments like the PalmPix.
  • Still, if CMOS sensors offer all of these benefits, why haven't they completely displaced CCDs? There are a number of reasons: some are technical or performance-related, and others have more to do with the growing maturity of the technology. CCDs have been mass-produced for over 25 years, whereas CMOS technology has only just begun the mass-production phase. Rapid adoption was also hindered because some early implementations of these devices were disappointing: they delivered poor imaging performance and poor image quality.

11
CMOS imaging technology needed to be developed further, to the point where it could deliver quality images, before commercial products could be introduced. Scientists and engineers are applying the optical science and image processing experience derived from more than 25 years of work with CCD sensors and digital cameras to develop and characterize CMOS sensors, and to define modifications to standard CMOS manufacturing lines and equipment that yield low-noise, good-quality sensors. Understanding and accounting for numerous process trade-offs has enabled engineers to create CMOS devices that deliver leading imaging performance.
As the next figure shows, the current sensor market divides into two areas: the high-performance, low-volume branch and the low-cost, high-volume branch. In the high-performance branch are applications that will continue to be dominated by CCD technology, though CMOS technology will find market share too, especially for lower-cost or more portable versions of these products. The second area is where most of the CMOS activity will be; here, in many applications, CCD sensors will be replaced with CMOS sensors. These could include some security applications, biometrics and most consumer digital cameras.
12
Most of the growth, though, will likely come from products that can newly employ imaging technology: automotive applications, computer video, optical mice, imaging phones, toys, bar code readers and a host of hybrid products that can now include imaging. These kinds of products will require millions of CMOS sensors.
13
2.3 Digital quantization / sampling
While almost all the real-world objects we want to take pictures of are analog, CCD and CMOS sensors quantize a picture into many pixels (spatial quantization). The brightness of each pixel is also quantized into many levels and represented by a string of 0s and 1s (brightness quantization). Essentially, an image, after being quantized spatially and in brightness, becomes a long string of 0s and 1s, and computers work with strings of 0s and 1s. Quantization is the result of sampling.
[Figures: an analog function to be sampled, and the sampled function]
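A minimal Python sketch of both quantizations (the grid size and level count are arbitrary choices for illustration):

```python
import numpy as np

def quantize_brightness(analog, levels=256):
    """Map each pixel's analog brightness in [0, 1) to one of
    `levels` integer codes (brightness quantization)."""
    return np.clip((analog * levels).astype(int), 0, levels - 1)

# Spatial quantization: sample a smooth brightness pattern on an 8x8 grid.
x = np.linspace(0.0, 1.0, 8, endpoint=False)
analog_image = np.outer(x, x)                                # values in [0, 1)
digital_image = quantize_brightness(analog_image, levels=4)  # 2 bits per pixel
print(digital_image)
# Each code is a short string of 0s and 1s, e.g. 3 -> '11'.
print([format(v, '02b') for v in digital_image[-1]])
```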
14
Any analog (spatial or brightness) function can be decomposed into its Fourier frequency components. Shannon's sampling theorem says that if the function is sampled at least twice per cycle of its highest frequency component, the original function can always be recovered.
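In symbols, with \(f_s\) the sampling rate and \(f_{\max}\) the highest frequency component present:

\[
f_s \;\ge\; 2\,f_{\max}
\qquad\Longleftrightarrow\qquad
T_s \;=\; \frac{1}{f_s} \;\le\; \frac{1}{2 f_{\max}}
\]

Sampling any less densely aliases the high-frequency detail; sampling at or above this rate allows exact reconstruction.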
Resolution. The number of pixels in an image is called its resolution; higher resolution provides more detail. The more pixels a camera has, the more its pictures can be enlarged without becoming blurry or grainy. However, the camera's resolution (the number of pixels in a picture) need not exceed what the camera lens can resolve. The smallest detail a lens can resolve is inversely proportional to the diameter of the lens, so a larger lens resolves finer detail.
15
Resolution (continued). Some typical resolutions include:
  • 256x256 - Found on very cheap cameras; this resolution is so low that the picture quality is almost always unacceptable. This is about 65,000 total pixels.
  • 640x480 - The low end on most "real" cameras. This resolution is ideal for e-mailing pictures or posting pictures on a Web site.
  • 1216x912 - This is a "megapixel" image size: about 1,109,000 total pixels, good for printing pictures.
  • 1600x1200 - With almost 2 million total pixels, this is "high resolution." You can print a 4x5-inch print taken at this resolution with the same quality you would get from a photo lab.
  • 2240x1680 - Found on 4-megapixel cameras (the current standard); this allows even larger printed photos, with good quality for prints up to 13.5x9 inches.
  • 4064x2704 - A top-of-the-line digital camera with 11.1 megapixels takes pictures at this resolution. At this setting, you can create 16x20-inch prints with no loss of picture quality.
16
You may have noticed that the number of pixels and the maximum resolution don't quite compute. For example, a 2.1-megapixel camera can produce images with a resolution of 1600x1200, or 1,920,000 pixels. But "2.1 megapixel" means there should be at least 2,100,000 pixels. This isn't an error from rounding off or binary mathematical trickery. There is a real discrepancy between these numbers because the CCD must also include circuitry for the ADC to measure the charge. This circuitry is covered so that it does not respond to light and distort the image. High-end consumer cameras can capture over 12 million pixels. Some professional cameras support over 16 million pixels, or 20 million pixels for large-format cameras. For comparison, Hewlett-Packard estimates that the quality of 35mm film is about 20 million pixels.
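The discrepancy is easy to check with a little arithmetic (Python here; the 2.1-megapixel figure is the slide's own example):

```python
width, height = 1600, 1200
image_pixels = width * height          # pixels that end up in the image
advertised = 2_100_000                 # the "2.1 megapixel" label
print(image_pixels)                    # 1920000
print(advertised - image_pixels)       # 180000 photosites given over to
                                       # measurement circuitry, not image data
```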
17
2.4 Capturing Color
Unfortunately, each photosite is colorblind: it only keeps track of the total intensity of the light that strikes its surface. In order to get a full-color image,
most sensors use filtering to look at the light
in its three primary colors. Once the camera
records all three colors, it combines them to
create the full spectrum. There are several
ways of recording the three colors in a digital
camera. The highest quality cameras use three
separate sensors, each with a different filter.
A beam splitter directs light to the different
sensors. Think of the light entering the camera
as water flowing through a pipe. Using a beam
splitter would be like dividing an identical
amount of water into three different pipes. Each
sensor gets an identical look at the image but
because of the filters, each sensor only
responds to one of the primary colors.
18
The advantage of this method is that the camera
records each of the three colors at each pixel
location. Unfortunately, cameras that use this
method tend to be bulky and expensive. Another
method is to rotate a series of red, blue and
green filters in front of a single sensor. The
sensor records three separate images in rapid
succession. This method also provides
information on all three colors at each pixel
location but since the three images aren't taken
at precisely the same moment, both the camera
and the target of the photo must remain
stationary for all three readings. This isn't
practical for candid photography or handheld
cameras.
Both of these methods work well for
professional studio cameras, but they're not
necessarily practical for casual snapshots. Next,
we'll look at filtering methods that are more
suited to small, efficient cameras.
19
A more economical and practical way to record the
primary colors is to permanently place a filter
called a color filter array over each individual
photosite. By breaking up the sensor into a
variety of red, blue and green pixels, it is
possible to get enough information in the general
vicinity of each sensor to make very accurate
guesses about the true color at that location.
This process of looking at the other pixels in
the neighborhood of a sensor and making an
educated guess is called interpolation. The
most common pattern of filters is the Bayer
filter pattern. This pattern alternates a row of
red and green filters with a row of blue and
green filters. The pixels are not evenly divided
-- there are as many green pixels as there are
blue and red combined. This is because the human
eye is not equally sensitive to all three
colors. It's necessary to include more
information from the green pixels in order to
create an image that the eye will perceive as a
"true color."
20
The advantages of this method are that only one
sensor is required, and all the color
information (red, green and blue) is recorded at
the same moment. That means the camera can be
smaller, cheaper, and useful in a wider variety
of situations. The raw output from a sensor with
a Bayer filter is a mosaic of red, green and
blue pixels of different intensity. Digital
cameras use specialized demosaicing algorithms to
convert this mosaic into an equally sized mosaic
of true colors. The key is that each colored
pixel can be used more than once. The true color
of a single pixel can be determined by averaging
the values from the closest surrounding pixels.
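Here is a minimal Python sketch of that averaging step for one channel (simple neighborhood averaging; production demosaicing algorithms are considerably more sophisticated):

```python
import numpy as np

def demosaic_channel(mosaic, mask):
    """Fill in one color channel: keep recorded samples (mask True) and
    estimate the rest as the mean of the recorded 3x3 neighbors."""
    rows, cols = mosaic.shape
    out = np.zeros((rows, cols))
    for r in range(rows):
        for c in range(cols):
            if mask[r, c]:
                out[r, c] = mosaic[r, c]        # this site measured the channel
            else:
                vals = [mosaic[rr, cc]
                        for rr in range(max(r - 1, 0), min(r + 2, rows))
                        for cc in range(max(c - 1, 0), min(c + 2, cols))
                        if mask[rr, cc]]
                out[r, c] = sum(vals) / len(vals)
    return out

# Green sites in one Bayer phase: (even, even) and (odd, odd) positions.
mask = np.zeros((4, 4), dtype=bool)
mask[0::2, 0::2] = True
mask[1::2, 1::2] = True
mosaic = np.random.randint(0, 256, (4, 4)).astype(float)
print(demosaic_channel(mosaic, mask))
```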
Some single-sensor cameras use alternatives to the Bayer filter pattern. X3 technology, for example, embeds red, green and blue photodetectors in silicon. Some of the more advanced cameras derive color by subtraction, using the typesetting colors cyan, yellow, green and magenta instead of blending red, green and blue. There is even a method that uses two sensors. However, most consumer cameras on the market today use a single sensor with alternating rows of green/red and green/blue filters.
21
3. Display and Digital Storage
Most digital cameras have an LCD screen, so you can view your picture right away. This is one of the great advantages of a digital camera: you get immediate feedback on what you capture. Of course, viewing the image on your camera would lose its charm if that's all you could do. You want to be able to load the picture into your computer or send it directly to a printer. There are several ways to do this.
Early generations of digital cameras had fixed
storage inside the camera. You needed to connect
the camera directly to a computer with cables to
transfer the images. Although most of today's
cameras are capable of connecting through
serial, parallel, SCSI, USB or FireWire
connections, they usually also use some sort of
removable storage device.
22
Digital cameras use a number of storage systems.
These are like reusable, digital film, and they
use a caddy or card reader to transfer the data
to a computer. Many involve fixed or removable
flash memory. Digital camera manufacturers often
develop their own proprietary flash memory
devices, including SmartMedia cards,
CompactFlash cards, Memory Sticks and SD cards.
Some other removable storage devices include:
  • Floppy disks
  • Hard disks, or microdrives
  • Writeable CDs and DVDs
No matter what type of storage they use, all digital cameras need lots of room for pictures. They usually store images in one of two formats: TIFF (Tagged Image File Format), which is uncompressed, and JPEG (Joint Photographic Experts Group), which is compressed. Most cameras use the JPEG file format for storing pictures, and they sometimes offer quality settings (such as medium or high). The following chart gives an idea of the file sizes you might expect with different picture sizes.
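A rough back-of-the-envelope calculation (Python) shows the pattern such a chart would illustrate; the 10:1 JPEG ratio is an assumed, typical figure for illustration, not a camera specification:

```python
def tiff_bytes(w, h, channels=3, bytes_per_channel=1):
    return w * h * channels * bytes_per_channel   # uncompressed 24-bit RGB

def jpeg_bytes(w, h, ratio=10):                   # assumed ~10:1 compression
    return tiff_bytes(w, h) // ratio

for w, h in [(640, 480), (1600, 1200), (2240, 1680)]:
    print(f"{w}x{h}: TIFF ~{tiff_bytes(w, h) / 1e6:.1f} MB, "
          f"JPEG ~{jpeg_bytes(w, h) / 1e6:.2f} MB")
```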
23
[Chart: approximate file sizes for different picture sizes]
24
To make the most of their storage space, almost
all digital cameras use some sort of data
compression to make the files smaller. Two
features of digital images make compression
possible. One is repetition. The other is
irrelevancy. Imagine that throughout a given
photo, certain patterns develop in the colors.
For example, if a blue sky takes up 30 percent of
the photograph, you can be certain that some
shades of blue are going to be repeated over and
over again. When compression routines take
advantage of patterns that repeat, there is no
loss of information and the image can be
reconstructed exactly as it was recorded.
Unfortunately, this typically doesn't reduce file size by more than about 50 percent, and sometimes it doesn't even come close to that level. Irrelevancy is a trickier issue. A digital camera records more information than the human eye can easily detect. Some compression routines take advantage of this fact to throw away some of the less meaningful data.
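Run-length encoding is the simplest repetition-based scheme and makes the "no loss of information" point concrete; a small Python sketch:

```python
def rle_encode(pixels):
    """Store (value, run length) pairs instead of every pixel."""
    runs = []
    for p in pixels:
        if runs and runs[-1][0] == p:
            runs[-1][1] += 1                 # extend the current run
        else:
            runs.append([p, 1])              # start a new run
    return runs

def rle_decode(runs):
    return [p for p, n in runs for _ in range(n)]

sky = [200] * 30 + [180] * 10                # a repetitive row, like blue sky
encoded = rle_encode(sky)
assert rle_decode(encoded) == sky            # reconstructed exactly, no loss
print(f"{len(sky)} values -> {len(encoded)} runs: {encoded}")
```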
25
  • Summary
  • It takes several steps for a digital camera to take a picture. Here's a review of what happens in a CCD camera, from beginning to end (a compact sketch of the electronic steps follows the list):
  • You aim the camera at the subject and adjust the optical zoom to get closer or farther away.
  • You press lightly on the shutter release.
  • The camera automatically focuses on the subject and takes a reading of the available light.
  • The camera sets the aperture and shutter speed for optimal exposure.
  • You press the shutter release all the way.
  • The camera resets the CCD and exposes it to the light, building up an electrical charge, until the shutter closes.
  • The ADC measures the charge and creates a digital signal that represents the values of the charge at each pixel.
  • A processor interpolates the data from the different pixels to create natural color. On many cameras, it is possible to see the output on the LCD at this stage.
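A compressed end-to-end sketch of the electronic steps above (Python; the exposure model, full-well value and the color step are illustrative stand-ins, not camera firmware):

```python
import numpy as np

def expose(scene, shutter_time):
    return scene * shutter_time                  # charge builds while the shutter is open

def adc(charge, levels=256, full_well=1.0):
    """Convert analog charge to digital codes, one value per pixel."""
    return np.clip((charge / full_well * levels).astype(int), 0, levels - 1)

def interpolate_color(raw):
    return np.stack([raw] * 3, axis=-1)          # stand-in for real demosaicing

scene = np.random.rand(4, 4)                     # light reaching the CCD
image = interpolate_color(adc(expose(scene, shutter_time=0.5)))
print(image.shape)                               # (4, 4, 3): RGB output for the LCD
```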