Digital camera makers lie about megapixels. This is OK because, unlike other specs, all legitimate camera makers lie in exactly the same way. This means it's easy to compare cameras from different makers.
A pixel is defined as a location with a discrete value for EACH of red AND green AND blue.
Modern cameras do not measure or sense the red AND green AND blue values for each pixel location. Instead, they use a black-and-white sensor covered with a pattern of red and green and blue filters, most commonly arranged in a Bayer pattern. Each location senses the brightness of just one color, not all three.
The sensor thus gives a red OR green OR blue value for each pixel location, and the camera's image processor has to guess the values of the other two colors not sensed directly at each location. To do this, the system mathematically interpolates (guesses) the values of the two colors not directly sensed at each pixel.
In other words, everything is blurred a pixel or two in each direction so we can estimate and store a red and green and blue value for each pixel location, because our sensor can measure only one color (red or green or blue, but not all three) at each location.
This blurring is called Bayer interpolation and is done in every conventional camera. It's in addition to anti-alias filters, which are a completely different discussion.
If you set a camera to store the image at a lower resolution you may eliminate the blurring effect of Bayer interpolation.
Imagine saving the image at half the resolution in each direction, say 2,000 x 3,000 pixels (small, 6 MP) in a 24 MP (4,000 x 6,000 pixel) camera.
Now each output pixel is created from four (2x2) source (sensor) pixels. Since four sensor pixels are used to create each output pixel, we now have a real value for each of red and green and blue for every output pixel. We even get two samples of one of the three colors (green) for each output pixel!
When downsampling, either with in-camera settings or later in our computers, we can eliminate the effects of Bayer interpolation.
Because of this, a 12 MP image downsampled from a 48 MP camera can be almost twice as sharp as a 12 MP image straight from a 12 MP sensor. What's called a 12 MP sensor doesn't really have 12 full megapixels, but by downsampling from a 48 MP sensor we can get a true 12 megapixels.
In other words, set a 45 MP camera down to 24 MP and the result will be sharper and have more resolution than from a 24 MP camera.
The details of how we do this in a camera's processor are far more complex, but this is the general idea.
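The 2x2 binning described above can be sketched in a few lines of NumPy. This is a simplified illustration, assuming an RGGB Bayer layout; a real camera processor does considerably more than simple averaging:

```python
import numpy as np

# Hypothetical 4x4 RGGB Bayer mosaic: each sensor site holds ONE value.
# Layout of every 2x2 block:  R G
#                             G B
mosaic = np.arange(16, dtype=float).reshape(4, 4)

# Bin each 2x2 block into one output pixel: the R site supplies red,
# the B site supplies blue, and the two G sites are averaged for green.
r = mosaic[0::2, 0::2]                              # top-left of each block
g = (mosaic[0::2, 1::2] + mosaic[1::2, 0::2]) / 2   # the two green sites
b = mosaic[1::2, 1::2]                              # bottom-right of each block

rgb = np.stack([r, g, b], axis=-1)
print(rgb.shape)  # (2, 2, 3): an R, G and B value for every output pixel
```

Every output pixel gets a measured red, a measured blue, and the average of two measured greens, so nothing has to be guessed.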
Looking at the results at 100% the individual output pixels are sharper at lower resolution settings, but of course you have fewer of them.
Shooting at a lower setting doesn't lose as much sharpness as you'd expect, because much of the data at the full setting is made up anyway.
Bayer Interpolation
A pixel is defined as a complete set of color data for a point in an image. Some pixels may be colored red and others may be colored green, but there are no such things as red pixels or green pixels. A pixel is a complete pixel only when the red and green and blue values are all known for the unique location of that one pixel.
Here’s why setting a camera to a lower resolution can eliminate resolution losses from Bayer interpolation:
Digital Cameras
Digital camera pixels aren't as sharp as scanned film pixels.
All digital cameras, except for $30,000 scanning backs and the old Sigmas, have only a third of their claimed pixels! Instead of having separate R, G and B sensors for each pixel location, they have only a single monochrome CCD with each pixel location painted with an R, G or B filter. This alternating R, G and B filter matrix most often follows the Bayer pattern, with twice as many G as R or B sites. The pattern is named for Kodak scientist Bryce Bayer, who invented it in 1976.
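The twice-as-many-green claim is easy to verify by counting sites in a tile of the pattern. Here's a hypothetical 4x4 RGGB tile, purely for illustration:

```python
# One 4x4 tile of an RGGB Bayer pattern: each 2x2 block holds
# one red, two green and one blue filter site.
tile = [["R", "G", "R", "G"],
        ["G", "B", "G", "B"],
        ["R", "G", "R", "G"],
        ["G", "B", "G", "B"]]

flat = [color for row in tile for color in row]
counts = {c: flat.count(c) for c in "RGB"}
print(counts)  # {'R': 4, 'G': 8, 'B': 4}: half the sites are green
```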
A special Bayer interpolation algorithm is then used to create separate R, G and B values for every pixel location. Remember that before this interpolation, each location had only an R or G or B value; not an R value and a G value and a B value for each location.
The algorithm creates values for all three colors at every location by smearing (interpolating) each set of partial R, G and B values across the image.
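Real demosaicing algorithms are proprietary and far more sophisticated, but the basic smearing can be sketched as plain bilinear interpolation. This is a toy NumPy illustration assuming an RGGB layout; `bilinear_demosaic` is a name made up here, not any camera maker's actual algorithm:

```python
import numpy as np

def bilinear_demosaic(mosaic):
    """Toy bilinear demosaic of an RGGB mosaic: R at even-even sites,
    B at odd-odd, G elsewhere. Each missing color at a site is filled
    with the average of the neighboring sites that DID sense it."""
    h, w = mosaic.shape
    out = np.zeros((h, w, 3))
    yy, xx = np.mgrid[0:h, 0:w]
    masks = [
        (yy % 2 == 0) & (xx % 2 == 0),   # sites that sensed red
        (yy % 2) != (xx % 2),            # sites that sensed green
        (yy % 2 == 1) & (xx % 2 == 1),   # sites that sensed blue
    ]
    padded = np.pad(mosaic, 1)
    for c, mask in enumerate(masks):
        pm = np.pad(mask, 1)
        # Sum and count of sensed values in every 3x3 neighborhood.
        vals = sum((padded * pm)[dy:dy + h, dx:dx + w]
                   for dy in range(3) for dx in range(3))
        cnts = sum(pm[dy:dy + h, dx:dx + w].astype(int)
                   for dy in range(3) for dx in range(3))
        out[..., c] = vals / cnts
        out[..., c][mask] = mosaic[mask]  # keep directly sensed values exact
    return out

demo = bilinear_demosaic(np.full((4, 4), 7.0))
print(demo.shape)  # (4, 4, 3): full R, G and B at every location
```

The guessed values are neighborhood averages, which is exactly the blurring described above: resolution is traded for a complete color triple at every pixel.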
These algorithms are proprietary to each camera maker. They become more clever with time, allowing higher perceived sharpness that more closely simulates full resolution. As of 2006, these clever algorithms can start with one-third the data and make it look about as good as having one-half the number of pixels claimed.
Raw and JPG
These all start from the same data. The sensor is unchanged regardless of the mode you select in-camera.
RAW offers no advantages here, except for one potential gamble. Bayer interpolation takes place in the software that opens the raw data. Future advances in Bayer interpolation algorithms could be incorporated in future raw software, if and only if your camera maker continues to support yesterday's cameras in tomorrow's software. Just as likely, your camera maker may no longer support your old camera in tomorrow's raw software!
I shoot JPG.
Scanned Film
Scanned film and images reduced to fit the web have full red, green and blue resolution for every pixel. They look as sharp at 100% as they do reduced. Scanners do this because they have three separate sets of CCDs, one for each color. Therefore, a scanned image can be sharper than a digital camera image of the same resolution. Of course, to do this the scanner's optics and the image on the film being scanned must be sharp enough to support it.
EXAMPLE
Roll your mouse over to see the image without Bayer interpolation.
The original image is cropped from a Nikon D200 at 100%. Like every other digital camera's output, it is Bayer-interpolated. Roll your mouse over it to see the same image at full RGB resolution, without the interpolation. If still cameras used three CCDs like professional video cameras, we wouldn't need Bayer interpolation.
The full-resolution image was also shot on my D200, but with a lens of twice the focal length. I then downsampled it to half the size. By resizing the image to half the linear pixel dimensions, Photoshop takes four pixels and combines them into one. This has enough information to give full RGB resolution for this example.
The base image was shot with a Zeiss ZF 50mm lens at f/5.6. The other image was shot with a 105mm Micro lens at f/5.6. Obviously the light and wind changed from shot to shot. The tripod wasn't moved. Each lens has more resolution than my D200 at these apertures.
Of course you could apply sharpening. That would make it sharper, but not increase the resolution. Here's the Bayer-interpolated image with added sharpening (150% at 0.3 pixels). Compare it to the non-interpolated image.