If you ever set a Samsung Galaxy Tab S screen-up on the stage of a
microscope, turned on and showing a plain white screen, and looked through the
eyepiece, this is what you would see:
[Image: Samsung Galaxy Tab S subpixels, ×200]
Most people would go,
"Whaaaaa... what are all those bars and boxes doing?" They are what
produce the light and color on the screen: an assortment of little tidbits of
light called subpixels, some red, some green, and some blue. One red, one green,
and one blue subpixel together make up a pixel. On many devices you can see the
pixels as little blocks of color, but each one is really three colored subpixels
scrunched up so close together, with only a tiny bit of space between them, that
a pixel with all three subpixels lit creates the illusion of white. If you hold a
high-power magnifying glass up to your computer screen, you can get the basic idea of
what pixels look like. On most TVs, the subpixels are so colossal you can see
them with the naked eye up close!
But there's something very odd about
the image. Why do the red and green subpixels have the classic boxy shape,
while their blue counterparts are shaped in a strange dash pattern?
I looked through my microscope at many different colors on the screen, and a
lot of them contained various shades of blue. I formed a hypothesis that the
blue subpixels' shape was altered so that more blue could fit on the screen.
History of the Pixel
What is a pixel? The word “pixel” is
formed when the words “picture” and “element” are meshed together:
pic(x)ture element.
The history of the pixel, like most
anything to do with pictures, spans a long time. Way back in 1839, the
daguerreotype, invented by the Frenchman Louis-Jacques-Mandé Daguerre, was introduced.
It was the first practical, publicly available form of photography, and it involved
metals like gold and silver and a whole lot of other substances. But it was NOT
easy to make or use.
One started by polishing a
silver-coated copper plate, then sensitizing it to light with iodine and bromine
inside specialized, light-proof boxes. The plate was then carried to the camera
in a light-proof holder. The picture would be etched onto the plate, not with a
chisel, but with light: because dark and light parts of the scene reflect
different amounts of light, the plate recorded dark and light areas where the
light hit it. This formed a latent image, an image captured on the plate but not
yet visible. Later, the latent image would be made visible by exposing the plate
to hot mercury vapor. Then the light sensitivity was removed with a solution of
a substance called sodium thiosulfate, and the plate was given a thorough wash
with distilled water. Finally, the plate would be gilded, or toned, with gold
chloride, dried lightly with a pump, and sealed behind glass to protect it.
Then, in 1861, the first permanent color
photograph was made: three black-and-white images were taken through red, green,
and blue filters, then projected back through their respective filters on top of
one another, and the combined image appeared in color.
In 1926, the first televised moving
images were produced, using a mechanical television set with a scanning
disk that spun very, very quickly.
One year later, Philo T. Farnsworth
demonstrated the first cathode ray tube television (CRT TV). It worked like
this: inside a sealed glass tube there is an electron gun at one end. It does
exactly what its name sounds like; it shoots out a beam of electrons, which is
steered across the screen by magnetic fields until the electrons land on a
phosphor-coated screen, where they form the picture.
Color TV was introduced in the 1950s.
A black-and-white CRT TV had only one electron gun shooting out a stream of
electrons to form the picture. A color TV had three electron guns, one each for
red, green, and blue, the primary optical colors. Their beams would hit tiny
patches of phosphor, a substance that glows when struck by electrons. The
patches were grouped in threes called triads, the closest ancestors of modern
pixels. These TVs built their pictures out of horizontal scan lines, 525 of
them in the American standard.
Then, in the digital age, those
lines were spliced into rectangles, and the pixel was born; the word "pixel"
itself dates to 1965. The founding father of the digital image was scientist
Russell Kirsch, who in 1957 took a picture of his baby son and scanned it into a
computer. Kirsch had the computer break the image up into many tiny squares and
assigned each square a binary color of black or white. The technique is closely
related to how the ancient Greeks made mosaics: they pressed many very small,
different-colored squares of glass or stone into wet cement at just the right
spots to create the picture. If you stand right up close to a mosaic, you see
little squares; if you stand far away, you see a very clear image.
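Here is roughly what Kirsch's break-it-into-squares idea looks like in modern terms. This is only a minimal sketch written in Python; the tiny made-up "photo", the block size, and the threshold are my own assumptions for illustration, not anything from Kirsch's actual program.

```python
# A minimal sketch of digitizing a picture into black-or-white squares.
import numpy as np

# A made-up 6x6 grayscale "photo": 0 is black, 255 is white.
photo = np.array([
    [ 10,  20, 200, 210,  30,  40],
    [ 15,  25, 220, 230,  35,  45],
    [200, 210,  50,  60, 240, 250],
    [210, 220,  55,  65, 245, 255],
    [ 30,  40, 230, 240,  20,  10],
    [ 35,  45, 235, 245,  25,  15],
])

block = 2          # each "pixel" will be a 2x2 square of the photo
threshold = 128    # brighter than this counts as white

rows, cols = photo.shape
for r in range(0, rows, block):
    line = ""
    for c in range(0, cols, block):
        square = photo[r:r + block, c:c + block]
        # average the square, then force it to pure black or pure white
        line += "#" if square.mean() >= threshold else "."
    print(line)    # '#' = white square, '.' = black square
```

Each printed '#' or '.' plays the role of one black-or-white square, just like one tile in a mosaic.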
How Do Pixels Work?
A pixel is made up of one red, one green,
and one blue subpixel. Pixels come in all sorts of sizes, shapes, and colors,
and the type of light varies too. The Samsung Galaxy Tab S pixels are LED lights.
The acronym means light-emitting diode. A diode is a device that only allows
electric current to flow in one direction, so it's sort of like an electric
one-way road. LEDs give off light when activated. That light is made of tiny
crumbs of energy, many times smaller than an atom, called photons. Every photon
travels at the same speed, but a photon can carry more or less energy, and that
energy is what determines the color of the LED. The subpixel LEDs only need to
give off three colors, the optical primary colors of red, green, and blue.
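To make the energy-and-color idea concrete, here is a minimal sketch of the physics using the standard formula E = h·c / wavelength. The wavelengths below are typical values I picked for red, green, and blue LEDs; they are assumptions, not measurements from the Galaxy Tab S.

```python
# A photon's energy depends on its wavelength, and that energy sets the color.
PLANCK = 6.626e-34      # Planck's constant, joule-seconds
LIGHT_SPEED = 3.0e8     # speed of light, meters per second
EV = 1.602e-19          # joules in one electron-volt

# Assumed typical LED wavelengths in nanometers (not measured from the Tab S).
wavelengths_nm = {"red": 630, "green": 530, "blue": 460}

for color, nm in wavelengths_nm.items():
    energy_joules = PLANCK * LIGHT_SPEED / (nm * 1e-9)   # E = h*c / wavelength
    print(f"{color:5s} photon: {energy_joules / EV:.2f} eV")

# Blue photons carry the most energy and red photons the least;
# every photon moves at the same speed, only the energy differs.
```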
When you combine different intensities
(brightnesses) of red, green, and blue light, you can achieve just about any
color you wish. In fact, a single pixel can generate 2^24, or 16,777,216,
different colors! That is because there are 256 possible intensities for each of
the three colors, and 256 × 256 × 256 = 2^24. To demonstrate this, I went to
Photoshop and experimented with different colors using the background-color
picker. At the bottom of the picker were three text boxes, labeled R for red,
G for green, and B for blue. Each box held a number between 0 and 255,
inclusive. A value of 0 meant that color was turned all the way off, completely
absent from the final mixed color; a value of 255 meant it was turned all the
way on, completely present. The following is a color palette showing all the
primary and secondary colors, plus some other basic colors.
[Image: color palette. The intensities of the subpixels are given as their corresponding Photoshop R/G/B values on the left; "H/S/B" at the top means Hue/Saturation/Brightness.]
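The same mixing idea can be written out as a tiny sketch. This is just an illustration of the three 0-to-255 channels described above; the mix function and the hex-code output are my own example, not anything built into Photoshop.

```python
# Each channel gets a whole number from 0 (off) to 255 (fully on).
def mix(red, green, blue):
    """Pack three 0-255 intensities into the usual #RRGGBB hex code."""
    for value in (red, green, blue):
        assert 0 <= value <= 255, "each channel must be between 0 and 255"
    return f"#{red:02X}{green:02X}{blue:02X}"

print(mix(255, 0, 0))      # pure red      -> #FF0000
print(mix(255, 255, 0))    # red + green   -> #FFFF00 (yellow)
print(mix(255, 255, 255))  # all three on  -> #FFFFFF (white)
print(mix(0, 0, 0))        # all three off -> #000000 (black)

# 256 choices per channel, three channels:
print(256 ** 3)            # 16777216 = 2**24 possible colors
```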