In computing, JPEG is a commonly used standard method of compressing photographic images. The file format that employs this compression is commonly also called JPEG; the most common file extensions for this format are .jpeg, .jfif, .jpg, .JPG, and .JPE, although .jpg is the most common on all platforms.
The name stands for Joint Photographic Experts Group. JPEG itself specifies only how an image is transformed into a stream of bytes, but not how those bytes are encapsulated in any particular storage medium. A further standard, created by the Independent JPEG Group, called JFIF (JPEG File Interchange Format) specifies how to produce a file suitable for computer storage and transmission (such as over the Internet) from a JPEG stream. In common usage, when one speaks of a "JPEG file" one generally means a JFIF file, or sometimes an Exif JPEG file. There are, however, other JPEG-based file formats, such as JNG.
JPEG/JFIF is the most common format used for storing and transmitting photographs on the World Wide Web. It is not as well suited for line drawings and other textual or iconic graphics, because its compression method performs badly on these types of images (the PNG and GIF formats are in common use for that purpose; GIF, having only 8 bits per pixel, is not well suited for colour photographs, but PNG can preserve as much or more detail than JPEG).
Many of the options in the JPEG standard are little used. Here is a brief description of one of the more common methods of encoding when applied to an input that has 24 bits per pixel (eight each of red, green, and blue). This particular option is a lossy data compression method.
Color Space Transformation
First, the image is converted from RGB into a different color space called YCbCr (closely related to the YUV space used by NTSC and PAL colour television transmission; the chroma components are often loosely called U and V). The Y component represents the brightness of a pixel; the Cb and Cr components together represent the hue and saturation. This transformation is useful because the human eye can see more detail in the Y component than in the others.
The above transformation enables the next step, which is to reduce the spatial resolution of the Cb and Cr components (called "downsampling" or "chroma subsampling"). The ratios at which the downsampling can be done in JPEG are 4:4:4 (no downsampling), 4:2:2 (reduce by a factor of 2 in the horizontal direction), and most commonly 4:2:0 (reduce by a factor of 2 in both the horizontal and vertical directions). For the rest of the compression process, the Y, Cb and Cr components are processed separately and in a very similar manner.
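As an illustrative sketch (the helper names are not any standard library's API), the JFIF colour conversion and 2x2 chroma averaging can be written as follows; the weights are the standard BT.601 ones used by JFIF:

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """Convert 8-bit RGB samples to YCbCr using the JFIF (BT.601) weights.

    Y is the brightness; Cb and Cr carry the chroma, centred on 128.
    """
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  =  0.299    * r + 0.587    * g + 0.114    * b
    cb = -0.168736 * r - 0.331264 * g + 0.5      * b + 128
    cr =  0.5      * r - 0.418688 * g - 0.081312 * b + 128
    return np.stack([y, cb, cr], axis=-1)

def subsample_420(chroma):
    """4:2:0 subsampling: average each 2x2 block of a chroma plane."""
    h, w = chroma.shape
    return chroma.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
```

A pure white pixel (255, 255, 255) maps to Y = 255 with both chroma components at their neutral value of 128, since the Y weights sum to one and the chroma weights sum to zero.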
Discrete Cosine Transform
Next, each component (Y, U, V) of the image is "tiled" into sections of eight by eight pixels each, then each tile is converted to frequency space using a two-dimensional discrete cosine transform (DCT).
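For illustration, a direct (unoptimized) implementation of the orthonormal 8x8 type-II DCT can be built from its basis matrix; production codecs use fast factorizations instead:

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix: C[u, x] = a(u) cos((2x+1) u pi / 2n)."""
    c = np.array([[np.cos((2 * x + 1) * u * np.pi / (2 * n))
                   for x in range(n)] for u in range(n)])
    alpha = np.full(n, np.sqrt(2.0 / n))
    alpha[0] = np.sqrt(1.0 / n)  # the DC row gets the smaller scale factor
    return c * alpha[:, None]

def dct_2d(block):
    """2-D DCT of an 8x8 block that has already been level-shifted by -128."""
    c = dct_matrix(len(block))
    return c @ block @ c.T
```

A constant block transforms to a single DC coefficient (8 times the constant, for an 8x8 block) with every AC coefficient zero, which is why smooth image areas compress so well.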
In a worked example, each 8-bit value of an 8x8 subimage is first shifted by subtracting 128, moving the range from [0, 255] to [-128, 127] and centring the values on zero. Taking the two-dimensional DCT of the shifted block and rounding to the nearest integer yields an 8x8 matrix of coefficients with a rather large value in the top-left corner. This is the direct-current (DC) coefficient (in this example, -415), which is proportional to the average value of the block; the other 63 entries are the AC coefficients.
The human eye is fairly good at seeing small differences in brightness over a relatively large area, but not so good at distinguishing the exact strength of a high frequency brightness variation. This fact allows one to get away with greatly reducing the amount of information in the high frequency components. This is done by simply dividing each component in the frequency domain by a constant for that component, and then rounding to the nearest integer. This is the main lossy operation in the whole process. As a result of this, it is typically the case that many of the higher frequency components are rounded to zero, and many of the rest become small positive or negative numbers.
A common choice is the standard luminance quantization table given in the JPEG specification, in which the divisor is 16 at the top-left (DC) position and grows toward the high-frequency corner. Each DCT coefficient is divided by the corresponding entry of the quantization matrix and rounded to the nearest integer. For example, the DC coefficient of -415 becomes round(-415/16) = -26.
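The quantization step can be sketched as follows; the table is the example luminance table from Annex K of the JPEG specification (ITU-T T.81), which encoders may scale to trade quality against file size:

```python
import numpy as np

# Example luminance quantization table from Annex K of ITU-T T.81.
Q_LUMA = np.array([
    [16, 11, 10, 16,  24,  40,  51,  61],
    [12, 12, 14, 19,  26,  58,  60,  55],
    [14, 13, 16, 24,  40,  57,  69,  56],
    [14, 17, 22, 29,  51,  87,  80,  62],
    [18, 22, 37, 56,  68, 109, 103,  77],
    [24, 35, 55, 64,  81, 104, 113,  92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103,  99],
])

def quantize(dct_coeffs, qtable=Q_LUMA):
    """Divide each DCT coefficient by its quantization step and round.

    This is the lossy step: high-frequency coefficients, which get the
    largest divisors, are mostly rounded to zero.
    """
    return np.round(dct_coeffs / qtable).astype(int)
```

Quantizing a DC coefficient of -415 with the divisor 16 gives round(-415/16) = -26, matching the example above.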
Entropy Coding
Entropy coding is a special form of lossless data compression. It involves arranging the image components in a "zigzag" order that groups similar frequencies together, run-length encoding the zeros, and then applying Huffman coding to what is left. The JPEG standard also allows, but does not require, the use of arithmetic coding, which is mathematically superior to Huffman coding. However, this feature is rarely used, as it is covered by patents and is much slower to encode and decode than Huffman coding. Arithmetic coding typically makes files about 5% smaller.
The zig-zag sequence for the above quantized coefficients would be:
-26, -3, 0, -3, -2, -6, 2, -4, 1, -4, 1, 1, 5, 1, 2, -1, 1, -1, 2, 0, 0, 0, 0, 0, -1, -1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
JPEG has a special Huffman code word for ending the sequence prematurely when the remaining coefficients are zero. Using this special code word, EOB, the sequence becomes
-26, -3, 0, -3, -2, -6, 2, -4, 1, -4, 1, 1, 5, 1, 2, -1, 1, -1, 2, 0, 0, 0, 0, 0, -1, -1, EOB
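A sketch of the zig-zag scan and the end-of-block truncation (the run-length and Huffman coding themselves are omitted); the helper names are illustrative:

```python
def zigzag(block):
    """Flatten a square block of quantized coefficients in zig-zag order.

    Cells on the same anti-diagonal (constant row + col) are grouped
    together; alternate diagonals are walked in opposite directions.
    """
    n = len(block)
    def key(rc):
        s = rc[0] + rc[1]
        # odd diagonals run top-right to bottom-left, even ones the reverse
        return (s, rc[0] if s % 2 else rc[1])
    order = sorted(((r, c) for r in range(n) for c in range(n)), key=key)
    return [block[r][c] for r, c in order]

def truncate_eob(seq):
    """Drop the trailing run of zeros; the encoder emits EOB instead."""
    i = len(seq)
    while i > 0 and seq[i - 1] == 0:
        i -= 1
    return seq[:i]
```

The scan visits (0,0), (0,1), (1,0), (2,0), (1,1), (0,2), ..., so low-frequency coefficients come first and the long zero tail can be replaced by a single EOB symbol.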
Compression ratio and artifacts
The resulting compression ratio can be varied according to need by being more or less aggressive in the divisors used in the quantization phase. Compression of ten to one usually results in an image that cannot be distinguished by eye from the original. Compression of 100 to one is usually possible, but the result will show distinct artifacts compared to the original. The appropriate level of compression depends on the use to which the image will be put.
Those who use the World Wide Web may be familiar with the irregularities known as compression artifacts that appear in JPEG digital images. These are due to the quantization step of the JPEG algorithm. They are especially noticeable around eyes in pictures of faces. They can be reduced by choosing a lower level of compression; they may be eliminated by saving an image using a lossless file format, though for photographic images this will usually result in a larger file size.
Decoding
Decoding to display the image consists of doing all of the above in reverse.
Taking the quantized DCT coefficient matrix (with the DC coefficient recovered from its difference coding) and multiplying it entry-by-entry by the quantization matrix from above results in an approximation of the original DCT coefficients, which it closely resembles in the top-left, low-frequency portion. Taking the inverse DCT then yields an image whose values are still shifted down by 128.
Notice the slight differences between the original and the decompressed image, most readily seen in the bottom-left corner.
Adding 128 to each entry gives the decompressed subimage. It can be compared to the original subimage by taking the difference (original - decompressed), which yields the error values, with an average absolute error of about 5 per pixel. The error is most noticeable in the bottom-left corner, where the bottom-left pixel becomes darker than the pixel to its immediate right.
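The decoding steps above (de-quantize, inverse DCT, add 128 back) can be sketched as follows; `decode_block` is an illustrative name, not a standard API:

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix, as used on the encoding side."""
    c = np.array([[np.cos((2 * x + 1) * u * np.pi / (2 * n))
                   for x in range(n)] for u in range(n)])
    alpha = np.full(n, np.sqrt(2.0 / n))
    alpha[0] = np.sqrt(1.0 / n)
    return c * alpha[:, None]

def decode_block(quantized, qtable):
    """Rebuild an 8x8 pixel block from quantized DCT coefficients."""
    dct = quantized * np.asarray(qtable, dtype=float)  # de-quantize
    c = dct_matrix(len(dct))
    # C is orthogonal, so its transpose inverts the forward transform;
    # adding 128 undoes the level shift done before the DCT.
    pixels = c.T @ dct @ c + 128
    return np.clip(np.round(pixels), 0, 255).astype(int)
```

With only a DC coefficient of -26 and a flat divisor of 16, every pixel decodes to round(-416/8) + 128 = 76, a uniform dark block; the quantized AC coefficients are what reintroduce detail.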
JPEG is at its best on photographs and paintings of realistic scenes with smooth variations of tone and color. In such cases it usually achieves much better compression than purely lossless methods while still giving a good-looking image (in fact it will produce a much higher-quality image than formats such as GIF, which is lossless for drawings and iconic graphics but requires severe color quantization for full-color images).
JPEG compression artifacts blend well into photographs with detailed non-uniform textures, allowing higher compression ratios.
A mid-quality photo uses only about one sixth of the storage space but has noticeable loss of detail and artifacts. Once a certain threshold of compression is passed, compressed images show increasingly visible defects. See the article on rate-distortion theory for a mathematical explanation of this threshold effect.
Other lossy encoding formats
Newer lossy methods, particularly wavelet compression, perform even better in these cases. However, JPEG is a well established standard with plenty of software available, including free software, so it continues to be heavily used as of 2005. Also, many wavelet algorithms are patented, making it difficult or impossible to use them freely in many software projects.
The JPEG committee has now created its own wavelet-based standard, JPEG 2000, which is intended to eventually supersede the original JPEG standard.
Potential patent issues
In 2002, Forgent Networks asserted that it owned and would enforce patent rights on JPEG technology, arising from a patent filed in 1986. The announcement created a furor reminiscent of Unisys' attempts to assert its rights over the GIF image compression standard.
The JPEG committee investigated the patent claims in 2002 and found that they were invalidated by prior art. Nevertheless, between 2002 and 2004 Forgent was able to obtain about $90 million by licensing its patent to some 30 companies. In April 2004, Forgent sued 31 other companies to enforce further license payments. In July of the same year, a consortium of 21 large computer companies filed a countersuit with the goal of invalidating the patent.
The JPEG committee has as one of its explicit goals that their standards be implementable without payment of license fees, and they have secured appropriate license rights for their upcoming JPEG 2000 standard from over 20 large organizations.
See also
- Image compression
- JPEG 2000
- Motion JPEG
- Graphics editing program
- GDI+ section of GDI article (mentions a JPEG exploit)
External links
- Official JPEG site
- JPEG FAQ
- Wotsit.org's entry on the JPEG format
- ITU T.81 JPEG compression (PDF)
- JFIF File Format (PDF)
- The JPEG Still Picture Compression Standard, Summary by Gregory K. Wallace
The contents of this article are licensed from www.wikipedia.org under the GNU Free Documentation License.