Science Fair Project Encyclopedia
- MPEG-2 should not be confused with MP2, or MPEG-1 Audio Layer 2
MPEG-2 (1994) is the designation for a group of audio and video coding standards agreed upon by MPEG (Moving Picture Experts Group) and published as the ISO/IEC 13818 international standard. MPEG-2 is typically used to encode audio and video for broadcast signals, including direct broadcast satellite and cable TV. MPEG-2, with some modifications, is also the coding format used by standard commercial DVD movies.
MPEG-2 includes a Systems part (part 1) that defines Transport Streams, which are designed to carry digital video and audio over somewhat-unreliable media, and are used in broadcast applications.
The Video part (part 2) of MPEG-2 is similar to MPEG-1, but also provides support for interlaced video (the format used by broadcast TV systems). MPEG-2 video is not optimized for low bit-rates (less than 1 Mbit/s), but outperforms MPEG-1 at 3 Mbit/s and above. All standards-conforming MPEG-2 Video decoders are fully capable of playing back MPEG-1 Video streams.
With some enhancements, MPEG-2 Video and Systems are also used in most HDTV transmission systems.
The MPEG-2 Audio part (Part 3 of the standard) enhances MPEG-1's audio by allowing the coding of audio programs with more than two channels. Part 3 allows this to be done in a backward-compatible way, so that MPEG-1 audio decoders can still decode the two main stereo channels of the presentation.
In Part 7 of the MPEG-2 standard, audio can alternatively be coded in a non-backward-compatible way, which allows encoders to make better use of the available bandwidth. Part 7 is referred to as MPEG-2 AAC.
MPEG-2, the standard
This section gives general information about MPEG-2 Video, Audio, and Systems, excluding the modifications that apply when MPEG-2 is used on DVD or DVB.
An MPEG-2 System Stream typically consists of two elements:
- video data + time stamps
- audio data + time stamps
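For broadcast use, these elementary streams are carried in the Transport Stream defined by Part 1, which splits the data into fixed-size 188-byte packets beginning with a 0x47 sync byte. The sketch below parses the 4-byte Transport Stream packet header; the field layout follows the standard, but the function name and dictionary keys are illustrative choices, not API from any real library:

```python
def parse_ts_header(packet: bytes) -> dict:
    """Parse the 4-byte header of a 188-byte MPEG-2 Transport Stream packet."""
    if len(packet) != 188 or packet[0] != 0x47:
        raise ValueError("not a valid TS packet (must be 188 bytes, sync byte 0x47)")
    return {
        "transport_error":    bool(packet[1] & 0x80),   # set by demodulators on uncorrectable errors
        "payload_unit_start": bool(packet[1] & 0x40),   # a PES packet or section starts here
        "pid":                ((packet[1] & 0x1F) << 8) | packet[2],  # 13-bit stream identifier
        "scrambling_control": (packet[3] >> 6) & 0x03,
        "adaptation_field":   (packet[3] >> 4) & 0x03,
        "continuity_counter": packet[3] & 0x0F,         # detects lost packets per PID
    }

# Example: a null packet (PID 0x1FFF), used to pad the stream to a constant bitrate.
null_packet = bytes([0x47, 0x1F, 0xFF, 0x10]) + bytes(184)
print(parse_ts_header(null_packet)["pid"])  # 8191 (0x1FFF)
```

The 13-bit PID is what lets a receiver pick the audio and video of one program out of a multiplex carrying several.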
MPEG-2 video coding (simplified)
MPEG-2 provides generic coding of moving pictures and associated audio. It creates a video stream from three types of frame data (intra frames, forward-predicted frames, and bidirectionally predicted frames) that can be arranged in a specified order called the GOP structure (GOP = Group Of Pictures; see below). (Strictly speaking, the standard itself does not define or use the term GOP except in the name of a syntax structure called a GOP header, but users of MPEG-2 have found that the GOP concept helps convey a basic understanding of the standard.)
MPEG-2 supports both interlaced and progressive scan video streams. In progressive scan streams, the basic unit of encoding is a frame, while in interlaced streams, the basic unit may be either a field or a frame. In the discussion below, the generic terms "picture" and "image" refer to either fields or frames, depending on the type of stream.
An MPEG-2 video bitstream is made up of a series of data frames encoding pictures. The three ways of encoding a picture are: intra-coded (I picture), forward predictive (P picture) and bidirectional predictive (B picture).
The video image is separated into one luminance (Y) and two chrominance channels (also called color difference signals Cb and Cr). Blocks of the luminance and chrominance arrays are organized into "macroblocks", which are the basic unit of coding within a picture. Each macroblock is divided into four 8×8 luminance blocks. The number of 8×8 chrominance blocks per macroblock depends on the chrominance format of the source image. For example, in the common 4:2:0 format, there is one chrominance block per macroblock for each of the two chrominance channels, making a total of six blocks per macroblock.
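The block-counting arithmetic above can be sketched as follows (a toy illustration; the names are not from the standard):

```python
# 8x8 chrominance blocks per 16x16 macroblock for each MPEG-2 chroma format.
# Every macroblock always carries four 8x8 luminance (Y) blocks; the number
# of Cb/Cr blocks depends on how the chrominance planes are subsampled.
CHROMA_BLOCKS = {"4:2:0": 2, "4:2:2": 4, "4:4:4": 8}

def blocks_per_macroblock(chroma_format: str) -> int:
    luma = 4  # four 8x8 Y blocks tile the 16x16 macroblock
    return luma + CHROMA_BLOCKS[chroma_format]

print(blocks_per_macroblock("4:2:0"))  # 6, as described above
```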
In the case of I pictures, the actual image data is then passed through the encoding process described below. P and B pictures are first subjected to a process of "motion compensation", in which they are predicted from the previous (and in the case of B pictures, the next) image in time order. Each macroblock in the P or B picture is associated with an area in the previous or next image that is well-correlated with it, as selected by the encoder using a "motion vector". The motion vector that maps the macroblock to its correlated area is encoded, and then the difference between the two areas is passed through the encoding process described below.
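The block-matching idea behind motion estimation can be sketched as below. This is an exhaustive search minimizing the sum of absolute differences (SAD); real encoders use much faster search strategies, and all names here are illustrative:

```python
def sad(ref, cur, rx, ry, cx, cy, n=8):
    """Sum of absolute differences between the n x n block of `cur` at (cx, cy)
    and a candidate block of `ref` at (rx, ry)."""
    return sum(abs(ref[ry + j][rx + i] - cur[cy + j][cx + i])
               for j in range(n) for i in range(n))

def motion_search(ref, cur, cx, cy, search=4, n=8):
    """Full search over a +/- `search` pixel window; returns the (dx, dy)
    motion vector whose candidate block best matches the current block."""
    best, best_cost = (0, 0), sad(ref, cur, cx, cy, cx, cy, n)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            rx, ry = cx + dx, cy + dy
            if 0 <= rx and 0 <= ry and rx + n <= len(ref[0]) and ry + n <= len(ref):
                cost = sad(ref, cur, rx, ry, cx, cy, n)
                if cost < best_cost:
                    best_cost, best = cost, (dx, dy)
    return best

# A synthetic image whose content shifts 2 pixels to the right between frames:
ref = [[(7 * x + 13 * y) % 251 for x in range(16)] for y in range(16)]
cur = [[(7 * (x - 2) + 13 * y) % 251 for x in range(16)] for y in range(16)]
print(motion_search(ref, cur, 4, 4))  # (-2, 0): the block came from 2 pixels to the left
```

The encoder then transmits the motion vector plus the (small) residual difference between the two blocks, rather than the block itself.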
Each block is treated with an 8×8 discrete cosine transform (DCT). The resulting DCT coefficients are then quantized, re-ordered to maximize the probability of long runs of zeros and low amplitudes of subsequent values, and run-length coded. Finally, a fixed-table Huffman encoding scheme is applied.
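The pipeline just described can be sketched in Python. This is a naive illustration only: it uses a slow direct DCT, a uniform quantizer step instead of the standard's quantization matrices, and stops before the actual Huffman tables:

```python
import math

N = 8

def dct2(block):
    """Naive 8x8 two-dimensional DCT-II, the transform applied to each block."""
    def c(k):
        return math.sqrt(1 / N) if k == 0 else math.sqrt(2 / N)
    return [[c(u) * c(v) * sum(block[y][x]
                               * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                               * math.cos((2 * y + 1) * v * math.pi / (2 * N))
                               for x in range(N) for y in range(N))
             for u in range(N)] for v in range(N)]

def zigzag(matrix):
    """Flatten an 8x8 array in the zig-zag scan order used before run-length
    coding, so low-frequency coefficients come first."""
    order = sorted(((r, c) for r in range(N) for c in range(N)),
                   key=lambda p: (p[0] + p[1],
                                  p[0] if (p[0] + p[1]) % 2 else -p[0]))
    return [matrix[r][c] for r, c in order]

def run_length(scan):
    """Code nonzero coefficients as (zero-run, level) pairs; trailing zeros
    are implied by an end-of-block code in the real bitstream."""
    pairs, run = [], 0
    for v in scan:
        if v == 0:
            run += 1
        else:
            pairs.append((run, v))
            run = 0
    return pairs

# A flat 8x8 block of sample value 128 reduces to a single DC coefficient:
block = [[128] * 8 for _ in range(8)]
quantized = [[round(f / 16) for f in row] for row in dct2(block)]  # uniform step 16
print(run_length(zigzag(quantized)))  # [(0, 64)]
```

The example shows why the scheme compresses well: a visually flat block collapses to one symbol, and smooth natural imagery behaves almost as favorably.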
I pictures exploit spatial redundancy; P and B pictures exploit temporal redundancy. Because adjacent frames in a video stream are often well-correlated, P pictures may be 10% of the size of I pictures, and B pictures 2% of their size.
The sequence of different frame types is called the Group of Pictures (GOP) structure. There are many possible structures, but a common one is 15 frames long and has the sequence IBBPBBPBBPBBPBB. A similar 12-frame sequence is also common. The ratio of I, P and B pictures in the GOP structure is determined by the nature of the video stream and the bandwidth constraints on the output stream, although encoding time may also be an issue. This is particularly true in live transmission and in real-time environments with limited computing resources, as a stream containing many B pictures can take three times longer to encode than an I-picture-only file.
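Using the rough size ratios above (P ≈ 10% and B ≈ 2% of an I picture), the saving from a typical 15-frame GOP can be estimated:

```python
# Relative size of the common 15-frame GOP, measured in "I-picture units",
# using the ballpark ratios from the text (not exact figures for any encoder).
GOP = "IBBPBBPBBPBBPBB"
REL_SIZE = {"I": 1.0, "P": 0.10, "B": 0.02}

gop_size = sum(REL_SIZE[f] for f in GOP)   # 1 I + 4 P + 10 B
all_intra = len(GOP) * REL_SIZE["I"]       # every frame coded as an I picture
print(f"{gop_size:.2f} vs {all_intra:.0f} I-picture units: "
      f"~{all_intra / gop_size:.1f}x smaller than all-intra coding")
```

Under these assumptions the GOP costs about 1.6 I-picture units instead of 15, roughly a nine-fold saving from temporal prediction alone.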
The output bit-rate of an MPEG-2 encoder can be constant or variable, with the maximum bit rate determined by the playback media — for example the DVD movie maximum is 10.4 Mbit/s. To achieve a constant bit-rate the degree of quantization is iteratively altered to achieve the output bit-rate requirement. Increasing quantization leads to visible artefacts when the stream is decoded, generally in the form of "mosaicing", where the discontinuities at the edges of macroblocks become more visible as bit rate is reduced.
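The iterative quantization adjustment can be caricatured with a toy model. Real rate control measures coded size by actually encoding the frame; the `frame_bits` function below is a hypothetical stand-in for that measurement:

```python
def frame_bits(quant_scale, complexity=8_000_000):
    """Toy model: coded size falls roughly inversely with quantizer scale.
    (A real encoder would determine this by coding the frame.)"""
    return complexity / quant_scale

def pick_quantizer(target_bits, lo=1, hi=31, tol=0.02):
    """Bisect the MPEG-2 quantizer-scale range [1, 31] until the modelled
    frame size lands within `tol` of the target: the iterative adjustment
    described above, in miniature."""
    for _ in range(30):
        q = (lo + hi) / 2
        bits = frame_bits(q)
        if abs(bits - target_bits) / target_bits < tol:
            return q
        if bits > target_bits:   # frame too big: quantize more coarsely
            lo = q
        else:                    # frame too small: spend more bits on quality
            hi = q
    return q

print(pick_quantizer(target_bits=500_000))  # a coarse scale for a tight bit budget
```

The tighter the bit budget, the coarser the chosen scale, which is exactly where the "mosaicing" artefacts described above come from.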
MPEG-2 audio encoding
MPEG-2 also introduces new audio encoding methods. These are:
- low bitrate encoding with halved sampling rate (MPEG-1 Layer 1/2/3 LSF)
- multichannel encoding with up to 5.1 channels
- MPEG-2 AAC
MPEG-2 on DVD
Additional restrictions and modifications of MPEG-2 on DVD are:
- Resolution
  - 720 × 480, 704 × 480, 352 × 480, or 352 × 240 pixels (NTSC)
  - 720 × 576, 704 × 576, 352 × 576, or 352 × 288 pixels (PAL)
- Aspect ratio: 4:3 or 16:9
- Frame rate
  - 59.94 fields/s, 23.976 frames/s (with 3:2 pulldown flags), or 29.97 frames/s (NTSC)
  - 50 fields/s or 25 frames/s (PAL)
- Audio+video bitrate
  - Buffer average maximum 9.8 Mbit/s
  - Peak 15 Mbit/s
  - Minimum 300 kbit/s
- YUV 4:2:0 chroma format
- Additional subtitles possible
- Closed captioning (NTSC only)
- Audio
  - Linear Pulse Code Modulation (LPCM): 48 kHz or 96 kHz; 16- or 24-bit; up to six channels (not all combinations possible due to bitrate constraints)
  - MPEG Layer 2 (MP2): 48 kHz, up to 5.1 channels (required in PAL players only)
  - Dolby Digital (DD, also known as AC-3): 48 kHz, 32–448 kbit/s, up to 5.1 channels
  - Digital Theater Systems (DTS): 754 kbit/s or 1510 kbit/s (not required for DVD player compliance)
  - NTSC DVDs must contain at least one LPCM or Dolby Digital audio track.
  - PAL DVDs must contain at least one MPEG Layer 2, LPCM, or Dolby Digital audio track.
  - Players are not required to play back audio with more than two channels, but must be able to downmix multichannel audio to two channels.
- GOP structure
  - A sequence header must be output before every GOP
  - Maximum frames per GOP: 18 (NTSC) / 15 (PAL)
  - Closed GOPs required for multiple-angle DVDs
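The mandatory-audio-track rule above can be expressed as a small check. The format strings and function name here are illustrative, not from any DVD authoring library:

```python
# Per the DVD restrictions above: NTSC discs need at least one LPCM or
# Dolby Digital (AC-3) track; PAL discs also accept MPEG Layer 2.
REQUIRED = {"NTSC": {"LPCM", "AC-3"}, "PAL": {"LPCM", "AC-3", "MP2"}}

def has_mandatory_audio(tv_system: str, tracks: list[str]) -> bool:
    """True if the disc's audio track list satisfies the region's minimum."""
    return bool(REQUIRED[tv_system] & set(tracks))

print(has_mandatory_audio("NTSC", ["DTS", "AC-3"]))  # True
print(has_mandatory_audio("NTSC", ["DTS"]))          # False: DTS alone is optional
```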
MPEG-2 on DVB
Additional restrictions and modifications of MPEG-2 on DVB are:
- Restricted to one of the following resolutions:
  - 720 × 480 pixels, 24/1.001, 24, 30/1.001 or 30 frames/s
  - 640 × 480 pixels, 24/1.001, 24, 30/1.001 or 30 frames/s
  - 544 × 480 pixels, 24/1.001, 24, 30/1.001 or 30 frames/s
  - 480 × 480 pixels, 24/1.001, 24, 30/1.001 or 30 frames/s
  - 352 × 480 pixels, 24/1.001, 24, 30/1.001 or 30 frames/s
  - 352 × 240 pixels, 24/1.001, 24, 30/1.001 or 30 frames/s
  - 720 × 576 pixels, 25 frames/s
  - 544 × 576 pixels, 25 frames/s
  - 480 × 576 pixels, 25 frames/s
  - 352 × 576 pixels, 25 frames/s
  - 352 × 288 pixels, 25 frames/s
MPEG-2 over ATSC
- Restricted to one of the following resolutions:
  - 1920 × 1080 pixels (1080i/1080p)
  - 1280 × 720 pixels (720p)
  - 704 × 480 pixels
  - 640 × 480 pixels
Note: 1080i is encoded with 1920×1088 pixel frames, but the last 8 lines are discarded prior to display.
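The 1088-line figure follows from macroblock alignment: coded picture dimensions must be a whole number of 16-pixel macroblocks. A short illustration:

```python
def coded_height(display_height, macroblock=16):
    """Round a picture dimension up to a whole number of 16-pixel macroblocks,
    which is why 1080-line video is coded as 1088 lines."""
    return -(-display_height // macroblock) * macroblock  # ceiling division

print(coded_height(1080))  # 1088: the extra 8 lines are dropped before display
print(coded_height(720))   # 720 is already macroblock-aligned
```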
The ISO/IEC 13818 standard consists of the following parts:
- ISO/IEC 13818-1 (Systems): describes synchronization and multiplexing of video and audio.
- ISO/IEC 13818-2 (Video): compression codec for interlaced and non-interlaced video signals.
- ISO/IEC 13818-3 (Audio): compression codec for perceptual coding of audio signals; a multichannel extension of MPEG-1 Audio.
- ISO/IEC 13818-4: describes procedures for testing compliance.
- ISO/IEC 13818-5: describes systems for software simulation.
- ISO/IEC 13818-6: describes extensions for DSM-CC (Digital Storage Media Command and Control).
- ISO/IEC 13818-7: Advanced Audio Coding (AAC).
- ISO/IEC 13818-9: extension for real-time interfaces.
- ISO/IEC 13818-10: conformance extensions for DSM-CC.
- Approximately 640 patents worldwide make up the "essential" intellectual property surrounding MPEG-2
- These are held by over 20 corporations and one university, including:
- Canon Inc.
- Columbia University
- France Télécom (CNET)
- General Electric Capital Corporation
- General Instrument Corp.
- GE Technology Development, Inc.
- Hitachi, Ltd.
- KDDI Corporation (KDDI)
- Lucent Technologies
- LG Electronics Inc.
- Nippon Telegraph and Telephone Corporation (NTT)
- Robert Bosch GmbH
- Sanyo Electric Co., Ltd.
- Scientific Atlanta
- Thomson Licensing S.A.
- Victor Company of Japan, Limited (JVC).
The contents of this article are licensed from www.wikipedia.org under the GNU Free Documentation License.