Recently I tried to import all the videos from my old Sony digital camcorder and transcode them to DVDs for sharing, and I’d like to summarize what I learnt here as a future reference for myself. 🙂
PAL vs. NTSC
The first thing I figured out is that my digital camcorder records PAL video in DVD quality. That is to say, the captured video should be interlaced PAL video with a resolution of 720×576 at 25 FPS according to the DVD specification, and the following screenshot of the video properties confirms my assumption:
PAL and NTSC differ in several ways:
- They’re not compatible with each other;
- They’re convertible to each other, with some reduction of picture quality (PQ);
- They use different resolutions and frame rates;
- Both of them support 4:3 and 16:9 display aspect ratios.
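The differences are easy to check numerically. Here is a minimal Python sketch; the exact fractional NTSC frame rate (30000/1001 ≈ 29.97 fps) is my addition, not something taken from the camcorder screenshots:

```python
from fractions import Fraction

# Nominal DVD frame sizes and rates for the two systems.
# The NTSC rate 30000/1001 (≈ 29.97 fps) is an assumption of mine;
# the other numbers come straight from the article.
SYSTEMS = {
    "PAL":  {"width": 720, "height": 576, "fps": Fraction(25)},
    "NTSC": {"width": 720, "height": 480, "fps": Fraction(30000, 1001)},
}

for name, s in SYSTEMS.items():
    ratio = Fraction(s["width"], s["height"])  # stored image aspect ratio
    print(f"{name}: {s['width']}x{s['height']} "
          f"({ratio.numerator}:{ratio.denominator}) at {float(s['fps']):.2f} fps")
# PAL: 720x576 (5:4) at 25.00 fps
# NTSC: 720x480 (3:2) at 29.97 fps
```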
The following diagram shows how they are used in different countries:
I then figured out that my digital camcorder supports both 4:3 and 16:9 display aspect ratios. I usually shot videos in 4:3 because I thought it used more pixels than 16:9, but I was wrong, as you’ll see in the next section.
Pixel aspect ratio
The next thing I noticed is that the resolution 720×576 has an aspect ratio of 5:4, yet the display aspect ratio was 4:3 when I played the video. I thought it had something to do with the different video systems used by my camcorder and my PC (PAL vs. NTSC), so I looked it up in the NTSC specification and found that the NTSC resolution is 720×480, which is 3:2. I was rather confused by the fact that neither PAL nor NTSC provides a native resolution with a 4:3 aspect ratio. I then learnt that both PAL and NTSC support the 16:9 display aspect ratio with the same resolutions (720×576 for PAL and 720×480 for NTSC) as for 4:3. What the heck? Are pixels rectangles instead of squares?
Right, in both video systems pixels are indeed rectangles instead of squares, a quirk inherited from the legacy analogue TV systems. To make it short, there are three different aspect ratios:
Storage Aspect Ratio (SAR): the aspect ratio of the image as stored (5:4 in PAL and 3:2 in NTSC).
Display Aspect Ratio (DAR): the aspect ratio of the image as displayed (4:3 or 16:9).
Pixel Aspect Ratio (PAR): the aspect ratio of the pixels themselves. For square pixels it is 1:1, and for PAL video in 4:3 display aspect ratio it is 16:15. See the image below for illustration:
It’s not hard to conclude that:
SAR × PAR = DAR
You can read more about how it works here. To summarize: no matter which display aspect ratio is chosen, the camcorder stores the video at the same full resolution. When capturing or playing it back, we need to specify either the display aspect ratio or the pixel aspect ratio so that the image isn’t squeezed horizontally or vertically.
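The relationship between the three ratios can be verified with Python’s fractions module. Note that these are the idealized PAR values implied by the formula above; the PARs defined by ITU-R BT.601 differ very slightly:

```python
from fractions import Fraction

def par_for(width, height, dar):
    """Pixel aspect ratio needed so that a width x height frame
    displays at the given DAR. Rearranged from SAR * PAR = DAR,
    i.e. PAR = DAR / SAR."""
    sar = Fraction(width, height)
    return dar / sar

# PAL 4:3 -> PAR 16:15, exactly as stated above
print(par_for(720, 576, Fraction(4, 3)))   # 16/15
# The other three DVD combinations for comparison:
print(par_for(720, 576, Fraction(16, 9)))  # 64/45  (PAL 16:9)
print(par_for(720, 480, Fraction(4, 3)))   # 8/9    (NTSC 4:3)
print(par_for(720, 480, Fraction(16, 9)))  # 32/27  (NTSC 16:9)
```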
Interlacing
I have mentioned that videos from my camcorder are interlaced according to the PAL DVD standard, but what does interlacing actually mean?
Interlacing is a technique for doubling the perceived frame rate of a video signal without consuming extra bandwidth. It is achieved by interleaving every two consecutive pictures (each with half the height) into one frame. In fact, we don’t call these pictures “pictures” but fields, so two fields are mixed into one frame. The process works as follows:
- Record field 1
- Record field 2
- Mix (interlace) fields 1 and 2 into one frame and save it as frame 1
- Record field 3
- Record field 4
- Mix (interlace) fields 3 and 4 into one frame and save it as frame 2
As an example, my camcorder records 50 fields per second and interlaces them into 25 frames per second. The following three pictures show how two fields are interlaced into one frame:
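As a toy illustration (a deliberately minimal model, nothing like the camcorder’s actual processing), weaving two half-height fields into one frame can be sketched in Python:

```python
def weave(field1, field2):
    """Interlace two half-height fields into one full frame.

    Each field is a list of scan lines. Here field1 supplies the
    even lines and field2 the odd lines; which field actually comes
    first in a real stream depends on the field order.
    """
    assert len(field1) == len(field2)
    frame = []
    for line_a, line_b in zip(field1, field2):
        frame.append(line_a)
        frame.append(line_b)
    return frame

# A toy 4-line frame built from two 2-line fields:
f1 = ["A0", "A1"]     # lines captured at time t
f2 = ["B0", "B1"]     # lines captured 1/50 s later
print(weave(f1, f2))  # ['A0', 'B0', 'A1', 'B1']
```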
There are two field orders, depending on whether the lower or the upper field should be rendered first:
Lower/Bottom field first: the lower field (lines) is rendered first.
Upper/Top field first: the upper field (lines) is rendered first.
To make things even more complex, different systems use different field orders. MPEG-2 videos on DVD always use upper field first, while our camcorders normally use lower field first; see the difference below:
Therefore, it is very important to choose the right field order when we encode captured DV AVI videos to MPEG-2. Fortunately, today’s transcoders/encoders are normally aware of the difference and do the conversion from one order to the other automatically.
A deinterlacing process is required to properly play interlaced material on a progressive display such as a computer monitor (which is always progressive). If deinterlacing is not applied, we’ll see mouse teeth (combing) at the edges of moving objects, as the interlaced frame above shows. Today’s display devices such as LED TVs and projectors all support progressive signals, so we might even consider deinterlacing when we transcode the video to MPEG-2 (I wouldn’t recommend it, though, as the change is irreversible).
There are many different algorithms to deinterlace an interlaced video:
Weave: Show both fields per frame. This basically doesn’t do anything to the frame, so it leaves us with mouse teeth but keeps the full resolution.
Bob: Display every field (so we don’t lose any information) one after the other, without interlacing, at 50 fps.
Blend: Overlay both fields together. This gives good results when there’s no movement, but moving objects show a ghostly unsharpness.
Area based: Blend only the areas with mouse teeth rather than the whole frame. This can be done by comparing frames in time or in space/position.
Motion blur: Blur the mouse teeth where needed instead of mixing (blending) them with the other field. This gives a more film-like look.
Discard: Discard every second line (the picture is then half the height) and resize it during playback. This is the same as skipping field 2, field 4, field 6… We could call this “Even Fields Only” or “Odd Fields Only”.
Combination of Bob + Weave: Analyze the two fields and deinterlace only the parts that need it.
Motion compensation: Track the movement of each object across a scene spanning many frames, effectively analyzing a group of consecutive frames instead of single frames in isolation.
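The simpler algorithms above can be sketched on a toy frame, represented as a list of rows of pixel intensities (again a minimal model for illustration, not production code):

```python
def discard(frame):
    """'Discard': keep only the even (one field's) lines, halving the height."""
    return frame[::2]

def bob(frame):
    """'Bob': split the frame back into its two half-height fields,
    to be shown one after the other at double the frame rate."""
    return frame[0::2], frame[1::2]

def blend(frame):
    """'Blend': average each pair of adjacent lines from the two fields."""
    return [[(a + b) / 2 for a, b in zip(frame[i], frame[i + 1])]
            for i in range(0, len(frame) - 1, 2)]

# A toy 4-line frame whose two fields differ (as if something moved):
frame = [[10, 10], [30, 30], [10, 10], [30, 30]]
print(discard(frame))  # [[10, 10], [10, 10]]
print(bob(frame))      # ([[10, 10], [10, 10]], [[30, 30], [30, 30]])
print(blend(frame))    # [[20.0, 20.0], [20.0, 20.0]]
```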
Capture and transcode interlaced video to MPEG-2
In this section I’d like to present my steps to capture, edit, transcode, and author DVDs from video tapes. The following tools are needed:
Corel VideoStudio X3: for capturing and editing
TMPGEnc 4 Xpress: for transcoding
Corel DVD MovieFactory 7 SE: for authoring final DVDs
1. Capturing and editing
For simplicity, I used VideoStudio to capture and edit videos from my camcorder. I normally capture videos in the native DV AVI format and then split the video by scenes. VideoStudio provides automatic content-based scene detection, which helps a lot; see the following screenshot:
After the video is divided into scenes, I quickly review all the scenes and remove the ones I don’t need. In the end I choose to output the video in DV format (either 4:3 or 16:9), which makes sure VideoStudio does not transcode the video. I strongly recommend NOT using VideoStudio to transcode, as its built-in MPEG encoder produces poorer PQ compared to more professional encoders.
2. Transcoding
The next step is to transcode the DV AVI videos to MPEG-2. I use TMPGEnc 4 Xpress for transcoding because it is more configurable and provides better PQ. To start, add the videos to the source video list and make sure that TMPGEnc 4 Xpress detects the correct interlacing field order (bottom field first):
Then click Format and configure the MPEG encoding settings as shown in the screenshots below:
Finally, click the Encode button, choose the output folder, and press the Start encode button. Encoding normally takes a long time, so please be patient.
3. Authoring DVD
Finally, I use Corel DVD MovieFactory 7 SE for authoring DVDs. The software is very user friendly: I just follow the wizard to add the transcoded MPEG-2 videos, choose a menu template, and then burn the DVD or write an ISO image to the hard drive. Remember to make sure the “Do not convert compliant MPEG files” option is selected to bypass unnecessary transcoding (which would decrease PQ dramatically):
Deinterlacing during playback of the video
The transcoded MPEG-2 files and the DVD play fine in PowerDVD 10. Upscaling and hardware deinterlacing are applied automatically, and the PQ looks comparatively fine. However, I noticed deinterlacing doesn’t work when I play the video with MPC-HC + FFDShow. I suspect it is because FFDShow outputs RGB32, which cannot be deinterlaced by the graphics card. To solve the problem we can enable a software deinterlacing filter in FFDShow as follows:
Don’t be confused by the deinterlacing settings in the output section. Those mainly control passing interlacing-related information, obtained from the input stream or FFDShow’s internal decoders, on to the next filter: