the DV, DVCAM & DVCPRO Formats    copyright 1998-2008 Adam J. Wilt
DV FAQ - etc.

What's new:
2008.08.01 - Minor typos and grammatical errors fixed.

Didn't find what you wanted here? Try the other pages listed below...

DV - contents & links
  Detailed listing of this site's DV contents, and links to other sites.

DV Technical Details
  The DV Formats Tabulated; standards documents & where to get them.

DV FAQ - technical
  DV formats, sampling, compression, audio, & 1394/FireWire/i.LINK.

DV FAQ - editing
  linear & nonlinear; hard & soft codecs; transcoding; dual-stream NLE.
you are here >
DV FAQ - etc.
  16:9; film-style; frame mode; slow shutters; image stabilization, etc.

DV Pix
  DV sampling, artifacts, tape dropout, generation loss, codecs.

Video Tidbits
  Tips & tricks, mostly DV-related; getting good CG.

16:9 widescreen

What is 16:9 widescreen?

16:9 is the widescreen format that the world has standardized on for future HDTV services. It has also been used in the NHK 1125-line analog HDTV standard and the Eureka 1250-line HDTV standard, as well as a variety of enhanced SDTV (standard-definition TV) services in Europe and Japan. The screen is 16 units wide by 9 units high; the aspect ratio is called 16:9 because that's easier to remember than the "normalized" figure of approximately 1.78:1.

Currently, most SDTV in the world is 4:3 (which equals 12:9, or 1.33:1). 35mm motion pictures are typically 1.66:1 (European), 1.85:1 (American), or 2.39:1 (anamorphic; adopted by SMPTE in 1971; hides projectionist's splices a bit better than the previous standard of 2.35:1), although a bewildering variety of aspect ratios has been used at one time or another.
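
If you want to see the arithmetic, here's a quick Python check of those "normalized" numbers (nothing camera-specific, just division):

    # Normalizing common aspect ratios to width:1 form; the ratios are from the text.
    ratios = {"16:9": 16 / 9, "4:3 (12:9)": 4 / 3, "1.85:1 flat": 1.85, "2.39:1 scope": 2.39}
    for name, value in ratios.items():
        print(f"{name:>12} -> {value:.2f}:1")
    # 16:9 comes out to 1.78:1, which is why "16:9" is the easier handle.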

Why should I care about 16:9?

As the world slowly and painfully switches over to digital broadcasting, it looks to be a 16:9 world we're all moving into. Although it's likely to take ten years or more before 16:9 receivers outnumber 4:3 receivers worldwide, and there will always be a huge legacy of 4:3 SDTV programs in the vaults, "premium" programming in the future will almost certainly be 16:9 material, in both "standard definition" and "high definition" forms.

4:3 program material won't be obsoleted by any means, but many forward-looking producers are composing and shooting for 16:9 to maintain as high a value as possible for all future distribution possibilities. Some are actually shooting 16:9, while others are practicing "shoot and protect" in 4:3, just by making sure that the material can be cropped to 16:9 without losing any important content from the top or bottom of the image.  

How do you get 16:9 pictures?

You can use the 16:9 switch on your camera (if it has one). Or, you can shoot and protect a 16:9 picture on 4:3. Or, you can use an anamorphic lens.

Many cameras have a 16:9 switch which, when activated, produces a "letterboxed" image, an anamorphically-stretched image, or both. But be careful; there's a right way and a wrong way to do this.

The "right way" is to use a 16:9 CCD. When in 4:3 mode, the camera ignores the "side panels" of the CCD, and reads a 4:3 image from the center portion of the chip. When in 16:9 mode, the entire chip is used. In either case, the same number of scanlines is used: 480 (525/59.94 DV) or 576 (625/50 DV). You can tell when a camera is capturing 16:9 the "right way" because when you throw the switch, whether the resultant image is letterboxed in the finder or squashed, a wider angle of view horizontally is shown, whereas the same vertical angle of view is present.

The "wrong way" is for the camera to simply chop off the top and bottom scanlines of the image to get the widescreen picture. When you throw the switch on these cameras, the horizontal angle of view doesn't change, but the image is cropped at the top and bottom compared to the 4:3 image (it may then be digitally stretched to fill the screen, but only 75% of the actual original scanlines are being used).

[There are some Philips switchable cameras that do clever tricks with subdivided pixels on the CCDs; when you flip into 16:9 mode, the image's angle of view will get wider horizontally and tighter vertically. So to really be sure, use the change -- or lack thereof -- in the horizontal angle of view to see if your camera is doing 16:9 "the right way".]

[Some Digital8 and DV cameras, like the PDX10, seem to split the difference: when in 16:9, the picture gets slightly cropped on top and bottom, and it gets a little wider! They seem to be using some extra chip area normally used for digital image stabilization to go wider, yet they don't have a wide enough CCD for true 16:9.]

The "wrong way" is wrong because the resultant image only uses 360 lines (525/59.94) or 432 lines (625/50) of the CCD instead of the entire 480 or 576. When this is displayed anamorphically on your monitor, the camera has digitally rescaled the lines to fit the entire raster, but 1/4 of the vertical resolution has been irretrievably lost, and the in-camera algorithms used to stretch the image often create ugly sampling artifacts. This is not too terrible for SDTV playback (still, it isn't great), but it's asking for disaster if the image is upconverted to HDTV or film (Soderburgh's "Full Frontal" is prime example of the perils of in-camera vertical stretch).

The bad news is that most inexpensive DV cameras (including the VX2000 and XL-1s) do 16:9 the wrong way.

[Note that there are two "wrong ways": the vertical-pixel-shift method used by Canon and Panasonic, which isn't quite as bad as I make it sound, and the field-doubled/interpolated method employed by Sony (I don't know what JVC does). The Canon/Panasonic method yields images softer than true 16:9, but cleaner and sharper than the Sony method. I discuss the differences in more detail a bit further on.]

16:9 chips were very costly and the yields (and demand) were low at the turn of the century; in late '98 Sony's DXC-D30WS 16:9-capable DSP camera (which, docked with the DSR-1 DVCAM deck, became the DXC-D130WS camcorder) was available only in limited quantities, and the Sony sales force was encouraged to steer folks to the non-widescreen D30 model unless they really needed widescreen, because the supplies were so limited. Even then, the WS model commanded a US$3000 premium over its 4:3-only sibling.

By 2005, things were a lot better. Canon's XL2 was the current entry-level true 16:9 camera in any of the DV formats. Many low-cost HDV and DVCPROHD cameras with true 16:9 sensors also record DV (and DVCAM or DVCPRO50) in 16:9 mode. At the low end, the single-chip CMOS Sony HDR-HC1 shoots 16:9 for under $2,000; at the high end, cameras like Canon's XL H1 (DV and HDV) and Panasonic's AG-HVX200 (DV, DVCPRO50, and DVCPROHD) shoot 16:9 for under $10,000.

An anamorphic lens is the way film folks have done widescreen for years. A cylindrical element squashes the image laterally, so that you get tall, skinny pictures like images in a fun-house mirror. This squashing allows the 16:9 image to fit in the 4:3 frame. Century Precision Optics has anamorphic adapters to fit the VX1000, DSR-200, VX2000, PD150, GL1, and similar camcorders, as does Optex (distributed in the USA by ZGC). Both allow you to use the wider half of the zoom range, and both run about US$800.

In the film theatre, or in the film print lab, a similar anamorphic lens unsquashes the image to yield the original widescreen image. In video, you use a DVE or an NLE plug-in filter to unsquash the image for letterboxed output, or you embed the appropriate codes into the data stream or video image (the codes differ in specification between different broadcast standards) to tell the receiver that the image should be displayed as widescreen. Most DV NLEs that support widescreen production, including Premiere 6.0, Final Cut Pro 1.2.5 and later, EditDV 2.0, and CineStream, insert this code when you specify a 16:9 aspect ratio.
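
If you're curious about the geometry, squeezing 16:9 into a 4:3 frame implies a 1.33x horizontal squeeze; a few lines of Python show that factor and the matching letterbox dimensions (standard DV line counts assumed):

    # The anamorphic squeeze factor, and the letterbox geometry for unsqueezed output.
    squeeze = (16 / 9) / (4 / 3)            # = 1.333...x horizontal compression
    print(f"squeeze factor: {squeeze:.3f}x")
    for lines in (480, 576):
        image = lines * 3 // 4              # image height of a 16:9 letterbox in 4:3
        band = (lines - image) // 2
        print(f"{lines}-line frame: {image} image lines, {band}-line bands top and bottom")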

Anamorphics come with their own problems; they tend to be on the soft side, and they're limited in the focal ranges and focal distances at which they give a satisfactory image. They effectively work as wide-angle lenses in the horizontal direction only; as a result, they tend to focus differently in the horizontal and vertical directions! Color fringing and general softening tend to be problems, too. Still, anamorphics can be worth the effort if you're willing to work within their limits, and their bokeh (the pattern of fuzziness of out-of-focus areas) and flare are very distinctive. Anamorphics have a different "look" than "flat" lenses, and sometimes that look is just what you want.

And if you don't have a true 16:9 camera and can't find an anamorphic lens? First, try using a 4:3 Canon or Panasonic camera; as explained below, they do a better-than-expected job in 16:9.

Otherwise, shoot and protect 16:9 on 4:3. Use the entire, non-widescreen 4:3 image, but protect your future revenue streams by ensuring that all important visual information is contained vertically in the central or upper 3/4 of the screen. That way you have the full-resolution 4:3 image for use today, and you can always upconvert to HDTV later in the 4:3 aspect ratio, or in the 16:9 aspect ratio if you can accept the reduced vertical resolution. Should you need to repurpose the material into a 16:9 SDTV format later, you can letterbox it in post by setting up a vertical shutter wipe, putting black bands at the top and bottom of the screen just like on MTV.

You're no worse off than with 16:9 material shot "the wrong way", but you have the freedom and flexibility of a full-resolution 4:3 image that's compatible with today's broadcast and non-broadcast standards.

Or are you? Since the "wrong way" digitally stretches the image prior to DV compression, the DV codec doesn't have to compress the "wasted" material at the top and bottom of the 4:3 image. Those central 360 (or 432) lines are spread out over the entire height of the picture, and all the DCT blocks are employed in compressing useful bits of the image. As a result, slightly more vertical resolution is preserved through the compression process when shooting the "wrong way" than with "shoot and protect". Ben Syverson has pix that show the difference.

Unfortunately, only the Canons and Panasonics look as good as Ben's pictures show. These cameras employ "pseudo-frame" resampling courtesy of vertical pixel shift, in the same way they get decent frame mode images. As a result, the images have more vertical resolution than purely field-based resampling provides, even if they aren't as good as using an anamorphic or a true 16:9 CCD.

Sonys do a much poorer job of fake 16:9; they look equivalent to performing the same resampling in a field-based NLE like Final Cut Pro, with an added and excessive vertical edge enhancement used in a losing battle to retain perceived sharpness.

I'd rank the quality of 16:9 images as follows, from best to worst:

  • True 16:9 cameras, like the Canon XL2, Sony DSR-500 series, and HDV and DVCPROHD camcorders.
  • 4:3 cameras with an anamorphic lens attachment (within limits).
  • Fake 16:9 from a Canon XL1 or GL1, or a Panasonic AG-EZ1, AJ-D200 series, or the like.
  • 4:3 cropped and stretched in post using an NLE.
  • Fake 16:9 shot on a 4:3 Sony.

Mind you, this ranking does not take into account the fundamental quality differences in the different camera heads and lenses. I'm only discussing the relative qualities of the different means of generating a 16:9 image in what's still largely a 4:3 world.

    Ben Syverson's Shooting in Widescreen DV is worth a look for more info.  

    Frame mode, slow shutters, and "the film look"

    What is this "frame mode" I hear so much about?

    Several cameras, including the Panasonic AG-EZ1 and AJ-D200/210/215 and the Canon XL1, XL1s, GL1, GL2, XM1, and XM2, have a "frame movie mode" or "frame mode" switch that changes the way the CCD is read out into buffer memory from interlaced to progressive scanning. This gives a 30 fps "film look" frame-based image instead of the 60 fps field-based image we normally see on TV.

    Each video frame shows up as an intact frame-based image in which both the even and odd fields have been captured at exactly the same time with no interlacing artifacts (of course, the data stream written to tape still interleaves the even and odd fields for proper interlaced TV display; it's just that both fields have been captured simultaneously instead of in even-odd alternation). When shown on TV, frame mode images have had their temporal resolution reduced by half to 30 fps, fairly close to film's 24 fps. For the 625/50 XL1s sold in PAL countries, the 25fps video frame rate will make for an even closer match.
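
    If a sketch helps, here's a toy Python illustration (using a made-up six-line "frame"; this is not camera firmware) of why frame-mode fields weave back together without artifacts:

        # Both capture modes write two interleaved fields to tape; the difference
        # is *when* each field was sampled. A tiny NumPy model:
        import numpy as np

        def split_into_fields(frame: np.ndarray):
            """Interleave a full frame into its even and odd fields, as the DV stream does."""
            return frame[0::2], frame[1::2]

        frame_t0 = np.arange(12).reshape(6, 2)   # toy 6-line "frame" captured at time t0
        even, odd = split_into_fields(frame_t0)

        # Frame mode: even and odd fields both sample time t0, so weaving them
        # back together reconstructs the original frame exactly.
        rewoven = np.empty_like(frame_t0)
        rewoven[0::2], rewoven[1::2] = even, odd
        assert (rewoven == frame_t0).all()

        # Interlaced mode would instead pair even(t0) with odd(t0 + 1/60 s), so
        # anything moving shows the familiar comb-tooth interlace artifacts.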

    This is useful for those looking for a more "film-like" motion rendering while staying in video. Independent documentary filmmaker Sam Burbank shoots most of his stuff for National Geographic in Frame Movie Mode on his Canon XL1, and reports that DigiBeta shooters see his footage in editing and say, "there you go, making us look bad by shooting film"!

    These cameras get their "proscan" images not by truly performing progressive readout on the chips, but rather by offsetting the green CCD's read timing by one scanline during readout -- vertical pixel-shift, if you will. In essence, an even field from the R & B CCDs is blended with an odd field from the G CCD, giving you a frame that has the scanlines for both fields captured at the same instant in time. This gives a definite improvement over mere field-doubled "frames", but it's not as sharp vertically as true proscan. Each "scanline" is actually composed of two scanlines from each chip, so there is some softening vertically; also, the effective chroma resolution is halved vertically. My Technical Difficulties article "Frames and Fields" goes into a lot more detail on the topic.

    Current Sonys, alas, do not do nearly as well. They have a true proscan mode, but only at half the normal frame rate (15 fps NTSC, 12.5 fps PAL). Setting the Sonys to slow shutter speeds appears to work, but only on the half-resolution LCD viewscreens; the recorded image is line-doubled, and quite noticeably "jaggy" and inferior.

    Of course, the Panasonic AG-DVX100 shoots both 30P and 24P in NTSC (25P only in PAL) with a true progressive CCD, as do the Canon XL2 and the Panasonic AG-HVX200.

    How do I get "film look" shooting with DV cameras?

    Buy a used Arriflex 16BL or CP GSMO, stencil "Canon XL1 DV camcorder" on the side, and shoot film!

    Seriously, though, the most important way to get a filmlike look is to shoot film style. Light your scenes; don't just go with whatever light is there. Use lockdowns or dolly shots, not zooms. Pan and tilt sparingly to avoid motion judder (i.e., if you're using the XL1's frame mode, you shouldn't compose any shot to call attention to the 30 fps motion rendering). If you're using a camera that allows it (most prosumer 3-chip camcorders and pro cameras do), back down the "detail" or "sharpness" control. Reduce chroma slightly. Lock the exposure; don't let it drift. Use wide apertures, selective focus, and "layered" lighting to separate subjects from the background. Pay attention to sound quality. In post, stick mostly to fades, cuts, and dissolves; avoid gimmicky wipes and DVE moves.

    The Panasonic AG-DVX100 and AJ-SDX900 and the Canon XL2 DV camcorders record a 24 fps image using 3-2 or 2-3-3-2 pulldown on regular DV tape; this gives you the same motion sampling as motion picture film, undeniably part of the "film look". Many higher-end professional Sony DVCAM camcorders acquired 24p modes as of 2005. Beyond that, you can use "frame mode" on the Canon XL1/XL1s/GL1/GL2/XM1/XM2, Panasonic AG-EZ1, or AJ-D215, or try 15 or 30 fps on the VX1000. On the Sony it's not the same as frame mode and has other problems, but it may pass as film's motion rendering for some purposes. In HD, the JVC GY-HD100, Panasonic AG-HVX200 and HDC27 Varicam, and the Sony HDW-F900 CineAlta, as well as Sony's XDCAM HD and GVG's Infinity camcorders, have 24p modes.
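
    For the curious, here's a little Python sketch of those two cadences; the field counts come straight from the pattern names, and the comments on frame pairing are my own illustration:

        # How 24 film frames map onto 60i fields under each pulldown cadence.
        # Each film frame is held for the listed number of consecutive fields.
        def cadence_fields(pattern, frames):
            out = []
            for i in range(frames):
                out += [f"F{i}"] * pattern[i % len(pattern)]
            return out

        print(cadence_fields([2, 3], 4))        # classic 3-2 (strictly, 2-3) pulldown
        # ['F0','F0','F1','F1','F1','F2','F2','F3','F3','F3']: 4 frames -> 10 fields.
        # Paired into video frames, *two* of the five mix different film frames.

        print(cadence_fields([2, 3, 3, 2], 4))  # 2-3-3-2 "advanced" pulldown
        # ['F0','F0','F1','F1','F1','F2','F2','F2','F3','F3']: same 10 fields, but
        # only *one* video frame mixes film frames, so an NLE can drop it and
        # recover clean 24p with no field-splitting gymnastics.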

    On higher-end cameras (DSR-300, DSR-130, AJ-D700, and the like), as well as some of the better prosumer camcorders, you may have setup files to adjust gamma, clipping, sharpness, color rendition, and white compression (knee); these can be exploited to give the camera a more filmlike transfer characteristic.

    Take the aperture correction (edge enhancement or sharpness setting), if available, and turn it down or off. This also makes a huge difference both in film transfer and in HDTV upconversion.

    Try out the Tiffen Pro-Mist filters. I like the Black Pro-Mist #1 or lower (fractional numbers). Jan Crittenden at Panasonic prefers the Warm Pro-Mist 1/2, while others prefer the Tiffen Glimmerglass series. These knock off a bit of high-frequency detail and add a bit of halation around highlights. Bonus: by fuzzing the light around bright, sharp transitions, these filters have the added effect of reducing hard-to-compress high-contrast edges, resulting in fewer "mosquito noise" artifacts.

    In post, there are a variety of filters or processes available to adjust the gamma and extend the red response; simulate 3-2 pulldown of 24fps imagery from 60i sources; add gate weave, dust and scratches, film fogging; and so on.

    The hot one as of mid-2002 is Magic Bullet, an After Effects plug-in also usable in many NLEs. It was developed internally at high-end post house The Orphanage, then packaged for the unwashed masses such as you and me. ToolFarm, which stocks a variety of useful postproduction applications, is the exclusive distributor. You can download both the manual and a free demo version: just like the local drug dealer says, "the first hit is free..." <grin>.

    In December 1999, Jeffrey Townsend of The Fancy Logo Company wrote me and said:

    ...and I have only one thought to add (so far):  In your section on getting a film look on video, you should consider referencing the DigiEffects product "CineLook," which was created as an Adobe After Effects plug-in, but works great with Final Cut Pro.  I almost didn't want to write this note, because I'd just as soon not have everybody know about this incredible cheat.  It's gonna kill the terrific black-box technology called FilmLook, because it's so capable, so flexible, and easily as successful in doing what it's supposed to do.

    I promise I don't work for DigiEffects.  I've just gotten going on a Final Cut Pro/Canon XL-1 based production studio, and just rendered three commercials with CineLook (after doing exactly what you describe, lighting as though it was film), and I swear it looks like something between superbly transferred 16mm and an ordinary transfer of 35mm.  And I'm still in my first week of playing with it!  I don't even know how to get the most out of it...

    I've never seen such an enthusiastic endorsement before, but it tracks other things I've heard about CineLook. They've got a companion product, CineMotion, for faking 3:2 pulldown. They've got packages for Mac, PC, and Unix systems. The DigiEffects stuff isn't cheap, but it would appear to be worth it (and no, I don't work for DigiEffects, either!).

    John Jackman writes that a company called BigFX makes a $500 FilmFX plug-in that's faster than CineLook and does a passable job.

    Keith Johnson of Xentrik Films & Software was planning on a plug-in called FliXen, but that project seems to have fallen by the wayside.

    Ned Nurk worked on a standalone processor for Windows called FilmRender (formerly FilmMunge), which batch-processed AVI files. It was by all accounts good, fast, and very affordable. Unfortunately some slimeball crackers hacked his authentication system and pirated it. As a result, Ned stopped development and sales, and the DV world lost a useful tool. Think about that the next time you decide to "borrow" some software!

    There are also proprietary processes such as "FilmLook" that, for a price of around $95/minute, make the video look so film-like that real film looks like video by comparison (joke. Well, at least a little).

    Andy Somers has more useful info at VideoLikeFilm, along with his own process "Feature Look".

    Of course, if you really wanted film, why didn't you shoot film? :-) 

    What do the slow shutter speeds do for me?

    The slow shutter speeds (those longer than 1/60 second) found on many DV cameras use the digital frame buffer of the camera in conjunction with a variable clock on the CCDs to accumulate more than a field's worth of light on the face of the chip before transferring the image to the buffer and thence to tape. This can do two things for you: more light integration, and slower frame update rates.

    More light integration means that you can get usable images in lower light than you might expect. I've shot sea turtles by moonlight at midnight at a 1/4 sec shutter speed; the images update slowly but are certainly recognizable, whereas the same scene at a normal 1/60 sec shutter looked like I had left the lens cap on.

    You can also use the long shutter times as a poor man's "clear scan" for recording computer monitors without flicker. As you increase the integration time on the CCD, the computer monitor goes through more complete cycles before the image is transferred, reducing recorded flicker; many computer images have little motion, so the slow update rate may not even be noticed. Be aware, however, that at least some cameras (the Sony VX1000 among them) appear to go into a strange field-doubling mode at shutter speeds slower than 1/60; vertical resolution is cut in half (while two clearly-interlaced fields are recorded on tape, as can be seen in an NLE, the field-mode flag is set in the DV datastream so that field-doubling is performed by the DV codec during playback to eliminate interfield flicker), so fine detail will be impaired. You'll need to judge this tradeoff on a case-by-case basis.
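
    If you want to estimate the flicker reduction, here's the arithmetic in Python; the 75 Hz refresh rate is just an example value:

        # How many complete monitor refresh cycles fit into one CCD integration
        # period. More cycles per exposure means less visible flicker banding.
        refresh_hz = 75.0                      # hypothetical computer-monitor refresh
        for shutter in (1/60, 1/30, 1/15, 1/4):
            cycles = shutter * refresh_hz
            print(f"1/{round(1/shutter)} s shutter: {cycles:.2f} refresh cycles per exposure")
        # At 1/60 you capture 1.25 cycles (a visible partial-refresh band); at 1/4
        # you average over 18.75 cycles, and the flicker all but disappears.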

    Slower frame update rates are good for two things: a poor man's "film look" at 30 fps or 15 fps, and special effects at slower rates. You can capture a strobing, strangely disturbing image at the lower rates... use it sparingly, of course; no sense in annoying your viewers.  

    Those funny free-spinning lens controls

    Why don't consumer DV cameras have "real" lenses with focus marks?

    "Real" lenses use helical grooves to rack focus; the resistance you feel when you focus such a lens is the natural friction of the rotating barrels sliding through the lightly-greased grooves.

    That smooth friction, alas, plays havoc with the autofocus systems that all consumer cameras must have (or so goes the conventional wisdom): strong, battery-draining motors are needed to spin such barrels, and they can't achieve the fast focus response that's so useful in optimizing autofocus algorithms.

    Thus autofocus lenses use lighter, more easily positioned internal focusing elements (which are also advantageous from an optical standpoint) with lighter, faster, more thrifty focus servos.

    The "focus ring" you manhandle isn't actually connected to the focusing mechanism. It's a free-spinning ring with an optical or electromagnetic sensor attached: when you spin the ring, a series of pulses is sent to the focus controller. The faster the pulse train, the faster the controller changes focus.

    However, it's not perfectly linear. If you turn the ring too slowly, nothing at all will happen, since the controller discards all pulses below a certain rate as random noise. If you spin it 1/4 turn very quickly, you'll get more of a focus shift than if you turn it 1/4 turn at a more moderate rate.
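
    Here's a toy model of such a controller in Python; the noise floor and gain curve are invented for illustration, not measured from any real camera:

        # A rate-sensitive focus controller: pulses below a noise floor are
        # discarded, and faster pulse trains produce disproportionately large moves.
        def focus_step(pulses_per_sec: float) -> float:
            NOISE_FLOOR = 5.0                    # below this, pulses are treated as noise
            if pulses_per_sec < NOISE_FLOOR:
                return 0.0
            return 0.01 * pulses_per_sec ** 1.5  # super-linear gain: fast spins go further

        for rate in (2, 10, 40, 160):
            print(f"{rate:>4} pulses/s -> focus moves {focus_step(rate):.2f} units/s")
        # The same 1/4 turn of the ring thus travels a different focus distance
        # depending on how fast you turned it.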

    As a result of all of this, there's no way for the focus ring to have focus marks -- nor is it possible for you to measure such marks yourself and be able to repeat them.

    The same argument applies to the zoom controls on some lenses, such as the 16:1 and 3:1 zooms on the Canon XL1 or the zoom rings on the Sony VX2000 and PD150.  

    How do I work with these lenses?

    Carefully, with patience and understanding. You can't set marks, or focus by scale. Slow, fine adjustments may do nothing. But with practice and perhaps some adjustment of operating style, most people can use, if not necessarily love, these lenses.

    On the XL-1, you'll get better zoom control and smoother operations using the zoom rocker on the handgrip than using the zoom ring on the lens. Some folks are taping over the zoom ring entirely and only using the rocker.

    I find the zoom rings on the VX2000, PD150, and DSR-250 to be superb, almost as good as a "real" zoom control. You still can't set marks with them, but they're good enough for slow ramps and smooth accelerations.

    Don't like it? Buy a real camera with a real lens, like the Sony DSR-300 (US$8,000 and up, with lens) or the Panasonic AG-DVC200 (US$6,000 or so) or the JVC GY-DV5000 (US$5,000 with lens). Hey, it's only money...  

    Image Stabilization

    What's EIS/DIS?

    Electronic Image Stabilization and Digital Image Stabilization are completely electronic means of correcting image shake. As the shaky image hits the CCD chip, these systems compensate by repositioning the active area of the chip (the location on the chip that the image is read from). If you've seen Rocky & Bullwinkle (a US cartoon involving a moose and a squirrel), think of Bullwinkle running back and forth with the bucket of water to catch Rocky after Rocky jumps from the high diving board (of course, Bullwinkle winds up in the water, but that's another story).

    The EIS/DIS controllers look for motion vectors in the image (typically a widespread displacement of the entire image) and then decide how to "reposition" the image area of the chip under the image to catch it in the same place. The actual repositioning is done in one of two ways: one is to enlarge (zoom) the image digitally, so that the full raster of the chip isn't used. The controller can then "pan and scan" within the full chip raster to catch the image as it moves about. The other is to use an oversize CCD, so that there are unused borders that the active area can be moved around in without first zooming the image.

    The zoom-style pan 'n' scanner can be detected quite simply: if the image zooms in a bit when EIS/DIS is turned on, then a zoom-style pan 'n' scanner is being used. Unfortunately, such methods reduce resolution, often unacceptably.
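
    For the mechanically minded, here's a bare-bones Python sketch of the oversize-CCD variety; the sensor and raster sizes are made up, and real firmware is far more sophisticated:

        # Slide a crop window against the measured shake, clamped at the sensor
        # borders (where real systems "run out of chip" and shake leaks through).
        import numpy as np

        SENSOR_H, SENSOR_W = 540, 760          # hypothetical oversize sensor
        OUT_H, OUT_W = 480, 720                # active (output) raster

        def stabilized_crop(sensor: np.ndarray, shake_dx: int, shake_dy: int) -> np.ndarray:
            """Re-address the active area opposite the detected shake vector."""
            y0 = (SENSOR_H - OUT_H) // 2 - shake_dy
            x0 = (SENSOR_W - OUT_W) // 2 - shake_dx
            y0 = max(0, min(SENSOR_H - OUT_H, y0))   # clamp at the chip's edges
            x0 = max(0, min(SENSOR_W - OUT_W, x0))
            return sensor[y0:y0 + OUT_H, x0:x0 + OUT_W]

        frame = np.random.rand(SENSOR_H, SENSOR_W)
        print(stabilized_crop(frame, shake_dx=12, shake_dy=-8).shape)  # (480, 720)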

    All EIS/DIS systems suffer from several problems. One is that, because the actual image is moving across the face of the chip, image shakes induce motion blur. Even though the position of an image may be perfectly stabilized, you can often notice a transient blurring of the image along the direction of the shake. Sometimes it's quite noticeable. To get around this, many EIS/DIS systems close down the shutter a bit to reduce blur. This reduces light gathering capability. You can't have everything, you know.

    Another problem is that the motion-vector approach to stabilization can be easily fooled. If the area of the image being scanned doesn't have any contrasty detail that the processor can lock onto, the stabilization can hunt, oscillate, or bounce. This looks like a mini-earthquake on the tape, and it can occur at the most annoying times.

    Also, the stabilization can work too well. Often when one starts a slow pan or tilt with EIS/DIS engaged, the system will see the start of the move as a shake, and compensate for it! Eventually, of course, the stabilizer "runs out of chip" and resets, and the image abruptly recenters itself.

    The big advantage of EIS/DIS is that it's cheap.  

    What's optical stabilization?

    Optical stabilization such as "SteadyShot" is descended from Juan de la Cierva's 1962 Dynalens design, a servo-controlled fluid prism used to steer the image before it hits the CCDs (in the '60s, of course, it steered images onto film or onto tubes!). In the late '80s and early '90s, Canon and Sony updated this technology for use in consumer gear, and it worked so well that Canon now offers a SteadyShot attachment for some of their pro/broadcast lenses.

    The fluid prism is constructed of a pair of glass plates surrounded by a bellows and filled with fluid so that the entire assembly has a refractive index comparable to a glass prism. The angle of the prism is changed by tilting the plates; one plate can be rotated vertically, moving the image up or down, and the other rotates horizontally, steering the picture right or left.

    Rotation rate sensors detect shake frequencies and tilt the front and back plates appropriately. Position sensors are also used so that in the absence of motion the prism naturally centers. The position sensors also detect when the prism is about to hit its limit stops, and reduce the corrections applied so that shake gradually enters the image instead of banging in as the prism hits its limits.
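
    For the optically curious: a thin prism of apex angle A and refractive index n deviates light by roughly (n - 1) x A, which a couple of lines of Python make concrete (the index value is just an example):

        # Small-angle ("thin prism") approximation of the fluid prism's steering.
        n = 1.5                                # example refractive index of the assembly
        for apex_deg in (0.1, 0.5, 1.0):       # how far the plates have been tilted
            deviation = (n - 1) * apex_deg     # deviation angle, in degrees
            print(f"plate tilt {apex_deg} deg: image steered by ~{deviation:.2f} deg")
        # Tiny plate tilts yield the fine, fast corrections needed to cancel shake.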

    Optical stabilization of this sort is expensive, tricky to manufacture and calibrate, and must be tuned to the lens. Adding a wide-angle or telephoto adapter to a SteadyShot lens screws up SteadyShot; the processor doesn't know about the changed angle of view (all it knows is the current zoom setting) and thus over- or under-compensates for shake.

    But for all that, it works brilliantly: because the image is stabilized on the face of the CCDs, there is no motion blur; because rate sensors are used, the system isn't fooled by motion in the scene or by a lack of detail; and because a physical system has to move to reposition the image, there are no instantaneous image bounces or resets as can happen with EIS/DIS.

    [It's interesting to note that on the XL-1, Canon added image motion-vector detection to the rate gyros on their optical stabilizer. As a result, the system seems to "stick" on slow pans and tilts just like an EIS/DIS system, although the recovery is more fluid and less jarring. On the other hand, it really does a superb job on handheld lockdowns.]  

    What about Steadicam/GlideCam?

    These mechanical stabilizers work by setting up the camera so that it has large rotational moments of inertia, but little reason to want to rotate: the camera is mounted on an arm or pole that's gimballed at its center of gravity or just above it. The gimbal mount is either handheld, or attached to an arm, often articulated and countersprung, mounted on a body bracket or vest. One steers the camera by light touches near the gimbal; otherwise it just tends to float along in whatever attitude it's already at. The trick is in getting it into an attitude that makes nice pictures, stopping it there, and then not disturbing it.

    These systems work very well, but require a lot of practice for best results. It's very easy to oversteer the camera, and off-level horizons are a trademark of suboptimal Steadicam skills. The handheld systems can also be surprisingly fatiguing to use for extended periods.

    I find that the Steadicam JR is also a bit wobbly; its plastic arms aren't especially rigid and the whole thing tends to vibrate a bit. Fortunately, the wiggles that get through the JR are neatly compensated for by SteadyShot in the VX1000, resulting in buttery-smooth moving camera shots (complete with off-level horizons!).  

    When do I use what kind of image stabilization?

    Try it; see if it works; if it helps, then use it.

    I tend to leave optical stabilization on most of the time. I'll turn it off when using the wide-angle adapter, or when using the camera on a tripod and needing to conserve power.

    If I'm planning to do any significant camera motion during a shot, and I don't have a wheelchair, dolly, car, airplane, or helicopter available (there's never a helicopter around when you need one...), I'll use the Steadicam JR. Depending on the roughness of the ride in the aforementioned conveyances, and space allowing, I'll use Steadicam there, too (Mikko Wilson writes that in general, you don't want to be using Steadicams in helicopters. He's right--you really should use Tyler mounts or Wescam-type rigs--but for shooting sideways out of a Schweizer 300C with a Handycam on no budget, it seems to work fairly well. Just remember: safety first--something that, by the evidence, many camera ops and camera pilots fail to remember).

    And don't forget that other, less glamorous form of stabilization: the tripod. Tripods work really, really well. Try one sometime, you'll like what it does for your image!

    Copyright (c) 1998-2008 by Adam J. Wilt.
    You are granted a nonexclusive right to reprint, link to, or frame this material for educational purposes, as
    long as all authorship, ownership and copyright information is preserved and a link to this site is retained.
    Copying this material for a publicly-accessible website (instead of linking to it) is expressly forbidden except
    for archival purposes: this page is dynamic and will change as time goes by, and experience has proven that sites
    copying my material do not keep their copies up-to-date, thereby doing their visitors, themselves, and me a disservice.



    Last updated 2008.08.01