> the DV, DVCAM & DVCPRO Formats copyright © 1998-2007 Adam J. Wilt  
DV FAQ - editing
What's new:
2007.06.29 - minor updating (major rewrite required, however; list of outdated material)

Didn't find what you wanted here? Try the other pages listed below...

DV - contents & links
  Detailed listing of this site's DV contents, and links to other sites.

DV Technical Details
  The DV Formats Tabulated; standards documents & where to get them.

DV FAQ - technical
  DV formats, sampling, compression, audio, & 1394/FireWire/i.LINK.
you are here >
DV FAQ - editing
  linear & nonlinear; hard & soft codecs; transcoding; dual-stream NLE.

DV FAQ - etc.
  16:9; film-style; frame mode; slow shutters; image stabilization, etc.

DV Pix
  DV sampling, artifacts, tape dropout, generation loss, codecs.

Video Tidbits
  Tips & tricks, mostly DV-related; getting good CG.

DV & Timecode

Does all DV have timecode, or just DVCAM and DVCPRO?

All DV formats have timecode.

Let's say it again, just so we're clear: all DV formats have timecode.

Each and every frame of a DV25 tape has a timecode value encoded alongside its video data. The timecode in 525/59.94 "NTSC" DV is drop-frame (DF) by default, while the TC for 625/50 "PAL" DV is of course non-drop frame (NDF).  It's part of the spec, and it's not an optional part, as far as I can tell.

Some 525/59.94 DV cameras and VTRs let you select either DF or NDF TC in the menus. Some DVCAM and DVCPRO gear lets you set the timecode generator with any of the options you're used to in broadcast gear, like free run vs. regen, preset values, and full control of the user bits. Most DV cameras and VTRs don't give you that level of control; they start each new tape at 00:00:00:00, and regen or jam-sync TC on every succeeding recording so that (when all goes well) a tape will have continuous timecode running throughout. But even so, it's real, pukka, honest-to-goodness timecode.
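If you're curious how the drop-frame bookkeeping works, here's a quick Python sketch (my own illustration, not anything from the DV spec): DF timecode skips frame labels 00 and 01 at the start of every minute, except every tenth minute, so the displayed time stays within about a frame of real elapsed time at 29.97 fps.

```python
def frames_to_df_timecode(frame_count):
    """Convert a 29.97 fps frame count to drop-frame timecode (HH:MM:SS;FF).

    Drop-frame drops two frame *labels* (00 and 01) at the start of each
    minute, except minutes divisible by 10; no actual frames are dropped.
    """
    drop = 2
    frames_per_10min = 30 * 600 - 9 * drop   # 17982 actual frames per 10 min
    frames_per_min = 30 * 60 - drop          # 1798 actual frames per drop minute

    tens, rem = divmod(frame_count, frames_per_10min)
    f = frame_count + drop * 9 * tens        # add back labels dropped in full 10-min blocks
    if rem >= drop:
        f += drop * ((rem - drop) // frames_per_min)

    ff = f % 30
    ss = (f // 30) % 60
    mm = (f // 1800) % 60
    hh = (f // 108000) % 24
    return f"{hh:02d}:{mm:02d}:{ss:02d};{ff:02d}"

# One full minute of actual frames in, the label count has skipped ;00 and ;01:
print(frames_to_df_timecode(1800))   # 00:01:00;02
```

The semicolon before the frame count is the conventional way of flagging drop-frame TC on displays.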

Most nonlinear editors above the iMovie level read and store DV timecode when they ingest DV media. If you need to recapture your clips, and you logged your clips to begin with, you can recapture frame-accurately.

Frame-accurate playback and recording (i.e., insert-edits onto a pre-existing tape) require both a VTR with frame-accurate insert-edit capabilities and an editing program that can control it, typically using RS-422 control. For the most part, this means a DVCAM or DVCPRO VTR, and such VTRs won't insert-edit on DV tapes.

Perhaps this is the reason that some people insist that "DV doesn't have real timecode"—they are confusing frame-accurate timecode with frame-accurate insert-editing capability. But we know better, don't we? 

Is DV timecode the same as SMPTE timecode?

No, technically speaking; yes, for most practical purposes (!).

There's a great deal of confusion about timecode. People mix up two different aspects of it: how it's recorded on tape, and how it's used in editing. The first aspect is where "SMPTE" vs. "RCTC" (Hi8) vs. "DV TC" vs. "Frame Code" (series 7 U-Matics) vs. "CTL Time Code" (JVC S-VHS) matters; it's largely a concern for historical reasons. The second aspect is what really matters: how your editor sees timecode. Nowadays, for the most part, how it's recorded on tape is irrelevant for this discussion.

Back in the dark, early days of linear editing with analog formats (the 1970s!), frame accuracy was not possible. Some clever folks came up with the idea of recording a unique code on every frame, so that edit controllers could repeatably reference an exact frame on tape. That developed into two timecode recording formats -- LTC and VITC -- which were formally standardized by SMPTE and the EBU and adopted by manufacturers worldwide. The SMPTE/EBU timecode standards define where the timecode is recorded on tape, what amplitude the signal is, the encoding of the digital data, and so on. They also define the time format of the timecode (HH:MM:SS:FF, two digits each of hours, minutes, seconds, and frames) and the format of "user bits", a separate set of hexadecimal digits whose usage was left up to the individual user.

LTC ("litsee") is Longitudinal Time Code, a 1 volt square wave laid down either on a linear audio channel or on a dedicated timecode track. It is comparatively simple to build LTC into a VTR or to retrofit it to a VTR without timecode, as it's technically simple and requires no mucking about with the video signal itself. However, it's difficult to read during some off-speed tape motions (as when shuttling or scanning the tape) and impossible to read when the tape is paused.

VITC ("vitsee"), Vertical Interval Time Code, is a series of black and white pulses encoded into one or two lines of the vertical interval of the video signal itself. VITC can be read even in pause mode (as long as the VITC line in the video signal is readable), but it requires the rotating video heads to scan the tape, which isn't always possible during high-speed searches or shuttles. It's also more complex to implement, since you need to switch it into the video signal.

Back when proprietary multipin control cables were used to control VTRs, it was important to know that "SMPTE timecode" was used, since you had to use an external box to extract the timecode from the LTC track or the VITC line, and adherence to the standard way of recording the timecode on tape was necessary to guarantee recovery of the signal.

In the past decade or so, however, most editing systems and most VTRs have been moving to standardized serial control protocols, such as RS-422 or LANC (actually, RS-422 is a wiring and signal spec, not a protocol per se, but most of the "RS-422" gear out there speaks the same language derived roughly from the original Sony BVU-800 control protocol, with minor variations between different machines). In such systems, timecode data flow across the same wires as the control data; it's up to the VTR to read timecode however it's written on the tape and turn it into a simple serial communications byte stream.

Furthermore, the SMPTE-spec timecodes aren't ideally suited to newer generation tape formats. LTC needs a fast-moving tape for proper data recording and recovery, and these formats just don't move the tape fast enough. VITC requires that the vertical interval be recorded; many digital formats including DV don't actually record anything other than the active picture area. Also, these formats already have digital data sectors on tape; why convert digital timecode to analog waveforms when you can record it as digital data to begin with?

Thus we have professional Hi8 with "Hi8 Timecode (but not really SMPTE timecode)" and consumer Hi8 with RCTC: "Rewritable Consumer Time Code". These are recorded as digital data in the subcode section of a Hi8 track. But an edit controller doesn't care; when it asks for timecode, it gets back something of the form "HH:MM:SS:FF", never you mind how it was recorded on tape!

Likewise, the DV formats do digital magic to store timecode, but when an edit controller asks for it, it gets the same data over the wire that it would from a Hi8 VTR -- or a 1" Type C VTR, or Digital Betacam, or 3/4", or whatever.

Adding to the confusion is the "SMPTE TC" (LTC) option for the EVO-9800 and 9850 Hi8 decks: This board takes the digital Hi8 TC or RCTC data and formats it into a 1 volt square wave signal as if it were coming off of an analog LTC timecode track. This allows the 9800/9850 to be used with edit controllers that don't understand serial timecode, but do their own recovery of it from the SMPTE LTC signal.

Are we having fun yet? High-end DVCAM and DVCPROHD (as well as other digital format) decks have this option built-in, allowing these VTRs to be used with editors that are expecting a noisy, distorted analog timecode and don't want nice, clean, serialized timecode data handed to them on a silver platter... really, though, I shouldn't be so snippy: while it sounds goofy from a technical standpoint, it provides backwards compatibility with a large installed base of very expensive editors, as well as a whole host of ancillary equipment that generates or takes in the SMPTE LTC signal. There are still occasions where having that LTC signal available on a BNC connector can be helpful, or downright necessary, such as when you're recording timecode that tracks the current time of day, or when you're jam-syncing timecode on one camera to match the timecode from another camera.

The bottom line is this: don't worry about whether or not the timecode recorded on tape is "SMPTE" or not. What matters is whether or not you have timecode, period (and DV does have timecode). Any modern-day edit controller should be able to use the timecode available over a serial protocol connection.

If you need SMPTE LTC or VITC I/O for other equipment (e.g., for a chase-lock audio synchronizer, an under-monitor display, jam-syncing of timecode from a common reference, or recording of timecoded feeds from other sources), some DVCAM and DVCPRO decks offer "SMPTE timecode" LTC I/O. Some will even record the incoming VITC as DV timecode, and re-synthesize VITC on playback.

Non-linear editing

What's non-linear editing?

Non-linear editing (NLE) is editing using random-access video storage, so that you don't have to wait for tape to shuttle to see a scene at the other end of the reel. Nowadays, this almost always means computer-based editing where you've transferred the video from tape to hard disk, and you assemble a show by arranging the clips along a timeline on the computer screen. When you're done, you output to tape, which happens either immediately (if you've spent a lot of money on gear) or after a rendering operation (if you've spent less money).

The "big names" in hardware-assisted NLE are Avid (Media Composers of various flavors, models, qualities, and capabilities), Quantel (Harry, Henry, Harriet, EditBox, iQ, etc.), Apple (Final Cut Studio), and half a dozen more up-and-coming, hanging-in-there, and/or where-are-they-now companies. They typically supply turn-key systems in the $15,000 to $150,000+ range, even though some are built using open platforms such as MacOS, Windows 2000, Pinnacle Targa cards and Matrox DigiSuite cards, and the like. Sony and Panasonic each have two DV-native NLEs.

On the PC and Mac, at Prices For The Rest Of Us, the familiar names are Adobe Premiere, Apple Final Cut Pro, Canopus Edius, ULead Media Studio, Sony Vegas Video, and the like. These are software packages that work with (and are often bundled with) a variety of plug-in cards, including those from Canopus, Matrox, AJA, Blackmagic Design, and others.

What's special about DV non-linear editing?

DV is compressed just enough to be able to stream into and out of garden-variety PCs and Macs, and the availability of inexpensive 1394 I/O cards and fast hard disks means that high quality video storage and manipulation on desktop computers is possible without having to spend a king's ransom on specialized RAID arrays and proprietary codecs.

DV can be stored and manipulated in native form, without transcoding to JPEG, MPEG, Wavelets, or the like. The same high quality seen on DV tape is maintained in the computer.

You can put together a DV editing system with 90 minutes of online storage for under US$1000, and have a workable system that produces broadcast-quality output. If you already have an appropriate PC, you can get into DV editing for under $50 (a 1394 card with some editing software included). If you have a recent Mac, it already comes with 1394 and a copy of iMovie.

Of course, you can spend a lot more, adding onscreen, full-resolution scrubbing; more storage; better machine control and the like. But the high video quality is there from the start, even in the sub-$1000 system. The advent of DV was a watershed moment in the evolution of affordable desktop editing.

Who makes non-linear editing stuff for DV? What gear is available?

The answer to these is changing almost on a daily basis; most of the specifics listed below are hopelessly out of date. These are exciting times.

Software-based: A variety of software systems are available for PCs and Macs, starting under US$100 (!) for the board and editing software, and ranging up to US$1700+. These span the range from various consumer-oriented packages bundled with 1394 cards (or included with Windows and Mac OS) to Adobe Premiere Pro, Apple Final Cut Pro, Canopus Edius, Sony Vegas, and Avid's Xpress DV.

These systems input and output DV using an IEEE-1394 connection, although if you have other formats and a DV VTR, you can first re-record the video on the DV VTR and then bring it into the system (most recent DV and DVCAM decks and camcorders allow real-time composite or Y/C transcoding to DV without first recording the image on tape). By the same token, you can output to analog video using the DV VTR as a digital-to-analog converter.

You can also get standalone analog-to-digital converters like the Canopus ADVC-110, Miglia Director's Cut, and Datavideo DAC-100.  They contain a hardware DV codec and convert between analog composite or Y/C (S-video) signals and a compressed DV data stream on 1394. With these boxes, real-time transcoding of DV to/from analog can be added to a "soft codec" system, making such systems more viable for use with analog sources, and bringing them closer to "hard codec" systems (below) in convenience.

Higher-end transcoders provide component YUV and/or 601 digital (SDI) connections. Laird Telemedia, Edirol, Convergent Designs, and Canopus (among others) make such boxes.

Hybrid: Hybrid systems, combining both software and hardware processing, run around US$1000 and up. These typically allow the use of analog composite or Y/C I/O with real-time transcoding to and from DV; DV is the native format used on-disk. They also may include real-time hardware acceleration of some video effects and filters. A hardware codec is used for analog capture and playback, and (when available) for real-time playback of unrendered effects to 1394. Software codecs are used to fetch the DV data from disk and decompress it for processing. Video processing may occur in software, hardware, or both.

The software supplied may be Premiere, Final Cut Pro, Edius, or something else. A separate, manufacturer-supplied capture/playback application can also be used with most products in this category.

(I discuss software and hardware codecs later in this FAQ.)

High end "heavy iron" [outdated; the big names remain Avid and Quantel; there's no money in making DV-specific high-end systems these days]: Pinnacle's liquid blue (formerly FAST blue) system provides "any format in, any format out" editing for US$60,000 or so. The captured video stays in its native form on disk (DV, M-JPEG, BetaSX, DV50, ITU-R-601, etc.; analog formats are transcoded to a digital format when captured) and is only transcoded when necessary to do effects between streams in different formats or when outputting to a different format.

Liquid blue has its own capture and editing application, developed with the experience gained from FAST's Video Machine line of products and incorporating user feedback. It's quite impressive, but perhaps a bit more expensive than readers of this FAQ are willing to put up with. :-) Pinnacle's edition DV package provides the same interface (derived from FAST's Studio software) with software-based rendering and a 1394 card for around US$700.

Both Sony and Panasonic have Windows-based, turnkey systems for DVCAM (ES-3, using the former FAST Studio software; ES-7)  and DVCPRO (DV Edit, NewsByte) respectively, that exploit the added features of these higher-end DV formats such as 4x transfer and editing metadata (in DVCAM, "ClipLink" good/no-good shot markers and clip picons provided by high-end DVCAM camcorders; in DVCPRO similar data are stored as "Picture Link" information). Prices start around US$25,000 (no real-time 3D effects, no 4x transfer) and go up from there.

What about "dual-stream" and "real-time" DV editing?

Dual-stream systems are those that can process two (or more) streams of video and audio at the same time. Typically, these will use software decoding of the compressed video, and software and/or hardware processing of effects, transitions, and filters.

Poke around on DVLine or The Electronic Mailbox or similar sites to see what's currently shipping.

The whole idea behind dual-stream systems is to eliminate the rendering bottleneck every time you add a filter to a clip, attempt a transition between two clips, or overlay titles or graphics on a clip. With today's fast CPUs, it's simple enough to decode two simultaneous DV datastreams and feed them to a mixer chip (or a GPU, or compositing software), in effect an on-board SEG or DVE that can perform the desired transition. The resulting video can thus be displayed (or sent out as analog or digital uncompressed signals) in real time.

With enough processing power, or with specialized chipsets, you can also perform real-time color-correction or superimposition of still or moving graphics on the video.

But there's a catch or two -- notice that I said "with enough processing power." The devil is always in the details with these sorts of things...

One is that "dual" stream means two source streams; there's nothing said about recompressing the resulting video program and storing it back on disk! The finished video comes out of the system and must be recorded on an external VTR; you'd need a third codec to recompress the finished program. Sometimes that's available (Canopus DVStorm, Matrox RT.X100), but sometimes not (most older, low-cost hybrid systems).

Also, most of the low-end "real time" boards only have the necessary hardware to decode and mix two video streams and a graphic layer or two, nothing more. If you want to color-correct a scene, add a dissolve and a picture-in-picture, add an additional superimposed graphic, or the like, you may be back to rendering... in fact, even the high-end systems like the DigiSuite DTV have limits on how many simultaneous things they can do; if you add enough layers, filters, effects, and supers, you'll exceed the system's capacity for real-time performance. It pays to look very carefully at the spec sheets for any of the "real time" boards to see how much complexity they really handle in "real time".

Fortunately, though, even the simpler systems can use the dual-stream horsepower to accelerate rendering of effects and filters, meaning that even when they can't perform in real time they will greatly speed the creation of the final clip.

Exploiting real-time boardsets requires an NLE application that integrates with the hardware and knows how to drive it; you usually can't just add a dual-stream card to an existing system. There are real-time versions of Premiere, Final Cut Pro, Speed Razor, and other programs; Premiere and FCP can usually accommodate an add-on card using the supplied drivers, but many other NLEs (and older versions of Premiere prior to 6.0) are supplied preconfigured for a specific hardware setup.

One other gotcha: streaming multiple separate streams of DV video off of disk, and recording back in real time, may require a RAID array. Even if your existing disk subsystem has the raw bandwidth for two DV streams, remember that the two streams might be stored on different parts of the disk. Most modern ATA-100 and ATA-66 disks can handle two streams of DV, but start having a problem if you add much more. Upgrading a single-stream DV system to dual-stream capability may require switching to (comparatively) expensive RAID arrays with all the cost and potential complications they entail; at the very least, you may want to add a separate disk for real-time recording of the NLE's rendered results. While this is not a huge problem, bear in mind that the cost to add a RAID may easily outweigh the cost of the dual-stream board itself.
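As a rough sanity check on that last point, here's a back-of-the-envelope bandwidth budget in Python. The 0.5 seek-derating factor is my own assumption for how much sequential throughput a drive loses when it has to jump between streams stored on different parts of the disk, not a measured figure:

```python
DV25_RATE = 3.6  # MB/sec per DV25 stream, near enough for budgeting

def disk_bandwidth_needed(source_streams, record_streams=1, seek_derate=0.5):
    """Sustained MB/sec a disk subsystem should deliver for an NLE.

    seek_derate is a hypothetical safety factor: 0.5 assumes you only get
    half the drive's sequential rate once the heads must seek between
    streams scattered across the platters.
    """
    raw = (source_streams + record_streams) * DV25_RATE
    return raw / (1.0 - seek_derate)

# Two source streams plus real-time recording of the mixed result:
print(disk_bandwidth_needed(2))   # 21.6 MB/sec sustained
```

Even with generous assumptions, two sources plus a render stream wants sustained throughput that a single busy drive of the era couldn't always promise, which is why the RAID (or at least a second disk) enters the picture.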

DV Magazine often reviews real-time systems, and you may be able to find these reviews online.

There's another kind of "real time" capability advertised; it's best seen in Final Cut Pro 3.0+, Avid Xpress DV 3.5+, or Adobe Premiere 6.5+. These NLEs provide real time preview of many operations on the computer screen only: you can see the results of a dissolve, a color correction, or many other effects and transitions on the computer's display, but your 1394 video feed will not show them without rendering. This sort of "real time" won't help you render and output the finished show, but it does greatly speed the creative part of editing: immediately seeing the results of your actions and the rhythm and flow of your transitions.

Sony's Vegas has its own "real time" capability: it previews on the computer screen and out 1394 in real time, but it will drop frames as necessary to maintain real time. You still have to render to see full-speed, full-quality effects, but the dropped-frame approach lets you see effects of arbitrary complexity output as video, and with the same pacing as the final product. It's a clever compromise.

Can I build a PC- or Mac-based NLE system myself?

Yes. If you don't mind opening the computer case and fiddling with the innards, you can buy one of the low-end or mid-range board sets and do it yourself. But be warned, it's often not a trivial task. Careful attention to detail and optimization of system configurations and drivers are often required. Also be prepared to download the latest drivers from the Internet; often you'll need new video card drivers as well as newer drivers for the brand-new 1394 board you have just purchased.

Part of the joy of an "open systems" approach to building an editing system is that the list of possible conflicts and incompatibilities between different components of the system is huge and mutable. Scan the vendors' websites for lists of known good and/or known incompatible combinations of chipsets, hard disks, SCSI controllers, and the like. If you're still in doubt, ask your local VAR (Value Added Reseller, the fellow you're going to buy the stuff from) about whether the stuff you're considering will all work together, or call the vendors directly and ask 'em if their board will work with your computer. One good tactic, if you're starting from scratch, is to settle on the DV card and software first, then buy a computer and the other components known to work with it.

Better yet, if you're a video producer and not especially interested in fiddling with the innards of PCs and Macs, have your VAR build a system to your specifications. Let them fight IRQ limitations and driver-incompatibility hassles -- and be willing to pay for it. If time is money for you, think about how much time it would take to resolve these hassles yourself. It took me the better part of three days to get my DPS Spark installed, working, and stable enough for my satisfaction, since Windows decided to reshuffle interrupts every time I rebooted, and I had an old Matrox Millennium driver that hogged the PCI bus. During that time I was only half as productive as normal: what's 1.5 days of your time worth? More recently, I built a dual-processor PIII box from scratch for EditDV 2.0 and Matrox DigiSuite DTV testing, and had four days of "fun" chasing an intermittent, flaky sound card problem.

There are plenty of capable vendors listed in the links section. On the Mac side, ProMax does a great job of putting systems together and supporting them. On the PC side, I'm partial to DVLine for their excellent website chock-full of tech details, their broad range of solid system configurations, knowledgeable staff, and very attractive pricing. I've sent a few clients their way and those clients have been happy. I'm not saying the other guys aren't as good, mind you, I've just had good results with ProMax and DVLine.

On the other hand, if you're a certifiable lunatic like me, just have at it! Just realize that it's still a "Plug and Pray" world inside that PC's case, and no, it's not an evil conspiracy against you when it doesn't work the first time. That's just the state of the art on the bleeding edge of desktop video technology...

Where do I learn more details about specific NLE configurations and products?

[Note: almost entirely outdated; here for hysterical reference only. The specialized Golden List / Silver List / 1394 card matrix are long gone; 1394 is a plug-and-play commodity product, if not already built into your computer]

Mac-based stuff: Check out The Golden List, put together by Ross Jones and posted online at the "Desktop Video Guide". Ross' list contains links to all the major players on the Mac platform, with links to the manufacturers as well as a variety of more general-interest reviews, FAQs, and other DV-related info. Ross keeps his list up-to-date; that's a full-time job in itself and some of us suspect that he has no other real job...

PC-based stuff: Richard Lawler maintained The Silver List with all the PC-related information through September 2001.

Pat Leong's Matrix of 1394 NLE cards is a list of PC and Mac NLE boards, complete with comparisons of capabilities. Very comprehensive but check on the last update (date at bottom of page).

Also look at "The FireWire Page". These folks discuss hard vs. soft codecs in some detail and list the currently-available stuff. They should know what's available; they sell it!

Lurk on the DV-L mailing list. Follow all the likely-looking links at DVCentral. Talk to other folks with NLEs. Talk to your VAR. Talk to vendors. Cross your fingers. And welcome to the bleeding edge!

How much DV fits in a Gigabyte?

First, how big is a Gigabyte? Is it a decimal-derived 1,000,000,000 bytes, as some disk-drive vendors say? Or is it a binary-derived 1,073,741,824 bytes (1024x1024x1024), as the geeks say? Strictly speaking, the latter quantity is called a Gibibyte, a "binary Megabyte" is a Mebibyte (1,048,576 bytes), and a "binary kilobyte" is a kibibyte (1024 bytes), abbreviated GiB, MiB, and KiB respectively. Thus there is about a 5% difference between Megabytes and Mebibytes, and a 7.4% difference between Gigabytes and Gibibytes.

This terminology is not yet widely used: most folks still use Giga- and Mega- and Kilo- regardless of whether they're using decimal or binary numbers. You usually have to look very closely at the spec sheets to determine how the numbers are derived. Sometimes "hybrid" numbers are used: some storage specifiers call a Megabyte one thousand 1024-byte blocks!

A raw DV stream (such as captured by iMovie, or FASTstudio DV before it became Pinnacle edition DV) takes about 3.6 Megabytes/sec; 1 Gigabyte of storage holds about 4 minutes 37 seconds of DV.

Measured in binary units, raw DV runs about 3.43 Mebibytes per second, and a Gibibyte holds about 4 minutes 58 seconds.
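Those figures fall straight out of the per-frame sizes in the DV spec (120,000 bytes per 525/59.94 frame, 144,000 bytes per 625/50 frame). Here's the arithmetic as a quick Python sketch:

```python
MiB, GiB = 1024**2, 1024**3

NTSC_BYTES_PER_FRAME = 120_000           # 525/59.94 DV frame
PAL_BYTES_PER_FRAME = 144_000            # 625/50 DV frame

ntsc_rate = NTSC_BYTES_PER_FRAME * 30_000 / 1_001   # bytes/sec at 29.97 fps
pal_rate = PAL_BYTES_PER_FRAME * 25                 # bytes/sec at 25 fps

print(f"NTSC: {ntsc_rate / 1e6:.2f} MB/s = {ntsc_rate / MiB:.2f} MiB/s")
print(f"PAL:  {pal_rate / 1e6:.2f} MB/s = {pal_rate / MiB:.2f} MiB/s")
print(f"1 GB  holds {1e9 / pal_rate / 60:.2f} min of PAL DV")   # ~4.63 min
print(f"1 GiB holds {GiB / pal_rate / 60:.2f} min of PAL DV")   # ~4.97 min
```

Both formats land at the same 3.6 MB/sec (3.43 MiB/sec) raw rate, which is why the minutes-per-Gigabyte figures don't depend on which standard you shoot.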

Are we having fun yet? But wait, there's more...

The actual space taken by DV on disk depends on the recording format (raw DV, AVI, QuickTime) and the length of the clips: AVI and QuickTime add headers and "wrappers" to the raw DV data, and for very short clips these headers and wrappers can take up a fair amount of disk space, because they are very large relative to the small amount of DV data contained therein.

I ran some tests using Final Cut Pro 3.02 on Mac OS X 10.2, and got the following results for both captured clips and rendered clips, sorted by size:

Raw size, bytes   MiB/sec   MiB/min   min/10GiB
      229913398     3.654    219.26        46.7
      229584422     3.649    218.95        46.8
      227583669     3.617    217.04        47.2
      227541621     3.617    217.00        47.2
       40285398     3.842    230.51        44.4
       39930822     3.808    228.49        44.8
       37933269     3.618    217.06        47.2
       37926453     3.617    217.02        47.2
        6296358     6.005    360.28        28.4
        5793174     5.525    331.49        30.9
        3796197     3.620    217.22        47.1
        3795749     3.620    217.19        47.1
        2647750    63.127   3787.64         2.7
        2240262    64.030   3841.82         2.7
         155165     3.699    221.97        46.1
         129893     3.713    222.75        46.0
For reference, I also wrote out raw DV streams; without "wrappers", these scale linearly with clip length:
PALrender1frame.dv 144000 3.433 205.99 49.7
NTSCrender1frame.dv 120000 3.430 205.79 49.8

These numbers, mind you, are computed for Mebibytes and Gibibytes! You probably won't get disks rated in binary units, so use this table more to see the variability in storage requirements with varying clip sizes and captures vs. renders.

Shorter clips are less space-efficient, since they still have to carry the full weight of the Quicktime header, and captures are less efficient than renders because QT can't optimize the data storage as well on the fly. For single frame captures, QT wrapper data comprise the bulk of the file, and you won't get too many on a disk! Even 1-second captures waste a lot of space, though renders are fine.

As the clip lengths get long enough that DV data comprise the bulk of the file, both NTSC and PAL DV converge to around 3.62 MiB/sec including the QT wrapper and header.

I normally plan on getting about 45 minutes per 10 Gibibytes of drive space; figure on about 42 minutes for 10 Gigabytes. More practically, plan on 4 minutes per Gigabyte. This gives you a little extra room, and you probably won't be caught short.

100 GB drives are now common and cheap. You can stick four such drives in most systems without any special efforts. Do the math: that's lots of DV storage!

Why is there a 2 Gig limit? How can I avoid it?

As of 2002 most of the 2 Gig limit problems have faded into the past; most current NLEs happily capture and output to DV streams, QuickTime files, or DirectShow AVIs as big as you like. But there are some operations (like exporting long clips to some web-compression programs) that can still run afoul of it, so knowing about where it comes from can be useful. 

Operating systems such as Windows have maximum sizes they allow for a "logical drive". For example, Windows 95 running the FAT16 file system (or Windows 3.1, or MS-DOS) can't access more than 2 Gigabytes on a drive. That's why you wind up partitioning that nice 9 Gig drive into five "logical drives": four 2 Gig partitions and one stubby little 468 Meg drive (the "9 Gigs" is specified using 1 billion [1000 x 1000 x 1000] bytes per Gig, whereas the logical drive sizes seen under Windows use 1024 x 1024 x 1024 bytes per Gig -- about a 7% difference in the resultant numbers).

If you have Windows95 OSR 2 or Windows98, you can format the drive with a FAT32 file system and store files up to 4 GB in size. With Windows NT or Windows 2000, you can use NTFS and store huge files (up to 2 Terabytes?).

MacOS Systems 7.5 and higher also have much larger maximum partition sizes than 2 Gigs; 4 Gigs starting with OS 7.5, and 2 Terabytes starting with OS 7.5.2.

The file format used for the stored video can also have a 2 Gig limit. On PC systems, the most common format is AVI (Audio/Video Interleave). The older, Video for Windows (VfW) AVI was limited to 2 Gigs. Older QuickTime files (Mac or PC) were also limited to a 2 Gig maximum file size, even if the disks the files are stored on can be bigger; QT 4.0+ can address larger media files, but application support (even as of 2002) hasn't always caught up yet.

There are tricks to get around these limits. Some involve using specialized codecs that use indirection (the AVI or QuickTime file stores pointers to other files instead of raw data, the "reference movie" approach). Final Cut Pro 1.0 used QuickTime reference files for seamless capture and playback without concern for the 2 Gig limit (later versions of FCP store much larger files as a single piece but still allow for capture as multiple referenced files).

NLEs like Matrox's DigiSuite lineup use the OpenDML / DirectShow extensions to AVI, allowing files without a 2 Gig limit.

Panasonic's DV Edit and NewsByte editors don't have any 2 GByte limitation. I don't know about Sony's ES-3 and ES-7. 

Also be aware that some networking protocols have a 2 Gig limit. AFP 2.0 (the Apple File Protocol used in "Classic" Mac OSes up through 9.2.2, the open-source Netatalk package) is limited to 2 Gig or smaller files, as is DAVE 2.X (SMB networking for Mac Classic). There may be others, too, but these are the ones I've encountered.

AFP 3.0 (OS X) and DAVE 4.X overcome the 2 Gig limit.

What are SCSI-1, SCSI-2, Ultra-SCSI, etc.? What do I really need?

These are all peripheral buses for connecting hard drives (among other things) to computers.

SCSI-1 is the "original" SCSI. It's an 8-bit bus with a maximum 5 MB/sec transfer rate. As DV requires 3.6 MB/sec sustained, SCSI-1 is generally too close to the edge for reliable DV transfers. Remember, that 5 MB/sec rate assumes no hiccups, and your computer has more to do than just wait around to dump DV data to/from the SCSI bus.

SCSI-2, also known as "fast SCSI" or "fast narrow SCSI", doubles the data rate to 10 MB/sec. This is usually acceptable performance for single-stream DV capture and playback.

Fast-Wide SCSI uses a 16-bit data path for 20 MB/sec peak transfer rates (for this, you need the 68-pin cable, not the 50-pin Centronics or DB-25 cables used by slower flavors of SCSI). Likewise, Ultra SCSI or SCSI-3 yields 20 MB/sec through faster data clocking. Fast-Wide or Ultra SCSI drives are fine for DV editing.

Wide Ultra SCSI (Fast Wide 20) combines the 20 MHz transfer rates with a 16-bit bus for 40 MB/sec, really quite a bit faster than needed for DV. There are even faster variants of SCSI, but these are exotic and expensive and are definitely overkill.
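
To put those bus speeds in perspective against DV's 3.6 MB/sec sustained requirement, here's a small comparison sketch (peak rates from the paragraphs above; real-world sustained throughput will be lower):

```python
DV_RATE = 3.6  # MB/sec sustained requirement for a single DV stream

peak_rates = {                    # peak transfer rates, MB/sec
    "SCSI-1": 5,
    "Fast SCSI-2": 10,
    "Fast-Wide / Ultra SCSI": 20,
    "Wide Ultra SCSI": 40,
}

for bus, peak in peak_rates.items():
    print(f"{bus}: {peak / DV_RATE:.1f}x the DV rate")
```

SCSI-1's mere 1.4x headroom is why it's "too close to the edge" once the computer has anything else to do.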

Want the big picture? Check out this old Adaptec page for a comprehensive matrix of I/O technologies. This big table lists all the SCSI flavors and everything else from parallel ports to USB to SSA to Fibre Channel to -- yes -- 1394.

What about Ultra-DMA?

Ultra-DMA, also known as UDMA, Ultra ATA, Fast ATA-2, or ATA-33, is an enhancement of the EIDE disk-drive interface, available on Macs since the blue & white G3s and on all current PCs: basically, anything from 1998 onwards should support UDMA. UDMA has been further extended for 66 MByte/sec (ATA-66) and 100 MByte/sec (ATA-100) transfer rates; as of 2002 most any computer you buy will offer ATA-66 and/or ATA-100 drives as standard equipment.

UDMA drives are a lot cheaper than SCSI-3 drives, and are capable of stutter-free capture and playback of DV data. UDMA (ATA-33) allows best-case transfer rates of 33.3 MB/sec, compared with the 16.6 MB/sec best-case transfer rate under EIDE without UDMA (of course, this is only one of the bottlenecks in real-time DV work, which is why a fast raw transfer rate alone is not a sufficient indicator of DV suitability).

Win95 requires an upgrade to exploit UDMA; WinNT needs to have UDMA enabled; Win98 and Win2K support it fully in their default installations.

The early "blue & white" G3 Macs captured and played back DV without problems on UDMA drives, but not SCSI -- the PCI chipsets used (as of February 1999) appear to cause problems even with fast SCSI-3 drives. Apple released an update for the SCSI controller code in June 1999 that is supposed to solve these problems.

UDMA drives are backwards-compatible with IDE/EIDE controllers; you can drop a UDMA drive into an older computer and it will work. However, to get the level of performance needed for real-time DV work, you may need to have a UDMA-compatible controller with BIOS and OS support -- though I routinely play 9 minute clips from a 1997 Maxtor UDMA drive on a plain old 1995 EIDE controller with no dropped frames.

As of mid-2001, even 4200rpm IDE notebook drives are more than fast enough for a single stream of DV25. With recent advances in recording technology, the areal density of data on a drive has skyrocketed; one side effect of this is that even on slower drives, the amount of data passing by the read head on a single rotation of the disk has doubled or quadrupled. Combined with large on-disk memory buffers, playing or recording a stream of DV is child's play for modern drives.
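
The areal-density point can be made concrete: a drive's best-case sustained rate is roughly one track's worth of data per revolution. The numbers below are purely illustrative, not measurements of any particular drive:

```python
def sustained_mb_per_sec(rpm, mb_per_track):
    """Best-case sequential throughput: one full track read per revolution."""
    return mb_per_track * (rpm / 60.0)

# Hypothetical 4200 rpm notebook drive holding 0.25 MB per track:
rate = sustained_mb_per_sec(4200, 0.25)
print(f"{rate:.1f} MB/sec -- versus DV's 3.6 MB/sec")   # 17.5 MB/sec
```

Even with generous allowances for seeks and overhead, a slow modern drive clears the DV rate with room to spare.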

Why doesn't my non-linear editor see timecode if it's already on the tape?

Unfortunately, some older and even some current DV NLEs do not capture timecode into the clips stored on disk. This is not a hardware problem, it's a problem with the capture programs used by these editors. As the market matures, expect more software to gain this capability (and if it doesn't, ask your NLE vendor why not).

The Canopus plug-ins for Premiere 5.1 (PC) capture timecode, though the stand-alone Canopus tools did not (I don't know if they do in current releases).  Premiere 6.0 on either Mac or PC captures timecode using the built-in DV support (QuickTime on Mac, DirectX on the PC). iMovie does not capture timecode, though Final Cut Pro does.

What are Type 1 and Type 2 AVI files (and why should I care)?

Microsoft defines two types of DV files as part of the AVI file specification. Type 2 contains two data streams, a "vids" video stream and an "auds" audio stream. This is the sort of AVI file used by VfW (Video for Windows) compatible applications, including most of the first-generation DV NLE products on Windows. In a Type 2 file, the audio buried within the DV datastream is separated out into its own track or stream. This allows current tools to get ahold of it and manipulate it, but there is a slight storage penalty as the audio stream redundantly replicates information buried inside the DV datastream being treated as the "vids" stream.
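
That storage penalty is modest; a rough estimate, assuming 16-bit/48 kHz stereo audio against DV's ~3.6 MB/sec total stream:

```python
audio_rate = 48_000 * 2 * 2   # 48 kHz, stereo, 16-bit samples: bytes/sec
dv_stream = 3.6e6             # total DV data rate, bytes/sec

overhead = audio_rate / dv_stream
print(f"Extra storage for the duplicated audio: {overhead:.1%}")   # ~5.3%
```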

A Type 1 DV AVI has neither the "vids" nor the "auds" streams familiar to most VfW compatible applications, but a single "iavs" (interleaved audio & video) stream. This is more efficient in terms of storage, but not many NLEs can deal with it: as of May 2001 the following applications knew what to do with a Type 1 file:

  • Ulead MediaStudio 3.0 or later
  • MGI VideoWave II
  • Asymetrix Digital Video Producer 5.5
  • Microsoft Windows Media Player.
  • DirectX Media sample programs: GraphEdit, AMCap, VCDPlayer, etc.
  • Additionally, the capture/playback applications bundled with any 1394 card that reads and writes Type 1 AVIs will be able to handle them.

    Type 1 AVIs are also blessed with the removal of the 2 Gig limit, through an extension of AVI called OpenDML, created by Matrox and approved by Microsoft, and part of the DirectShow architecture.  

    The DirectShow architecture can read and write both Type 1 and Type 2 files, though the Type 2 files are usually not backwards-compatible with the VfW architecture.

    So what? The good news is that starting with Windows 98 and Windows 2000, Microsoft is providing a standard DV codec as part of the DirectShow suite of technologies. The bad news is that the supplied codec will only write DirectShow files, and its quality is not as good as many of the 3rd-party codecs available.

    Older-generation products, like DPS Spark, Canopus Raptor/Rex, Pinnacle DV200/DV300, and the like (and this includes Adobe Premiere through at least version 5.1 running on those boards) read and wrote VfW Type 2 AVIs. Since the boards supplied their own codecs and VfW utilities, they tended to cost a bit more.

    Newer products appeared at very aggressive prices: 1394 cards from ADS Pyro, SIIG, and the like, all at a fraction of the cost of seemingly identical older cards. These cards aren't carrying the engineering costs of their own software codecs; they use the DirectShow infrastructure on Win98/Win2k to supply a DV codec. Because they rely on DirectShow, they read and write DirectShow files.

    Note: there's no quality difference between Type 1 and Type 2 files. The pix are the same in either case. The problems arise if you have a Type 1 file and want to work in an application that only reads VfW Type 2 files. Then you're stuck.

    We are in the tail end of a transition period on the Windows platform between the "pioneer" days of each vendor supplying its own DV infrastructure from the ground up, based on VfW Type 2 precedents, and the "embraced and extended" days where Microsoft has internalized the DV infrastructure and made it their own -- and incompatibly so in the process. I'm so glad they aren't a monopoly! :-) Seriously, though, the internalization of DV is not an evil thing; standardization allows those $29 capture cards to exist, and the Type 1 format has a certain engineering elegance to it. But while we're in the transition from The Old Ways to The New Ways, there will be some bumps in the road.

    One sure-fire way to go between VfW and DirectShow AVIs is to write the DV out to tape using one set of tools, and read it back in using the other. Depending on your NLE, it may be able to read one type of file and write the other, so you can "transcode" simply by opening the existing VfW Type 2 file and saving it back to disk as a DirectShow Type 1 (or Type 2) file. Premiere 6.0+, for example, easily does this.

    Read Microsoft's official description of the issue here, and get a rundown on the current situation (cards, NLEs, and compatibility) from Richard Lawler's always informative Silver List.

    Linear editing

    Can I use DV in linear editing?

    Certainly! Much of the fuss that's made over DV formats is in regard to non-linear editing, but it works fine for linear editing as well.  DV gear interoperates with Hi8, SVHS, Betacam, MII, D-5, and other formats using composite, Y/C, component analog, and serial digital I/O (see Technical Details for which VTRs offer what I/Os). It works fine with the editors and SEGs and DVEs and terminal gear you're used to using.

    What sort of linear editing gear can I get in DV? What sort of machine control is there? How accurate is it?

    Low end: The Sony and Canon camcorders as well as the DHR-1000 and various DVCAM VTRs  are all remote-controllable using the Sony Control-L (LANC) protocol. The Panasonic camcorders (some of them at least) have 5-pin Panasonic ("Control-M") ports. All work fine as edit sources.

    Some JVC DV camcorders offer "J-LIP" ports for remote control and editing. I haven't seen any editors that support J-LIP protocol directly (but see "mid-range" below).

    The DHR-1000 and DSR-30 VTRs have built-in 10-event cuts-only editors as well as separate audio and video insert-edit capabilities, allowing them to be used as the controller in bare-bones cuts-only LANC editing. These decks, while rated at +/- 5 frames accuracy, appear to be frame accurate better than 90% of the time. In-points on the DHR-1000 appear to be frame accurate all the time and there's no reason to expect that the DSR-30 is any different. Out-points may occasionally be off by a frame or two.

    If you don't want to use the built-in controllers on these decks, there are a variety of standalone edit controllers that talk LANC and/or control-M. Among these are Videonics' AB-1 Edit Suite and Video Toolkit, and TAO's Editizer (out of production but the software is still supported), all notable as being control-agnostic systems: depending on the cables used and the setups performed, these will control any mixture of RS-232, RS-422, LANC, and control-M VTRs (great for interformat editing). In my experience TAO Editizer's accuracy is typically +/- 1 frame, with the actual in-point on the DHR-1000 being frame accurate but with the feeder decks being off by perhaps a frame about 20% of the time -- not bad, given that these decks don't capstan-bump and Editizer doesn't varispeed 'em in preroll. (Note that these editors typically only support assemble editing on LANC or control-M recorders; historically, that's all that LANC/control-M machines have been capable of in their Video8, Hi8, and SVHS incarnations.)

    Mid-range: you can integrate low-end gear with high-end editing systems by using protocol converters, so that the lowly camcorder or VTR appears to be a standard, RS-422 protocol edit source. Note however that for the most part these protocol converters allow the low-end decks to serve as edit feeders only, not recorders.

    LANC: Sony provides the IF-FXE2 LANC Interface Box, while TAO offers the L-Port 422 LANC to RS-422 converter. Addenda Electronics offers the RS-2/L and RS-4/L inline adapters to make LANC VTRs look like RS-232 or RS-422 VTRs to external controllers.

    Control-M: TAO has an improved L-Port 422 that also talks control-M (Panasonic 5-pin).

    J-LIP: JVC offers the SA-K38U Control Interface, designed to allow the BR-DV10u dockable DV recorder to be controlled by an editor using either the RS-422 or JVC 12-pin interfaces. It probably works with the consumer DV camcorders as well, although I haven't verified this.

    With all of these, the accuracy is likely to be in the +/- 1 to +/- 5 frame range depending on the edit controller used and the ballistics of the other decks involved.

    High-end: Most high-end DVCAM and DVCPRO VCRs use industry-standard RS-422 serial protocols for machine control and for assemble and insert editing (depending on the VTR). They are frame-accurate, no-nonsense machines you'd use in editing just like BetaSP, MII, DigiBeta, or D-5 VTRs.

    "Hard" codecs vs "soft" codecs

    What's a codec?

    A codec is a compressor/decompressor, a bit of software or hardware that takes raw video and compresses it, and can take the compressed video and decompress it back to raw video.

    Codecs exist for all kinds of compressed video, including DV, motion-JPEG, MPEG, Indeo, Cinepak, Sorenson, wavelet, fractal, RealVideo, vXtreme, and many others. (Indeo, Cinepak, Sorenson, RealVideo, and vXtreme are trademarks of their respective trademark holders.)

    What are "hard" and "soft" codecs?

    Hard codecs are hardware codecs, such as the Sony DVBK-1 or "DVGear" chip, the Divio codec, or the C-Cube / LSI Logic DVxpress MX. You supply power and raw video at one end, and get compressed video out the other end in real time. Flip a switch and pump in compressed video, and raw, uncompressed video comes out.

    Soft codecs are software modules that do the same thing, such as the DV codecs supplied by QuickTime, Microsoft, or MainConcept. Modern computers are easily fast enough that soft codecs can compress or decompress in real time or even faster. Typically, a Pentium or PowerPC running faster than 300 MHz or so will run a soft codec faster than a hard codec.

    Which codec is better?

    That depends on what you're looking for, and what you want to spend.

    One thing to keep in mind is that "hard" vs "soft" doesn't matter when it comes to picture quality: both can give excellent results. Be aware, though, that minor codec differences can cause accumulated errors over multiple compression/decompression cycles [Pix: multigen with different codecs]. For example, the old DVSoft codec used with early Mac and PC NLEs caused a considerable Y/C delay problem (the color information "drifted" across the image, misplaced from the luma). Not all DV codecs are designed the same way, as discussed by codec expert Guy Bonneau.

    When capturing from or outputting to DV VTRs using a 1394 connection, it doesn't matter what kind of codec you have. A DV-based editor stores the same data on disk that travels across the 1394 wire; no compression or decompression occurs. Thus when you're doing capture or playback across a 1394 connection, all you're doing is a real-time data transfer; the codec isn't even in the loop.

    The codec comes into play when you need to:

  • Render transitions, titles, and effects.
  • Capture from or output to non-DV VTRs.

    It's here that the differences become apparent.

    Rendering transitions, titles, and effects: to add an effect (say, a dissolve or wipe between two clips), the system has to take the two source frames, decompress them, perform the mix, and recompress the resulting frame. The soft codec takes CPU power to run, but the CPU has nothing else to do while waiting for the frames, so it might as well be involved. The hard codec runs in real time, but the CPU, once it has set up the data transfers, has to sit and wait for the output anyway. In early 1998, various vendors claimed a 25% speed advantage of hard codecs over soft codecs, or a 30% advantage of soft codecs over hard codecs, or whatever... Too much depends on other factors, like the speed of the computer's CPU, bus and bus interface chipset, to decisively say that one codec will be faster than the other in effects rendering. However, as CPUs and buses speed up over time, the soft codecs (which, unlike their hard counterparts, aren't limited to running at real-time rates) have taken the lead in speed for rendering operations; Canopus uses software codecs for multiple streams of realtime decompression in the DVRexRT and DVStorm NLEs, only using the hardware codec to recompress the output back to DV.
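
    The mix step in that decompress / mix / recompress pipeline is plain arithmetic. A toy sketch, using short lists of 8-bit luma samples in place of real decompressed frames:

```python
def dissolve(frame_a, frame_b, t):
    """Blend two decompressed frames: t=0.0 is all A, t=1.0 is all B.
    A real NLE does this per pixel on full frames, between the codec's
    decompress and recompress stages."""
    return [round((1 - t) * a + t * b) for a, b in zip(frame_a, frame_b)]

# Halfway through a dissolve between two three-sample "frames":
print(dissolve([0, 200, 235], [100, 100, 16], 0.5))   # [50, 150, 126]
```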

    Capturing from or outputting to non-DV VTRs: hard codec systems come with breakout boxes that include analog (composite, Y/C, and sometimes component YUV) connections as well as 1394 connections. You can connect up any VTR format with analog I/O to the box and capture it in real-time or output to it in real-time. This makes it easy, for example, to bring legacy Hi8 or Betacam footage into the editor to intercut with newer DV material. You don't even need to have a DV VTR or camcorder around to use the system, as it has its own hard codec onboard.

    Soft codec systems supply a 1394 board for connection to a VTR, but offer no other inputs or outputs. For outputs, any 1394-equipped camcorder or VTR can be used to transcode to analog (composite or Y/C), so you can record the output of your NLE to Hi8, SVHS, BetaSP, or the like in real-time by using a DV VTR or camcorder as a transcoder (of course, you must have your DV machine present to act as the transcoder as there is no non-1394 output available).

    However, to bring non-DV material into your soft codec based system, you may first have to dub the material to a DV tape: some but not all DV devices allow for live transcoding of analog to digital without first recording the feed. You can also use a separate transcoder (discussed above), but in either case, you'll need that offboard bit of hardware to translate between DV on 1394 and an analog baseband signal.

    Codec Problems

    What's the "White Clip" problem?

    [If you really want all the background info, see Chris Meyer's superb article about luminance ranges in digital video, published in DV Magazine's April 2000 issue.]

    Digital video (for the purposes of this discussion) is stored as an 8-bit signal, with the brightness encoded in the Y signal. 8 bits allows values of Y ranging from 0 through 255.

    In 8-bit digital video as defined by ITU-R BT.601, black is encoded at Y=16, and white is Y=235. That leaves the values 1-15 as "footroom" and 236-254 as "headroom" to accommodate ringing and overshoot in the signal (0 and 255 are reserved for synchronization). The 601 spec provides sample equations to convert 8-bit RGB synthetic images with values ranging from 0-255 into Y values ranging from 16-235. Most software DV codecs follow this mapping convention, which allows you to translate naturalistic computer pix into good-looking video. On a waveform monitor, these whites peak at 100 IRE (NTSC), which is the brightest value you want to feed to a broadcast RF modulator.

    The problem is that cameras can easily capture values that are "whiter than white": peak highlights such as specular reflections, the sun, lights, or even bright clouds or white walls often exceed the nominal white value of 235. These superwhite values occupy Y levels 236 through 254; on the waveform monitor, whites can be seen going from 100 IRE up to almost 110 IRE. Such values are illegal for broadcast, but are perfectly acceptable in the camera: no one in his right mind turns down additional dynamic range!

    Unfortunately, if you take a DV clip with these "superwhite" values and run it through a software codec with the RGB 0-255 mapping, the superwhite values get hard-clipped to Y=235, because values above that range are limited to RGB's 255 maximum. This shows up, typically, as highlights in a scene that suddenly crush when a title is superimposed, or when a dissolve starts. At the end of the super or a transition, highlights suddenly jump up again. Depending on the footage, this can be very noticeable, or not visible at all.
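
    The hard clip is easy to demonstrate. The sketch below implements a nominal 601-style scaling (Y 16-235 stretched to RGB 0-255, rounded and limited to 8 bits); it illustrates the principle, not any particular codec's exact math:

```python
def y_to_rgb(y):
    """Nominal mapping: Y 16..235 stretched to RGB 0..255, hard-limited."""
    return min(255, max(0, round((y - 16) * 255 / 219)))

def rgb_to_y(v):
    """Inverse mapping back into Y space."""
    return round(16 + v * 219 / 255)

for y in (180, 235, 245):    # two legal levels, then a superwhite
    print(y, "->", rgb_to_y(y_to_rgb(y)))
# 180 and 235 round-trip intact; superwhite 245 comes back clipped to 235
```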

    If it's illegal to broadcast superwhite, why allow it in the first place? Won't it have to be clipped before transmission?

    First, not all video is sent over the airwaves by RF modulation: a lot of productions are distributed as tape, as digital cinema, as webcasts, or as film transfers. In film work especially, preserving all that dynamic range (or latitude) is essential for getting the best image possible.

    Second, that extra detail in the headroom can be kept, even when going to air, by lowering the gain of the overall signal. You can do this in post-production on a shot-by-shot basis so as to preserve all the tonal gradations in each scene as required, or you can simply dub your distribution or air master through a processing amplifier at lowered gain to re-level the entire show (admittedly this is rarely done; most of the time a station will set up its proc amps for unity gain and a hard clip at 100 IRE -- Bruce Johnson of Wisconsin Public Television calls 'em "planers" because they plane off the high parts of the signal!).

    Third, if only some frames in a scene are processed, the "planing" of superwhite values in those frames will be immediately and disturbingly noticeable against the untouched frames around them. One workaround people use is to simply render every frame in a scene, clipping all the superwhites uniformly; this works, but it's inelegant and slow.

    One "gotcha" with white-clipping codecs is that everything always looks fine on the PC screen or Mac desktop: all the clips displayed on the computer's screen (with the exception of video-overlay systems like the Canopus cards employ) pass through the software codec and are clipped prior to display. You'll never notice anything odd going into or out of a transition or super, because every frame on the desktop has been clipped. When you render to tape, though, or view the FireWire output, you will see the popping, jumping white values in all their gory glory.

    In my opinion, white clipping should be left to the editor, who can deal with superwhites in a variety of ways depending on creative and technical needs -- whites shouldn't be clamped or clipped by a software codec, which should by design be as transparent as possible.

    How can white clipping be avoided, and what are the drawbacks of doing so?

    Some codecs allow alternative mappings, the most common being a "straight-through" mapping so that Y=16 maps to RGB=16,16,16 and Y=235 becomes 235,235,235. No white clipping occurs; bright highlights survive rendering unscathed.
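
    A minimal sketch of that straight-through mapping (again an illustration of the idea, not a specific codec's implementation):

```python
def y_to_rgb_straight(y):
    """'Straight-through' mapping: the Y value is copied into RGB unchanged
    (limited only to the 8-bit range)."""
    return min(255, max(0, y))

for y in (16, 235, 245):
    print(y, "->", y_to_rgb_straight(y))   # all three survive, 245 included
```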

    There are two consequences to this. One is that the nominal brightness range in RGB is only 16-235 instead of 0-255; this represents a slight loss of precision in the RGB color space that may result in lower transcoding accuracy, especially over multiple generations. The other is that on the computer screen, exported still pix may look slightly washed out: blacks will only be dark gray, and whites will be light gray instead of crisp white.

    The first problem is sometimes visible, but usually not significant in practice, as far as I can tell (multigeneration tests using the better codecs with both 0-255 and 16-235 settings show only minor differences after five compression passes). And compared to the loss of all superwhites, it's a low price to pay. Compared to the losses and distortions incurred even by a theoretically perfect DV codec with its 4:1:1 or 4:2:0 sampling and its 5:1 DCT compression, the slight degradation of 16-235 RGB mapping is insignificant.

    The second problem is compensated for by learning to "read" the images correctly when the 16-235 mapping is used. While the washed-out pix may be disturbing at first, you can easily adjust for the look; most people make the shift quickly and without complaint. Furthermore, many NLEs export RGB stills at the 0-255 setting regardless of how they process pictures internally.

    The terminology or options used to select the mapping range will vary from codec to codec (where they exist at all). For example, DVPlus gave you "clamped" (0-255) vs. "full" (16-235) RGB mappings; FCP lets you select "superwhite" processing, and so on.

    Codecs that allow you to switch off white clipping (or work by default in a full-range color space) include Digital Origin (now discreet) SoftDV (PC, Mac), the Canopus codec (PC), QuickTime 4.1.3 and later (Mac using FCP), Avid XpressDV (Mac / PC), and Matrox's software codecs (PC).

    What other codec problems are there?

    Along with white clipping, extremely saturated colors may sometimes be clipped. The fix, again, is a "clamped" vs. "full" selection that trades off a minor reduction in RGB color precision against the ability to handle highly saturated colors.

    Some codecs are simply better than others at preserving the information in a scene without adding brightness, saturation, or hue shifts; Y/C delays; or mosquito noise or other artifacts [pix].

    The Apple DV codec in QuickTime 4.0.x was particularly problematic. It suffered from a combination of white clipping, poor luma and chroma fidelity, excess artifact buildup, and an odd vertical striping sometimes noticeable in flat solid areas when the clip is played back through a Sony hardware codec (!). Since both Premiere 5.1c and early versions of Final Cut Pro used QuickTime 4.0.x, DV rendering quality was less than perfect with these systems.

    Fortunately, ProMax delivered DV Toolkit 3.0 in late 1999. Brad Pillow took the original DVSoft codec and modified it to work with QuickTime 4; he also added the option to select full-range mapping for both luma and chroma, so white clip and color clip can be avoided. My initial test of this new "DVPlus" codec showed it to be of excellent quality. At last, Final Cut Pro could produce no-excuses quality with DV-native editing.

    In late 2000, Apple rewrote their DV codec for QT 4.1.3 and QT 5.x, resulting in one of the best (and fastest) software codecs available. ProMax quietly dropped the DVPlus codec in 2001, as the Apple codec had removed the need for a 3rd-party Mac codec entirely.

    Many current codecs (e.g., Matrox, Canopus, Apple, Avid) are of superb quality, and the remainder (Microsoft DirectX 8, Radius/Digital Origin/Discreet) are quite tolerable.

    M-JPEG, MPEG-2, and DV50

    Can I transcode between DV and motion-JPEG, MPEG-2, or DV50?

    You can. Depending on the amount of compression used, you might not even see a difference.

    It seems to be generally accepted that M-JPEG compression at 3:1 is roughly equivalent in quality to DV's 5:1 compression. It's also worth remembering that DV and JPEG are both DCT (Discrete Cosine Transform) codecs; they tend to have similar artifacts and effects on pictures. (DV gets its additional compressive efficiency through block-level optimization of quantizing tables, whereas JPEG uses a fixed quantizing table for an entire image).
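
    The 5:1 figure falls out of simple arithmetic on the 525/59.94 4:1:1 raster and DV's 25 Mbit/sec video payload:

```python
fps = 30000 / 1001                        # 29.97 frames/sec
samples = 720 * 480 + 2 * (180 * 480)     # 4:1:1 raster: Y plus Cb and Cr
uncompressed = samples * fps              # bytes/sec at 8 bits per sample
dv_video = 25e6 / 8                       # 25 Mbit/sec payload, bytes/sec

print(f"Compression ratio: {uncompressed / dv_video:.1f}:1")   # ~5.0:1
```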

    Thus, one might venture to guess that whether one is compressing via 5:1 DV or 3:1 JPEG, similar amounts of damage are done to the image, and that transcoding between these two compression schemes might cause less degradation than the initial compression caused.

    Indeed, at NAB '96 Panasonic had hidden away in a corner a most interesting demonstration. A D-5 (uncompressed ITU-R-601) signal was fed to a component digital switcher on input #1. It was also taken, compressed via the DVCPRO codec, decompressed, and fed to input #2. The processed signal was further fed through a Tektronix ProFile DDR using JPEG at around 2.5-3:1 compression, and played back to input #3. That signal was again fed through a DVCPRO compression/decompression chain, and brought up on input #4.

    A wipe pattern was set up, and by pressing buttons one could see a split-screen of any two signals on the switcher. Remember, this was a digital component switcher, and the monitor was one of those gorgeous Panasonic digital monitors where the image data stay digital all the way to the modulating grid (really, these are amazing monitors; if you haven't seen one, you don't know how good video can look).

    The original D-5 image was deep, quiescent, lucent: as good as 525/59.94 images get. The first DVCPRO-processed image showed the usual sorts of DV artifacts we've all come to know and love, but it was still pretty darn good; you had to look closely to see any degradation.

    But that was it: the further stages of processing showed no noticeable difference. The initial DV compression had already thrown away the troublesome transients and difficult details. What survived the initial DV codec was a DCT-friendly image that suffered very little from further compression in the ProFile, and the ProFile-processed image ran through the final DVCPRO codec with ease.

    I'm not saying the images were identical; there were probably minor truncations and losses occurring in the ProFile's JPEG codec and in the final DVCPRO codec. However, these were very minor and visually imperceptible. Because the entire signal path was digital, the image stayed in registration throughout; there was no shifting of 8x8 DCT block boundaries nor were there level shifts and noise introductions as could occur in analog connections, both of which could degrade further compression. Moreover, the compression on the ProFile was very mild; it was at least as good, visually speaking, as the DVCPRO compression.

    So, it can be done. Bear in mind that the level of JPEG compression used is a big determinant of whether you can transcode successfully. If you're using low JPEG compressions of  3:1, 2:1, or less, and transcode in the digital domain (through a serial digital connection or software conversion, rather than via an analog connection to a JPEG codec), you will see very, very little degradation of the image. If you dump your DV data into the JPEG world via an analog connection, or if you use higher compression rates, you will see a progressively higher amount of degradation.

    Even so, there's always the risk of some loss. As a fellow said at SIGGRAPH '86, "Dealing with floating-point numbers is like shoveling sand: when you pick up a handful, you get a little dirt, and some sand trickles out..." and the same can be said about moving between different codecs.

    I-frame-only MPEG-2 is said to be comparable to DV format compression at the same bit rate. Thus 25 megabit MPEG-2 should yield results (and transcoding errors) similar to DV, and 50 Megabit MPEG-2 should be comparable to DV50.

    Which is better for editing: DV, M-JPEG, MPEG-2, or DV50?

    Ahh, now that's the question! And with systems like DraCo's Casablanca, Matrox's DigiSuite LE, or Pinnacle's ReelTime that work in M-JPEG but offer 1394 I/O; or the Matrox RT2000 that offers either 25 megabit DV or 25 megabit MPEG-2; or the DigiSuite DTV and LX that allow any combination of DV25, DV50 (DTV only), and MPEG-2 up to 50 megabits... what's going on???

    DV is good when you've shot in DV and stay in DV on disk: there's no transcoding required. DV is ideally suited to desktop editing because the data rates are viable on garden-variety UDMA disks and controllers; you can assemble a perfectly usable DV editing system for under US$2000 and produce excellent, broadcast-quality work (well, technically, at least; despite what the manufacturers would have you believe, no format or software guarantees to make you a creative genius).

    If you're shooting DV, why not stay in DV all the way? The sweet spot for this format (to borrow Panasonic's DVCPRO slogan) is "faster, better, cheaper", and you can't get comparable M-JPEG quality for DV prices, DV data rates, and DV storage requirements.

    On the other hand, DV's fixed data rate means that 25 Mbits/second is what you get: you can't use a DV codec to grab hours of low-res "offline" quality to disk for a rough edit.

    More importantly, you can't "bump up" to higher quality if your editing system is strictly DV-native, using DV as an acquisition format but editing in a less-compressed 4:2:2 system. Whether the difference is a visible one by the time your program hits a VHS cassette or an over-the-air analog transmission is arguable in many cases, but it is an issue to be aware of, especially if you need to protect as much quality as possible for DVD or future DTV usage.
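The chroma-sampling gap is easy to quantify. A small sketch of the bookkeeping (the 720-samples-per-line figure is standard 601-based SD; the scheme labels follow the usual DV conventions, 4:1:1 for 525-line DV and 4:2:0 for 625-line DV):

```python
# Chroma-sample bookkeeping for one 720-pixel SD line (a sketch):
# how much color detail each sampling scheme keeps relative to luma.

LUMA_SAMPLES = 720  # active luma samples per line in 601-based SD

schemes = {
    "4:2:2": (LUMA_SAMPLES // 2, 1.0),  # half horizontal chroma, full vertical
    "4:1:1": (LUMA_SAMPLES // 4, 1.0),  # quarter horizontal chroma (525-line DV)
    "4:2:0": (LUMA_SAMPLES // 2, 0.5),  # half horizontal AND half vertical (625-line DV)
}

for name, (cb_per_line, vertical_fraction) in schemes.items():
    relative = cb_per_line / LUMA_SAMPLES * vertical_fraction
    print(f"{name}: {cb_per_line} Cb samples/line, "
          f"{relative:.0%} of luma resolution per chroma channel")
```

Both DV flavors end up with a quarter of the luma resolution in each chroma channel, versus half for a 4:2:2 system; that's the quality being "protected" (or not) when you decide whether to post DV-native.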

    M-JPEG is a mature technology used in most high-end (Avid, Accom, Discreet, etc.) editing systems. It offers the ability to capture at different rates, so you can save on disk space for the offline work and redigitize the rough cut for the online clean-up. At the lesser compression levels it offers potentially higher quality; if you're doing a lot of multi-pass or multi-layer effects work, you'll wind up with fewer cascaded compression artifacts with a high-end M-JPEG system using 4:2:2 color sampling.

    M-JPEG will cost you more for the same level of quality, requiring faster disks or RAID arrays, and more of 'em. However, M-JPEG systems are attractively priced these days as all the hype is for MPEG-2 and DV-native systems; some deals can be had if you shop around.

    Systems like Casablanca, DigiSuite, or ReelTime are M-JPEG at the core, but offer a 1394 connection so that you can pipe your DV data in and digitally transcode it to M-JPEG. As I discuss above, this need not visually degrade the image, assuming the underlying data rate is high enough that low compression levels can be used. It's definitely going to be better than an analog connection between your DV source and the M-JPEG data on-disk; these systems may seem odd, but they make sense from a technical standpoint.

    You can now get editing systems using MPEG-2, usually the 4:2:2P@ML MPEG-2 flavor. Pinnacle has liquid silver and the DC1000, both MPEG-2 editors. Matrox's RT2000 can store video as DV or as 25 megabit/second MPEG-2, while their DigiSuite DTV can store DV, DV50, and MPEG-2 at up to 50 megabits/second.

    MPEG-2 at 25 megabits should be roughly comparable to DV, though its 4:2:2 color sampling may be more beneficial for graphics. Are 25 Mbit MPEG-2's benefits worth the transcoding hit coming from DV? It's arguable: I've been comparing DV25 and 25 Mbit MPEG-2, and can't say I see a huge difference one way or the other. Both have their artifacts, and their tradeoffs.

    DV50 or MPEG-2 at 50 megabits will be clearly superior in quality to DV, albeit at twice the data rate (think about the disk space and disk speed necessary). Indeed, if you're originating on DV50 (D-9, DVCPRO50) there's no reason to go with anything other than a DV50 NLE, using SDTI in place of 1394 for the transfer. The only way up from DV50, practically speaking, is to go totally uncompressed -- and nowadays, that's increasingly viable, what with fast disks and inexpensive arrays.
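To make "twice the data rate" (and the jump to uncompressed) concrete, a quick throughput sketch. These are nominal video-stream rates of my own reckoning, not vendor specs; a real system needs headroom on top of the sustained figure:

```python
# Sustained-throughput sketch: MB/s a disk system must deliver for
# each editing format (nominal video-stream rates, no overhead).

RATES_MBIT = {
    "DV25": 25.0,
    "DV50": 50.0,
    "MPEG-2 4:2:2P@ML (50 Mbit)": 50.0,
}

def mb_per_sec(mbit: float) -> float:
    """Megabits/second to megabytes/second."""
    return mbit / 8

for name, rate in RATES_MBIT.items():
    print(f"{name}: {mb_per_sec(rate):.1f} MB/s sustained")

# Uncompressed 8-bit 4:2:2 525-line SD:
# 720 x 486 active pixels x 2 bytes/pixel x 29.97 frames/sec
uncompressed = 720 * 486 * 2 * 29.97 / 1_000_000
print(f"Uncompressed SD: {uncompressed:.0f} MB/s sustained")  # ~21 MB/s
```

DV50's ~6 MB/s is still single-fast-drive territory; uncompressed SD's ~21 MB/s is what pushes you into striped arrays.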

    So which is better, DV, DV50, M-JPEG, or MPEG-2? It depends on your needs, your target distribution methods, and your budget.

    As I said above, DV has a very nice sweet spot: it's cheap, it's readily editable on desktop (and laptop) computers, and the quality is comparable to Betacam SP. I've gone into the field with nothing but a VX1000 and a FireWire-equipped PowerBook and turned out corporate video clips that made clients happy. If you'll be sending pix back out to DV tapes, then there really isn't any reason to go with anything other than a DV-native system; DV and 1394 work so well together it's a natural match.

    But if you're going to a higher-quality edit master, it may be a different story. Joachim Heden (who started the whole debate over the differing qualities of assorted DV codecs) firmly believes in shooting DV in the field, but editing in low-compression M-JPEG (or MPEG-2 or DV50) before going out to D-9 or DigiBeta tape. Especially when using sharp-edged, finely-detailed graphics, as he does, DV can be problematic, with its reduced chroma sampling and the mosquito noise that sharp edges can create. Since his edit master will be a higher-end format that won't impose DV's limits on the final program, he sees clear advantages in editing with a less-lossy compression scheme. [He took me to task at NAB 2000 for not making this rationale clear -- so I'm making it clear now. Thanks, Joachim!]

    Going to a lower-compression format in editing, even when using 4:1:1 or 4:2:0 DV camera originals, can preserve higher quality through the postproduction chain. Nowadays (2000), it won't cost you an arm and a leg to get that higher quality, either -- but it will cost you more in hardware and in disk space if you want to avoid codec concatenation artifacts and keep the quality high enough to justify the added effort.

    On the one hand, few would try to post a national ad campaign on DV; the clients can afford (and expect the quality that comes with) Digital Betacam or better.

    On the other hand, JVC has belatedly entered the professional DV marketplace, after professionals failed to flock en masse to their superior DV50 format, D-9. As it turns out, 4:1:1 DV, even in post, is just fine for most news, corporate, and industrial users, and more than a few digital filmmakers are happily shooting and posting in native DV and then screening on 35mm or on HD video projection.

    So the answer is: there is no easy answer. Sorry!

    For what it's worth, as of August 1998 some in the industry predict that before too long there will be only two flavors of compression used in editing: DV (including DV50) and MPEG-2. Both format families are "native" capture formats (Betacam SX for MPEG-2) and MPEG-2 is the distribution format for American DTV, whereas M-JPEG introduces a compression step that's neither native to an acquisition format nor used for distribution. The European Broadcasting Union, in Annex C of the SMPTE/EBU Task Force for Harmonized Standards for the Exchange of Program Material as Bitstreams Final Report, backs this up by recommending that DV family and MPEG-2 4:2:2P@ML family compression schemes be used in future networked television production. We'll see...

    Copyright (c) 1998-2002 by Adam J. Wilt.
    You are granted a nonexclusive right to print, link to, or otherwise repurpose this material, as long as
    all authorship, ownership and copyright information is preserved, and a link to this site is displayed.


    Last updated 2005.08.28