25 January 2008
Every other month, Adobe Edge subscribers are greeted with an invitation to view the latest newsletter and its featured video. The Edge video is a relatively new addition to the newsletter, and although there are over a million subscribers, producing the video is a two-person collaboration between Edge editor Julie Campagna and me.
In this article, I tell you how we do it—from conceiving the topic, planning the shoot, and choosing recording formats to encoding and posting the video to the streaming server. You'll get a sneak peek into our unusual process, as well as some good tips on preparing for a successful video production.
Julie Campagna has served as editor of the Edge since 2001, when it was a Macromedia publication. Through subscriber feedback (e-mail and in person) and Edge metrics, she has come to know the Edge audience very well. So when we sit down to talk about story ideas for the featured video, she usually has two or three solid ideas. I usually make a suggestion or two about how we could shoot each idea, and we settle on the one that excites us the most. Then we discuss goals for the piece.
Ideas are golden, but goals drive a project. Every time we plan a video, we think about the goal, the objective—the takeaway. Having a goal established up front helps ensure that we don't get off track during the interview process and during post-production when we're trying to stitch it all together.
With the Edge featured video, our overall goal is to inspire and inform. But each piece has its own specific goals, and we establish those objectives up front: What is it about the subject of the video that is inspiring? What do people really want to know about it? What is it that no one knows about, but is really fascinating? You get the picture.
Next, we figure out the logistics. Where can we shoot the video? Whom do we want to interview and why? Will we need permission to be on the property? What type of gear do we need? While we work through these details, we take a realistic look at budget, timeframe, and resources to make sure we can produce the video on time and within budget.
Preproduction is a long, detail-oriented process that includes everything that must happen before the lights are turned on or anyone gets in front of the camera.
Thankfully Julie does most of the tedious work already mentioned: permissions, schedules, and so on. This is her job—partly because we usually shoot at the Adobe offices in San Francisco and she has all the contacts, and partly because she's the one with the story to tell. She arranges the schedule with the people we need to talk to and does the truly essential work of preparing each person to interview on camera. Even though it takes her a good deal of time to prepare subjects for an interview, it makes a huge difference in the quality we get from our interviewees—and it saves us time during production.
During preproduction, I deal with the arrangements for the shoot itself, such as identifying and renting the camera gear and finding the best locations to shoot (see Figure 1).
Before I started working on the Edge in 2006, I had been thinking of making the leap from standard definition (SD) to high-definition (HD) video. I figured I'd get there by the summer of 2007. But I made the leap sooner than expected. I had to. Dan Cowles, long-time video production guru at Macromedia and now Adobe, told me in no uncertain terms that I would be shooting the Edge featured video in HD—and I needed to learn to work without tapes.
I had no HD experience, but how different could it be? It's lights, camera, action, right? Yes and no.
HD is a general term that covers all the high-definition formats, including HDV (which I explain later). When people at the higher levels of this business refer to "HD," they are usually talking about material acquired using the XDCAM-HD, DVCPRO-HD, and HDCAM formats. (Wikipedia has an exhaustive article about high-definition video.)
I usually shoot the Edge in DVCPRO-HD (which I refer to as HD in this article). I use a Panasonic HVX-200, which records to a pair of glorified flash-memory cards called P2 cards (hence, the tapeless workflow).
Once we fill these cards up, we transfer the footage files onto a media drive attached to my last-generation Apple PowerBook G4—the last model with a PC Card slot large enough to accommodate a P2 card. Then we load the P2 cards back into the camera and continue shooting.
Keep in mind that when you're using tape and you've shot 60 minutes of material, it takes about 60 minutes to import it for editing. With a tapeless workflow, the material you just shot can be transferred as fast as you can copy the information from hard drive to hard drive. At first this all seemed a bit strange, but I soon got the hang of it. Once I did, I discovered the one hitch to a tapeless workflow—no tape.
When you're shooting in HD, you're using expensive P2 cards. So, without tape, how do you archive the footage? You certainly won't archive using a $700 P2 card that holds only 8 GB of material. (Currently you can get $900 P2 cards that hold 16 GB.)
Unfortunately, the options for archiving HD material are neither inexpensive nor convenient. DVCPRO-HD takes up about 1.24 GB for every three minutes of video, and at the end of a shoot we usually have about 40 minutes of material. You can either buy stacks of external drives and duplicate every archived project on at least two of them, or invest in some kind of server solution with redundant backup, such as a RAID storage box. These boxes contain several hard drives and spread files across them—striped for performance, mirrored for backup, or both. The downside is that the smallest boxes cost around $1000.
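To put those numbers in perspective, here's a quick back-of-the-envelope calculation using the figures above (a sketch only—the per-minute rate varies with the recording mode; Python is used here just for the arithmetic):

```python
# Storage math for a tapeless DVCPRO-HD shoot, using the article's figures:
# ~1.24 GB per 3 minutes of footage, ~40 minutes of material per shoot,
# and 8 GB / 16 GB P2 cards.

GB_PER_3_MIN = 1.24
shoot_minutes = 40

shoot_gb = shoot_minutes / 3 * GB_PER_3_MIN          # total for one shoot
minutes_per_8gb_card = 8 / GB_PER_3_MIN * 3          # capacity of an 8 GB card
minutes_per_16gb_card = 16 / GB_PER_3_MIN * 3        # capacity of a 16 GB card

print(f"One 40-minute shoot: about {shoot_gb:.1f} GB")
print(f"An 8 GB P2 card holds roughly {minutes_per_8gb_card:.0f} minutes")
print(f"A 16 GB P2 card holds roughly {minutes_per_16gb_card:.0f} minutes")
```

So a single shoot fills an 8 GB card about twice over—which is why the transfer-and-erase cycle (and the archiving question) matters.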
Another option involves playing everything out to tape—but then you defeat the benefit of the tapeless workflow. This introduces a variety of other difficulties as well, such as reconnecting your source material at a later date if necessary, because you won't have those nice, self-contained files to reference anymore.
To be fair, drives are getting cheaper. Even server-style RAID boxes are becoming more affordable. One bright spot: companies that have specialized in USB 2.0 RAID storage boxes are catching on to the fact that there is a great market for affordably scaled RAID solutions with eSATA connections. eSATA—a transfer protocol for external devices with 3-gigabit-per-second throughput—makes an external drive attractive both for editing and for archiving HD material.
Still, it's hard to beat a $12 MiniDV tape and a 50¢ CD with your edit files on it.
According to Wikipedia, HDV is an inexpensive, high-definition video recording format. It's inexpensive because it captures footage to MiniDV tape instead of P2 cards or magnetic drives.
With HDV, you get many of the benefits of HD without the thorny issue of archiving to hard-disk media. On the up side, the same three minutes that would consume 1.24 GB in HD takes up 570 MB in HDV. On the down side, HDV is a more compressed version of HD: when you write HD video to a MiniDV tape, you are compressing the image a lot. It still looks great, but you can tell when an image carries more compression. It's similar to watching video online and switching from high-bandwidth to low-bandwidth viewing. The difference between HD and HDV is nowhere near that big, but you can tell them apart, especially when HDV is shot in low light. (You can correct for this with your NLE's color-correction tool, but that adds to the time it takes to cut the piece and introduces a great deal of grainy noise into your pictures.)
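The storage trade-off is easy to quantify from the two figures above (a rough comparison only—real data rates depend on the recording mode):

```python
# Per-minute footprint of DVCPRO-HD vs. HDV, from the article's figures:
# 1.24 GB per 3 minutes (HD) vs. 570 MB per 3 minutes (HDV).

hd_mb_per_min = 1.24 * 1024 / 3   # DVCPRO-HD, converted to MB
hdv_mb_per_min = 570 / 3          # HDV

ratio = hd_mb_per_min / hdv_mb_per_min
print(f"DVCPRO-HD: {hd_mb_per_min:.0f} MB/min")
print(f"HDV:       {hdv_mb_per_min:.0f} MB/min")
print(f"HD takes roughly {ratio:.1f}x the space of HDV")
```

Roughly double the compression is exactly where HDV's visible quality loss in low light comes from.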
When you shoot in a well-lit interview situation, however, HDV can look great. I tend to use HDV for anything that will live only on the web, or if I am certain that the video will not be projected onto a big screen. We used the Sony HDV camera shown in Figure 2 for the Talkin' CS3 in NYC shoot, for example.
HDV is also especially convenient when we are on the road and need to move quickly. It's much easier to swap out for a new tape than take 15 minutes to dump the footage to a hard drive and erase your cards just to shoot new material. For example, during MAX 2007, Julie and I ran around conducting developer interviews all day and into the evening. These interviews were later posted, sometimes within hours, on the Adobe Developer Connection. I shot the interviews using the surprisingly good Canon HV20 HDV camera (see Figure 3), and we were pleased with the results.
I love shoot days. Yes, I sometimes have to get up ridiculously early. Yes, the equipment can be heavy. Yes, sometimes things go horribly, laughably wrong. But for the most part, it is a great, flow-inducing experience to work with a group of people and make something that, ideally, many people will watch to gain an understanding of something they did not have before.
A typical shoot usually starts with a meeting. Julie and I go over the plan we discussed in preproduction and any changes or issues with scheduling, location, or the people who will be on camera. Even when you have planned everything, something almost always changes, so we often need to improvise. I like to get to the location early so I have plenty of time to set up and deal with anything that goes off-plan.
I spend the time double-checking the camera, setting lights, testing microphones, arranging the set, getting water for the talent and myself, making sure I am fed and caffeinated, and going over the plan of what we are shooting and the list of shots I want to get.
In the end, it is a pretty straightforward process. Julie handles most of the interaction with the on-camera talent (usually designers, developers, product managers, or engineers) and I make sure we get good pictures and sound. We also cross-check each other and make sure we get enough information from our subjects to be able to edit, as well as to serve the overall concept of the video.
In most situations—whether it's for an Edge video or a Developer Connection video—the talent sits down and I make sure they are well-lit, set the focus on the camera, and see if they need any makeup, specifically powder to combat the shine on their nose or forehead. Julie warms them up on the topic and runs through how she'll be asking the questions. After we do a short sound check, we start rolling.
During the interview, we listen to how the talent answers the questions and determine whether we understand them. Interviewees are instructed to incorporate the question into their answer so we can cut out the interviewer and let these smart people tell their story directly. I usually like to get a master shot for each question they answer and then go to a closer shot of each question again, or some detail shots of their hands or eyes while they talk to allow for some latitude when editing.
I make sure to listen carefully to all interviewees during the shoots so that I understand what they say. After a take is over, I might even ask some questions about things I don't fully understand. This ensures that we ask enough questions during the interview to clarify the on-camera responses. As you can imagine, actually understanding what is being said is incredibly handy when editing an interview!
Post-production is everything that happens after the final shot—from editing to preparing and posting the finalized video. It holds some of the most interesting parts of the whole process as well as some of the most tedious.
The tedious includes watching everything I just watched a couple hours ago (while shooting). I usually like to let a couple of days pass before reviewing the footage, if possible, so I can take a fresh look at the material. I name all the clips so I know what they are when I'm editing. I have a system that works for me, and I keep everything clean and organized in my edit project, which makes it easier to keep my thoughts clear as well.
Before I go any further, I have to confess: I use Apple Final Cut Studio. I have always used a Mac, and while I was learning multimedia arts in Amsterdam, Adobe stopped supporting Premiere Pro on the Mac. So I have used Final Cut Pro from the beginning, along with tools such as Adobe Photoshop and Adobe After Effects software, of course. However, the release of Adobe Creative Suite 3 brought Premiere Pro back to the Mac—and the Creative Suite 3 Production Premium bundle I just picked up is happily getting me reacquainted with Premiere Pro.
Editing the Edge featured video is unique in my experience. It's truly collaborative. I make a first rough cut of the interview or story, compress it for the web, and upload it to my server. Julie then downloads it and starts editing the video for content. We call this stage a "radio edit": an edit of the story based solely on what is being said, with no real concern for the visuals.
Because Julie can edit video using either Premiere Pro or Final Cut Pro, she doesn't have to try to convey edits by writing out long, detailed instructions or having to meet with me to view the footage. Plus, she gets a good idea of how the story will flow.
When Julie is finished with her radio edit, she uploads her project edit file to the server. Using the original cut as a reference, along with her project file, I produce a proper first cut—which includes cutaways, smooth audio edits, and music. I send her that version, and the real review and approval process begins.
At this point, the bulk of the edit is done, and the last 10% (which can sometimes take as much time as the first 90%) involves changes back and forth that we usually communicate via e-mail. Once the actual content of the video is approved, I do the last of the audio work to even out the sound, finish the music editing and the color-correction, and output the final file.
Often this would be the last step of making a video for a client—burn your final, full-quality file to disk and drop it in the mail or FTP it to them. But because this video needs to go out on an FLV server, it needs to be encoded for viewing.
When I started producing the Edge featured video, it was easy to encode FLV files for final delivery because a template had already been created. I could just drop the final file into the template and out came a great-looking FLV file. But then I did the Talkin' CS3 in NYC video, which was the first Edge featured video I shot using HDV.
I've touched on some of the basic differences between HDV and the HD we usually shoot in, but for encoding purposes there are only a few things to pay attention to. The first is the size of the picture. The Edge is usually shot in 720p, meaning the size of the frame is 720 pixels tall by 1,280 pixels wide. The image is captured in full frames, like film, rather than interlaced half-images, like TV. When we shoot HDV, it is typically 1080i, meaning it is 1080 pixels tall by 1920 pixels wide, and it is interlaced.
The next difference is frame rate. The HD we shoot is 24 progressive frames per second—again modeled after film—whereas HDV is 30 interlaced frames per second.
For the "Talkin' CS3 in NYC" video, I figured that since both formats share the same aspect ratio (16:9, or "widescreen"), scaling the larger HDV picture down to our usual output size wouldn't be a problem, so I left that setting alone. I assumed the frame rate could be set to anything we wanted. I experimented with different settings to make sure everything worked; the first setting played great on my desktop, so after final approvals, I sent the file to be posted for delivery.
Soon Julie started receiving e-mails from unhappy Edge subscribers who were either unable to view the video or were having an extremely frustrating viewing experience.
I realized I had to quickly make changes in the template to accommodate the differences of the new HDV shooting format.
The first thing I noticed when researching the problem was the difference in source frame rates. As I learned from Encoding Flash video, an Adobe Design Center article by Scott Fegette and Tom Green, it is better to keep the compressed frame rate an even ratio of the source. The original FLV template encoded 24 frames per second (fps) video at 18 fps—three quarters of the source. I ended up encoding the 30 fps HDV video at 15 fps—half the source frame rate—which makes buffering easier. I also deinterlace the HDV file, either during export or during encoding, to produce a nice, clean picture in the final FLV.
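That rule of thumb—keep the encoded frame rate a simple fraction of the source—can be sketched as a small helper (a hypothetical illustration, not part of any encoding tool; the ratios mirror the ones described above):

```python
# Pick an encoding frame rate that is a simple fraction of the source rate,
# so frames drop at even intervals instead of producing uneven stutter.
# 24 fps -> 18 fps is 3/4 of the source; 30 fps -> 15 fps is 1/2.

from fractions import Fraction

def is_even_ratio(source_fps, target_fps):
    """True if target/source reduces to a simple fraction (halves to quarters)."""
    r = Fraction(target_fps, source_fps)
    return r.denominator <= 4

assert is_even_ratio(24, 18)      # 3/4 -- the original FLV template
assert is_even_ratio(30, 15)      # 1/2 -- what we used for the HDV footage
assert not is_even_ratio(30, 18)  # 3/5 -- frames would drop unevenly
```

The same check explains why reusing the 24 fps template's 18 fps setting on 30 fps HDV footage caused trouble.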
The second issue I discovered was that the frame size of the player on the page was set to a nonstandard size—the dimensions weren't a true 16:9 widescreen ratio. That had worked before, because you can encode a video to a nonstandard size and have it play back fine in an embedded Flash Player. Combined with the frame rate issue, though, it had some nasty side effects and made for a poor viewing experience.
In the end, the experience taught me that while Flash Player is ubiquitous and therefore most anyone should be able to view FLV files, you have to do the research up front to ensure you're using the right settings. Figures 4 and 5 show my encoding settings for the high-res and low-res versions of the Edge videos, respectively. Also be sure to check out the FLV bitrate calculator developed by Robert Reinhardt to help determine the optimal bitrate at which to encode your Flash video files.
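At its core, a bitrate calculator like Reinhardt's is doing arithmetic along these lines (a simplified sketch, not his actual tool; the clip length and bitrates below are example values only):

```python
# Rough file-size arithmetic behind any bitrate calculator:
# total bitrate (video + audio, in kilobits per second) times duration,
# divided by 8 to convert bits to bytes, then scaled to megabytes.

def flv_size_mb(video_kbps, audio_kbps, duration_seconds):
    total_kbps = video_kbps + audio_kbps
    return total_kbps * duration_seconds / 8 / 1024  # kilobits -> megabytes

# A hypothetical 5-minute clip at 400 kbps video plus 96 kbps audio:
size = flv_size_mb(400, 96, 5 * 60)
print(f"Estimated file size: {size:.1f} MB")
```

Working the math in the other direction—from a viewer's connection speed to a sustainable total bitrate—is what keeps low-bandwidth subscribers from the buffering problems we ran into.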
After the FLV files are made, Julie transcribes the video and sends the transcription to me. I then prepare the closed-caption files—basically transcribe the audio line by line in an XML file—and pass them back to Julie to post to the server. (Read Michael A. Jordan's article, Captioning Flash video with Captionate and the captioning-supported FLVPlayback component skins, for more information about adding captions to your Flash video projects using Captionate or cue points.)
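The caption file itself is line-by-line timed text. The FLVPlayback captioning component reads a subset of the W3C Timed Text format; the sketch below generates a minimal, illustrative file (the timings and caption lines are invented, and the exact subset of elements the component accepts should be checked against its documentation):

```python
# Sketch of the "transcribe the audio line by line in an XML file" step:
# each caption line gets a begin/end time and becomes a <p> element in a
# minimal Timed Text document. Timings and text here are hypothetical.

lines = [
    ("00:00:01.00", "00:00:04.00", "Welcome to the Edge featured video."),
    ("00:00:04.00", "00:00:07.50", "Today we're talking about HDV."),
]

rows = "\n".join(
    f'      <p begin="{b}" end="{e}">{text}</p>' for b, e, text in lines
)
xml = f"""<?xml version="1.0" encoding="UTF-8"?>
<tt xmlns="http://www.w3.org/2006/04/ttaf1">
  <body>
    <div>
{rows}
    </div>
  </body>
</tt>"""

print(xml)
```

In practice a tool like Captionate handles this for you, but the underlying file is no more complicated than this: one timed paragraph per caption line.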
Delivery of the final FLV files comes shortly before the Edge is pushed to the Flash Media Server–powered streaming servers for a brief round of testing. After that, the e-mail blast begins.
Working for the Edge newsletter is fantastic. I am always learning something from the immensely talented people we interview and from the process itself—and along the way I've picked up tips that may seem basic but continue to inform my practice and workflow.
If you have questions or suggestions, or if you would like more specific information on how we produce the Edge featured video, please send us e-mail.

This work is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License
Note: This article is a slightly expanded version of what originally appeared in the Edge newsletter.