
Project profile: How we produce the Edge featured video

by Jamie Wright

Every other month, you are greeted with an invitation to view the Adobe Edge newsletter and its featured video. The Edge featured video is a relatively new addition, and in the last year, my company has been largely responsible for the production of this video. In short, it's a two-person collaborative process that involves Julie Campagna and me.

In this article, I tell you how we do it — from conceiving the topic, planning for the shoot, and choosing recording formats, all the way to encoding and posting the video to the streaming server. You get a sneak peek into our process as well as some good tips on how to prepare for a successful video production.

Concept

Julie Campagna has served as editor of the Edge since 2001, when it was a Macromedia publication. Through subscriber feedback and Edge metrics, this woman knows the Edge audience. So when we sit down to talk about story ideas for the featured video, she usually has two or three solid ideas. I often make a suggestion or two about how we could shoot each idea, and we settle on the one that excites us the most. Then we discuss goals for the piece.

Ideas are golden, but goals are important. Every time we plan a video, we think about the goal, the objective, the takeaway. Having a goal established up front helps ensure that we don't get off track during the interview process and during post-production when we're trying to stitch it all together.

With the Edge featured video, we try to inspire and inform. That's our main goal — but each piece has its own additional goals, and we establish those objectives up front.

Next, we figure out the logistics. Where should we shoot, and will we need permission to be on the property? Whom do we want to interview, and why? What type of gear do we need? While we figure out logistics, we take a realistic look at budget, timeframe, and resources to make sure we can produce the video on time and within budget.

Preproduction

Preproduction is a long, detail-oriented process that includes everything that must happen before the lights are turned on or anyone gets in front of the camera.

Thankfully Julie does most of the tedious work — partly because she has all the contacts and partly because she's the one with the story to tell. She arranges the schedule with the people we need to talk to and does the truly essential work of preparing each person to be interviewed on camera. Even though it takes her a good deal of time to prepare subjects for the interview, it makes a huge difference in the quality we get from our interviewees — and it saves us time during production.

Jamie checks the camera, sets the lights, tests microphones, and arranges the set.

During preproduction, I deal with the arrangements for the shoot itself, such as identifying and renting the camera gear and finding the best locations to shoot.

HD and the tapeless workflow

Before I started working on the Edge in 2006, I had been thinking of making the leap from standard definition (SD) to high-definition (HD) video. I figured I'd get there by the summer of 2007. But I made the leap sooner than expected. I had to. Dan Cowles, long-time video production guru at Macromedia and now Adobe, told me in no uncertain terms that I would be shooting the Edge featured video in HD — and I needed to learn to work without tapes.

I had no HD experience, but how different could it be? It's lights, camera, action, right? Yes and no.

HD is a general term that covers all the high-definition formats, including HDV (which I explain later). When people in the business refer to "HD," they are usually talking about material acquired using the XDCAM HD, DVCPRO HD, and HDCAM formats.

I usually shoot the Edge in DVCPRO HD (which I refer to as HD). I use a Panasonic HVX-200, which records to a pair of glorified flash-memory cards called P2 cards (hence, the tapeless workflow).

Once we fill these cards up, we transfer the footage onto a media drive (attached to my last-generation Apple PowerBook G4), then load the P2 cards back into the camera and continue shooting.

Keep in mind that when you're using tape and you've shot 60 minutes of material, it takes about 60 minutes to import it for editing. With a tapeless workflow, the material you just shot can be transferred as fast as you can copy the information from hard drive to hard drive. At first this all seemed a bit strange, but I soon got the hang of it. Once I did, I discovered the one hitch to a tapeless workflow — no tape.
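To put rough numbers on that difference, here is a back-of-the-envelope sketch in Python. The 60-minute tape figure and the 8 GB card size come from this article; the sustained copy speed is purely an assumption for illustration, so adjust it for your own drives.

    # Rough comparison of tape capture vs. tapeless transfer (illustrative only).
    CARD_SIZE_GB = 8        # capacity of one P2 card, per this article
    COPY_SPEED_MB_S = 30    # assumed sustained copy speed to the media drive, in MB/s

    tape_minutes = 60       # importing 60 minutes of tape takes about 60 minutes
    copy_minutes = (CARD_SIZE_GB * 1024) / COPY_SPEED_MB_S / 60

    print(f"Tape capture: about {tape_minutes} minutes")
    print(f"P2 card copy: about {copy_minutes:.0f} minutes for a full {CARD_SIZE_GB} GB card")

Even with a conservative copy speed, a full card moves over in a few minutes rather than in real time, which is exactly why the workflow feels so different on set.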

Archiving

When you're shooting in HD, you're using expensive P2 cards. So, without tape, how do you archive the footage? You certainly won't archive using a $700 P2 card that only holds 8 GB of material.

Unfortunately, the options for archiving HD material are neither inexpensive nor convenient. DVCPRO HD takes up about 1.24 GB for every three minutes of video. At the end of a shoot, we usually have about 40 minutes of material. You can either buy stacks of external drives and make duplicates of every archived project on at least two drives or invest in some kind of server solution with redundant backup. Or you could play everything out to tape, but then you are defeating the benefit of the tapeless workflow — and this introduces a variety of other difficulties as well. To be fair, drives are getting cheaper. Even server-style RAID boxes are becoming more affordable. Still, it's hard to beat a $12 MiniDV tape and a 50¢ CD with your edit files on it.
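Working from the figures above (roughly 1.24 GB per three minutes of DVCPRO HD and about 40 minutes of material per shoot), a quick calculation shows why the archive question matters. This is only a sketch of the arithmetic; the two-copy rule simply mirrors the duplicate-drive approach just mentioned.

    # Archive footprint for a typical Edge shoot, using the figures in this article.
    GB_PER_3_MIN = 1.24     # DVCPRO HD storage per three minutes of video
    SHOOT_MINUTES = 40      # typical amount of material at the end of a shoot
    COPIES = 2              # duplicate every project on at least two drives

    footprint_gb = (SHOOT_MINUTES / 3) * GB_PER_3_MIN
    print(f"One shoot: about {footprint_gb:.1f} GB")
    print(f"With {COPIES} copies: about {footprint_gb * COPIES:.1f} GB")
    print(f"Equivalent 8 GB P2 cards per copy: {footprint_gb / 8:.1f}")

A single shoot lands in the neighborhood of 16 to 17 GB, and doubling that for redundancy adds up quickly across a year of projects.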

HD vs. HDV

We used this Sony HDV camera for the "Talkin' CS3 in NYC" shoot.

According to Wikipedia, "HDV is an inexpensive high-definition video recording format." It's inexpensive because it uses MiniDV tape instead of P2 cards.

With HDV, you get the benefits of HD without the thorny issue of archiving to hard-disk media. On the up side, the same three minutes that would consume 1.24 GB in HD take up only 570 MB in HDV. On the down side, HDV is a more heavily compressed version of HD. Whenever you write HD video to MiniDV tape or even to P2 cards, you are compressing the image. It still looks great, but you can tell when there's more compression on an image. It's a bit like switching from the high-bandwidth version of an online video to the low-bandwidth version. The difference between HD and HDV is nowhere near that big, but you can tell them apart, especially when HDV is shot in low light.
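To put the compression difference in perspective, the per-three-minute figures above can be turned into approximate data rates. This is a rough sketch that ignores container overhead and audio; treat the results as ballpark numbers only.

    # Approximate data rates implied by the storage figures in this article.
    def mbps(megabytes, minutes):
        """Convert a storage figure (MB over N minutes) into megabits per second."""
        return megabytes * 8 / (minutes * 60)

    hd_rate = mbps(1.24 * 1024, 3)    # DVCPRO HD: about 1.24 GB per three minutes
    hdv_rate = mbps(570, 3)           # HDV: about 570 MB per three minutes

    print(f"DVCPRO HD: roughly {hd_rate:.0f} Mbit/s")
    print(f"HDV: roughly {hdv_rate:.0f} Mbit/s ({hd_rate / hdv_rate:.1f}x less data)")

Roughly speaking, HDV records a little under half the data of the DVCPRO HD material we usually shoot, which is where the extra compression comes from.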

We use the Canon HV20, which can shoot in 24p, when we need to move quickly.

When you are shooting in a well-lit interview situation, however, HDV can look great. I tend to use HDV for anything that will live only on the web, or if I am certain that the video will not be projected onto a big screen.

HDV is also especially convenient when we are on the road and need to move quickly. It's much easier to swap in a new tape than to spend 15 minutes dumping the footage to a hard drive and erasing your cards just to shoot new material. For example, during MAX 2007, Julie and I were running around conducting developer interviews all day and into the evening. These interviews were later posted, sometimes within nine hours, on the Adobe Developer Connection. I shot the interviews using the surprisingly good Canon HV20 HDV camera, and we were pleased with the results.

Production

I love shoot days. Yes, I sometimes have to get up ridiculously early. Yes, the equipment can be heavy. Yes, sometimes things go horribly, laughably wrong. But for the most part, it is a great, flow-inducing experience to work with a group of people and make something that, ideally, many people will watch to gain an understanding they did not have before.

A typical shoot usually starts with a meeting. Julie and I go over the plan we discussed in preproduction and any changes or issues with scheduling, location, or the people who will be on camera. Even when you have planned everything, something almost always changes, so we often need to improvise. I like to get to the location early so I have plenty of time to set up and deal with any curveballs.

I spend the time double-checking the camera, setting lights, testing microphones, arranging the set, getting water for the talent and myself, making sure I am fed and caffeinated, and going over the plan of what we are shooting and the list of shots I want to get.

In the end, it is a pretty straightforward process. Julie handles most of the interaction with the on-camera talent (usually designers, developers, product managers, or engineers), and I make sure we get good pictures and sound. We also cross-check each other and make sure we get enough information from our subjects to be able to edit as well as to serve the overall concept of the video.

In most situations — whether it's for an Edge video or a Developer Connection video (yes, we produce those too) — the talent sits down, and I make sure they are well-lit, set the focus on the camera, and see if they need any makeup. Julie warms them up on the topic and runs through how she'll be asking the questions. After we do a short sound check, we start rolling.

During the interview, we listen to how the talent answers the questions and determine whether we understand them. Interviewees are instructed to incorporate the question into their answer so we can cut out the interviewer and let these smart people tell their story directly.

I make sure I understand what they are saying to ensure their answers will make sense to the Edge audience. I'm also the person who edits the piece, so I really need to understand the content.

Post-production

Post-production is everything that happens after the final shot — from editing to preparing and posting the finalized video. It holds some of the most interesting parts of the whole process as well as some of the most tedious.

The tedious part includes watching everything I just watched a couple of hours ago (while shooting). I usually like to let a couple of days pass before reviewing the footage, if possible, so I can take a fresh look at the material. I name all the clips so I know what they are when I'm editing. I have a system that works for me, and I keep everything clean and organized in my edit project, which makes it easier to keep my thoughts clear as well.

Before I go any further, I have to confess: I use Apple Final Cut Pro. I have always used a Mac, and while I was learning multimedia arts in Amsterdam, Adobe stopped supporting Premiere on the Mac. I have been using Final Cut Pro ever since, along with tools such as Adobe Photoshop and Adobe After Effects software, of course. However, with the release of Creative Suite 3, Adobe has restored Mac support for Premiere Pro. Just this month, I purchased Creative Suite 3 Production Premium, and I am getting reacquainted with Premiere Pro.

Tag-team editing

The actual edit for the Edge video is unique in my experience. I make a first rough cut of the interview or the story. I then compress it for the web and upload it to my server. Julie then downloads it and starts editing the video. This stage is what we call a "radio edit": an edit of the story based solely on what is being said, with no real concern for the visual aspect.

Because Julie can edit video using either Premiere Pro or Final Cut Pro, she doesn't have to convey edits by writing out long, detailed instructions or by meeting with me to view the footage. Plus, she gets a good idea of how the story will flow.

When Julie is finished with her radio edit, she uploads her project file to the server. Using the original cut as a reference, along with her project file, I produce a proper first cut — which includes cutaways, smooth audio edits, and music. I send her that version, and the real review and approval process begins.

At this point, the bulk of the edit is done, and the last 10% (which can sometimes take as much time as the first 90%) involves changes back and forth that we usually communicate via e-mail. Once the actual content of the video is approved, I do the last of the audio work to even out the sound, finish the music editing and the color-correction, and output the final file.

The End? Not quite

Often this would be the last step of making a video for a client: burn the final, full-quality file to a disc and drop it in the mail, or FTP it to them. But because this video goes out on an FLV streaming server, it needs to be encoded for viewing.

When I started producing the Edge featured video, it was easy to encode FLV files for final delivery because a template had already been created. I could just drop the final file into the template and out came a great-looking FLV file. But then I did the Talkin' CS3 in NYC video, which was the first Edge featured video I shot using HDV.

I've touched on many of the differences between HDV and the HD we usually shoot in, but for encoding purposes there are only a few things to pay attention to. The first is the size of the picture. The Edge is usually shot in 720p, meaning the size of the frame is 720 pixels tall by 1,280 pixels wide. The image is captured in full frames, like film, rather than interlaced half-images, like TV. When we shoot HDV, it is typically 1080i, meaning it is 1,080 pixels tall by 1,920 pixels wide, and it is interlaced.

Squeeze encoding settings for the high-res version of the Edge featured video.

The next difference is the frame rate. The HD we shoot is 24 progressive frames per second, which is also modeled after film, whereas the HDV is 30 interlaced frames per second.
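Summarized side by side, the two source formats we juggle look like this. The numbers come straight from the two paragraphs above; the labels and dictionary layout are just for illustration.

    # The two source formats described above, side by side (labels are illustrative).
    formats = {
        "DVCPRO HD (Edge)": {"width": 1280, "height": 720, "fps": 24, "scan": "progressive"},
        "HDV (road shoots)": {"width": 1920, "height": 1080, "fps": 30, "scan": "interlaced"},
    }

    for name, f in formats.items():
        aspect = f["width"] / f["height"]
        print(f"{name}: {f['width']}x{f['height']} {f['scan']}, {f['fps']} fps, "
              f"aspect ratio {aspect:.2f}:1")

Both work out to the same 16:9 shape; the frame rate and the interlacing are where the two formats really diverge.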

For the "Talkin' CS3 in NYC" video, I figured since the sizes of the picture were the same dimension, we wouldn't have an issue making the larger HDV picture the size we usually output to, so I left that alone. That way, the frame rates could be set to anything we wanted. I experimented and tested different settings to make sure it worked. The first, original setting worked great on my desktop, so after final approvals, I sent the file to be posted for delivery.

Soon we started receiving e-mails from unhappy subscribers who were either unable to view the video or were having an extremely frustrating viewing experience.

I had to quickly make changes to the template to accommodate the new shooting format.

The first thing I noticed when researching the problem was the difference in source frame rates. As I extrapolated from "Encoding Flash video," an Adobe Design Center article by Scott Fegette and Tom Green, it is better to keep the compressed frame rate at a simple, even ratio of the source frame rate. The original FLV template for 24 frames per second (fps) video encoded it at 18 fps, or three quarters of the original. I ended up encoding the 30 fps video at 15 fps, or half the source frame rate, to make buffering easier.
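That rule of thumb is easy to sanity-check. The sketch below only illustrates the ratios mentioned above; the 18 fps line for 30 fps material is my reading of what the old template setting would have implied, not a figure stated anywhere in this article.

    # Check that an encoded frame rate is a simple, even fraction of the source rate.
    from fractions import Fraction

    def describe(source_fps, encoded_fps):
        ratio = Fraction(encoded_fps, source_fps)
        return f"{source_fps} fps -> {encoded_fps} fps ({ratio.numerator}/{ratio.denominator} of the source)"

    print(describe(24, 18))   # the original Edge template: three quarters of 24p
    print(describe(30, 15))   # the fix for the HDV material: half of 30 fps
    print(describe(30, 18))   # what the old setting implied for 30 fps: an awkward 3/5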

Squeeze encoding settings for the low-res version of the Edge featured video.

The second issue I discovered was that the frame size in the player on the page itself was set to a nonstandard size: the dimensions were off for widescreen HD. It had worked before because you can encode a video to a nonstandard size and have it play back fine in an embedded Flash Player. Combined with the frame rate issue, though, it produced some nasty side effects and a poor viewing experience.
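A simple way to catch that kind of mismatch before it ships is to check the encode and player dimensions against the 16:9 shape of widescreen HD. The sizes below are hypothetical examples, not the actual Edge player dimensions.

    # Check whether a frame size matches the 16:9 shape of widescreen HD.
    def is_16x9(width, height):
        return width * 9 == height * 16

    # Hypothetical sizes for illustration only.
    for w, h in [(640, 360), (480, 270), (640, 340)]:
        verdict = "fits widescreen HD" if is_16x9(w, h) else "off for widescreen HD"
        print(f"{w}x{h}: {verdict}")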

In the end, the experience taught me that while Flash Player is ubiquitous and therefore anyone should be able to view FLV files, it is best to do the research up front to ensure you're using the right settings.

After the FLV files are made, Julie transcribes the video and sends the transcription to me. I then prepare the closed-caption files and pass them back to Julie to post to the server.

Delivery of the final FLV files comes shortly before the Edge is pushed to the servers for a brief round of testing. After that, the e-mail blast begins.

Final thoughts

Working for the Edge newsletter is fantastic. I am always learning something from the immensely talented people we interview, from the process itself, and from your e-mail. The tips I have shared here may seem basic, but they continue to inform my practice and workflow.

If you have questions or suggestions, or if you would like more specific information on how we produce the Edge featured video, please send us e-mail.


Jamie Wright is owner of San Francisco–based Lekker Media and former creative director of Boom Chicago Video Productions, the video production arm of Boom Chicago of Amsterdam.