First of all, I’m not the expert. I was once an expert in video production, and much of that experience still applies, but getting down to the nuts and bolts has been a major re-education. I worked on video almost every day for a decade or more, but that was more than a decade ago. It was before HD, much less 4K UHD. It started before digital editing was powerful or particularly reliable. I spent fortunes on systems that lived with open cases and box fans blowing on the chips just to keep them cool. I worked with used 3/4″ tape decks from TV station surplus sales. I fine-tuned my ability to push buttons in a hopeful effort at getting within a second of where I wanted the edit to occur. I used and killed hard drives that weighed 15 pounds and stored less than a freebie flash drive from a trade show. Ah, the early days.
Now, all I need is a decent computer (could be a laptop), a piece of editing software, and enough storage to hold the footage.
All of us grassroots video producers in those days were chasing the holy grail of broadcast quality. Standard definition was the only definition, and broadcast quality was about color accuracy, being able to carry that signal through the edit process without ending up with smeared video mud. Mostly we failed. The only way to true success was through a very expensive gateway that was dominated by stations and networks that not only could afford the equipment but had all the means of distribution under their collective thumb. Raise your hand if you grew up with 3 channels on your floor cabinet TV…
Now your cell phone probably shoots pretty good video, you can find free (but limited) editing software, and a YouTube account anxiously awaits your next post.
After a whirlwind tour of the state of the art in video, some basic rules bubble to the surface:
One, it’s remarkably cheap to make credible video these days at quality and resolution that far exceeds the high five-figure cameras I used way back when. Accurate editing is no longer even an issue. Honestly, it wasn’t really a problem when I stopped making video a decade ago, but the modern ability to manipulate footage has made enormous progress and offers up a lot more room for creative decisions.
Two, relatively high quality still costs relatively large amounts of money, but the entire cost/quality scale has dropped radically, especially compared to shooting on 35mm film, the only way to achieve a filmic look back in the day. If you recall, George Lucas created a big stir when he decided to shoot the later Star Wars prequels entirely on digital cameras – code for staggeringly expensive video cameras. We all know how that turned out. The modern equivalent is impressive across a broad range of cost, but in general, the quality of “film” costs more like a used car than like a new mansion. Good news.
Three, the line between film (which was a dream in the old days) and video is very blurry. Most high-end television and a large chunk of feature films no longer use film, even though they look exactly how we expect film to look. They are technically shooting video on the set. Low-end television, obviously shot on video, still looks like it was shot on video. The point is that there is no radical split between video and film these days. It’s simply a matter of process and technique applied to equipment that allows enough room to achieve a filmic look. If you know what you are doing, that equipment line extends all the way down into GoPros and cellphones, with some nerd-talk limitations too numerous to mention.
Four, nothing about making a good film has changed. It’s still about telling a story that matters, at least to someone.
Now for the nuts and bolts, and why there’s such a large number attached to my GoFundMe page.
First, audio before video. My documentary is about our relationship with dogs, and features Old Dog Haven as the tip of the sword. Like any story, it requires a number of elements, all working in concert. Yes, it’s a video, and a lot of effort is being spent on the quality of the video. However, anyone who has produced video for a living understands that great video is nothing without great audio. Right now, I have a good microphone on my primary camera. This leads to a key question: is the audio good enough? In many cases, yes. In edge cases, no. I shot some quasi-interview footage at the Walk for Old Dogs, and the result was good as long as I was close to the subject and there was no major background noise. It also had the real drawback of no backup audio.

Backup audio can take many forms, depending on the camera and microphones. In a perfect world, there are at least two video sources for any given interview and two audio sources. The video sources allow me to cut from one shot to the next to keep it interesting, and the multiple audio sources give me even more flexibility. Let’s say that one audio source is a lavalier microphone and one is the audio from the camera five feet away. In the edit, I can choose from both of these sources and mix between them to create the clearest, most natural-sounding audio for the interview. Most importantly, if one recording fails for some reason, like a jet flying over or a garbage truck rolling through the neighborhood, I can use the other source to cover the problem. Are there ways to clean up problem audio? Sure. Does it cost more than recording good audio in the first place? You bet, in both time and money.
Second, video quality. As you read this, be aware that I am simplifying drastically. Modern video quality depends on a lot of factors. First is the camera. My current-generation cell phone shoots amazing video, and the audio is impressive as well. So why not shoot the entire film on my phone? It has been done. Well, modern phone video is good, but it comes from a minuscule camera buried in a phone and is largely dependent on the software that processes the video before it gets recorded. This is a great thing if you want to throw an off-the-cuff video on Facebook. It’s a drawback if you want to mesh the video with other cameras in the course of making a large-scale presentation that looks like a film.
It gets into some technical issues, like bit rates and codecs, but the long and short of it is that if you want to make a long-form film, having more control is better. At the ODH walk event, I shot on one camera, and Kelly shot on his GoPro 4. Both kinds of footage came out very well, but they do not match. If my goal is to tell a seamless story without the distraction of two entirely different looks, I need the control to make the footage match. If you have spent time with a GoPro, you know that it shoots amazing footage in bright sunlight on a clear day. The colors are punchy and vivid, and the footage is crisp and sharp. This is a beautiful effect for action video, as in, welcome to the part where I jump off a sandstone cliff and plunge into huge surf where a surfboard is waiting for me to shred (or whatever surfers do). As part of a seamless film, well frankly, it needs to be toned down. The usual method for controlling the over-punched effect is to shoot in something known as log. In GoPro terms, it’s called Protune. This is a method of shooting the footage without all the punchy effects: the high contrast, the vivid colors, the in-camera sharpening. The downside is that you can’t throw the video up on YouTube in one fell swoop. The upside is that it captures a bit more dynamic range than the normal method and allows you a lot more latitude in the edit. You can more flexibly define how the final footage looks. This is an important factor when trying to make footage from multiple, different cameras look like it came from the same scene.
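Log capture is easier to feel than to explain, so here is a toy sketch in Python. The curve below is made up for illustration (it is not GoPro’s actual Protune transfer function); the point is how a log curve spends a much larger share of the recorded code values on the shadows than a straight linear encoding does:

```python
import math

def linear_to_log(x, a=0.18):
    """Toy log encoding (made up for illustration, not Protune's
    real transfer function): map linear scene light in [0, 1]
    to an encoded value in [0, 1]."""
    return math.log(1 + x / a) / math.log(1 + 1 / a)

# A plain linear encoding gives the darkest 10% of scene light
# exactly 10% of the recorded code values. The log curve gives
# those same shadows roughly 23% of the code values:
shadow_share = linear_to_log(0.1)   # ~0.235
```

That extra precision in the shadows (at the expense of a flat, washed-out look straight out of the camera) is the latitude you buy back in the grade.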
Nerd ALERT! Then there’s the codec issue. Most lower-end cameras shoot some variation of h.264, which is a codec (encoder/decoder), a particular algorithm for compressing the footage in the camera before it gets crammed onto an SD card. As it turns out, h.264 is good enough for this purpose, and it’s good for delivery of the final video. What it’s not good for is the in-between, the actual editing of footage. The reason is that h.264 is an interframe codec, meaning the compression describes each frame of video in terms of earlier and later frames. It’s very efficient for storage of camera footage, but it places a high demand on the CPU of the computer doing the editing. Every time the editing software displays a given frame, it has to decode multiple neighboring frames to reconstruct it. This takes time, which equates to delays when actually trying to make edit decisions.
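For the curious, the standard workaround is to transcode the camera files into an intraframe editing codec before the cut. Here is a minimal Python sketch that builds the ffmpeg command for one common choice, ProRes 422 via ffmpeg’s prores_ks encoder; the file names are placeholders, and this isn’t necessarily the exact codec I settled on:

```python
def transcode_cmd(src, dst, profile=2):
    """Build (but don't run) an ffmpeg command that converts
    interframe h.264 camera footage into intraframe ProRes,
    where every frame stands alone and scrubs instantly."""
    return [
        "ffmpeg", "-i", src,
        "-c:v", "prores_ks",         # ProRes encoder (intraframe)
        "-profile:v", str(profile),  # 2 = ProRes 422 (standard)
        "-c:a", "pcm_s16le",         # uncompressed 16-bit audio
        dst,
    ]

cmd = transcode_cmd("card/clip001.mp4", "edit/clip001.mov")
```

Run through subprocess (or pasted into a terminal), that command trades hard drive space for a timeline that responds the instant you touch it.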
What’s the solution? First, I had to throw out my favorite editing software of many years. Even in its most current version, it choked on h.264. Of course, it could be my hardware causing the problem, but the reality is that I am running a heavyweight desktop machine. Yes, it’s a few years old, but it’s a serious, high-performance chunk of hardware. Each individual hard drive is suspect, and I do have a few performance duds in the box, but I’m not using those for editing. I’m using the best single-drive solution I have right now. If it’s not the hardware, then I need to look at the codec. It turns out that there are a multitude of good editing codecs, and every last one of them takes up more hard drive space than the camera-original h.264. My first test filled up a terabyte of hard drive space like it was nothing. I worked through the options until I found one that had less-than-ludicrous storage requirements. Needless to say, my powerhouse machine for writing, photography, graphics, and CAD finds itself begging for more and faster hard drive space.
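The storage arithmetic behind that terabyte is simple but sobering. A quick sketch, using round example bitrates that are assumptions rather than exact specs for any particular camera or codec:

```python
def gb_per_hour(mbit_per_s):
    """Convert a video bitrate in megabits per second into
    gigabytes of storage per hour of footage (decimal GB)."""
    return mbit_per_s / 8 * 3600 / 1000

camera_h264 = gb_per_hour(100)    # ~45 GB per hour from the camera
editing_codec = gb_per_hour(500)  # ~225 GB per hour once transcoded
```

At anything like those rates, a few days of interviews plus B-roll chews through terabytes, and that’s before any backup copies.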
And that’s before reliable backups. Imagine you go out for several days and shoot lengthy interviews of dog rescue volunteers and the related B-roll footage of the dogs under their care. You fill up multiple SD cards and take all that footage back to the editing machine. You copy it across to the hard drive, and because you need the SD cards for the next shoot, you delete the files from the cards. The next day you’re happily editing away on all that interview footage, and the hard drive picks that very day to release the magic smoke and grind to a halt. Without a backup made at the same time you imported the footage, you just wasted three days of your time, hours and inconvenience for each of the interview subjects, any money it took to get to the locations, and far worse, you just lost the magic moments that happen during interviews, moments that will never happen again.
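The defense is a disciplined import routine: every clip goes to two drives, and both copies are verified by checksum before the card is ever wiped. A minimal Python sketch (the directory layout is a placeholder, not my actual setup):

```python
import hashlib
import shutil
from pathlib import Path

def sha256(path):
    """Checksum a file in 1 MB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def import_clip(card_file, primary_dir, backup_dir):
    """Copy one clip from an SD card to the editing drive AND a
    backup drive, refusing to finish unless both copies match the
    original byte for byte."""
    src = Path(card_file)
    want = sha256(src)
    copies = []
    for dest_dir in (primary_dir, backup_dir):
        dest = Path(dest_dir) / src.name
        shutil.copy2(src, dest)
        if sha256(dest) != want:
            raise IOError(f"checksum mismatch on {dest}")
        copies.append(dest)
    return copies
```

Only after both copies verify does the card get reformatted for the next shoot.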
Currently, I do have a backup drive just for footage, but at the rate I’m going, it will be full by the end of August.
Then there’s the camera itself. Thanks to support from some mighty kind folks, I have a camera that is almost perfect for capturing dogs in their element. Does that make it perfect for everything involved in this documentary? Unfortunately, no. The camera in question can record audio from the good microphone I mentioned earlier, but it cannot record the optimal audio for a solid interview. It is very good up to its limits, but those limits do not include high-fidelity color recording. Why does this matter? Let’s say you are sitting down for an interview and you are trusting me to make you look good. Recording a richer color space gives me a lot of room to adjust your facial tones, including some selective smoothing to minimize age. If I record at low-end camera specs, I have very little room to adjust. If I record with better camera encoding, I can do quite a lot to make all my interview subjects look great.
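To put a rough number on that adjustment room: the kind of better encoding I’m talking about usually comes down to bit depth per color channel. The math is tiny, but the difference is not (the bit depths here are generic examples, not a specific camera’s spec):

```python
def tones_per_channel(bit_depth):
    """Number of distinct values each color channel can record."""
    return 2 ** bit_depth

low_end = tones_per_channel(8)    # 256 steps per channel
better = tones_per_channel(10)    # 1024 steps, 4x finer gradation
```

Four times the tonal steps is the difference between skin tones that survive heavy adjustment and skin tones that fall apart into bands.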
Luckily dogs don’t care. With a modicum of love, they are all convinced they are perfect. This brings me to the other end of this camera point. The dogs.
If you spend as much time among dogs as I do, you understand that they operate at a higher speed than we do. They make decisions, change facial expressions, and interact with us faster than we do among ourselves. This demands the use of high frame rates. If I shoot a dog at the normal television 30 frames per second and slow it down, the software is forced to interpolate and artificially blur the action. If, on the other hand, I shoot at 60 or 120 frames per second, the software does not have to guess at what happened. The information is already there. At 60 fps, I can cut the speed in half. At 120 frames per second, I can cut it down to quarter speed and all the frames are available for the software to pick and choose. This means that I can slow the dog footage down to the point where we can see them at the speeds they actually play and interact. In a film about dogs, you can understand why this is important.
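The frame rate arithmetic is worth spelling out. On a 30 fps timeline, the slowdown you get for free, with every frame real and nothing interpolated, is just the shooting rate divided by the playback rate:

```python
def max_slowdown(shoot_fps, timeline_fps=30):
    """How far footage can be slowed with every frame real:
    shooting frame rate divided by timeline frame rate."""
    return shoot_fps / timeline_fps

half_speed = max_slowdown(60)      # 2.0x slower: half speed
quarter_speed = max_slowdown(120)  # 4.0x slower: quarter speed
```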
So, I take the output resolution of the whole project, I look at cameras that serve that resolution at high frame rates, I consider the quality requirements, and I end up with a certain camera.
I start with the amount of space I’ve already used for the footage I’ve gathered, extrapolate that into the plan for this film, and throw in some performance requirements to come up with a bundle of hard drive solutions. There are multiple approaches to the problem, but like everything else with this project, I aim squarely for the best cost/benefit solution.
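That extrapolation is back-of-the-envelope math. Every number below is a placeholder assumption to show the shape of the calculation, not my actual budget:

```python
def project_storage_tb(finished_minutes, shooting_ratio,
                       gb_per_hour, copies=2):
    """Rough storage budget in terabytes: planned finished runtime,
    times an assumed shooting ratio, at a given storage rate per
    hour of footage, times the number of copies kept (working +
    backup)."""
    raw_hours = finished_minutes * shooting_ratio / 60
    return raw_hours * gb_per_hour * copies / 1000

# e.g. a 90-minute film, 20:1 shooting ratio, 225 GB/hour, 2 copies:
budget = project_storage_tb(90, 20, 225)   # 13.5 TB
```

Change any one assumption, the shooting ratio especially, and the budget moves by whole drives at a time.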
Audio is a bit more complex. If I fail to gather enough support, then I aim for an add-on solution to the camera I have now. If I get enough support, I can build most of the audio solution into the camera I would use for the interviews. In a perfect world, I would do both for all the redundancy I can manage. I’m pretty serious about those magic moments. Missing one, or capturing one that I can’t use because of bad audio, would hurt – a lot.
All of this is just a glimpse into what it takes to make a quality film that involves a mass of footage and at least two species. (You never know when a cat will photo bomb the whole scene.)
There are two points remaining. One is that there are far more expensive ways to do it, and two is that the story is worthwhile. Please help me finish it.