Should I edit in H.264?

The average external hard drive is only just barely fast enough to play high-bitrate camera footage back. Rough guidelines exist for common data storage speeds, but there will always be certain models that underperform or overperform.

Shooting in log is a way of preserving as much of your dynamic range as possible. It lets you capture a scene that has bright highlights and dark shadows without blowing out the highlights or crushing the blacks.

Blown-out highlights are a particularly nasty side-effect of shooting on video instead of film. So shooting in log can help make your footage feel more cinematic.

The most common way to restore normal contrast and color to log footage is to apply a LUT. This means that your editor will need to apply the appropriate LUT to all of the clips when editing. This can be annoying to manage, and it can also slow down the computer a bit.

This is because the computer needs to first decode each frame and then apply the LUT before displaying it. And color matters during the edit: the color of two shots may influence how you intercut them.

One solution is to bake the LUT into the footage when you transcode. That way, the editor is always working with footage that has good contrast and color and never has to bother with LUTs. Note that you should only do this if you are using a Proxy workflow, not the Direct Intermediate workflow described below.

The main downside of transcoding your footage before editing is simply the time it takes to do the transcode. When I worked at Khan Academy, our founder would regularly record short video messages to send out to people, often on very tight schedules. Just a few cuts, maybe some music, a title, and I was done. Generally, I would do most of the transcoding overnight, often with multiple machines running at the same time. There are two common ways of working with intermediate codecs: a Proxy workflow and a Direct Intermediate workflow.

With a Proxy workflow, you are not trying to preserve maximum image quality in your editing files; you can optimize for editing speed and storage convenience instead. After the shoot, the raw files are backed up and put in storage. When choosing a proxy codec, you want to go for one that does not use temporal compression (aka inter-frame compression or long-GOP compression), and you want to pick one that has a lower bitrate. The good news is that most editing software today can switch between the camera files and the proxy files in just a couple of clicks, so you can even go back and forth if you need to.
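To make the proxy idea concrete, here is a minimal sketch of how you might batch-create ProRes Proxy files with ffmpeg, baking a LUT in at the same time. The folder names and the .cube file are hypothetical, and this assumes ffmpeg is installed; treat it as a starting point, not a definitive recipe:

```python
import subprocess
from pathlib import Path

SOURCE_DIR = Path("camera_originals")   # hypothetical folder of camera files
PROXY_DIR = Path("proxies")
LUT_FILE = "rec709_conversion.cube"     # hypothetical LUT; drop the -vf option if you have none

PROXY_DIR.mkdir(exist_ok=True)

for clip in SOURCE_DIR.glob("*.mp4"):
    out = PROXY_DIR / (clip.stem + ".mov")
    subprocess.run([
        "ffmpeg", "-i", str(clip),
        # Bake the LUT in so the editor never has to manage it.
        "-vf", f"lut3d={LUT_FILE}",
        # ProRes Proxy: intra-frame (no long-GOP), low bitrate -- good proxy traits.
        "-c:v", "prores_ks", "-profile:v", "0",
        "-c:a", "copy",                 # keep the original audio untouched
        str(out),
    ], check=True)
```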

ProRes and DNxHD are the standard choices, and everyone knows how to handle them. You may hear that one works better on Mac and the other on PC; that certainly used to be true, but nowadays both codecs work very smoothly in all modern editors, including Premiere Pro. The only significant difference between the two for a proxy workflow is the fact that you may have trouble creating ProRes on a PC, while DNxHD is very easy to create cross-platform. Regardless of which of the two codecs you pick, you also have to pick which flavor you want. Start off with the smallest ProRes or DNx flavor in the same resolution as your capture codec.

If you have lots of extra storage space, think about using the next largest flavor.

The second approach is the Direct Intermediate workflow. This means that you transcode your camera files into a codec that is both good for editing and very high-quality (not very lossy). The key to picking a good Direct Intermediate codec is to make sure that you are preserving all of the information from your capture codec. An intermediate codec will never make your images better (more detailed explanation below), but it can definitely make them worse if you choose the wrong codec.

The important thing is to understand the details of your original footage and make sure that your intermediate codec is at least as good as your capture codec in each area. You want an intermediate codec that matches at least the chroma subsampling and bit depth of your source; if your camera records 4:2:0 and 8-bit, your intermediate should be at least 4:2:0 and 8-bit. Going beyond these values won't hurt your image, but it won't add any quality either.
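If you are unsure what your capture codec actually records, you can check rather than guess. A small sketch using ffprobe (it ships with ffmpeg; the filename here is a placeholder):

```python
import json
import subprocess

# Ask ffprobe for the video stream's codec, pixel format, and bitrate.
result = subprocess.run([
    "ffprobe", "-v", "error",
    "-select_streams", "v:0",
    "-show_entries", "stream=codec_name,pix_fmt,bit_rate",
    "-of", "json",
    "clip.mp4",                      # hypothetical camera file
], capture_output=True, text=True, check=True)

stream = json.loads(result.stdout)["streams"][0]
print(stream["codec_name"])          # e.g. "h264"
print(stream["pix_fmt"])             # e.g. "yuv420p" = 4:2:0 8-bit; "yuv422p10le" = 4:2:2 10-bit
print(stream.get("bit_rate"))        # bits per second, when the container reports it
```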

We have four options to choose from that meet these requirements. You might think that all you need to do is match the camera bitrate, but you actually need to greatly exceed it. This is because h.264 is a far more efficient codec than ProRes: h.264 squeezes much more image quality out of each megabit. In order for ProRes to match the image quality of h.264, it needs a significantly higher bitrate.

ProRes 422 will probably do just fine, but if you have lots of storage space, then going up to ProRes 422 HQ will have a slight edge. Part of the reason why the Direct Intermediate workflow is common is that it used to be a lot harder to use a Proxy workflow. The main exception is when you have a lot of mixed footage types.

If you have multiple frame rates and frame sizes in the same project, switching back and forth from the proxies to the capture codecs can be a headache. If you are using some third-party tools to help prep and organize your footage before you start cutting, those can also make the relinking process more tricky.

One common example might be software that automatically syncs audio tracks or multicam shoots.

If you were to include the LUT in your transcode for a Direct Intermediate workflow, you would be losing all of the benefits of recording in log in the first place. This is very important, because it is very commonly misunderstood, and there is a lot of misinformation online: transcoding your footage before you edit will never increase the quality of the output.

There are some extra operations that you could do in the transcode process (such as using sophisticated up-res tools) that could increase the image quality in some cases, but a new codec by itself will never increase the quality of your image.

That includes going from h.264 to ProRes. It also includes going from 8-bit to 10-bit, and going from 4:2:0 to 4:2:2.

Consider a photo of a rose reflected in a water droplet. Now what if I take a photo of my monitor displaying that image with a Red Helium 8K camera? This is a beast of a camera. The Red camera has more megapixels, right? I have a file that is technically higher-resolution, but it does not capture any more of my subject (the rose) than the first one did.

You are making a copy of a copy, taking a photo of a photo. The big caveat is that, if you are doing any processing, any transformation of the image (adding a LUT, for instance), then you definitely do want to transcode into a higher-quality codec, which will retain the new information.

Camera files like these are not ideal for editing. The downside of cutting them natively is the fast storage you would need: peanuts for a big facility, but a significant investment for a solo editor. So you might decide to use a Proxy workflow instead and transcode your files to the ProRes Proxy 4K format. Then your footage takes up a fraction of the space, you can easily edit off of a single hard drive, and your workflow gets a lot simpler. For instructions on how to calculate bitrates and file sizes, check out this article: The Simple Formula to Calculate Video Bitrates.
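The formula in that linked article is simple enough to sketch here: file size is just bitrate multiplied by duration, converted from bits to bytes. The bitrates below are illustrative examples, not measurements from any particular camera:

```python
def file_size_gb(bitrate_mbps: float, duration_minutes: float) -> float:
    """Approximate file size: bitrate (Mbps) x duration, converted bits -> gigabytes."""
    megabits = bitrate_mbps * duration_minutes * 60
    return megabits / 8 / 1000          # 8 bits per byte, 1000 MB per GB

# e.g. one hour of 100 Mbps camera footage vs. a 45 Mbps proxy:
print(file_size_gb(100, 60))   # 45.0 GB
print(file_size_gb(45, 60))    # 20.25 GB
```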

You might decide to transcode the footage even further down to ProRes Proxy HD, which would shrink it enough that sending it over the Internet becomes feasible if you have a fast connection. When the edit is all done, you just relink your project back to the original camera files and export. The big question at this point is whether you want to color-correct straight on the original camera files, or whether you want to transcode again.

In order to make good decisions about color, you need the highest quality image that you have available, because you need to be able to see exactly what you have to work with.

Color-correcting straight on the camera files is certainly a simple option. If you did a proxy edit, you can relink to the camera files for the finishing process and go to town. This will give you maximum image quality, but remember how the camera files can be slow to work with?

Encoding takes care of the first step in production, which is organizing the audio and visual data associated with a video. That data still needs to be packaged for delivery. The most commonly used package, known as a container, is MP4, though it is not the only one.

MOV is another relatively common container for H.264 video. Depending on the software you are using, choosing a coding standard and video file format can range from a no-brainer to dizzyingly complex. This is because some software offers a wide array of different encoding standards and container formats, while other software simplifies choices to a few popular options.
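A quick way to see that the container and the codec are separate things: you can change the container without re-encoding the video at all. A minimal sketch, assuming ffmpeg is installed and using a hypothetical input file:

```python
import subprocess

# Rewrap H.264 video from an MP4 container into a MOV container.
# "-c copy" copies the compressed streams as-is: same codec, new package.
subprocess.run([
    "ffmpeg", "-i", "input.mp4",
    "-c", "copy",
    "output.mov",
], check=True)
```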

A term you may hear when discussing video encoding is codec. Codecs are the technology and programs used to encode or decode a digital data stream or signal (in this case, your video). Now work is being done to implement a new, more efficient standard, H.265 (also known as HEVC). Ultimately, the goal of H.265 is to deliver the same visual quality as H.264 at roughly half the bitrate. It is likely that H.265 will take some time to become widespread. Technology companies need to implement the codec in their software, and a big part of that is working out patent and licensing issues so that the technology can be packaged and sold to consumers.

It also takes more processing power to use this new encoding, and the machines in the hands of consumers have to catch up. But, for now, H.264 remains the standard.

To scratch the surface, ProRes files are lightly compressed, and your machine finds it easy to process them during both editing and playback.

This is not the case with H.264 files, which require more processing than ProRes. Developed by Apple Inc., ProRes uses the QuickTime MOV container and, as mentioned above, produces gigantic files that occupy a significant amount of disk space. However, ProRes files come with a plethora of benefits. With its first version approved and released in 2003, H.264 (technically written as H.264/MPEG-4 AVC) has become the most widely used video compression standard. However, there are certain downsides of the H.264 format that many professionals who use post-production tools like Final Cut Pro X and Adobe Premiere Pro run into.

Even though the most-used post-production applications allow H.264-to-ProRes transcoding using their built-in export features, sometimes the process takes a significant amount of time, or the settings box has numerous confusing options that newbies find hard to understand. Wondershare UniConverter (originally Wondershare Video Converter Ultimate) bridges this gap by providing one of the simplest user interfaces, with pre-configured presets that you can use to transcode H.264 (or any other video format) to ProRes without any hurdles or complications.

Launch the Mac version of Wondershare UniConverter on your macOS machine, confirm that the Converter tile is selected at the top, click the icon in the center, use the box that appears to go to the folder that contains the file you want to transcode from H.264 to ProRes, select the video, and click Load to import the clip into the app.


