A DCP (Digital Cinema Package) is the highest quality digital format for exhibiting a film theatrically, and the worldwide standard for digital cinema projection. Delivered on a specially formatted hard drive for compatibility with digital cinema servers, DCPs present each and every frame of your film with pristine clarity, matching — if not surpassing — the quality of 35mm prints.
We offer competitive pricing for the mastering and duplication of DCPs. Dvideo Productions can prepare DCP files for playback on a Digital Cinema Projector, now an industry standard at a lower cost than 35mm film projection. DCPs are available in 2K (Flat 1998 x 1080 or Scope 2048 x 858) and 4K formats. Working with the Greenwich Film Festival and the rest of our clients, we help deliver finished files for movie theater projection.
Wondering what video encoding really means?
What is Transcoding?
Video transcoding is the technical term for converting one digital video format into another. This covers container formats such as AVI, WMV, MOV, and FLV, as well as compression standards such as MPEG-1, MPEG-2, and MPEG-4.
You can have any of your videos transcoded into the MPEG-4 video format for upload and playback on Apple and Windows devices.
What is video encoding?
Video encoding is the process of compressing and potentially changing the format of video content, sometimes even converting an analog source to a digital one. With regard to compression, the goal is to take up less space. Most video compression is lossy: it throws away information from the original video. Upon decompression for playback, an approximation of the original is reconstructed.
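The lossy idea can be sketched in a few lines of Python. This is not how real codecs work; it only illustrates the principle that lossy encoding discards information, so decoding yields an approximation, not the original:

```python
# Toy illustration of lossy compression: quantize 8-bit pixel values by
# dropping their low bits, then reconstruct an approximation on "playback".
# Real codecs are far more sophisticated; this only shows the trade of
# information for space.

def quantize(pixels, bits_dropped=4):
    """Compress: keep only the high bits of each 0-255 pixel value."""
    return [p >> bits_dropped for p in pixels]

def reconstruct(quantized, bits_dropped=4):
    """Decompress: scale back up; the low bits are gone for good."""
    return [q << bits_dropped for q in quantized]

original = [12, 130, 131, 255, 64]
compressed = quantize(original)          # each value now fits in 4 bits
approximation = reconstruct(compressed)

print(compressed)      # [0, 8, 8, 15, 4]
print(approximation)   # [0, 128, 128, 240, 64] -- close, but not identical
```

Notice that 130 and 131 quantize to the same value, so the difference between them is lost forever; that is exactly the "approximation of the original" described above.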
Why is encoding important?
There are two main reasons why video encoding is important. The first, especially as it relates to streaming, is that it makes video practical to transmit over the Internet. Compression reduces the bandwidth required while still delivering a quality experience; without it, raw video would demand far more bandwidth than typical connections can provide. The key metric is the bit rate, the amount of data per second in the video. For streaming, the bit rate dictates whether viewers can watch smoothly or end up stuck buffering.
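Some quick arithmetic shows why raw video is impractical to stream. The calculation below assumes 8-bit 4:2:0 chroma subsampling (about 1.5 bytes per pixel on average), a common consumer-video baseline, and the 5 Mbps compressed figure is an illustrative 1080p streaming target, not a fixed standard:

```python
# Back-of-the-envelope bitrate math: raw vs. compressed 1080p30 video.
# Assumes 8-bit 4:2:0 chroma subsampling (~1.5 bytes per pixel).

def raw_bitrate_mbps(width, height, fps, bytes_per_pixel=1.5):
    """Bitrate of uncompressed video in megabits per second."""
    bytes_per_second = width * height * bytes_per_pixel * fps
    return bytes_per_second * 8 / 1_000_000

raw = raw_bitrate_mbps(1920, 1080, 30)
typical_h264 = 5.0  # Mbps, an illustrative 1080p streaming target

print(f"Raw 1080p30:   {raw:.0f} Mbps")    # ~746 Mbps
print(f"H.264 1080p30: {typical_h264:.0f} Mbps "
      f"(~{raw / typical_h264:.0f}x smaller)")
```

Roughly 746 Mbps uncompressed against a few Mbps compressed: that two-orders-of-magnitude gap is why encoding makes streaming possible on normal connections.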
The second reason for video encoding is compatibility. Sometimes content is already compressed to an adequate size but still needs to be re-encoded for compatibility (this is often, and more accurately, described as transcoding). Compatibility can mean meeting the encoding specifications a particular service or program requires, or simply ensuring that the widest possible range of audience devices can play the content back.
What are codecs?
Video codecs are video compression standards implemented in software or hardware. Each codec consists of an encoder, to compress the video, and a decoder, to recreate an approximation of the video for playback. The name codec comes from merging these two roles into a single word: COder and DECoder. Common video codecs include H.264, H.265 (HEVC), VP9, and AV1. Although these standards apply to the video stream, videos are usually bundled with an audio stream, which has its own compression standard. Examples of audio compression standards (often referred to as audio codecs) include MP3, AAC, and more.
These codecs should not be confused with the containers used to encapsulate everything. MKV (Matroska Video), MOV (short for MOVie), and AVI (Audio Video Interleave) are examples of container formats. Containers do not define how to encode and decode the video data; instead, they store the bytes a codec produces in a way that compatible applications can play back. Containers also hold more than video and audio: they carry metadata as well. This can be confusing, though, as some audio codecs share a name with a file container, such as FLAC.
What’s the best video codec?
For high-quality video streaming over the Internet, H.264 has become a common codec, estimated to make up the majority of multimedia traffic. The codec has a reputation for excellent quality, encoding speed and compression efficiency, although not as efficient as the later HEVC (High-Efficiency Video Coding, also known as H.265) compression standard. H.264 can also support 4K video streaming, which was pretty forward-thinking for a codec created in 2003.
What’s the best audio codec?
Like video, different audio codecs excel at different things. AAC (Advanced Audio Coding) and MP3 (MPEG-1 Audio Layer 3) are two lossy formats that are widely known among audio and video enthusiasts. Given that they are lossy, these formats, in essence, delete information related to the audio in order to compress the space required. The job of this compression is to strike the right balance, where a sufficient amount of space is saved without notably compromising the audio quality.
So what are the recommended codecs?
Favoring compatibility, H.264 and AAC are widely used. While neither is cutting edge, both can produce high-quality content with good compression applied. In addition, video content compressed with these codecs can reach large audiences, especially over mobile devices.
A common technique for compression is resizing, or reducing the resolution. This is because the higher the resolution of a video, the more information that is included in each frame. For example, a 1280×720 video has the potential for 921,600 pixels in each frame, assuming it’s an I-frame (more on this in a bit). In contrast, a 640×360 video has the potential for 230,400 pixels per frame.
So one method to reduce the amount of data is to “shrink” the image size and then resample. This produces fewer pixels, reducing the level of detail in the image in exchange for decreasing the amount of information needed.
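The “shrink and resample” step can be sketched as nearest-neighbor downsampling in pure Python. Real encoders use much better resampling filters; this toy version only shows how keeping every second pixel quarters the data:

```python
# Nearest-neighbor downscaling: keep every `factor`-th pixel in both
# dimensions. A 4x4 "frame" becomes 2x2 -- a quarter of the data, at
# the cost of detail.

def downscale(frame, factor):
    """Drop rows and columns, keeping one pixel per `factor` in each axis."""
    return [row[::factor] for row in frame[::factor]]

frame_4x4 = [
    [ 10,  20,  30,  40],
    [ 50,  60,  70,  80],
    [ 90, 100, 110, 120],
    [130, 140, 150, 160],
]

small = downscale(frame_4x4, 2)
print(small)  # [[10, 30], [90, 110]]
```

The 16 original values are represented by just 4, which is the same 4:1 saving as going from 1280×720 to 640×360 in the example above.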
This concept has become a cornerstone of Adaptive Bitrate Streaming: the practice of offering multiple quality levels for a video, with the levels commonly labeled by the resolutions created for each.
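A minimal sketch of the player side of adaptive bitrate streaming: given a ladder of quality levels, pick the best rung that fits the measured bandwidth. The ladder values below are illustrative, not taken from any particular streaming specification:

```python
# Sketch of adaptive-bitrate rendition selection. The player measures
# available bandwidth and chooses the highest-quality rung that fits.

LADDER = [          # (vertical resolution, bitrate in kbps), best first
    (1080, 5000),
    (720,  2800),
    (480,  1400),
    (360,   800),
]

def pick_rendition(bandwidth_kbps):
    """Return the best rung whose bitrate fits the available bandwidth."""
    for height, bitrate in LADDER:
        if bitrate <= bandwidth_kbps:
            return height, bitrate
    return LADDER[-1]  # too slow for every rung: fall back to the lowest

print(pick_rendition(3000))  # (720, 2800)
print(pick_rendition(500))   # (360, 800)
```

A real player re-measures bandwidth continuously and switches rungs mid-stream, which is why viewers sometimes see quality shift during playback.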
Interframe and video frames
One video compression technique that might not be widely known is interframe compression. This process removes “redundant” information from frame to frame. For example, a video with an FPS (frames per second) of 30 means that one second of video equals 30 frames, or still images; played together, they simulate motion. Chances are, though, that many elements will remain virtually the same across a run of consecutive frames (a run between keyframes is referred to as a GOP, or Group of Pictures). Interframe compression exploits this: it stores a full keyframe, then for the frames that follow stores only the data that has changed, rather than repeating data for elements that stayed the same.
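A toy version of this idea in Python: store the first frame whole (the I-frame), then for each later frame store only the pixels that changed. Real codecs use motion prediction rather than raw pixel diffs; this only illustrates how redundancy across frames collapses:

```python
# Toy interframe compression: one full keyframe, then per-frame diffs.
# A mostly static scene encodes to almost nothing after the first frame.

def delta_encode(frames):
    """Return the first frame plus {index: new_value} diffs for the rest."""
    encoded = [frames[0]]
    for prev, cur in zip(frames, frames[1:]):
        diff = {i: v for i, (p, v) in enumerate(zip(prev, cur)) if p != v}
        encoded.append(diff)
    return encoded

def delta_decode(encoded):
    """Rebuild every frame from the keyframe and the stored diffs."""
    frames = [list(encoded[0])]
    for diff in encoded[1:]:
        frame = list(frames[-1])
        for i, v in diff.items():
            frame[i] = v
        frames.append(frame)
    return frames

clip = [[1, 1, 1, 1], [1, 1, 2, 1], [1, 1, 2, 1]]  # 3 frames, 4 pixels each
encoded = delta_encode(clip)
print(encoded)  # [[1, 1, 1, 1], {2: 2}, {}] -- frame 3 costs nothing
assert delta_decode(encoded) == clip
```

Twelve pixel values shrink to four plus one change, and the unchanged third frame is stored as an empty diff, which is exactly the redundancy interframe compression targets.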
Now you know the basics of encoding and why it’s done: how content is compressed without overly impacting perceived quality.