Thursday, December 25, 2008

More General Knowledge...courtesy of Quantel

Quantel has an excellent glossary of DTV/HDTV terminology. I find myself searching for technical definitions often and this is one of my frequent stops...

Quantel Digital Factbook


TimK

Saturday, December 20, 2008

General Knowledge: Colorspace from Charles Poynton

Charles Poynton has a great note (formatted as a FAQ) concerning color space here.

Some really straightforward explanations of terms frequently misapplied...

TimK

Friday, December 19, 2008

Into the way back machine...

Taking a break from the future for a moment, let's look at some history.

An interesting look at early educational television...at Johns Hopkins.

The program is formatted for general viewers, but it's a good look back at what television was like before NLEs and tapeless workflow. (...well, technically I guess it was tapeless workflow...)

Hosted by John Astin.

...from the Research Channel. Enjoy.

Thursday, December 18, 2008

What IS a video camera anyway?

EOS 5D Mark II

So...it had to happen. Actually I suspect it is happening/has happened more than once...but here is a credible example of a television commercial for a real client (Ford Motor Co.) shot on a digital SLR.

Make your own judgements on the quality, but this will continue to reshape how we think about image acquisition...


TimK

Tuesday, December 16, 2008

The HP Dreamcolor Display, 30 bit color...

The HP Dreamcolor monitors have been creating a bit of buzz around the post production industry. These displays utilize an LED backlight to increase the color precision of the monitor to 30 bits (10 bits per channel, as opposed to the 8 bits per channel typical of most cold cathode fluorescent backlit LCD displays).

(See the blog post from December 11th, linking to Martin Euredjian's assessment of LED backlight deployment in LCD panels for another perspective.)

If DreamWorks finds the technology acceptable for their work, it would seem that it probably achieves its claims, but extensive testing is still happening, and, as with most new products these days, the monitor itself has had some software upgrades to improve performance...

Below, the Hard Forum review seems pretty thorough, though the reviewer mentions he doesn't have all the hard measurement tools he'd like to substantiate a few subjective analyses...but there's still considerable work put in and some off-brochure info... Dithering seems to be something of a concern.


The Hard Forum Review

An HP person chimes in on the Creative Cow thread below. Someone brings up the dithering noted in the Hard Forum review, and Dan Bennett from HP says he's seen it but that there's a firmware upgrade for the monitor...though the Hard Forum reviewer seems to show he already has the latest firmware...

The Creative Cow.net Apple Color Forum thread

HP's FAQ on the Dreamcolor monitor is here:

The HP Dreamcolor FAQ

For reference: PNY NVIDIA's two 30 bit display cards capable of driving the Dreamcolor:

QuadroFX 4800 (one dual-link DVI port)


QuadroFX 5800 (two dual-link DVI ports)

There are other display card vendors of course, and I've tried to scan the product lines of ATI and Matrox for 30 bit capable cards...while ATI appears to have some products with 30 bit support on PC only (24 bit on Mac), it's difficult to discern if Matrox has a product or not. I'm almost positive I've encountered a description of a Matrox product that supports 30 bit...I'm checking further.

(Update: After a few email exchanges with Matrox, it appears that where Matrox materials describe display cards with numbers larger than 24 bit, they are referring to 32 bit, which would be three 8 bit color channels plus an 8 bit alpha channel...not 30 bit color precision, which would be three 10 bit color channels.)


TimK

Sunday, December 14, 2008

Bayer Sensors...Patented 1976...but things have changed a bit...

If you spend any time working in television, feature, or high end spot production, you've probably heard at least one person express wonder or disdain, or perhaps complete faith or doubt, in the relative merit of a Bayer pattern sensor.

The original patent, granted on July 20, 1976, describes the sensor primarily as a Y'CC (color difference) sensor, with the green photosites shown here serving as Y' and the red and blue photosites as "C1" and "C2" (rather than naming a specific color difference structure).

(image is courtesy of Wikimedia)


However, the patent also proposes an RGB implementation, for which Bryce Bayer changed the sensor layout. Today, most well known Bayer implementations in the 'digital cinema' technology area (RED, Silicon Imaging, etc.) are RGB, with photosites distributed as illustrated in the first visual. The patent's original vision for a Bayer pattern RGB sensor, however, was actually what you see here:

You'll notice that blue is sampled far less... Apparently the arrangement was meant to mimic human visual acuity, which is of course most sensitive to green, less to red, and least to blue. The sensor's layout would have spatially sampled 50% green, 37.5% red, and 12.5% blue.
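For the curious, here's a minimal Python sketch of that ratio. The 4x4 tile is a hypothetical arrangement I've built to match the 50/37.5/12.5 proportions described above, not a layout lifted from the patent drawings:

from collections import Counter

# Hypothetical 4x4 tile matching the patent's stated proportions
tile = [
    ["G", "R", "G", "R"],
    ["R", "G", "B", "G"],
    ["G", "R", "G", "R"],
    ["R", "G", "B", "G"],
]

counts = Counter(site for row in tile for site in row)
total = sum(counts.values())
for color in "GRB":
    print(f"{color}: {counts[color] / total:.1%}")
# G: 50.0%, R: 37.5%, B: 12.5%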

It would've been curious to see how such a sensor performed...though it would've been maddening to develop a low pass filter for it...

(I don't know of any implementations of this highly asymmetrical system, but I don't work much with still imaging or highly specific industrial applications...if you know of one, please add it in the comments.)

Original Bayer sensor patent #3,971,065.



TimK

Thursday, December 11, 2008

LED Backlights, Our Salvation for Reference LCD Panels?

Many of us in the post production industry have been coping with the relatively limited options in reference monitors these days.

A recent glimmer of hope for many of us was the idea of an LED backlit LCD display as opposed to a cold cathode fluorescent, which would expand the color gamut of an LCD display to something we could more easily utilize for precision color decisions...

(Caveat: Yes, the links above are wikipedia...they are starting points only for those who are unfamiliar with the concepts, not the final indisputable words on these topics...wikipedia, like used cars, can be a great deal, but should always be shopped with caution...)

Martin Euredjian of eCinema Systems, makers of LCD panels for the post production industry, has some thoughts on this approach. As it turns out, eCinema had given this idea a shot some time ago. It may be time to look for a new technology to be the next step up for the LCD panel.

Martin has a whitepaper that he points out is, in his judgement, 'not a finished piece'. We'll have to call this a sneak peek.

Thanks to Martin for the insight and permission.


TimK

Tuesday, December 9, 2008

Quick Entry...Jan Ozer on Adobe CS4 and 64 bit systems

Anyone wondering just how much advantage Windows Vista or Apple Macintosh holds for Adobe CS4 users over Windows XP...may want to check out Jan Ozer's article at Digital Content Producer.com.

The results appear to be...significant.


TimK

Human Visual Acuity...Shoot it and they will...probably see it.

With all the carrying on about pixels in our digital imaging world, it's interesting that nobody has thought about upgrading the last step in the image viewing process...the human eye.

If you read up on the topic (or just Google it, I guess...), you will find that human visual acuity is often described in "cycles per degree". To any broadcast engineers out there, this sounds suspiciously like "line pairs per picture height"...and it's very similar. Humans have a given field of view...with each individual varying somewhat, of course. Military pilots in many western countries are required to have a 147 degree peripheral range...unscientifically, I can see dark objects out to about 150 and bright tennis balls, flashes of light and signs that say 'beer' out to about 180 (though my wife thinks I have full 360 for signs that say 'beer').

Within this field of view, humans can only discern a fixed amount of 'resolution' or cycles per degree (a 'cycle' being much like a line pair...one dark, one light, alternating...at the point they blur and the light/dark turn visually to grey, you've exceeded the resolving power of the visual system). The images we watch on a screen can only be perceived with all their inherent resolution if they fill enough of our field of view...enough 'degrees'...to account for the image resolution.

The data on human visual acuity paints a picture that is...well, not as sharp as many people think.

John Galt remarks in Segment 4 of Panavision's excellent presentation "Demystifying Camera Specs" that the visual frequencies they assess most closely to determine whether a 35mm film image is 'sharp' are "20-21 cycles per millimeter", which on a 24mm wide film frame (super 35mm stock, no sound track), he reminds us, comes out to about 1000 pixels across. More precise (higher) frequencies are visible of course, but to a human viewer these are the 'make or break' aspects of image sharpness in this visual system.
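The arithmetic behind that figure is quick to check in Python (remembering a cycle is two pixels, one dark and one light):

cycles_per_mm = 21        # Galt's "20-21 cycles per millimeter"
frame_width_mm = 24       # super 35mm frame width, no sound track
cycles = cycles_per_mm * frame_width_mm   # 504 cycles across the frame
pixels = 2 * cycles                       # 1008...roughly 1000 pixels
print(cycles, pixels)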

(To understand more about the principles of how different visual frequencies are captured and retained...watch the whole series of videos. This is, in my opinion, the clearest explanation of what resolution and perceived sharpness really are that I've ever encountered...I have a permanent link in the list of links to the right... For some info on how the eye responds to different color frequencies, I find this article at Olympus Microscopy to be helpful.)

As for the maximum 'cycles per degree' attainable by the human visual system, the research is...somewhat unclear. I have seen information quoting everything from 30 to as high as 60 cycles per degree...I did see that dogs apparently come in at about 12 cycles per degree (I've moved the dog bed in the living room closer to the television).

BobAtkins.com has an interesting article focusing primarily on still photography, but I thought the graphic and the study outlined actually have bearing on our discussion. You can see that regardless of the maximum visible resolution of the human visual system, our vision is at peak effectiveness at about 6 cycles per degree (moving my chair past the dog bed...).

So the interesting takeaway from much of this information is that our perception of 'sharpness' has less to do with peak resolution than we think. Someone with a 25" diagonal television image 10 feet away may actually have trouble discerning between 480p, 720p, and 1080p material, all else being equal (and you thought your cranky old neighbor was just too tight to buy an HDTV capable monitor). HDTV didn't catch on in Japan before the U.S. because everyone there has bigger television monitors...it's because they have smaller houses, so the screen fills more of the field of view.
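To put a number on that 25-inch example, here's a rough Python sketch of the geometry. The 16:9 screen shape and the figure of 60 pixels per degree (30 cycles per degree, the low end of the range above) are assumptions for illustration:

import math

diagonal_in = 25.0
distance_in = 120.0    # 10 feet
# width of a 16:9 screen, from its diagonal
width_in = diagonal_in * 16 / math.hypot(16, 9)
# angle the screen subtends at the eye
angle_deg = 2 * math.degrees(math.atan(width_in / 2 / distance_in))
resolvable_px = angle_deg * 60   # assumed 60 pixels per degree
print(f"{angle_deg:.1f} degrees, ~{resolvable_px:.0f} resolvable pixels across")
# -> about 10 degrees and ~600 pixels: even 720p's 1280 columns exceed it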

As we all obsess over the pixel count in the sensors, video file codecs, and monitors we use...let's keep in mind that the human eye has its own limitations, and the television monitor or projection screen isn't the final stage in the visual chain.

...and since most of us produce material for humans...



TimK

Monday, December 8, 2008

Color Subsample Notation


The numbers get tossed around with impunity these days. The first number is usually 4 and the closer the next two are to 4 the better…right? Well, while that statement is basically true, there’s a lot more to it than just that. The number is really a ratio. (click on the chart for a large visual)

“4” in the first slot is meant to represent a baseline of four pixels, and these ratios only apply to digital video signals. The physical arrangement of the four pixels in question isn’t really referred to in a standard way any longer, but originally it was supposed to represent 4 horizontal pixels. The second and third numbers are frequently, but erroneously, assumed to represent the relative sampling value for each of the color difference channels. Actually, the second number refers to the horizontal sampling frequency of both difference signals, and the third number was originally intended to indicate the vertical sampling frequency of both difference signals, though the system was developed without really considering vertical subsampling schemes like 4:2:0. In the current system, the third number is either the same as the second number, as in 4:2:2 and 4:1:1, indicating no vertical subsampling (all the vertical color difference samples are there in each column that has a horizontal color difference sample), or it is zero, indicating a 2:1 vertical subsample in addition to the horizontal color difference subsample.
4:4:4
A designation of 4:4:4 means that there is a discrete sample for each of the three color channels making up the signal for each pixel. While this could apply to either RGB or one of the color difference color spaces used for video, 4:4:4 is most often seen with an RGB signal. Even though 4:4:4 could refer to a Y'CbCr color sample, RGB does not subsample one color channel in relation to another, so 4:2:2 (or 4:1:1, etc...) would never refer to RGB.
4:2:2
This number is most prevalent in high-end video formats and refers to a discrete sample for Y’ on every pixel, with each color difference signal sampled once for every two pixels. While in theory this sounds like the elimination of a lot of information (a third, actually) compared to 4:4:4, the human eye prioritizes detail in the luma portion of the image, and most humans would be hard pressed to see the difference between a color Y’ CB CR image in 4:4:4 and one in 4:2:2. In fact, 4:2:2 is good enough that most video types designated as “uncompressed” are actually color sampled at 4:2:2.
4:1:1
Most users of NTSC DV are familiar with this color sampling scheme. For every four Y’ samples, there is only one sample each for CB and CR. This creates a 4x1 four pixel horizontal “block” with common color difference values, though each pixel has a discrete Y’ value so the pixels aren’t identical. While DV footage is used extensively, even in broadcasting, it can be a challenge for special effects and compositing, as chroma keying and green and blue screen work require a lot of subtle tonal variation to create smooth irregular edges. Canopus and Matrox each created custom DV decoding methods to better interpolate the four pixel horizontal spread for keying, and many software keyers have similar measures in place. It's interesting to note that even though 4:2:0 subsampling is thought by many to be somewhat inferior to 4:1:1, 4:2:0 (compression set aside from color subsampling for a moment) can actually be slightly easier to composite, as there is only one pixel of interpolated value in either the vertical or horizontal direction, while 4:1:1 interpolates 3 pixel values horizontally.
4:2:0
PAL DV users and anyone who outputs to MPEG has seen this number. The notation reads as if there were a Y’ sample for each pixel, a CB sample for every two pixels, and no CR samples whatsoever, which many understandably find confusing. In reality, there are the same number of color difference samples as NTSC DV, just with the pixels arranged differently. Also confusing: the color difference sample sites for the various approaches to 4:2:0 are not standard (see chart). JPEG/MPEG-1 structures the samples so that they’re sited in the center of the four pixel block. MPEG-2 sites the samples between pixels vertically, and PAL DV sites the difference samples on alternating lines. Even with the color difference samples sited differently for different applications of 4:2:0, there are still four pixel “blocks” that net out to the same number of color difference samples as 4:1:1...simply picture these blocks as square (2x2) instead of a horizontal line (4x1) like NTSC DV’s 4:1:1.
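If it helps to see the bookkeeping, below is a minimal numpy sketch (naive decimation for illustration only, not a production resampler) showing how these schemes thin out the chroma planes:

import numpy as np

h, w = 8, 8
y  = np.random.rand(h, w)   # one luma sample per pixel in every scheme
cb = np.random.rand(h, w)
cr = np.random.rand(h, w)

cb_422, cr_422 = cb[:, ::2], cr[:, ::2]      # half the chroma columns
cb_420, cr_420 = cb[::2, ::2], cr[::2, ::2]  # half in both directions
cb_411, cr_411 = cb[:, ::4], cr[:, ::4]      # a quarter of the columns

full = y.size + cb.size + cr.size            # the 4:4:4 sample count
print((y.size + cb_422.size + cr_422.size) / full)  # 0.667: a third gone
print((y.size + cb_420.size + cr_420.size) / full)  # 0.5
print((y.size + cb_411.size + cr_411.size) / full)  # 0.5: same total as 4:2:0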
4:2:2:4, 4:4:4:4
As if all this isn’t complicated enough…you could add a number. 4:2:2:4 or 4:4:4:4 refer to 4:2:2 or 4:4:4 color sampling with the addition of an alpha channel for keying purposes. The fourth channel would carry an 8 bit or 10 bit (depending on the image format) grayscale map indicating relative transparency of each pixel in the image. The alpha number is always the same as the Y' sample.
3:1:1
As if the strange way the second and third values fall in these ratios isn’t confusing enough…now we see a ratio where the first number has changed. This ratio appears when referring to HDCAM pictures, which on playback are 1920x1080 but actually record 1440x1080 to tape. In my opinion the most confusing aspect is not so much that there is a different baseline number, but whether that number is itself a proportion of “4”, since 1440/1920 is 3 of 4. I suspect the interpolation to 1920x1080 4:2:2 (this is how the manufacturer presents the specs on the playout picture) and the color difference subsampling ratio of 3:1:1 are separate issues, and their mathematical scale to full raster 1920x1080 is most likely coincidental. The 3 equates to 1440 horizontal Y’ samples, and the 1 is a ratio to that 3, designating 480 horizontal color difference samples. This notation is NOT on the chart, as it does not exist anywhere but in storage...the end user can only access HDCAM footage as 4:2:2 SDI output without a proprietary post solution.

TimK

Saturday, December 6, 2008

8 Bit vs 10 bit Color Precision

Often when 8 bit vs 10 bit color is brought up, many confuse the increase in precision with an increase in ultimate dynamic range. Moving from 8 bit to 10 bit increases the amount of data you save, but that data still exists inside the same color range you're using.

Every 'bit' has an 'on' and an 'off' state, if you will...the zeros and ones that make up data. One bit would be the equivalent of an ancient terminal CRT...each pixel is 'lit'...or not. Each bit we add gives us another 'on/off switch': multiply the 2 states of the first bit by the 2 states of the second bit and you have 4 possible states, or gray shades in our case...add another bit to make 3 bits and double that again, giving you eight states, or 'shades' if you're referring to imaging...

4 bit = 16 shades
5 bit = 32
6 bit = 64
7 bit = 128
8 bit = 256
9 bit = 512
10 bit = 1024
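The table is just powers of two, as a quick Python loop shows:

for bits in range(4, 11):
    print(f"{bits} bit = {2 ** bits} shades")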

In typical use, an RGB '8 bit file' refers to '8 bits per color channel'...frequently referred to as a 24 bit file with the channels combined, and adding an 8 bit alpha channel gives you a 32 bit file in the case of a Targa or similar file format.

The difference between 8 bits per channel and 10 bits per channel is significant. The possible values inside each channel quadruple...you have the same color range subdivided into increments one quarter the size. The subtlety of shades within the palette increases dramatically when all color channels are combined to create the complete palette.

256 x 256 x 256 = 16,777,216 total colors, 8 bit palette

1024 x 1024 x 1024 = 1,073,741,824 total colors, 10 bit palette

Next time you have a project that will require significant color correction, keep in mind how much finer control you'll have over gradients and subtle differences by moving to a 10 bit workflow. Even if your source material is 8 bit (which most video is, even in HD), using a 10 bit pipeline for post will improve your results.
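As a rough illustration (a toy Python example with numpy assumed, and a made-up gamma adjustment standing in for a grading operation), quantize a subtle gradient at each precision and count the shades that survive a correction:

import numpy as np

gradient = np.linspace(0.2, 0.3, 4096)   # a subtle 10% ramp

def quantize(signal, bits):
    levels = 2 ** bits - 1
    return np.round(signal * levels) / levels

for bits in (8, 10):
    corrected = quantize(gradient, bits) ** 0.8   # stand-in 'grade'
    shades = len(np.unique(quantize(corrected, bits)))
    print(f"{bits} bit: {shades} distinct shades across the ramp")
# roughly 26 steps at 8 bit vs ~100 at 10 bit: finer gradients,
# less visible banding after the correction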

TimK


As Promised... General Colorspace Information

We hear a lot about color space. RGB, Y'UV, Y'/R-Y'/B-Y', Y’CBCR…it ranges from logical to alphabet soup. In the end, the color in our images is made from three component signals combining to create a color image. What those three component signals are…well, that depends. (There are colorspace definitions beyond the ones I've listed here, of course. I just tried to include the colorspaces most media production pros will encounter...)

RGB
The easiest place to start might be RGB. It will be review for almost anyone with any computer background, as RGB logically stands for “Red/Green/Blue”. RGB color images are produced by combining three channels of image information, each containing the intensity information for one individual color: Red, Green, or Blue. This colorspace is typically used in computer displays and projectors. It’s used in the creation of the visual content developed for video games or the web. Conventional analog and digital video has traditionally not used RGB color space, causing many long-time professionals to discount the general accuracy of the video your NLE shows you on your computer desktop monitor. However, with the growing use of large LCD and Plasma displays, RGB is becoming at least one of the display methods you may want to (or be forced to…) employ through the post process.



HSL
“Hue, Saturation, Luminance” is found most often inside computer software for image manipulation or video color correction. It is fairly simple to picture as a color wheel where the color red is in one direction, moving around the wheel to yellow, green, cyan, blue, magenta, and back to red. While the color order would seem to directly mimic the vectorscope display, in HSL the individual colors are evenly spaced like numbers on a clock face, while a vectorscope shows a color difference signal where the positions of the primary and secondary legal colors are not equidistant.
However, the vectorscope model is indicative of how the hue and saturation portions of an HSL color wheel work. The “compass” direction from center indicates the hue, and the distance from center indicates saturation. Also similar is the way the HSL color wheel doesn’t display luminance changes; the brightness is adjusted via each color channel, somewhat like an RGB model.

CMYK
For those of you who have dealt with this color space strictly as an irritant when you receive your client’s logo art from their printer and forget to convert it before spending 10 minutes trying to figure out why it won't load into After Effects, we’ll just touch briefly on what it is. Printing ink is based on reflected light whereas the RGB color model is constructed to create an image from a light source.
RGB is based on the color we perceive when we look directly at a light source (spotlight, television screen, LCD monitor…etc.) and is referred to as “Additive Color.” The primary colors are Red, Green and Blue while the secondary colors are Cyan, Magenta, and Yellow. When all colors are combined in an additive color environment (all colors of light are 'on'), you get white, when all color is gone (lights off), you get black.
CMYK turns everything on its head, as it's based on “Subtractive Color.” This is color we perceive once light has bounced off of something. Usually the color we see is the only color not absorbed by the object. So the car isn’t really green: the surface of the car absorbs most of the spectrum of light that hits it but rejects green light, which bounces off and becomes visible to us. Since we are turning everything around, you can picture combining all the colors in a paint store (or as a kid, in your water color tray...or on the living room carpeting...) and getting something that is usually undesirably close to black. Removing all pigment from paint gives you…white. Just to cap off the confusion, the primary colors are Cyan, Magenta and Yellow, hence the “CMY”, with “K” indicating black ink. Of course, since all else is opposite, Red, Green and Blue are the secondary colors in subtractive color.
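Normalized to a 0-1 range, the additive/subtractive flip reduces to very simple arithmetic. This Python snippet is a deliberate simplification...real print conversion involves ink profiles and black generation, not one subtraction:

def rgb_to_cmy(r, g, b):
    # subtractive: each ink removes its complementary light
    return 1 - r, 1 - g, 1 - b

print(rgb_to_cmy(1, 1, 1))  # all light on (white) -> (0, 0, 0): no ink
print(rgb_to_cmy(0, 1, 0))  # green -> full cyan + yellow ink, which
                            # together reflect only green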

Color Difference Systems
I explained my rather unconventional way of visualizing Color Difference systems in general. (see the explanation and illustration from the post "Y's and Whats..." below). What follows is a description of the various common designations we all encounter in the field and their intended definition.

Y’UV
I’ve even spouted this designation off when I am referring to all color difference, non-RGB television color space, but it’s just laziness on my part as it really isn’t correct.
It does refer to color difference video color space, but it specifically refers to the color difference signals B’-Y’ and R’-Y’, after scaling, used in an intermediate step in the creation of an analog composite NTSC or PAL signal. “Y’UV” is frequently misused in referring to digital video, and it is particularly incorrect where it is paired with a subsampling designation such as “4:2:2”, as it has nothing to do with component or digital video.

Y’ PB PR
Y’ PB PR is the designation for the coding of analog component video. PB and PR correspond to the B’-Y’ and R’-Y’ signals after scaling to this standard. In the analog case, digital color sampling ratios like “4:2:2” don’t apply, but in this system the color difference components are lowpass filtered to approximately half the luma bandwidth.

Y’ CB CR
Digital component video changes the designation somewhat. Y’ CB CR references the scaling method used in digital video files on disk or tape. While the analog component color difference signals would be lowpass filtered to preserve bandwidth, their digital counterparts would instead be subsampled if efficiency was required, creating the basis for the typical ratios expressed with numbers like “4:2:2” or “4:2:0”. (More on that in later posts.)
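For the curious, here's a Python sketch of the Rec. 601 flavor of that scaling (normalized R'G'B' in, 8 bit studio-range codes out). The coefficients are the standard 601 luma weights, but treat this as an illustration rather than a reference implementation:

def rgb_to_ycbcr601(r, g, b):
    y  = 0.299 * r + 0.587 * g + 0.114 * b   # luma
    cb = (b - y) / 1.772                     # scaled B'-Y'
    cr = (r - y) / 1.402                     # scaled R'-Y'
    # studio-range 8 bit coding: 16-235 for Y', 16-240 for chroma
    return round(16 + 219 * y), round(128 + 224 * cb), round(128 + 224 * cr)

print(rgb_to_ycbcr601(1, 1, 1))  # white -> (235, 128, 128): chroma 'centered'
print(rgb_to_ycbcr601(1, 0, 0))  # red   -> (81, 90, 240)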

Y’ I Q
An obsolete system established in the early days of color television. What was intriguing about this system is that the normal vectorscope color orientation was tilted 33 degrees, and the “I” and “Q” axes (which correspond to the ‘U’ and ‘V’ components, though the axes are actually switched) can still be seen on some vectorscopes to this day in that configuration.
There's some stuff to keep your mind working over the weekend...
TimK

Friday, December 5, 2008

The Y's and Whats...Color Difference Concepts

Color came along after television had established itself as a black and white medium. Therefore, from the earliest days of color television, video color space was developed to carry the black and white picture and the color separately. This basic approach also made color television signals inherently backwards compatible with black and white televisions, as the older black and white models could simply ignore the color component of the signal and still show a serviceable picture.
Most users who look at the notations for color difference systems try to apply them to what they know about RGB. I’ve had users look at the “Y’/B-Y’/R-Y’” designation on their Betacam deck or their DVD player and ask if the green signal is buried in the “Y’” channel, because they see the “R” and the “B”, which obviously correspond with “Red” and “Blue”.
Well...not exactly.
The “Y’” represents the luma, or black and white, portion of the video signal. The “R-Y” and “B-Y” in the case above represent the two color difference signals. (Color difference systems do vary and there are several non-interchangeable notations, which I'll cover in another post.) In any case, the two color difference values can be demonstrated with one illustrated concept, as they are all applied the same way with the exception of the scaling factors.

On a vectorscope, the further from the center the display shows a given signal, the more saturated with color it is. The “compass” direction it extends out from the center determines its “hue”, or what “color” the color is. Upon looking a bit closer at that same vectorscope, we see that there are a vertical and a horizontal axis that run through the very center. These axes might be designated as R-Y’ and B-Y’, or PB and PR, or you may even see perpendicular axes tilted 33 degrees counter-clockwise from vertical and horizontal, labeled “I” and “Q”.
My sort of quirky take on this to help people visualize the concept is that one could think of these color difference signals as the axes of “latitude” and “longitude” creating the “coordinates” that tell the video system where to find the proper hue and saturation to create the designated color for a pixel that already has brightness or luma information supplied by the Y’ component.
So instead of attempting to picture the abstract concept of color difference signals as hue/saturation with no luma, it might be more useful to think of them as a set of map coordinates telling the system where to “retrieve” the proper color…and the vectorscope is the color "world."
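In code, the map coordinate idea looks like this (a toy Python illustration using Rec. 601 luma weights and unscaled difference signals):

import math

def vectorscope_position(r, g, b):
    y = 0.299 * r + 0.587 * g + 0.114 * b   # Rec. 601 luma
    x, v = b - y, r - y                     # the two 'coordinates'
    hue = math.degrees(math.atan2(v, x))    # compass direction from center
    sat = math.hypot(x, v)                  # distance from center
    return hue, sat

print(vectorscope_position(1, 0, 0))        # red: saturated, its own angle
print(vectorscope_position(0.5, 0.5, 0.5))  # gray: zero saturation, hue moot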
(As with all things technical about digital video, my personal reference bible on the subject is Charles Poynton's "Digital Video and HDTV Algorithms and Interfaces." I highly recommend it as a source of unbiased and thorough information.)

TimK

Thursday, December 4, 2008

Why the HPX-170 is NOT an HVX200a without a tape drive...

Anyone who knows me has probably witnessed me taking issue with Panasonic's sales hyperbole (when I'm not taking issue with Sony's sales hyperbole...or anyone else's) surrounding their line of DVCPROHD cameras.

This is not one of those times.

At 5,695 USD, the AG-HPX170 is the least expensive small camcorder with HDSDI out. This connector seems to be worth as much as the entire camcorder for most manufacturers. (Canon's step between their entry level HDV camcorder and the next model up, which adds primarily HDSDI, nearly doubles the price.) The HDSDI out allows the camcorder to be used as a 'baseband' device, capturing live and uncompressed, or feeding an external recorder such as Convergent Design's Flash XDR. This feature is not included on the HVX200a. It may be the single most important versatility feature on small camcorders as we go forward, allowing the largest variety of recording formats to be used downstream from the camera if the on-board recording mechanism is inappropriate for a given application.

At a full 2,000 USD less than Sony's PMW-EX1, the 170 is a worthwhile consideration for those who have a comfortable history with Panasonic's small camcorders. Even with the EX1's 1/2" CMOS sensors and their higher native resolution beckoning, one can realize value from the P2 cards (and experience) one already owns. This is not to mention Panasonic's clear edge in the professional HD production marketplace: "buy the small P2 camera and rent a Varicam 3700 or an HPX 3000 when you need it," using the same P2 media and the same post workflow (DVCPROHD, of course)...try that with a Sony Z7 and an F23 or F900...

Panasonic has taken some flak for the sensor in these cameras (some from yours truly, though I took more issue with the early marketing strategy of appearing to hide the spec rather than explaining it), which creates an HD image by super-sampling a sensor lattice that has fewer photosites than the output image has pixels. Anyone (myself included) who has actually used the HVX camcorder knows that the aesthetic of the images created is simply better than the specs give it a right to be. (Frankly, I find this to be the case with the vast majority of handheld HD camcorders in the pro marketplace, HDV included.)

When you add in the 170's on-board waveform and vectorscope monitoring (also not available on the HVX200a...or anything else in this price range, AFAIK) and a slightly wider lens...

...and when you examine how many HVX200 users have really utilized their tape drive in the time they've owned the camera...you realize that the 170 isn't the 200's little sibling at all...it's the next generation.

For small, handheld HD cameras, the HPX170 will prove to be a pretty good value to those who find it fits with their needs.

TimK

Wednesday, December 3, 2008

Panavision's must see videos on camera specifications

I know this isn't new info for many of you, but the Panavision guys have done the most complete job of interpreting camera and acquired-image specifications to date, in my opinion.

The video is in 7 parts...download them and watch them...

(The point at which the Bayer sensor configuration is compared to the Genesis can seem like it's making a case for a particular workflow, but overall, the info is incredibly solid and nearly unavailable anywhere else as far as I can tell.)


Demystifying Digital Camera Specifications

Stu Maschwitz has a take on the new RED lineup

Too Much is Not Enough

As always, Stu keeps his eye on the prize...producing more data is great, but are more pixels the same as more information?

One thing is for sure, Jim Jannard is changing everything...as he's promised.

TimK

Now Open for Business


(thump, thump) "Is this thing on?..."

So, I've finally done it: I've started my own blog dealing with issues of digital video production and the pieces of it that I find interesting.

I'll be posting stuff as I get up to speed here...and I'll warn anyone who pops in that I do try to be objective, but the tools I use are the tools I know best and occasionally that sort of thing will be evident.

I hope my personal thoughts are useful to someone out there as I start my own personal, ongoing braindump into the great cyber noise chamber... Stay tuned, and thanks for stopping.

TimK