Thursday, November 26, 2009

December 5th, New York, RED Workflow

The RED Digital Cinema camera has few ambivalent observers. Many in the production industry either embrace it or dismiss it with equal intensity. As with many things, the truth lies somewhere in between, depending on your circumstances.

The post production workflow has become a critical factor in evaluating the merits of any of the new generation of Digital Cinema cameras.

Studentfilmmakers.com is hosting a RED Post Production Seminar on December 5th in New York, featuring William Donaruma of Notre Dame, Leigh Herman from Sony, and yours truly handling the Adobe version of the post workflow.

If you're in town, stop by...

TimK

Saturday, November 21, 2009

Animating a Human Face from TED

One of my favorite destinations on the web is TED.com. It's a site that hosts a wide variety of presentations and lectures, and often the only common thread running between them is that they're all fascinating.

Paul Debevec talks about animating a synthetic human face.


TimK

Monday, November 16, 2009

POV Cameras...GoPro HD on the way!

As my last few posts have indicated, I've been struggling with finding the right tool for a POV (point-of-view) mountable camera for several applications including my motorsports work.

The GoPro cameras I own have almost everything...the size...the mounting hardware...the functionality. What my GoPro cameras don't have is HD. In fact, they're a bit below full SD.

I've explored using some other small cameras (blog post, October 6, 2009), but awkward body designs and narrow lens choices have been a challenge to overcome.

We've been making the standard GoPros work (blog post, August 12, 2009), but even with the really interesting images the cameras are capable of, for video production purposes ten years into the 21st century, the resolution is a disadvantage.

GoPro has an HD product now listed on their website...though it looks as if it isn't shipping yet.

I know I'll be bookmarking it and ordering when it's available.

(I am not given any promotional consideration by GoPro, Creative Labs, or any other manufacturer associated with this post.)


TimK

Tuesday, October 6, 2009

Vado HD...from Creative Labs

Creative Labs makes some interesting stuff, most of it aimed squarely at the consumer technology market.

Below, I focused on the GoPro POV cameras that I use for my motorsports work. They're great cameras with well-made mounting hardware.

The problem is that I work primarily in high definition, and the GoPro's video size of 512x384 is hard to utilize without really aggressive upscaling...which inevitably showcases the obvious low resolution of the pictures. GoPro is working on an HD version, but no word yet on when it will be available.
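
For the numbers behind that "really aggressive upscaling," here's a quick back-of-the-envelope sketch in Python, using the frame sizes above:

src_w, src_h = 512, 384    # GoPro video frame
dst_w, dst_h = 1280, 720   # 720p target

print(round(dst_w / src_w, 2))   # 2.5x blow-up horizontally
print(round(dst_h / src_h, 2))   # 1.88x vertically
print(round(src_w / src_h, 2), round(dst_w / dst_h, 2))   # 1.33 vs 1.78 aspect ratio

Not only is it a 2.5x blow-up, the 4:3 source doesn't even match the 16:9 target, so cropping gets added to the insult.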

After some additional searching, I decided to examine some of the "Flip"-style camcorders that are out there, as many of them are 720p capable. I stumbled upon a unit from Creative Labs called the Vado HD.

The camera has an on-board USB connection for offloading, and it charges from the USB port as well...not as convenient as the over-the-counter lithium AAAs and SD cards that go into the GoPro...but I gain the 1280x720p frame size.



Then there's the form factor: the Vado is shaped like a cell phone with an on-board camera, and it comes with no super-useful mounting hardware. In fact, since it's really meant to replace the family camcorder, it doesn't even come with mildly useful mounting hardware... It comes with a somewhat curious latex "boot" that I suppose is a very useful barrier against liquids and perspiration getting into the camera, though the viewscreen, lens and USB pigtail remain uncovered and available...and, I suppose, technically still vulnerable.

The unit is oriented vertically and has a threaded mounting hole off to one side. I'm working on fabricating a reasonable way to mount this in a car without the oblong weight of the thing causing strange stresses and vibrations. It may very well end up mounted upside down.

The GoPro, by contrast, comes with a waterproof shell that I've verified myself...underwater.

The Vado, of course, creates aggressively compressed video. The device has 8 GB of storage, and for most casual use, as well as for the POV use I anticipate, I suspect it will be fine. I've included some frame grabs in the post for you to examine (click on the images for full size).



The PC utility that comes with the device seems to work well...for those who want to head directly to YouTube...you can do that right in the utility.




I've noticed with the GoPro that the Motion JPEG AVI files seem to agree with Sony's Vegas Video without much problem, but my most frequent tool, Adobe Premiere Pro, can't seem to play them properly under any circumstances I can create. I haven't completed my tests on a post path for the Vado H.264 material, but I'll be on that in the very near future.
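
One workaround I may experiment with (a sketch only, untested in my pipeline, with filenames and codec chosen purely for illustration): decode the Motion JPEG AVI and re-encode it to something Premiere is happier with, using OpenCV's Python bindings:

import cv2

cap = cv2.VideoCapture("gopro_clip.avi")   # hypothetical input file
fps = cap.get(cv2.CAP_PROP_FPS)
w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))

fourcc = cv2.VideoWriter_fourcc(*"XVID")   # pick any codec your NLE likes
out = cv2.VideoWriter("gopro_clip_xvid.avi", fourcc, fps, (w, h))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    out.write(frame)   # decode the MJPEG frame, re-encode on write

cap.release()
out.release()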

...and of course I'll need to figure out how the heck to mount it.

TimK

Creative Labs Website

GoPro Camera Website

Wednesday, August 12, 2009

Fun doesn't have to be HD

I've been out a while, I know...I've been on vacation. My family and I vacation near Munising, Michigan, where we camp on the southern shore of Lake Superior. One fringe benefit of the area is the numerous shipwrecks nearby, many of which lie in water shallow enough to simply view over the side of the boat.

This year, I thought it might be fun to take my GoPro POV cameras and see what they can do underwater. I have two units with 170 degree view lenses, and these little cameras come with an underwater housing included. Since I'm not a scuba diver, I decided to simply mount the cameras on an ordinary paint roller handle extension, one shooting video and the other shooting stills.

While the video is relatively low resolution by today's standards (512x384, about 0.2 megapixels), the still images are far larger (2592x1944, just over 5 megapixels, more than 25 times the pixel count of a video frame).





These little cameras are a lot of fun, and they were a great way for all of us to get a better view of some of the more interesting sites, sites we could otherwise have shown the family only through a dive or a snorkel in some extremely cold water...


At $200 USD, these devices are the least expensive fun that I know of...







These cameras record to a standard SD card and run on two AAA batteries (lithium works best as you might guess). An underwater housing is included and depending on which package you buy, many mounting methods are available.

GoPro website









Tuesday, July 14, 2009

Cut!...It's my wife calling.

OK...just when you thought you'd seen it all with hand cage/matte box rigs for DSLRs (Red Rock's really elaborate rig is pictured), you realize you ain't seen nothin' yet.

We seem to be moving in the same opposing directions as the HD-television-buying, yet YouTube-obsessed, consumer... We like video to be of great quality...or really convenient. And we're willing to completely compromise one for the other.

The people at Zacuto in Chicago have seen an opportunity and seized it. Zacuto has been known for very solid camera accessory rigs for some time now; they're a rental company that also does its own hardware development.

The new iPhone has video capability. What it doesn't have...is a handle. So, here it is: the ZGrip iPhone Pro. Zacuto has made a fairly informative video that shows the capability of the grip system in use here.

You can mount the iPhone on a tripod, or on a long accessory rod to gain elevation... it looks as if you can create a 30-pound accessory kit for your 135-gram smartphone in no time.

I'll be holding out for the teensy swing-away matte box and French flag myself.

Once an iCinematographer has his or her rods, tripod, jib, audio recorder and other accessories assembled, the iPhone may be the only cell phone that is actually too bulky to carry on a commercial flight...

Don't forget to set the ringer to 'silent' before you call "Action!"

TimK

Monday, July 6, 2009

More on DisplayPort...Part 2 of 2

Part 1 of this topic is here.

While the case for moving forward from VGA and DVI is a fairly obvious one for many of us, the logic of why we need DisplayPort in a world where HDMI (High-Definition Multimedia Interface) has taken hold may be more subtle.

HDMI has become the standard for high definition television displays and, by extension, the devices that connect to them. Its ability to support multiple audio channels, nearly any video or computer display format and, as of version 1.4, an option for 100 Mbit/s Ethernet connectivity would seem to make HDMI a clear contender as the universal display connection for all entertainment and data display applications.
So what is gained from adding DisplayPort to the landscape?

In the list of advantages over DVI, both HDMI and DisplayPort can carry audio and each has the ability to use RGB or Y’CbCr colorspace (VGA and DVI are RGB only).

There are several reasons why DisplayPort may be better in certain circumstances…and in cases where there are several reasons for anything, one of them is often ‘money.’

In our scenario, the cost factor referenced most by manufacturers is HDMI’s licensing fees. The cost of licensing HDMI in the PC display space is apparently not as practical as it is for the consumer television market. DisplayPort is a royalty-free standard defined by VESA (the Video Electronics Standards Association).

Another application that makes DisplayPort technology attractive is “chip-to-chip” interface for use inside a device that has an integrated display (think laptops and smartphones, currently using low voltage differential signaling or LVDS), as well as a “box-to-box” for connecting external displays. This creates interesting opportunities down the road for external displays to become lighter and thinner (and less expensive) by jettisoning the considerable electronics dedicated to scaling and other “receive signal and deploy pixels” sort of duties inside the display and making the display “direct drive.” Manufacturers can also cut costs by standardizing on one method of driving integrated and external displays. HDMI is designed as a “box-to-box” connection only.

As our requirements for computer display performance continue to expand, ideally our next connectivity standard would be able to grow as well. HDMI has a lot of advantages over DVI, but one limitation the two share is an external clock. This limits the ultimate speed and bandwidth of the pipeline to the predetermined maximum rates already set in the architecture. In a case like this, the standard has to be revised to extend the capabilities of the protocol, as when HDMI 1.3 increased the clock speed to 340 MHz over the 165 MHz of HDMI 1.2 to enable support of WQXGA displays (most typically the 2560x1600 of 30” LCDs). DisplayPort embeds its clock in the data signal itself, making it scalable, along with data payloads, to the physical limits of the pipeline.
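
A rough sanity check on that WQXGA example (the ~10% blanking overhead is my assumption, roughly matching reduced-blanking timings):

active_pixels = 2560 * 1600     # WQXGA
refresh_hz = 60
blanking_overhead = 1.10        # assumed ~10% for reduced blanking

pixel_clock_mhz = active_pixels * refresh_hz * blanking_overhead / 1e6
print(round(pixel_clock_mhz))   # ~270 MHz: beyond 165 MHz, within 340 MHz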

So…we know some of the advantages of DisplayPort…what are the limitations?

First, HDMI is backwards compatible with DVI, and you can drive an HDMI display with a DVI output. DisplayPort can be adapted and converted to HDMI or DVI, but of course the signal has to be compatible with the destination. In other words, a Y’CbCr signal could be sent through an adapter from DisplayPort to HDMI, but DVI can only handle RGB.

Second, HDMI supports xvYCC or “extended-gamut YCC,” whereas DisplayPort does not. xvYCC extends the color gamut by putting code values outside the conventional legal video range to work: under BT.601 and BT.709, legal values in an 8-bit channel are confined to 16-235, while xvYCC makes use of the full 0-255 range.
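
To make the range concrete, here's the standard mapping from full-range 8-bit values into that 16-235 legal video range (a minimal sketch):

def full_to_video_range(v):
    """Map 0-255 full range into the 16-235 legal video range."""
    return 16 + round(v * 219 / 255)

for v in (0, 128, 255):
    print(v, "->", full_to_video_range(v))   # 0 -> 16, 128 -> 126, 255 -> 235

xvYCC's trick, in essence, is to treat the codes outside 16-235 as meaningful, wider-gamut colors rather than illegal values.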

Third, HDMI supports Dolby TrueHD and DTS-HD Master Audio, which is one reason why HDMI is very entrenched in consumer products. For computer displays used in post production environments, support of these formats is far less an issue.

In the real world of motion visual post production (much of which can no longer be described as “film,” and some of which is even awkward to designate as “video”…), both standards have some foothold.

The HP DreamColor display has been causing many of us in the image-handling world to reconsider the configuration of our systems to be able to monitor Deep Color…in this case, 30 bit color precision (10 bits per channel, effectively a palette of 1 billion colors). The DreamColor will connect to DisplayPort or HDMI 1.3 outputs…along with DVI-Integrated. Of course, DVI will only work with 24 bit RGB signals, but it’s a clear sign that DVI’s epitaph isn’t quite written yet. (The DreamColor also has S-video and composite video inputs…a bit of a trailer hitch on a Ferrari in my mind.)
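
The palette math, for the curious:

print(f"{2**24:,}")   # 16,777,216 colors at 24-bit (8 bits/channel)
print(f"{2**30:,}")   # 1,073,741,824 colors at 30-bit (10 bits/channel)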

Several manufacturers have released HDMI in/out cards for use as ingest/output devices for video editors taking in material from an HDMI-enabled camcorder, and several manufacturers have added HDMI capability to their computer display cards.

AJA Video Systems recently came out with their “LHi” line of video cards, which not only features all the traditional video industry standard interfaces such as HDSDI and analog component video, but now includes HDMI in/out.

DisplayPort has been adopted in varying degrees by many other manufacturers, and has seen a commitment as the next-generation display solution in NVIDIA’s line of professional display cards and Apple Computer’s laptops, as well as a fair number of their consumer desktops.

As for myself, I do color correction work as well as conventional post production and editing, and I see Deep Color devices and workflows as a way to gain precision in my work. HDMI will likely be a very neat and clean way to drive a television display to view output in that environment, but I look forward to the sort of technical and economic advancements that DisplayPort will enable for those of us who need a standard that will stabilize yet remain extensible.

…and who among us wouldn’t love to add just one more cable type to the rack in the closet?

www.displayport.org
www.hdmi.org

TimK

Saturday, June 27, 2009

Why DisplayPort? Part 1 of 2


If you have been involved with computers for any amount of time, you may very well be reaching your point of maximum tolerance for the sheer number of connection and slot types available. As the pace of advancement in computers themselves has accelerated, the evolution of the methods we use to interface with these systems has accelerated in kind.

Display interfaces may not have been changing with quite the speed of hard drives and other peripherals, but for those of us who use computers in video post production, it’s become a chore to determine how to construct a signal path and monitor combination that can be considered “evaluation” quality as CRTs fade into the sunset.


When VGA (Video Graphics Array) was introduced in 1987, it was certainly a step up from its predecessor, EGA, which could display 16 colors chosen from a palette of 64 possible shades, itself an advance over its predecessor, CGA.
Of course, then we moved to Super VGA, XGA, WQVGA, WXGA, WSXGA, WUXGA, WQXGA…and on and on…ever-increasing pixel counts and color precision.



Of course, as displays changed, the requirements for how we fed them a signal changed as well. Flat panel displays had discrete pixels instead of a CRT’s analog scanlines. Enter the Digital Visual Interface, or DVI, connector in 1999.



Of course, the great thing about the standard DVI connector is that there are five different variants. They are each just different enough to make many technicians misidentify them about 50% of the time, but to make them incompatible about 75% of the time.
A DVI-I connector (“I” for “Integrated”) can carry analog as well as digital information, for backwards compatibility with analog displays. A DVI-D connector is “Digital” only. The fact that there even IS a DVI-A (“Analog only”) connector defies logic, since the reason for DVI’s development was to overcome previous analog display cabling limitations. There is an M1-DA connector that integrates USB with digital and/or analog signals, and of course, the DVI-DL (“Dual-Link”) is what is necessary to run those spiffy 30” LCD displays at full native resolution, thanks to its additional payload capacity.

All this has been expanding the capabilities of our computer displays quite rapidly over the last decade, but for video production professionals, 24 bits per pixel has started to become a bit limiting (DVI Dual-Link does have the capability to convey 48 bits/pixel in specific applications).

As displays have shown up with 30 bit color precision (see more about HP's DreamColor here), a new pipeline was needed. HDMI can handle the color, but as a data display driver, it didn’t quite have the necessary flexibility.





TimK


Thursday, June 11, 2009

One of my favorite explanations of color balance...

The fine folks at Cambridge in Colour have a very helpful explanation of color balance (among other things).

The concepts behind light and color are key to the work we do with images, both in the field and in post. Check out the article, which includes several solid visuals...
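
If you want to play with the idea in code, here's a minimal "gray world" white balance sketch in Python, my own toy illustration rather than anything from the article:

import numpy as np

def gray_world_balance(img):
    """img: HxWx3 float array in [0, 1]. Scale channels so their averages match."""
    means = img.reshape(-1, 3).mean(axis=0)   # average R, G, B
    gains = means.mean() / means              # push each channel toward neutral gray
    return np.clip(img * gains, 0.0, 1.0)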

TimK

Tuesday, May 5, 2009

A Primer on 3d Modeling...for those of us who don't do it for a living.

(Sorry for the gap in posts. I've been traveling for an extended period.)


Over the years, I've dabbled a bit in 3d animation but I am a neophyte relative to anyone who does it on a regular basis.

That's why Jeff Brown from FireMist Media is in my Rolodex. When I need something in the way of complex 3d animation, he gets a call from me.

Jeff has put together a white paper on 3d modeling to give those of us on the outside some insight into the general concepts of 3d animation. For those who purchase this sort of work from vendors, the article will certainly help cut through much of the jargon involved in 3d work.

Download the PDF from the link here.

TimK

Friday, April 3, 2009

4 cores? 8 cores? How about 240 processor cores?

NVIDIA has been making some pretty heavy-duty display cards for professionals for a number of years now. I use two NVIDIA Quadro dual-head cards to drive my four-monitor post production workstation. The acceleration provided for visual effects preview is extremely helpful in getting more work done in less time.
With all the focus on bigger and badder CPUs in our workstations, one of the more intriguing advancements in computer muscle is happening somewhat quietly. That would be the advent of parallel processing over a much larger group of processor cores.
While I'm absolutely positive we'll continue to see advancements in CPU power, one CPU core can process only one operation at a time...at an incredible speed, of course. When you add more physical processors, you gain processing power, but your limitation becomes how well the math can be sectioned up between the processors, plus the energy expended to figure out how to divide the operations up and, on the back side, reassemble the results into one unified dataset.
When you have multiple logical cores on one physical wafer, you now have the ability to do multiple operations using each logical core, limited by the pipeline that gets the operations on and off the chip as well as the efficiency of the code to divide and reassemble data. Multiple physical processors would involve multiple "pipes" to get data on and off each processor, gaining some extra torque over an equal configuration utilizing the same number of logical cores on one physical processor.
For operations like 3d animation or complex visual effects, where the data that needs to be streamed onto the processor is small in relation to the processing necessary, multiple physical or logical cores are of immediate benefit. In video editing applications, adding processor cores can be helpful where large amounts of decode and encode operations are necessary, say when editing highly compressed HDV or AVCHD footage. In applications where the material is less compressed, or even uncompressed, multiple processor cores become less of an advantage: the dataset that needs to be moved becomes larger but requires less processing, moving the speed burden to hard drives and bus speed.
NVIDIA has recently started focusing on their CUDA technology. CUDA gives software manufacturers a way to tap into the processing architecture of NVIDIA's powerful graphics cards to complete processes that may or may not be display or graphics related. NVIDIA uses parallel processing to get the speed from their configuration. While the cores may be smaller, there are a LOT more of them. The Quadro FX 5800 card, for instance, has 240 processor cores. One example of this kind of processing in use is the CUDA-enabled RapiHD™ H.264 encoding plug-in for NVIDIA Quadro cards.
(Wikipedia's take on parallel processing-good general info.)
A way to picture the relative capability of an 8 core CPU and a 240 core GPU might be to picture five decks of cards being dealt out. With eight dealers, each dealer has 32.5 cards to distribute...with 240 dealers, each one distributes 1.083 cards. Even when we take into account that the 240 processor cores are smaller, the share of the load each one carries is MUCH smaller, and the processing is all happening at the same time. The 8 dealers may be very fast, but they can't throw 32 cards out at the same time and expect them to fall neatly in front of each player in the proper rotation...they have to go one at a time. They may be dealing cards out of a pitching machine at a velocity that could sever a human limb, but the cards still have to be handled one at a time, serially. In the case of the 240 dealers in parallel, they also handle the cards one at a time, with the one caveat that all but 20 of them are only handling one card with one destination. Which way do you think would be faster?
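
Here's the same divide-and-reassemble idea in miniature (CPU processes standing in for GPU cores, and the per-frame "math" faked purely for illustration):

from multiprocessing import Pool

def process_frame(n):
    return n * n   # stand-in for real per-frame work (color transform, encode...)

if __name__ == "__main__":
    with Pool(processes=8) as pool:                    # 8 dealers
        results = pool.map(process_frame, range(240))  # 240 independent jobs
    print(results[:5])   # [0, 1, 4, 9, 16]

The pool divides the jobs, the workers run simultaneously, and map reassembles the results into one ordered dataset...exactly the overhead described above.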
I think that GPU-based processing is one of the most interesting areas of computer processing to keep an eye on... With GPUs becoming available to handle instructions alongside ever more powerful CPUs, I don't think the exponential growth in computing speed and power will be leveling off anytime soon. This technology is even being deployed as the primary processor in specialized workstations...learn more about Tesla here.

TimK

Monday, March 23, 2009

Sasquatch...The Loch Ness Monster...and Magenta

I came across this article on biotele.com (a site that calls itself the "neurostimulation portal" which seems appropriate, looking at the home page design) that explains why...spectrally...magenta does not exist.

It's an interesting article for those of you who are interested in color theory and how the human brain perceives color.

TimK

Wednesday, February 25, 2009

Slumdog Millionaire wins "Best Film"...without much film at all.

Slumdog Millionaire is truly one of those inspirational filmmaker stories. We never get tired of an underdog and a low budget film about a poor person in India is about as un-commercial as you can get in the feature film industry.

The recognition for the film (to date) includes:

2008 Camerimage Film Festival "Golden Frog" Award for best-in-class visual aesthetic and technical values

National Board of Review of Motion Pictures "Best Film for 2008"

American Society of Cinematographers nomination of Anthony Dod Mantle, BSC, for its Outstanding Achievement Award for Slumdog Millionaire

A total of four 2009 Golden Globe Awards






Nominated for ten Academy Awards

Received eight Academy Awards, including Best Picture, Directing, Cinematography and "Film" Editing.


In addition to being amazing, it's also a "film" that was acquired, in the majority, digitally (depending on who you're quoting, and whether you refer to raw footage or finished screen time, the figure sits around 60-70%).

There has been a LOT of buzz around some Digital Cinema Cameras, but this one has been a bit under the radar.

The Silicon Imaging 2K and 2K Mini are now officially cameras of legitimate capability... Actually, they aren't really two cameras. The 2K Mini is the sensor head alone; it works over an Ethernet link to a fast laptop, which constitutes the 'processing' portion of the camera body, complete with viewing LUTs and an onboard version of Iridas SpeedGrade embedded for viewing picture adjustments. The full SI-2K takes the Mini and inserts it into a full camera body with all the processing and storage on board, including the embedded version of SpeedGrade...no laptop required. And if you have the full SI-2K, you already have the Mini: just detach it from the full body, connect it back via Ethernet, and you have a dash cam or POV camera head tethered to the full body.

The camera uses a Super 16mm sized Bayer sensor (for more on Bayer, see my blog post on the Bayer patent from 12-14-08).

The workflow-favored format for the camera to write to is CineForm RAW, which is a high quality, compressed RAW motion picture format that has several post workflow options for editing directly inside an NLE without 'flattening' the image.

(I should probably point out that I was involved a bit with this camera's testing prior to, and shortly after, its release, as well as with CineForm products, so I am probably not without my biases...)

Congratulations to everyone involved with Slumdog Millionaire, and congratulations to Silicon Imaging and CineForm.

Links for more info:

Slumdog Millionaire at IMDb

Silicon Imaging


CineForm

Saturday, February 14, 2009

So what qualifies as 'High Definition' Video?

One basic criterion often referenced for high definition video is having at least four times the resolution of standard television (double the pixels horizontally and vertically). However...

A significant source of confusion for production professionals migrating into HD delivery is that the accepted picture sizes for digital television distribution vary and not all are “high definition” by any modern standard we could apply. In the United States, the ATSC or Advanced Television Systems Committee has issued “Table 3” of picture formats for digital television, specifying 640x480, 704x480, 1280x720, and 1920x1080 as digital video display resolutions. Both the 640x480 and the 704x480 formats have interlaced and progressive variations at multiple frame rates, with the progressive scan versions often referred to as “extended” or “enhanced definition,” the spoken shorthand being “EDTV.” (HighDef.org has the chart, as well as a simplified version here.)

The two accepted frame sizes for HD, 1920x1080 (interlaced or progressive) and 1280x720 (progressive), are 16:9 aspect ratio picture formats.

1280x720 is not 4x as many pixels as standard definition video (it even falls short of 3x), but it is used as a progressively scanned image. Most professionals would agree that progressive scanning does allow for the perception of very high detail, with the tradeoff that it sacrifices some of the smooth motion of interlaced scanning at an identical frame rate.

Advocates for the 1280x720 progressive format point out that while 1920x1080 images may be larger in size, when the 1080 frame is interlaced the picture delivers only half of the picture content every 60th of a second, while “720p” can refresh the entire image every 60th of a second. In this scenario then, 720p would deliver only slightly fewer pixels over time than “1080i,” and it would be temporally “sharper.”
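
The pixel arithmetic behind both claims, for anyone who wants to check my math:

sd = 720 * 480                # a typical SD frame
p720 = 1280 * 720             # one 720p frame
field_1080i = 1920 * 540      # one 1080i field (half the frame)

print(round(p720 / sd, 2))        # 2.67: short of 3x SD, let alone 4x
print(f"{field_1080i * 60:,}")    # 62,208,000 pixels/sec for 1080i at 60 fields
print(f"{p720 * 60:,}")           # 55,296,000 pixels/sec for 720p at 60 frames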

The comparison and debate over spatial resolution (frame size) and temporal resolution (frame rate), and whether one trumps the other, didn’t begin here and it won’t end here either. However, it's probably significant to note that one large stock footage library has ceased placing new footage acquired at anything less than 1080...and the trend for an increasing number of broadcast and cable networks is to simply require a 1080i master, with the stipulation that footage may be acquired as 720p (not mastered) with prior clearance... Many local stations will undoubtedly drag their feet on upgrading local origination to HD resolutions in the present economy, but mastering your content in HD maximizes shelf life.

Each professional needs to take a hard look at what sort of projects they do and what kind of delivery requirements are involved. It does look as if mainstream program distribution is definitely pointing toward 1080.

Saturday, February 7, 2009

John Galt of Panavision chats a bit in the pasture...


John Galt, Panavision's Senior Vice President of Advanced Digital Imaging (and I'm happy to say, a professional acquaintance of mine) has taken some time to chat with CreativeCow.net on pixel counts, dynamic range and a really interesting new technology that Panavision has in the queue called 'DynaMax' (Think HDR in motion).

Some of this discussion does come back to the point of what the human eye actually sees... The December 9th, 2008 entry here on the blog has some additional thoughts... see "Shoot it and they will...probably see it"

As always, I refer those of you with questions to watch Panavision's excellent videos on camera sensors, image sharpness and modulation transfer function, which are permanently linked in the right margin.

TimK

Tuesday, January 27, 2009

AAF Explanations from...the AMW Association

I apologize for the break in postings...I've been on assignment.

As we all approach the world of universal project and media structure...more or less...it's always good to review why in the world we are doing this to ourselves, and to at least remember that the whole idea was ultimately to make everything easier.






The Advanced Media Workflow Association is the body that keeps the AAF and MXF standards, which manufacturers typically adopt, but not completely or in a standard way...which of course leaves the main role of these standards (transportability between applications and environments) absent in many circumstances.

We all know, intellectually, that uniform adoption of open standards will benefit the entire production community, so we'll hold out hope that some day manufacturers may actually agree and implement accordingly...

In the meantime, the AMWA has an excellent AAF explanation document available for those of you interested in a better understanding of what AAF is...and isn't.

Download the PDF here.


TimK

Friday, January 9, 2009

Want to hear about the REAL beginning of the Internet?


Vint Cerf (pictured) was around way back when the idea of what we now know as the internet was first being visualized and concept-tested. I know it isn't directly related to imaging, but this is an interesting presentation by a really engaging speaker on a topic that affects all of us.

Trust me, it's a great tech war story, you'll like it. It runs an hour and you can download it and watch it when you have some wait time.

...on one of my favorite sites, The Research Channel.

TimK

Tuesday, January 6, 2009

General Knowledge: Quick Visual Guide to RAID Levels

JetStor has (in my opinion) one of the most easily understood visual guides to RAID Levels and how they work.







The selection buttons are a little small, but they're just above the illustration.
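
And for a taste of how the parity-based levels do their magic, here's a tiny RAID 5-style XOR sketch in Python (made-up data, three "drives" plus parity):

data_blocks = [b"AAAA", b"BBBB", b"CCCC"]   # blocks striped across three drives
parity = bytes(a ^ b ^ c for a, b, c in zip(*data_blocks))

# Drive 2 dies; rebuild its block from the survivors plus parity:
rebuilt = bytes(a ^ c ^ p for a, c, p in zip(data_blocks[0], data_blocks[2], parity))
assert rebuilt == data_blocks[1]   # b"BBBB" recovered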

Enjoy.

TimK

Saturday, January 3, 2009

Dalsa announces a 48 megapixel image sensor.

Dalsa has introduced an image sensor that is 48mm x 36mm and has a photosite lattice of 8000 x 6000. The CCD is a Bayer pattern in the modern RGB configuration. (See the post "Bayer Sensors...Patented in 1976...")
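
If you're curious what a Bayer sensor actually records, here's a minimal numpy sketch of mosaicing an RGB image down to one color sample per photosite (RGGB layout assumed for illustration):

import numpy as np

def bayer_mosaic(rgb):
    """rgb: HxWx3 array -> HxW mosaic with one color sample per photosite."""
    h, w, _ = rgb.shape
    mosaic = np.zeros((h, w), dtype=rgb.dtype)
    mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]   # red sites
    mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]   # green sites
    mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]   # green sites
    mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]   # blue sites
    return mosaic

Demosaicing is the reverse trip: interpolating the two missing color values at every site, which is part of why photosite size and noise matter so much.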

The technical paperwork is here. The claim is that even though each photosite is reduced in size by 30% from the previous generation sensor, the signal-to-noise ratio improved by the same amount.

Dalsa also has an excellent tech paper on how image sensors work here.

Luckily our data storage expands as voraciously as our pixel counts...

TimK

Friday, January 2, 2009

More excellent reference material: CCD vs CMOS image sensors

I'm dusting off all my favorite reference material and sharing it with you this Holiday Season.

The fine people of Dalsa have some of the most detailed CCD vs CMOS sensor comparisons I've seen. Make sure to also check out the two links on the right side that take you to additional papers on the topic in PDF form.

More news from Dalsa tomorrow...a small hint...pixel count.

Enjoy.

TimK