Saturday 7 January 2012

Computational Photography

Courtesy: The Economist

The principle of focusing light rays through an aperture onto a two-dimensional surface has remained the same from the camera obscura of the 5th century BC to the latest digital camera.

Computational photography, a subdiscipline of computer graphics, conjures up images rather than simply capturing them.

The best-known example of computational photography is high-dynamic-range (HDR) imaging, which combines multiple photos shot in rapid succession at different exposures into a single picture of superior quality. Apple added HDR as an option in the iPhone 4.
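To see the idea concretely, here is a minimal sketch of one simple exposure-fusion scheme (the weighting rule and the toy pixel values are my own illustrative assumptions, not Apple's actual HDR algorithm): each pixel is weighted by how close it is to mid-gray, so well-exposed detail dominates and blown-out or crushed pixels are suppressed.

```python
import numpy as np

def fuse_exposures(exposures):
    """Blend several differently exposed images of the same scene.

    Each pixel is weighted by its distance from mid-gray (0.5), so
    well-exposed detail dominates -- a simplified exposure-fusion
    scheme, not any particular camera's HDR pipeline.
    """
    stack = np.stack([np.asarray(e, dtype=float) for e in exposures])
    # Weight: 1.0 at mid-gray, falling toward 0 at pure black/white.
    weights = 1.0 - np.abs(stack - 0.5) * 2.0
    weights = np.clip(weights, 1e-6, None)  # avoid divide-by-zero
    return (weights * stack).sum(axis=0) / weights.sum(axis=0)

# Three "shots" of the same 2x2 scene at different exposures (0..1 range).
under = np.array([[0.05, 0.10], [0.02, 0.40]])
mid   = np.array([[0.30, 0.55], [0.10, 0.90]])
over  = np.array([[0.70, 0.98], [0.45, 1.00]])

fused = fuse_exposures([under, mid, over])
```

Because the result is a positively weighted average, every fused pixel lies between the darkest and brightest of its source pixels, while favouring the best-exposed one.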

The underlying light-field idea was developed by Marc Levoy of Stanford University with his colleague Pat Hanrahan in 1996, in a paper that described a way of simplifying the light field mathematically. It is now becoming a reality because "you are getting more computing power per pixel."

Dr Levoy developed SynthCam and the Frankencamera, which can improve photos taken in low-light conditions. On June 22nd Ren Ng, a former student of his at Stanford, launched a new company called Lytro, which promises an affordable snapshot camera.

His approach is known as light-field photography, and Lytro's camera will be its first commercial application. In physics, a light field describes the direction of all the idealised light rays passing through an area. In Dr Ng's camera the light field is captured using an array of microlenses inserted between the ordinary camera lens and the image sensor. Each microlens fixes where a ray enters, and the pixel it illuminates behind that lens fixes its angle, so the precise direction of every recorded ray can be reconstructed.
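The geometry behind that reconstruction is simple: a ray's direction is the vector from the microlens centre to the sensor pixel it strikes. A minimal sketch, where the coordinates and the 0.5 mm lens-to-sensor gap are made-up illustrative values:

```python
import math

def ray_direction(lens_xy, pixel_xy, gap):
    """Unit direction of the ray from a microlens centre to the
    sensor pixel behind it, for a microlens-to-sensor separation
    `gap` (all quantities in the same units)."""
    dx = pixel_xy[0] - lens_xy[0]
    dy = pixel_xy[1] - lens_xy[1]
    dz = gap
    norm = math.sqrt(dx * dx + dy * dy + dz * dz)
    return (dx / norm, dy / norm, dz / norm)

# A ray through the microlens at (0, 0) that lands on a pixel
# 0.01 mm to the right, with a 0.5 mm lens-to-sensor gap:
d = ray_direction((0.0, 0.0), (0.01, 0.0), 0.5)
```

With position and direction known for every ray, software can re-trace them to any chosen focal plane after the shot is taken, which is what makes refocus-later photography possible.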

This ray-tracing concept is also borrowed from computer graphics, where the technique is used, among other things, to paint realistic reflections of one artificial object onto another.
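The core operation behind such reflections is mirroring a ray about a surface normal, r = d - 2(d.n)n. A minimal self-contained sketch (the example vectors are illustrative):

```python
def reflect(d, n):
    """Reflect an incoming direction d about a unit surface normal n:
    r = d - 2 (d . n) n."""
    dot = sum(di * ni for di, ni in zip(d, n))
    return tuple(di - 2.0 * dot * ni for di, ni in zip(d, n))

# A ray travelling straight down onto a horizontal surface
# bounces straight back up:
r = reflect((0.0, -1.0, 0.0), (0.0, 1.0, 0.0))
# r == (0.0, 1.0, 0.0)
```

A ray tracer applies this formula at every mirror-like surface a ray hits, then follows the reflected ray to see what it illuminates next.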

For more details, go to: http://www.economist.com/node/21522976