Gcam is a Google X project that was originally created to get good photos out of the very small camera lens that had to fit into Google Glass. It was later used as the camera software for the Google Pixel. However, its evolution doesn't end there.

Now, Gcam may be moving toward a new horizon in camera technology, one where dedicated camera hardware is not needed at all. Gcam relies on a different approach to photography: it looks for solutions in software to problems that exist in the real, physical world.

Gcam, the camera to end cameras

There are other solutions that use software techniques to improve image results, but this is the only one that works like a real camera rather than a filter. A camera works by gathering light particles, photons, through a lens. A sensor captures that light and converts it to digital form.

This information is then sent to a processor to be processed, stored and/or displayed. Simple enough. With Gcam, however, the concept is different: instead of telling the device what we want to photograph, why not let the machine learn about its environment?

What if the device knew where it was pointed? If it had a library of pictures that it could manipulate to resemble what the user wants to see, would it still need its eyes, the camera lens, at all? These are the questions Google researchers are trying to answer. The concept may seem futuristic and far-fetched, but it is a possibility.
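In a more modest form, the software-first idea is already shipping: Gcam's HDR+ mode compensates for a physically small phone lens not with more glass but by merging a burst of frames in software, which averages away sensor noise. The sketch below is purely illustrative, not Gcam's actual code, and it skips the frame-alignment step a real pipeline would need; it only shows why merging several noisy shots beats taking one.

```python
import numpy as np

def merge_burst(frames):
    """Average a burst of noisy exposures into one cleaner image.

    Real pipelines such as Gcam's HDR+ also align the frames before
    merging; that step is omitted here to keep the idea visible.
    """
    stack = np.stack([np.asarray(f, dtype=np.float64) for f in frames])
    return stack.mean(axis=0)

# Simulate a dim scene seen through a tiny lens: a faint signal plus
# per-shot sensor noise (the numbers are made up for illustration).
rng = np.random.default_rng(0)
scene = np.full((4, 4), 40.0)
burst = [scene + rng.normal(0.0, 10.0, scene.shape) for _ in range(8)]

print("noise in one frame:   ", round(float(np.std(burst[0] - scene)), 1))
print("noise after merging 8:", round(float(np.std(merge_burst(burst) - scene)), 1))
# Averaging 8 frames cuts the noise by roughly sqrt(8), i.e. about 2.8x.
```

The same trick extends to dynamic range and low light: take many quick, underexposed shots and let software reconstruct the photo a larger camera would have captured.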

Machine learning and photos are not new to each other. Google already uses such techniques in its Google Photos service, and neural networks have been used to show how computers can learn from images and merge them into photorealistic, authentic-looking results.
