What we can learn from the history of tech in photography [Episode 4]
Digital photos, smartphones, photo manipulation, AI… These are just some of the tech breakthroughs that have turned the world of photography on its head. Meero stands at the intersection between technology and photography. We often reflect on the history of these tech revolutions and how they were first perceived. As tech enthusiasts, we see these innovations as great opportunities. But we also know that big changes often come with their share of fears. Our teams wonder about the impact that these changes can have on professionals and the general public alike and strive to understand them from a historical perspective.
In this series of four episodes, we’re revisiting the big revolutions that changed the history of photography… and the many controversies they brought about.
Computational photography, the photography of the future?
What’s the difference between a camera and a computer? The question might seem obvious. It's not.
Photography is usually associated with the world of optics. A lens is used to capture light. The light leaves a trace on a photosensitive surface, creating an image that is more or less consistent with reality. It’s the basis of the process that enabled Frenchman Louis Daguerre to market the first prototype of a consumer camera almost 200 years ago.
But these two centuries that separate us from the very beginnings of photography have seen the tremendous rise of computer science, from the work of Alan Turing in the 1930s to the beginnings of artificial intelligence. Computer science has revolutionized many common human practices, and photography is no exception.
In this series of articles, we've talked about the rapid collision of digital and photography. Since then, computers have continued to play an increasingly important role in our photographic processes: the arrival of the Internet, for instance, has made it possible to share our pictures instantly.
In recent years, some experts have been saying that the future of photography lies in code. Why?
When smartphone manufacturers face an impossible challenge
On October 4th, 2017, Google introduced its smartphone, the Pixel 2, a device created with an emphasis on artificial intelligence: machine learning algorithms are used to optimize all its functions. But its most revolutionary advances are focused on—you guessed it—the camera.
The self-proclaimed "best smartphone camera" is a jewel of computational photography. By taking multiple shots of the same scene, the Pixel 2 is able to produce better pictures and, more importantly, to understand the depth and composition of a scene. This is a massive shift from the optical process that made the Daguerreotype such a success!
The Pixel 2 allowed Google to briefly take the upper hand in the fierce battle between smartphone manufacturers. Ever since smartphones conquered the amateur photography market, the competition has been intense. How can manufacturers integrate ever more powerful cameras into ever thinner and sleeker phones? A thin smartphone body simply can’t accommodate the optical components necessary to compete with the best SLRs.
Necessity is the mother of invention. Smartphones may be limited by the laws of physics, but their algorithms have plenty of tricks up their sleeves. Apple, Google, Samsung, Huawei, and others quickly grasped that the battle is now being fought on the ground of computational photography.
A science with a long history
But what exactly is computational photography? Let's turn to Marc Levoy, a Stanford professor who pioneered the discipline and also participated in the design of Google's Pixel camera, for a definition:
“Computational imaging techniques that enhance or extend the capabilities of digital photography [in which the] output is an ordinary photograph, but one that could not have been taken by a traditional camera.”
Computers have been involved in photography since the dawn of the digital era: for half a century now, digital processing has been taking over from pure optics to perfect the rendering of the image. Digital cameras did not wait for machine learning to put their computing power to work recreating images faithful to reality.
Image manipulation, too, is as old as photography. For the past thirty years, professional and amateur photographers alike have been able to turn to editing software to adjust their photos: saturation, contrast, and exposure can all be optimized through digital means. In 2011, Instagram introduced its own take on image manipulation, offering filters capable of masking the poor quality of certain photos in one click. And it didn’t take long before these filters were integrated into smartphone cameras themselves.
The boom in computational photography that we have seen over the last decade is simply a continuation of an old movement, but with one key difference: nowadays, retouching is done automatically and in real time by machine learning algorithms.
A flood of photos
How does a reflex camera create an image? The shutter opens and closes at a speed the photographer chooses, and the sensor captures a single frame.
That's not really how it works on the smartphone side. Smartphone cameras capture images continuously from the moment the app is launched. When the user presses the shutter button, the app draws on the frames captured in the seconds just prior. This stream of images provides the context of the photo: valuable information used to optimize the rendering.
This method, called stacking, is what makes HDR and HDR+ photos possible. It reproduces, in real time, a shooting technique popular with expert photographers: bracketing. This is one of the pillars of computational photography.
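The core idea behind stacking can be sketched in a few lines: averaging several aligned frames of the same scene cancels out random sensor noise while preserving the signal. This is a deliberately minimal illustration in plain Python; real pipelines such as HDR+ also align the frames and merge them in raw space, which this sketch ignores.

```python
# Minimal illustration of frame stacking: averaging a burst of
# perfectly aligned grayscale frames reduces random sensor noise.
# (Real HDR+ pipelines also align frames and merge in raw space.)

def stack_frames(frames):
    """Average a list of equally sized grayscale frames (lists of rows)."""
    n = len(frames)
    height, width = len(frames[0]), len(frames[0][0])
    return [
        [sum(f[y][x] for f in frames) / n for x in range(width)]
        for y in range(height)
    ]

# A tiny 1x3 "scene" captured three times, with noise around the true values.
burst = [
    [[100, 150, 200]],
    [[104, 146, 203]],
    [[96, 154, 197]],
]
print(stack_frames(burst))  # → [[100.0, 150.0, 200.0]]
```

Averaging three noisy readings recovers the true pixel values here; with real sensor noise, the more frames you stack, the closer the average gets to the true scene.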
Have you ever wondered why your cell phone has multiple sensors? By capturing the same scene from slightly different angles, the device gains more information about the composition and depth of the image. Combine this with information gathered at slightly different times, and the smartphone has a wealth of data that allows it to recompose better images and edit them intelligently. Machine learning algorithms, after all, only grow more powerful with more data.
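The geometry at work here is classic stereo vision: a nearby subject shifts more between two side-by-side lenses than a faraway one, and that shift (the disparity) converts directly into depth. A minimal sketch of the pinhole-stereo relation, with hypothetical numbers for a dual-camera phone:

```python
def depth_from_disparity(focal_px, baseline_mm, disparity_px):
    """Classic pinhole stereo relation: depth = focal_length * baseline / disparity.
    Returns depth in the same unit as the baseline."""
    return focal_px * baseline_mm / disparity_px

# Hypothetical values: a 2800 px focal length, 10 mm between the two
# lenses, and a subject that shifts 14 px between the two views.
print(depth_from_disparity(2800, 10, 14))  # → 2000.0 mm, i.e. 2 m away
```

Note how a larger disparity yields a smaller depth: objects that jump the most between the two views are the closest, which is exactly the cue a phone uses to build its depth map.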
One of the most surprising applications is Bokeh mode, available on some smartphones. Smartphones lack the optical components needed to produce this soft background blur physically. Their algorithms, however, can recreate it from scratch: they identify and isolate the main subject of the photo, then blur everything around it.
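A toy version of that simulation: given a subject mask (which a real phone would get from a segmentation model or a depth map), keep the masked pixels sharp and blur the rest. This 1D grayscale sketch uses a simple box blur for clarity; actual Bokeh modes use depth-dependent, lens-shaped blur kernels.

```python
def simulate_bokeh(row, mask, radius=1):
    """Blur only the pixels outside the subject mask (1D grayscale sketch)."""
    n = len(row)
    out = []
    for i in range(n):
        if mask[i]:               # subject pixel: keep it sharp
            out.append(row[i])
        else:                     # background pixel: apply a local box blur
            lo, hi = max(0, i - radius), min(n, i + radius + 1)
            window = row[lo:hi]
            out.append(sum(window) / len(window))
    return out

row  = [10, 10, 200, 200, 10, 10]   # a bright subject in the middle
mask = [0, 0, 1, 1, 0, 0]           # 1 = subject (hypothetical segmentation output)
print(simulate_bokeh(row, mask))    # subject pixels stay at 200; background softens
```

The subject stays pin-sharp while the background is smoothed, which is the whole trick: the "bokeh" never passed through a wide-aperture lens, it was computed after the fact.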
Cameras increasingly resemble powerful computers capable of assisting the photographic process. This is an additional tool that will undoubtedly lead to great advances. But like most of the innovations we've covered in this series, it will still leave room for art, human expertise, and each photographer's own inventiveness.