What is photography, really?

We take dozens of photos a day: to record the good moments, to recall details we might forget, to compare our outfits, to choose the color of a new sofa. Today, taking a picture is a commonplace gesture if ever there was one.

It's true, we don't think twice before waving our smartphone cameras about. In two centuries of existence, photography has established itself as a reliable and simple way to capture reality. Its main advantage: it shows things as they are. But with the recent advances in retouching algorithms, it's hard to believe that things are still that simple.

There has been concern about retouched photos for the past two decades, but in recent years things seem to be accelerating. High-performance apps can transform your selfies in seconds. And that's not all: realistic photos can now be generated entirely artificially. In photographic terms, there's a 'before' and an 'after.' Before, there were reliable photos, created by photographers who wanted to infuse them with a little bit of their humanity. After, there are fake photos, doctored by machines...

It’s tempting to think that way... but if you know how a camera really works, you soon realize that things are far from being that simple. What if cameras, even the most basic ones, were always a bit deceitful?

The fabulous journey of light particles

Let's take a look at this familiar process, the one that has let you keep memories of all your birthdays or immortalize your cat's exploits. When you think about it, it's impressive: how can you capture a given scene exactly as it was? Well, you can't, not quite. What the sensor of your digital camera does is transform light into an electronic signal.

Let's clarify a bit: a camera lens is an assembly of convergent and divergent lenses, each with a particular role. In a reflex (or DSLR) camera, light enters the lens, passes through this assembly, and is reflected off a mirror. An image appears in the viewfinder. When you press the shutter button, the mirror retracts, allowing the light to reach the sensor (which is why you see nothing in the viewfinder while the picture is being taken). This is how the image forms on the photosensitive surface of the sensor.

The sensor is 'photosensitive' because it transforms light particles (photons) into electric charges (electrons). It can do this thanks to electronic components called photosites: each photosite measures the quantity of light that hits it. In practical terms, each photosite corresponds to a pixel of the photo. When the information collected by all the photosites is assembled, the image can be reconstructed. What started as optical information becomes digital information. Simple, right?
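To make the idea concrete, here is a minimal sketch (in Python, with made-up numbers, not any manufacturer's actual pipeline) of what conceptually happens at each photosite: photons become electrons, and electrons become one digital value per pixel.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend each photosite counted this many photons during the exposure.
photon_counts = rng.poisson(lam=500, size=(4, 6))  # a tiny 4x6 "sensor"

quantum_efficiency = 0.5   # fraction of photons converted to electrons (assumed)
full_well = 1000           # electrons a photosite can hold before it clips (assumed)

electrons = np.clip(photon_counts * quantum_efficiency, 0, full_well)

# The accumulated charge is then digitized: here, to 8-bit values (0-255).
digital_image = np.round(electrons / full_well * 255).astype(np.uint8)

print(digital_image)  # one number per photosite, i.e. per pixel
```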

An incomplete vision of reality 

So much for the general concept. In practice, it's quite a bit more complicated. The goal of the camera is not to capture every photon that enters the lens. No, the goal is to recreate an image that resembles what our eyes perceive. It is therefore necessary to exclude the rays imperceptible to humans: ultraviolet and infrared.

But that's not all... These famous photosites, the ones that collect light, cannot tell apart the wavelengths of the photons they receive. In other words, they are blind to color: they see the world in black and white.

How is color photography possible, then? A filter is placed in front of each photosite that lets through only photons of a particular color: red, green, or blue. Since we know the location of each filter, we know which color each photosite corresponds to. The array of these small filters is called a Bayer filter (with some exceptions: Fuji cameras use a different filter pattern, and Sigma's sensors have photosites that capture all three colors). You may not know it, but the human eye is more sensitive to green than to other colors, so a Bayer filter contains twice as many green filters as red or blue ones. This is another example of "image manipulation" that aims to make the images produced by the camera look like the images we perceive with our eyes. The photos produced by our cameras are already "filtered"; if they weren't, we wouldn't recognize them.
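As a rough illustration, here is a sketch (assuming the classic RGGB layout, not the exact pattern of any particular camera) of what a Bayer sensor records: one value per photosite, with green sampled twice as often as red or blue.

```python
import numpy as np

def bayer_mosaic(rgb):
    """Simulate what an RGGB Bayer sensor records from a full-color image (H x W x 3)."""
    h, w, _ = rgb.shape
    mosaic = np.zeros((h, w), dtype=rgb.dtype)
    mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]  # red photosites
    mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]  # green photosites
    mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]  # green photosites (second set)
    mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]  # blue photosites
    return mosaic

# Example: a 4x4 color image yields 4 red, 8 green, and 4 blue samples.
demo = np.arange(4 * 4 * 3).reshape(4, 4, 3).astype(np.uint8)
print(bayer_mosaic(demo))
```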

Reinventing reality

At this stage, our photons have already traveled a lot. But the image we get is still far from the one that will end up on our screens. It is a patchwork of pixels of varying intensities of red, green, and blue. To smooth all this out, the camera applies a process called demosaicing. This process, the first step in turning a RAW file into a JPEG-type image, can be performed by different types of algorithms: each camera manufacturer has its favorite.

This means that even at the most basic stages of creating an image, certain editorial choices must be made to optimize the rendering. Different demosaicing algorithms give us different results. 
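To give an idea of what demosaicing does, here is a deliberately naive sketch (not any manufacturer's algorithm): each output value is simply the average of the known samples of that channel in a 3x3 neighborhood of an RGGB mosaic. Run on the mosaic from the previous sketch, it already produces a recognizable color image; real algorithms differ mainly in how cleverly they interpolate edges and textures.

```python
import numpy as np

def naive_demosaic(mosaic):
    """Crude demosaicing of an RGGB mosaic (H x W) into an RGB image (H x W x 3)."""
    h, w = mosaic.shape
    rgb = np.zeros((h, w, 3), dtype=np.float64)
    masks = np.zeros((h, w, 3), dtype=np.float64)

    # Scatter the known samples into their respective channels (RGGB layout assumed).
    rgb[0::2, 0::2, 0] = mosaic[0::2, 0::2]; masks[0::2, 0::2, 0] = 1  # R
    rgb[0::2, 1::2, 1] = mosaic[0::2, 1::2]; masks[0::2, 1::2, 1] = 1  # G
    rgb[1::2, 0::2, 1] = mosaic[1::2, 0::2]; masks[1::2, 0::2, 1] = 1  # G
    rgb[1::2, 1::2, 2] = mosaic[1::2, 1::2]; masks[1::2, 1::2, 2] = 1  # B

    # Fill every pixel of every channel with the average of the known samples
    # found in its 3x3 neighborhood.
    padded = np.pad(rgb, ((1, 1), (1, 1), (0, 0)))
    padded_m = np.pad(masks, ((1, 1), (1, 1), (0, 0)))
    summed = sum(padded[i:i+h, j:j+w] for i in range(3) for j in range(3))
    counts = sum(padded_m[i:i+h, j:j+w] for i in range(3) for j in range(3))
    return summed / np.maximum(counts, 1)
```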

After demosaicing, another editorial process comes into play: white balance. An algorithm detects the regions of a photo that should appear white, corrects them so that they actually look white, and then adjusts all the other colors accordingly.
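For illustration, here is a sketch of one very simple white-balance heuristic, the "gray world" assumption: the scene is assumed to be neutral on average, so each channel is rescaled until its mean matches the overall mean. Cameras use more elaborate methods, but the spirit is the same.

```python
import numpy as np

def gray_world_white_balance(rgb):
    """rgb: array (H x W x 3), values in 0-255. Returns a white-balanced copy."""
    rgb = rgb.astype(np.float64)
    channel_means = rgb.reshape(-1, 3).mean(axis=0)
    gains = channel_means.mean() / channel_means   # per-channel correction factors
    return np.clip(rgb * gains, 0, 255)
```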

These two processes are the most basic. Depending on how advanced it is, the camera can apply others, such as exposure correction or noise reduction. It sometimes also corrects the geometric distortions caused by the curvature of the lens.
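As a final sketch, here are two more of these automatic adjustments in their simplest possible form: a global exposure correction and a gamma curve. The factor and gamma values below are arbitrary, chosen only to show the principle.

```python
import numpy as np

def adjust_exposure(rgb, stops=0.5):
    """Brighten or darken the image by a number of photographic stops (factor 2**stops)."""
    return np.clip(rgb.astype(np.float64) * (2.0 ** stops), 0, 255)

def apply_gamma(rgb, gamma=2.2):
    """Map linear sensor values (0-255) onto a perceptual gamma curve."""
    normalized = np.clip(rgb.astype(np.float64) / 255.0, 0, 1)
    return (normalized ** (1.0 / gamma)) * 255
```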

That means that the photo the camera delivers is, technically, already manipulated. Editing isn't a step that begins in post-production. Post-production is there to correct the imperfections that automatic retouching could not catch.

No such thing as unedited photos 

In September 2020, large forest fires changed the color of the Californian sky. The clouds of smoke, masking the sun, gave the sky an orangish hue. As you may remember from the news, many residents tried to capture this catastrophic sight with their smartphone cameras. However, the color of the sky proved impossible to reproduce on their screens: it appeared gray and subdued in the photos.

This curious phenomenon is a reminder of what photography really is. Cameras are not windows onto the world. They are machines that capture information in order to artificially reconstruct a plausible representation of reality. In this particular case, the smartphones' algorithms tried to attribute a normal color to the sky. When a picture is taken, a dozen or so rapid processes are triggered to manipulate the light. It would therefore be misleading to contrast edited photos with "neutral" ones.

Recent advances in computer vision and artificial intelligence are new steps on this spectrum. Like cameras, they are tools that let us represent reality by manipulating it so that it matches whatever criteria we set. The more powerful these tools become, the more crucial our editorial approach is. Making a sky bluer, a surface smoother, or a white brighter can all be automated, just as the demosaicing and white balance processes are today.

We now have powerful tools to put our editorial choices into action: but we first need to decide what vision we want to execute. And that is something algorithms will never be able to do for us.
