If you shoot where there is enough light, follow the basic rules of composition, watch your focus, and keep the camera steady, you can safely leave the camera on automatic and won’t need to manipulate the light. But if you want your audience to see your subject in a particular way, to create a specific effect, or if you are stuck with bad lighting conditions, you’ll need to understand how the light, your subject, and the camera interact. To understand this interaction, it helps to understand how the camera works.
Cameras are, in essence, mechanical eyes. Real eyes work like this: light enters your eye through the pupil – a black circular opening in the center of the iris that can vary its size. Behind the pupil is a lens, which brings the light into focus. By focusing the light, the lens transforms it into a pattern of light and darkness that is recognizable as an image of the objects in the eye’s field of vision. This pattern is projected onto the retina. The retina, which contains millions of light-sensitive photoreceptor cells, converts the image into electrical impulses that travel to the brain through the optic nerve. And voilà, we see.
Mechanical eyes, or cameras, are designed to imitate this process. Instead of a pupil and an iris, the camera lens has a diaphragm, which opens and closes to form an aperture, or opening. The camera’s aperture is often referred to as an iris because the two serve the same function. The camera’s lens sits in front of its iris rather than behind it, as in our eyes, but it does the same job of focusing the light into a pattern or image, which is then projected onto the light-sensitive material at the back of the apparatus. Instead of a retina, the light-sensitive material in a camera is film or an imaging sensor (a charge-coupled device, or CCD chip) that collects electrical charges when exposed to light and converts them into an electronic signal. The result is not an image in our brains, but rather a photograph, a strip of motion picture film, digitized videotape, etc.
Your eye and your video camera work in much the same way, and yet the resulting images are very different. Why? The human eye has much greater resolution and contrast range, adjusting to shadows and registering details in dim light that a camera might never pick up. Because of this greater contrast range, humans can make out 2,000 shades of gray. Film can make out only 21 shades, and video even fewer: a mere seven shades for standard definition video. Because we have two eyes focusing on the same thing at the same time, we see things three-dimensionally; the images the camera creates are flat, or two-dimensional. Our eyes also adjust for the color of the light, which tends to tint everything yellow or blue depending on the light source. Video cameras use filters to compensate for these differences in light color. Finally, our brains determine that some objects are dark and others bright based on how much light they reflect. The camera, or rather the camera’s light meter, assumes that everything is supposed to be middle toned, regardless of the percentage of light it reflects.
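If it helps to see the meter’s middle-toned assumption in numbers, here is a minimal sketch in plain Python – an illustration, not any real camera’s firmware. It models an averaging meter that always picks the exposure that would bring the scene’s average reflectance to middle gray (the scene values and the 18% figure are illustrative assumptions):

```python
# Why an averaging light meter misexposes scenes that aren't middle toned:
# it chooses an exposure multiplier that maps the scene's average
# reflectance to middle gray, so a mostly-white scene is darkened and a
# mostly-dark scene is brightened.

MIDDLE_GRAY = 0.18  # reflectance the meter assumes the scene averages to


def metered_exposure(scene_reflectances):
    """Exposure multiplier an averaging meter would choose for this scene."""
    average = sum(scene_reflectances) / len(scene_reflectances)
    return MIDDLE_GRAY / average


snow_scene = [0.90, 0.85, 0.95, 0.90]  # mostly bright white snow
coal_scene = [0.04, 0.05, 0.03, 0.04]  # mostly dark objects

# Snow is pulled down toward 18% gray, so it comes out underexposed
# (multiplier well below 1); coal is pushed up toward 18% gray, so it
# comes out overexposed (multiplier well above 1).
print(metered_exposure(snow_scene))
print(metered_exposure(coal_scene))
```

This is why camera operators open up for a snow scene and stop down for a dark one: the meter, left alone, drags both toward the same middle gray.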
Because of these differences, we cannot shoot video that matches the images we see with our eyes, but understanding the differences can help us compensate for them. It is often when we assume that the camera sees the same way our eyes do that we end up with so-so images. Tips to help you avoid a number of common pitfalls can be found on the following pages. A good start is knowing how to white balance your camera.
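For readers curious about what white balancing amounts to numerically, here is a hedged sketch assuming a simple per-channel gain model; real cameras do more sophisticated processing, and the sample values below are made up for illustration:

```python
# Sketch of the idea behind white balancing: sample a reference white
# (the card you fill the frame with), then scale each color channel so
# that reference comes out neutral gray.


def white_balance_gains(reference_rgb):
    """Per-channel gains that make the sampled reference neutral."""
    r, g, b = reference_rgb
    # Normalize to the green channel (a common convention in this model).
    return (g / r, 1.0, g / b)


def apply_gains(pixel_rgb, gains):
    """Apply the white-balance gains to one RGB pixel."""
    return tuple(channel * gain for channel, gain in zip(pixel_rgb, gains))


# Tungsten light tints a white card orange: too much red, too little blue.
tungsten_white = (0.90, 0.70, 0.45)
gains = white_balance_gains(tungsten_white)

# After correction the reference reads neutral: red, green, and blue
# are all equal, and every other pixel in the shot shifts with it.
corrected = apply_gains(tungsten_white, gains)
print(corrected)
```

Zooming in on a white card and pressing the white balance button tells the camera, in effect, “compute gains like these and apply them to everything.”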