Opinion: We may now be just two or three years away from an iPhone replacing a DSLR

I’ve always been a keen photographer. When I was 14, my father bought me an old fully-manual film SLR, and my aunt gave me her old darkroom equipment, so my bedroom became a darkroom with a bed in the corner.

When the first DSLRs hit the market, I waited impatiently for them to drop below the $1,000 mark. That early Nikon D70 was replaced by a D3, which I still have today. If you had suggested then that it could be just a few short years before cameraphones could replace a DSLR, I’d have laughed.

But camera technology has developed at an astonishing pace. That D3 already spends most of its time gathering dust in a drawer. My Sony a6300 mirrorless camera delivers near-identical results in most situations. And the camera I use most on an everyday basis is my iPhone.

There are just four remaining pieces of the puzzle before an iPhone can replace a DSLR, and it looks to me like we’re just 2-3 years away from cracking all of them …

Let’s look first at what the D3 – a professional-level DSLR with four-figure lenses – delivers that the Sony a6300 doesn’t. I can answer that in three words: low-light performance.

When I need to take photos of people in dimly-lit surroundings, it’s the D3 I turn to. There’s a simple law of physics at play here: a larger sensor has a lower pixel density for any given resolution, and that results in less noise in the shot. The more pixels you cram into a small sensor, the worse the low-light performance gets.
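As a back-of-the-envelope illustration of that law, here’s how pixel pitch – and hence per-pixel light gathering – compares between a 12MP full-frame sensor and a 12MP phone sensor. The sensor dimensions below are nominal round figures for illustration, not exact specs for any particular model:

```python
# Larger sensor -> larger photosites for the same resolution,
# hence more light per pixel and less noise. Illustrative figures only.

def pixel_pitch_um(sensor_width_mm: float, horizontal_pixels: int) -> float:
    """Approximate photosite pitch in micrometres."""
    return sensor_width_mm * 1000 / horizontal_pixels

full_frame = pixel_pitch_um(36.0, 4256)   # ~12MP full-frame sensor (D3-class)
phone = pixel_pitch_um(4.8, 4032)         # typical small ~12MP phone sensor

print(f"Full frame: {full_frame:.2f} um/pixel")
print(f"Phone:      {phone:.2f} um/pixel")
print(f"Light-gathering area ratio: ~{(full_frame / phone) ** 2:.0f}x per pixel")
```

Roughly a fifty-fold difference in light collected per pixel, which is why the gap feels like night and day.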

But the Sony a6300 gets close. It has an APS-C sensor, which is large by compact camera standards, and does a perfectly good job of night shots, where a long exposure allows it to suck in enough light to produce a clean image.

There are other benefits to a professional DSLR, of course. Things like dual cards, to guard against lost shots; weatherproof housing; longevity; and so on. But for most amateur purposes, the gap between even a pro DSLR and a good compact camera is a rather small one these days.

The personal proof is that I love to take travel photos, and used to lug my D3 around on trips. As of a couple of years ago, I switched to the a6300 instead.

So if good mirrorless cameras get close to DSLR performance, what’s the gap between them and an iPhone? Four things:

  • Low-light performance
  • Shallow depth of field
  • Choice of focal lengths
  • Long exposures (mostly for night shots)

Let’s examine each in turn.

Low-light performance

The low-light performance of a full-frame DSLR against an iPhone is, if you’ll excuse the pun, night and day. The above shot, for example, was taken in a castle, in a room lit only by some exceedingly dim bulbs and a small window. An iPhone would have captured only a motion-blurred grainy mess.

I said earlier that there’s a simple law of physics at play here, and that’s true, but there are a couple of riders.

First, sensor technology improves all the time. Every year, DSLR manufacturers push the ISO limit (the degree to which they can amplify the signal from the sensor) to new levels. iPhones have of course lagged a long way behind, but we’ve seen similar progress there. So eventually the tech will push things to the point where an iPhone sensor can match the performance of today’s 35mm sensors.

Second, there’s the multi-camera approach taken by the L16 device we mentioned yesterday. Take two small sensors, capture half the image on each, stitch the halves together, and you’ve effectively doubled the size of your sensor. Repeat until you run out of room for cameras. Because each camera only captures part of the scene, you can keep the pixel density – and thus noise levels – low.
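A toy sketch of the stitching idea – a real multi-camera device needs careful alignment and blending, but the principle is simply that each sensor covers only part of the frame:

```python
import numpy as np

# Two sensors each capture half the scene; stitching the halves gives one
# image with double the total sensor area at the same pixel density.
# (Real L16-style devices also align and blend overlapping regions.)

left_half = np.zeros((100, 80))    # sensor 1: left half of the scene
right_half = np.ones((100, 80))    # sensor 2: right half of the scene

stitched = np.concatenate([left_half, right_half], axis=1)
print(stitched.shape)  # (100, 160): same pixel density, twice the area
```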

So the low-light problem will be solved, one way or the other. Two to three years to get to the same performance as today’s DSLRs strikes me as a very achievable goal.

Shallow depth of field

When photographing people in particular, you want the person to stand out against the background. This is most commonly achieved by having the person in sharp focus and the background blurred, using a technique known as shallow depth of field. Above is an extreme example: a macro lens used to ensure that the reflection in the eyeball was in focus with very little else.

The iPhone 7 Plus achieves this by cheating. It takes two photos using two different lenses, and uses the parallax effect – the difference in vantage point between the two lenses – to figure out how far away things are. It then artificially blurs the background.
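The underlying geometry is standard stereo triangulation: distance is proportional to the focal length times the baseline between the lenses, divided by the pixel disparity. A minimal sketch with illustrative numbers – these are not Apple’s actual parameters:

```python
# Depth from stereo parallax: Z = f * B / d
# f = focal length (in pixels), B = baseline between the two lenses,
# d = disparity (how far a feature shifts between the two photos).

def depth_mm(focal_px: float, baseline_mm: float, disparity_px: float) -> float:
    return focal_px * baseline_mm / disparity_px

# A nearby subject shifts far more between the two lenses than the background:
near = depth_mm(focal_px=3000, baseline_mm=10, disparity_px=30)   # 1000 mm
far = depth_mm(focal_px=3000, baseline_mm=10, disparity_px=3)     # 10000 mm
print(near, far)  # the smaller the disparity, the farther the object
```

Once every pixel has an estimated depth, the software simply blurs everything beyond the subject’s distance.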

That artificial effect is OK, but an experienced photographer can tell the difference. And blow the shot up large enough and anyone will see artefacts around the edges. So far, at least, it’s not a great approach.

It will get better – maybe so much better over the next two to three years that it will be all that is needed. But as a keen photographer, I’d prefer to see the real thing.

Genuine shallow depth of field is a function of three things: sensor size, lens aperture and focal length. For razor-thin depth of field, you want a large sensor, a wide lens aperture and a long focal length. Portrait shots are typically shot on a full-frame sensor with an aperture of around f/2.8 and a focal length of around 70-100mm.
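You can see how those three factors interact using the standard close-subject approximation for total depth of field, DoF ≈ 2Ncu²/f². The circle-of-confusion values and the phone-camera figures below are rough assumptions for illustration:

```python
# Approximate total depth of field when the subject is well inside the
# hyperfocal distance: DoF = 2 * N * c * u^2 / f^2
# N = f-number, c = circle of confusion, u = subject distance, f = focal length.

def dof_mm(f_number: float, coc_mm: float, distance_mm: float, focal_mm: float) -> float:
    return 2 * f_number * coc_mm * distance_mm ** 2 / focal_mm ** 2

# Full-frame portrait: 85mm at f/2.8, subject 2m away, c ~ 0.03mm
print(f"Full frame: {dof_mm(2.8, 0.03, 2000, 85):.0f} mm in focus")
# Phone framing the same shot (~7x crop): 12mm at f/1.8, c ~ 0.004mm
print(f"Phone:      {dof_mm(1.8, 0.004, 2000, 12):.0f} mm in focus")
```

On these assumptions the full-frame camera holds under 10cm in focus, while the phone keeps around 40cm sharp – which is why phone portraits need software help to blur the background.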

Sensor size, we’ve addressed above. Lens aperture isn’t an issue – the iPhone 7 lens has an aperture of f/1.8. And the optimal focal length range for portraits is achievable today: the iPhone 7 Plus already offers two focal lengths via its two lenses, and adding others is just a matter of cost and space.

So one way or the other – whether optically or artificially – an iPhone delivering acceptable shallow depth of field within two to three years is again eminently achievable.

Choice of focal lengths

In the above shot, I’m using a telephoto lens to zoom in from a distant rooftop, and also to visually compress the distance between the various famous London landmarks.

But this one has already been addressed: just add cameras. While a key benefit of DSLRs is the ability to swap lenses, in practice I rarely remove the Nikkor 24-70/2.8 from my D3. I know many other photographers for whom the same thing is true. 24-70mm allows for a fantastic range of images from wide-angle scenery to portraits and detail shots. So an iPhone that can emulate this range would satisfy most needs.

Of course, the hyphen is important: you can’t just stick a 24mm and 70mm lens onto an iPhone and call it good. You need to deliver focal lengths between the two. But again, for most needs, not many. I’d say four would meet the ‘good enough for most people’ requirement: 24mm, 35mm, 50mm and 70mm. There’s many a professional photographer who’s shot their entire life’s work using those four focal lengths.

(I am, of course, talking 35mm equivalents – the actual focal lengths needed to achieve the same framing would depend on the sensor sizes used.)
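The conversion is simple arithmetic: multiply the actual focal length by the sensor’s crop factor (the ratio of the full-frame diagonal to the sensor’s diagonal). The crop factors below are illustrative round numbers:

```python
# 35mm-equivalent focal length = actual focal length * crop factor.
# Crop factors here are illustrative approximations.

def equivalent_mm(actual_mm: float, crop_factor: float) -> float:
    return actual_mm * crop_factor

print(equivalent_mm(4.0, 7.0))    # 28.0: a ~4mm phone lens frames like a 28mm
print(equivalent_mm(50.0, 1.5))   # 75.0: a 50mm on APS-C frames like a 75mm
```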

So four cameras would meet the focal length requirement.

Update: There’s also an interesting mix of optical and computational tech that could address the usual challenge of fitting longer focal length lenses into a slim phone body.

Long exposures

For night shots, and some other specialist effects, you need the ability to leave the shutter open for far longer than is possible with today’s cameraphones. A typical exposure length for the kind of cityscape shot seen above is 30 seconds.

Here you have two issues. The most obvious one is that night shots are the ultimate low-light shots. Both the problem (noise) and the solution (multiple low-pixel-density sensors) are the same.

There is a second issue: the longer a sensor is switched on, the more heat is generated by the circuitry, and the more interference this produces. But this is a problem camera manufacturers figured out long ago: you take one long exposure for the photo, then you close the shutter and take a second equal-length exposure of nothing. The second shot will only contain the interference, so you subtract that data from the first photo to remove the interference.
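The technique can be sketched in a few lines. This is an idealized model – synthetic arrays stand in for real sensor output, and it assumes the dark frame captures exactly the heat-induced pattern, which real dark frames only approximate:

```python
import numpy as np

# Dark-frame subtraction: take a second, equal-length exposure with the
# shutter closed, then subtract it from the real shot to remove the
# heat-induced interference pattern.

rng = np.random.default_rng(0)
scene = rng.uniform(0, 200, size=(4, 4))       # the actual image
hot_pixels = rng.uniform(0, 50, size=(4, 4))   # heat-induced pattern noise

light_frame = scene + hot_pixels               # exposure with shutter open
dark_frame = hot_pixels                        # exposure with shutter closed

cleaned = light_frame - dark_frame             # interference cancels out
assert np.allclose(cleaned, scene)
```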

The densely-packed circuitry in a smartphone makes the problem worse, but a Google engineer has shown that this can be solved by stacking multiple short exposures. There are already night photography apps that take this approach, though none of the ones I’ve tried work well. But his efforts, using a more sophisticated computational approach, show what can already be achieved using today’s sensors if you add enough processing power.
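The stacking approach works because averaging N frames cuts random noise by roughly the square root of N, while the signal stays put. A sketch with synthetic data:

```python
import numpy as np

# Stacking short exposures: averaging 16 frames reduces random noise by
# about 4x (sqrt(16)), simulating a long exposure without the heat
# build-up of one continuous exposure. Synthetic data for illustration.

rng = np.random.default_rng(1)
scene = np.full((64, 64), 100.0)               # the "true" night scene

def noisy_frame():
    return scene + rng.normal(0, 20, scene.shape)  # per-frame random noise

single = noisy_frame()
stacked = np.mean([noisy_frame() for _ in range(16)], axis=0)

print(f"Noise, single frame: {np.std(single - scene):.1f}")
print(f"Noise, 16 stacked:   {np.std(stacked - scene):.1f}")  # ~4x lower
```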

So this too will be solved.

In short, I can see no reason why an iPhone in 2-3 years’ time couldn’t match the performance of today’s DSLRs on 99% of occasions.

A couple of final points. First, I’m not sure I ever envisage professional photographers making the switch (even though some already have). DSLR tech will of course continue to evolve, so it will always stay ahead. Second, we’re already seeing moves beyond conventional photography. Some sports media, for example, record video of games and take frame-grabs from those, eliminating still photography altogether. Light-field cameras may also completely revolutionize photography.

But for most of us, I think our iPhones will – within a few years – be the only camera we need. Am I right? Do you think it’ll take longer to get there? Or do you think it will simply never happen? As always, take our poll and share your thoughts in the comments.

Photo: ValueWalk


You’re reading 9to5Mac — experts who break news about Apple and its surrounding ecosystem, day after day. Be sure to check out our homepage for all the latest news, and follow 9to5Mac on Twitter, Facebook, and LinkedIn to stay in the loop. Don’t know where to start? Check out our exclusive stories, reviews, how-tos, and subscribe to our YouTube channel

Ben Lovejoy is a British technology writer and EU Editor for 9to5Mac. He’s known for his op-eds and diary pieces, exploring his experience of Apple products over time, for a more rounded review. He also writes fiction, with two technothriller novels, a couple of SF shorts and a rom-com!
