How LiDAR Works, and Why It’s in the iPhone 12 Pro

How will Apple get people to upgrade to an iPhone 12, now that most of us are home all day and don’t need a portable powerhouse? The world’s most valuable public company has some ideas: magnet-locking charging cables, pro-spec cameras, and (for the Pro models) LiDAR. Two of those features make sense to most people. The other one might sound like the skill of knowing when someone is lying.

LiDAR, at the moment, is more intriguing than day-to-day useful in a retail device. It helps cameras focus faster in low light, and it enables some neat visualization and augmented reality (AR) apps. But what it might really do is spur the use of 3D scanning as a natural complement to pictures, helping Apple create better maps of indoor spaces. Or it could wind up as the next short-lived Force Touch.

It’s a good day to look at what LiDAR is, how it works in an iPhone, and how it’s not the same thing Apple uses for Face ID. There’s a lot to learn about, tucked inside (once again) a phone you might want to upgrade to when you need to. Ahem.

What LiDAR is

LiDAR mapping out an extremely-military-looking office (U.S. Air Force).

LiDAR is a weird kind of acronym, like a half-inverted backronym. It was originally just “Light” jammed into “RADAR,” with the odd 80 percent capitalization because RADAR is itself an acronym. LiDAR is like RADAR, but with light instead of radio waves—you could leave it there if you needed to close this tab because something is burning. But once engineers and scientists figured out what you could do with LiDAR, they started retroactively renaming it. The National Ocean Service, for example, declares “Lidar” to be “Light Detection and Ranging.” Scientists: they do not work in marketing.

LiDAR is essentially bouncing focused light (usually non-visible infrared) off of things, measuring how long the light takes to bounce back, then doing the speed-of-light math to figure out how far each object is from the sensor. With enough light beams, a sensor can use all of those distances to create a three-dimensional map of objects. The more beams, the more complete the map. You can use tons of LiDAR beams to map the Earth and its oceans, or a few focused beams to catch speeding cars, and there are lots of applications in between.
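If you want to see how little math that actually is, here’s a minimal Swift sketch of the time-of-flight calculation. The function name and the example numbers are ours, purely for illustration.

```swift
// Time-of-flight in a nutshell: a pulse travels out and back at the speed
// of light, so the one-way distance is half the round trip.
let speedOfLight = 299_792_458.0  // meters per second

func distanceMeters(roundTripSeconds t: Double) -> Double {
    return speedOfLight * t / 2.0
}

// A return that arrives about 6.7 nanoseconds later means the surface is
// roughly one meter away; repeat for every beam and you get a depth map.
print(distanceMeters(roundTripSeconds: 6.67e-9))  // ≈ 1.0
```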

Light returns from the human arm a bit faster than from the cat in our infrared view of the iPad Pro’s LiDAR sensor.

Right now, you, the non-cop, non-earth-scientist person, are most likely to encounter LiDAR in robot vacuums and “self-driving” cars. If you still have your Kinect set up, that had some LiDAR, too.

What Apple uses LiDAR for (it’s not Face ID)

We last saw LiDAR when we tore down the fourth-generation 12.9-inch iPad Pro released in late March 2020. We noted that the iPad Pro used far fewer light points than modern iPhones’ Face ID sensors. “That’s okay though,” we noted in the video, “because it doesn’t really need the same precision as Face ID, since it’s mapping room-scale objects, not identifying a specific person’s face. It’s likely a trade-off: fewer data points means less precision, but better sustained [Augmented Reality] performance and battery life.”

Another thing that sets LiDAR apart from Face ID is that … Face ID doesn’t use LiDAR. While the “TrueDepth” tech that powers Face ID similarly involves a slew of infrared light beams spread onto your face (more than 30,000!), Face ID uses the distortion of that light pattern as it hits your face, not an exact distance measurement, to create a map. This technology is called “infrared or structured light 3D scanning” (third point in that link). As Ars Technica puts it in a deeper dive into LiDAR and other scanning technology:

TrueDepth works by projecting a grid of more than 30,000 dots onto a subject’s face and then estimating the three-dimensional shape of the user’s face based on the way the grid pattern was deformed.
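To make that difference concrete, here’s a hedged Swift sketch of the general structured-light triangulation idea—the textbook geometry, not Apple’s actual algorithm, with purely illustrative numbers. A projected dot shifts sideways in the camera image by an amount that depends on how far away the surface is.

```swift
// Structured light in a nutshell: a projected infrared dot lands on a surface,
// and its position in the camera image shifts ("disparity") by an amount that
// depends on distance. With a known projector-camera baseline and focal length,
// similar triangles give the depth: depth = focalLength * baseline / disparity.
func structuredLightDepth(focalLengthPixels: Double,
                          baselineMeters: Double,
                          disparityPixels: Double) -> Double? {
    guard disparityPixels > 0 else { return nil }  // no measurable shift, no depth
    return focalLengthPixels * baselineMeters / disparityPixels
}

// Illustrative numbers only: a 20 px shift with a 600 px focal length and a
// 3 cm baseline puts the surface roughly 0.9 m from the sensor.
if let depth = structuredLightDepth(focalLengthPixels: 600, baselineMeters: 0.03, disparityPixels: 20) {
    print("Estimated depth: \(depth) m")  // 0.9
}
```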

So LiDAR is not going to super-power Face ID. What will it do? Apple had a lot more to say about the LiDAR sensor in the most recent iPad Pro announcement. With LiDAR, Apple says that its devices can:

  • Make the Measure app better and quicker at calculating heights and lengths, and finding edges.
  • Enable augmented reality apps—like a physical therapy app that measures joint angles, an IKEA visualization tool, and the-floor-is-lava games (a rough sketch of how apps tap that depth data follows this list).
  • Focus a camera faster in the dark, since LiDAR can tell how far a prominent object is without any light.
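For developers, that depth data shows up through ARKit (the sceneDepth frame semantics Apple added in iOS 14). Below is a minimal Swift sketch assuming a LiDAR-equipped device; the DepthReader class name is ours, and rendering and error handling are left out.

```swift
import ARKit

// Minimal sketch: ask ARKit for LiDAR-derived scene depth and log the size of
// the per-pixel depth map that arrives with every frame (values in meters).
@available(iOS 14.0, *)
final class DepthReader: NSObject, ARSessionDelegate {
    private let session = ARSession()

    func start() {
        // Scene depth is only offered where the hardware (i.e. LiDAR) supports it.
        guard ARWorldTrackingConfiguration.supportsFrameSemantics(.sceneDepth) else {
            print("This device has no LiDAR scene depth.")
            return
        }
        let config = ARWorldTrackingConfiguration()
        config.frameSemantics = .sceneDepth
        session.delegate = self
        session.run(config)
    }

    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        guard let depth = frame.sceneDepth else { return }
        let map = depth.depthMap
        print("Depth map: \(CVPixelBufferGetWidth(map)) x \(CVPixelBufferGetHeight(map)) pixels")
    }
}
```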

And yet … that doesn’t seem complete, does it? Apple putting its hefty resources into miniaturizing a very expensive technology, just to help you imagine what a couch looks like in your living room? Ah, yes, there is more.

What Apple could use LiDAR for 

Apple and Google have some things in common. One of them is a real interest in mapping indoor spaces, which you can’t easily do with a car with a pole on it. Airports, shopping malls, multi-floor office buildings, the sealed arcologies where we’ll all end up living in our SimCity future—these are places where your phone, so helpful outside, can only guess at where you are and what you might be near based on some loose Wi-Fi triangulation. Both companies have a vested interest in your phone being useful just about anywhere you go.

So Apple bought 3D-sensing company PrimeSense in 2013. PrimeSense was a primary architect of the Xbox Kinect platform. Google got the next-best thing: Johnny Chung Lee, who worked on the Kinect while at Microsoft. Google launched Project Tango, an AR-focused platform that encouraged developers to dream big with this new many-points-of-light LiDAR technology they were stuffing into some awkward-looking phones and tablets. We tore down a Project Tango device back in 2014, and while we couldn’t technically turn it on, we did manage to power up its LiDAR camera to capture its beam game.

So Google, being Google, had a public beta of some really weird tech, then eventually folded Project Tango into the quieter ARCore developer kit for Android. Apple, being Apple, quietly incorporated PrimeSense’s smarts into Face ID, and is now pushing further with actual LiDAR. But Apple might be working another angle.

Jessica Lessin, now of tech industry news service The Information, blogged about Apple’s acquisition when it happened. Lessin noted that while motion-sensing was what Microsoft wanted for Xbox gamers, Apple might want something a bit more practical for its customers:

PrimeSense’s technology is much more strategic for mapping, according to one person familiar with the company. In fact, companies like Matterport, which makes a camera for mapping three-dimensional spaces, use its chips.

We know Apple cares about mapping. The company bought WifiSLAM, an indoor GPS company, to help it map out malls and [other] indoor spaces in a race against Google, which is doing the same. Sooner rather than later, our phones will pull up scans of real spaces we want to visit or may be approaching. Those two-dimensional maps will seem very obsolete.


Right now, the LiDAR in the iPhone 12 is full of promise. Some day, it might fill a restaurant you’re going to with infrared beams, so you can better gauge the space inside. Or, as with even the most hyped AR projects, it might end up as a too-early stab at future features. You can’t measure everything.

Top image by Oregon State University / Flickr.