We know from previous Apple patents that the company is hard at work finding ways to embed Face ID into the displays of future iPhones.
The biggest hurdle here is that the infrared light needed for Face ID doesn't travel well through displays, but a newly issued patent from Apple suggests the company may have found a solution …
Embedding Face ID into the display
Former Apple design chief Jony Ive has long seen the holy grail of iPhone design as a “single glass plate.” From the front, you'll see no bezels, no notches or cutouts, just a solid display. Ive may have long since moved on, but the company is believed to still be working on this vision.
This would require eventually embedding everything in the Dynamic Island under the display, including both the front-facing camera and Face ID technology.
The camera is a long-term goal. While it’s technically possible today, the quality it delivers is far from acceptable for an iPhone. For that reason, it’s almost certain that Face ID will be embedded in the display first.
Solving the biggest problem
Infrared light can pass through displays, but transmission is extremely poor, which would make facial recognition much slower and less reliable than it is today.
Apple has previously experimented with selectively deactivating certain pixels to improve transmission, but a patent issued yesterday (spotted by Patently Apple) describes a simpler, more reliable approach: removing some subpixels.
A pixel is made up of individual light emitters for red, green, and blue. These emitters are known as subpixels, and mixing them in different ways at different levels allows a pixel to display any color. Apple suggests that some of these subpixels could be removed to allow infrared light to pass through the gaps.
The idea is that the missing subpixels would not be noticeable to the eye because Apple would only eliminate a subpixel when it was directly next to an emitter of the same color in a neighboring pixel. The neighboring pixel's subpixel could then effectively be borrowed to create the same color mixture.
The patent puts it this way: "A subset of all display subpixels in the pixel removal region can be removed by iteratively removing the nearest neighboring subpixels of the same color."
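To make the idea concrete, here is a minimal sketch of that kind of iterative thinning. This is not Apple's actual algorithm — the patent doesn't publish one — just an illustrative model in which a subpixel may be removed only while an adjacent pixel still has a live subpixel of the same color that can be "borrowed" to preserve the color mix:

```python
# Illustrative sketch only (assumed model, not Apple's implementation):
# a display is a grid of pixels, each holding a set of R/G/B subpixels.
# Inside a "removal region", a subpixel can be dropped only if a
# 4-neighbor pixel still has the same-color subpixel lit.
from itertools import product

COLORS = ("R", "G", "B")

def removable(grid, x, y, color):
    """True if an adjacent pixel still emits `color`, so that
    emitter can be borrowed to recreate the same color mixture."""
    h, w = len(grid), len(grid[0])
    for nx, ny in ((x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)):
        if 0 <= nx < h and 0 <= ny < w and color in grid[nx][ny]:
            return True
    return False

def thin_region(grid, region):
    """Iteratively remove same-color nearest-neighbor subpixels inside
    `region` (a set of (x, y) pixel coordinates) until none qualify."""
    changed = True
    while changed:
        changed = False
        for (x, y), color in product(region, COLORS):
            if color in grid[x][y] and removable(grid, x, y, color):
                grid[x][y].discard(color)  # open a gap for IR light
                changed = True
    return grid

# 3x3 display, full RGB everywhere; thin only the center pixel.
display = [[{"R", "G", "B"} for _ in range(3)] for _ in range(3)]
thin_region(display, {(1, 1)})
print(display[1][1])  # → set(): every center subpixel had a neighbor to borrow
```

Note that because removal requires a live same-color neighbor at the moment of removal, the last emitter of each color in a region can never be dropped — which is what keeps the thinned area visually indistinguishable while still opening gaps for the IR signal.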
The efficiency of this approach would be improved by removing some of the wiring. Each subpixel has its own power and control lines, and if you remove a subpixel, you can also remove the associated wiring, increasing the clear area available for IR signal transmission.
In the patent's words: "At least some of the horizontal and vertical control lines in many non-pixel regions are rerouted to provide continuous open areas that reduce the amount of diffraction for light passing through the display to the sensor."
Apple also suggests that parts of the touch grid could be removed in the same areas to further eliminate barriers to infrared transmission. Given that these would be subpixel-sized holes, they wouldn't impact touch accuracy.
Will this finally happen in the iPhone 17?
Under-display Face ID was variously predicted to arrive in the iPhone 15 and again in the iPhone 16 – neither of which, of course, happened. It's no surprise that some are making the same prediction for the iPhone 17.
Last month, I noted that there may be a few reasons for optimism here.
First, there have been multiple reports that at least one of this year’s models will feature a smaller notch in the display. Jeff Pu has suggested that the iPhone 17 Pro Max will have a “significantly narrower Dynamic Island.” Embedding Face ID under the display would be the most obvious way to achieve this.
Second, there’s the iPhone 17 Air. Apple’s goal here is to have the sleekest design possible, and reducing the Dynamic Island to a camera hole would fit that goal perfectly.
At the time, the iPhone 17 Air was expected to be the most expensive model in the lineup, which would give it a plausible claim to debuting new technology first. However, that pricing expectation has since been walked back, so while we still expect under-display Face ID to arrive at some point, we can't say when.
Render: Michael Bower/9to5Mac