Apple's (AAPL) new high-end iPhone will make any traditional camera manufacturer tremble. The iPhone 11 Pro, unveiled at a special Apple event on Tuesday, not only has three rear cameras, each with its own function, but also, for the first time, uses artificial intelligence to take a photo. Yes, the next time you feel proud of snapping a perfect pic, the credit may actually belong to the little robot living inside your phone.
Here's how it works: on the iPhone 11 Pro, each time you're about to take a picture, the cameras quickly capture eight images of the scene before you even press the shutter. When you actually take the photo, the phone compares your shot against those eight buffered images and merges the best pixels from each into one final product.
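Apple hasn't published how this works under the hood, but as a rough illustration of the idea (and nothing more), here is a toy Python sketch of per-pixel fusion: it holds a ring buffer of pre-shutter frames and, for each pixel, keeps the value from whichever frame shows the most local detail. The buffer size, the Laplacian "detail" score, and the random stand-in frames are all assumptions for illustration; Apple's real pipeline runs machine-learned selection and noise reduction on the Neural Engine.

```python
import numpy as np
from collections import deque

BUFFER_SIZE = 8  # frames buffered before the shutter press, per Apple's description

def local_detail(frame):
    """Crude per-pixel 'detail' score: absolute Laplacian (edge response).

    A stand-in for whatever learned quality metric Apple actually uses.
    """
    lap = (
        -4.0 * frame
        + np.roll(frame, 1, axis=0) + np.roll(frame, -1, axis=0)
        + np.roll(frame, 1, axis=1) + np.roll(frame, -1, axis=1)
    )
    return np.abs(lap)

def fuse(frames):
    """Build one image by taking each pixel from the frame that scores best there."""
    stack = np.stack(frames)                          # (n_frames, H, W)
    scores = np.stack([local_detail(f) for f in frames])
    best = np.argmax(scores, axis=0)                  # (H, W): winning frame per pixel
    return np.take_along_axis(stack, best[None], axis=0)[0]

# Simulate the pre-shutter ring buffer plus the frame taken at the shutter press.
rng = np.random.default_rng(0)
buffer = deque(maxlen=BUFFER_SIZE)
for _ in range(BUFFER_SIZE):
    buffer.append(rng.random((480, 640)))   # random grayscale stand-ins for sensor frames
shutter_frame = rng.random((480, 640))

final = fuse(list(buffer) + [shutter_frame])
print(final.shape)   # (480, 640)
```

The toy version picks a single source frame per pixel; by all accounts, the real feature blends pixels and denoises across exposures rather than making hard choices, which is why it needs dedicated neural hardware to finish in about a second.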
The process is called "Deep Fusion," which Apple's senior vice president of worldwide marketing, Phil Schiller, described as "computational photography mad science" when introducing the new phone at Tuesday's event.
The feature is powered by the "Neural Engine," the dedicated neural network hardware inside the new iPhone's A13 Bionic processor. Apple first introduced this piece of machine learning hardware with the iPhone 8's A11 Bionic chip in 2017, but Deep Fusion marks the first time it takes center stage in photo taking.
For average iPhone users, the Deep Fusion feature will make it a lot easier to take beautiful pictures in challenging conditions, such as low light or fast-moving subjects.
In addition to the iPhone 11 Pro (starting at $999), Apple unveiled two more phones during the event: the iPhone 11 Pro Max (a larger version of the Pro, starting at $1,099) and the cheaper iPhone 11 (starting at $699).
All three models will be available for pre-order on Friday and arrive in stores on September 20.