Not too long after the first rumors surfaced, Apple has now given its usual non-confirmation that it has acquired Faceshift, the company behind the technology Star Wars used to animate the faces of CGI characters. It’s not an obvious fit for Apple, so what could be the thinking behind the purchase?
As with Apple’s patents, I think it is sometimes easy to read too much into some of the company’s acquisitions. Sure, it doesn’t go around acquiring companies randomly, but it may not always be after the complete package. It may well be that there is some small element of the company’s technology that Apple wants, or it may be an acquihire – where it’s the engineers, rather than the specific tech, that the company wants.
But in this particular case, there is reason to suspect that Apple does have an interest in the broad brush-strokes of what Faceshift does …
That reason is that this is not a single acquisition in isolation. As we noted earlier, Apple bought PrimeSense – the company behind the technology used in Microsoft’s Kinect sensor – back in 2013, and facial recognition company Polar Rose way back in 2010. So it’s clear that Apple has a significant interest somewhere in the facial recognition/motion-sensing field. The question is: why? I can see six possibilities.
Some years back, we thought we had a pretty good idea. In the days when Apple was believed to be working on its own television – rather than just the Apple TV box – there were long-standing rumors that Apple was planning some kind of gesture-based UI. Perhaps we’d use a hand gesture to scroll through content and point to what we wanted, for example.
Apple reportedly abandoned its full TV set plans when it couldn’t see a way to make enough money from it. It was suggested at the time that there were two reasons for that: TVs typically have low margins, and people keep them for a long time, so the upgrade cycle is long.
I don’t necessarily buy the low-margin argument – you could say exactly the same about PCs or smartphones, both areas in which Apple still manages to make a lot of money. But I do think the lengthy upgrade cycle is likely to have given Apple pause: it’s something that has already impacted iPad sales.
But this doesn’t rule out TV use, of course. Apple does already make a television; it just asks customers to buy their own displays. The company has already introduced one new UI in the form of Siri, and there’s no reason to think that it isn’t working on other options for future models. So television remains one possibility.
Looking a little further ahead on the television front, once Apple has its own cord-cutting TV subscription service, it could well follow the examples of Netflix and Amazon and start offering its own original content.
Given Apple’s historic tie-ins with Pixar and Disney, it’s not beyond the bounds of possibility that it might want to create its own animated shows – where the Faceshift tech could obviously be used as-is.
Another possibility is gaming. While Apple may be unlikely to develop its own games, it could potentially offer a framework that would allow developers to incorporate Faceshift-style features into their own games. Imagine a game where your on-screen character mirrors your own facial expressions.
Couple this to Kinect-style motion-sensing, and you could have games where your character effectively becomes a virtual version of you, copying both your movements and expressions.
Features like these haven’t made much sense for iOS games to date, since you play them on a portable device, but they could make a great deal of sense now that Apple TV is aiming to bring mobile gaming into the living room.
On the subject of entertainment, the tech could be a lot of fun in FaceTime chats with kids. Grandparents probably want to spend more time chatting with their grandchildren than vice versa, but if the kids were looking at animated characters, their attention span might lengthen considerably.
Automatic tagging of people in photos and videos is a more mundane possibility – but one with enormous potential. Apple currently offers developers an API that can spot faces in photos and videos, but the code can’t yet identify who the people are.
This is an area where Google Photos is ahead of Apple, allowing you to search your photos for specific objects, and even particular people. Google’s tech is still pretty crude when it comes to recognizing people, though, so there is a definite opportunity for Apple to take the lead. The Mac Photos app currently has limited capacity to suggest similar faces, but it’s again very crude.
Imagine, for example, building facial recognition right into the Camera app in iOS, so that photos and videos are automatically tagged with the people in them. You’d then be able to search for those people directly on your iPhone, and the tags would of course carry over into the Photos app on your Mac so you could do the same there.
Apple could also give you the option of auto-including the tags when uploading photos to Facebook and other social media sites. This kind of stuff could be huge.
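In principle, the difference between today’s face *detection* and the face *identification* described above comes down to matching: detection finds a face and produces a numeric “fingerprint” (an embedding), while tagging compares that fingerprint against those of known people. Here’s a purely illustrative Python sketch of that matching step – the embeddings, names, and similarity threshold are all made up for demonstration, and nothing here reflects how Apple actually does it:

```python
import math

def cosine_similarity(a, b):
    """Similarity between two embedding vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def tag_face(embedding, known_people, threshold=0.9):
    """Return the best-matching person's name, or None if nobody is similar enough."""
    best_name, best_score = None, threshold
    for name, known_embedding in known_people.items():
        score = cosine_similarity(embedding, known_embedding)
        if score > best_score:
            best_name, best_score = name, score
    return best_name

# Toy "library" of known faces (vectors invented for this example)
known = {
    "Alice": [0.9, 0.1, 0.3],
    "Bob":   [0.2, 0.8, 0.5],
}

print(tag_face([0.88, 0.12, 0.31], known))  # close to Alice's vector -> Alice
print(tag_face([0.6, 0.6, 0.1], known))     # matches nobody well -> None
```

The threshold is what separates a confident tag from a “person unknown” result – exactly the tuning problem that makes face-unlock risky if set too loosely, as the Android example above illustrates.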
Finally, it could be that Apple simply wants to develop facial recognition tech good enough to use to unlock devices. Face-unlock has existed on Android for ages, but it’s way too crude to be trusted – it was originally fooled by photos and, when blinking was added, by a pair of photos. It has also been easily fooled by people who vaguely resemble the owner of the device, while failing to consistently recognize the real owner – the worst of both worlds.
Apple could be taking the same approach here as it did with fingerprint sensors, which first appeared on competitor phones many years ago. Apple waited until it could do it properly, with tech that is both fast and reliable.
If it could make face-recognition as reliable as Touch ID, that would be even more convenient: unlock your device just by picking it up (or opening the lid of a MacBook). Authorize iTunes purchases without doing a thing, simply because you’re already looking at the screen and it knows who you are. No need to hold your iPhone awkwardly when making Apple Pay purchases – so long as the camera can see your face, it authorizes them automatically.
Which possibilities appeal to you most? Take our poll to let us know, and share your thoughts – and any other ideas you may have – in the comments.
Photos: Main & FaceTime photos Faceshift; Pixar characters Pixar; Disney Infinity characters Disney; all other photos Apple.
FTC: We use income earning auto affiliate links.
Face tagging already exists in Photos for Mac, and that includes facial recognition (Photos suggests similar faces).
Although tech like this could definitely help with, for example, recognizing the same person at different angles?
Yes, it’s very crude at present – one reason I think it would be best done in the camera app is Live Photos means it has movement to go on as well as static features, and that’s where Faceshift tech could make a real difference.
Rather than input, couldn’t this be output? So using the technology to give Siri a face – instead of a voice answer, you get a lovely face reading the response.
Imagine the possibilities – give Siri different (famous) personalities. Buy Samuel L Jackson as your ‘Siri’! Your favourite pop star could be your assistant for a small fee.
This was my immediate suspicion. Siri has been taking a hit lately for being too impersonal compared to its rivals at Google and Microsoft (not that I agree) but putting a face to the voice coupled with realistic facial expression would be fantastic.
I believe it is multiple, but I voted for facial unlock. I think they want to make 3D modeling fun and they need the tech to make it seamless.
I heard about face recognition in cars, to prevent falling asleep while driving… this could be the reason for the acquisition…
I was surprised the automobile wasn’t mentioned in the article. I like your idea.
There’s a scene in Iron Man 2 where Pepper is bugging Tony about the price of something he wants to buy (can’t find a clip) and he shuts her down with, roughly, “I don’t care what it costs; I want it.”
Don’t tell me (real) companies don’t make those kinds of purchases. This is a cool technology that doesn’t have an immediate use for Apple, but because they could buy it, they did – and maybe someday they’ll do something with it.
Facial unlock would surely be extremely difficult to implement well, since everyone on earth has multiple doppelgängers. And what about identical twins? Not sure it would work.
Another use might be in the so called smart home. Your house recognizes you and implements all your preferences while at home.
I agree that facial unlock is unlikely to be implemented reliably any time soon.
It’s specifically a capture technology, not an animation technology – so it would be for input, not output. Giving Siri a face would be nice – but they wouldn’t need to buy a tech company to do that. I’m going to go TOTALLY out there and guess that it’s for miming :)
So the practical problem with using Siri is you have to yell at it. If you have face-capture technology your phone could read your lips – to augment voice recognition in noisy backgrounds or where you don’t want to talk loudly. Totally wild guess, but it’d be handy…!
But that’s the idea I mentioned – you get your ‘talent’ to be performance captured using this technology to map all their face shapes as they talk. Then build a program that talks like them (motion and sound maybe). Then they can read out the Siri response in their famous style.
What about solving the eye contact problem in video conferences? Stick multiple cameras at the top and bottom of the screen. Create a 3-D model of the person then render the image from the angle of the eyes of the person you’re talking to (wherever that happens to be on screen).
As someone who video conferences every day for work, the slightly off-center camera just isn’t right. It’s one of those ‘cherry on top’ details that would be very Apple.
Yeah, no. And it sounds like a very expensive one…
Embedding a camera inside the display comes to mind… similar to the way the ambient light sensor is hidden behind the Apple Watch’s screen?
How about a face for Siri?
What about iMovie, Final Cut or maybe a totally new first party app from Apple?
Budget mocap tech for filmmakers – I’m there! And a windfall for licensing attorneys as every Apple user creates a unique proprietary avatar. Let the games begin.
One potential use case would be the ability to easily create realistic 3D avatars and associated virtual worlds. This would not only give Siri a face, but also give it more locational awareness / familiarity to the user. A deaf user, for example, could communicate with Siri using sign language, and Siri would be able to respond in the same way, as all the associated language gestures / responses would have been captured. Could be the harbinger of seamless visual / audio driven user interfaces.
A couple of items of thought.
1. Isn’t Apple working on augmented reality mapping? Could that be in play here?
2. The car argument mentioned above makes for an interesting idea. What better way to prevent car theft than to have that sort of a lock? (Facial recognition.) If I recall, an impression of the ear is just as unique as a fingerprint. Jump in the car, a hidden camera captures it and checks it against ‘authorized drivers’, and then away you go. (Or don’t go.)
3. If I recall correctly, and this could be simple rumors, Beats was purchased not only for talent and hardware but because Samsung was circling around it. Could this be a strategic acquisition to prevent another tech giant from capitalizing?
I believe these are for future virtual reality plans. I’m sure they’re working on a lot of virtual reality tech. There’s still a lot of potential in smartphones, and VR is a huge one. I think the biggest thing smartphones can advance in now is environmental sensors and camera sensors on the back – I don’t mean for better pictures, but for analyzing and gaining information about the surroundings. I want numerous sensors on the back of the phone beside the camera.

I’m guessing the iPhone 7 will get the dual camera sensor array from LinX, which Apple purchased. This allows for depth and can lead to measurements of points; add numerous other sensors and you can begin to analyze objects in 3D, and the phone will be able to tell you what anything you see in the camera is – the dimensions of an object, the distance to an object, etc. Other things I’d hope become more miniaturized and implemented are things like thermal and infrared imaging, to build virtual maps even if there is no light. I believe people at MIT found a way to see human outlines through walls using wifi signals as well, so I see that in the future. I mean, the possibilities are pretty great.
They need to add temperature, humidity, UV, etc. I think one day smartphones will be able to do almost anything tricorders were able to in Star Trek. Information about the immediate environment and information about a living being’s status.
Personally, I disagree with the argument that the low-margin objection doesn’t hold just because PC margins are low as well and Apple makes PCs. Macs actually have decent margins – not low ones. And TVs have even slimmer margins than PCs do.
“I don’t necessarily buy the low-margin argument – you could say exactly the same about PCs or smartphones, both areas in which Apple still manages to make a lot of money.”