How the age “prediction” works

The novelty site is making the rounds right now — I saw it this afternoon on Metafilter — and after fiddling with it briefly I got to wondering (or, well, got to doubting) whether they were doing anything interesting with their aging software.

And so I started throwing odd things at it: pictures of infants, pictures of people with odd faces, pictures of things like bananas. It made the babies look awful, the odd people continued looking odd when the site could recognize them as having faces at all, and the banana didn’t get past the initial facial recognition check.

And then I tried throwing a cartoon face at it, and got a glimpse of what’s actually going on: it looks like in20years is just blending one of a handful of pre-rendered facial templates onto the submitted face. I got curious about what all those templates look like, and so I found a very simple line-drawing face via google image search:

line drawing

…and threw that at the site for each of the possible configurations. The site provides three options for manipulation: gender (male or female), age progression (either 20 or 30 extra years tacked on), and drug addiction (are you methed out?); that's a total of eight possible output images for any given input, so there are likely exactly eight pre-rendered source images.
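That 2 × 2 × 2 option space is small enough to enumerate outright; a quick sketch (the option labels here are my own shorthand, not anything the site itself exposes):

```python
# Enumerate the site's three binary options to confirm the template count.
from itertools import product

genders = ["male", "female"]
ages = ["+20", "+30"]
drugs = ["no drugs", "drugs"]

configs = list(product(genders, ages, drugs))
print(len(configs))  # 8 combinations, hence presumably 8 pre-rendered templates
```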

Here's one of those images, for the Male, 20+, No Drugs configuration:

output for test image


You can see the face behind the line drawing there, faintly. The aging functionality takes this face and apparently paints it onto the submitted photo, like a kind of high-bit-depth facepaint.
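If that guess is right, the "facepaint" step is just an alpha blend of the template over the submitted photo. A minimal sketch with numpy; the array names and the 40% opacity are assumptions for illustration, not anything extracted from the site:

```python
# Hypothetical version of the aging step: alpha-blend a pre-rendered
# aged-face template over the submitted photo at a fixed opacity.
import numpy as np

def blend_template(photo, template, alpha=0.4):
    """Composite an aged-face template onto a photo at a fixed opacity."""
    photo = photo.astype(np.float64)
    template = template.astype(np.float64)
    out = (1.0 - alpha) * photo + alpha * template
    return np.clip(out, 0, 255).astype(np.uint8)

# With an all-white "photo" and an all-black "template", a 40% blend
# lands at 60% grey:
photo = np.full((4, 4, 3), 255, dtype=np.uint8)
template = np.zeros((4, 4, 3), dtype=np.uint8)
print(blend_template(photo, template)[0, 0])  # [153 153 153]
```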

Finding the faces

I wanted a somewhat clearer look at the source images, though, so I took the eight output images, cropped out the extra framing, and did a heavy-handed levels adjustment to produce much higher-contrast images of each.
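The levels adjustment I did in an editor amounts to clamping a narrow input range out to full black and white, with a linear stretch in between. A sketch of that operation; the cut points (100 and 160) are arbitrary stand-ins for whatever I eyeballed in the editor:

```python
# Linear "levels" contrast stretch: everything at or below `lo` goes to
# black, everything at or above `hi` goes to white, linear in between.
import numpy as np

def levels(gray, lo=100, hi=160):
    """Heavy-handed levels adjustment on a grayscale image array."""
    g = gray.astype(np.float64)
    out = (g - lo) / (hi - lo) * 255.0
    return np.clip(out, 0, 255).astype(np.uint8)

# A faint, low-contrast row of pixels gets spread across the full range:
faint = np.array([[90, 110, 130, 150, 170]], dtype=np.uint8)
print(levels(faint))  # [[  0  42 127 212 255]]
```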

Here’s the matrix of those images, in two sets, the first with No Drugs and the second with extra addiction.

Male on the left, female on the right, 20+ on the top row and 30+ on the bottom row.

Male 20 No | Female 20 No
Male 30 No | Female 30 No

Ditto, with drugs:

Male 20 Yes | Female 20 Yes
Male 30 Yes | Female 30 Yes

So there's your basic Faces Of Aging. The yes-drugs faces have far more in common with one another as a group (and likewise the no-drugs faces) than any drugs-vs-no-drugs pair within a given age and gender category does; gaunt cheeks in the template images, and a narrowing of the jaw in the distortion of the source image, seem to be the main ways the software elects to turn someone into an addict.
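One crude way to put a number on "more in common": mean absolute pixel difference between template pairs. The arrays below are tiny synthetic stand-ins just to show the metric; with the real cropped templates you'd load image files instead:

```python
# Mean absolute per-pixel difference as a rough image-similarity score.
import numpy as np

def mean_abs_diff(a, b):
    """Average per-pixel difference between two same-sized grayscale images."""
    return float(np.mean(np.abs(a.astype(np.int16) - b.astype(np.int16))))

# Synthetic stand-ins: two "gaunt" variants differ from each other far
# less than either differs from a "plain" template.
plain = np.full((8, 8), 200, dtype=np.uint8)
gaunt_a = np.full((8, 8), 120, dtype=np.uint8)
gaunt_b = np.full((8, 8), 125, dtype=np.uint8)
print(mean_abs_diff(gaunt_a, gaunt_b))  # 5.0
print(mean_abs_diff(plain, gaunt_a))    # 80.0
```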

Of course, the site is also doing some amount of scaling and tilting, and adjusting for the obliqueness of shots that aren't perfectly face-forward portraits; to demonstrate this positional work, I threw three slightly modified versions of the line drawing at the site, which produced the following output shots, all at the Male 20 No Drugs configuration:

Tilting the head. The software compensates for rotation in the source photograph.

Head height. The software scales the image vertically and horizontally to roughly match the facial dimensions.

The most interesting of the bunch — I dislocated the nose of the drawing, and the software interpreted that as an oblique shot, distorting the underlying template a bit to follow the feature as if this were an off-angle portrait.

Add those bits of manipulation to basic x/y positioning within the frame and you’ve got some pretty solid where-to-stick-the-face-paint stuff. That the age manipulation bit is itself so low-tech — just, again, blending one of these template images onto every single face submitted — is sort of a disappointment from an image-processing nerd’s perspective but hardly surprising for a dumb little web toy. If you’re in your 20s or 30s, this thing will do a decent job of making you look like you’ve spent a couple decades smoking either tobacco or crack, depending.
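All of that positional work amounts to an affine fit of the template before blending: rotate for head tilt, scale for face size, offset for placement. A sketch with Pillow; every parameter here is made up, standing in for whatever the face detector would measure on a submitted photo:

```python
# Hypothetical positioning pass: fit a template to a detected face via
# rotation, scaling, and x/y placement. All numbers are illustrative.
from PIL import Image

template = Image.new("L", (100, 100), 128)  # stand-in for a face template

fitted = template.rotate(10, expand=False)  # compensate for head tilt
fitted = fitted.resize((80, 120))           # match facial width/height
canvas = Image.new("L", (300, 300), 0)      # stand-in for the photo frame
canvas.paste(fitted, (110, 90))             # x/y placement within the frame
print(canvas.size, fitted.size)  # (300, 300) (80, 120)
```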

Breaking things

While as a toy it works well enough within narrow parameters, it falls down pretty badly on the outliers that the software's "is this a face" check isn't sufficient to screen out — sure, you can't do a banana, but you can do a line drawing, and that's not great since, as above, the results are a revealing mess.

You can also do babies:

oh god that baby

…which isn't great either. Or is great, if you want a laugh and don't mind having it at the expense of in20years's "advanced face detection and morphing technology". And if ten thousand people link to an awful baby photo on their website, they're probably laughing right along as well.

Aside from screening failures (line drawings, babies), it's also possible to produce blending errors; the positional recognition and distortion work is pretty good, but it's far from perfect, so in situations where it has to handle more than one kind of distortion at once (and so has more chances to make a mistake), you can get monstrosities like these:

oh god | why why why

…which, well, heh.