© 2020 by Matthew Bofenkamp. Created with Wix.com

Face 2 Face Project


Completed: 2019

Genre: Edutainment

Platform(s): iOS

Description: In the summer of 2019, I interned at the Face 2 Face Project, where I researched how facial recognition in augmented reality could help autistic children practice recognizing facial expressions.

Most of the internship was exploratory, discovering the strengths and weaknesses of the technology, but I also helped build the game prototype shown here. In the game, players are given a list of facial expressions (raise eyebrows, widen eyes, smile, etc.). The list either corresponds to a real emotion or is assembled from randomly selected expressions, usually resulting in a silly face. The player must make a face using all of the listed expressions as quickly as they can. When they succeed, the game takes a photo of their face, labels it with the corresponding emotion when appropriate, and lets them save it to their photo library. Players must make as many correct faces as they can in one minute.

The game design is based on our research, which showed that the main obstacle autistic people face in recognizing facial expressions is that they generally don't know how to interpret any part of the face other than the mouth. Breaking a facial expression down into its components can therefore help autistic people learn all the features that come into play when an emotion is expressed.

Controls: 

 - Facial recognition checks whether players are making the right face.

 - Tap the refresh button to generate a new, easier list.
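The face-matching check can be sketched roughly as follows. This is a hypothetical illustration, not the project's actual code: the prototype was built with ARKit on iOS, which reports facial movements as blend-shape coefficients between 0 and 1 (keys such as `mouthSmileLeft` and `browInnerUp` come from ARKit's face-tracking vocabulary); here a plain Python dict stands in for that per-frame data, and the `Expression` type, thresholds, and function names are all made up for the example.

```python
from dataclasses import dataclass

@dataclass
class Expression:
    name: str          # label shown to the player, e.g. "smile"
    shape_key: str     # blend-shape coefficient to inspect
    threshold: float   # per-feature trigger level, tuned by hand

def is_matching(targets, face):
    """True when every expression in the target list is currently detected.

    `face` maps blend-shape keys to coefficients in [0, 1]; a missing key
    is treated as 0 (feature not detected at all).
    """
    return all(face.get(t.shape_key, 0.0) >= t.threshold for t in targets)
```

Per-feature thresholds matter because some coefficients saturate easily (an open jaw) while others barely move even at full effort (a subtle squint), which is why the prototype needed fine-tuning to make every feature trigger reliably.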

Unusual elements:

 - The vast majority of the input is based on facial expressions

 - Rewards users for their ability to make wild facial expressions, which is not a skill that is often rewarded in games

 - Brings comedy to autism therapy

My roles: 

Lead Programmer, implementing the following features:

  • Generation of a list of expressions

  • Resetting the list of expressions when correct or refreshed

  • Grouping expressions to avoid incompatible pairs (e.g. raising and lowering the brows at the same time)

  • Fine-tuning detection of various features to make sure they can all be triggered reliably

  • Difficulty buttons

  • AR camera feedback in game

  • Various technological explorations not in the prototype, including a platformer controlled by facial expressions (happy face to move right, sad face to move left, raise brows to jump); simultaneous use of the front and back cameras (this was before the release of ARKit 3, so the hardware didn't make this easy); and tracked movement of the device in 3D world space
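The list generation and grouping described above can be sketched as follows. This is an illustrative reconstruction in Python, not the project's actual Swift code: the group contents and function names are assumptions, though the idea (at most one expression per group, so incompatible pairs like raised and lowered brows can never appear together) and the difficulty caps (easy = 2, medium = 3, hard = 4 features) come from the text.

```python
import random

# Hypothetical grouping: expressions that use the same part of the face
# share a group, and a generated list draws at most one member per group.
GROUPS = [
    ["raise brows", "lower brows"],
    ["smile", "frown"],
    ["widen eyes", "squint"],
    ["open jaw"],
]

def generate_list(max_features):
    """Build a random expression list of at most `max_features` entries.

    Picks `max_features` distinct groups at random, then one expression
    from each, so no two incompatible expressions can be requested at once.
    """
    chosen = random.sample(GROUPS, k=min(max_features, len(GROUPS)))
    return [random.choice(group) for group in chosen]
```

Sampling groups rather than individual expressions makes the incompatibility constraint hold by construction, with no need to filter or retry after generating a list.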

Designer of the following features:

  • A game about mimicking a randomly generated face

  • Integrating methods of autism research into the mechanics of the refresh button

  • Saving photos of correct faces the player has made

  • UI layout

  • Deciding which emotions may be taught and how those emotions are defined

  • Difficulty levels (not shown in video; easy = maximum of two features in list, medium = maximum of three features in list, hard = maximum of four features in list)

Artist of the following features:

  • Color palette

  • Refresh, timer, and face icons