Have you downloaded FaceApp? Do you know how it works? Read on to explore machine learning and the risks of trading privacy for a free service in our commentary.
Unless you’ve been living under a rock for the past month, you’ve likely seen amusing images on social media, showing what your friends might look like in their 80s or as the opposite sex, all created by the viral app FaceApp.
You may have even tried it yourself. But let’s take a closer look at the app, particularly from a machine learning perspective. It presents an intriguing opportunity to explore the state of machine learning in 2019, as well as the issues of technology, virality, and privacy.
Technology
In machine learning, a system that generates images, such as human faces, is known as a generator neural network. The generator receives a noise vector—a list of random numbers—and uses it to create an image. The noise vector adds diversity to the process, ensuring the machine doesn’t produce the same image every time.
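The idea above can be sketched in a few lines. This is a toy generator (an assumed architecture for illustration, nothing like FaceApp's actual model): a single linear layer maps a noise vector to a small grayscale "image". A real generator stacks many learned layers, but the interface is the same, and different noise vectors produce different images.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

NOISE_DIM = 16        # length of the random input vector
IMG_SIDE = 8          # output "image" is 8x8 pixels

# Untrained random weights; training would adjust these until outputs look like faces.
weights = rng.normal(size=(NOISE_DIM, IMG_SIDE * IMG_SIDE))

def generate(noise: np.ndarray) -> np.ndarray:
    """Map a noise vector to an image in [0, 1] via a linear layer plus sigmoid."""
    logits = noise @ weights
    pixels = 1.0 / (1.0 + np.exp(-logits))   # squash values into a valid pixel range
    return pixels.reshape(IMG_SIDE, IMG_SIDE)

# Two different noise vectors give two different images: that is the
# diversity the noise vector adds to the process.
img_a = generate(rng.normal(size=NOISE_DIM))
img_b = generate(rng.normal(size=NOISE_DIM))
print(img_a.shape, np.allclose(img_a, img_b))  # (8, 8) False
```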
Training a generator to create realistic faces requires millions of training examples. In principle, the loop is simple: the generator creates an image, an evaluator points out its flaws, and the generator refines its model based on that feedback, over and over until the outputs look realistic. With a human as the evaluator, though, this would be impossibly slow.
In 2014, Ian Goodfellow introduced the idea of using two neural networks in competition rather than relying on human feedback. This concept led to Generative Adversarial Networks (GANs), where one network generates images and another acts as a critic, providing feedback on the quality of the generated images. Over time, the generator produces images that can pass the critic’s evaluation, leading to realistic outputs.
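The adversarial loop can be sketched on 1-D data instead of images, so the whole thing fits in a few lines. This is a minimal illustration under simplifying assumptions: the generator and discriminator here are tiny two-parameter models with hand-derived gradients, whereas real systems use deep networks and automatic differentiation, but the structure of the loop is the same: the critic learns to tell real from fake, and the generator learns to fool the critic.

```python
import numpy as np

rng = np.random.default_rng(seed=1)
sigmoid = lambda s: 1.0 / (1.0 + np.exp(-s))

REAL_MEAN, REAL_STD = 4.0, 0.5   # the "real data" distribution to imitate
g_w, g_b = 1.0, 0.0              # generator: fake sample x = g_w * z + g_b
d_w, d_b = 0.1, 0.0              # discriminator (critic): D(x) = sigmoid(d_w*x + d_b)
LR, BATCH = 0.02, 64

for step in range(3000):
    real = rng.normal(REAL_MEAN, REAL_STD, BATCH)
    z = rng.normal(size=BATCH)           # noise vector input
    fake = g_w * z + g_b                 # generated samples

    # Discriminator update: push D(real) toward 1 and D(fake) toward 0.
    d_real, d_fake = sigmoid(d_w * real + d_b), sigmoid(d_w * fake + d_b)
    grad_dw = np.mean(-(1 - d_real) * real) + np.mean(d_fake * fake)
    grad_db = np.mean(-(1 - d_real)) + np.mean(d_fake)
    d_w -= LR * grad_dw
    d_b -= LR * grad_db

    # Generator update: push D(fake) toward 1 (the non-saturating GAN loss).
    d_fake = sigmoid(d_w * fake + d_b)
    grad_gw = np.mean(-(1 - d_fake) * d_w * z)
    grad_gb = np.mean(-(1 - d_fake) * d_w)
    g_w -= LR * grad_gw
    g_b -= LR * grad_gb

# Over training, generated samples should drift toward the real distribution.
samples = g_w * rng.normal(size=1000) + g_b
print(round(float(np.mean(samples)), 2))
```

The key design point is that no human feedback appears anywhere in the loop: the critic's gradient is the feedback.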
A conditional GAN extends this idea by adding categorical input, such as generating only elderly faces. This requires labeled training data so that the system can understand the features of an older face. The generator uses this data to create faces of different ages, while the discriminator uses it to assess whether the generated face fits the age category.
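One common way to wire in the categorical input (an assumed scheme for illustration; implementations vary) is to one-hot encode the label and concatenate it to the generator's noise vector, and likewise to the features the discriminator sees. Labeled training data is what lets both networks associate visual features with each category.

```python
import numpy as np

rng = np.random.default_rng(seed=2)

AGE_GROUPS = ["child", "young_adult", "middle_aged", "elderly"]
NOISE_DIM = 16

def one_hot(label: str) -> np.ndarray:
    """Encode an age category as a one-hot vector."""
    vec = np.zeros(len(AGE_GROUPS))
    vec[AGE_GROUPS.index(label)] = 1.0
    return vec

def generator_input(noise: np.ndarray, label: str) -> np.ndarray:
    """The generator sees both randomness and the requested category."""
    return np.concatenate([noise, one_hot(label)])

def discriminator_input(image_features: np.ndarray, label: str) -> np.ndarray:
    """The critic judges realism *given* the claimed category."""
    return np.concatenate([image_features, one_hot(label)])

z = rng.normal(size=NOISE_DIM)
g_in = generator_input(z, "elderly")
print(g_in.shape)  # (20,)
```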
FaceApp takes this even further: it generates faces that resemble a specific person. This requires an identity-preserving conditional GAN. The generator first extracts a representation of the input face that captures the person's identity, independent of any category. That representation is then fed into the conditional GAN along with the desired category (such as age), so the output is altered accordingly but still looks like the same person.
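The identity-preserving idea can be sketched as follows. All shapes and functions here are hypothetical placeholders, not FaceApp's actual model: an encoder squeezes the input photo into an identity embedding, the generator is conditioned on that embedding plus the target category, and an identity loss penalizes outputs whose embedding drifts away from the original person's.

```python
import numpy as np

rng = np.random.default_rng(seed=3)

IMG_PIXELS, ID_DIM, N_CATEGORIES = 64, 8, 4
enc_w = rng.normal(size=(IMG_PIXELS, ID_DIM)) * 0.1          # untrained encoder weights
gen_w = rng.normal(size=(ID_DIM + N_CATEGORIES, IMG_PIXELS)) * 0.1

def encode(face: np.ndarray) -> np.ndarray:
    """Identity embedding: captures *who* is in the photo, not the category."""
    return np.tanh(face @ enc_w)

def generate(face: np.ndarray, category: np.ndarray) -> np.ndarray:
    """Re-render the same identity under a new category (e.g. 'elderly')."""
    return np.tanh(np.concatenate([encode(face), category]) @ gen_w)

def identity_loss(original: np.ndarray, generated: np.ndarray) -> float:
    """Training penalty that keeps the output looking like the same person."""
    return float(np.mean((encode(original) - encode(generated)) ** 2))

face = rng.normal(size=IMG_PIXELS)                 # a flattened input photo
elderly = np.array([0.0, 0.0, 0.0, 1.0])           # one-hot target category
aged_face = generate(face, elderly)
print(aged_face.shape, identity_loss(face, aged_face) >= 0.0)  # (64,) True
```

During training, this identity loss would be added to the usual adversarial loss, so the generator is rewarded both for realism and for preserving the person's likeness.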
Barriers to Entry
Importantly, all of this technology was widely accessible in 2019. FaceApp likely built on open-source algorithms and added its own "secret sauce": identifying the key features in facial images and gathering large datasets categorized by conditions like age and expression.
One of the key takeaways from FaceApp is that there are no longer significant barriers to entry for machine learning. A Python programmer can now create an app that uses cutting-edge academic algorithms without fully understanding their inner workings. This opens the door to tremendous innovation—how these algorithms are combined, what problems they solve, and where training data can be sourced.
Privacy and Virality
The fundamental rule for evaluating free online services is: If the service is free, you’re the product. So, how does FaceApp plan to monetize its free app? There are multiple options, such as selling advanced photo-editing features or using the images uploaded by users as training data that could later be sold to other companies.
Regardless of the business model, it's clear that users sacrifice some privacy when they use FaceApp. The app requests access to your camera and photos, and users grant FaceApp perpetual rights to any images they upload. These images could fall into the wrong hands and be misused, for example to create realistic fake images of you at a crime scene that could fool facial recognition software, which is increasingly used for identification.
Despite these risks, millions of users have happily shared their photos, curious about what they might look like decades from now. This speaks volumes about the current state of privacy in 2019. We're becoming more comfortable giving up our privacy in exchange for free, fun online services. And while regulations like GDPR reflect growing concern about privacy, our personal data often remains unprotected closer to home.
Source: Forbes India