Your face for sale: anyone can legally gather and market your facial data without explicit consent

The morning started with a message from a friend: “I used your photos to train my local version of Midjourney. I hope you don’t mind”, followed by generated pictures of me in a flirty steampunk costume.

I did in fact mind. I felt violated. Wouldn’t you? I bet Taylor Swift did when deepfakes of her hit the internet. But is the legal status of my face different from the face of a celebrity?

Your facial information is a uniquely sensitive form of personal information: it can identify you. Intense profiling and mass government surveillance receive much attention, but businesses and individuals are also using tools that collect, store and modify facial information, and we’re facing an unexpected wave of photos and videos generated with artificial intelligence (AI) tools.

The development of legal regulation for these uses is lagging. At what levels and in what ways should our facial information be protected?

Is implied consent enough?

The Australian Privacy Act treats biometric information (which would include your face) as part of our sensitive personal information. However, the act doesn’t define biometric information.

Despite its drawbacks, the act is currently the main legislation in Australia aimed at facial information protection. It states biometric information cannot be collected without a person’s consent.

But the law doesn’t specify whether it should be express or implied consent. Express consent is given explicitly, either orally or in writing. Implied consent means consent may reasonably be inferred from the individual’s actions in a given context. For example, if you walk into a store that has a sign “facial recognition camera on the premises”, your consent is implied.

An inconspicuous sign flagging that a camera technology trial is in progress counts as implied consent. Margarita Vladimirova

But using implied consent opens our facial data up to potential exploitation. Bunnings, Kmart and Woolworths have all used easy-to-miss signage that facial recognition technology is used in their stores.

Valuable and unprotected

Our facial information has become so valuable that data companies such as Clearview AI and PimEyes are mercilessly hunting it down on the internet without our consent.

These companies put together databases for sale, used not only by the police in various countries, including Australia, but also by private companies.

Even if you deleted all your facial data from the internet, you could easily be captured in public and appear in some database anyway. Being in someone’s TikTok video without your consent is a prime example – in Australia this is legal.

Furthermore, we’re now also contending with generative AI programs such as Midjourney, DALL-E 3 and Stable Diffusion. Anyone can now easily not only collect our facial information, but also modify it.

Our faces are unique to us; they’re part of what we perceive as ourselves. But they have no special legal status or protection.

The only action you can take to protect your facial information from aggressive collection by a store or private entity is to complain to the Office of the Australian Information Commissioner, which may or may not result in an investigation.

The same applies to deepfakes. The Australian Competition and Consumer Commission will consider only activity that applies to trade and commerce, for example if a deepfake is used for false advertising.

And the Privacy Act doesn’t protect us from other people’s actions. I didn’t consent to have someone train an AI with my facial information and produce made-up images. But there is no oversight on such use of generative AI tools, either.

There are currently no laws that prevent other people from collecting or modifying your facial information.


Helping the law catch up

We need a range of regulations covering the collection and modification of facial information, as well as a stronger legal status for facial information itself. Thankfully, some developments in this area look promising.

Experts at the University of Technology Sydney have proposed a comprehensive legal framework for regulating the use of facial recognition technology under Australian law.

It contains proposals for regulating the first stage of non-consensual activity: the collection of personal information. That may help in the development of new laws.

Regarding photo modification using AI, we’ll have to wait for announcements from the newly established government AI expert group working to develop “safe and responsible AI practices”.

There are no specific discussions about a higher level of protection for our facial information in general. However, the government’s recent response to the Attorney-General’s Privacy Act review has some promising provisions.

The government has agreed further consideration should be given to enhanced risk assessment requirements in the context of facial recognition technology and other uses of biometric information. This work should be coordinated with the government’s ongoing work on Digital ID and the National Strategy for Identity Resilience.

As for consent, the government has agreed in principle that the definition of consent required for biometric information collection should be amended to specify it must be voluntary, informed, current, specific and unambiguous.

As facial information is increasingly exploited, we’re all waiting to see whether these discussions become law – hopefully sooner rather than later.