The ethics of biometric surveillance in the workplace
Most employees are used to the sight of video cameras recording them in the workplace. But now, with AI-powered technology like facial recognition systems and biometric ‘wearables,’ workplace surveillance is fast becoming far more invasive.
We’ve previously written about workplace surveillance systems that track employees’ productivity in real time. These systems typically display a worker’s productivity against set targets on a screen that they – and everyone else – can see.
While this sort of workplace monitoring can place extreme pressure on workers to hit often unrealistic targets, it doesn’t typically capture biometric data. That is, data about a worker’s unique physical characteristics, like their face.
The risks of facial recognition and biometric surveillance technology
The difference between facial recognition systems and traditional workplace surveillance technology is that the former provides employers with biometric data about their workers. As you’ll read in this article, biometric surveillance technology like this has the potential not only to breach the privacy of workers, but also to racially discriminate against them, as in the case of a black Uber Eats driver covered later in this article.
And soon, as you’ll read later, some employers will have the ability to use biometric technology to monitor their employees’ emotions. While the developers of this kind of technology may be well intentioned, there are obvious risks to the privacy of workers. This was never more evident than in the recent case of Bunnings.
Bunnings breached privacy laws with use of facial recognition technology
In November 2024, the Office of the Australian Information Commissioner (OAIC) concluded that hardware retailer Bunnings contravened the federal Privacy Act 1988 by using facial recognition technology in its stores without obtaining proper consent from customers.
Between November 2018 and November 2021, Bunnings deployed facial recognition technology in 63 of its stores across Victoria and New South Wales. The system used CCTV cameras to capture the faces of everyone who entered these stores. It then matched these images against a database of persons previously identified as posing a potential security risk.
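To make the mechanics concrete, here is a minimal sketch of the general one-to-many ‘watchlist’ matching technique that systems like this rely on. It is purely illustrative and is not Bunnings’ actual software: the embeddings, similarity threshold and watchlist entries below are all hypothetical assumptions.

```python
# Illustrative sketch of 1:N "watchlist" face matching. NOT Bunnings' actual
# system: embeddings, threshold and watchlist entries are placeholders.
import numpy as np

WATCHLIST = {  # hypothetical database of people flagged as security risks
    "flagged_person_A": np.random.rand(128),
    "flagged_person_B": np.random.rand(128),
}
MATCH_THRESHOLD = 0.8  # assumed similarity cut-off for declaring a match


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def check_visitor(face_embedding: np.ndarray) -> str | None:
    """Compare one captured face against every watchlist entry.

    Returns a matched identity, or None, in which case the caller discards
    the embedding immediately (the near-instant deletion Bunnings described).
    """
    for identity, reference in WATCHLIST.items():
        if cosine_similarity(face_embedding, reference) >= MATCH_THRESHOLD:
            return identity
    return None  # nothing retained for non-matches
```

Note that, on the OAIC’s reasoning, the legally significant ‘collection’ of biometric information happens at the moment of capture, regardless of how quickly a system like this discards non-matching data.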
Bunnings’ system ‘well-intentioned’ but not justifiable
According to Bunnings, facial images of individuals not flagged as matches were deleted within an average of 4.17 milliseconds. Nevertheless, the OAIC determined that the collection of sensitive biometric information occurred regardless of the duration for which the data was retained.
Privacy Commissioner Carly Kind said that Bunnings’ use of facial recognition technology may have been a “well-intentioned” effort to prevent “unlawful activity” in its stores. However, she said that this aim “does not mean its use is justifiable.”
Bunnings ordered to stop using facial recognition software
The Commissioner’s investigation found that Bunnings collected customers’ sensitive information without obtaining their consent. The company had also failed to take reasonable steps to notify individuals of this practice. Ms Kind said that those who entered Bunnings stores “would not have been aware that facial recognition technology was in use.”
The OAIC ordered Bunnings to cease using facial recognition technology. The company was also required to destroy all personal and sensitive information it had collected and to publish a statement on its website detailing its non-compliance.
Google rolls out facial recognition workplace surveillance system
Of course, Australian companies aren’t the only ones using facial recognition technology. In June 2024, news broke that Google had started trialling facial recognition technology for its employees. The workplace surveillance technology was implemented at its campus in Kirkland, Washington in America’s north-west. Google claimed it rolled out the technology to bolster security and prevent unauthorised access.
According to an internal document seen by US news outlet CNBC, the technology collects the facial data of employees. It then compares this data to the photos on employees’ badges. The system aims to identify individuals who “pose a security risk to Google’s people, products, or locations.”
Google employees don’t have the ability to opt out of being captured by the system. However, the company’s internal document stated that facial data will be “strictly for immediate use and not stored.”
Google had previous high-profile security incident
While the rollout of the facial recognition workplace surveillance technology was not spurred by a single incident, Google did previously experience a serious security incident. In 2018, at its YouTube office in California, a woman shot at employees, injuring three. The shooter allegedly targeted the YouTube office as she “hated” the company for taking her videos down.
Video surveillance used to identify disobedient employees
Google told media that it “has no known plans to use its facial recognition technology for monitoring in-person attendance.” However, it has previously used video surveillance to identify disobedient employees.
This was the case after around 50 Google workers protested the company’s contract with the Israeli government in April 2024. Google’s vice president of global security later told staff internally that the company had used video surveillance footage to identify workers who took part in the protest. Twenty-eight of those workers were later dismissed, and others were placed on forced leave.
‘Dystopian’: Employees react to facial recognition workplace surveillance tech
Opinions among Google employees about the facial recognition technology were mixed. An employee at its Kirkland campus told media: “I personally don’t have too much of an issue with it and maybe even welcome it assuming it works well.”
Others, however, have raised concerns. One Kirkland-based employee described the system as “a little dystopian,” highlighting anxieties over data privacy and potential misuse. “A lot are concerned about facial data being stored by Google. Data is extremely valuable,” the employee commented.
Another employee revealed that they learned about the workplace surveillance system only after it was reported in the media.
Facial biometric surveillance tech: Ensuring worker security or breaching their privacy?
According to many employers, the purpose of biometric workplace surveillance technology like facial recognition systems isn’t to monitor employees, but rather to ensure the security of their workplaces.
Bunnings defended its use of the biometric surveillance technology, telling media that it was only used to identify customers who were likely to engage in violence or criminal behaviour. The company said facial recognition technology was the “fastest and most accurate way of identifying these individuals and quickly removing them from our stores.”
Ensuring security may be a justifiable reason to subject customers and employees to facial recognition surveillance. However, some employers may soon be able to use the technology to actually monitor the emotions of their workers.
Australian company pioneering emotion-tracking wearable
In October 2024, Australian AI firm InTruth made headlines for wearable technology that it claims is the first in the world to provide clinical-grade emotional tracking. The wrist-worn device can track the wearer’s emotions in real time. InTruth’s founder Nicole Gibson told media the wearable could be used in workplace settings. She said it would be an “AI emotion coach that knows everything about you, including what you’re feeling and why you’re feeling it.”
For instance, she claimed a police or fire department could use it to track the emotions of staff and therefore anticipate mental health issues like post-traumatic stress disorder. Or an employer could use it to monitor the performance and energy of office staff, Ms Gibson told The Sydney Morning Herald.
“We basically pull the raw data from the wearables, and we have built a machine-learning model that translates that data into emotions that are fed back to the user,” she told the paper.
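InTruth has not published its model, so as a rough illustration only, the following sketch shows the generic pipeline Ms Gibson describes: raw wearable readings fed to a trained classifier that outputs an emotion label. The features, labels and training data below are invented placeholders, not InTruth’s actual system.

```python
# Generic sketch of a wearable-signals-to-emotions pipeline. This is NOT
# InTruth's model; features, labels and data are invented placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

EMOTIONS = ["calm", "stressed", "anxious"]  # hypothetical label set

# Hypothetical training rows: [heart_rate_bpm, skin_conductance_uS, skin_temp_C]
X_train = np.array([
    [62.0, 1.1, 33.5],   # calm
    [95.0, 4.8, 34.9],   # stressed
    [110.0, 6.2, 35.2],  # anxious
    [60.0, 0.9, 33.2],   # calm
    [98.0, 5.1, 35.0],   # stressed
    [115.0, 6.8, 35.4],  # anxious
])
y_train = [0, 1, 2, 0, 1, 2]

model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(X_train, y_train)


def feed_back_emotion(reading: list[float]) -> str:
    """Translate one reading from the wrist-worn device into an emotion label."""
    return EMOTIONS[model.predict([reading])[0]]


print(feed_back_emotion([102.0, 5.5, 35.1]))  # e.g. "stressed"
```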
Nicole Gibson previously served as Australia’s youngest-ever National Mental Health Commissioner and has had a long career in mental health. InTruth’s wearable technology may be designed to help employers spot, and therefore help prevent, mental health issues in their staff. However, there are of course potential risks to the privacy of workers who may be forced to wear such devices in the future.
“We will have the most accurate emotion-detector in the world, collecting millions of people’s emotional data. It’s going to have huge upside,” Ms Gibson claimed to The Sydney Morning Herald.
Uber Eats driver wins compensation for racially biased facial recognition verification
In March 2024, the facial recognition technology used by Uber Eats to vet its delivery drivers made global headlines, all for the wrong reasons. This came after UK-based Pa Edrissa Manjang, a black delivery driver, took Uber Eats to court alleging discrimination.
Mr Manjang began working for Uber Eats in Oxfordshire in November 2019. Initially, the app used by drivers to log in and access jobs did not require facial recognition verification. However, in March 2020, the company introduced a facial recognition feature that required drivers to upload real-time selfies to verify their identities.
The new system caused repeated issues for Mr Manjang, as the technology struggled to recognise his face correctly. This led to his suspension from the platform in 2021 following multiple failed recognition attempts. Uber Eats stated that there were “continued mismatches” in the photos submitted by Mr Manjang, even though his appearance had not changed.
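For illustration only, here is a sketch of the one-to-one verification loop described above, including an automated suspension after repeated mismatches. It is not Uber’s actual implementation; the threshold and mismatch limit are assumptions. It also shows how such a system can discriminate: if the underlying model is less accurate on darker-skinned faces, the same fixed threshold produces more false mismatches, and therefore more automated suspensions, for those drivers.

```python
# Sketch of real-time selfie verification with automated suspension. NOT
# Uber's actual implementation; threshold and mismatch limit are assumptions.
import numpy as np

VERIFY_THRESHOLD = 0.85  # assumed: below this, the selfie is a "mismatch"
MAX_MISMATCHES = 3       # assumed: automated suspension trigger


def similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


class DriverAccount:
    def __init__(self, reference_embedding: np.ndarray):
        self.reference = reference_embedding  # embedding of the photo on file
        self.mismatches = 0
        self.suspended = False

    def verify_selfie(self, selfie_embedding: np.ndarray) -> bool:
        """One real-time log-in check; repeated failures suspend the account."""
        if similarity(selfie_embedding, self.reference) >= VERIFY_THRESHOLD:
            self.mismatches = 0
            return True
        self.mismatches += 1
        if self.mismatches >= MAX_MISMATCHES:
            self.suspended = True  # automated, with no human review here
        return False
```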
Worker claimed facial recognition tech was racist
In October 2021, Mr Manjang filed claims with an employment tribunal, citing indirect racial discrimination, harassment and victimisation. His efforts were funded by the Equality and Human Rights Commission and the App Drivers and Couriers Union.
Mr Manjang argued to the tribunal that the repeated facial recognition checks constituted racial harassment. He also claimed that his suspension, which was an automated process triggered by the software, was racially biased.
Settlement reached
Uber Eats ended up agreeing to a financial settlement with Mr Manjang before his case went to a full tribunal hearing. The company, however, defended its use of facial recognition technology.
“Our real-time ID check is designed to help keep everyone who uses our app safe and includes robust human review to make sure that we’re not making decisions about someone’s livelihood in a vacuum, without oversight,” an Uber Eats spokesperson said.
While Mr Manjang’s account was reinstated following his suspension, the processes leading to this reinstatement were not fully explained.
Can you be dismissed for refusing biometric surveillance monitoring at work?
A pivotal Fair Work Commission case that examined this question is Lee v Superior Wood Pty Ltd [2019]. Jeremy Lee began working for timber processing company Superior Wood in November 2014. He worked at one of the company’s sawmill sites in Queensland as a general factory hand. His role involved operating forklifts, handling machinery and performing general tasks related to timber milling.
In October 2017, Superior Wood announced the introduction of a workplace surveillance system at the sawmill site that used biometric scanners. The new system required all employees to use fingerprint scanners to log attendance and track shift times. This policy aimed to improve accuracy in recording employee work hours and bolster workplace efficiency.
Mr Lee, however, voiced strong concerns about the collection and storage of his biometric data. He was worried about data security and third-party access, namely the potential misuse of, or unauthorised access to, his personal information.
Worker is dismissed
Mr Lee had multiple discussions with Superior Wood’s management about the workplace surveillance system, but the issue remained unresolved. He continued to sign in for work using manual sign-in methods. Mr Lee was verbally warned that he needed to sign in using the scanner and that if he didn’t, he would be fired.
Then, about a month later on 12 February 2018, Superior Wood terminated his employment. It said that his refusal to comply with the company’s Site Attendance Policy justified his dismissal. Mr Lee subsequently filed an unfair dismissal application with the Fair Work Commission.
Employer argues it had right to collect biometric data
At his unfair dismissal hearing, Mr Lee contended that the biometric data contained within his fingerprint belonged to him. He argued that it met the classification of ‘sensitive information’ under the federal Privacy Act 1988. Therefore, he claimed that Superior Wood did not have the right to collect this information.
However, the company argued to the Fair Work Commission that an exemption applied to the collection of this data under the Privacy Act 1988. Specifically, it cited section 7B(3) of the Act, which exempts an employer’s acts from the Act’s requirements where they are directly related to a current or former employment relationship and an employee record held by the employer.
Fair Work Commission rejects claim initially
At Mr Lee’s unfair dismissal hearing, the Fair Work Commission found that Superior Wood’s requirement for him to use the biometric scanner was lawful and reasonable. It agreed with the company’s argument that it was able to collect sensitive information about Mr Lee as it was necessary for workplace functions. The Fair Work Commission therefore ruled that Superior Wood had a valid reason to dismiss him.
Worker appeals decision
However, Mr Lee subsequently appealed this decision. His claim was therefore brought before the Full Bench of the Fair Work Commission. The Full Bench determined that Mr Lee’s employment contract did not incorporate the new Site Attendance Policy.
The policy was introduced almost three years after his employment began, and there was no evidence that his contract had been varied. Therefore, Mr Lee’s compliance with the policy was contingent on whether the directive was lawful and reasonable.
Full Bench finds Privacy Act exemption didn’t apply
Superior Wood had argued that its collection of fingerprint data fell within the employee records exemption in section 7B(3) of the Privacy Act. However, the Full Bench found that the exemption only applies to records “held” by an employer. Since Mr Lee’s biometric data had not yet been collected, the exemption was inapplicable.
The Full Bench interpreted Australian Privacy Principle 3 (APP 3) to prohibit the collection of sensitive information, such as biometric data, without the individual’s consent. Since Mr Lee explicitly withheld consent, the directive to provide fingerprint data was found to be inconsistent with APP 3.
Fair Work’s decision overturned
Ultimately, the Full Bench found the directive for Mr Lee to provide his biometric data was unlawful. It stated that a necessary counterpart to the right to consent is the right to refuse. The Full Bench also pointed out that alternative methods for recording Mr Lee’s attendance were available.
The Full Bench therefore overturned the initial decision of the Fair Work Commission, concluding that Mr Lee’s dismissal was unfair. It found that the directive to provide biometric data was neither lawful nor reasonable, and therefore there was no valid reason for the dismissal.
Have you been unfairly dismissed?
We at A Whole New Approach can help you make a claim and guide you through the process with the Fair Work Commission. We are Australia’s leading workplace mediators, having helped over 16,000 employees take action.
Call us today on 1800 333 666 for a free and confidential conversation.