The emergence of artificial intelligence that uses cameras to detect workplace health and safety lapses has raised concerns about a creeping culture of workplace surveillance and a lack of worker protections.
- AI can use cameras to monitor workplaces for health and safety violations and hazards
- Company with Australian customers says face blurring is part of privacy safeguards
- Experts say Australian law is not up to date to sufficiently regulate the growing use of AI in the workplace
Artificial intelligence technology that uses CCTV cameras can be trained to identify violations, such as when a worker is not wearing gloves or a helmet, or to identify hazards such as spills.
One company, Intenseye, reports having several Australian customers for the new technology, including a major mining company.
But Nicholas Davis, professor of emerging technologies at the University of Technology Sydney, said this latest use of AI raised questions about the rise of an industry that relies on constant surveillance of workers.
“While this is just a small example that may be justified on some health and safety grounds – potentially justified – there are probably a million other use cases where similar technology can also be justified,” Professor Davis said.
The Office of the Australian Information Commissioner (OAIC) said it was aware of the growing use of technology, including AI technology, to monitor behavior in the workplace.
“Our office has received inquiries and complaints about workplace surveillance generally,” the OAIC said in a statement.
Company says workers are protected
Although artificial intelligence is already used in many ways in Australian workplaces, pairing AI with CCTV is an emerging application.
Intenseye uses cameras to monitor worksites and provide “real-time breach notifications”.
The company said its system blurs individuals’ faces to prevent retaliation for violations and to protect workers’ privacy.
Intenseye’s customer success manager, David Lemon, said there had been instances where customers asked for faces not to be blurred, or for other information that he said would be an invasion of privacy.
But he said the company would not provide that information.
He said there was growing demand for the technology, which could be trained to identify behaviors or violations based on an employer’s specific concerns.
Alerts about breaches appeared on a cloud-based digital platform, and Mr Lemon said the company had developed a new system that removed the human figure from video footage, providing the employer with only a “stick figure” visual.
Mr Lemon said the company was aware of its obligations to protect employee privacy and had sought legal advice to ensure it complied with data and privacy laws in various countries.
He said the company complies with industry regulations and has been audited by the AI Ethics Lab.
“It’s cutting edge technology, it’s the frontier, it’s very new,” he said.
“Even customers with a big appetite for computer vision have fears simply because it’s change. It’s new. It can often be scary.”
Laws lagging behind the technology
Professor Davis, who studies the regulation of technology as it relates to human rights, said the emergence of this type of technology raised questions about consent, safety culture and employer accountability when the AI makes errors.
While companies can take steps to ensure ethical use of AI, he said Australia’s surveillance laws were not equipped to effectively regulate its use or define what its limits should be.
“It doesn’t anticipate things like breakthroughs in machine learning,” he said.
The Privacy Act 1988 is currently under review by the federal government, with the emergence of artificial intelligence technologies listed as one of the reasons for the review.
Currently, the law does not specifically address workplace surveillance, although it does require employers to give notice if they intend to collect personal information.
Professor Davis is part of a UTS team, including former human rights commissioner Ed Santow, that is working on a model law to regulate the use of facial recognition technology.
“There is a recognition or realization that we need much more dynamic, flexible and appropriate regulation for these kinds of technologies,” he said.
“I think employers increasingly have to be very rigorous and skeptical, and challenge the products being marketed to them [where] their operation is not very clear.”
Cameras are here to stay
The Department of Industry, Science and Resources has developed an Artificial Intelligence Ethics Framework for Business to test AI systems against a set of ethical principles.
But economist and director of the Australia Institute’s Centre for Future Work, Jim Stanford, said the lack of regulation left the technology open to misuse and abuse.
“You have to have legal protection, you have to have enforcement, you have to have oversight,” he said.
Mr Stanford, co-author of a report on electronic monitoring and surveillance in Australian workplaces, said employers must also consider the health and behavioral impacts of constant surveillance.
“If people feel like they’re being watched all the time, they’ll do whatever they can to try to keep the boss happy,” he said.
“That in itself can lead to acceleration and intensification of labor which is bad for long-term health.”
Mr Stanford said he was not opposed to the presence of video cameras in the workplace, noting their use was already widespread.
“The question is: how is it used? And what kind of protections do people have?” he said.
“And that’s where the Australian regulatory regime is very, very far behind the technology.”