ICO warns of fines for ‘harmful’ use of AI

esteria.white

Britain’s privacy regulator has warned of a loss of public trust in AI and said any use of the technology that breaches data protection law would result in strict enforcement action.

Speaking at techUK’s Digital Ethics Summit 2023 on Wednesday, Information Commissioner John Edwards highlighted organizations using AI for “nefarious purposes” to harvest data or process customers unfairly.

“We know there are bad actors who don’t respect people’s information and who use AI to gain an unfair advantage over their competitors. Our message to these organizations is clear: failure to comply with data protection will not be profitable. Persistent misuse of customer information, or misuse of AI in these situations, in order to gain commercial advantage will be punished,” he said.

“Where appropriate, we will seek to impose fines proportionate to the ill-gotten gains achieved through non-compliance. But fines are not the only tool in our toolbox. We can order companies to stop processing the information and delete everything they have collected, as we did with Clearview AI.”

The Information Commissioner’s Office (ICO) fined Clearview AI £7.5m ($9.4m) last year for breaching UK data protection rules. However, the facial recognition software maker later won an appeal against the fine after a tribunal found that the processing of data on UK citizens was carried out only by Clearview’s customers outside the UK – primarily US law enforcement agencies.


Edwards also told conference attendees of his fears that public trust in AI is waning.

“If people don’t trust AI, then they are less likely to use it, leading to reduced benefits and less growth or innovation in society as a whole,” he argued. “This needs to be addressed: 2024 cannot be the year consumers lose trust in AI.”

To maintain public trust in technology, developers must ensure they build privacy into their products from the design phase, Edwards said.

“Privacy and AI go hand in hand – there is no either/or here. You cannot expect to use AI in your products or services without considering data protection and how you will protect people’s rights,” he added.

“There is no excuse for not ensuring that people’s personal information is protected if you use AI systems, products or services.”
