Archive for Legislation

Leaving Your Job? Don’t Take Personal Data With You, Warns ICO

The Information Commissioner’s Office (ICO) has warned those retiring or taking a new job that under the Data Protection Act 2018, employees can face regulatory action if they are found to have retained information collected as part of their previous employment.

Old Investigation

The renewed warning was issued after the regulator concluded an old investigation into two former police officers who had been interviewed by the media about a historic case involving an MP which they had worked on as serving officers, and who had been accused of disclosing details about the case to the media.

In this case, the investigation appears to have related to the police handling of personal data such as notebooks, and the need for measures to ensure that these are not retained when officers leave the service.

The ICO investigation, brought under the previous Data Protection Act 1998 (because the alleged disclosure occurred before the DPA 2018 and the GDPR came into force), may have resulted in no enforcement action being taken against the two officers, but it prompted the ICO to issue a reminder that data protection laws have since been toughened in this area.

“Knowingly or Recklessly Retaining Personal Data”

The warning in the ICO’s recent statement is that the Data Protection Act 1998 has since been strengthened through the Data Protection Act 2018, which includes a new offence of knowingly or recklessly retaining personal data without the consent of the data controller (see section 170 of the DPA 2018).

The only exceptions to this new part of the Act are where retention is necessary for the purposes of preventing or detecting crime, is required or authorised by an enactment, a rule of law or the order of a court or tribunal, or is justified as being in the public interest.

Retiring or Taking a New Job

The ICO has warned that anyone who deals with the personal details of others in the course of their work, whether in the private or public sector, should take note of this update to the law, especially when retiring or taking on a new job. Those leaving or retiring should also note that they will be held responsible if a breach of personal data from their previous employer can be traced to their individual actions.

Examples

Examples of where the ICO has prosecuted for this type of breach include a charity worker who, without the knowledge of the data controller, Rochdale Connections Trust, sent emails from his work email account (in February 2017) containing the sensitive personal information of 183 people. Also, a former council schools admissions department apprentice was found guilty of screenshotting a spreadsheet containing information about children and their eligibility for free school meals, and then sending it to a parent via Snapchat.

What Does This Mean For Your Business?

This latest statement from the ICO should remind all businesses and organisations, whether in the private or public sector, that reasonable measures and procedures need to be put in place to ensure that anyone retiring or leaving for another job cannot take with them personal details that should be under the care of the data controller, i.e. you and your company/organisation.

Failure to take this facet of current data law into account could result in fines from the regulator for the individuals responsible, potential legal action against your organisation from the victims of any breach, and bad publicity leading to costly and long-lasting damage to reputation.

Video Labelling Causes Problems

Google has already been criticised by some for not calling out China over disinformation about Hong Kong, but despite disabling 210 YouTube channels with suspected Chinese state links, Google’s new move to label Hong Kong YouTube videos hasn’t gone down well.

Big Social Media Platforms Act

Facebook and Twitter recently announced that they have banned a number of accounts on their platforms due to what the popular social media platforms are calling “coordinated influence operations”: in other words, Chinese state-sponsored communications designed to influence opinion towards pro-Beijing viewpoints and to spread disinformation. Twitter and Facebook are both blocked in mainland China by the country’s notorious firewall, but both platforms can be accessed in Hong Kong, and Twitter recently suspended over 900 accounts believed to originate in China. The reasons for the suspensions included spam, fake accounts and ban evasion.

Google Labels Videos

Google’s response, which some critics have seen as late in any case, has been to add information panels to videos on its Hong Kong-facing site saying whether the video has been uploaded by a media organisation that receives government or public funding. The panels, which are live in 10 regions, are intended to give viewers an insight into whether the videos are state-funded or not.

Problem

Unfortunately, Google did not consider the fact that some media organisations receive government funding but are editorially independent, and the labelling has effectively put them in the same category as media that purely spread government information.

Google and China

Many commentators have noted an apparent reluctance by Google to distance itself from the more repressive side of the Chinese state. For example, Google has been criticised for not publicly criticising China over the state’s disinformation campaign about the Hong Kong protests. Also, Google was recently reported to have had a secret plan (Project Dragonfly) to develop a censored search engine for the Chinese market, and it has been reported that Google has an AI research division in China.

Disinformation By Bot? Not

There have been fears that, just as bots can be a time- and cost-saving way of writing and distributing information, they could also be used to write disinformation, and could soon even rival human writers in ability. For example, the text generator built by the research firm OpenAI had (until recently) been considered too dangerous to release in its ‘trained’ version because of the potential for abusing it to write disinformation. In tests by the BBC with AI experts, including a Sheffield University professor, however, it proved relatively ineffective at generating meaningful text from input headlines, although it did appear able to reflect news bias in its writing.

What Does This Mean For Your Business?

The influence exerted via social media in the last US presidential election campaign and the UK referendum (with the help of Cambridge Analytica) brought the whole subject of disinformation into sharp focus, and the Chinese state media’s response to the Hong Kong demonstrations has given more fuel to the narrative from the current US administration (the Huawei accusations and the trade war) that China should be considered a threat. Google’s apparent lack of public criticism of Chinese state media disinformation efforts contrasts with the response of social media giants Facebook and Twitter, and this, coupled with reports of the company trying to develop a censored search engine for China to allow it back into the market there, means that Google is likely to be scrutinised and criticised by US state voices.

It is difficult for many users of social media channels to spot bias and disinformation, and although Google may have tried to do the right thing by labelling videos, its failure to take account of the media structure in China has meant more criticism for Google.  As an advertising platform for businesses, Google needs to take care of its public image, and this kind of bad publicity is unlikely to help.

Facial Recognition at King’s Cross Prompts ICO Investigation

The UK’s data protection watchdog, the Information Commissioner’s Office (ICO), has said that it will be investigating the use of facial recognition cameras at King’s Cross by the property development company Argent.

What Happened?

Following reports in the Financial Times newspaper, the ICO says that it is launching an investigation into the use of live facial recognition in the King’s Cross area of central London. It appears that Argent had been using the technology for an as-yet-undisclosed period, with an as-yet-undisclosed number of cameras. A statement by Argent reported in the Financial Times says that the company had been using the system to “ensure public safety”, and that facial recognition is one of several methods it employs to this end.

ICO

The ICO has said that, as part of its enquiry, as well as requiring detailed information from the relevant organisations (Argent in this case) about how the technology is used, it will also inspect the system and its operation on-site to assess whether or not it complies with data protection law.

The data protection watchdog has made it clear in a statement on its website that if organisations want to use facial recognition technology, they must comply with the law and do so in a fair, transparent and accountable way. The ICO will also require those companies to document how and why they believe their use of the technology is legal, proportionate and justified.

Privacy

The main concern for the ICO and for privacy groups such as Big Brother Watch is that people’s faces are being scanned to identify them as they lawfully go about their daily lives, all without their knowledge or understanding. This could be considered a threat to their privacy. Also, with GDPR in force, it is important to remember that a person’s face (if filmed, e.g. with CCTV) is part of their personal data, so the handling, sharing, and security of that data also become an issue.

Private Companies

An important area of concern to the ICO in this case is the fact that a private company is using facial recognition, because the use of this technology by private companies is difficult to monitor and control.

Problems With Police Use

Following criticism of police use of facial recognition technology in terms of privacy, accuracy, bias, and management of the image database, the House of Commons Science and Technology Committee has recently called for a temporary halt in the use of facial recognition systems. This follows an announcement in December 2018 by the ICO’s head, Elizabeth Denham, that a formal investigation was being launched into how police forces use facial recognition technology (FRT) after high failure rates, misidentifications and worries about legality, bias, and privacy.

What Does This Mean For Your Business?

The use of facial recognition technology is being investigated by the ICO and a government committee has even called for a halt in its use over several concerns. The fact that a private company (Argent) was found, in this case, to be using the technology has therefore caused even more concern and has highlighted the possible need for more regulation and control in this area.

Companies and organisations that want to use facial recognition technology should, therefore, take note that the ICO will require them to document how and why they believe their use of the technology is legal, proportionate and justified, and make sure that they comply with the law in a fair, transparent and accountable way.

£80,000 Fine For London Estate Agency Highlights Importance of Due Diligence in Data Protection

The issuing of an £80,000 fine by the Information Commissioner’s Office (ICO) to London-based estate agency Parliament View Ltd (LPVL) highlights the importance of due diligence when keeping customer data safe.

What Happened?

Prior to the introduction of GDPR, between March 2015 and February 2017, LPVL left their customer data exposed online after transferring the data via FTP from its server to a partner organisation which also offered a property letting transaction service. LPVL was using Microsoft’s Internet Information Services (IIS) but didn’t switch off the anonymous authentication function, thereby giving anyone access to the server and the data without being prompted for a username or password.
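The technical lesson is easy to state: access that should require credentials must actually require them, and that is testable. As a purely illustrative sketch (not LPVL’s actual setup), the following Python snippet checks whether a server accepts anonymous FTP logins; the hostname is a placeholder and should only be pointed at systems you are authorised to test.

```python
# Illustrative sketch: test whether an FTP server accepts anonymous logins.
from ftplib import FTP, error_perm

def allows_anonymous_ftp(host: str, timeout: int = 10) -> bool:
    """Return True if the server accepts an anonymous FTP login."""
    try:
        with FTP(host, timeout=timeout) as ftp:
            # ftplib's login() defaults to the 'anonymous' user with a
            # dummy password when no credentials are supplied.
            ftp.login()
            return True
    except error_perm:
        return False  # login refused: credentials are required
    except OSError:
        return False  # connection failed (host down, port closed, etc.)

if __name__ == "__main__":
    host = "ftp.example.com"  # placeholder: a server you may test
    print(f"Anonymous FTP allowed on {host}: {allows_anonymous_ftp(host)}")
```

A scheduled check like this, run against every externally reachable service, is a cheap form of the due diligence the ICO expects.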

The data that was publicly exposed included highly sensitive information which could be of value to hackers and other criminals, including addresses of both tenants and landlords, bank statements and salary details, utility bills, dates of birth, driving licences (of tenants and landlords) and even copies of passports. The ICO reported that the data of 18,610 individual users had been put at risk.

Hacker’s Ransom Request

The ICO’s tough penalty took into account not only that LPVL was judged not to have taken the appropriate technical and organisational measures to prevent unlawful processing of the personal data, but also that the estate agency only alerted the ICO to the breach after it had been contacted in October by a hacker who claimed to possess LPVL’s customer data and who had demanded a ransom.

The ICO judged that LPVL’s contraventions of the Data Protection Act were wide-ranging and likely to cause substantial damage and substantial distress to those whose personal data was taken, hence the huge fine.

Marriott International Also Fined

The Marriott International hotel chain has also just been issued with a massive £99.2m fine by the ICO for infringements of GDPR, also related to matters of due diligence. Marriott International’s fine related to an incident that affected Starwood hotels (which Marriott was in the process of buying) from 2014 to 2018. In this case, the ICO found that the hotel chain didn’t do enough to secure its systems and undertake due diligence when it bought Starwood. The ICO found that the systems of the Starwood hotels group were compromised in 2014, but the exposure of customer information was not discovered until 2018, by which time data contained in approximately 339 million guest records globally had been exposed (7 million related to UK residents).

What Does This Mean For Your Business?

We’re now seeing the culmination of ICO investigations into incidents involving some large organisations, and the issuing of some large fines, e.g. to British Airways and Marriott International, as well as to lesser-known, smaller organisations such as LPVL. These serve to remind all businesses of their responsibilities under GDPR.

Personal data is an asset that has real value, and therefore organisations have a clear legal duty to ensure its security. Part of ensuring this is carrying out proper due diligence when, for example, making corporate acquisitions (as with Marriott) or transferring data to partners (as with LPVL). Systems should be monitored to ensure that they haven’t been compromised and that adequate security is maintained. Staff dealing with data should also be adequately trained to ensure that they act lawfully and make good decisions in data matters.

MPs Call To Stop Police Facial Recognition

Following criticism of police use of facial recognition technology in terms of privacy, accuracy, bias, and management of the image database, the House of Commons Science and Technology Committee has called for a temporary halt in the use of facial recognition systems.

Database Concerns

Some of the key concerns of the committee were that the police database of custody images is not being correctly edited to remove pictures of unconvicted individuals, and that innocent people’s pictures may be illegally included in facial recognition “watch lists” that are used by police to stop and even arrest suspects.

While the committee accepts that this may be partly due to a lack of resources to manually edit the database, the MPs’ committee has also expressed concern that the images of unconvicted individuals are not being removed after six years, as is required by law.

Figures indicate that, as of February last year, there were 12.5 million images available to facial recognition searches.

Accuracy

The accuracy of facial recognition has long been a concern. In December last year, ICO head Elizabeth Denham launched a formal investigation into how police forces use facial recognition technology (FRT) after high failure rates, misidentifications and worries about legality, bias, and privacy. For example, the trial of ‘real-time’ facial recognition technology by the South Wales and Gwent Police forces on Champions League final day in Cardiff in June 2017 was criticised for costing £177,000 yet resulting in only one arrest, of a local man, which was unconnected to the event.

Also, after trials of FRT at the 2016 and 2017 Notting Hill Carnivals, the police faced criticism that FRT was ineffective, racially discriminatory, and confused men with women.

Bias

In addition to gender bias issues, the committee also expressed concern about how a government advisory group had warned (in February) that facial recognition systems could produce inaccurate results if they had not been trained on a sufficiently diverse range of data, including faces from different ethnic groups, e.g. black, Asian, and other ethnic minorities. The concern was that if faces from some groups are under-represented in live facial recognition training datasets, this could lead to errors. For example, the human operators/police officers who are supposed to double-check any matches made by the system by other means before acting could simply defer to the algorithm’s decision without doing so.
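This kind of bias is measurable: if match decisions are logged together with ground truth, false match rates can be compared across demographic groups. The sketch below is purely illustrative, with invented data, but shows the shape of such an audit.

```python
# Illustrative audit sketch: compare false match rates across groups.
# The decision records below are invented; a real audit would use
# logged match decisions and verified ground-truth identities.
from collections import defaultdict

# Each record: (group, system_said_match, actually_same_person)
decisions = [
    ("group_a", True, True), ("group_a", True, False), ("group_a", False, False),
    ("group_b", True, False), ("group_b", True, False), ("group_b", False, True),
]

false_matches = defaultdict(int)
genuine_non_matches = defaultdict(int)  # pairs of genuinely different people

for group, said_match, same_person in decisions:
    if not same_person:
        genuine_non_matches[group] += 1
        if said_match:
            false_matches[group] += 1

for group in sorted(genuine_non_matches):
    rate = false_matches[group] / genuine_non_matches[group]
    print(f"{group}: false match rate {rate:.0%}")
```

A large gap between groups in this figure is exactly the kind of evidence the committee’s concern points to.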

Privacy

Privacy groups such as Liberty (which is awaiting a ruling on its challenge to South Wales Police’s use of the technology) and Big Brother Watch have been vocal and active in highlighting the possible threats posed to privacy by police use of facial recognition technology. Even Tony Porter, the Surveillance Camera Commissioner, has criticised trials by London’s Metropolitan Police over privacy and freedom issues.

Moratorium

The committee of MPs has therefore called for the government to temporarily halt the use of facial recognition technology by police pending the introduction of a proper legal framework, guidance on trial protocols and the establishment of an oversight and evaluation system.

What Does This Mean For Your Business?

Businesses use CCTV for monitoring and security purposes, and most businesses are aware of the privacy and legal compliance aspects (GDPR) of using such systems and of how and where the images are managed and stored.

As a society, we are also used to being under surveillance by CCTV systems, which can have real value in helping to deter criminal activity, locate and catch perpetrators, and provide evidence for arrests and trials. The Home Office has noted that there is general public support for live facial recognition to (for example) identify potential terrorists and people wanted for serious violent crimes. These, however, are not the reasons why the MPs’ committee has expressed its concerns, or why ICO head Elizabeth Denham launched a formal investigation into how police forces use FRT.

It is likely that while businesses would support the crime-fighting, anti-terror and crime-prevention aspects of police use of FRT, they would also need to feel assured that the correct legal framework and evaluation system are in place to protect the rights of all and to ensure that the system is accurate and cost-effective.

Is CCTV Surveillance By Amazon Drones The Future?

An Amazon patent from 2015 appears to indicate that Amazon may see ‘surveillance as a service’, delivered by a swarm of its CCTV-equipped delivery drones, as a monetising opportunity in the future.

Patent

The details in the patent foresee customers paying for a tiered service in which the onboard cameras of Amazon’s delivery drones visit users’ homes in between delivery routes and film irregularities and potentially suspicious activities. For example, the cameras could potentially be programmed to detect evidence of break-ins and lurkers on or near a property, and the onboard microphones could even be programmed to detect suspicious noises such as breaking glass.

Tiered Service

It is thought that such a service could offer different tiers (reflected in different pricing) based upon factors such as frequency of visits (e.g. daily or weekly), monitoring type (e.g. video or still images), and alert type (e.g. SMS, email, a call, or app ‘push’ notifications).
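To make the tiering idea concrete, here is a hypothetical sketch of how such service options might be modelled in code; every tier name, option and price is invented, as the patent itself specifies no such details.

```python
# Hypothetical model of the patent's tiered surveillance options.
# All tier names, options and prices are invented for illustration.
from dataclasses import dataclass
from enum import Enum

class Frequency(Enum):
    DAILY = "daily"
    WEEKLY = "weekly"

class Monitoring(Enum):
    VIDEO = "video"
    STILLS = "stills"

class Alert(Enum):
    SMS = "sms"
    EMAIL = "email"
    CALL = "call"
    PUSH = "push"

@dataclass(frozen=True)
class SurveillanceTier:
    name: str
    visit_frequency: Frequency
    monitoring: Monitoring
    alerts: tuple
    monthly_price_gbp: float

TIERS = [
    SurveillanceTier("basic", Frequency.WEEKLY, Monitoring.STILLS,
                     (Alert.EMAIL,), 4.99),
    SurveillanceTier("premium", Frequency.DAILY, Monitoring.VIDEO,
                     (Alert.SMS, Alert.PUSH), 14.99),
]

for tier in TIERS:
    print(f"{tier.name}: {tier.visit_frequency.value} "
          f"{tier.monitoring.value}, £{tier.monthly_price_gbp}/month")
```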

Privacy

There are some obvious privacy concerns with a private company using its drones to film around an area where it has a customer: avoiding filming areas where it does not have permission to film would present a challenge.

The Amazon patent suggests a possible remedy in the form of a “geo-fence” defined around the area that the company does have permission to film, so that the drone’s surveillance activities can be focused (to an extent). The patent appears to accept, however, that some filming of the area outside the fence could occur.
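In practice, a geo-fence of this kind usually comes down to a point-in-polygon test: is the camera’s target inside the boundary of the permitted area? A simplified sketch follows, treating coordinates as planar (a reasonable approximation over property-sized areas); the plot coordinates are invented.

```python
# Simplified geo-fence sketch: decide whether a target point lies inside
# a polygon of permitted coordinates, via the ray-casting algorithm.
# Coordinates are treated as planar (fine over property-sized areas).

def inside_geofence(point, polygon):
    """Return True if the (x, y) point falls inside the polygon."""
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Count crossings of a horizontal ray from the point with each edge.
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# Invented example: a square plot the customer has agreed can be filmed.
plot = [(0.0, 0.0), (10.0, 0.0), (10.0, 10.0), (0.0, 10.0)]
print(inside_geofence((5.0, 5.0), plot))   # True  -> filming permitted
print(inside_geofence((12.0, 5.0), plot))  # False -> do not film
```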

National Surveillance Camera Day

In a world first, the UK last week played host to an awareness-raising National Surveillance Camera Day on 20th June as part of the National Surveillance Camera Strategy. As part of the day’s events, a “doors open” initiative allowed the public to see first-hand how surveillance camera control centres are operated at the premises of signatories to the initiative in the UK, e.g. local authorities, police forces, hospitals, and universities.

Drone Research Reveals Negative Perceptions Among The Public

For the most part, people accept that the presence of CCTV surveillance cameras in public areas, operated by local authorities, and the presence of CCTV on business premises are generally for the greater good as a crime-reduction tool.

The same cannot be said for drone-based surveillance. For example, new research from PwC has shown that public perception remains a barrier to drone uptake in the UK. The results showed that less than a third of the public (31%) feel positive about drones, and more than two-thirds are concerned about the use of drones for crime. Businesses appear to have a much more positive perception of drone use: 43% of the business people surveyed believed that their industry would benefit from drones, although 35% of business leaders said that drones aren’t being adopted in their industry because of negative public perceptions.

What Does This Mean For Your Business?

Amazon is a company that has continued to grow and diversify into many different areas in recent years, embracing and pioneering many different technologies along the way, such as parcel delivery drones. It is not unusual for companies, particularly big tech companies, to file many patents containing many new ideas. In that sense, it’s difficult to criticise Amazon for wanting to get maximum (monetising) leverage from its delivery drones from a business perspective.

There remain, however, some serious challenges to the ideas in the drone surveillance patent, including privacy concerns and current negative public perceptions of drones. Overcoming these will require education around use cases for drones, and reassurance around regulation and accountability; Amazon is one public company, and it could be one of many using the skies to offer the same service once the floodgates are opened.

For some businesses, however, as identified by PwC and by Amazon’s patent, drones potentially offer some great new business opportunities. It should also be noted that drones can offer potentially life-saving opportunities, such as the human kidney for transplant that was delivered by drone, in the first flight of its kind, to a medical centre in Baltimore in May this year, thereby getting the organ to the surgeons much faster than by road.

For drones, it seems, there remain many opportunities and challenges to come.

Survey Shows Half Of UK Firms Have No Cyber Resilience Plan

A survey commissioned by email security firm Mimecast and conducted by Vanson Bourne has revealed that even after GDPR’s introduction, more than half of UK firms have no Cyber Resilience Plan.

What Is A Cyber Resilience Plan?

An organisation’s cyber resilience is its ability to prepare for, respond to and recover from cyber-attacks, and a Cyber Resilience Plan details how an organisation intends to do this.  Most organisations now accept that the evolving nature of cyber-crime means that it’s no longer a case of ‘if’ but ‘when’ they will suffer a cyber-attack.  It is with this perspective in mind that a strategy should be developed to minimise the impact of any cyber-attack (financial, brand and reputational), meet legal and regulatory requirements (NIS and GDPR), improve the organisation’s culture and processes, protect customers and stakeholders, and enable the organisation to survive beyond an attack and its fallout.

More Than Half Without

Mimecast’s survey shows that even though 51% of IT decision-makers polled in the UK say they believe it is likely or inevitable they’ll suffer a negative business impact from an email-borne cyber-attack in the next 12 months, 52% still don’t have a cyber resilience plan in place.

Email Focus

Email is a critical part of the infrastructure of most organisations, and yet it is the most common point of attack. It is with this in mind that the Mimecast survey focused on the challenges that email security presents in terms of cyber resilience and of achieving compliance with GDPR.

Email Archiving

One potential weakness that the survey revealed is that only 37% of UK IT decision-makers said that email archiving and e-discovery are included in their organisation’s cyber resilience strategy. When you consider that email contains a great deal of personal and sensitive company data, its protection should really be at the core of any cyber resilience strategy.

Also, for example, in relation to GDPR, not having powerful archiving systems to enable emails to be found and deleted quickly upon a user’s request could pose a compliance challenge.
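To see why archiving and e-discovery matter for GDPR, consider what an erasure request actually requires: finding every archived message involving the data subject and removing it. The sketch below is a minimal illustration assuming a local mbox-format archive (real deployments would use an indexed archiving platform, but the workflow is the same); the path and address are placeholders.

```python
# Minimal e-discovery/erasure sketch over a local mbox archive.
# The archive path and email address are placeholders for illustration.
import mailbox

def purge_subject_messages(archive_path: str, address: str,
                           dry_run: bool = True) -> int:
    """Count (and, unless dry_run, delete) archived messages sent
    to or from the given address. Returns the number matched."""
    box = mailbox.mbox(archive_path)
    box.lock()
    try:
        matches = [
            key for key, msg in box.items()
            if address.lower() in " ".join(
                msg.get(header, "") for header in ("From", "To", "Cc")
            ).lower()
        ]
        if not dry_run:
            for key in matches:
                box.remove(key)
            box.flush()
    finally:
        box.unlock()
        box.close()
    return len(matches)

# Report first, then purge once the scope has been reviewed:
# purge_subject_messages("archive.mbox", "user@example.com")
# purge_subject_messages("archive.mbox", "user@example.com", dry_run=False)
```

The point of the sketch is the requirement, not the tooling: if an archive cannot be searched and selectively purged quickly, an erasure request becomes a compliance problem.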

Human Error

Human error in terms of not being able to spot or know how to deal with suspicious emails is a common weakness that is exploited by cyber-criminals.

What Does This Mean For Your Business?

If the results of this survey reflect a true picture of what’s happening in many businesses, then cyber resilience urgently needs to be given greater priority, particularly since it is now a case of ‘when’ rather than ‘if’ a cyber-attack will occur. The risks of not addressing the situation could be huge in terms of customers, stakeholders and the survival of the business itself, particularly given the huge potential fines for breaches under GDPR.

Email, and particularly email archiving (what’s stored, where, and how well and quickly it can be searched), poses a serious challenge. Businesses should reassess whether their email archiving strategy is effective and safe enough, and security should go beyond archive encryption to guard against impersonation attacks and malicious links.

Bearing in mind the role that human error so regularly plays in enabling attacks via email, education and training in this area, alongside a clearly communicated company policy and best practice in managing email safely, should form an important part of a company’s cyber resilience plan.

Proposed Legislation To Make IoT Devices More Secure

Digital Minister Margot James has proposed the introduction of legislation that could make internet-connected gadgets less vulnerable to attacks by hackers.

What’s The Problem?

Gartner predicts that there will be 14.2 billion ‘smart’, internet-connected devices in use worldwide by the end of 2019.  These devices include connected TVs, smart speakers and home appliances. In business settings, IoT devices can include elevators, doors, or whole heating and fire safety systems in office buildings.

The main security issue with many of these devices is that they come with pre-set, unchangeable default passwords, and once these passwords have been discovered by cybercriminals, the devices can be hacked in order to steal personal data, spy on users or remotely take control of devices in order to misuse them.
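The underlying fix is technically straightforward: provision a unique credential per device at manufacture rather than sharing one factory default. A minimal sketch of the idea follows; the serial-number format is invented for illustration.

```python
# Minimal sketch: generate a unique random default password per device
# at manufacture, instead of one shared factory default.
# The serial-number format is invented for illustration.
import secrets
import string

ALPHABET = string.ascii_letters + string.digits

def provision_device(serial_number: str, length: int = 16) -> dict:
    """Generate a unique, random default password for one device."""
    password = "".join(secrets.choice(ALPHABET) for _ in range(length))
    # In production, the password would be flashed to the device and
    # printed on its label; only a hash would be stored server-side.
    return {"serial": serial_number, "default_password": password}

for unit in range(3):
    print(provision_device(f"IOT-2019-{unit:06d}"))
```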

Also, IoT devices are deployed in many systems that link to and are supplied by major utilities e.g. smart meters in homes. This means that a large-scale attack on these IoT systems could affect the economy.

New Law

The proposed new law to make IoT devices more secure, put forward by Digital Minister Margot James, would do two main things:

  • Force manufacturers to ensure that IoT devices come with unique passwords.
  • Introduce a new labelling system that tells customers how secure an IoT product is.

The idea is that products will have to satisfy certain requirements in order to get a label, such as:

  • Coming with a unique password by default.
  • Stating for how long security updates would be made available for the device.
  • Giving details of a public point of contact to whom cyber-security vulnerabilities may be disclosed.

Not Easy To Make IoT Devices Less Vulnerable

Even though legislation could put pressure on manufacturers to try harder to make IoT devices more secure, technical experts and commentators have pointed out that it is not easy for manufacturers to make internet-enabled/smart IoT devices secure because:

  • Adding security to household internet-enabled ‘commodity’ items costs money, and passing this cost on to the customer would make prices uncompetitive. Security may therefore be sacrificed to keep costs down: sell now and worry about security later.
  • Even if there is a security problem in a device, the firmware (the device’s software) is not always easy to update, and there are costs involved in doing so which manufacturers of lower-end devices may not be willing to incur.
  • With devices that are typically infrequent and long-lasting purchases, e.g. white goods, we tend to keep them until they stop working, and we are unlikely to replace them because of a security vulnerability that is not fully understood. Such devices are therefore likely to remain exploitable by cybercriminals for a long time.

What Does This Mean For Your Business?

Introducing legislation that only requires manufacturers to make relatively simple changes to make sure that smart devices come with unique passwords and are adequately labelled with safety and contact information sounds as though it shouldn’t be too costly or difficult.  The pressure of having, by law, to display a label that indicates how safe the item is could provide that extra motivation for manufacturers to make the changes and could be very helpful for security-conscious consumers.

The motivation for manufacturers to make the changes to the IoT devices will be even greater when faced with the prospect of retailers eventually being barred from selling products that don’t have a label, as is the plan with this proposed legislation.

The hope from cybersecurity experts and commentators is that the proposal isn’t watered down before it becomes law.

GDPR Says HMRC Must Delete Five Million Voice Records

The Information Commissioner’s Office (ICO) has concluded that HMRC has breached GDPR in the way that it collected the biometric voice records of users and now must delete five million biometric voice files.

What Voice Files?

Back in January 2017, HMRC introduced a system whereby customers calling the tax credits and Self-Assessment helpline could enrol for voice identification (Voice ID) as a means of speeding up the security steps. The system uses 100 different characteristics to recognise the voice of an individual and can create a voiceprint that is unique to that individual.

When customers call HMRC for the first time, they are asked to repeat the vocal passphrase “my voice is my password” up to five times to register before speaking to a human adviser. The recorded passphrase is stored in an HMRC database and can be used as a means of verification/authentication in future calls.
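The article doesn’t describe HMRC’s matching method, but voice verification systems of this general kind reduce a recording to a numeric feature vector and compare it with the stored voiceprint. The toy sketch below shows the comparison step using cosine similarity; the feature values and threshold are invented.

```python
# Toy sketch of the voiceprint comparison step. Real systems extract
# many characteristics from audio; the four-element vectors and the
# threshold here are invented for illustration.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def verify_caller(enrolled, sample, threshold=0.95):
    """Accept the caller if their sample closely matches the voiceprint."""
    return cosine_similarity(enrolled, sample) >= threshold

enrolled_voiceprint = [0.21, 0.83, 0.40, 0.95]  # stored at enrolment
todays_call         = [0.20, 0.85, 0.38, 0.96]  # extracted from this call

print(verify_caller(enrolled_voiceprint, todays_call))  # True
```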

It was reported that, in the 18 months following the introduction of the system, HMRC acquired five million people’s voiceprints this way.

What’s The Problem?

Privacy campaigners questioned the lawfulness of the system and in June 2018, privacy campaigning group ‘Big Brother Watch’ reported that its own investigation had revealed that HMRC had (allegedly) taken the five million taxpayers’ biometric voiceprints without their consent.

Big Brother Watch alleged that the automated system offered callers no choice but to do as instructed and create a biometric voice ID for a Government database. The only way to avoid creating the voice ID on calling, as identified by Big Brother Watch, was to say “no” three times to the automated questions, whereupon the system would still offer to create a voice ID on the next call.

Big Brother Watch highlighted the fact that GDPR prohibits the processing of biometric data for the purpose of uniquely identifying a person unless there is a lawful basis under Article 6, and that because voiceprints are sensitive data and are not strictly necessary for dealing with tax issues, HMRC should have requested the explicit consent of each taxpayer before enrolling them in the scheme (Article 9 of GDPR).

This led to Big Brother Watch registering a formal complaint with the ICO.

Decision

The ICO has now concluded that HMRC’s voice system was not adhering to the data protection rules and effectively pushed people into the system without explicit consent.

The decision from the ICO is that HMRC now must delete the five million records taken prior to October 2018, the date when the system was changed to make it compliant with GDPR.  HMRC has until 5th June to delete the five million voice records, which the state’s tax authority says it is confident it can do long before that deadline.

What Does This Mean For Your Business?

Big Brother Watch believes this to be the biggest ever deletion of biometric IDs from a state database, and privacy campaigners have hailed the ICO’s decision as setting an important precedent that restores data rights for millions of ordinary people.

Many businesses and organisations are now switching, or planning to switch, to biometric identification/verification systems instead of password-based systems, and this story is an important reminder that these are subject to GDPR. For example, images and unique voiceprint IDs are personal data that require explicit consent to be collected, and people should have the right to opt out as well as to opt in.