Archive for Data Security

Trust Challenge For Online Sharing Services

The Global Trust Survey from service provider Jumio has revealed that a quarter of adults feel unsafe using online sharing services.

What Are Online Sharing Services?

Online sharing services refers to companies like Uber and Airbnb where multiple users can use technology to book and consume a shared offering (car and room sharing), and where those offering the service can increase the utilisation of an asset – both parties get value from the exchange. The so-called “sharing economy” also includes services such as crowdfunding, personal services, and video and audio streaming.

The Sharing Economy

The sharing economy is expected to grow to a massive $335 billion by 2020. For example, in just 11 years, Airbnb has grown from nothing to becoming a $30bn firm listing more than six million rooms, flats and houses in more than 81,000 cities across the globe. Figures show that, on average, two million people use an Airbnb property each night.

Trust Challenge Revealed

Jumio’s Global Trust Survey showed that even though online sharing services are growing, and have been with us for some time now, in the 30 days prior to the survey taking place, over 80% of UK adults said that they hadn’t used an online sharing service, and 25% of UK adults said that they felt “somewhat unsafe” or “not at all safe” when using online sharing services.

A key element in making shared services successful is trust, and recent global research from PwC confirmed this: 89% of consumers agreed that the sharing economy marketplace is based on trust between providers and users.

Identity Verification Vital

One area uncovered by the Global Trust and Safety Survey which appears to be a challenge for shared services is proving and verifying identity.  For example, the survey found that 60% of users believe it is either ‘somewhat important’ or ‘very important’ for new users to undergo an identity check to prove that they are who they claim to be.

This is the reason why companies such as Lyft are rolling out continuous background checks and enhanced identity verification, and why Uber is updating its app to give an alert to riders to check the license plate, make, and model of the vehicle, and to confirm the name and picture of the driver.

What Does This Mean For Your Business?

Trust is something that takes a long time for a business to build, and it is a vital element in the success of shared services such as those where considerable risk (financial and, critically, personal risk) is involved. Trust is also something that can be very easily lost, sometimes in an instant or through one high profile incident involving that service e.g. the recent murder in the US of a student by a man posing as an Uber driver.

The results of the Global Trust Survey help to remind businesses that offer shared services that consumers need and want a layer of safety to help them feel comfortable in trying and using those services.  Companies can, therefore, help create an ecosystem of trust through the process of identity verification.

Serious Security Flaws Discovered In Popular GPS Tracker

Researchers at UK cyber-security company, Fidus Information Security, say that they have found security flaws in a popular Chinese-manufactured white-label location tracker that could be serious enough to warrant a recall.

Which Tracker?

The GPS tracker, which is used as a panic alarm for elderly patients, to monitor children, and to track vehicles, is white-label manufactured but rebranded and sold by several different companies, reportedly including Pebbell (by HoIP Telecom), OwnFone Footprint and SureSafeGo. The tracker uses a SIM card to connect to the 2G/GPRS network.  According to Fidus, at least 10,000 of these trackers are currently in use in the UK.

What’s The Problem?

According to the researchers, simply sending the device a text message with a keyword can trick the tracker into revealing its real-time location. Also, other commands tried by the researchers can allow anyone to call the device and remotely listen in to its in-built microphone without the user knowing, and even remotely stop the signal from the tracker, thereby making the device effectively useless.  On its blog, Fidus lists several other things that its researchers were able to do to the device including change or completely remove all emergency contacts, disable the motion alarm, disable fall detection and remove any device PIN which had been set.

All these scenarios could pose significant risks to the (mainly vulnerable) users of the trackers.

According to Fidus, one of the main reasons why the device has so many security flaws is that neither the manufacturer nor the companies reselling the device appear to have conducted any security or penetration testing of it.

PIN Problem

The research by Fidus also uncovered the fact that even though the manufacturers built in PIN functionality to help lock the devices down, the PIN is disabled by default and users need to read the manual to find out about it. When enabled, the PIN is required as a prefix before any command will be accepted by the device, except for the REBOOT and RESET functions.  The problem with this is that the RESET function is exactly what could give a malicious user remote control of the device: the RESET command wipes all stored contacts and emergency contacts, restores the device to factory defaults and means that a PIN is no longer needed.
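The command-handling flaw Fidus describes can be modelled as a toy parser. The `LOC` command and the exact responses below are invented for illustration; only the PIN-prefix rule and the unauthenticated RESET behaviour come from Fidus's description:

```python
# Toy model of the tracker's SMS command handling, as described by Fidus:
# a PIN prefix is required for most commands, but RESET is accepted
# without one and wipes the PIN along with all stored contacts.

class Tracker:
    def __init__(self, pin=None):
        self.pin = pin                      # None means PIN disabled (the default)
        self.contacts = ["+447700900001"]   # emergency contacts

    def handle_sms(self, message):
        parts = message.split()
        # RESET bypasses the PIN check entirely -- the flaw.
        if parts[0] == "RESET":
            self.pin = None
            self.contacts = []
            return "factory defaults restored"
        # Every other command must be prefixed with the PIN, if one is set.
        if self.pin is not None:
            if parts[0] != self.pin:
                return "rejected"
            parts = parts[1:]
        if parts[0] == "LOC":
            return "lat=51.5,lon=-0.1"      # real-time location leak
        return "unknown command"

tracker = Tracker(pin="1234")
print(tracker.handle_sms("LOC"))    # rejected: no PIN prefix
print(tracker.handle_sms("RESET"))  # accepted despite the PIN being set
print(tracker.handle_sms("LOC"))    # now works: the PIN was wiped
```

Even with a PIN enabled, one unauthenticated RESET removes the protection entirely, which is why Fidus considers it the most serious of the flaws.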

What Does This Mean For Your Business?

What is particularly disturbing about this story is that the tracking devices are used for some of the most vulnerable members of society.  Even though they have been marketed as a way to make a person safer, the cruel irony is that it appears that if they are taken over by a malicious attacker, they could put a person at greater risk.

This story also illustrates the importance of security penetration testing in discovering and plugging security loopholes in devices before making them widely available.  This is another example of an IoT/smart device that has security loopholes related to default settings, and with an ever-growing number of IoT devices out there, many of them perhaps not tested as well as they could be, many buyers are unknowingly at risk from hackers.

Old Routers Are Targets For Hackers

Internet security experts are warning that old routers are targets for cyber-criminals who find them an easy hacking option.

How Big Is The Threat?

Trend Micro has reported that back in 2016 there were five families of threats for routers, but this grew to 35 families of threats in 2018. Research by the American Consumer Institute in 2018 revealed that 83 per cent of home and office routers have vulnerabilities that could be exploited by attackers.  These include the more popular brands such as Linksys, NETGEAR and D-Link.

Why Are Old Routers Vulnerable?

Older routers are open to attacks that are designed to exploit simple vulnerabilities for several reasons including:

  • Routers are often forgotten about after their initial setup and, consequently, 60 per cent of users have never updated their router’s firmware.
  • Routers are essentially small computers.  This means that anything that can infect a computer can also infect a router.
  • Many home users leave the default passwords for the Wi-fi network, the admin account associated with it, and the router.
  • Even when vulnerabilities are exposed, it can take ISPs months to be able to update the firmware for their customers’ routers.
  • Today’s routers are designed to be easy and fast to work straight out of the box, and the setup doesn’t force customers to set their own passwords – security is sacrificed for convenience.
  • There are online databases where cyber-criminals can instantly access a list of known vulnerabilities by entering the name of a router manufacturer. This means that many cyber-criminals know or can easily find out what the specific holes are in legacy firmware.
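The firmware-update point above lends itself to a simple automated check. The sketch below flags routers whose installed firmware lags the vendor's latest release; the device names and version strings are purely illustrative:

```python
# Minimal sketch: flag routers whose firmware lags the vendor's latest
# release. The fleet and version strings are illustrative only.

def parse_version(v):
    """Turn '1.0.37' into (1, 0, 37) so versions compare numerically."""
    return tuple(int(part) for part in v.split("."))

def is_outdated(installed, latest):
    return parse_version(installed) < parse_version(latest)

fleet = {
    "office-router": ("1.0.37", "1.2.1"),
    "home-router":   ("2.4.0", "2.4.0"),
}
for name, (installed, latest) in fleet.items():
    if is_outdated(installed, latest):
        print(f"{name}: firmware {installed} is behind {latest} -- update")
```

In practice the "latest" version would come from the vendor's release notes or support page, which is exactly the information most router owners never look up.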

What If Your Router Is Compromised?

One big problem is that users have little real knowledge of their routers and pay them little attention except when their connection goes down.  It is often the case, therefore, that users do not know that their router has been compromised, as there are no clear outward signs.

Hacking a router is commonly used to carry out other criminal and malicious activity such as Distributed Denial of Service attacks (DDoS) as part of a botnet, credential stuffing, mining bitcoin and accessing other IoT devices that link to that router.

Examples

Examples of high-profile router-based attacks include:

  • The Mirai attack that used unsecured routers to spread the Mirai malware that turned networked devices into remotely controlled “bots” that could be used as part of a botnet in large-scale network attacks.
  • The VPNFilter malware (thought to have been sponsored by the Russian state and carried out by the Fancy Bear hacking group) that infected an estimated half a million routers worldwide.
  • The exploit in Brazil that spread across D-Link routers, affected 100,000 devices and was aimed at customers of Banco do Brasil.

Also, back in 2017, Virgin Media advised its 800,000 customers to change their passwords to reduce the risk of hacking after finding that many customers were still using risky default network and router passwords.

Concerns were also expressed by some security commentators about TalkTalk’s Super Router regarding the WPS feature in the router always being switched on, even if the WPS pairing button was not used, thereby meaning that attackers within range could have potentially hacked into the router and stolen the router’s Wi-Fi password.

What Does This Mean For Your Business?

If you have an old router with old firmware, you could have a weak link in your cyber-security.  If that old router links to IoT devices, these could also be at risk because of the router.

Manufacturers could help reduce the risk to business and home router users by taking steps such as disabling internet access until the user completes a setup process on the device, which could include changing the password to a unique one.

Also, vendors and ISPs could help by having an active upgrade policy for out of date, vulnerable firmware, and by making sure that patches and upgrades are sent out quickly.

ISPs could do more to educate and to provide guidance on firmware updates e.g. with email bulletins.  Some tech commentators have also suggested using a tiered system where advanced users who want more control of their set-up can have the option, but everyone else gets updates rolled out automatically.

Could Biometric Regulations Be On The Way Soon?

A written parliamentary question from MP Luciana Berger about the possibility of bringing forward legislation to regulate the use of facial recognition technology has led the Home Office to hint that the legislation (and more) may be on the way soon.

Questions and Answers

The question by the MP about bringing forward ‘biometrics legislation’ related to how facial recognition was being used for immigration purposes at airports. Last month, MP David Davis also asked about possible safeguards to protect the security and privacy of citizens’ data that is held as part of the Home Office’s biometrics programme.

Caroline Nokes has said on behalf of the Home Office, in response to these and other questions about biometrics, that options to simplify and extend governance and oversight of biometrics across the Home Office sector are being looked at, including where law enforcement, border and immigration control use of biometrics is concerned.  Caroline Nokes is also reported to have said that other measures would also be looked at with a view to improving the governance and use of biometrics in advance of “possible legislation”.

Controversial

There have been several controversial incidents where the Police have used/held trials of facial recognition at events and in public places, for example:

In February this year a deliberately overt trial of live facial recognition technology by the Metropolitan Police in the centre of Romford led to an incident whereby a man who was observed pulling his jumper over part of his face and putting his head down while walking past the police cameras ended up being fined after being challenged by police.  The 8-hour trial only resulted in three arrests as a direct result of facial recognition technology.

In December 2018, ICO head Elizabeth Denham was reported to have launched a formal investigation into how police forces use facial recognition technology after high failure rates, misidentifications and worries about legality, bias, and privacy.

A trial of facial recognition at the Champions League final at the Millennium Stadium in Cardiff back in 2017 only yielded one arrest, and this was the arrest of a local man for something unconnected to the Champions League. This prompted criticism that the trial was a waste of money.

Biometrics – Approved By The FIDO Alliance

One area where biometrics has got the seal of approval from the FIDO Alliance is its use of facial recognition and fingerprint scanning as part of the login for millions of Windows 10 devices from next month. The FIDO Alliance is an open industry association whose mission is to develop and promote authentication standards that help reduce the world’s over-reliance on passwords.

In a recent interview with CNBC, Microsoft’s Corporate Vice President and Chief Information Officer Bret Arsenault signalled the corporation’s move away from passwords on their own as a means of authentication towards biometrics and a “passwordless future”.  Windows Hello (the Windows 10 authenticator) has been built to align with FIDO2 standards so that it works with Microsoft cloud services, and this has led to the FIDO Alliance granting Microsoft official certification for Windows Hello from the forthcoming May 2019 upgrade.
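The FIDO2 approach replaces a shared password with a challenge-response exchange: the server sends a fresh challenge and the device proves possession of a locally held key. The sketch below illustrates that pattern in simplified form, with an HMAC key standing in for the asymmetric key pair a real FIDO2 authenticator would use:

```python
import hashlib
import hmac
import os

# Simplified challenge-response login in the spirit of FIDO2/WebAuthn.
# Real FIDO2 signs the server's challenge with a private key held on the
# device; here a shared HMAC key stands in for that key pair, but the
# point -- no reusable password ever crosses the wire -- still holds.

device_key = os.urandom(32)      # provisioned at registration, never transmitted

def device_sign(challenge):
    # On a real device this step is unlocked locally by biometrics
    # (e.g. Windows Hello) before the challenge is signed.
    return hmac.new(device_key, challenge, hashlib.sha256).digest()

def server_verify(challenge, response):
    expected = hmac.new(device_key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = os.urandom(16)       # fresh per login attempt, so replays fail
response = device_sign(challenge)
print(server_verify(challenge, response))       # genuine device: True
print(server_verify(os.urandom(16), response))  # stale response: False
```

Because each challenge is random and single-use, a captured response is useless for a later login, which is the property that stolen or reused passwords lack.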

What Does This Mean For Your Business?

Taking images of our faces as part of a facial recognition system used by the government may seem like an efficient means of identification and verification, e.g. for immigration purposes, but our facial images constitute personal data.  For this reason, we should be concerned about how and where they are gathered (with or without our knowledge) and how they are stored, as well as how and why they are used.  There are security and privacy matters to consider, and it may well make sense to put regulations and perhaps legislation in place now in order to provide some protection for citizens and to ensure that biometrics are used responsibly by all, including the state, and that privacy and security are given proper consideration.

It should be remembered that some of the police facial recognition tests have led to mistaken identity, and this is a reminder that the technology is still in its early stages, and this may provide another reason for regulations and legislation now.

Surveillance Attack on WhatsApp

It has been reported that it was a surveillance attack on Facebook’s WhatsApp messaging app that caused the company to urge all of its 1.5bn users to update their apps as an extra precaution recently.

What Kind of Attack?

Technical commentators have identified the attack on WhatsApp as a ‘zero-day’ exploit that is used to load spyware onto the victim’s phone.  Once the victim’s WhatsApp has been hijacked and the spyware loaded onto the phone, it can, for example, access encrypted chats, access photos, contacts and other information, as well as being able to eavesdrop on calls, and even turn on the microphone and camera.  It has been reported that the exploit can also alter the call logs and hide the method of infection.

How?

The attack is reported to be able to use the WhatsApp’s voice calling function to ring a target’s device. Even if the target person doesn’t pick the call up the surveillance software can be installed, and the call can be wiped from the device’s call log.  The exploit can happen by using a buffer overflow weakness in the WhatsApp VOIP stack which enables an overwriting of other parts of the app’s memory.
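The buffer-overflow mechanism described can be illustrated with a toy parser. The actual WhatsApp code is not public, so the buffer layout below is entirely invented; it models only the general idea that an unchecked copy lets packet data spill into adjacent memory:

```python
# Toy illustration of a buffer-overflow bug of the kind described:
# a parser copies packet bytes into a fixed-size buffer without checking
# the length, so an oversized packet overwrites adjacent "memory".

BUFFER_SIZE = 8
memory = bytearray(16)          # first 8 bytes: packet buffer; rest: other state
memory[8:] = b"CALL-LOG"        # adjacent data a crafted packet could clobber

def unsafe_copy(packet):
    for i, byte in enumerate(packet):   # BUG: no check against BUFFER_SIZE
        memory[i] = byte

def safe_copy(packet):
    if len(packet) > BUFFER_SIZE:       # the fix: validate before copying
        raise ValueError("packet too long")
    memory[:len(packet)] = packet

unsafe_copy(b"A" * 12)          # 4 bytes spill past the end of the buffer
print(memory[8:])               # adjacent state has been corrupted
```

In a real exploit the overflowing bytes are not filler but carefully crafted data that redirects the program's execution, which is how a malformed call could end up installing spyware.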

It has been reported that the vulnerability is present in the Google Android, Apple iOS, and Microsoft Windows Phone builds of WhatsApp.

Who?

According to reports in the Financial Times which broke the story of the WhatsApp attack (which was first discovered earlier this month), Facebook had identified the likely attackers as a private Israeli company, The NSO Group, that is part-owned by the London-based private equity firm Novalpina Capital.  According to reports, The NSO Group are known to work with governments to deliver spyware, and one of their main products called Pegasus can collect intimate data from a targeted device.  This can include capturing data through the microphone and camera and also gathering location data.

Denial

The NSO Group has denied responsibility.  NSO has said that its technology is only licensed to authorised government intelligence and law enforcement agencies for the sole purpose of fighting crime and terror, and that NSO would not and could not use the technology in its own right to target any person or organisation.

Past Problems

WhatsApp has been in the news before for less than positive reasons.  For example, back in November 2017, WhatsApp was used by ‘phishing’ fraudsters to circulate convincing links for supermarket vouchers in order to obtain bank details.

Fix?

As a result of the attack, as well as urging all of its 1.5bn users to update their apps, engineers at Facebook have created a patch for the vulnerability (CVE-2019-3568).

What Does This Mean For Your Business?

Many of us think of WhatsApp as being an encrypted message app, and therefore somehow more secure. This story shows that WhatsApp vulnerabilities are likely to have existed for some time.  Although it is not clear how many users have been affected by this attack, many tech and security commentators think that it may have been a focused attack, perhaps of a select group of people.

It is interesting that we are now hearing about the dangers of many attacks being perhaps linked in some way to states and state-sponsored groups rather than individual actors, and the pressure is now on big tech companies to be able to find ways to guard against these more sophisticated and evolving kinds of attacks and threats that are potentially on a large scale.  It is also interesting how individuals could be targeted by malware loaded in a call that the recipient doesn’t even pick up, and it perhaps opens up the potential for new kinds of industrial espionage and surveillance.

Proposed Legislation To Make IoT Devices More Secure

Digital Minister Margot James has proposed the introduction of legislation that could make internet-connected gadgets less vulnerable to attacks by hackers.

What’s The Problem?

Gartner predicts that there will be 14.2 billion ‘smart’, internet-connected devices in use worldwide by the end of 2019.  These devices include connected TVs, smart speakers and home appliances. In business settings, IoT devices can include elevators, doors, or whole heating and fire safety systems in office buildings.

The main security issue of many of these devices is that they have pre-set, default unchangeable passwords, and once these passwords have been discovered by cybercriminals the IoT devices can be hacked in order to steal personal data, spy on users or remotely take control of devices in order to misuse them.

Also, IoT devices are deployed in many systems that link to and are supplied by major utilities e.g. smart meters in homes. This means that a large-scale attack on these IoT systems could affect the economy.

New Law

The proposed new law to make IoT devices more secure, put forward by Digital Minister Margot James, would do two main things:

  • Force manufacturers to ensure that IoT devices come with unique passwords.
  • Introduce a new labelling system that tells customers how secure an IoT product is.

The idea is that products will have to satisfy certain requirements in order to get a label, such as:

  • Coming with a unique password by default.
  • Stating for how long security updates would be made available for the device.
  • Giving details of a public point of contact to whom cyber-security vulnerabilities may be disclosed.
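As a rough illustration, the requirements above could be checked mechanically against a product record. The field names below are hypothetical, since no official schema has been published:

```python
# Sketch of checking a product record against the proposed labelling
# requirements. Field names are illustrative; no official schema exists.

def meets_label_requirements(product):
    checks = {
        "unique default password": product.get("unique_password", False),
        "security update period stated": bool(product.get("update_support_until")),
        "vulnerability contact published": bool(product.get("security_contact")),
    }
    failed = [name for name, ok in checks.items() if not ok]
    return (len(failed) == 0, failed)

smart_camera = {
    "unique_password": True,
    "update_support_until": "2022-05",
    "security_contact": "security@example.com",
}
ok, failed = meets_label_requirements(smart_camera)
print(ok, failed)
```

A retailer-facing version of such a check is essentially what the proposed label would encode for consumers at a glance.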

Not Easy To Make IoT Devices Less Vulnerable

Even though legislation could put pressure on manufacturers to try harder to make IoT devices more secure, technical experts and commentators have pointed out that it is not easy for manufacturers to make internet-enabled/smart IoT devices secure because:

Adding security to household internet-enabled ‘commodity’ items costs money. That cost would have to be passed on to the customer in higher prices, making the product less competitive. Therefore, it may be that security is being sacrificed to keep costs down – sell now and worry about security later.

Even if there is a security problem in a device, the firmware (the device’s software) is not always easy to update. There are also costs involved in doing so which manufacturers of lower-end devices may not be willing to incur.

With devices which are typically infrequent and long-lasting purchases e.g. white goods, we tend to keep them until they stop working, and we are unlikely to replace them because they have a security vulnerability that is not fully understood. As such these devices are likely to remain available to be used by cybercriminals for a long time.

What Does This Mean For Your Business?

Introducing legislation that only requires manufacturers to make relatively simple changes to make sure that smart devices come with unique passwords and are adequately labelled with safety and contact information sounds as though it shouldn’t be too costly or difficult.  The pressure of having, by law, to display a label that indicates how safe the item is could provide that extra motivation for manufacturers to make the changes and could be very helpful for security-conscious consumers.

The motivation for manufacturers to make the changes to the IoT devices will be even greater when faced with the prospect of retailers eventually being barred from selling products that don’t have a label, as is the plan with this proposed legislation.

The hope from cybersecurity experts and commentators is that the proposal isn’t watered-down before it becomes law.

G7 Cyber Attack Simulation To Test Financial Sector

The G7 nations will be holding a simulated cyber-attack this month to test the possible effects of a serious malware infection on the financial sector.

France

The attack simulation was organised by the French central bank under France’s presidency of the Group of Seven nations (G7).  The three-day exercise will be aimed at demonstrating the cross-border effects of such an attack and will involve 24 financial authorities from the seven countries, comprising central banks, market authorities and finance ministries.  It has been reported that representatives of the private sector in France, Italy, Germany and Japan will also participate in the simulation.

Why?

As reported in March in a report by the Carnegie Endowment for International Peace (co-developed with British defence company BAE Systems), state-sponsored cyber attacks on financial institutions are becoming more frequent, resulting in destructive and disruptive damages rather than just theft.

The report highlighted how, of the 94 cases of cyber attacks reported as financial crimes since 2007, the attackers behind 23 of them were believed to be state-sponsored.  Most of these state-sponsored attacks are reported to have come from countries such as Iran, Russia, China and North Korea.

The report pointed out that the number of cyber attacks linked to nations jumped to six in 2018 from two in 2017 and two in 2016.

State-sponsored attacks can take the form of direct nation-state activity and/or proxy activity carried out by criminals and “hacktivists”.

State-Sponsored Attacks – Examples

An example of the kind of state-sponsored hacking that has led to the need for simulations is the attack by North Korean hackers on the Bank of Chile’s ATM network in January, the result of which was a theft of £7.5 million.

Also, in 2018 it was alleged that North Korean hackers accessed the systems of India’s Cosmos Bank and took nearly $13.5 million in simultaneous withdrawals across 28 countries.

As far back as 2016, North Korean hackers took $81 million after breaching Bangladesh Bank’s systems and using the SWIFT network (Society for Worldwide Interbank Financial Telecommunication).  The perpetrators sent fraudulent money transfer orders to the New York branch of the U.S. central bank, where the Dhaka bank has an account.

What Does This Mean For Your Business?

An escalation in state-sponsored attacks on bank systems in recent years is the real reason why, in addition to fending off cybercriminals from multiple individual sources, banks have noted an evolution of the threat which has forced them to focus on sector-wide and system-wide risks.

As customers of banks, businesses are likely to be pleased that banks, which traditionally have older systems, are making a real effort to ensure that they are protected from cyber-attacks, particularly the more sophisticated and dangerous state-sponsored cyber-attacks.

Data Breach Report A Sharp Reminder of GDPR

The findings of Verizon’s 2019 Data Breach Investigations Report have reminded companies that let customer information go astray that they could be facing big fines, and damaging publicity.

The Report

The annual Verizon Data Breach Investigations Report (DBIR) draws upon information gained from more than 2,000 confirmed breaches that hit organisations worldwide, and information about more than 40,000 incidents such as spam and malware campaigns and web attacks.

Big Fines

The report reminds companies that although personal data can be stolen in seconds, the effects can be serious and can last for a long time. In addition to the problems experienced by those whose data has been stolen (who may then be targeted by other cybercriminals as the data is shared or sold), the company responsible for the breach can, under GDPR, face fines amounting to 4 percent of their global revenues if it has been judged to have not done enough to protect personal data or clean up after a breach.
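For context, the GDPR maximum penalty is actually the higher of €20 million or 4% of global annual turnover, which is easy to put into concrete figures (the revenues below are illustrative only):

```python
# The GDPR maximum fine is the higher of EUR 20m or 4% of global annual
# turnover. Revenue figures below are illustrative.

def max_gdpr_fine(global_revenue_eur):
    return max(20_000_000, 0.04 * global_revenue_eur)

print(max_gdpr_fine(100_000_000))    # the EUR 20m floor applies
print(max_gdpr_fine(2_000_000_000))  # 4% of turnover applies
```

For any business with global revenue above €500 million, the 4% figure is the binding one, which is why large companies take the turnover-based calculation so seriously.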

Senior Staff Hit Because of Access Rights

It appears that senior staff are a favourite target of cybercriminals at the current time.  This is likely to be because of the high-level access that can be exploited if criminals are able to steal the credentials of executives. Also, once stolen, a senior executive’s account could be used to e.g. request and authorise payments to criminal accounts. The report also highlights the fact that senior executives are particularly vulnerable to attack when on their mobile devices.

Booby Trap Emails Less Successful

The report also states how sending booby-trapped emails (emails with malicious links) is proving to be less successful for cybercriminals now with only 3 per cent of those targeted falling victim, and a click rate of only 12 per cent.

What Does This Mean For Your Business?

The report is a reminder that paying attention to GDPR compliance should still be a very serious issue that’s given priority and backing from the top within companies, as one data breach could have very serious consequences for the entire company.

Senior executives need to ensure that there is a clear verification and authorisation/checking procedure in place that all accounts/finance department staff are aware of when it comes to asking for substantial payments to be sent, even if the request appears to come from the senior executives themselves via their personal email. Obtaining the credentials of senior executives can also mean that cybercriminals can operate man-in-the-middle attacks.

Executives and staff need to be aware that if a high-level email address has been compromised, the first thing they may know about it is when funds are taken, so cybersecurity training, awareness and policies need to be communicated to, and carried out by, all staff, right up to the top level.

The low level of booby trap emails being successfully deployed could be a sign that businesses are getting the message about email-based threats, or it could be that criminals are focusing their attention elsewhere.

Google Offers Auto-Delete of History After Three Months

Google is joining tech giants Facebook and Microsoft in offering users greater privacy over their data: Google will give its users the option to automatically delete their search and location history after three or eighteen months.

What’s The Problem?

According to Google, feedback has shown that users want simpler ways to manage or delete their data, and web users have become more concerned about data privacy after several high-profile incidents, most notably Facebook’s sharing of 50 million users’ profiles with the analytics company Cambridge Analytica back in 2014.

The Change

Google already offers tools to help users manually delete all or part of their location history or web and app activity.  The addition of the new tool, which is scheduled to happen “in the coming weeks” will enable users to set up auto-delete settings for their location history, web browsing and app activity.

With the new tool, users will be able to select how long they want their activity data to be saved for – three months or eighteen months – after which time Google says the data will automatically be deleted from the user’s account.
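The retention logic described can be sketched as follows, using a hypothetical record format and an approximate 30-day month (Google's actual implementation is, of course, not public):

```python
from datetime import datetime, timedelta

# Sketch of a retention policy like the one described: activity records
# older than the user's chosen window (3 or 18 months) are purged.
# The record format is hypothetical.

def purge_expired(records, months, now=None):
    """Keep only records newer than the retention window."""
    now = now or datetime.utcnow()
    cutoff = now - timedelta(days=30 * months)   # approximate months
    return [r for r in records if r["timestamp"] >= cutoff]

now = datetime(2019, 6, 1)
records = [
    {"query": "old search", "timestamp": datetime(2019, 1, 1)},
    {"query": "recent search", "timestamp": datetime(2019, 5, 20)},
]
kept = purge_expired(records, months=3, now=now)
print([r["query"] for r in kept])
```

With the three-month setting, the January record falls outside the window and is removed; with the eighteen-month setting, both records would be kept.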

The new automatic deletion will be optional, and the manual deletion tools will remain.

Facebook and Microsoft

At the beginning of May, Microsoft announced several new features intended to improve privacy controls for its Microsoft 365 users, with a view to simplifying its data privacy policies.

Also, Facebook’s Mark Zuckerberg recently announced a privacy-focused road map for the social network.

Google’s Tracking Questioned

Back in 2018, the ‘Deceived By Design’ report by the government-funded Norwegian Consumer Council accused tech giants Microsoft, Facebook and Google of being unethical by leading users into selecting settings that do not benefit their privacy.

In November 2018, Google’s tracking practices for user locations were questioned by a coalition of seven consumer organisations who were reported to have filed complaints with local data protection regulators. Although Google says that tracking is turned off by default and can be paused at any time by users, the complaints focused on research by a coalition member who claimed that people are forced to use the location system.

Furthermore, research by internet privacy company DuckDuckGo in December 2018 led to a claim that even in Incognito mode, users of Google Chrome can still be tracked, and searches are still personalised accordingly.

What Does This Mean For Your Business?

The introduction of GDPR and high-profile data breach and privacy incidents such as the Facebook and Cambridge Analytica scandal have made us all much more aware about (and more protective of) our personal data and how it is collected, stored and used by companies and other organisations. It is no surprise, therefore, that feedback to Google showed a need for greater control and privacy by users, and the announcement of the new (optional) automatic deletion tool also provides a way for Google to get some good data privacy PR at a time when other tech giants like Facebook and Microsoft have also been seen to make data privacy improvements for their users.

Current details about how to manually delete your Google data can be found here: https://support.google.com/websearch/answer/465?co=GENIE.Platform%3DDesktop&hl=en and the ‘My Activity’ centre for your Google account, where you will most likely be able to configure the new automatic settings, can be found here: https://myactivity.google.com/.

GDPR Says HMRC Must Delete Five Million Voice Records

The Information Commissioner’s Office (ICO) has concluded that HMRC has breached GDPR in the way that it collected the biometric voice records of users and now must delete five million biometric voice files.

What Voice Files?

Back in January 2017, HMRC introduced a system whereby customers calling the tax credits and Self-Assessment helpline could enrol for voice identification (Voice ID) as a means of speeding up the security steps. The system uses 100 different characteristics to recognise the voice of an individual and can create a voiceprint that is unique to that individual.

When customers call HMRC for the first time, they are asked to repeat the vocal passphrase “my voice is my password” up to five times to register before speaking to a human adviser.  The recorded passphrase is stored in an HMRC database and can be used as a means of verification/authentication in future calls.
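Voiceprint systems like this typically compare a feature vector extracted from the call against the enrolled print using a similarity score. The sketch below illustrates the idea with four made-up features and cosine similarity; HMRC's actual system, which reportedly uses around 100 characteristics, is not public:

```python
import math

# Simplified sketch of voiceprint verification: the enrolled print and the
# caller's sample are feature vectors, compared by cosine similarity.
# Real systems extract ~100 characteristics; 4 made-up features are used here.

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def verify(enrolled, sample, threshold=0.95):
    return cosine_similarity(enrolled, sample) >= threshold

enrolled = [0.8, 0.1, 0.5, 0.3]        # stored at enrolment
same_caller = [0.79, 0.12, 0.5, 0.31]  # slight natural variation
impostor = [0.1, 0.9, 0.2, 0.7]

print(verify(enrolled, same_caller))   # True
print(verify(enrolled, impostor))      # False
```

The threshold trades false accepts against false rejects; the key GDPR point is that the enrolled vector itself is biometric personal data, however it is compared.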

It was reported that in the 18 months following the introduction of the system, HMRC acquired five million people’s voiceprints this way.

What’s The Problem?

Privacy campaigners questioned the lawfulness of the system and in June 2018, privacy campaigning group ‘Big Brother Watch’ reported that its own investigation had revealed that HMRC had (allegedly) taken the five million taxpayers’ biometric voiceprints without their consent.

Big Brother Watch alleged that the automated system offered callers no choice but to do as instructed and create a biometric voice ID for a Government database.  The only way to avoid creating a voice ID when calling, as identified by Big Brother Watch, was to say “no” three times to the automated questions, whereupon the system would still offer voice ID again on the next call.

Big Brother Watch highlighted the fact that GDPR prohibits the processing of biometric data for the purpose of uniquely identifying a person, unless there is a lawful basis under Article 6, and that because voiceprints are sensitive data but are not strictly necessary for dealing with tax issues, HMRC should request the explicit consent of each taxpayer to enrol them in the scheme (Article 9 of GDPR).

This led to Big Brother Watch registering a formal complaint with the ICO.

Decision

The ICO has now concluded that HMRC’s voice system was not adhering to the data protection rules and effectively pushed people into the system without explicit consent.

The decision from the ICO is that HMRC now must delete the five million records taken prior to October 2018, the date when the system was changed to make it compliant with GDPR.  HMRC has until 5th June to delete the five million voice records, which the state’s tax authority says it is confident it can do long before that deadline.

What Does This Mean For Your Business?

Big Brother Watch believes this to be the biggest ever deletion of biometric IDs from a state database, and privacy campaigners have hailed the ICO’s decision as setting an important precedent that restores data rights for millions of ordinary people.

Many businesses and organisations are now switching/planning to switch to using biometric identification/verification systems instead of password-based systems, and this story is an important reminder that these are subject to GDPR. For example, images and unique Voiceprint IDs are personal data that require explicit consent to be given, and that people should have the right to opt out as well as to opt-in.