Archive for Data Security

12 Russian Intelligence Officers Charged With Election Hacking

Although, in an interview this week, President Trump appeared to absolve Russia of election interference (a remark he has since retracted), the US Department of Justice has now charged 12 Russian intelligence officers with hacking Democratic officials during the 2016 US election.

The Allegations

The US Justice Department alleges that, back in March 2016, in the run-up to the presidential election campaign which saw Republican Donald Trump elected as president, the Russian intelligence officers were responsible for cyber-attacks on the email accounts of staff working on Hillary Clinton’s Democratic presidential campaign.

The Justice Department also alleges that the accused Russians corresponded with several Americans (though not, it says, as co-conspirators), used fictitious online personas, released thousands of stolen emails (beginning in June 2016), and even plotted to hack into the computers of state boards of elections, secretaries of state, and suppliers of voter software.

No Evidence Says Kremlin

The Kremlin is reported to have said that it believes there is no evidence for the US allegations, describing the story as an “old duck” and a conspiracy theory.

32, So Far

The latest allegations are all part of the investigation, led by Special Counsel Robert Mueller, into US intelligence findings that the Russians allegedly conspired in Trump’s favour, and that some of his campaign aides may have colluded.
So far, 32 people (mostly Russians) have been indicted; three companies and four former Trump advisers have also been implicated.

Trump Says…

President Trump has dismissed allegations that the Russians helped put him in the White House as a “rigged witch hunt” and “pure stupidity”.

In a press conference after his meeting with Russian President Vladimir Putin in Helsinki, however, President Trump caused shock and disbelief when, asked whether he thought Russia had been involved in US election interference, he said “I don’t see any reason why it would be”.

He has since appeared to backtrack, saying that he meant to say “wouldn’t” rather than “would”, that he accepts his own intelligence agencies’ findings that Russia interfered in the 2016 election, and that other players may have been involved too.

What Does This Mean For Your Business?

Part of the fallout from the constant struggle between states and superpowers is the cyber-attacks that end up affecting many businesses in the UK. Also, if there has been interference in an election favouring one party, this in turn affects the political and economic decisions made in that country, and its foreign policy. These have a knock-on effect on markets, businesses and trade around the world, particularly for those businesses that export to, import from, or have other business interests in the US. Even though, in the US, one of the main results of the alleged electoral interference scandal appears to have been damaged reputations and disrupted politics, the wider effects have been felt by businesses around the world.

These matters, and the links to Facebook and Cambridge Analytica, have also raised public awareness of data security and privacy: whether corporations can actually be trusted with personal data, and how people can be targeted with political messages that could influence their own beliefs.

Cambridge Analytica Re-Born

A new offshoot of Cambridge Analytica, the disgraced data analysis company at the heart of the Facebook personal data sharing scandal, has been set up by former members of staff under the name ‘Auspex’.

Old Version Shut Down

After news of the scandal, which saw the details of an estimated 87 million Facebook users (mostly in the US) shared with CA, and then used by CA to target people with political messages relating to the last US presidential election, CA was shut down by its parent company SCL Elections. CA is widely reported to have ceased operations and filed for bankruptcy in the wake of the scandal.

Ethical This Time

Auspex, which (it should be stressed) presents itself as more than just another version of CA, even though it is likely to carry on the same kind of data analysis work, has been set up by Ahmed Al-Khatib, a former director of Emerdata, which was itself set up after the Cambridge Analytica scandal. Mr Al-Khatib has been reported as saying that Auspex will use ethically based, data-driven communications with a focus on improving the lives of people in the developing world.

Middle East and Africa

The developing-world markets that Auspex will initially be focusing on are the Middle East and Africa, and the kinds of ethical work it will be doing, according to Auspex’s own communications, are health campaigning and tackling the spread of extremist ideology among disenfranchised youth.


Auspex has been quick to state that it has made changes and that it will be fully compliant from the outset, thereby hoping to further distance itself from its murky origins in CA.


One thing that is likely to attract the attention of critics is that not only is Mark Turnbull, the former head of CA’s political division, the new Auspex Managing Director, but the listed directors of the new company include Alastair Harris, who is reported to have worked at CA, while Omar Al-Khatib is listed as a citizen of the Seychelles.

What Does This Mean For Your Business?

The Cambridge Analytica and Facebook scandal is relatively recent, and the ICO has only just presented its report on the incident. For many people, it may not feel right that personnel from Cambridge Analytica can appear to simply set up under another name and start again. Critics can be forgiven for not trusting statements about a new ethical approach, especially since Mark Turnbull appeared alongside former CA chief executive Alexander Nix in an undercover film by Channel 4, in which Nix gave examples of how his company could discredit politicians, e.g. by setting up encounters with prostitutes.

The introduction of GDPR has brought the matters of data security and privacy into sharp focus for businesses in the UK, and businesses will be all too aware of the possible penalties if they get on the wrong side of the ICO.

In the case of the Facebook / Cambridge Analytica scandal, the ICO has recently announced that Facebook will be fined £500,000 for data breaches, and that it is still considering taking legal action against CA’s directors. If successful, a prosecution of this kind could result in convictions and an unlimited fine.

£500,000 Fine For Facebook Data Breaches

Sixteen months after the Information Commissioner’s Office (ICO) began its investigation into Facebook’s sharing of users’ personal details with political consulting firm Cambridge Analytica, the ICO has announced that Facebook will be fined £500,000 for data breaches.


The amount of the fine is the maximum that could be imposed under the Data Protection Act 1998, which applies because the breaches pre-date GDPR. Although it sounds like a lot, for a corporation valued at around $500 billion, with $11.97 billion in advertising revenue and $4.98 billion in profit for the past quarter (mostly from mobile advertising), it remains to be seen how much of an effect it will have on Facebook.

Time Before Responding

Facebook has now been given time to respond to the ICO’s verdict before a final decision is made by the ICO.

Facebook has said, however, that it acknowledges that it should have done more to investigate claims about Cambridge Analytica and taken action back in 2015.

Reminder of What Happened

The fine relates to the harvesting of the personal details of 87 million Facebook users without their explicit consent, and the sharing of that personal data with London-based political consulting firm Cambridge Analytica, which is alleged to have used the data to target political messages and advertising in the last US presidential election campaign.

Harvested Facebook user data was also shared with AggregateIQ, a data company which worked with the ‘Vote Leave’ campaign in the run-up to the Brexit referendum.

The sharing of personal user data with those companies was exposed by former Cambridge Analytica employee and whistleblower Christopher Wylie. The resulting publicity caused public outrage, saw big falls in Facebook’s share value, brought apologies from its founder / owner, and saw insolvency proceedings (back in May) for Cambridge Analytica and its parent SCL Elections.

What About Cambridge Analytica?

Although Facebook has been given a £500,000 fine, Cambridge Analytica no longer exists as a company. The ICO has indicated, however, that it is still considering taking legal action against the company’s directors. If successful, a prosecution of this kind could result in convictions and an unlimited fine.


As for Canadian data analytics firm AggregateIQ (AIQ), the ICO is reported to still be investigating whether UK voters’ personal data provided by the Brexit referendum’s Vote Leave campaign was transferred and accessed outside the UK, and whether this amounted to a breach of the Data Protection Act. The ICO is also reported to be investigating the degree to which AIQ and SCL Elections shared UK personal data, and to have served an enforcement notice forbidding AIQ from continuing to make use of a list of UK citizens’ email addresses and names that it still holds.

Worries About 11 Main Political Parties

The ICO is also reported to have written to the UK’s 11 main political parties, asking them to have their data protection practices audited because it is concerned that the parties may have purchased certain information about members of the public from data brokers, who might not have obtained consent.

What Does This Mean For Your Business?

When this story originally broke, it was a wake-up call about what can happen to the personal data we entrust to companies and corporations, and it undoubtedly damaged trust between Facebook and its users to a degree. It is just as well that the ICO is there to follow things up on our behalf: a Reuters/Ipsos survey conducted back in April found that, even after all the publicity surrounding the Facebook and Cambridge Analytica scandal, most users remained loyal to the social media giant.

The case has also raised questions about how our data is shared and used for political purposes, and how using and sharing our data to target messages can influence the outcome of elections and, therefore, the whole economic and business landscape. This has led to calls for the UK government to step in and introduce a code of practice limiting how personal information can be used by political campaigns before the next general election.

Facebook has recently been waging a campaign, including heavy television advertising, to convince us that it has changed and is now more focused on protecting our privacy. Unfortunately, this idea has been challenged by the recent ‘Deceived By Design’ report by the government-funded Norwegian Consumer Council, which accused tech giants Microsoft, Facebook and Google of being unethical by leading users into selecting settings that do not actually benefit their privacy.

Tech Giant GDPR Privacy Settings ‘Unethical’ Says Council

The ‘Deceived By Design’ report by the government-funded Norwegian Consumer Council has accused tech giants Microsoft, Facebook and Google of being unethical by leading users into selecting settings that do not benefit their privacy.

Illusion of Control

The report alleges that, far from actually giving users more control over their personal data (as laid out by GDPR), the tech giants may simply be giving users the illusion that this is happening. The report points to the possible presence of practices such as:

– Facebook and Google making users who want the privacy-friendly option go through a significantly longer process (privacy intrusive defaults).

– Facebook, Google and Windows 10 using pop-ups that direct users away from the privacy-friendly choices.

– Google presenting users with a hard-to-use dashboard with a maze of options for their privacy and security settings. For example, on Facebook it takes 13 clicks to opt out of authorising data collection (opting in can take just one).

– Making it difficult to delete data that’s already been collected. For example, deleting data about location history requires clicking through 30 to 40 pages.

– Google not warning users about the downside of personalisation e.g. telling users they would simply see less useful ads, rather than mentioning the potential to be opted in to receive unbalanced political ad messages.

– Facebook and Google pushing consumers to accept data collection e.g. with Facebook stating how, if users keep face recognition turned off, Facebook won’t be able to stop a stranger from using the user’s photo to impersonate them, while not stating how Facebook will use the information collected.

Dark Patterns

In general, the report criticised how the use of “dark patterns”, such as misleading wording, default settings that are intrusive to privacy, settings that give users an illusion of control, hiding privacy-friendly options, and presenting “take-it-or-leave-it” choices, could be leading users to make choices that actually stop them from exercising all of their privacy rights.

Big Accept Button

The report, by Norway’s consumer protection watchdog, also notes how the GDPR-related notifications have a large button for consumers to accept the company’s current practices, which could appear to many users to be far more convenient than searching for the detail to read through.


Google, Facebook and Microsoft are all reported to have responded to the report’s findings by issuing statements focusing on the progress and improvements they’ve made towards meeting the requirements of the GDPR to date.

What Does This Mean For Your Business?

GDPR was supposed to give EU citizens much more control over their data, and the perhaps naive expectation was that companies with a lot to lose (in fines for non-compliance and in reputation), such as the big tech and social media companies, would simply fall into line and afford us all of those new rights straight away.

The report by the Norwegian consumer watchdog is more of a reality check, showing that our personal data is a valuable commodity to the big tech companies and that, according to the report, those companies are willing to manipulate users and give the illusion of following the rules without actually doing so. The report suggests that these large corporations are prepared to make consumers fight for rights that GDPR has already granted them.

Samsung Phones Sending Photos Without Permission

The Samsung Galaxy S9, Galaxy S9+ and Note 8 are all reported to have been recently affected by a bug in the Samsung Messages app that sends out photos from the user’s gallery without their permission … to random contacts.

What Happens?

According to Samsung phone users on social media and the company’s forum, some users have been affected by a bug in Samsung Messages, the default texting app on Galaxy devices. Reports indicate that the bug causes Samsung Messages to text photos stored in a user’s gallery to a random person listed as a contact. The user is not informed that the pictures have been sent, or to whom, and there has even been one reported complaint that a person’s whole gallery was sent to a contact in the middle of the night.


Although there is no conclusive evidence concerning the cause, online speculation has centred on the bug being related to the interaction between Samsung Messages and recent RCS (Rich Communication Services) profile updates that have rolled out on carriers including T-Mobile. These updates have been rolled out to add updated and new features to the outdated SMS protocol e.g. better media sharing and typing indicators.


Samsung is reported to have acknowledged the reports of problems and is said to be looking into them. Samsung is also reported to have urged concerned customers to contact it directly on 1-800-SAMSUNG, and the company has reportedly been in contact with T-Mobile about the issue. T-Mobile is recorded as saying that it is not its issue.

What Can You Do?

As well as contacting Samsung, and in the absence of any definitive news of a fix as yet, there are two main workarounds that Samsung owners can pursue. These are:

  1. Go into the phone’s app settings and revoke Samsung Messages’ ability to access storage. This should stop Messages from sending photos or anything else stored on the device.
  2. Switch to a different texting app, e.g. Android Messages or Textra. There are no known reports of these being affected by the same bug.

What Does This Mean For Your Business?

People pay a lot of money to get the latest phones and to get the right contracts to allow for the high volume of communications associated with business use. It is (at the very least) annoying, but more generally scary and potentially damaging that personal, private image files can be randomly sent. These photos could, for example, contain commercially sensitive information that could put a company’s competitive advantage at risk if sent to the wrong person. Also, some photos could cause embarrassment for the user and / or the subject of the photo, and could damage business and personal relationships if they fell into the wrong hands. Some photos sent to the wrong person, as well as compromising privacy, could pose serious security risks.

At a time when we acknowledge that photos of ourselves / our faces stored by e.g. CCTV cameras are our personal data, Samsung could find itself on the wrong end of GDPR-related and other lawsuits if found to be directly responsible for the bug and its results.

Tesla Traps Tripp

California-based vehicle tech corporation Tesla is suing a former employee, whom some saw simply as a whistleblower, over alleged acts of industrial espionage.


The former Tesla technician who stands accused by Tesla boss Elon Musk of industrial espionage has been named as Martin Tripp. The allegations made against Mr Tripp include that he was hacking and stealing company secrets, and that he wrote software that was designed to aid in the theft of photos and videos.

Tesla has also alleged that Mr Tripp was partly motivated to commit malicious acts against the company after he failed to get a promotion. Tesla has filed a federal lawsuit against him.

Tesla is also reported as saying that 40-year-old forces veteran Tripp made false claims to the media about the information he allegedly stole, particularly claims about punctured battery cells, excess scrap material and manufacturing delays.


Far from being a criminal who meant the company harm, Mr Tripp claims that he is simply a whistleblower whom the company is trying to get rid of in order to cover up details about products and components that could damage its reputation if they became known.

For example, Mr Tripp claims that he has simply been trying to expose “some really scary things” at Tesla, including punctured batteries being used in vehicles. Mr Tripp has also alleged that he became disillusioned with Tesla when (as he alleges) he saw how Elon Musk was lying to investors about how many cars they were making.

Mr Tripp has also been reported as saying that he didn’t write any software to aid the theft of photos and videos because he has no patience for coding, and that he didn’t care about failing to get a promotion.

Tripp is looking for legal protection as a whistleblower.

Silencing a Scapegoat?

Mr Tripp has been reported as saying that he is being made a scapegoat because he provided information that was true, that Tesla is doing everything it can to silence him, and that he feels he has had no rights as a whistleblower.

The local Sheriff’s office is reported as announcing that there is no credible threat to Tesla’s lithium-ion battery factory, known as the Gigafactory.

Mr Tripp has been reported as saying that he allegedly turned whistleblower after his concerns were not taken seriously by anyone in the company.

What Does This Mean For Your Business?

It would certainly not be unheard-of for a disgruntled employee or former employee to pose a security risk or commit acts of sabotage. For example, back in 2014, Andrew Skelton, an auditor at the Bradford head office of the supermarket chain Morrisons, leaked the personal details of almost 100,000 staff. Mr Skelton is believed to have deliberately stolen and leaked the data to get back at the company after he was accused of dealing in legal highs at work.

We are also familiar with how difficult companies, organisations and other interested parties can make life for whistleblowers. For example, media reports describe how Dr Hayley Dare received poison-pen letters and was dismissed from a 20-year unblemished career via a three-line email after raising concerns over a patient’s safety with her employer, an NHS Trust.

In the case of Tesla, it is currently not possible to say whether Mr Tripp is a whistleblower or a disgruntled former employee with malicious intent. What the case does remind us, though, is that corporate culture should be such that employees feel able to express their concerns and are listened to, and that this is viewed as a positive way to find areas for improvements and modifications that could actually help a company in the long run.

The Tesla story should also remind companies to plug basic security loopholes in IT systems when employees leave or are dismissed. This includes simply changing passwords, revoking access rights, and monitoring systems to ensure that nothing untoward is happening.

GDPR Exemption Sought

It has been reported that financial market regulators from the US, the UK and Asia are pressing for an exemption from GDPR.

Growing Calls For Exemption

Even though GDPR only came into force a little over a month ago (May 25th), financial regulators from several countries, most notably the US, have been pressing for several years for an exemption to be built in, and have hosted multiple meetings about the matter on both sides of the Atlantic.

What’s The Problem?

Before GDPR, financial regulators could use an exemption to share vital information, e.g. bank and trading account data, to advance misconduct probes. Now that GDPR is in force, regulators argue that the absence of an exemption means that international probes and enforcement actions in cases involving market manipulation and fraud could be hampered.

Regulators say that they are particularly concerned that U.S. investigations into crypto-currency fraud and market manipulation (in which many of the actors are based overseas) could be at risk. Without an exemption, regulators say, cross-border information sharing could be challenged because some countries’ privacy safeguards now fall short of those offered by the EU under GDPR.

Seeking An “Administrative Arrangement”

The form of exemption that regulators are reported to be seeking is a formal “administrative arrangement” with the Brussels-based European Data Protection Board (EDPB), headed by Andrea Jelinek. The written arrangement would clarify if and how the public interest exemption can be applied to their cross-border information sharing.

Which Regulators?

Reports indicate that the regulators involved in discussions about getting an exemption include the EU’s European Securities and Markets Authority (ESMA), the U.S. Commodity Futures Trading Commission (CFTC), the Securities and Exchange Commission (SEC), the Ontario Securities Commission (OSC), the Japan Financial Services Agency (FSA), Britain’s Financial Conduct Authority (FCA), and the Hong Kong Securities and Futures Commission (SFC).

Why Not?

The worry from the EDPB is that granting exemptions could lead to the illegitimate circumventing and watering down of the new GDPR privacy safeguards, now among the toughest in the world. This, in turn, could lead to harm to EU citizens, which is exactly the opposite of the reason GDPR was introduced.

The matter has, however, been complicated by the fact that regulators’ slow response to the 2007–2009 global financial crisis was partly blamed on poor cross-border coordination, which has since been improved. Better information sharing after the crisis is reported to have led to billions of dollars in fines for banks, e.g. for trying to rig Libor interest rate benchmarks.

What Does This Mean For Your Business?

A financial crisis (e.g. involving bad behaviour by banks) can create serious knock-on costs and problems for businesses worldwide, and it is, therefore, possible to see why financial regulators feel they need an exemption so that they can continue to share information which will ultimately be in the interest of business and the public. It is likely, therefore, that discussions will continue for some time yet to try to find a way to grant exemptions in certain circumstances.

The contrary view is that granting exemptions will water down legislation that was designed to offer stronger protection to us all, potentially putting EU citizens at risk, and allowing organisations that we can’t effectively monitor to simply circumvent the new law and behave how they like. This could undermine the privacy and rights of EU citizens.

Calls to Stop Storing of Personal Communications Data and Voiceprints

Privacy groups have led calls to halt the blanket collection and storing of communications data in the EU area, and the creation and storing of the “audio signatures” of 5.1 million people by HM Revenue and Customs (HMRC).

Collection of Communications Data

The privacy groups Privacy International, Liberty, and Open Rights Group, have filed complaints to the European Commission which call for EU governments to stop making companies collect and store all communications data. Their complaints have also been echoed by dozens of community groups, non-governmental organisations (NGOs), and academics.

What’s The Problem?

The main complaint is that communications companies in EU states indiscriminately collect and retain all of our communications data. This includes the details of all calls, texts and so forth (i.e. who with, dates, times etc).

The privacy groups and their supporters argue that not only does this amount to a form of intrusive surveillance, but that the practice was actually ruled unlawful by the Court of Justice of the European Union (CJEU) in two judgments in 2014 and 2016.

Privacy groups have expressed concern that some companies in some EU states have tried to circumvent the CJEU judgments, even though the CJEU has clearly stated that general and indiscriminate retention of communications data is disproportionate and cannot be justified.

In the UK, for example, the intelligence agencies collect details of thousands of calls daily, which, under the CJEU judgments, amounts to breaking the law.

HMRC Collecting Recordings of Voices

Perhaps even more shocking is the news this week that, according to privacy group Big Brother Watch, the UK HM Revenue and Customs (HMRC) has a Voice ID system that has collected 5.1 million audio signatures.

The accusation is that HMRC is creating biometric ID cards or voiceprints by the back door. These voiceprints could conceivably be used by government agencies to identify UK citizens across other areas of their private lives.

Big Brother Watch has also expressed concern that customers are not given the choice to opt out of the use of this system.

Helpful and Secure

HMRC, which launched the Voice ID scheme last year, asks callers to repeat the phrase “my voice is my password” to register and access their tax details, and says that the system has been very popular with customers. HMRC has also said that the 5 million+ voice recordings that it already has are stored securely.

Privacy campaigners are calling for the deletion of the voiceprints that are currently stored, and for a different system to be implemented, or to at least allow customers to opt out of Voice ID and to be able to use an alternative method.

What Does This Mean For Your Business?

Businesses may be very aware, after having to adjust their own systems to be compliant with the recently introduced GDPR, that all EU citizens should now have more rights over what happens to their personal data. The term ‘personal data’ in the GDPR sense now covers things like our images on CCTV footage, and should, therefore, cover recordings of our personal conversations and biometric data such as recordings of our voices / voiceprints / audio signatures.

While we may accept that there are arguments for monitoring our communications data e.g. fighting terrorism, many people clearly feel that the blanket collection of all communications data, not just that of suspects, is a step too far, is an invasion of privacy, and has echoes of ‘big brother’.

Biometrics, e.g. using a fingerprint or face-print to access a phone or as part of the security for accessing a bank account, is now becoming more commonplace, and can be a helpful, more secure way of validating and authenticating access. Again, images of our faces, our fingerprints, and our audio signatures (in the case of HMRC) are our personal data, and it is right that we should want them to be secure and, as with GDPR, used only for the one purpose we have given consent for, not passed secretly among states and unknown agencies. Also, the idea that we can opt in to or out of such systems, and are given a choice of which system we use (i.e. not being forced to submit a voice recording), is an important issue, and one that many thought GDPR would address.

As more and more biometric systems come into use in the future, legislation will, no doubt, need to be updated again to take account of the changes.

Appeal Dismissed After Asylum Seeker Data Breach

An appeal by the UK Home Office seeking to limit the number of potential claimants from a 2013 data breach, in which an accidentally uploaded spreadsheet exposed the confidential information and personal data of asylum applicants and their family members, has been dismissed.

What Happened?

Back in 2013, the Home Office is reported to have uploaded a spreadsheet to its website. The spreadsheet should simply have contained general statistics about the system by which children who have no legal right to remain in the UK are returned to their country of origin (known as ‘the family returns process’).

Unfortunately, this spreadsheet also contained a link to a different downloadable spreadsheet that displayed the actual names of 1,598 lead applicants for asylum or leave to remain. It also contained personal details such as the applicants’ ages, nationality, the stage they had reached in the process and the office that dealt with their case. This information could also potentially be used to infer where they lived.

The spreadsheet is reported to have been available online for almost two weeks during which time the page containing the link was accessed from 22 different IP addresses and the spreadsheet was downloaded at least once. The spreadsheet was also republished to a US website, and from there it was accessed 86 times during a period of almost one month before it was finally taken down.

For those claiming asylum e.g. because of persecution in the home country that they had escaped from, this was clearly a very distressing and worrying situation.


In the court case that followed in June 2016, the Home Office was ordered to pay six claimants a combined total of £39,500 for the misuse of private information and breaches of the Data Protection Act (“DPA”). The Home Office conceded that its actions amounted to a misuse of private information (“MPI”) and breaches of the DPA.

The Home Office did, however, lodge an appeal in an apparent attempt to limit the number of other potential claims for damages.

Appeal Dismissed

The appeal by the Home Office was dismissed by the three Appeal Court judges, meaning that both the named applicants and their wives (if proof of ‘distress’ could be shown) could sue under both the common law and statutory torts. This was because, as the judges put it, the processing of data in a claimant’s name about his family members was just as much the processing of their personal data as of his; their personal and confidential information had therefore also been misused.

Not The First Time

The Home Office appears to have been the subject of similar incidents in the past. For example, back in January the Home Office paid £15,500 in compensation after admitting handing over sensitive information about an asylum seeker to the government of his Middle East home country, thereby possibly endangering his life and that of his family.

The handling of the ‘Windrush’ cases, which has recently made the headlines, has also raised questions about the quality of decision-making and the processes in place when it comes to matters of immigration.

What Does This Mean For Your Business?

In this case, it is possible that those individuals whose personal details were exposed experienced distress, and that their safety and that of their families, as well as their privacy, could have been compromised. This story underlines the importance of organisations and businesses being able to handle the personal data of service users, clients and other stakeholders correctly and securely. This is particularly relevant since the introduction of GDPR.

It is tempting to say that this case illustrates that no organisation is above the law when it comes to data protection. However, it was announced in April that the Home Office will be granted data protection exemptions via a new data protection bill. The exemptions could deprive applicants of a reliable means of obtaining files about themselves from the department through ‘subject access requests’. It has also been claimed that the new bill will mean that data could be shared more easily, and secretly, between public services such as the NHS and the Home Office. Some critics have said that the bill effectively exempts immigration matters from data protection. If so, this goes against the principles of accountability and transparency on which GDPR is based. It remains to be seen how this bill will progress and be challenged.

AI Creates Phishing URLs That Can Beat Auto-Detection

A group of computer scientists from Florida-based cyber-security company Cyxtera Technologies is reported to have built machine-learning software that can generate phishing URLs capable of beating popular security tools.

Look Legitimate

Using the Phishtank database (a free community site where anyone can submit, verify, track and share phishing data), the scientists built DeepPhish, machine-learning software able to create URLs for web pages that appear to be legitimate login pages for real websites (but are not).

In actual fact, the URLs, which can fool security tools, lead to web pages that collect the usernames and passwords entered, for malicious purposes e.g. to hijack accounts at a later date.


DeepPhish is, at its core, an AI algorithm. It was able to produce the fake but convincing URLs by learning the effective patterns used by threat actors and using them to generate new, unseen, and effective attacks based on that attacker data.

Can Increase The Effectiveness of Phishing Attacks

Using Phishtank and the DeepPhish AI algorithm in tests, the scientists found that two uncovered attackers could increase the effectiveness of their phishing attacks from 0.69% to 20.9%, and from 4.91% to 36.28%, respectively.
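As a quick sanity check of those reported figures, the improvement factors are easy to compute (a minimal sketch; the rates are the percentages quoted above, and the attacker labels are ours):

```python
# Reported phishing-attack effectiveness (as percentages) before and
# after applying the DeepPhish approach, for the two attackers studied.
attackers = {
    "attacker_1": (0.69, 20.90),
    "attacker_2": (4.91, 36.28),
}

for name, (before, after) in attackers.items():
    factor = after / before
    print(f"{name}: {before}% -> {after}% (~{factor:.0f}x more effective)")
# The first attacker improves roughly 30-fold, the second roughly 7-fold.
```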

Training The AI Algorithm

The effectiveness of AI algorithms is improved by ‘training’ them. In this case, the training involved the team of scientists first inspecting more than a million URLs on Phishtank. From this, the team were able to identify three different phishing attacks that had generated web pages to steal people’s credentials. These web addresses were then fed into the AI phishing-detection algorithm to measure how effective the URLs were at bypassing a detection system.

The team then fed all the text from effective, malicious URLs into a Long Short-Term Memory (LSTM) network so that the algorithm could learn the general structure of effective URLs and extract relevant features.

All of this enabled the algorithm to learn how to generate the kind of phishing URLs that could beat popular security tools.
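To illustrate the general train-then-generate idea (this is not the researchers' code), the sketch below fits a simple character-level bigram model to a handful of invented URL-like strings and samples new ones from it. DeepPhish used an LSTM, which captures much longer-range structure, but the overall loop is the same shape: learn the statistics of effective URLs, then generate fresh candidates. All URLs here are made up for illustration.

```python
import random
from collections import defaultdict

# Toy training set standing in for the effective phishing URLs
# harvested from Phishtank (invented examples, not real data).
urls = [
    "http://secure-login.example-bank.com/verify",
    "http://account-update.example-pay.com/login",
    "http://signin.example-mail.com/session/check",
]

# "Train": count which character follows which (a bigram model).
# An LSTM would learn far richer structure, but the principle is the
# same: model the statistics of known-effective URLs.
counts = defaultdict(list)
for url in urls:
    padded = "^" + url + "$"          # start/end markers
    for a, b in zip(padded, padded[1:]):
        counts[a].append(b)

def generate(max_len=60, seed=None):
    """Sample a new URL-like string from the learned statistics."""
    rng = random.Random(seed)
    out, ch = [], "^"
    for _ in range(max_len):
        ch = rng.choice(counts[ch])
        if ch == "$":                 # end marker reached
            break
        out.append(ch)
    return "".join(out)

print(generate(seed=1))  # prints a URL-like string learned from the toy data
```

In the real system, each generated candidate would then be scored against a detection algorithm, and only the ones that slip through would be kept, which is what drives the effectiveness gains reported above.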

What Does This Mean For Your Business?

AI offers some exciting opportunities for businesses to save time and money, and improve the effectiveness of their services. Where cyber-security is concerned, AI-enhanced detection systems are more accurate than traditional manual classification, and the use of intelligent detection systems has enabled the identification of threat patterns and the detection of phishing URLs with 98.7% accuracy, thereby giving the battle advantage to defensive teams.

However, it has been feared for some time that if cyber-criminals were able to use well-trained and sophisticated AI systems to defeat both traditional and AI-based cyber-defence systems, this could pose a major threat to Internet and data security, and could put many businesses in danger.

The tests by the Florida-based cyber-security scientists don’t show very high success rates in generating defence-beating phishing URLs. For now, this is a good thing, because it suggests that most cyber-criminals, who have fewer resources, may not yet be able to harness the full power of AI to launch attacks. The hope is that the makers of detection and security systems will be able to use AI to stay one step ahead of attackers.

State-sponsored attackers, however, may have many more resources at their disposal, and it is highly likely that AI-based attack methods are already being used by state-sponsored players. Unfortunately, state-sponsored attacks can cause a lot of damage in the business and civilian worlds.