Archive for Social Media

Fake News Fact Checkers Working With Facebook

London-based, registered charity ‘Full Fact’ will now be working for Facebook, reviewing stories, images and videos, in an attempt to tackle misinformation that could “damage people’s health or safety or undermine democratic processes”.

Why?

The UK Brexit referendum, the 2017 UK general election, and the 2016 U.S. presidential election were all found to have suffered interference in the form of so-called ‘fake news’ / misinformation spread via Facebook, which appears to have affected the outcomes by influencing voters.

For example, back in 2018, it was revealed that London-based data analytics company, Cambridge Analytica, which was once headed by Trump’s key adviser Steve Bannon, had illegally harvested 50 million Facebook profiles in early 2014 in order to build a software program that was used to predict and generate personalised political adverts to influence choices at the ballot box in the last U.S. election. Russia was also implicated in trying to influence voters via Facebook.

Chief executive of Facebook, Mark Zuckerberg, was made to appear before the U.S. Congress in April to talk about how Facebook is tackling false reports, and even recently a video that was shared via Facebook (which had 4 million views before being taken down) falsely suggested that smart meters emit radiation levels that are harmful to health. The information in the video was believed by many even though it was false.

Scoring System

Back in August 2018, it was revealed that for 2 years Facebook had been trying to manage some misinformation issues by using a system (operated by its own ‘misinformation team’) that allocated a trustworthiness score to some members.  Facebook is reported to be already working with fact-checkers in more than 20 countries. Facebook is also reported to have had a working relationship with Full Fact since 2016.

Full Fact’s System

This new system from third-party Full Fact will now focus on Facebook in the UK.  When users flag up to Facebook what they suspect may be false content, the Full Fact team will identify and review public pictures, videos or stories and use a rating system that will categorise them as true, false or a mixture of accurate and inaccurate content.  Users will then be told if the story they’ve shared, or are about to share, has been checked by Full Fact, and they’ll be given the option to read more about the claim’s source, but will not be stopped from sharing anything.

Also, the false rating system should mean that false content will appear lower in news feeds, so it reaches fewer people. Satire from a page or domain that is a known satire publication will not be penalised.
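Facebook hasn’t published the exact mechanics of this demotion, but the behaviour described above can be sketched as a simple ranking adjustment in which a fact-check rating scales a story’s feed score downwards. The rating names and weights below are illustrative assumptions, not Facebook’s actual algorithm:

```python
# Illustrative sketch of rating-based feed demotion. The weights are
# assumptions for demonstration, not Facebook's real values.
RATING_WEIGHTS = {
    "true": 1.0,      # accurate content keeps its full ranking score
    "mixture": 0.5,   # partly accurate content is demoted moderately
    "false": 0.1,     # false-rated content drops far down the feed
    "satire": 1.0,    # known satire publications are not penalised
}

def feed_score(base_score, rating=None):
    """Scale a story's ranking score by its fact-check rating, if it has one."""
    if rating is None:  # unchecked content keeps its base score
        return base_score
    return base_score * RATING_WEIGHTS.get(rating, 1.0)
```

Under this sketch a false-rated story still appears in feeds (and can still be shared), it simply ranks below accurate content with the same base score.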

Like other Facebook third-party fact-checkers, Full Fact will be able to act against pages and domains that repeatedly share false-rated content e.g. by reducing their distribution and by reducing their ability to monetise and advertise.  Also, Full Fact should be able to stop repeat offenders from registering as a news page on Facebook.

Assurances

Full Fact has published assurances that among other things, they won’t be given access to Facebook users’ private data for any reason, Facebook will have no control over what they choose to check, and they will operate in a way that is independent, impartial and open.

Political Ad Transparency – New Rules

In October last year, Facebook also announced a new rule for the UK which means that anyone who wishes to place an advert relating to a live political issue or promoting a UK political candidate – referencing political figures, political parties, elections, legislation before Parliament or past referenda that are the subject of national debate – will need to prove their identity, and prove that they are based in the UK. The adverts they post will also have to carry a “Paid for by” disclaimer to enable Facebook users to see who they are engaging with when viewing the ad.

What Does This Mean For Your Business?

As users of social networks, we don’t want to see false news, and false news that influences the outcome of important issues (e.g. elections and referendums) has a knock-on effect on the economic and trade environment which, in turn, affects businesses.

Facebook appears to have lost a lot of trust over the Cambridge Analytica (SCL Elections) scandal, findings that Facebook was used to distribute posts of Russian origin to influence opinion in the U.S. election, and that the platform was also used by parties wishing to influence the outcome of the UK Referendum. Facebook, therefore, must show that it is taking the kind of action that doesn’t stifle free speech but does go some way to tackling the spread of misinformation via its platform.

There remains, however, some criticism in this case that Facebook may still be acting too slowly and not decisively enough, given the speed by which some false content can amass millions of views.

Reddit Locks Out Users Over Security Concerns

Online community Reddit shut some users out of their accounts and forced password resets due to “unusual activity” which may have been a ‘credential stuffing’ attempt by hackers.

Reddit

California-based Reddit, founded in 2005, is a kind of social network / online community.  Reddit, which is the fifth most popular site in the United States (Alexa figures), is split into over a million communities called “subreddits,” each one covering a different topic.  Reddit allows registered members to submit content to the site, and that content is voted up and down by other members.

What Happened With The Lockdown?

According to Reddit’s own reports, a large group of accounts had to be locked down due to a security concern which took the form of account activity that resembled someone using very simple passwords or the reuse of credentials across multiple websites or services – in other words, a credential-stuffing attempt.

Reddit’s admin known as “u/Sporkicide” reported that it appeared likely that a list of usernames and passwords, possibly taken from another compromised site, was being tried against other popular sites, including Reddit, to see if the credentials worked e.g. where a user had used the same username and password for multiple websites.

Reddit advised customers that those with locked accounts would be able to reset their passwords and thereby unlock and restore their accounts. Reddit said that the prompt to do so would come as a notification to the account (affected customers could still log in to see it) and/or an email in reply to any support ticket raised by affected users.

Not The First Time

Back in August 2018, Reddit reported that between June 14th and June 18th, an attacker compromised some employee accounts through its cloud and source code hosting providers and was able to access some user data, including email addresses and a complete 2007 database backup containing old passwords and early Reddit user data from the site’s launch in 2005 through May 2007.

Advice

As well as announcing that it was conducting a “painstaking investigation” of the incident, Reddit advised users to make sure that they choose strong passwords that are unique to Reddit, update their email addresses to enable automated password resets, and add two-factor authentication to their accounts to make them more secure.

What Does This Mean For Your Business?

This story highlights the importance of not using the same username and password across many websites.  The danger is that, if hackers can steal login credentials in a hack on one website, they or other attackers who have purchased / acquired the stolen data may well try to use that login data on many other popular websites to try and gain access.

Also, where other security measures such as two-factor authentication are available, it is worth using it as an extra obstacle to the kind of simple, opportunistic credential-stuffing attempts that are all-too-frequent.
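Two-factor authentication of this kind is most commonly implemented as TOTP (RFC 6238), in which the server and the user’s authenticator app each derive a short-lived code from a shared secret, so a stolen password alone is not enough to log in. A minimal standard-library sketch of the code generation (not any particular site’s implementation):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, for_time=None, digits=6, period=30):
    """Compute an RFC 6238 time-based one-time password (SHA-1 variant)."""
    key = base64.b32decode(secret_b32, casefold=True)
    # Number of whole time-steps since the Unix epoch
    counter = int((time.time() if for_time is None else for_time) // period)
    msg = struct.pack(">Q", counter)               # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                     # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

The code changes every 30 seconds; at login, the server simply computes the same value from its copy of the secret and compares.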

Businesses / organisations should always encourage users to use login details that are unique to their website, give visual guidance on password strength on set-up, and specify a certain number of required characters for passwords e.g. including a capital letter, numbers, other special characters, and making the password a certain length.  As well as being a bit more secure, this can also help to stop people from using exactly the same password between multiple sites.
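A minimal version of the password-policy check described above might look like the following; the specific rules (length, character classes) are illustrative choices for the sketch, not a standard:

```python
import re

def password_problems(password, min_length=12):
    """Return a list of policy violations; an empty list means the password passes.
    The rules here are illustrative, mirroring common sign-up guidance."""
    problems = []
    if len(password) < min_length:
        problems.append(f"must be at least {min_length} characters")
    if not re.search(r"[A-Z]", password):
        problems.append("must contain a capital letter")
    if not re.search(r"[0-9]", password):
        problems.append("must contain a number")
    if not re.search(r"[^A-Za-z0-9]", password):
        problems.append("must contain a special character")
    return problems
```

Returning the full list of violations (rather than failing on the first) is what allows a sign-up form to give the visual, as-you-type guidance mentioned above.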

Blurring of Personal and Business Technology Cause For Concern

A report by CCS Insight showing how three-quarters of employees are forced to install work software on their personal mobile devices has highlighted how the increased blurring of personal and business technology is causing concern.

Objections

The report, which took into account the views of 672 employees across the US and Western Europe about digital technology, revealed how, among many other concerns, workers object to the practice of having to download work-based applications onto their personal mobile devices just so that they can carry out their jobs. As well as the understandable objection to feeling forced to blur work and home life by having to install intrusive work software on a personal device, employees also objected to the practice out of fear that their employers could use the software to track them.

Poor Connectivity

Another major annoyance indicated by workers who took part in the survey was poor connectivity in the digital workplace.

WhatsApp Popularity

Despite highlighting poor connectivity at work as a major grumble for workers, it appears that it hasn’t stopped them from using always-on, connected apps. For example, the report revealed that WhatsApp is now the most widely used mobile app in businesses, even beating out Microsoft Office 365. WhatsApp, however, is likely to be something that workers will have on their phone anyway, and its end-to-end encryption means that workers don’t have to fear any kind of tracking by the boss through its use.

Other Concerns

Other employee concerns highlighted by the report include:

  • The fear that their job may be lost to AI. This concern was expressed despite half of the employees surveyed saying that they expect digital assistants such as Google Assistant to help them in their job.
  • Only two-thirds of employees saying that they trust their employers with their privacy.
  • A mistrust of tech giant companies, although Microsoft was shown to be more trusted than most.

What Does This Mean For Your Business?

The fact that many employees have high-spec mobile devices and access to apps that could be used by the company, and that ‘Bring Your Own Device’ (BYOD) schemes are commonplace, doesn’t appear to make employees feel comfortable about having to download work-based apps. Employees may be justified in feeling that they shouldn’t be pressured into using their personal devices for work tasks, that employers shouldn’t rely so heavily upon employees’ personal devices instead of providing their own, and that respecting the barrier between work and home life is important. By the same token, employers who allow workers to use their own devices at work may also expect employees to be respectful in how much time they spend dealing with personal matters on those devices during work time.

Workers may be justified in worrying about the impact of AI on their jobs in the future, and connectivity problems are a known source of work stress, particularly in the case of mobile workers.

When it comes to the mistrust of tech giants, this seems reasonable considering the number of high profile reports of data breaches and unauthorised data sharing in recent times (e.g. Facebook and Cambridge Analytica).

New Political Ad Transparency Rules Tested With Pro-Brexit Website

No sooner had Facebook announced new rules to force political advertisers to prove their identities and their ad spend than an anonymous pro-Brexit campaign website with a massive £257,000 ad spend was discovered.

Mainstream Network

The anonymous website and campaign, identified only as ‘Mainstream Network’, was discovered by campaign group 89up. Clicking on the Facebook adverts by Mainstream Network takes users to a page on their local constituency and MP, and clicking from there was found to generate an email to their MP requesting that the Prime Minister abandon her Chequers Brexit deal. It has also been discovered that a copy of each of these emails is sent back to Mainstream Network.

11 Million People Reached

Campaign group 89up estimates that the unknown backers of Mainstream Network must have spent in the region of £257,000 to date on the Facebook adverts, which it estimates could have reached 11 million people.

What’s The Problem?

The problem with these political adverts is that Facebook has recently announced new rules in the UK that require anyone wishing to place an advert relating to a live political issue, promoting a UK political candidate, or referencing political figures, political parties, elections, legislation before Parliament or past referenda that are the subject of national debate, to prove their identity and that they are based in the UK. Policing this should involve obtaining proof of identity and location e.g. by checking a passport, driving licence or residence permit. According to Facebook, any political adverts must also carry a “Paid for by” disclaimer to enable Facebook users to see who the adverts are from, and the “Paid for by” link next to each advert should lead through to a publicly searchable archive of political adverts showing a range for the ad’s budget and the number of people reached, the other ads that Page is running, and previous ads from the same source.

GDPR Breach Too?

It is also believed that sending a copy of the email back to Mainstream Network, in this case, could also constitute a breach of GDPR.

First Job For Facebook’s Nick Clegg

What to do about Mainstream Network and their campaign could end up being the first big task of Facebook’s newly appointed global communications chief and former deputy PM Sir Nick Clegg. It’s been reported that Mark Zuckerberg himself and Facebook’s chief operating officer Sheryl Sandberg were personally involved in recruiting Mr Clegg given the importance and nature of the role.

What Does This Mean For Your Business?

After Facebook announced new rules to ensure political ad transparency, the discovery of Mainstream Network’s anonymous adverts and the scale of the ad spend and reach must be at the very least embarrassing and awkward for Facebook, and is another piece of unwanted bad publicity for the social network tech giant. Whatever a campaign of this kind and scale is for, Facebook must really be seen to act in order to retain the credibility of its claims that it wants political ad transparency, not to lose any more of the trust of its users and advertisers, and to avoid being linked with any more political influence scandals.

Facebook has recently faced many other high-profile problems including how much tax it pays, the scandal of sharing user details with Cambridge Analytica and AggregateIQ (over the UK referendum), a fine by the ICO for breaches of the U.K.’s Data Protection Act, and a major hack, and it is perhaps with all this in mind that it has hired a former politician and UK Deputy Prime Minister. Some political commentators have also noted that it may be very useful for Facebook to have a person on board who knows the key players, who has reach, and who is able to lobby on Facebook’s behalf in one of its toughest regulatory areas, the European Union.

Facebook Messenger May Introduce Voice Commands

It has been reported that Facebook has been testing how voice commands could be used in its Messenger platform to help users to send messages, initiate voice calls and set reminders.

Internally Testing

Facebook is reported to have confirmed to tech news platform ‘TechCrunch’ that it is internally testing a prototype of voice control (which was discovered by a TechCrunch tipster) in the M assistant of Messenger.

Facebook’s new speech recognition feature goes by the name of ‘Aloha’. It is believed that Aloha will be used for Facebook and Messenger apps, as well as external hardware. The Aloha voice assistant could become part of Facebook’s planned Portal video chat screen device / smart speaker, which is currently in development.

Benefits

Enabling voice control in the Messenger platform could bring considerable benefits to users, such as being able to use Messenger ‘hands-free’ in the car, improving accessibility, and generally making it easier for people to use the Messenger platform in the home and on the go.

How Will It Work?

Initial reports indicate that Aloha will be activated in Messenger by tapping an M assistant button which will appear at the top of a message thread screen. This will enable listening for voice commands.

Need To Differentiate

Apart from the obvious, high-profile, negative publicity over the Cambridge Analytica data sharing and the recent massive hack, Facebook has experienced challenges in recent times as many of its younger users have moved to Snapchat. Facebook bought Instagram in a move that many saw as a way to attract the young users that moved from Facebook, but this strategy doesn’t appear to have been highly successful.

Adding a voice assistant to Messenger could, therefore, be a way for it to tackle part of this issue, and to differentiate its Messenger option from competitors such as SMS, Snapchat, Android Messages, iMessage and other texting platforms. Facebook is also known to be experimenting with other visual features such as Facebook Stories, augmented reality filters and more in order to help engage and retain users, and differentiate its services.

What Does This Mean For Your Business?

Facebook has been relatively late to the market with a digital voice assistant, but it appears to have found a way to deploy it at a time when it may be most needed to help differentiate its services from competing services, and to generate some good publicity amid the bad.

One of the biggest challenges that Facebook has at the moment, apart from the fact that Snapchat, iMessage, WhatsApp and other services are already popular and users may be loyal, is one of user trust. The Cambridge Analytica data sharing scandal, and the recent hack which could have more reverberations as cyber-criminals sell and use the data they stole, could mean that users may not trust Facebook to handle their speech data as responsibly as they would like. There are, for example, stories of how other digital voice assistants have listened in on their users e.g. back in May when an Amazon Echo (Alexa) recorded a woman’s conversation and shared it with one of her husband’s employees. It remains to be seen, therefore, whether users will now be willing to trust Facebook with what is still quite a sensitive area of personal data governance, particularly where business conversations are concerned.

Facebook Promptly Removes Myanmar Military Accounts

Facebook has reacted quickly and taken down many accounts of Myanmar’s military leaders after a damning UN report accused them of genocide and war crimes against the Muslim Rohingya population.

Removed

It has been reported that, following the release of the report, Facebook removed a total of 18 user accounts, and 52 pages associated with the Myanmar military. These are thought to include the page of its commander-in-chief.

All of the removed pages and accounts are thought to have had a total of almost 12 million followers.

The Situation, In Brief

The action by Facebook relates to the situation in Myanmar, formerly Burma, where approximately 25,000 Rohingya Muslims have been killed and an estimated 700,000 forced to flee to Bangladesh over the past year. The blame for the alleged genocide has been placed firmly at the door of the Myanmar military, and the country’s leader and Nobel Prize winner, Aung San Suu Kyi, has been widely criticised for apparently failing to use her position as head of government, or her moral authority, to stop the persecution and violence in Rakhine state.

Facebook – Several Reasons

As well as the fact that the accounts relate to suspected war criminals, Facebook has several reasons to act quickly in taking down the accounts of military leaders and their associates, including the fact that:

– Facebook, by its own admission, had been too slow up until now in acting to remove posts aimed at stirring up and spreading hatred against the minority Muslim Rohingya population.

– Facebook is a very popular social network in Myanmar, and thus bad as well as good messages can be distributed widely and quickly using the platform.

– The Tatmadaw (the official name of the armed forces of Myanmar) has been using its official Facebook pages to discredit allegations of the crimes it has committed, and to stir up further fears about the Rohingya. Also, the Tatmadaw is thought to have been using bogus independent news and opinion pages to covertly push its own messages.

– Facebook is very aware that it has been used as a means to influence political opinion and even election outcomes in some other countries, e.g. the alleged Russian use of Facebook in the US election. This has made Facebook anxious to stop this happening again.

– The social network, along with other platforms, apps, and tech giants, has long been accused by many different governments of failing to act / failing to act quickly enough to remove hate speech and racist content.

– Facebook and other platforms have been threatened with regulation e.g. Ofcom in the UK, and Facebook is anxious to claw back much of the trust it lost in the Cambridge Analytica scandal, as well as getting some good publicity.

What Does This Mean For Your Business?

Most businesses like to operate in and associate themselves with stable countries, particularly where they feel the government is trustworthy, and where the military don’t have too much power. The cost in human suffering of events and circumstances in Myanmar has been terrible, and this has also caused the economy to suffer, as its growth has slowed, its currency has dropped against the dollar, and other countries and potential trading partners have tried to distance themselves from the current regime.

For Facebook, this has been a much-needed opportunity to present its positive side and show that it can and will act quickly to police its own network where it feels it has credible and conclusive evidence to do so, and to be able to justify its actions. This has been something that Facebook appears to have been much more keen to do lately e.g. in deleting 30+ pages and accounts attempting to influence the US midterm elections, and in removing 650+ fake Facebook accounts and pages, and pages designed to influence politics in US and the UK, as well as in the Middle East and Latin America.

The power and responsibility of social network platforms is now beginning to become apparent. Businesses are now major advertisers on social networks too, and as such, they need to ensure that they can reach the right audience in enough numbers and that their advertising doesn’t suffer from negative associations or being displayed next to content or posts that promote hatred.

Facebook Uses Scoring System To Manage Misinformation

It has been reported that Facebook allocates a trustworthiness score to some members to help it manage misinformation issues such as some members continually flagging / reporting stories as fake if they don’t agree with the content.

Score?

It is not publicly known exactly how the score is arrived at, but it has been reported recently in the Washington Post that Facebook’s ‘Misinformation Team’ will be making use of the metric, a system that has taken a year to develop.

Why?

It is understood that the system, which Facebook denies amounts to a reputation score, is part of an initiative announced 2 years ago to find a way to deal with issues around fake news and fighting misinformation.

These include both making news with dubious / fake content appear lower in users’ news feeds, and stopping people from indiscriminately flagging news as fake in order to control and influence news and opinions.

Repeat Flaggers In The Spotlight

The scoring system will have a focus on stopping some Facebook members from simply flagging / reporting stories they don’t agree with.

Some commentators have speculated that this part of the scoring system works by correlating any false news reports with the decisions of independent fact-checkers, and by giving higher scores (and presumably higher news feed positions) to a user who makes a single complaint that is substantiated, than to a user who makes lots of complaints, only some of which are substantiated.
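Building on that speculation, one crude way such a metric could work is to weight each user’s future flags by the fraction of their past flags that fact-checkers substantiated, smoothed so that new users start from a neutral score. This is purely a sketch of the speculation above, not Facebook’s actual formula, and every name and constant in it is an assumption:

```python
def flag_weight(substantiated, total_flags, prior=0.5, prior_strength=2):
    """Weight a user's future flags by their substantiated-flag rate,
    smoothed towards a neutral prior so new users aren't judged on no data.
    Illustrative only: not Facebook's published or known method."""
    return (substantiated + prior * prior_strength) / (total_flags + prior_strength)
```

Under this sketch, a user with one substantiated complaint scores higher than a user with three substantiated complaints out of ten, matching the speculated behaviour of rewarding accuracy over volume.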

Not The First Time

Facebook is not the first and only platform to use such scoring systems for members. For example, Uber scores passengers based on ratings given by drivers, Twitter has been reported as having used a reputation score to help recommend which members to follow, and a pilot scheme in China is allocating a social credit score to citizens based on their online behaviour.

Criticism

The Facebook scoring system has been criticised by some people who say that Facebook’s own trustworthiness is unregulated, that the scoring system is automated and not transparent, and that it could amount to another way of Facebook using people’s data in a way they may not expect or want (bearing in mind the Facebook / Cambridge Analytica scandal).

What Does This Mean For Your Business?

We are used to the idea that decisions that affect businesses are made using algorithms and automatic scoring systems, e.g. search engine rankings. If the new Facebook scoring system works as it should and for the purpose that Facebook has stated, then it may contribute to better management of misinformation, which can only benefit the economy and businesses.

Unfortunately, how Facebook can be trusted to use our data behind the scenes is a sore subject at the moment, and it could be said that mistrust of Facebook and its motives with this move is expected and healthy. Since the Cambridge Analytica revelations, and findings that Facebook was used to distribute dubious, politically influential posts of Russian origin leading up to the US election, Facebook has to at least be seen / reported to be doing more to manage misinformation on its platform.

Unfortunately for Facebook, the scoring system is unlikely to appeal to President Trump, who has warned that it is dangerous for tech / social media companies such as Facebook to regulate themselves. Some commentators have suggested that this concern is partly based on a fear that conservative voices may be silenced by such measures.

Facebook Favours Free Speech Over Fake News Removal

In a recent Facebook media presentation in Manhattan, and despite the threat of social media regulation e.g. from Ofcom, Facebook said that removing fabricated posts would be “contrary to the basic principles of free speech”.

Fake News

The term ‘fake news’ has become synonymous with the 2016 US presidential election and accusations that Facebook was a platform for fake political news to be spread e.g. by Russia. Also, fake news is a term that has become synonymous with President Trump, who frequently uses it, often (some would say) as a catch-all term to discredit or counter critical stories in the media.

In essence, fake news refers to deliberate misinformation or hoaxes, manipulated to resemble credible journalism and attract maximum attention, and it is spread mainly by social media. Facebook has tried to be seen to flag up and clean up obvious fake news ever since its reputation was tarnished by the election news scandals.

What About InfoWars?

The point was made to Facebook at the media presentation by a CNN reporter that the fact that InfoWars, a site known to have published false information and conspiracy theories, has been allowed to remain on the platform may be evidence that Facebook is not tackling fake news as well as it could.

A Matter of Perspective

To counter this and other similar accusations, Facebook has stated that it sees pages on both the left and the right side of politics distributing what they consider to be opinion or analysis but what others, from a different perspective, may call fake news.

Facebook also tweeted that banning those kinds of pages e.g. InfoWars, would be contrary to the basic principles of free speech.

A Matter of Trust

Ofcom research has suggested that people have relatively little trust in what they read in social media content anyway. The research showed that only 39% consider social media to be a trustworthy news source, compared to 63% for newspapers and 70% for TV.

Age Plays A Part

Other research from Stanford’s Graduate School of Education, involving more than 7,800 responses from middle school, high school and college students in 12 US states focused on their ability to assess information sources. The results showed a shocking lack of ability to evaluate information at even as basic a level as distinguishing advertisements from articles. When you consider that many young people get their news from social media, this shows that they may be more vulnerable and receptive to fake stories, and their wide networks of friends could mean that fake stories could be quickly and widely spread among other potentially vulnerable recipients.

Although Facebook is known to have an older demographic now, many young people still use it. Facebook has tried to launch a kind of Facebook for children to attract more young users, and it owns Instagram, partly as a means to mop up young users who leave Facebook. It could be argued, therefore, that Facebook and other social media platforms have a responsibility to regulate some content in order to protect users.

What Does This Mean For Your Business?

Fake news stories are not exclusive to social media platforms, as the number of retractions and apologies in newspapers over the years is a testament. The real concern has arisen about social media, and Facebook particularly, because of what appears (allegedly) to have been the ability of actors from a foreign power to use fake news on Facebook to actually influence the election of a President. Which party and President is in power in the US can, in turn, have a dramatic effect on businesses and markets around the world, and the opportunities that other foreign powers think they have.

Facebook is also busy fighting another crisis in trust that has arisen from news of its sharing of users’ personal data with Cambridge Analytica, and the company is focusing much of its PR effort not on talking specifically about fake news, but about how Facebook has changed, why we should trust it again, and how much it cares about our privacy.

Meanwhile in the UK, Ofcom chief executive Sharon White has clearly stated that she believes that media platforms need to be “more accountable” in their policing of content. While this may be understandable, many rights and privacy campaigners would not like the idea that free speech could be influenced and curbed by governments, perhaps to suit their own agenda. The arguments continue.

12 Russian Intelligence Officers Charged With Election Hacking

Even though, in an interview this week, President Trump appeared to absolve Russia of election interference (since retracted), the US Department of Justice has now charged 12 Russian intelligence officers with hacking Democratic officials in the 2016 US elections.

The Allegations

It is alleged by the US Justice Department that, back in March 2016, in the run-up to the presidential election campaign which saw Republican Donald Trump elected as president, the Russian intelligence officers were responsible for cyber-attacks on the email accounts of staff for Hillary Clinton’s Democrat presidential campaign.

Also, the Justice Department alleges that the accused Russians corresponded with several Americans (though not conspiratorially), used fictitious online personas, released thousands of stolen emails (beginning in June 2016), and even plotted to hack into the computers of state boards of elections, secretaries of state, and suppliers of voter software.

No Evidence Says Kremlin

The Kremlin is reported to have said that it believes there is no evidence for the US allegations, describing the story as an “old duck” and a conspiracy theory.

32, So Far

The latest allegations are all part of the investigation, led by Special Counsel Robert Mueller, into US intelligence findings that the Russians allegedly conspired in Trump’s favour, and that some of his campaign aides may have colluded.
So far, 32 people (mostly Russians) have been indicted, and three companies and four former Trump advisers have also been implicated.

Trump Says…

President Trump has dismissed allegations that the Russians helped put him in the White House as a “rigged witch hunt” and “pure stupidity”.

In a press conference after his meeting with Russian President Vladimir Putin in Helsinki, however, President Trump caused shock and disbelief when, asked whether he thought Russia had been involved in US election interference, he said, “I don’t see any reason why it would be”.

He has since appeared to backtrack, saying that he meant to say “wouldn’t” rather than “would”, that he accepts his own intelligence agencies’ findings that Russia interfered in the 2016 election, and that other players may have been involved too.

What Does This Mean For Your Business?

Part of the fallout of the constant struggle between states and super-powers is the cyber attacks that end up affecting many businesses in the UK. Also, if there has been interference in an election favouring one party, this in turn affects the political and economic decisions made in that country, and its foreign policy. These have a knock-on effect on markets, businesses and trade around the world, particularly for businesses that export to, import from, or have other business interests in the US. Even though, in the US, the main results of the alleged electoral interference scandal appear to have been damaged reputations and disrupted politics, the wider effects have been felt by businesses around the world.

These matters and the links to Facebook and Cambridge Analytica have also raised awareness among the public about their data security and privacy, whether they can actually trust corporations with it, and how they could be targeted with political messages which could influence their own beliefs.

Cambridge Analytica Re-Born

A new offshoot of Cambridge Analytica, the disgraced data analysis company at the heart of the Facebook personal data sharing scandal, has been set up by former members of staff under the name ‘Auspex’.

Old Version Shut Down

After news of the scandal, which saw the details of an estimated 87 million Facebook users (mostly in the US) shared with CA and then used by CA to target people with political messages in relation to the last US presidential election, CA was shut down by its parent company, SCL Elections. CA is widely reported to have ceased operations and filed for bankruptcy in the wake of the scandal.

Ethical This Time

Auspex, which (it should be stressed) is not simply another version of CA, although it is likely to carry out the same kind of data analysis work, has been set up by Ahmed Al-Khatib, a former director of Emerdata, a company which was also set up after the Cambridge Analytica scandal. Mr Al-Khatib has been reported as saying that Auspex will use ethically based, data-driven communications with a focus on improving the lives of people in the developing world.

Middle East and Africa

The developing-world markets that Auspex will initially focus on are the Middle East and Africa, and the kinds of ethical work it will be doing, according to Auspex’s own communications, are health campaigning and tackling the spread of extremist ideology among a disenfranchised youth.

Compliant

Auspex has been quick to state that it has made changes and that it will be fully compliant from the outset, thereby hoping to further distance itself from its murky origins in CA.

Personnel

One thing that is likely to attract the attention of critics is the personnel: not only is Mark Turnbull, the former head of CA’s political division, the new Auspex Managing Director, but the listed directors of the new company include Alastair Harris, who is reported to have worked at CA, and Omar Al-Khatib, who is listed as a citizen of the Seychelles.

What Does This Mean For Your Business?

The Cambridge Analytica and Facebook scandal is relatively recent, and the ICO has only just presented its report on the incident. For many people, it may not feel right that personnel from Cambridge Analytica can simply set up under another name and start again. Critics can be forgiven for not trusting statements about a new ethical approach, especially since Mark Turnbull appeared alongside former CA chief executive Alexander Nix in an undercover film by Channel 4, in which Nix gave examples of how his company could discredit politicians, e.g. by setting up encounters with prostitutes.

The introduction of GDPR has brought the matters of data security and privacy into sharp focus for businesses in the UK, and businesses will be all too aware of the possible penalties if they get on the wrong side of the ICO.

In the case of the Facebook / Cambridge Analytica scandal, the ICO has recently announced that Facebook will be fined £500,000 for data breaches, and that it is still considering taking legal action against CA’s directors. If successful, a prosecution of this kind could result in convictions and an unlimited fine.