Archive for Social Media

AI and the Fake News War

In a “post-truth” era, AI is one of the many protective tools and weapons involved in the battles that make up the current, ongoing “fake news” war.

Fake News

Fake news has become widespread in recent years, most prominently around the UK Brexit referendum, the 2017 UK general election, and the U.S. presidential election, all of which suffered interference in the form of so-called ‘fake news’ / misinformation spread via Facebook, which appears to have affected the outcomes by influencing voters. The Cambridge Analytica scandal, in which over 50 million Facebook profiles were illegally harvested and shared to build a software program that generated personalised political adverts, led to Facebook’s Mark Zuckerberg appearing before the U.S. Congress to discuss how Facebook is tackling false reports. One video shared via Facebook, for example (which had 4 million views before being taken down), falsely suggested that smart meters emit radiation levels that are harmful to health. Many people believed the information in the video even though it was false.

Government Efforts

The Digital, Culture, Media and Sport Committee published a report in February on disinformation and ‘fake news’, highlighting how “Democracy is at risk from the malicious and relentless targeting of citizens with disinformation and personalised ‘dark adverts’ from unidentifiable sources, delivered through the major social media platforms”. The UK government has, therefore, been calling for a shift in the balance of power between “platforms and people” and for tech companies to adhere to a code of conduct written into law by Parliament and overseen by an independent regulator.


One way that social media companies have sought to tackle the concerns of governments and users is to buy in fact-checking services to weed out fake news from their platforms. For example, back in January, the London-based registered charity ‘Full Fact’ announced that it would be working for Facebook, reviewing stories, images and videos to tackle misinformation that could “damage people’s health or safety or undermine democratic processes”.


A moderator-led response to fake news is one option, but its reliance upon humans means that this approach has faced criticism over its vulnerability to personal biases and perspectives.

Automation and AI

Many now consider automation and AI to be ‘intelligent’, fast, and scalable enough to start to tackle the vast amount of fake news being produced and circulated. For example, Google and Microsoft have been using AI to automatically assess the truth of articles. Also, initiatives like the Fake News Challenge seek to explore how AI technologies, particularly machine learning and natural language processing, can be leveraged to combat fake news, and support the idea that AI holds promise for significantly automating parts of the procedure human fact-checkers use to determine whether a story is real or a hoax.
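To make the machine learning angle concrete, here is a minimal, illustrative sketch of the kind of text classification that underpins automated fake-news detection: a tiny Naive Bayes classifier over bag-of-words features. The training headlines and labels below are invented for illustration; real systems train on large labelled corpora and use far richer features.

```python
import math
from collections import Counter, defaultdict

def tokenize(text):
    return text.lower().split()

def train(examples):
    """examples: list of (text, label). Returns per-label word counts and label counts."""
    word_counts = defaultdict(Counter)
    label_counts = Counter()
    for text, label in examples:
        label_counts[label] += 1
        word_counts[label].update(tokenize(text))
    return word_counts, label_counts

def classify(text, word_counts, label_counts):
    """Pick the label maximising log prior + log likelihood (add-one smoothing)."""
    vocab = {w for counts in word_counts.values() for w in counts}
    total_docs = sum(label_counts.values())
    best_label, best_score = None, float("-inf")
    for label, n_docs in label_counts.items():
        score = math.log(n_docs / total_docs)
        total_words = sum(word_counts[label].values())
        for w in tokenize(text):
            score += math.log((word_counts[label][w] + 1) / (total_words + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Invented toy training data -- purely for illustration.
examples = [
    ("miracle cure doctors hate revealed", "fake"),
    ("shocking secret they don't want you to know", "fake"),
    ("committee publishes report on disinformation", "real"),
    ("government announces consultation on regulation", "real"),
]
wc, lc = train(examples)
print(classify("shocking miracle secret revealed", wc, lc))
```

Real fact-checking pipelines add stance detection, source reputation signals and human review on top, but the core idea of scoring text against learned patterns is the same.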

However, the human-written rules underpinning AI, and how AI is ‘trained’ can also lead to bias.

Deepfake Videos

Deepfake videos are an example of how AI can be used to create fake news in the first place. They use deep learning technology and manipulated images of target individuals (found online), often celebrities, politicians, and other well-known people, to create an embarrassing or scandalous video. Deepfake audio can be manipulated in a similar way. Deepfake videos aren’t just used to create fake news; they can also be used by cyber-criminals for extortion.

AI Voice

There was also a case in March this year where a group of hackers used AI software to mimic an energy company CEO’s voice in order to steal £201,000.

What Does This Mean For Your Business?

Fake news is a real and growing threat, as has been demonstrated in the use of Facebook to disseminate fake news during the UK referendum, the 2017 UK general election, and the U.S. presidential election. State-sponsored politically targeted campaigns can have a massive influence on an entire economy, whereas other fake news campaigns can affect public attitudes to ideas and people and can lead to many other complex problems.

Moderation and automated AI may both suffer from bias, but both are ways in which fake news can be tackled, at least to an extent. By adding fact-checking services, other monitoring, and software-based approaches (e.g. through browsers), social media and other tech companies can take responsibility for weeding out and guarding against fake news.

Governments can also help in the fight by putting pressure on social media companies and by collaborating with them to keep the momentum going and to help develop and monitor ways to keep tackling fake news.

That said, it is still a big problem and no solution is infallible. All of us as individuals would do well to remember that, especially today, you really can’t believe everything you read: an eye to the source and bias of news, coupled with a degree of scepticism, can often be healthy.

Video Labelling Causes Problems

Google has already been criticised by some for not calling out China over disinformation about Hong Kong, but despite disabling 210 YouTube channels with suspected Chinese state links, Google’s new move to label Hong Kong YouTube videos hasn’t gone down well.

Big Social Media Platforms Act

Facebook and Twitter recently announced that they have banned a number of accounts on their platforms due to what the popular social media platforms are calling “coordinated influence operations”: in other words, Chinese state-sponsored communications designed to influence opinion (pro-Beijing viewpoints) and to spread disinformation. Twitter and Facebook are both blocked in mainland China by the country’s notorious firewall, but both platforms can be accessed in Hong Kong, and Twitter recently suspended over 900 accounts believed to originate in China. The reasons for the suspensions included spam, fake accounts and ban evasion.

Google Labels Videos

Google’s response, which some critics have seen as late anyway, has been to add information panels to videos on its Hong Kong-facing site saying whether the video has been uploaded by media organisations that receive government or public funding. The panels, which are live in 10 regions, were intended to give viewers an insight into whether the videos are state-funded or not.


Unfortunately, Google did not consider the fact that some media outlets receive government funding but are editorially independent, and the labelling has effectively put them in the same category as media that purely spread government information.

Google and China

Many commentators have noted an apparent reluctance by Google to distance itself from the more repressive side of the Chinese state. For example, Google has been criticised for not publicly criticising China over the state’s disinformation campaign about the Hong Kong protests. Also, Google was recently reported to have a secret plan (Project Dragonfly) to develop a censored search engine for the Chinese market, and it has been reported that Google has an AI research division in China.

Disinformation By Bot? Not

There have been fears that, just as bots can be a time- and cost-saving way of writing and distributing information, they could also be used to write disinformation and could even soon reach the point where they are equal in ability to human writers. For example, a text generator built by the research firm OpenAI has (until recently) been considered too dangerous to make public in its ‘trained’ version because of the potential for abuse in writing disinformation. However, in tests by the BBC with AI experts, including a Sheffield University professor, it proved relatively ineffective at generating meaningful text from input headlines, although it did appear able to reflect news bias in its writing.

What Does This Mean For Your Business?

The influence exerted via social media in the last US presidential election campaign and the UK referendum (with the help of Cambridge Analytica) brought the whole subject of disinformation into sharp focus, and the Chinese state media’s response to the Hong Kong demonstrations has given more fuel to the narrative coming from the current US administration (Huawei accusations and the trade war) that China should be considered a threat. Google’s apparent lack of public criticism of Chinese state media disinformation efforts is in contrast to the response of social media giants Facebook and Twitter, and this, coupled with reports of the company trying to develop a censored search engine for China to allow it to get back into the market there, means that Google is likely to be scrutinised and criticised by US state voices.

It is difficult for many users of social media channels to spot bias and disinformation, and although Google may have tried to do the right thing by labelling videos, its failure to take account of the media structure in China has meant more criticism for Google.  As an advertising platform for businesses, Google needs to take care of its public image, and this kind of bad publicity is unlikely to help.

Facebook Launches Martin Lewis Anti-Scam Service

Facebook has launched a new anti-scam service using the £3m that it agreed to donate to the development of the programme in return for TV consumer money champion Martin Lewis dropping his legal action over scam ads.

What Legal Action?

Back in September 2018, MoneySavingExpert’s (MSE) founder Martin Lewis (OBE) took Facebook to the UK High Court to sue the tech giant for defamation over a series of fake adverts bearing his name. Many of the approximately 1,000 fake ads bearing Mr Lewis’ name that appeared on the Facebook platform over the space of a year could, and in some cases did, direct consumers to scammer sites containing false information, which Mr Lewis argued may have caused serious damage to his reputation and caused some people to lose money.

In January 2019, Mr Lewis came to an agreement with Facebook whereby he would drop his lawsuit if Facebook donated £3 million to Citizens Advice to create a new UK Scams Action project (launched in May 2019) and agreed to launch a UK-focused scam ad reporting tool supported by a dedicated complaints-handling team.

How The New Anti-Scam Service Works

Facebook users in the UK will be able to access the service by clicking on the three dots (top right) of any advert to see ‘more options’ and “report ad”.  The list of reasons for reporting the ad now includes a “misleading or scam ad” option.

Also, the Citizens Advice charity has set up a phone line to give advice to victims of online and offline scams. The “Scams Action Service” advisers can be called on 0300 330 3003, Monday to Friday, and also offer help via live online chat. In serious cases, face-to-face consultations can also be offered.

What To Do

If you’ve been scammed, the Citizens Advice charity recommends that you tell your bank immediately, reset your passwords, make sure that your anti-virus software has been updated, report the incident to Action Fraud, and contact the new Citizens Advice Scams Action service.

What Does This Mean For Your Business?

It is a shame that it has taken the threat of a lawsuit over damaging scam ads spread through its own platform to galvanise Facebook into putting some of its profits into a service that can tackle the huge and growing problem of online fraud. Facebook and other ad platforms may also need to take more proactive steps with their advertising systems to make it more difficult for scammers to set up adverts in the first place.

Having a Scams Action service now in place using a trusted UK charity will also mean that awareness can be raised, and information given about known scams, and victims will have a place to go where they get clear advice and help.

US Visa Applicants Now Asked For Social Media Details and More

New rules from the US State Department will mean that US visa applicants will have to submit social media names and five years’ worth of email addresses and phone numbers.

Extended To All

Under the new rules, first proposed by the Trump administration back in February 2017, all applicants travelling to the US to work or to study will now be required to give those details to the immigration authorities. Previously, the only visa applicants who needed such vetting were those from parts of the world known to be controlled by terrorist groups. The only exemptions will be for some diplomatic and official visa applicants.

Delivering on Election Immigration Message

The new stringent rules follow on from the proposed crackdown on immigration that was an important part of now US President Donald Trump’s message during the 2016 election campaign.

Back in July 2016, the Federal Register of the U.S. government published a proposed change to travel and entry forms which indicated that the studying of social media accounts of those travelling to the U.S. would be added to the vetting process for entry to the country. It was suggested that the proposed change would apply to the I-94 travel form, and to the Electronic System for Travel Authorisation (ESTA) visa. The reason given at the time was that the “social identifiers” would be: “used for vetting purposes, as well as applicant contact information. Collecting social media data will enhance the existing investigative process and provide DHS greater clarity and visibility to possible nefarious activity and connections by providing an additional toolset which analysts and investigators may use to better analyse and investigate the case.”

There had already been reports that some U.S. border officials had actually been asking travellers to voluntarily surrender social media information since December 2016.


In February 2017, the Trump administration indicated that it was about to introduce an immigration policy that would require foreign travellers to the U.S. to divulge their social media profiles, contacts and browsing history and that visitors could be denied entry if they refused to comply. At that time, the administration had already barred citizens of seven Muslim-majority countries from entering the US.


Critics of the idea that social media details should be obtained from entrants to the US include the civil rights group the American Civil Liberties Union, which pointed out that there is no evidence it would be effective and that it could lead to self-censorship online. Also, back in 2017, Jim Killock, executive director of the Open Rights Group, was quoted in online media as describing the proposal as “excessive and insulting”.

What Does This Mean For Your Business?

Although they may sound a little extreme, these rules have now become a reality and need to be considered by those needing a US visa. Given the opposition to President Trump and some of his thoughts and policies, and the resulting large volume of Trump-related content that is shared and reacted to by many people, these new rules could be a real source of concern for those needing to work or study in the US. It is really unknown what content and social media activity could cause problems at immigration for travellers, and what the full consequences could be.

People may also be very uncomfortable being asked to give such personal and private details as social media names and a massive five years’ worth of email addresses and phone numbers, and about how (and for how long) those personal details will be stored and safeguarded, and by whom they will be scrutinised and even shared. The measure may, along with other reported policies and announcements from the Trump administration, even discourage some people from travelling to, let alone working or studying in, the US at this time. This could have a knock-on negative effect on the US economy, and for those companies wanting to get into the US marketplace with products or services.

Surveillance Attack on WhatsApp

It has been reported that it was a surveillance attack on Facebook’s WhatsApp messaging app that caused the company to urge all of its 1.5bn users to update their apps as an extra precaution recently.

What Kind of Attack?

Technical commentators have identified the attack on WhatsApp as a ‘zero-day’ exploit that is used to load spyware onto the victim’s phone. Once the victim’s WhatsApp has been hijacked and the spyware loaded, it can, for example, access encrypted chats, photos, contacts and other information, as well as eavesdrop on calls and even turn on the microphone and camera. It has been reported that the exploit can also alter call logs and hide the method of infection.


The attack is reported to be able to use WhatsApp’s voice calling function to ring a target’s device. Even if the target doesn’t pick up the call, the surveillance software can be installed, and the call can be wiped from the device’s call log. The exploit works via a buffer overflow weakness in the WhatsApp VOIP stack, which enables an overwriting of other parts of the app’s memory.
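WhatsApp’s real VOIP stack is closed-source native code, so the toy Python model below only illustrates the general principle of the buffer-overflow class of bug: a fixed-size buffer written without a length check lets attacker-controlled data spill into adjacent memory. The layout, function names and “return address” stand-in are all invented for illustration.

```python
def unsafe_copy(memory, offset, payload):
    """Copies payload into memory at offset with NO bounds check --
    the kind of flaw a malformed packet can exploit."""
    for i, byte in enumerate(payload):
        memory[offset + i] = byte  # may run past the intended buffer

def safe_copy(memory, offset, payload, limit):
    """The fix: refuse to write past the buffer's declared size."""
    if len(payload) > limit:
        raise ValueError("packet larger than buffer; dropping")
    for i, byte in enumerate(payload):
        memory[offset + i] = byte

# Toy layout: bytes 0-7 are an 8-byte packet buffer; bytes 8-11 hold
# adjacent program state (standing in for a saved return address).
memory = bytearray(12)
memory[8:12] = b"RETN"

oversized_packet = b"A" * 10  # 10 bytes aimed at an 8-byte buffer
unsafe_copy(memory, 0, oversized_packet)
print(memory[8:12])  # adjacent state has been partially overwritten
```

In a real exploit, the overwritten bytes are chosen so that the corrupted state redirects execution to attacker code, which is why a single unchecked copy can lead to full spyware installation.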

It has been reported that the vulnerability is present in the Google Android, Apple iOS, and Microsoft Windows Phone builds of WhatsApp.


According to reports in the Financial Times which broke the story of the WhatsApp attack (which was first discovered earlier this month), Facebook had identified the likely attackers as a private Israeli company, The NSO Group, that is part-owned by the London-based private equity firm Novalpina Capital.  According to reports, The NSO Group are known to work with governments to deliver spyware, and one of their main products called Pegasus can collect intimate data from a targeted device.  This can include capturing data through the microphone and camera and also gathering location data.


The NSO Group have denied responsibility.  NSO has said that their technology is only licensed to authorised government intelligence and law enforcement agencies for the sole purpose of fighting crime and terror, and that NSO wouldn’t or couldn’t use the technology in its own right to target any person or organisation.

Past Problems

WhatsApp has been in the news before for less than positive reasons.  For example, back in November 2017, WhatsApp was used by ‘phishing’ fraudsters to circulate convincing links for supermarket vouchers in order to obtain bank details.


As a result of the attack, as well as urging all of its 1.5bn users to update their apps, engineers at Facebook have created a patch for the vulnerability (CVE-2019-3568).

What Does This Mean For Your Business?

Many of us think of WhatsApp as an encrypted messaging app, and therefore somehow more secure. This story shows that WhatsApp vulnerabilities are likely to have existed for some time. Although it is not clear how many users have been affected by this attack, many tech and security commentators think that it may have been a focused attack, perhaps on a select group of people.

It is interesting that we are now hearing about the dangers of many attacks being perhaps linked in some way to states and state-sponsored groups rather than individual actors, and the pressure is now on big tech companies to be able to find ways to guard against these more sophisticated and evolving kinds of attacks and threats that are potentially on a large scale.  It is also interesting how individuals could be targeted by malware loaded in a call that the recipient doesn’t even pick up, and it perhaps opens up the potential for new kinds of industrial espionage and surveillance.

Slack Builds Email Bridge

Chat app and collaborative working tool Slack appears to have given up the fight to eliminate email, introducing new tools that enable Slack collaboration features inside Gmail and Outlook and thereby building a more inclusive ‘email bridge’.

What Is Slack?

Slack, launched ‘way back’ in 2013, is a cloud-based set of proprietary team collaboration tools and services. It provides mobile apps for iOS, Android, Windows Phone, and is available for the Apple Watch, enabling users to send direct messages, see mentions, and send replies.

Slack teams enable users (communities, groups, or teams) to join through a URL or an invitation sent by a team admin or owner. It was intended as an organisational communication tool, but it has gradually been morphing into a community platform, i.e. a business technology that has crossed over into personal use.

Email Bridge

After a five-year battle against email, Slack is building an “email bridge” into its platform that will allow those who only have email to communicate with Slack users.


The change is aimed at getting on board those members of an organisation who have signed up to the Slack app but are not willing to switch entirely from email to Slack. Accepting that not everyone wants to give up email altogether, Slack has concluded that something needs to be built into the app to allow companies and organisations to leverage the strengths of all their workers, and to connect those organisation and team members who are separated by their Slack vs email situation to the important conversations within Slack. It will also mean that companies and organisations can make the transition in working practices at their own pace (or not), i.e. migrate (or not migrate) entirely to Slack.


The change supports Slack’s current Outlook and Gmail functionality, which enables users to forward emails into a channel where members can view and discuss the content and plan responses from inside Slack. It also allows anything set within the Outlook or Gmail Calendar to be automatically synced to Slack.

The new changes will allow team members who have email but have not committed to Slack to receive an email notification when they’re mentioned by their username in channels or are sent a direct message.

What Does This Mean For Your Business?

Slack appears to have listened to Slack users who would like a way to stay connected with email-only colleagues (or those still waiting to receive Slack credentials), and the email bridge is likely to meet with their approval in this respect. For Slack, it also presents the opportunity to gently move those who are more resistant to change towards eventually making the move to Slack.

This change is one of several announced by Slack, such as the ‘Actions’ feature last year, and the two new toolkits (announced in February this year) that will allow non-coders to build apps within Slack.

Slack knows that there are open source and other alternatives in the market, and the addition of more features and more alliances will help Slack to provide more valuable tools to users, thereby helping it to gain and retain loyalty and compete in a rapidly evolving market.

‘ManyChat’ Raises $18 million Funding For Facebook Messenger Bot

California-based startup ‘ManyChat’ has raised $18 million Series A funding for its Facebook Messenger marketing bot.


ManyChat Inc. is now the leading messenger marketing product, reportedly powering over 100,000 bots on Facebook Messenger.

ManyChat lets you use a visual drag-and-drop interface to create a free Facebook Messenger bot for marketing, sales and support. The bot is essentially a Facebook Page that sends out messages and responds to users automatically.

The ManyChat bot allows you to welcome new users, send them content, schedule posts, set up keyword auto-responses (text, pictures, menus), automatically broadcast your RSS feed and more.

The bot, which is a blend of automation and personal outreach, also incorporates Live Chat, which notifies you when a conversation with a subscriber is needed.
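ManyChat itself is configured through its visual interface rather than code, but the keyword-auto-response pattern described above is easy to sketch. This hypothetical example (the keywords and replies are invented) matches an incoming message against configured keywords, returns a canned reply, and falls back to flagging the conversation for a human, mirroring the Live Chat handover.

```python
# Hypothetical keyword rules -- in ManyChat these would be configured
# visually, not in code.
KEYWORD_RULES = {
    "price": "Our plans start at £10/month. Reply PLANS for details.",
    "hours": "We're open Mon-Fri, 9am-5pm.",
    "refund": "Sorry to hear that! Reply REFUND and we'll start the process.",
}

FALLBACK = "Thanks for your message -- a team member will reply shortly."

def auto_respond(message):
    """Return (reply, handled). handled=False signals that a human
    should step in, i.e. the Live Chat handover."""
    text = message.lower()
    for keyword, reply in KEYWORD_RULES.items():
        if keyword in text:
            return reply, True
    return FALLBACK, False

reply, handled = auto_respond("What are your opening hours?")
print(handled, reply)
```

The same match-or-escalate structure scales up to menus, scheduled broadcasts and RSS-driven messages, which is essentially what the platform automates.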

Facebook Messenger

ManyChat says it has focused on Facebook Messenger because it is the #1 app in the US and Canada, with over 1 billion active users, and it is the most engaging channel, with average open rates of 80% and 4 to 10 times higher click-through rates (CTRs) compared to email.

The Funding

The $18 million funding for ManyChat was led by Bessemer Venture Partners, with participation from Flint Capital, and means that Bessemer’s Ethan Kurzweil will be joining the board of directors, and Bessemer’s Alex Ferrara becomes a board observer.

1+ Million Accounts Created

ManyChat reports that more than 1 million accounts have been created on the platform already by customers in many different industry sectors.  The platform has also reported that these 1+ million customers have managed to enlist 350 million Messenger subscribers and that there are now a staggering 7 billion messages sent on the platform each month.

What Does This Mean For Your Business?

Bots provide a way for businesses to reduce costs, make better use of resources and communicate with customers and enquirers 24/7.

As ManyChat points out, it is becoming increasingly difficult for businesses to effectively reach their audience because people open fewer emails and social media is ‘noisy’ to the point where messages become lost in the crowd. A key advantage of ManyChat, therefore, is that it uses Facebook Messenger as a private channel of communication with each user: it is instant and interactive, no message is ever lost, and Messenger has huge user numbers. Other advantages that businesses will appreciate are that the bot is free and easy to set up (no coding skills are required), and that it offers the best of both worlds: automated communications, plus the option to jump in with Live Chat when needed.

This kind of bot could enable businesses and organisations to make their marketing more effective while maximising efficiency.

ManyChat is also good news for Facebook which owns Messenger as it appears to be boosting user numbers by finding an improved, business-focused use for the app.

For ManyChat, its Facebook Messenger bot appears to be only the beginning (hence the funding), with investors looking at platforms like Instagram, WhatsApp, RCS, and more to further expand bot marketing services in the future.

New UK ‘Duty of Care’ Rules To Apply To Social Media Companies

The new ‘Online Harms’ whitepaper marks a world first as the UK government plans to introduce regulation to hold social media and other tech companies to account for the nature of the content they display, backed by the policing power of an independent regulator and the threat of fines or a ban.

Duty of Care

The proposed new legal framework from the Department for Digital, Culture, Media and Sport (DCMS) and the Home Office aims to give social media and tech companies a duty of care to protect users from threats, harm, and other damaging content relating to cyberbullying, terrorism, disinformation, child sexual exploitation and encouragement of behaviours that could be damaging.

The need for such regulation has been recognised for some time. It was brought into sharper focus recently by the death in the UK of 14-year-old Molly Russell, who was reported to have viewed online material on depression and suicide, and by the live streaming in March this year, on one of Facebook’s platforms, of the mass shooting at a mosque in New Zealand, which led Australia to propose fines for social media and web-hosting companies and imprisonment of executives if violent content is not removed.

The Proposed Measures

The proposed measures by the UK government in its white paper include:

  • Imposing a new statutory “duty of care” that will hold companies accountable for the safety of their users, as well as a commitment to tackle the harm caused by their services.
  • Tougher requirements on tech companies to stop the dissemination of child abuse and terrorist content online.
  • The appointment of an independent regulator with the power to force social media platforms and tech companies to publish transparency reports on the amount of harmful content on their platforms and what they are doing to address the issue.
  • Forcing companies to respond to users’ complaints, and act quickly to address them.
  • The introduction of codes of practice by the regulator which will include requirements to minimise the spread of misleading and harmful disinformation using dedicated fact checkers (at election time).
  • The introduction of a “safety by design” framework that could help companies to incorporate the necessary online safety features in their new apps and platforms at the development stage.

GDPR-Style Fines (Or A Ban)

Culture, Media and Sport Secretary Jeremy Wright has said that tech companies that don’t do everything reasonably practicable to stop harmful content on their platforms could face fines comparable with those imposed for serious GDPR breaches e.g. 4% of a company’s turnover.

It has also been suggested that under the new rules to be policed by an independent regulator, bosses could be held personally accountable for not stopping harmful content on their platforms. It has also been suggested that in the most serious cases, companies could be banned from operating in Britain if they do not do everything reasonably practical to stop harmful content being spread via their platforms.


Although there is a general recognition that regulation to protect people, particularly young people, from harmful/damaging content is a good thing, a proportionate and predictable balance needs to be struck between protecting society and supporting innovation and free speech.

Facebook is reported to have said that it looks forward to working with the government to ensure new regulations are effective and apply a standard approach across platforms.


The government’s proposals will now have a 12-week consultation, but the main criticism to date has been that parts of the government’s approach in the proposals are too vague and that regulations alone can’t solve all the problems.

What Does This Mean For Your Business?

Clearly, the UK government believes that self-regulation among social media and tech companies does not work.  The tech industry has generally given a positive response to the government’s proposals and to an approach that is risk-based and proportionate rather than one size fits all.  The hope is that the vaguer elements of the proposals can be clarified and improved over the next 3 months of consultation. 

To ensure the maximum protection for UK citizens, any regulations should be complemented by ongoing education for children, young people and adults to make sure that they have the skills and awareness to navigate the digital world safely and securely.

Facebook Rolls Out ‘Why Am I Seeing This Post?’ Tool

In an attempt to be more transparent and give more control to its users, Facebook is about to roll out a new “Why am I seeing this post?” tool, which will give users insights into its newsfeed algorithm.

Algorithm Explained

The new tool essentially goes some way to explaining how the algorithm that decides what appears where in a user’s Facebook newsfeed works.  The tool will give a view of the inputs used by the social network to rank stories, photos and video, and in doing so will enable users to access the actions that they may want to take if they want to change what they see in their newsfeed.


The new tool, which was developed using research groups in New York, Denver, Paris and Berlin, will show users the data that connects them to a certain type of post, e.g. they may be friends with the poster, they may have liked that person’s posts more than others, they may have frequently commented on that type of post before, or the post may have proved popular with users who share their interests.

Although the tool will enable users to see how the key aspects of the algorithm work, in the interests of convenience, simplicity, speed and security, users will not be shown all the many thousands of inputs that influence the decision.
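A greatly simplified sketch can make the idea of signal-weighted ranking concrete. Facebook’s real ranker uses thousands of inputs; the feature names and weights below are invented purely for illustration of how a handful of signals, like those the tool surfaces, could combine into a score that orders a newsfeed.

```python
# Invented signal weights -- illustrative only, not Facebook's.
WEIGHTS = {
    "is_friend": 3.0,         # poster is a friend (prioritised since 2018)
    "likes_given": 0.5,       # how often the viewer has liked this poster
    "comments_on_type": 1.0,  # viewer's past comments on this type of post
    "global_popularity": 0.2, # engagement from users with similar interests
}

def score_post(features):
    """Weighted sum of ranking signals; higher scores rank earlier."""
    return sum(WEIGHTS[name] * value for name, value in features.items())

def rank_feed(posts):
    """posts: list of (post_id, features). Returns post ids, best first."""
    return [pid for pid, f in sorted(posts, key=lambda p: score_post(p[1]), reverse=True)]

feed = [
    ("news_article", {"is_friend": 0, "likes_given": 0,
                      "comments_on_type": 1, "global_popularity": 8}),
    ("friend_photo", {"is_friend": 1, "likes_given": 6,
                      "comments_on_type": 2, "global_popularity": 1}),
]
print(rank_feed(feed))
```

Note how the friend-related weights dominate, echoing Facebook’s 2018 shift towards prioritising posts from family and friends over publisher content.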

Additional Details

Facebook is also updating its existing “Why Am I Seeing this Ad?” feature with additional details, such as an explanation of how ads that target customers using email lists work.

Newsfeed Strategy Shift

Early last year, Facebook changed its newsfeed strategy so that posts from family and friends were given greater priority, and non-advertising content from publishers and brands was downgraded.

Bad Times

Facebook’s reputation has reached several low points in recent times over matters relating to the data security and privacy of its users, and over how the company has responded to calls for it to clean up content such as hate speech, certain types of video, and political messages from foreign states.

Most famously, Facebook was fined £500,000 for data breaches relating to the harvesting of the personal details of 87 million Facebook users without their explicit consent, and the sharing of that personal data with London-based political consulting firm Cambridge Analytica, which is alleged to have used that data to target political messages and advertising in the last US presidential election campaign. Also, harvested Facebook user data was shared with Aggregate IQ, a data company which worked with the ‘Vote Leave’ campaign in the run-up to the Brexit referendum.

In September last year, Facebook engineers discovered that hackers had used a vulnerability in Facebook’s “View As” feature to compromise an estimated 50 million user accounts.

Additionally, last February the governor of New York, Andrew Cuomo, ordered an investigation into reports that Facebook Inc may have been using apps on users’ smartphones to collect personal information about them.

What Does This Mean For Your Business?

After a series of high-profile privacy scandals, Facebook has been making efforts to regain the trust of its users, not just out of a sense of responsibility, but to protect its brand and pave the way for the roll-out of a single messaging service combining Facebook Messenger, WhatsApp and Instagram, which could make Facebook even more central to users’ communications. Facebook bought Instagram as a way to retain users who were moving away from Facebook, but these users jumped straight onto WhatsApp.  This new service will be a way for Facebook to join all these pieces together, make the best use of what it has, and maximise the value and appeal to users.

The new “Why am I seeing this post?” tool does sound as though it will cover both bases: giving users more control and improving transparency.  It is one of many things that Facebook has been trying to do (and to be seen to do) in order to make the headlines for the right reasons.  Other measures have included announcing new rules for political ad transparency in the UK, working with London-based fact-checking charity ‘Full Fact’ to review stories, images and videos in an attempt to tackle misinformation, and even developing its own secure blockchain-based cryptocurrency that could give its users a PayPal-like experience when purchasing advertised products, as well as providing authentication and an audit trail.

Facebook boss Mark Zuckerberg has also recently written an opinion piece in the Washington Post offering proposals to address the issues of harmful content, election protection, privacy and data protection, and data portability in his own platform and the wider social media and Internet environment.

New York’s Governor Orders Investigation Into Facebook Over App Concerns

The Governor of New York, Andrew Cuomo, has ordered an investigation into reports that Facebook Inc may be using apps on users’ smartphones to collect personal information about them.

Alerted By Wall Street Journal

The Wall Street Journal prompted the Governor to order New York’s Department of State and Department of Financial Services (DFS) to investigate Facebook when the paper reported that Facebook may have more access than it should to data from certain apps, sometimes even when a person isn’t signed in to Facebook.

Health Data

It has been reported that the kind of data that some apps allegedly share with Facebook includes health-related information such as weight, blood pressure and ovulation status.

The alleged sharing of this kind of sensitive and personal data, whether or not a person is logged in to Facebook, prompted Governor Cuomo to call the practice an “outrageous abuse of privacy.”


Facebook’s defence against these allegations, which appear to have prompted a short-lived but noticeable fall in Facebook’s share value, was to point out that the WSJ’s report focused on how other apps use people’s data to create ads.

Facebook added that it requires other app developers to be clear with their users about the information they are sharing with Facebook and that it prohibits app developers from sending sensitive data to Facebook.

The social media giant also stressed that it tries to detect and remove any data that should not be shared with it.

Lawsuits Pending

This appears to be just one of several legal fronts where Facebook will need to defend itself.  For example, Facebook is still facing a U.S. Federal Trade Commission investigation into the alleged inappropriate sharing of information belonging to 87 million Facebook users with now-defunct political consulting firm Cambridge Analytica.

Apple Also Accused By Governor Over FaceTime Bug

New York’s Governor Cuomo and New York Attorney General Letitia James have also announced an investigation into Apple Inc’s alleged failure to warn customers about a bug in its FaceTime app that could inadvertently allow eavesdropping, as iPhone users were able to listen to the conversations of others who had not yet accepted a video call.

DFS Involvement

The Department of Financial Services (DFS), one of the two agencies ordered to investigate this latest Facebook app-sharing matter, has only recently begun to get more involved in digital matters, most notably by producing the country’s first cybersecurity rules governing state-regulated financial institutions such as banks, insurers and credit monitors.

Some commentators have expressed concern, however, about the DFS saying last month that life insurers could use social media posts in underwriting their policies, on the condition that they did not discriminate based on race, colour, national origin, sexual orientation or other protected classes.

What Does This Mean For Your Business?

You could be forgiven for thinking that, after the scandal over Facebook’s unauthorised sharing of the personal details of 87 million users with Cambridge Analytica, Facebook might have learned its lesson about the sharing of personal data and tried harder to uncover and plug any loopholes that could allow this to happen. The tech giant still has several lawsuits and regulatory inquiries over privacy issues pending, and this latest revelation about the sharing of very personal health information certainly won’t help its cause. Clearly, as the involvement of the DFS shows, there needs to be more oversight of (and investigation into) apps that share their data with Facebook, and possibly more legislation and regulation of the smart app / smart tech ecosystem.

There are ways to stop Facebook from sharing your data with other apps via your phone settings and by disabling Facebook’s data sharing platform.  You can find instructions here: