Archive for Social Media

Google In Talks About Paying Publishers For News Content

It has been reported that Google is in talks with publishers with a view to buying in premium news content for its own news services to improve its relationship with EU publishers, and to combat fake news.

Expanding The Google News Initiative

Reports from the U.S. Wall Street Journal indicate that Google is in preliminary talks with publishers outside the U.S. in order to expand its News Initiative (https://newsinitiative.withgoogle.com/), the program where Google works with journalists, news organisations, non-profits and entrepreneurs to ensure that fake news is effectively filtered out of current stories in the ‘digital age’.  Examples of big-name ‘partners’ that Google has worked with as part of the initiative include the New York Times, The Washington Post, The Guardian and fact-checking organisations like the International Fact-Checking Network and CrossCheck (to fact-check the French election).

As well as partnerships, the Google News Initiative provides a number of products for news publishing e.g. Subscribe With Google, News on Google, Fact Check tags and AMP stories (tap-operated, full-screen content).

This Could Please Publishers

The move by Google to pay for content should please publishers, some of whom have been critical of Google and other big tech players for hosting articles on their platforms that attract readers and advertising money, but not paying to display them. Google has faced particular criticism in France at the end of last year after the country introduced a European directive that should have made tech giants pay for news content but in practice simply led to Google removing the snippet below links to French news sites, and removing the thumbnail images that often appear next to news results.

Back in 2014 for example, Google closed its Spanish news site after it was required to pay “link tax” licensing fees to Spanish news sites and back in November 2018 Google would not rule out shutting down Google News in other EU countries if a “link tax” was adopted by them.

Competitors

Google is also in competition with other tech giants who now provide their own fact-checked and moderated news services.  For example, back in October 2019, Facebook launched its own ‘News’ tab on its mobile app which directs users to unbiased, curated articles from credible sources.

What Does This Mean For Your Business?

For European countries and European publishers, it is likely to be good news that Google is possibly coming to the table to offer some money for the news content that it displays on its platform, and that it may be looking for a way to talk about and work through some of the areas of contention.

For Google, this is an opportunity for some good PR in an area where it has faced criticism in Europe, an opportunity to improve its relationship with publishers in Europe, plus a chance to add value to its news service and to help Google to compete with other tech giants that also offer news services with the fake news weeded out.

Featured Article – Combatting Fake News

The spread of misinformation/disinformation/fake news by a variety of media, including digital and printed stories and deepfake videos, is a growing threat in what has been described as our ‘post-truth era’, and many people, organisations and governments are looking for effective ways to weed out fake news, and to help people to make informed judgements about what they hear and see.

The exposure of fake news and its part in recent election scandals, the common and frequent use of the term by prominent figures and publishers, and the need for fact-checking services have all contributed to an erosion of public trust in the news people consume. For example, YouGov research used to produce the annual Digital News Report (2019) from the Reuters Institute for the Study of Journalism at the University of Oxford showed that public concern about misinformation remains extremely high, reaching a 55 per cent average across 38 countries, with less than half (49 per cent) of people trusting the news media they use themselves.

The spread of fake news online, particularly at election times, is of real concern. With the UK election just passed, the UK Brexit referendum, the 2017 UK general election and the last U.S. presidential election all found to have suffered interference in the form of so-called ‘fake news’ (and with the 59th US presidential election scheduled for Tuesday, November 3, 2020), the subject is high on the world agenda.

Challenges

Those trying to combat the spread of fake news face a common set of challenges, such as those identified by Richard Zack, CEO of Our.News, which include:

– There are people (and state-sponsored actors) worldwide who are making it harder for people to know what to believe, e.g. by spreading fake news and misinformation and distorting stories.

– Many people don’t trust the media or don’t trust fact-checkers.

– Simply presenting facts doesn’t change people’s minds.

– People prefer/find it easier to accept stories that reinforce their existing beliefs.

Also, some research (from Stanford’s Graduate School of Education) has shown that young people may be more susceptible to believing fake news.

Combatting Fake News

So, who’s doing what online to meet these challenges and combat the fake news problem?  Here are some examples of those organisations and services leading the fightback, and what methods they are using.

Browser-Based Tools

Recent YouGov research showed that 26 per cent of people say they have started relying on more ‘reputable’ sources of news, but as well as simply choosing what they regard to be trustworthy sources, people can now choose to use services which give them shorthand information on which to make judgements about the reliability of news and its sources.

Since people consume online news via a browser, browser extensions (and app-based services) have become more popular.  These include:

– Our.News.  This service combines objective facts (about an article) with subjective views that incorporate user ratings to create labels (like nutrition labels on food) next to news articles that a reader can use to make a judgement.  Our.News labels use publisher descriptions from Freedom Forum, bias ratings from AllSides, and information about an article’s sources, author and editor.  It also uses fact-checking information from sources including PolitiFact, Snopes and FactCheck.org, labels such as “clickbait” or “satire”, and user ratings and reviews.  The Our.News browser extension is available for Firefox and Chrome, and there is an iOS app. For more information go to https://our.news/.

– NewsGuard. This service, for personal use or for NewsGuard’s library and school system partners, offers a reliability rating score of 0-100 for each site based on its performance on nine key criteria, and displays ratings icons (green-to-red) next to links on all of the top search engines, social media platforms, and news aggregation websites.  Also, NewsGuard gives summaries showing who owns each site, its political leaning (if any), as well as warnings about hoaxes, political propaganda, conspiracy theories, advertising influences and more.  For more information, go to https://www.newsguardtech.com/.

Platforms

Another approach to combatting fake news is to create a news platform that collects and publishes news that has been checked and is given a clear visual rating for users of that platform.

One such example is Credder, a news review platform which allows journalists and the public to review articles, and to create credibility ratings for every article, author, and outlet.  Credder focuses on credibility, not clicks, and uses a Gold Cheese (yellow) symbol next to articles, authors, and outlets with a rating of 60% or higher, and a Mouldy Cheese (green) symbol next to articles, authors, and outlets with a rating of 59% or less. Readers can, therefore, make a quick choice about what they choose to read based on these symbols and the trust-value that they create.

Credder also displays a ‘Leaderboard’ which is based on rankings determined by the credibility and quantity of reviewed articles. Currently, Credder ranks nationalgeographic.com, gizmodo.com and cjr.org as top sources with 100% ratings.  For more information see https://credder.com/.
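The Credder symbol scheme described above amounts to a simple threshold rule. The sketch below illustrates that rule only: the function name is hypothetical, and Credder’s actual implementation is not public; only the 60%/59% cut-off and symbol names come from the description above.

```python
def credder_symbol(rating: float) -> str:
    """Map a credibility rating (0-100) to Credder's visual symbol.

    Illustrative sketch: articles, authors and outlets rated 60% or
    higher get the Gold Cheese symbol; 59% or less get Mouldy Cheese.
    """
    if not 0 <= rating <= 100:
        raise ValueError("rating must be between 0 and 100")
    return "Gold Cheese" if rating >= 60 else "Mouldy Cheese"
```

For example, a source rated 100% (such as the top-ranked sites on the leaderboard) would carry the Gold Cheese symbol, while one rated 45% would carry Mouldy Cheese.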

Automation and AI

Many people now consider automation and AI to be ‘intelligent’, fast, and scalable enough to start to tackle the vast amount of fake news being produced and circulated.  For example, Google and Microsoft have been using AI to automatically assess the truth of articles.  Also, initiatives like the Fake News Challenge (http://www.fakenewschallenge.org/) seek to explore how AI technologies, particularly machine learning and natural language processing, can be employed to combat fake news, and support the idea that AI technologies hold promise for significantly automating parts of the procedure human fact-checkers use to determine whether a story is real or a hoax.

However, the human-written rules underpinning AI, and how AI is ‘trained’ can also lead to bias.

Government

Governments clearly have an important role to play in the combatting of fake news, especially since fake news/misinformation has been shown to have been spread via different channels e.g. social media to influence aspects of democracy and electoral decision making.

For example, in February 2019, the Digital, Culture, Media and Sport Committee published a report on disinformation and ‘fake news’ highlighting how “Democracy is at risk from the malicious and relentless targeting of citizens with disinformation and personalised ‘dark adverts’ from unidentifiable sources, delivered through the major social media platforms”.  The UK government called for a shift in the balance of power between “platforms and people” and for tech companies to adhere to a code of conduct written into law by Parliament and overseen by an independent regulator.

Also, in the US, Facebook’s Mark Zuckerberg has been made to appear before the U.S. Congress to discuss how Facebook tackles false reports.

Finland – Tackling Fake News Early

One example of a government taking a different approach to tackling fake news is that of Finland, a country recently rated Europe’s most resistant nation to fake news.  In Finland, the evaluation of news and fact-checking behaviour was introduced into the school curriculum as part of a government strategy after 2014, when Finland was targeted with fake news stories from its Russian neighbour.  The changes to the school curriculum across core areas in all subjects are, therefore, designed to make Finnish people, from a very young age, able to detect and do their part to fight false information.

Social Media

The use of Facebook to spread fake news that is likely to have influenced voters in the UK Brexit referendum, the 2017 UK general election and the last U.S. presidential election put social media and its responsibilities very much in the spotlight.  Also, the Cambridge Analytica scandal and the illegal harvesting of 50 million Facebook profiles in early 2014 for apparent electoral profiling purposes damaged trust in the social media giant.

Since then, Facebook has tried to be seen to be actively tackling the spread of fake news via its platform.  Its efforts include:

– Hiring the London-based, registered charity ‘Full Fact’, who review stories, images and videos, in an attempt to tackle misinformation that could “damage people’s health or safety or undermine democratic processes”.  Facebook is also reported to be working with fact-checkers in more than 20 countries, and to have had a working relationship with Full Fact since 2016.

– In October 2018, Facebook also announced that a new rule for the UK now means that anyone who wishes to place an advert relating to a live political issue or promoting a UK political candidate, referencing political figures, political parties, elections, legislation before Parliament and past referenda that are the subject of national debate, will need to prove their identity, and prove that they are based in the UK. The adverts they post will also have to carry a “Paid for by” disclaimer to enable Facebook users to see who they are engaging with when viewing the ad.

– In October 2019, Facebook launched its own ‘News’ tab on its mobile app which directs users to unbiased, curated articles from credible sources in a bid to publicly combat fake news and help restore trust in its own brand.

– In January this year, Monika Bickert, Vice President of Facebook’s Global Policy Management announced that Facebook is banning deepfakes and “all types of manipulated media”.

Other Platforms & Political Adverts

Political advertising has become mixed up with the spread of misinformation in the public perception in recent times.  With this in mind, some of the big tech and social media players have been very public about making new rules for political advertising.

For example, in November 2019, Twitter Inc banned political ads, including ads referencing a political candidate, party, election or legislation.  Also, at the end of 2019, Google took a stand against political advertising by saying that it would limit audience targeting for election adverts to age, gender and the general location at a postal code level.

Going Forward

With a U.S. election this year, the sheer number of sources, and the scale and resources that some (state-sponsored) actors command, the spread of fake news is likely to remain a serious problem for some time yet.  From the Finnish example of creating citizens who have a better chance than most of spotting fake news, to browser-based extensions, moderated news platforms, the use of AI, and government and other scrutiny and interventions, awareness of the problem is now widespread, the fightback is underway, and we are getting more access to ways of making our own more informed decisions about what we read and watch and how credible and genuine it is.

WhatsApp Ceases Support For More Old Phone Operating Systems

WhatsApp has announced that its messaging app will no longer work on outdated operating systems, which is a change that could affect millions of smartphone users.

Android versions 2.3.7 and Older, iOS 8 and Older

The change, which took place on February 1, means that WhatsApp has ended support for Android operating system versions 2.3.7 and older and iOS 8 and older, meaning that users of WhatsApp who have those operating systems on their smartphones will no longer be able to create new accounts or to re-verify existing accounts.  Although these users will still be able to use WhatsApp on their phones, WhatsApp has warned that because it has no plans to continue developing for the old operating systems, some features may stop functioning at any time.

Why?

The change is consistent with the Facebook-owned app’s strategy of withdrawing support for older systems and older devices, as it did back in 2016 (smartphones running older versions of Android, iOS and Windows Phone, plus devices running Android 2.2 Froyo, Windows Phone 7 and older versions, and iOS 6 and older versions), and when WhatsApp withdrew support for Windows phones on 31 December 2019.

For several years now, WhatsApp has made no secret of wanting to maintain the integrity of its end-to-end encrypted messaging service, making changes that will ensure that new features can be added that will keep the service competitive, maintain feature parity across different systems and devices, and focus on the operating systems that it believes that the majority of its customers in its main markets now use.

Security & Privacy?

This also means that, since there will no longer be updates for older operating systems, this could lead to privacy and security risks for those who continue using older operating systems.

What Now?

Users who have a smartphone with an older operating system can update the operating system, or upgrade to a newer smartphone model, in order to ensure that they can continue using WhatsApp.

The WhatsApp messaging service can also now be accessed through the desktop by syncing with a user’s phone.

What Does This Mean For Your Business?

WhatsApp is used by many businesses for general communication and chat, groups and sending pictures, and for those business users who still have an older smartphone operating system, this change may be another reminder that the perhaps overdue time to upgrade is at hand.  Some critics, however, have pointed to the fact that the move may have more of a negative effect on those WhatsApp users in growth markets e.g. Asia and Africa where many older devices and operating systems are still in use.

For WhatsApp, this move is a way to stay current and competitive in its core markets and to ensure that it can give itself the scope to offer new features that will keep users loyal and engaged with and committed to the app.

Tech Tip – Using WhatsApp On Your PC

If you’re working at your PC and you need to access WhatsApp without having to keep looking at your phone, there’s an easy way to use WhatsApp on your PC – here’s how:

– Open web.whatsapp.com in a browser.

– Open WhatsApp on your phone.

– Open the Chats screen, select ‘Menu’, and select ‘WhatsApp Web’.

– Scan the QR code displayed on the PC screen with your phone.

– You will now be able to see your WhatsApp chats on your PC every time you open web.whatsapp.com in a browser.

Facebook Bans Deepfake Videos

In a recent blog post, ahead of the forthcoming US election, Monika Bickert, Vice President of Facebook’s Global Policy Management, announced that the social media giant is banning deepfakes and “all types of manipulated media”.

Not Like Last Time

With the 59th US presidential election scheduled for Tuesday, November 3, 2020, Facebook appears to be taking no chances after the trust-damaging revelations around unauthorised data sharing with Cambridge Analytica, and the use of the platform by foreign powers such as Russia in an attempt to influence the outcome of the 2016 election of Donald Trump.

The fallout of the news that 50 million Facebook profiles were harvested as early as 2014 in order to build a software program that could predict and use personalised political adverts to influence choices at the ballot box in the last U.S. election includes damaged trust in Facebook, a substantial fine, plus a fall in the number of daily users in the United States and Canada for the first time in its history.

Deepfakes

One of the key concerns for Facebook this time around appears to be so-called ‘deepfake’ videos.  These use deep learning technology and manipulated images of target individuals (found online), often celebrities, politicians, and other well-known people, to create very convincing videos of the subjects saying and doing whatever the video-maker wants. These videos could obviously be used to influence public thinking about political candidates, and as well as influencing election results, it would be very damaging for Facebook, which has been very public about trying to rid itself of ‘fake news’, to be seen as a platform for the easy distribution of deepfake videos.  No doubt Facebook’s CEO Mark Zuckerberg would like to avoid having to appear before Congress again to answer questions about his company’s handling of personal data, as he had to back in April 2018.

The New Statement From Facebook

This latest blog post statement from Facebook says that as a matter of policy, it will now remove any misleading media from its platform if the media meets two criteria, which are:

  • If it has been synthesised, i.e. edited beyond adjustments for clarity or quality, to the point where the ‘average person’ could be misled into thinking the subject of the media/video is saying words that they did not actually say, and…
  • If the media is the product of artificial intelligence or machine learning that has merged, replaced or superimposed content onto a video, in order to make it appear to be authentic.

Not Satire

Facebook has been careful to point out that this policy change will not affect content that is clearly intended to be parody or satire, or videos that have been edited just to omit or change the order of the words featured in them.

Existing Policies

Any media posted to Facebook is subject to the social media giant’s existing comply-or-be-removed ‘Community Standards’ policies which cover, among other things, voter suppression and hate speech.

What Will Happen?

Facebook says that any videos that don’t meet its standards for removal are still eligible for review by one of its independent third-party fact-checkers (which include 50+ partners worldwide), and that any photos or videos rated as false or partly false (by a fact-checker) will have their distribution “significantly” reduced in News Feed and will be rejected if being run as an ad. Also, those who see such content and try to share it, or have already shared it, will be shown warnings alerting them that it’s false.

Measures

Facebook has taken many measures to ensure that it is not seen as a platform that can’t be trusted with user data or as a distributor of fake news.  For example:

– In January 2019 Facebook announced (in the UK) that it was working with London-based, registered charity ‘Full Fact’ to review stories, images and videos, in an attempt to tackle misinformation that could “damage people’s health or safety or undermine democratic processes”.

– In September 2019, Facebook launched its Deep Fake Detection Challenge, with $10 million in grants and with a cross-sector coalition of organisations in order to encourage the production of tools to detect deepfakes.

– In October 2019, Facebook launched the ‘News’ tab on its mobile app to direct users to unbiased, curated articles from credible sources in a bid to publicly combat fake news and help restore trust in its own brand.

– Facebook has partnered with Reuters to produce a free online training course to help newsrooms worldwide to identify deepfakes and manipulated media.

Criticism

Despite this recent announcement of a policy change to help eradicate deepfakes from its platform, Facebook has been criticised by some commentators for appearing to allow some videos that could be described as misinformation in certain situations (apparently of its choosing).  For example, Facebook has said that content that violates its policies could be allowed if it is deemed newsworthy, e.g., presumably, the obviously doctored videos of Labour’s Keir Starmer and US House Speaker Nancy Pelosi.

What Does This Mean For Your Business?

Clearly, any country would like to guard against outside influence in its democratic processes and the deliberate spread of misinformation, and bearing in mind the position of influence that Facebook has, it is good for everyone that it is taking responsibility and trying to block obvious attempts to spread misinformation by altering its policies and working with other organisations. Businesses that use Facebook as an advertising platform also need to know that Facebook users have trust in (and will continue to use) that platform (and see their adverts), so it’s important to businesses that Facebook is vigilant and takes action where it can.  Also, by helping to protect the democratic processes of the countries it operates in, particularly in the US at the time of an election (and bearing in mind what happened last time), it is in Facebook’s own interest to protect its brand against any accusations of allowing political influence through a variety of media on its platform, and against any further loss of public trust. This change of policy also shows that Facebook is trying to demonstrate readiness to deal with the latest threat of deepfakes (even though they are relatively rare).

That said, Google and Twitter (the latter with its new restrictions on micro-targeting, for example) have both been very public about trying to stop lies in political advertising on their platforms, while Facebook has just been criticised by the IPA over its decision not to ban political ads that use micro-targeting and spurious claims to sway the opinions of voters.

Facebook’s New Tool Allows You To Port Your Photos & Videos To Google

Facebook has announced that it is releasing a data portability tool that will enable Facebook users to transfer their Facebook photos and videos directly to other services, starting with Google Photos.

Why?

Facebook acknowledged in its white paper (published back in September) that data portability is currently a legal requirement under GDPR, and will be under the California Consumer Privacy Act rules next year. Facebook also said that it had been considering ways to improve people’s ability to transfer their Facebook data to other platforms and services for some time; since 2010, for example, Facebook has offered Download Your Information (“DYI”) to customers so they can share their information with other online services.

In addition to the legal requirements and its existing DYI service, Facebook cites its own belief in the principle of data portability, and how this could give people control and choice while encouraging innovation, as the reason for introducing its new data portability tool.

What Is It?

Facebook says that its new photo transfer tool (the roll-out has just started) is a tool based on code that has been developed through participation in the open-source Data Transfer Project and can be accessed via Facebook settings within Your Facebook Information.

The tool will enable Facebook users to transfer their Facebook photos and videos directly to other services (Google Photos first).

The first part of the roll-out is in Ireland, with worldwide availability planned for the first half of 2020.  Facebook says that the tool is still essentially in testing and will be refined based on feedback from users and conversations with stakeholders.

Help From The Data Transfer Project

One of the key factors in the development of the portability tool was Facebook joining the Data Transfer Project (along with Google, Microsoft, Twitter, Apple, and others) which is an open-source software project that’s designed to help participants develop interoperable systems that will enable users to transfer their data seamlessly between online service providers.
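The adapter idea behind the Data Transfer Project can be illustrated with a minimal, hypothetical sketch: each service implements an exporter and an importer that speak a shared intermediate format, so any exporter can be paired with any importer. The class and field names below are invented for illustration and are not the project’s actual API.

```python
# Minimal sketch of the Data Transfer Project's adapter model:
# each service converts to/from a shared intermediate format, so any
# exporter can feed any importer. All names here are hypothetical.

from dataclasses import dataclass

@dataclass
class PhotoRecord:
    """Shared intermediate representation of a photo."""
    title: str
    url: str
    album: str

class FacebookExporter:
    """Converts service-specific data into the shared format."""
    def export(self, raw_items):
        return [PhotoRecord(title=i["name"], url=i["src"], album=i["album"])
                for i in raw_items]

class GooglePhotosImporter:
    """Consumes the shared format; knows nothing about the source service."""
    def __init__(self):
        self.library = []

    def import_photos(self, records):
        for r in records:
            self.library.append({"title": r.title, "source_url": r.url,
                                 "album": r.album})
        return len(records)

# Any exporter can feed any importer via the common PhotoRecord format.
raw = [{"name": "Holiday", "src": "https://example.com/1.jpg", "album": "2019"}]
records = FacebookExporter().export(raw)
importer = GooglePhotosImporter()
count = importer.import_photos(records)
print(count)  # 1
```

The key design point is that adding a new service only requires writing one exporter and one importer against the shared format, rather than a pairwise converter for every other service.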

What Does This Mean For Your Business?

Facebook has been offering its DYI service for nearly 10 years, but the new portability tool is something which will enable Facebook to meet its legal requirements under GDPR and the CCPA while helping Facebook to stay competitive with other online services.

Facebook is also acutely aware of the damage done to user trust over the data sharing with Cambridge Analytica, which is why the recent white paper that Facebook published about its portability ideas clearly acknowledged that portability products need to be built in a privacy-protective way.

For Facebook users, this new tool may be one of the many new services that help them to be more trusting of Facebook again by making them feel that they have real options and choices about what they do with their files from Facebook (even though it’s a legal requirement to give people the portability option).

$20 Million Fight Highlights Value of Social Media and PR

The popularity and influence of two YouTube celebrities, whose boxing event became an all-time global Top 20 pay-per-view phenomenon and earned them a $20 million split, is a reminder of the magnifying value of online PR.

What Happened?

Two of the world’s leading YouTube celebrities and ‘Generation Z’ heroes, Logan Paul and Olajide “KSI” Olatunji, followed up on last year’s 800,000+ pay-per-view, £2.7 million-earning six-round boxing match at Manchester Arena with a repeat bout at a Los Angeles basketball arena.  This time, after their fight in the early hours of Sunday morning, they were able to split $20 million made from 2 million+ pay-per-view purchases generated from their combined 40 million subscriber fan-base.  Neither of these YouTube celebrities is a boxing professional, and their earnings were in stark contrast to those of two world champions, fighting on the same bill, who were paid “only” less than $1 million.

Social Media Power & PR

The world’s biggest YouTube celebrities and social influencers, such as PewDiePie (102 million subscribers), Dude Perfect (47.1 million subscribers) and Badabun (43 million subscribers), are mainly young people who have managed to build a relationship with an audience of their own generation by posting YouTube videos.  Generation Z subscribers (born between 1996 and 2010), who have grown up with the Internet and social media, and Millennials (born between 1981 and 1996) make up large parts of these subscriber audiences. Interestingly, in the case of boxing, this represents an opportunity for promoters to tap into a massive new audience who may not be familiar with the sport.

Even though these influencers may appear to be strongly linked to a generation that they have an innate understanding of (by being part of it), what they are essentially doing is leveraging public relations – building relationships with different publics, building their own credibility and raising their own visibility – on a grand scale. YouTube is simply the medium, and part of the message, that allows them to achieve their PR aims.

PR Often Overlooked By Businesses

The power of PR is often overlooked by businesses in favour of advertising, which is apparently easier to understand and to measure responses to. Rather than dismissing the kind of influence that some young people have via social media as a generational mystery that doesn’t apply to you, it is important to recognise that the value-adding use of PR is within the reach of all businesses.  So, what can PR do for your business/campaign/cause/event?

  • As YouTube celebrities show, influence is something that PR can achieve. Your own expertise and inside knowledge of your business and industry can be a valuable and persuasive asset in your messages that can make you appear to be a trusted and objective source.
  • Finding or creating an interesting and compelling story with a link to your products, services and brand can mean that the ‘reach’ of your message is increased as different outlets and channels pick up on it and share it.
  • The cost-effectiveness of your advertising can be dramatically increased when combined with PR.
  • The search engine optimisation (SEO) of your website can get a real boost from PR as you receive more visitors to your website and more shares of your story on social media and on other websites, and more links to your website thereby giving your rankings a boost for important key phrases.
  • Getting your own feature in an important publication can be a great way to attract investors and new customers as it strengthens your credibility.
  • Talented people such as potential employees and businesses as potential strategic alliances can also be attracted by good PR about your organisation.

What Does This Mean For Your Business?

The boxing event was not a demonstration of sporting expertise and prowess, but of the power of influence gained through social media and PR.  This event showed that business (and something that’s arguably greater than the sum of its parts) can be generated by paying attention to building personal brands and online relationships with specific audiences which, over time, can generate their own momentum. One of the key messages for businesses to take away from this is that PR opportunities already exist all around, and tapping into them could be a cost-effective way of boosting the power and reach of your messages.  This may be something that has been overlooked in your promotional mix but could make all the difference.

Tough Questions About Libra Cryptocurrency

Facebook’s CEO, Mark Zuckerberg faced a grilling from the US Congress last week over his company’s ‘Libra’ cryptocurrency plans.

Libra

‘Libra’ is Facebook’s new cryptocurrency and global payment system that’s due to be launched in 2020.  Unlike other cryptocurrencies, Libra is backed by a reserve of cash and other liquid assets.  The idea of Libra is that spending the new currency could be as easy and fast as texting as payments can be made by a special phone app and by messaging services such as WhatsApp.  Also, Libra is intended to be of particular value to the one billion+ people around the world (including 14 million in the US) with no access to a bank account, but who could use a mobile phone-based payment system.
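The reserve-backed design described above can be sketched with a simplified, hypothetical model: units are minted only when matching assets enter the reserve and burned when assets are redeemed, so the supply stays fully collateralised. This illustrates the general “asset-backed currency” mechanism, not Libra’s actual implementation.

```python
# Simplified, hypothetical model of a fully reserve-backed currency:
# units are minted only when matching assets enter the reserve and
# burned when assets are redeemed, so supply always equals backing.
# This sketches the general mechanism, not Libra's actual design.

class ReserveBackedCurrency:
    def __init__(self):
        self.reserve_assets = 0.0   # cash / liquid assets held in reserve
        self.supply = 0.0           # currency units in circulation

    def mint(self, asset_amount):
        """Deposit assets into the reserve; issue matching units."""
        if asset_amount <= 0:
            raise ValueError("deposit must be positive")
        self.reserve_assets += asset_amount
        self.supply += asset_amount
        return asset_amount

    def redeem(self, units):
        """Burn units and release the matching assets from the reserve."""
        if units > self.supply:
            raise ValueError("cannot redeem more than circulating supply")
        self.supply -= units
        self.reserve_assets -= units
        return units

    def fully_backed(self):
        return self.reserve_assets >= self.supply

libra_like = ReserveBackedCurrency()
libra_like.mint(1000.0)
libra_like.redeem(250.0)
print(libra_like.supply, libra_like.fully_backed())  # 750.0 True
```

Because every unit in circulation corresponds to assets in the reserve, a currency of this kind avoids the supply-driven price swings seen in unbacked cryptocurrencies such as Bitcoin.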

Management of the currency, units of which can be purchased via Libra’s platforms and stored in a digital wallet called “Calibra”, will be the responsibility of the Libra Association, an independent group of 21 companies and non-profit organisations of which Facebook’s subsidiary ‘Calibra’ is a member.

Problems and Criticism

Facebook has, however, found itself coming in for some tough criticism over its involvement with Libra. This includes:

  • Worries about whether Facebook can be trusted with peoples’ financial details in the light of its part in the personal data-sharing scandal with Cambridge Analytica.
  • Concerns from ‘Group of Seven’ democracies finance chiefs about whether Libra could address “serious regulatory and systemic concerns”.
  • President Trump Tweeting that he’s not a fan of Libra, and bank chiefs like Mark Carney also expressing concerns about Libra.
  • Worries that Libra could be used as a means to bypass rules relating to money laundering and tax evasion (which is believed to have led to PayPal leaving the Libra Association recently).
  • Warnings that Libra could be blocked in Europe (especially in France) unless concerns over risks to consumers and to the monetary systems of countries can be addressed.

Congress Grilling

The grilling of Mark Zuckerberg at the US Congress last week, at the House Financial Services Committee’s hearing, focused on many of the key concerns.  For example:

  • Democrat Nydia Velázquez asked Mark Zuckerberg why Facebook should be trusted after the recent privacy scandals and data breaches/data sharing relating to the Cambridge Analytica affair.
  • Democrat Joyce Beatty criticised Mark Zuckerberg over an apparent lack of knowledge of diversity and housing advertisement issues and alleged that Zuckerberg hadn’t read her reports.
  • Republican Patrick McHenry criticised the technology industry and highlighted the current anger towards it.

Prepared Statement Covered Many Concerns

Mark Zuckerberg’s prepared statement for the hearing appears to have anticipated and answered the main concerns.  For example, as well as stressing Facebook’s commitment to strong consumer protections for the financial information it receives, Mark Zuckerberg addressed three main concerns, saying that:

  1. Where people are concerned that Facebook is moving too fast on the Libra project, Facebook is committed to taking the time to get this right.
  2. Where it has been suggested that Facebook could circumvent regulators and regulations with Libra, Facebook won’t actually be a part of launching the Libra payments system anywhere in the world unless all US regulators approve it.
  3. Libra is not an attempt to create a sovereign currency but, like existing online payment systems, it’s simply intended to be a way for people to transfer money.

So What?

Despite the grilling, many commentators have pointed out that the House Financial Service Committee and Congress don’t actually have the power to do much about the introduction of Libra.  Some commentators have also suggested that the hearing was as much about political grandstanding as it was about Libra and that politicians are finding it hard to stay up to speed with information about cryptocurrencies.

No Regulatory Approval = Facebook Leaves the Association

Mr Zuckerberg stressed just how much he intends to play by the rules with Libra by saying that if the Libra Association moved forward without regulatory approval, Facebook “would be forced to leave the Association.”

What Does This Mean For Your Business?

Banks and governments are unlikely to adopt a favourable attitude to a new type of currency that could potentially unbalance monetary systems, circumvent regulations, scrutiny and control, and even be used for money laundering and tax evasion. That said, the blockchain-anchored Libra is unlikely to suffer many of the huge fluctuations and problems of other cryptocurrencies like Bitcoin because Libra is backed by real assets.  Also, many of the big financial players are part of the Libra Association e.g. Mastercard and Visa, although it’s clear that Facebook needs to make sure that Libra can meet all regulatory requirements and is squeaky clean if the Association wants to keep these important members.

If, as Mr Zuckerberg says, Libra is simply and innocently another way of paying for things that could lead to a more inclusive society e.g. by helping those without bank accounts, this could benefit not just society but whole economies too.  It looks as though Facebook still has some way to go, however, to convince governments, finance chiefs and other critics that it is the right company to be trusted with a new currency and the financial data of those who use it.

Facebook ‘News’ Tab on Mobile App

Facebook has launched the ‘News’ tab on its mobile app which directs users to unbiased, curated articles from credible sources in a bid to publicly combat fake news and help restore trust in its own brand.

Large US Cities For Now

The ‘News’ tab on the Facebook mobile app, which will initially only be available to an estimated 200,000 people in select, large US cities, is expected by Facebook to become so popular that it could attract millions of users.

What?

The News tab will attempt to show users stories from local publishers as well as the big national news sources.  The full list of publishers who will contribute to the News tab stories has not yet been confirmed, although online speculation points to the likes of (U.S. publishers initially) Time, The Washington Post, CBS News, Bloomberg, Fox News and Politico.  It has not yet been announced when the service will be available to UK Facebook users. It has been reported that Facebook is also prepared to pay many millions for some of the content included in the tab.

Why?

Facebook has been working hard to restore some of the trust lost in the company when it was found to be the medium by which influential fake news stories were distributed during the UK Brexit referendum, the 2017 UK general election, and the U.S. presidential election.  There is also the not-so-small matter of 50 million Facebook profiles being shared/harvested (in conjunction with Cambridge Analytica) back in 2014 in order to build a software program that was used to predict and generate personalised political adverts to influence choices at the ballot box in the last U.S. election.

Facebook CEO, Mark Zuckerberg, was made to appear before the U.S. Congress in April to talk about how Facebook is tackling false reports, and even recently a video that was shared via Facebook (which had 4 million views before being taken down) falsely suggested that smart meters emit radiation levels that are harmful to health. The information in the video was believed by many even though it was false.

Helping Smaller Publishers Too

Also, Facebook acknowledges that smaller news outlets have struggled to gain exposure with its algorithms, and that there is an opportunity to deliver more local news, personalised news experiences, and more modern digital-age, independent news.  It is also likely that, knowing that young people get most of their news from online sources but have been moving away to other platforms, this could be a good way for Facebook to retain younger users.

Working With Fact-Checkers

Back in January, for example, Facebook tried to help restore trust in its brand and publicly show that it was trying to combat fake news by announcing that it was working with London-based, registered charity ‘Full Fact’ who would be reviewing stories, images and videos, in an attempt to tackle misinformation that could “damage people’s health or safety or undermine democratic processes”.

Personalisation

The News tab will also allow users to see a personalised selection of articles, the choice of which is based upon the news they read. This personalisation will also include the ability to hide articles, topics and publishers that users choose not to see.
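A personalisation filter of this kind can be sketched in a few lines: rank candidate articles by overlap with the topics a user reads, and drop anything from a hidden publisher or topic. The data shapes, names and scoring rule below are invented for illustration and do not describe Facebook’s actual system.

```python
# Hypothetical sketch of a personalised news filter: rank articles by
# overlap with topics the user has read, and drop hidden publishers or
# topics. Names and the scoring rule are invented for illustration.

def personalise(articles, read_topics, hidden_publishers, hidden_topics):
    """Return visible articles, most relevant first."""
    visible = [
        a for a in articles
        if a["publisher"] not in hidden_publishers
        and not (set(a["topics"]) & hidden_topics)
    ]
    # Rank by how many of the article's topics the user already reads.
    return sorted(visible,
                  key=lambda a: len(set(a["topics"]) & read_topics),
                  reverse=True)

articles = [
    {"title": "Local election results", "publisher": "CityPaper",
     "topics": ["politics", "local"]},
    {"title": "Gadget rumours", "publisher": "RumourMill",
     "topics": ["tech"]},
    {"title": "New stadium opens", "publisher": "CityPaper",
     "topics": ["sport", "local"]},
]
feed = personalise(articles,
                   read_topics={"politics", "local"},
                   hidden_publishers={"RumourMill"},
                   hidden_topics=set())
print([a["title"] for a in feed])
```

Here the hidden publisher is removed entirely, and the remaining stories are ordered by topical overlap with what the user already reads.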

The Human Element

One of the key aspects of the News tab service that Facebook sees as adding value, keeping quality standards high, and providing a further safeguard against fake news is that many stories will be reviewed and chosen by experienced journalists acting as impartial and independent curators.  For example, Facebook says that “Unlike Google News, which is controlled by algorithms, Facebook News works more like Apple News, with human editors making decisions.”

Not The First Time

This is not the first time that Facebook has tried offering a news section, and it will hopefully be more successful and well-received than the ‘Trending News’ section that was criticised for bias in the 2016 presidential election and has since been phased out.

What Does This Mean For Your Business?

Only last week, Mark Zuckerberg found himself in front of the U.S. Congress answering questions about whether Facebook can be trusted to run a new cryptocurrency, and it is clear that the erosion of trust caused by how Facebook shared user data with Cambridge Analytica and how the platform was used to spread fake news in the U.S. election have cast a long shadow over the company.  Facebook has since tried many ways to regain trust e.g. working with fact-checkers, adding the ‘Why am I seeing this post?’ tool, and launching new rules for political ad transparency.

Users of social networks clearly don’t want to see fake news, the influences of which can have a damaging knock-on effect on the economic and trade environment which, in turn, affects businesses.

The launch of this News service with its human curation and fact-checking could, therefore, help Facebook kill several birds with one stone. For example, as well as going some way to helping to restore trust, it could increase the credibility of Facebook as a go-to trusted source of quality content, enable Facebook to compete with its rivals e.g. Google News, show Facebook to be a company that also cares about smaller news publishers, and act as a means to help retain younger users on its platform.

AI and the Fake News War

In a “post-truth” era, AI is one of the many protective tools and weapons involved in the battles that make up the current, ongoing “fake news” war.

Fake News

Fake news has become widespread in recent years, most prominently around the UK Brexit referendum, the 2017 UK general election, and the U.S. presidential election, all of which suffered interference in the form of so-called ‘fake news’/misinformation spread via Facebook, which appears to have affected the outcomes by influencing voters. The Cambridge Analytica scandal, in which over 50 million Facebook profiles were illegally shared and harvested to build a software program that generated personalised political adverts, led to Facebook’s Mark Zuckerberg appearing before the U.S. Congress to discuss how Facebook is tackling false reports. A video shared via Facebook, for example (which had 4 million views before being taken down), falsely suggested that smart meters emit radiation levels that are harmful to health. The information in the video was widely believed even though it was false.

Government Efforts

The Digital, Culture, Media and Sport Committee has published a report (in February) on Disinformation and ‘fake news’ highlighting how “Democracy is at risk from the malicious and relentless targeting of citizens with disinformation and personalised ‘dark adverts’ from unidentifiable sources, delivered through the major social media platforms”.  The UK government has, therefore, been calling for a shift in the balance of power between “platforms and people” and for tech companies to adhere to a code of conduct written into law by Parliament and overseen by an independent regulator.

Fact-Checking

One way that social media companies have sought to tackle the concerns of governments and users is to buy-in fact-checking services to weed out fake news from their platforms.  For example, back in January London-based, registered charity ‘Full Fact’ announced that it would be working for Facebook, reviewing stories, images and videos to tackle misinformation that could “damage people’s health or safety or undermine democratic processes”.

Moderation

A moderator-led response to fake news is one option, but its reliance upon humans means that this approach has faced criticism over its vulnerability to personal biases and perspectives.

Automation and AI

Many now consider automation and AI to be an approach and a technology that are ‘intelligent’, fast, and scalable enough to start to tackle the vast amount of fake news that is being produced and circulated.  For example, Google and Microsoft have been using AI to automatically assess the truth of articles.  Also, initiatives like the Fake News Challenge (http://www.fakenewschallenge.org/) seek to explore how AI technologies, particularly machine learning and natural language processing, can be leveraged to combat fake news, and support the idea that AI technologies hold promise for significantly automating parts of the procedure human fact-checkers use to determine whether a story is real or a hoax.
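A first building block in this kind of automated approach is simple text classification. The toy example below trains a tiny bag-of-words Naive Bayes classifier to separate “reliable” from “suspect” headlines; the training data is made up purely for illustration, and real fake-news systems use far larger models and datasets.

```python
# Toy bag-of-words Naive Bayes classifier, illustrating the basic
# text-classification step that automated fact-checking pipelines
# often start from. The tiny training set is invented for illustration.

import math
from collections import Counter, defaultdict

def tokenize(text):
    return text.lower().split()

class NaiveBayes:
    def __init__(self):
        self.word_counts = defaultdict(Counter)  # label -> word counts
        self.label_counts = Counter()
        self.vocab = set()

    def train(self, examples):
        for text, label in examples:
            self.label_counts[label] += 1
            for w in tokenize(text):
                self.word_counts[label][w] += 1
                self.vocab.add(w)

    def predict(self, text):
        best_label, best_score = None, -math.inf
        total_docs = sum(self.label_counts.values())
        for label in self.label_counts:
            # log prior + log likelihood with add-one smoothing
            score = math.log(self.label_counts[label] / total_docs)
            denom = sum(self.word_counts[label].values()) + len(self.vocab)
            for w in tokenize(text):
                score += math.log((self.word_counts[label][w] + 1) / denom)
            if score > best_score:
                best_label, best_score = label, score
        return best_label

clf = NaiveBayes()
clf.train([
    ("government publishes official budget report", "reliable"),
    ("scientists confirm study results in journal", "reliable"),
    ("shocking secret they don't want you to know", "suspect"),
    ("miracle cure doctors hate this one trick", "suspect"),
])
label = clf.predict("shocking miracle trick they hate")
print(label)  # suspect
```

A classifier like this only scores surface wording, which is exactly why production systems pair such models with stance detection, source reputation and human fact-checkers.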

However, the human-written rules underpinning AI, and how AI is ‘trained’ can also lead to bias.

Deepfake Videos

Deepfake videos are an example of how AI can be used to create fake news in the first place.  Deepfake videos use deep learning technology and manipulated images of target individuals (found online), often celebrities, politicians, and other well-known people to create an embarrassing or scandalous video. Deepfake audio can also be manipulated in a similar way.  Deepfake videos aren’t just used to create fake news sources, but they can also be used by cyber-criminals for extortion.

AI Voice

There was also a case in March this year in which a group of hackers used AI software to mimic an energy company CEO’s voice in order to steal £201,000.

What Does This Mean For Your Business?

Fake news is a real and growing threat, as has been demonstrated in the use of Facebook to disseminate fake news during the UK referendum, the 2017 UK general election, and the U.S. presidential election. State-sponsored politically targeted campaigns can have a massive influence on an entire economy, whereas other fake news campaigns can affect public attitudes to ideas and people and can lead to many other complex problems.

Moderation and automated AI may both suffer from bias, but both are ways in which fake news can be tackled, at least to an extent.  Through fact-checking services, other monitoring, and software-based approaches (e.g. in browsers), social media and other tech companies can take responsibility for weeding out and guarding against fake news.

Governments can also help in the fight by putting pressure on social media companies and by collaborating with them to keep the momentum going and to help develop and monitor ways to keep tackling fake news.

That said, it’s still a big problem and no solution is infallible. All of us as individuals would do well to remember that, especially today, you really can’t believe everything you read; an eye to the source and bias of news, coupled with a degree of scepticism, is often healthy.