Archive for Social Media

Research Indicates Zoom Is Being Targeted By Cybercriminals

With many people working from home due to the coronavirus outbreak, research by Check Point indicates that cybercriminals may be targeting the video conferencing app ‘Zoom’.

Domains

Cybersecurity company ‘Check Point’ reports witnessing a major increase over the last few weeks in new domain registrations where the domain name includes the word ‘Zoom’. According to a recent report on Check Point’s blog, more than 1,700 such domains have been registered since the beginning of the year, 25 per cent of them over the past week. Check Point’s research indicates that 4 per cent of these recently registered domains have “suspicious characteristics” suggesting they may be used maliciously.

Concern In The U.S.

The huge rise in Zoom’s user numbers, particularly in the U.S., has also led New York’s Attorney General, Letitia James, to ask Zoom whether it has recently reviewed its security measures, and to suggest that it may have been relatively slow to address issues in the past.

Not Just Zoom

Check Point has warned that Zoom is not the only app being targeted at the moment, as new phishing websites have been launched to impersonate virtually every leading communications application. For example, the official classroom.google.com website has been impersonated by googloclassroom.com and googieclassroom.com.
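
For illustration only (this is not Check Point’s method), a simple similarity check can flag lookalike domains of this kind. The sketch below uses Python’s standard difflib module; the list of trusted domains is purely an assumption for the example:

    from difflib import SequenceMatcher

    # Hypothetical list of trusted domains for this example
    KNOWN_GOOD = ["classroom.google.com", "zoom.us", "teams.microsoft.com"]

    def lookalike_score(domain: str) -> float:
        # Highest similarity ratio against any trusted domain (1.0 = identical)
        return max(SequenceMatcher(None, domain, good).ratio() for good in KNOWN_GOOD)

    for suspect in ["googloclassroom.com", "googieclassroom.com"]:
        # Similar-but-not-identical names are the classic typosquatting signature
        print(f"{suspect}: similarity {lookalike_score(suspect):.2f}")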

Malicious Files Too

Check Point also reports detecting malicious files with names related to the popular apps and platforms being used by remote workers during the coronavirus lockdown. For example, malicious file names observed include “zoom-us-zoom_##########.exe” and “microsoft-teams_V#mu#D_##########.exe” (# is used here to represent a digit). Once these files are run, the InstallCore PUA (potentially unwanted application) is loaded onto the victim’s computer. InstallCore is a program that cyber-criminals can use to install other malicious programs on a victim’s computer.
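
As a rough illustration (not Check Point’s actual detection rule), a simple regular expression can match file names following the reported patterns, treating each ‘#’ as a digit:

    import re

    # Digits stand in for the '#' placeholders in the reported file names
    MALICIOUS_NAME = re.compile(
        r"^(zoom-us-zoom_\d{10}|microsoft-teams_V\dmu\dD_\d{10})\.exe$",
        re.IGNORECASE,
    )

    for name in ["zoom-us-zoom_1234567890.exe", "zoom_setup.exe"]:
        print(name, "->", "suspicious" if MALICIOUS_NAME.match(name) else "ok")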

Suggestions

Check Point suggests several ways that users can protect their computers/devices, networks and businesses from these types of threat: be extra cautious with emails and files from unfamiliar senders, avoid opening attachments or clicking links in suspicious emails (a common phishing technique), and pay close attention to the spelling of domains and email addresses and to spelling errors in emails and on websites. Check Point also suggests Googling the company you’re looking for and using its official website, rather than simply clicking a link in an email, which could redirect to a fake (phishing) site.

What Does This Mean For Your Business?

This research highlights how quick cyber-criminals are to capitalise on situations where people have been adversely affected by unusual events and are in unfamiliar territory. In this case, people are also separated geographically, are coping with many pressures at once, may be a little distracted, and may be less vigilant than normal.

The message to businesses, based on the evidence from security companies tracking cyber-criminal behaviour, is that extra vigilance is now needed: all employees should be very careful in how they deal with emails from unknown sources, or from apparently known sources offering convincing reasons and incentives to click on links or download files.

Facebook Video Quality Reduced To Cope With Demand

Facebook and Instagram have reduced the quality of videos shared on their platforms in Europe as demand for streaming has increased due to self-isolation.

Lower Bitrate, Looks Similar

Facebook’s announcement that it is lowering the bit-rates for videos on Facebook and Instagram in Europe highlights the need to reduce network congestion, free up bandwidth, and make sure that users stay connected at a time when demand is reaching very high levels because of the COVID-19 pandemic. The move could have a significant positive impact when you consider that Facebook has around 300 million daily users in Europe alone, and that streaming video can account for as much as 60% of traffic on fixed and mobile networks.

Although a reduction in bit-rates for videos will, technically, reduce the quality, the likelihood is that the change will be virtually imperceptible to most users.

Many Other Platforms

Facebook is certainly not the only platform taking this step, as Amazon, Apple TV+, Disney+ and Netflix have made similar announcements. For example, Netflix is reducing its video bit-rates while still claiming to allow customers to get HD and Ultra HD content (albeit with lower image quality), and Amazon Prime Video has started to reduce its streaming bit-rates, as has Apple’s streaming service.

Google’s YouTube is also switching all traffic in the EU to standard definition by default.

BT Says UK Networks Have The Capacity

BT’s Chief Technology and Information Officer, Howard Watson, has said that the UK’s advanced digital economy means its networks have been overbuilt to cope with HD streaming content, and that the UK’s fixed broadband network core has been built with the extra ‘headroom’ to support the evening peaks of network traffic that high-bandwidth applications create. Mr Watson has also pointed out that since people started working from home more this month, weekday daytime traffic on the fixed network has increased by 35-60 per cent compared with similar days, peaking at 7.5 Tb/s, which is still only half the average evening peak and far short of the 17.5 Tb/s that the network is known to be able to handle.
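
As a quick sanity check of those figures (simple arithmetic only), the quoted peaks can be compared against the stated capacity:

    # Figures quoted by BT above; the evening peak is inferred from
    # '7.5 Tb/s ... still only half the average evening peak'
    daytime_peak_tbps = 7.5
    evening_peak_tbps = 2 * daytime_peak_tbps   # ~15 Tb/s
    capacity_tbps = 17.5

    print(f"Daytime peak: {daytime_peak_tbps / capacity_tbps:.0%} of capacity")  # ~43%
    print(f"Evening peak: {evening_peak_tbps / capacity_tbps:.0%} of capacity")  # ~86%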

What Does This Mean For Your Business?

Amazon, Apple TV+, Netflix, Facebook and the other platforms clearly face a challenge to their service delivery in Europe, but have been quick to take a step that will at least mean there is enough bandwidth for their services to be delivered, with the trade-off being a fall in viewing quality for customers. Many customers, however, are unlikely to be too critical of the move, given the many other big changes made to their lives as a result of the COVID-19 outbreak and the attempts to reduce its impact. Netflix has even pointed out an extra benefit: its European viewers are likely to use 25 per cent less data when watching films as a result of the bit-rate changes. However, with online streaming being one of the main pleasures many people feel they have left to enjoy safely, the change in bit-rate should be acceptable as long as picture quality isn’t reduced to the point of annoyance and distraction.

Surge In Demand For Teleconference Apps and Platforms That Enable Home Working

The need for people to work from home during the Covid-19 outbreak is reported to have led to a huge increase in the downloads of business teleconferencing apps and in the use of popular cloud-based services like G Suite.

Surge In Downloads

Downloads of remote-working, collaboration and communication apps such as Tencent Conference (https://intl.cloud.tencent.com/), WeChat Work (from China), Zoom, Microsoft Teams and Slack are reported to have risen fivefold since the beginning of the year, driven by the effects of the Covid-19 outbreak.

For example, services such as Rumii (a VR platform, normally $14.99 per month) and Spatial, which enable users to hold digital meetings in virtual rooms with 3D versions of their co-workers, have seen a boost in user numbers, as has the video communications app Zoom.

Freemium Versions

Even though many of these apps have seen a surge in user numbers, which could see users continuing to use and recommend them in future if their experiences are good, the ‘freemium’ versions (where the basic program is free and advanced features must be paid for) appear to account for most downloads.

Some companies, such as Rumii, have now started to offer services for free after noticing a rise in the number of downloads as Covid-19 spread in the United States.

G Suite

Google’s cloud-based G Suite service (Gmail, Docs, Drive, Hangouts, Sheets, Slides, Keep, Forms, Sites) is reported to have passed the two billion monthly active users mark at the end of last year, and appears to have gained many active users as people prepared to work from home following the Covid-19 outbreak.

Google has also offered parts of its enterprise service, e.g. Hangouts Meet (video conferencing), for free to help businesses during the period when many employees need to work from home.

Microsoft

Microsoft is also reported to be offering a free six-month trial for its collaborative working platform ‘Teams’, which surpassed the 20 million active user mark back in November.

Unfortunately, Microsoft Teams suffered a reported two-hour outage across Europe on Monday, just as many employees tried to log in for their first experience of working at home in what some commentators are now calling the new “post-office” era.

What Does This Mean For Your Business?

Cloud-based, collaborative and remote-working and communications platforms are now providing a vital lifeline to many businesses and workers at the start of what is likely to be a difficult, disruptive, dangerous and stressful time. Companies that can get the best out of these cloud-based tools, especially if they can be used effectively on a smartphone, may have a better chance of surviving a global threat. Also, the fact that many companies and employees have been forced to seek out and use cloud-based apps and platforms like these could see them continuing to make good use of them once the initial crisis is over; we could be witnessing the trigger of a longer-term shift towards a “post-office” era in which businesses make sure they can withstand the effects of similar future threats.

Facebook Sued Down-Under For £266bn Over Cambridge Analytica Data Sharing Scandal

Six years after the personal data of 87 million users was harvested and later shared without user consent with Cambridge Analytica, Australia’s privacy watchdog is suing Facebook for an incredible £266bn over the harvested data of its citizens.

What Happened?

Between March 2014 and 2015, the ‘This Is Your Digital Life’ app, created by Cambridge University academic Aleksandr Kogan and downloaded by around 270,000 people, was able to harvest Facebook data from those users and, through them, from their friends too.

The harvested data was then shared with (sold to) the data analytics company Cambridge Analytica and used to build a software program that could predict voter behaviour and deliver personalised political adverts (political profiling) to influence choices at the ballot box in the last U.S. presidential election, and for the Leave campaign in the UK’s Brexit referendum.

Australia

The lawsuit, brought by the Australian Information Commissioner against Facebook Inc, alleges that, through the app, the personal and sensitive information of 311,127 Australian Facebook users (‘Affected Australian Individuals’) was disclosed and their privacy interfered with. The lawsuit also alleges that Facebook did not adequately inform those Australians of the manner in which their personal information would be disclosed, or that it could be disclosed via an app installed by a friend rather than by the individual concerned. Furthermore, the lawsuit alleges that Facebook failed to take reasonable steps to protect those individuals’ personal information from unauthorised disclosure.

In the lawsuit, the Australian Information Commissioner, therefore, alleges that the Australian Privacy Principle (APP) 6 has been breached (disclosing personal information for a purpose other than that for which it was collected), as has APP 11 (failing to take reasonable steps to protect the personal information from unauthorised disclosure).  Also, the Australian Information Commissioner alleges that these breaches are in contravention of section 13G of the Privacy Act 1988.

£266 Billion!

The massive potential fine of £266 billion is arrived at by multiplying the maximum penalty of A$1,700,000 (around £870,000) for each contravention of the Privacy Act by the 311,127 affected Australian Facebook users.
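
The headline figure can be reproduced with simple arithmetic (the exchange rate below is an assumption for illustration; rounding in the reporting explains small differences):

    affected_users = 311_127
    max_fine_aud_each = 1_700_000        # maximum penalty per contravention (A$)

    total_aud = affected_users * max_fine_aud_each   # about A$529 billion
    gbp_per_aud = 0.503                              # assumed exchange rate
    print(f"~£{total_aud * gbp_per_aud / 1e9:.0f}bn")  # roughly the reported £266bn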

What Does This Mean For Your Business?

Back in July 2018, 16 months after the UK Information Commissioner’s Office (ICO) began its investigation into Facebook’s sharing of users’ personal details with political consulting firm Cambridge Analytica, the ICO announced that Facebook would be fined £500,000 for data breaches. This Australian lawsuit, should it not go Facebook’s way, represents another in a series of such actions over the same scandal, but the £266 billion figure would be a massive hit and would, for example, totally dwarf the biggest settlement to date against Facebook: $5 billion to the US Federal Trade Commission over privacy matters. To put it in even greater perspective, an eye-watering potential fine of £266 billion would make the biggest GDPR fine to date, £183 million against British Airways, look insignificant.

Clearly, this is another very serious case for Facebook to focus its attention on, but the whole matter highlights just how seriously data security and privacy are now taken, and how they have been written into different national laws with very serious penalties for non-compliance. Since the scandal, Facebook has tried hard to introduce and publicise many new features and aspects of its service that could help to regain users’ trust, both in its platform’s safeguarding of their details and in its efforts to stop fake news being distributed via its platform. This announcement by the Australian Information Commissioner is, therefore, likely to be an extremely painful reminder of a regrettable period in the tech giant’s history, not to mention a potential financial threat to Facebook.

Those whose data may have been disclosed, shared and used in a way that contravened Australia’s laws may be pleased that their country is taking such a strong stance in protecting their interests, and this may send a very powerful message to other companies that store and manage the data of Australian citizens.

Google Indexing Makes WhatsApp Group Links Visible

A journalist has reported on Twitter that WhatsApp groups may not be as secure as users think because the “Invite to Group via Link” feature allows groups to be indexed by Google, thereby making them available across the Internet.

Links Visible

Chats conducted on the end-to-end encrypted WhatsApp can be joined by people who are given an invite URL, but until now it had not been thought that invite links could be indexed by Google (and other search engines) and found through simple searches. However, it appears that group links shared outside of the secure, private messaging app could be found (and the groups joined).

Exposed

The consequence of these 45,000+ invite links being found in searches is that the groups can be joined and details like the names and phone numbers of participants accessed. Targeted searches can reveal links to groups based around a number of sensitive subjects.

Links

Even though WhatsApp group admins can invalidate an existing link, WhatsApp simply generates a new one, meaning that the risk from link-sharing isn’t totally removed.

Only Share Links With Trusted Contacts

Users of WhatsApp are advised to share invite links only with trusted contacts; the links that appeared in Google searches did so because the URLs had been publicly listed, i.e. shared outside of the app.

Changed

Although Google already offers tools that let sites block content from being listed in search results, since the discovery (and subsequent publicity) of the WhatsApp invite links being indexed, some commentators have reported that the links no longer appear in Google. It has also been reported, however, that publicly posted WhatsApp invite links can still be found using other popular search engines.
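
For context, these are the standard, publicly documented mechanisms a site can use: a robots.txt rule tells compliant crawlers not to fetch matching URLs, while a per-page noindex directive tells search engines not to list a page they have crawled. The ‘/invite/’ path below is purely illustrative, not WhatsApp’s actual configuration:

    # robots.txt at the site root: stop compliant crawlers fetching invite URLs
    User-agent: *
    Disallow: /invite/

    <!-- or, on each invite page, stop the page being listed in results -->
    <meta name="robots" content="noindex">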

Recent Security Incident

One other high-profile incident reported recently, which may cause some users to question WhatsApp’s level of security, was the story about Amazon CEO Jeff Bezos’ phone allegedly being hacked by unknown parties, thought to be acting for Saudi Arabia, after a mysterious video was sent to Mr Bezos’ phone.

Also, last May there were reports of an attack on WhatsApp thought to use a ‘zero-day’ exploit to load spyware onto victims’ phones. Once a victim’s WhatsApp had been hijacked and the spyware loaded, access may have been given to encrypted chats, photos, contacts and other information. That kind of attack may also have allowed eavesdropping on calls, turning on the microphone and camera, altering call logs and hiding the method of infection. At the time, it was reported that the attack may have originated with a private Israeli company, the NSO Group.

What Does This Mean For Your Business?

In this case, although it’s alarming that the details of many group members may have been exposed, this is likely to be because links for those groups were posted publicly rather than shared privately with trusted members, as the app recommends. That said, it’s of little comfort to those who believed that their WhatsApp group membership and personal details were always totally private. It’s good news, therefore, that Google appears to have taken some action to prevent this from happening in future; hopefully, other search engines will now do the same.

WhatsApp has end-to-end encryption, which should mean that it is secure, and considering that it has at least 1.5 billion users worldwide, surprisingly few stories have emerged that have brought the general security of the app into question.

Google In Talks About Paying Publishers For News Content

It has been reported that Google is in talks with publishers with a view to buying in premium news content for its own news services to improve its relationship with EU publishers, and to combat fake news.

Expanding The Google News Initiative

Reports from the Wall Street Journal indicate that Google is in preliminary talks with publishers outside the U.S. in order to expand its News Initiative (https://newsinitiative.withgoogle.com/), the programme through which Google works with journalists, news organisations, non-profits and entrepreneurs to help filter fake news out of current stories in the ‘digital age’. Examples of big-name ‘partners’ that Google has worked with as part of the initiative include the New York Times, The Washington Post, The Guardian, and fact-checking organisations like the International Fact-Checking Network and CrossCheck (to fact-check the French election).

As well as partnerships, the Google News Initiative provides a number of products for news publishing e.g. Subscribe With Google, News on Google, Fact Check tags and AMP stories (tap-operated, full-screen content).

This Could Please Publishers

The move by Google to pay for content should please publishers, some of whom have criticised Google and other big tech players for hosting articles that attract readers and advertising money without paying to display them. Google faced particular criticism in France at the end of last year after the country implemented a European directive that should have made tech giants pay for news content, but which in practice simply led to Google removing the snippet below links to French news sites and the thumbnail images that often appear next to news results.

Back in 2014, for example, Google closed its Spanish news site after it was required to pay “link tax” licensing fees to Spanish news sites, and in November 2018 Google would not rule out shutting down Google News in other EU countries if they adopted a similar “link tax”.

Competitors

Google is also in competition with other tech giants who now provide their own fact-checked and moderated news services.  For example, back in October 2019, Facebook launched its own ‘News’ tab on its mobile app which directs users to unbiased, curated articles from credible sources.

What Does This Mean For Your Business?

For European countries and European publishers, it is likely to be good news that Google is possibly coming to the table to offer some money for the news content that it displays on its platform, and that it may be looking for a way to talk about and work through some of the areas of contention.

For Google, this is an opportunity for some good PR in an area where it has faced criticism in Europe, a chance to improve its relationship with European publishers, and a way to add value to its news service and compete with other tech giants that also offer news services with the fake news weeded out.

Featured Article – Combatting Fake News

The spread of misinformation/disinformation/fake news by a variety of media, including digital and printed stories and deepfake videos, is a growing threat in what has been described as our ‘post-truth era’, and many people, organisations and governments are looking for effective ways to weed out fake news and to help people make informed judgements about what they hear and see.

The exposure of fake news and its part in recent election scandals, the common and frequent use of the term by prominent figures and publishers, and the need for fact-checking services have all contributed to an erosion of public trust in the news people consume. For example, YouGov research used to produce the annual Digital News Report (2019) from the Reuters Institute for the Study of Journalism at the University of Oxford showed that public concern about misinformation remains extremely high, reaching a 55 per cent average across 38 countries, with less than half (49 per cent) of people trusting the news media they themselves use.

The spread of fake news online, particularly at election times, is of real concern. With the UK election just passed, the UK Brexit referendum, the 2017 UK general election and the last U.S. presidential election all found to have suffered interference in the form of so-called ‘fake news’, and with the 59th US presidential election scheduled for Tuesday, November 3, 2020, the subject is high on the world agenda.

Challenges

Those trying to combat the spread of fake news face a common set of challenges, such as those identified by Richard Zack, CEO of OurNews, which include:

– There are people (and state-sponsored actors) worldwide who are making it harder to know what to believe, e.g. by spreading fake news and misinformation and distorting stories.

– Many people don’t trust the media or don’t trust fact-checkers.

– Simply presenting facts doesn’t change people’s minds.

– People prefer/find it easier to accept stories that reinforce their existing beliefs.

Also, some research (Stanford’s Graduate School of Education) has shown that young people may be more susceptible to seeing and believing fake news.

Combatting Fake News

So, who’s doing what online to meet these challenges and combat the fake news problem?  Here are some examples of those organisations and services leading the fightback, and what methods they are using.

Browser-Based Tools

Recent YouGov research showed that 26 per cent of people say they have started relying on more ‘reputable’ sources of news, but as well as simply choosing sources they regard as trustworthy, people can now use services that give them shorthand information on which to judge the reliability of news and its sources.

Since people consume online news via a browser, browser extensions (and app-based services) have become more popular.  These include:

– Our.News.  This service combines objective facts about an article with subjective views incorporating user ratings to create labels (like nutrition labels on food) next to news articles, which a reader can use to make a judgement.  Our.News labels use publisher descriptions from Freedom Forum, bias ratings from AllSides, and information about an article’s source, author and editor.  It also uses fact-checking information from sources including PolitiFact, Snopes and FactCheck.org, plus labels such as “clickbait” or “satire”, along with user ratings and reviews.  The Our.News browser extension is available for Firefox and Chrome, and there is an iOS app. For more information go to https://our.news/.

– NewsGuard. This service, for personal use or for NewsGuard’s library and school system partners, offers a reliability rating score of 0-100 for each site, based on its performance against nine key criteria, and displays rating icons (green to red) next to links on all of the top search engines, social media platforms and news aggregation websites.  NewsGuard also provides summaries showing who owns each site, its political leaning (if any), and warnings about hoaxes, political propaganda, conspiracy theories, advertising influences and more.  For more information, go to https://www.newsguardtech.com/.

Platforms

Another approach to combatting fake news is to create a news platform that collects and publishes news that has been checked and is given a clear visual rating for users of that platform.

One such example is Credder, a news review platform which allows journalists and the public to review articles and create credibility ratings for every article, author and outlet.  Credder focuses on credibility, not clicks, and uses a Gold Cheese (yellow) symbol next to articles, authors and outlets rated 60% or higher, and a Mouldy Cheese (green) symbol next to those rated 59% or less. Readers can, therefore, make a quick choice about what to read based on these symbols and the trust value they create.

Credder also displays a ‘Leaderboard’ which is based on rankings determined by the credibility and quantity of reviewed articles. Currently, Credder ranks nationalgeographic.com, gizmodo.com and cjr.org as top sources with 100% ratings.  For more information see https://credder.com/.

Automation and AI

Many people now consider automation and AI to be ‘intelligent’, fast, and scalable enough to start tackling the vast amount of fake news being produced and circulated.  For example, Google and Microsoft have been using AI to automatically assess the truthfulness of articles.  Also, initiatives like the Fake News Challenge (http://www.fakenewschallenge.org/) seek to explore how AI technologies, particularly machine learning and natural language processing, can be employed to combat fake news, supporting the idea that AI holds promise for significantly automating parts of the procedure human fact-checkers use to determine whether a story is real or a hoax.
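
As a very rough sketch of what such an approach involves (illustrative only, with a toy dataset; real systems train on large labelled corpora and far richer models), a classic text-classification pipeline can score headlines:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Toy labelled examples: 1 = unreliable, 0 = reliable (purely illustrative)
    headlines = [
        "Miracle cure the government doesn't want you to know about",
        "Scientists publish peer-reviewed study on vaccine efficacy",
        "Shocking secret photo proves politician is a lizard",
        "Central bank holds interest rates at 0.75 per cent",
    ]
    labels = [1, 0, 1, 0]

    # Bag-of-words features (with word pairs) feeding a linear classifier
    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
    model.fit(headlines, labels)

    # Probability that a new headline resembles the 'unreliable' examples
    print(model.predict_proba(["You won't believe this one weird election trick"])[0][1])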

However, the human-written rules underpinning AI, and how AI is ‘trained’ can also lead to bias.

Government

Governments clearly have an important role to play in combatting fake news, especially since fake news/misinformation has been shown to have been spread via different channels (e.g. social media) to influence aspects of democracy and electoral decision-making.

For example, in February 2019, the Digital, Culture, Media and Sport Committee published a report on disinformation and ‘fake news’ highlighting how “Democracy is at risk from the malicious and relentless targeting of citizens with disinformation and personalised ‘dark adverts’ from unidentifiable sources, delivered through the major social media platforms”.  The report called for a shift in the balance of power between “platforms and people” and for tech companies to adhere to a code of conduct written into law by Parliament and overseen by an independent regulator.

Also, in the US, Facebook’s Mark Zuckerberg has been made to appear before the U.S. Congress to discuss how Facebook tackles false reports.

Finland – Tackling Fake News Early

One example of a government taking a different approach to tackling fake news is Finland, a country recently rated Europe’s most resistant nation to fake news.  Critical evaluation of news and fact-checking were introduced into the Finnish school curriculum as part of a government strategy after 2014, when Finland was targeted with fake news stories from its Russian neighbour.  The changes to the curriculum, across core areas in all subjects, are designed to equip Finnish people from a very young age to detect, and do their part to fight, false information.

Social Media

The use of Facebook to spread fake news that is likely to have influenced voters in the UK Brexit referendum, the 2017 UK general election and the last U.S. presidential election put social media and its responsibilities very much in the spotlight.  Also, the Cambridge Analytica scandal and the illegal harvesting of 50 million Facebook profiles in early 2014 for apparent electoral profiling purposes damaged trust in the social media giant.

Since then, Facebook has tried to be seen to be actively tackling the spread of fake news via its platform.  Its efforts include:

– Hiring the London-based registered charity ‘Full Fact’, which reviews stories, images and videos in an attempt to tackle misinformation that could “damage people’s health or safety or undermine democratic processes”.  Facebook is also reported to be working with fact-checkers in more than 20 countries, and to have had a working relationship with Full Fact since 2016.

– In October 2018, Facebook announced a new rule for the UK meaning that anyone wishing to place an advert relating to a live political issue or promoting a UK political candidate (referencing political figures, political parties, elections, legislation before Parliament or past referenda that are the subject of national debate) will need to prove their identity and that they are based in the UK. The adverts they post will also have to carry a “Paid for by” disclaimer so that Facebook users can see who they are engaging with when viewing the ad.

– In October 2019, Facebook launched its own ‘News’ tab on its mobile app which directs users to unbiased, curated articles from credible sources in a bid to publicly combat fake news and help restore trust in its own brand.

– In January this year, Monika Bickert, Vice President of Facebook’s Global Policy Management announced that Facebook is banning deepfakes and “all types of manipulated media”.

Other Platforms & Political Adverts

Political advertising has become mixed up with the spread of misinformation in the public perception in recent times.  With this in mind, some of the big tech and social media players have been very public about making new rules for political advertising.

For example, in November 2019, Twitter Inc banned political ads, including ads referencing a political candidate, party, election or legislation.  Also, at the end of 2019, Google took a stand against political advertising by saying that it would limit audience targeting for election adverts to age, gender and the general location at a postal code level.

Going Forward

With a U.S. election this year, the sheer number of sources, and the scale and resources that some (state-sponsored) actors have, the spread of fake news is likely to remain a serious problem for some time yet.  From the Finnish example of educating citizens to spot fake news, to browser extensions, moderated news platforms, the use of AI, and government and other scrutiny and interventions, we are all now aware of the problem; the fight-back is under way, and we are gaining more ways to make our own informed decisions about what we read and watch and how credible and genuine it is.

WhatsApp Ceases Support For More Old Phone Operating Systems

WhatsApp has announced that its messaging app will no longer work on outdated operating systems, which is a change that could affect millions of smartphone users.

Android versions 2.3.7 and Older, iOS 8 and Older

The change, which took place on February 1, means that WhatsApp has ended support for Android operating system versions 2.3.7 and older and iOS 8 and older, meaning that users with those operating systems on their smartphones will no longer be able to create new accounts or re-verify existing accounts.  Although these users will still be able to use WhatsApp on their phones, WhatsApp has warned that because it has no plans to continue developing for the old operating systems, some features may stop functioning at any time.

Why?

The change is consistent with the Facebook-owned app’s strategy of withdrawing support for older systems and devices, as it did back in 2016 (smartphones running older versions of Android, iOS and Windows Phone, plus devices running Android 2.2 Froyo, Windows Phone 7 and older, and iOS 6 and older), and when it withdrew support for Windows phones on 31 December 2019.

For several years now, WhatsApp has made no secret of wanting to maintain the integrity of its end-to-end encrypted messaging service, making changes that will ensure new features can be added to keep the service competitive, maintain feature parity across different systems and devices, and focus on the operating systems that it believes the majority of its customers in its main markets now use.

Security & Privacy?

Since there will no longer be updates for older operating systems, continuing to use them could also lead to privacy and security risks.

What Now?

Users who have a smartphone with an older operating system can update the operating system, or upgrade to a newer smartphone model, to ensure that they can continue using WhatsApp.

The WhatsApp messaging service can also now be accessed through the desktop by syncing with a user’s phone.

What Does This Mean For Your Business?

WhatsApp is used by many businesses for general communication and chat, groups and sending pictures, and for those business users who still have an older smartphone operating system, this change may be another reminder that the perhaps-overdue time to upgrade is at hand.  Some critics, however, have pointed out that the move may have more of a negative effect on WhatsApp users in growth markets, e.g. Asia and Africa, where many older devices and operating systems are still in use.

For WhatsApp, this move is a way to stay current and competitive in its core markets and to ensure that it can give itself the scope to offer new features that will keep users loyal and engaged with and committed to the app.

Tech Tip – Using WhatsApp On Your PC

If you’re working at your PC and you need to access WhatsApp without having to keep looking at your phone, there’s an easy way to use WhatsApp on your PC – here’s how:

– Open web.whatsapp.com in a browser.

– Open WhatsApp on your phone.

– Open the Chats screen, select ‘Menu’, and select ‘WhatsApp Web’.

– Scan the QR code with your phone.

– You will now be able to see your WhatsApp chats on your PC every time you open web.whatsapp.com in a browser.

Facebook Bans Deepfake Videos

In a recent blog post ahead of the forthcoming US election, Monika Bickert, Facebook’s Vice President of Global Policy Management, announced that the social media giant is banning deepfakes and “all types of manipulated media”.

Not Like Last Time

With the 59th US presidential election scheduled for Tuesday, November 3, 2020, Facebook appears to be taking no chances after the trust-damaging revelations around unauthorised data sharing with Cambridge Analytica, and the use of the platform by foreign powers such as Russia in an attempt to influence the outcome of the 2016 election of Donald Trump.

The fallout from the news that 50 million Facebook profiles were harvested, starting as early as 2014, to build a software program that could predict and use personalised political adverts to influence choices at the ballot box in the last U.S. election includes damaged trust in Facebook, a substantial fine, and a fall in the number of daily users in the United States and Canada for the first time in the company’s history.

Deepfakes

One of Facebook’s key concerns this time around appears to be so-called ‘deepfake’ videos.  These use deep learning technology and manipulated images of target individuals (found online), often celebrities, politicians and other well-known people, to create very convincing videos of the subjects saying and doing whatever the video-maker wants. Such videos could obviously be used to influence public thinking about political candidates and, as well as influencing election results, it would be very damaging for Facebook, which has been very public about trying to rid itself of ‘fake news’, to be seen as a platform for the easy distribution of deepfake videos.  No doubt Facebook’s CEO Mark Zuckerberg would like to avoid having to appear before Congress again to answer questions about his company’s handling of personal data, as he had to back in April 2018.

The New Statement From Facebook

This latest blog post statement from Facebook says that as a matter of policy, it will now remove any misleading media from its platform if the media meets two criteria, which are:

  • It has been synthesised, i.e. edited beyond adjustments for clarity or quality, to the point where the ‘average person’ could be misled into thinking the subject of the media/video is saying words that they did not actually say, and
  • It is the product of artificial intelligence or machine learning that has merged, replaced or superimposed content onto a video, making it appear to be authentic.
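
Put in code form (purely illustrative, not Facebook’s actual logic), the two criteria combine as a logical AND, which is why conventionally edited misleading videos fall outside the ban:

    def should_remove(misleads_average_person: bool, ai_synthesised: bool) -> bool:
        # Criterion 1: edits go beyond clarity/quality and could mislead a viewer
        # Criterion 2: AI/ML has merged, replaced or superimposed content
        return misleads_average_person and ai_synthesised

    print(should_remove(True, True))   # True  -> removed under the policy
    print(should_remove(True, False))  # False -> a misleading but non-AI edit stays up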

Not Satire

Facebook has been careful to point out that this policy change will not affect content that is clearly intended to be parody or satire, or videos that have been edited just to omit or change the order of the words featured in them.

Existing Policies

Any media posted to Facebook is subject to the social media giant’s existing comply-or-be-removed ‘Community Standards’ policies which cover, among other things, voter suppression and hate speech.

What Will Happen?

Facebook says that any videos that don’t meet its standards for removal are still eligible for review by one of its independent third-party fact-checkers (which include 50+ partners worldwide), and that any photo or video rated false or partly false by a fact-checker will have its distribution “significantly” reduced in News Feed and will be rejected if it’s being run as an ad. Also, those who see it and try to share it, or have already shared it, will be shown warnings alerting them that it’s false.

Measures

Facebook has taken many measures to ensure that it is not seen as a platform that can’t be trusted with user data or as a distributor of fake news.  For example:

– In January 2019 Facebook announced (in the UK) that it was working with London-based, registered charity ‘Full Fact’ to review stories, images and videos, in an attempt to tackle misinformation that could “damage people’s health or safety or undermine democratic processes”.

– In September 2019, Facebook launched its Deepfake Detection Challenge, with $10 million in grants and a cross-sector coalition of organisations, to encourage the production of tools to detect deepfakes.

– In October 2019, Facebook launched the ‘News’ tab on its mobile app to direct users to unbiased, curated articles from credible sources in a bid to publicly combat fake news and help restore trust in its own brand.

– Facebook has partnered with Reuters to produce a free online training course to help newsrooms worldwide to identify deepfakes and manipulated media.

Criticism

Despite this recent announcement of a policy change to help eradicate deepfakes from its platform, Facebook has been criticised by some commentators for appearing to allow certain videos that could be described as misinformation, in situations apparently of its choosing.  For example, Facebook has said that content that violates its policies may be allowed if it is deemed newsworthy, presumably covering cases such as the obviously doctored videos of Labour’s Keir Starmer and US House Speaker Nancy Pelosi.

What Does This Mean For Your Business?

Clearly, any country would like to guard against outside influence in its democratic processes and the deliberate spread of misinformation, and bearing in mind Facebook’s position of influence, it is good for everyone that it is taking responsibility and trying to block obvious attempts to spread misinformation by altering its policies and working with other organisations. Businesses that use Facebook as an advertising platform also need to know that Facebook users trust (and will continue to use) the platform (and see their adverts), so it’s important to businesses that Facebook is vigilant and takes action where it can.  Also, by helping to protect the democratic processes of the countries it operates in, particularly in the US at the time of an election (and bearing in mind what happened last time), it is in Facebook’s own interest to protect its brand against accusations of allowing political influence through a variety of media on its platform, and against any further loss of public trust. This change of policy also shows that Facebook is trying to demonstrate readiness to deal with the very latest threat of deepfakes (even though they are still relatively rare).

That said, Google and Twitter (the latter with its new restrictions on micro-targeting, for example) have both been very public about trying to stop lies in political advertising on their platforms, but Facebook has just been criticised by the IPA over its decision not to ban political ads that use micro-targeting and spurious claims to sway the opinions of voters.
