Archive for Internet Security

New Chrome 69 Creates Better Passwords, Among Other Features

Chrome 69, the latest version of Google's now 10-year-old browser, has a number of value-adding new features, including the ability to automatically generate strong passwords.

Improved Password Manager

This latest version of Chrome has an improved password manager that is perhaps more fitting of the browser that is favoured by 60% of browser users, many of whom still rely upon using very weak passwords. For example, the most commonly used passwords in 2017 were reported to be 123456, password, 12345678 and qwerty.

The updated password manager in Chrome 69 hopes to make serious inroads into this most simple of human errors by recommending strong passwords when users sign up for websites or update settings. The Chrome 69 password manager will suggest passwords incorporating at least one lowercase character, one uppercase character and at least one number, and where websites require symbols in passwords it will be able to add these. Users will be able to manually edit the Chrome-generated password, and while Google is generating the password, every time users click away from its suggestion a new one is created. Chrome 69 will then store the password on a laptop or phone so that users don’t have to write it down or try to remember it (as long as they are using the same device).
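Chrome's own generator is not public, but the rules described above (at least one lowercase letter, one uppercase letter and one digit, with symbols added where a site requires them) can be sketched in Python. The function name, length and symbol set below are illustrative assumptions, not Chrome's actual implementation:

```python
import secrets
import string

def suggest_password(length=15, require_symbols=False):
    """Generate a random password containing at least one lowercase
    letter, one uppercase letter and one digit (plus a symbol if
    required), mirroring the rules described for Chrome 69."""
    pools = [string.ascii_lowercase, string.ascii_uppercase, string.digits]
    if require_symbols:
        pools.append("!@#$%^&*")  # illustrative symbol set
    alphabet = "".join(pools)
    while True:
        candidate = "".join(secrets.choice(alphabet) for _ in range(length))
        # Regenerate until every required character class is present,
        # much as clicking away from Chrome's suggestion produces a new one.
        if all(any(c in pool for c in candidate) for pool in pools):
            return candidate
```

Using the cryptographically secure `secrets` module (rather than `random`) matters here: password suggestions must be unpredictable to an attacker.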

Other Features

Other new and improved features of Chrome 69 include:

Faster and more accurate form-filling: Google says that because information such as passwords, addresses and credit card numbers is saved in a user’s Google account and can be accessed directly from the Chrome toolbar, Chrome can make it much easier and faster to fill out online checkout forms.

Combined search and address bar (improvements): In Chrome 69, users will have a combined search and address bar (the Omnibox), which shows answers directly in the address bar without users having to open a new tab, thereby making it more convenient. Also, if there are several tabs open across three browser windows, for example, a search in the Omnibox will tell users if that website’s already open and will allow navigation straight to it with “Switch to tab”. Google says that users will soon also be able to search files from their Google Drive directly in the Omnibox.

CSS Scroll Snap: This feature allows developers to create smoother browsing experiences by telling the browser where to stop after each scrolling operation. It is particularly useful for displaying carousels and paginated sections, guiding users to the next slide or section.

Put The www. Back!

There was some controversy, and protests from some Chrome users, over the fact that version 69 of Chrome no longer shows the www. part of a URL (or the m. on mobiles) in the address bar. The change was made to take account of the limited space on mobile screens and for greater security (to stop confusion with phishing URLs). It is worth mentioning that Apple’s Safari also hides URL characters. Some critics of Google’s move have said that it could confuse users into thinking that they’re at the wrong website.

Other Criticism

Some more cynical / informed commentators have suggested that the change in URL display is actually more to do with Google's AMP system and AMP cache, which benefit the advertising side of Google’s business.

What Does This Mean For Your Business?

The changes in Chrome 69 that encourage and facilitate the use of much stronger passwords may be a little overdue, but it has to be good news for the security of all Chrome users. The speedier form-filling will also be a time-saver in an age where many people now carry out many of their daily transactions online and on mobile devices.

Even though stronger passwords are a good thing, security has now moved on again: passwords in general have been found to be less secure than biometrics and other access methods.

The new Chrome 69 has been released, but so has the beta version of Chrome 70, and it remains to be seen how security is upgraded yet again in subsequent versions as cyber-crime threats become more wide-ranging and sophisticated.

Find Out What ‘Deep Fakes’ Are and Why They’re A Threat

Deep fakes are digitally manipulated videos that have been created using deep learning technology to make the subject of the video (often a famous person) say anything the video maker wants them to say, even incorporating the style and facial expressions of another person.

Example

A good example is a video demonstrating the technique, featuring a fake Barack Obama saying things that he would never normally (publicly) say: https://www.youtube.com/watch?v=AmUC4m6w1wo

Improving Fast

The technique, which had its less than auspicious first uses in pornography (where porn actors were made to look and sound like famous people), has become arguably far more convincing as deep learning and AI have led to more seamless results.

Style Transfer

The development of the technology used in deep fake videos has improved to the point where even a person’s style can be superimposed and incorporated. An example of this can be seen in videos created by researchers at Carnegie Mellon University, who have been able to use artificial intelligence technology to transfer the facial expressions of one person in a video to another.

See this example on YouTube: https://www.youtube.com/watch?v=ehD3C60i6lw where John Oliver is made to reflect the style of Stephen Colbert, a daffodil is made to bloom (time lapse) the same way as a hibiscus, and Barack Obama is given the same facial expressions and style as Dr Martin Luther King and President Donald Trump.

What’s The Danger?

The danger, according to US lawmakers and intelligence organisations, is that videos could be made by adversarial nation states and used as another tool in disinformation campaigns. For example, at key moments, politicians and other influential figures could be made to appear to make false and/or inflammatory statements that could be believed by less politically aware recipients. In short, these videos could be used to influence opinions, e.g. at election time, and could afford a foreign power a way to interfere that relies upon human error – the same thing that many successful cyber attacks have relied upon.

What Does This Mean For Your Business?

With the US Midterm elections on the way, with allegations of Russian interference and possible collusion still hanging over President Trump’s head, and with some evidence that Facebook was used by a foreign power to try to influence the last US election result, it is understandable that the US government is worried about any tools that could be used to interfere in their democratic process. This is one of the reasons why Microsoft has seized six phishing domains that allegedly belong to Russian government hackers, and has introduced a pilot AccountGuard secure email service for election candidates.

If the technology behind deep fake videos keeps improving, it is easy to see it being used as another tool in other types of cyber-crime.

There is, of course, an upside and some ways that deep fake technology can be used in a positive way. For example, deep fake could be used to help film-makers to reduce costs and speed up work, make humorous videos and advertisements, and even help in corporate training.

UK Government Guilty of Mass Surveillance Human Rights Breach

The European Court of Human Rights in Strasbourg has found the UK government guilty of violating the right to privacy of citizens under the European convention because the safeguards within the government’s system for bulk interception of communications were not strong enough to provide guarantees against abuse.

The Case

The case, which led to the verdict, was brought against the UK government by 14 human rights groups, journalism organisations and privacy organisations, such as Amnesty International, Big Brother Watch and Liberty, in the wake of the 2013 revelations by Edward Snowden – specifically that GCHQ was secretly intercepting communications traffic via fibre-optic undersea cables.

In essence, although the court, which voted five to two against the UK government, accepted that police and intelligence agencies need covert surveillance powers to tackle threats, it held that those threats do not justify spying on every citizen without adequate protections.

Three Main Points

The ruling against the UK government in this case centred on three points – firstly the regime for bulk interception of communications (under section 8(4) of RIPA), secondly the system for collecting communications data (under Chapter II of RIPA), and finally the intelligence sharing programme.

The UK government was found to have breached the convention on the first two points, but the ECHR didn’t find a legal problem with GCHQ’s regime for sharing sensitive digital intelligence with foreign governments. Also, the court decided that bulk interception with tighter safeguards was permissible.

Key Points

Some of the key points highlighted by the rulings against the UK government, in this case, are that:

  • Bulk interception is not unlawful in itself, but the oversight of that apparatus was not up to scratch in this case.
  • The system governing the bulk interception of communications is not capable of keeping interference to what is strictly necessary in a democratic society.
  • There was concern that the government could examine the who, when and where of a communication, apparently without restriction i.e. problems with safeguards around ‘related data’. The worry is that related communications data is capable of painting an intimate picture of a person e.g. through mapping social networks, location tracking and insights into who they interacted with.
  • There had been a violation of Article 10 relating to the right to freedom of expression for two of the parties (journalists), because of the lack of sufficient safeguards in respect of confidential journalist material.
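The court's concern about 'related data' is easy to demonstrate. The toy Python sketch below (with entirely hypothetical records and field names) builds a profile of a person purely from communications metadata, without reading a single message body:

```python
from collections import Counter

# Hypothetical call/message records: (person, contact, location) tuples.
# No message content is included - only 'related data'.
records = [
    ("alice", "clinic", "city_hospital"),
    ("alice", "bob",    "home"),
    ("alice", "clinic", "city_hospital"),
    ("alice", "lawyer", "courthouse"),
]

def profile(person, records):
    """Summarise who a person communicates with, and from where,
    using metadata alone."""
    mine = [r for r in records if r[0] == person]
    return {
        "top_contacts": Counter(r[1] for r in mine).most_common(),
        "locations": sorted({r[2] for r in mine}),
    }

# Repeated contact with a clinic and a lawyer, and the places visited,
# already paint an intimate picture - which is exactly the concern.
profile("alice", records)
```

Even this four-record example reveals health and legal associations; at the scale of bulk interception, such mapping of social networks and locations is far more revealing.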

Privacy Groups Triumphant

Privacy groups were clearly very pleased with the outcome. For example, the Director of Big Brother Watch is reported as saying that the judgement was a step towards protecting millions of law-abiding citizens from unjustified intrusion.

What Does This Mean For Your Business?

Like the courts, we are all aware that we face threats of terrorism, online sexual abuse and other crimes, and that advancements in technology have made it easier for terrorists and criminals to evade detection, and that surveillance is likely to be a useful technique to help protect us all, our families and our businesses.

However, we should have a right to privacy, particularly if we feel strongly that there is no reason for the government to be collecting and sharing information about us that, with the addition of related data, could identify us not just to the government but to any other parties who come into contact with that data.

The reality of 2018 is that we now live in a country where, in addition to CCTV surveillance, state surveillance powers are set in law. The UK ‘Snooper’s Charter’ / Investigatory Powers Act became law in November 2016 and was designed to extend the reach of state surveillance in Britain. The Charter requires web and phone companies (by law) to store everyone’s web browsing histories for 12 months, and to give the police, security services and official agencies unprecedented access to that data. The Charter also means that security services and police can hack into computers and phones and collect communications data in bulk, and that judges can sign off police requests to view journalists’ call and web records.

Although businesses and many citizens prefer to operate in a safe and predictable environment, and trust governments to operate surveillance just for this purpose and with the right safeguards in place, many are not prepared to blindly accept the situation. Many people and businesses (communications companies, social media, and web companies) are uneasy with the extent of the legislation, what it forces companies to do, how necessary it is, and what effect being publicly known to snoop on their customers on behalf of the state will have on their businesses.

This latest ruling against the government won’t stop bulk surveillance or the sharing of data with intelligence partners, but many see it as a blow against a law that makes them uneasy in a time when GDPR is supposed to have given us power over what happens to our data.

Only 32% of Emails Clean Enough To ‘Make It’

A twice-yearly study by FireEye has found that less than a third of over half a billion emails analysed were considered clean enough not to be blocked from entering our inboxes.

Phishing Problem Evident

The study found that even though 9 out of 10 emails that are blocked by email security / anti-virus didn’t actually contain malware, 81% of the blocked emails were phishing attacks. This figure is double that of the previous 6 months.

Webroot’s Quarterly Threat Trends Report data, for example, shows that 1.39 million new phishing sites are created each month, and that this figure was even as high as 2.3 million in May last year. It is likely that phishing attacks have increased so much because organisations have been focusing too much of their security efforts on detecting malware. Also, human error is likely to be a weak link in any company, and phishing has proven to be very successful, sometimes delivering results in a second wave as well as the first attack. For example, in the wake of the TSB bank system meltdown, phishing attacks on TSB customers increased by 843% in May compared with April.

A recent KnowBe4 study involved sending phishing test emails to 6 million people, and the study found that recipients were most likely to click on phishing emails when they promised money or threatened the loss of money. This highlights a classic human weakness that always provides hope to cyber-criminals, and the same criminals know that the most effective templates for phishing are the ones that cause a knee-jerk reaction in the recipient i.e. the alarming or urgent nature of the subject makes the recipient react without thinking.
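The 'knee-jerk' templates described above can be caricatured with a naive keyword score. Real filters are far more sophisticated; the function and word list below are purely illustrative of the money-promising, loss-threatening language the KnowBe4 study found most effective:

```python
# Words that promise money or threaten its loss - the triggers the
# KnowBe4 study found most effective. The list is illustrative only.
URGENCY_WORDS = {"urgent", "immediately", "suspended", "refund",
                 "invoice", "payment", "prize", "overdue"}

def urgency_score(subject):
    """Crude score: fraction of trigger words present in a subject line."""
    words = {w.strip(".,!?:").lower() for w in subject.split()}
    return len(words & URGENCY_WORDS) / len(URGENCY_WORDS)

urgency_score("URGENT: payment overdue, account suspended")  # high score
urgency_score("Minutes from Tuesday's meeting")              # scores 0.0
```

The point of the sketch is the asymmetry it exposes: attackers deliberately load subject lines with exactly these triggers, because they provoke a reaction before the recipient stops to think.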

Increase In Malicious Intent Emails

The FireEye study also highlighted the fact that there has been an increase over the last 6 months in the emails sent to us that have malicious intent. For example, the latest study showed that one in every 101 emails had malicious intent, whereas this figure was one in every 131 in the previous 6 months.

Biggest Vulnerability

As FireEye noted after seeing the findings of their research, email is the most popular vector for cyber attacks, and it is this that makes email the biggest vulnerability for every organisation.

What Does This Mean For Your Business?

It is very worrying that we can only really trust less than one third of emails being sent to businesses as being ‘clean’ enough, and free enough of obvious criminal intent, to be allowed through to the company inbox. It is, of course, important to have effective anti-virus / anti-malware protection in place on email programs, but phishing emails are able to get past this kind of protection, as are other methods such as impersonation attacks like CEO fraud. Organisations, therefore, need to focus on making sure that staff are sufficiently trained and educated about the threats and the warning signs, and that there are clear procedures and lines of responsibility to follow when emails relate to, for example, the transfer of money (even when the request appears to come from the CEO).

Cyber-criminals are getting bolder and more sophisticated, and companies need to ensure that there is no room for weak ‘human error’ links on the front line.

Microsoft Launches ‘AccountGuard’ Email Service For Election Candidates

A new kind of pilot secure email service called ‘AccountGuard’ has been launched by Microsoft, specifically for use by election candidates, and as one answer to the kind of interference that took place during the last US presidential election campaign.

Ready For The Midterm Elections

The new, free email service (which people must use Office 365 to register for) is an off-shoot of Microsoft’s ‘Defending Democracy’ Program. This program was launched in April with the aim of protecting campaigns from hacking, through increased cyber resilience measures, enhanced account monitoring and incident response capabilities.

The AccountGuard pilot has been launched in time for the US Midterm elections which are the general elections held in November every four years, around the midpoint of a president’s four-year term of office.

Who Can Use AccountGuard?

Microsoft says that its AccountGuard service can be used by all current candidates for federal, state and local office in the United States and their campaigns; the campaign organisations of all sitting members of Congress; national and state party committees; any technology vendors who primarily serve campaigns and committees; and some non-profit organisations and non-governmental organisations. Microsoft AccountGuard is offered free of charge and is full service, coming with free email and phone support.

Three Core Offerings

AccountGuard has three core offerings. These are:

  1. Unified threat detection and notification across accounts. This means providing notification about any cyber threats in a unified way across both email systems run by organisations and the personal accounts of those organisations’ leaders and staff who opt in. This part of the service will be available only for Microsoft services including Office 365, Outlook.com and Hotmail to begin with, and Microsoft says it will draw on the expertise of the Microsoft Threat Intelligence Center (MSTIC).
  2. Security guidance and ongoing education. Registering for Microsoft AccountGuard gives organisations best practice guidance and materials. These are in the form of off-the-shelf materials and in-depth live sessions.
  3. Early adopter opportunities. This means access to private previews of the kind of security features that are usually offered by Microsoft to large corporate and government account customers.

Similar To Google

Some commentators have highlighted similarities between the AccountGuard idea and Google’s Advanced Protection Program (APP), also launched this year, although APP is open to anyone, requires log in with hardware authentication keys, and locks out third-party app access.

What Does This Mean For Your Business?

When you think about it, what Microsoft appears to be admitting is that its everyday email programs are simply not secure enough to counter many of the threats that now look likely to come from other states when elections are underway. Microsoft’s other, non-political business customers who are also at risk from common cyber attacks e.g. phishing, may feel a little left out that they are apparently not being offered the same level of security.

Also, protecting democracy sounds like quite a grand aim for a service provider offering an email service. Microsoft does, however, accept that it can’t solve the threat to US democracy on its own and that it believes this will require technology companies, government, civil society, the academic community and researchers working together. Microsoft also acknowledges that AccountGuard is limited to protecting those using enterprise and consumer services, and that attacks can actually reach campaigns through a variety of other ways. Microsoft also appears to be hinting that it may be thinking of expanding AccountGuard to industry as well as government depending on how the pilot works.

Google To Kill Dodgy Tech Support Ads

A rise in the number of adverts appearing in Google placed by scammers offering fake tech support has led Google to announce the rollout of a new advert verification programme.

Can’t Tell The Good From The Bad

Google’s Director of Global Product Policy, David Graff, made the announcement on the Google blog. Mr Graff said that, after seeing a rise in misleading ad experiences stemming from third-party technical support providers, Google had taken the decision to begin restricting ads in that category globally. Mr Graff also said that, because the fraudulent activity takes place off the Google platform, it has made it difficult to separate the bad actors from the legitimate providers, and this has necessitated the roll out in the coming months of a verification program to ensure that only legitimate providers of third-party tech support can use the Google platform to reach consumers.

The Scam Adverts

According to Google, last year it took down more than 3.2 billion ads that violated its advertising policies. Google has banned ads for payday loans and bail bonds services, and has introduced verification programmes to fight fraudulent ads for other services such as local locksmith services and addiction treatment centres. It now appears that the scammers have moved into the tech support category to find their victims.

How The Scam Works

The FBI’s Internet Crime Complaint Center (IC3) received approximately 11,000 complaints related to tech support fraud in 2017. This kind of fraud can use several methods for the initial contact with the victim, e.g. telephone, search engine adverts, pop-up messages or locked screens (accompanied by a recorded, verbal message to contact a phone number for assistance), or a warning in a phishing e-mail.

The way the fake tech support scam works using search engine adverts, which is the method that Google has highlighted, is as follows:

  • Criminals pay to have fraudulent tech support company links and ads appear higher in search results. Victims click on the links / ads, which provide a phone number.
  • When the victim calls the fake tech support company, a criminal posing as a representative attempts to convince the victim to provide remote access to their device. If the device is a tablet or a smartphone, the criminal usually tries to make the victim connect the device to a desktop computer.
  • When a remote connection has been made, the criminal will claim to find expired licenses, viruses, malware or other (bogus) issues and will tell the victim that there will be a charge to remove the issue.
  • The criminal will then request payment through personal/electronic check, bank/wire transfer, debit/credit card, prepaid card, or virtual currency.

The scam has other variations which can also involve re-targeting previous victims by posing as government officials / police, offering assistance in recovering losses from a previous tech support fraud incident.

What Does This Mean For Your Business?

For those companies legitimately offering tech support services online using advertising, as well as for the many previous and potential victims, this announcement by Google will be welcomed. It is also in Google’s interest to police its own advertising platform because it provides a significant source of revenue.

We can all take precautions to stop ourselves / our businesses from falling victim to this type of scam. These precautions include:

  • Remembering that any legitimate tech support company is unlikely to initiate unsolicited contact with you / your company.
  • Installing ad-blocking software to eliminate / reduce pop-ups and malvertising (online advertising to spread malware), and making sure that all computer anti-virus, security, and malware protection is up to date.
  • Being very cautious of any support numbers that have been obtained via open source searching, i.e. via sponsored links / Google ads.
  • Not giving any unverified people remote access to any devices or accounts.

‘Five Eyes’ Demand Back Door Access To Encrypted Services … Or Else

The frustration of the so-called ‘Five Eyes’ governments in not being allowed access to end-to-end encrypted apps such as WhatsApp has boiled-over into the threat of enforcement via legislative (or other) measures.

Who Are The ‘Five Eyes’?

The so-called ‘Five Eyes’ refers to the intelligence alliance of the governments of the UK, US, Canada, Australia, and New Zealand. Dating back to just after World War 2, the alliance is now secured by the UKUSA Agreement, a treaty for joint cooperation in signals intelligence.

What’s The Problem?

The argument from the government perspective is that end-to-end encryption in apps such as WhatsApp and services such as Google is preventing them from gaining access to conversations of criminals, terrorists and organized crime groups, and that tech companies are refusing to build ‘back doors’ into these services to enable governments to snoop.

The argument from tech companies that use end-to-end encryption in their services is that they are private companies with a duty and responsibility to protect the personal details of their customers, to protect the free speech that takes place on their platforms, and to prevent the likely loss of customers / users and damage to their brand and image if they were publicly known to be allowing government snooping. Also, tech companies argue that if ‘back doors’ are built into supposedly encrypted and secure services, then those services are no longer secure or fully encrypted, and could be accessed by cyber-criminals, thereby posing a security threat to users.
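The tech companies' security argument can be illustrated with a deliberately toy sketch: a one-time-pad-style XOR cipher standing in for real end-to-end encryption (no real messaging protocol works this simply). Once a second 'escrow' copy of the key exists, anyone holding it, lawful or otherwise, reads the traffic as easily as the intended recipient:

```python
import secrets

def xor(data: bytes, key: bytes) -> bytes:
    """Toy XOR cipher - stands in for real end-to-end encryption."""
    return bytes(b ^ k for b, k in zip(data, key))

message = b"meet at noon"
key = secrets.token_bytes(len(message))  # shared only by the two ends
ciphertext = xor(message, key)

# Without the key, a server relaying the message learns nothing useful.
assert ciphertext != message

# A 'back door' is, in effect, an extra copy of the key held by a third party.
escrow_key = key
# Whoever obtains that copy - an agency, or a criminal who steals it -
# recovers the plaintext exactly as the intended recipient would.
assert xor(ciphertext, escrow_key) == message
```

This is the crux of the dispute: there is no known way to build a key that only 'good' parties can use, so an escrowed key weakens the system for every user at once.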

Example

Former Home Secretary Amber Rudd (since replaced by Sajid Javid) was particularly vocal about the subject, and pressed for a back door to be built in to WhatsApp and other encrypted messaging services after the London terror attacks in 2017, and after it was discovered that terrorist Khalid Masood, who killed four people outside parliament, had used WhatsApp a few minutes before he launched his attack.

Also, an assessment by the UK’s National Crime Agency (NCA) earlier this year said that encryption impacts how effective law enforcement organisations can be in gathering intelligence and collecting evidence. This is particularly topical in the UK now, since Facebook recently refused to give the login details of a murder suspect to police, who are investigating the murder of Lucy McHugh.

Threats From The Five Eyes

The Five Eyes are reported to have warned that if the tech industry does not voluntarily establish lawful access to their products (e.g. back doors), they may pursue enforcement via legislative or other measures in order to guarantee entry.

The Five Country Ministerial (FCM) has also concluded that the industry needs to implement functions that prevent illicit and harmful content from being uploaded in the first place, and build user safety into the design of all online platforms.

What Does This Mean For Your Business?

While it sounds reasonable and understandable that law enforcement and intelligence services would like to be able to have access to encrypted apps and services in the interests of national security in fighting terrorism and reducing crime, building in back doors to encryption means that it’s no longer encrypted and secure. These ‘back-doors’ could also, therefore, be accessed by cyber-criminals, thus causing a security threat to millions of users, most of whom aren’t terrorists or criminals. A security breach (e.g. using a back-door) could also cause major damage to the app / service-providing company in fines, lost customers/revenue and bad publicity.

There is also an argument that the privacy of users of currently encrypted apps and services could be compromised in a ‘big brother’ style way as governments and intelligence agencies are given carte blanche to snoop, and are unlikely to be transparent about just what they are snooping on. Many privacy campaigners feel that we already have enough surveillance e.g. CCTV and the power granted by the Investigatory Powers Act (aka the ‘Snoopers Charter’).

Tech companies have good commercial and other reasons for not budging in their stance, while governments can also provide convincing arguments for the building of back-doors. As with so many other powerful private companies such as the tech companies, it may take the threat of (or actual) imposed regulation and legislation to make them give any ground in an argument that is likely to run further yet.

New Australian Law Gets The Thumbs-Down From Tech Firms

In Australia, a new draft Bill proposing ways for tech firms, software developers and others to assist security agencies and police has been given the thumbs-down by a major industry group over its ambiguity, and the potential security risks it could create.

What Bill?

The new “Telecommunications and Other Legislation Amendment (Assistance and Access) Bill 2018” is a Bill for an Act to amend the law relating to telecommunications, computer access warrants and search warrants, and for ‘other purposes’.

The bill proposes that a ‘technical assistance request’ may be given to a tech company (e.g. a social media or chat app company), asking that provider to offer ‘voluntary’ help in the form of ‘technical assistance’ to the Australian Secret Intelligence Service or an ‘interception agency’, with a view to enforcing / helping to enforce the criminal law, protecting the public revenue, and / or acting in the interests of Australia’s national security, foreign relations, or economic well-being.

What Kind of Technical Assistance?

In essence, those who have interpreted and reacted publicly to the contents of the bill have taken it to mean that as part of the Australian government’s fight against the criminal use of encrypted communications (end-to-end encryption), tech firms will be asked to build weaknesses / ‘back doors’ into their products/ services that will enable government monitoring.

For example, the UK government (under then Home Secretary Amber Rudd) were seeking ‘back door’ access to encrypted apps such as Facebook’s WhatsApp on the grounds that terror suspects were known to have used it for communication prior to the Westminster attack. At the time, WhatsApp refused to co-operate on the grounds that end-to-end encryption prevented even its own technicians from reading people’s messages.

WhatsApp has also been blocked three times in Brazil for failing to hand over information relating to criminal investigations.

Worked In Germany

Presumably and ideally, the new bill would be used in Australia in the same way as in Germany, where the encrypted communications app ‘Telegram’ reportedly had a back door built into it that allowed law enforcement agencies to access messages, enabling them to foil a planned suicide attack on a Christmas market in 2016.

Digi Objects

The loudest critic of the new Bill in Australia has been the Digital Industry Group (known as ‘Digi’), whose members include Facebook, Google and Twitter. Their main arguments against the bill are that it is ambiguous, that it lacks judicial oversight, and that building back doors for government agencies into encrypted services would also create access for criminals to exploit. Big social media tech firms say, for example, that building such vulnerabilities into their services could not only leave the majority of their customers open to attack for the sake of catching a minority, but could also undermine the essential trust in their services.

What Does This Mean For Your Business?

Privacy, security and freedom from unnecessary surveillance are valued by individuals and businesses alike, but national security is also an issue, and one that affects the wider economy. The bill from the Australian government is the latest in a long line of similar requests that the big tech companies are facing from governments around the world. The conundrum, however, is the same. Tech companies are private businesses whose services allow users to share personal data, and they need the trust of their users that privacy and security will be preserved, yet governments would like access to the private conversations, hopefully just for national security purposes. Also, once a back door is built in to an end-to-end encrypted service, it is no longer really secure, and all users could potentially be at risk. Bills suggesting that help by tech firms would be ‘voluntary’ also imply that failure to comply voluntarily would have negative consequences for tech firms (e.g. fines).

As freedom and privacy groups would point out, there is also some mistrust over government motives for accessing more of our private conversations and details, and in the wake of the Facebook / Cambridge Analytica scandal, for example, there are questions about just who else our details, private conversations and opinions could be shared with, and how they could be used. It is also a fact that governments tend not to like communications tools and currencies (e.g. Bitcoin) that they can’t access, control, or regulate.

The ‘big brother’ element to bills like these worries citizens in all countries, and some tech companies, which are certainly not blameless themselves (e.g. over user tracking and data sharing activities), are likely to try to hold out for as long as possible against publicly being seen to co-operate with any wide-scale government surveillance.

Social Mapper Can Trace Your Face

Trustwave’s SpiderLabs has created a new penetration testing tool that uses facial recognition to trace your face through all your social media profiles, link your name to it, and identify which organisation you work for.

Why?

According to its (ethical) creators, Trustwave’s SpiderLabs, Social Mapper has been designed to help penetration testers (those tasked with conducting simulated attacks on a computer system to aid security) and red teamers (ethical hackers) to save time and expand target lists in the intelligence gathering phase of creating the social media phishing scenarios that are ultimately used to test an organisation’s cyber defences.

What Does It Do?

Social Mapper is an open source intelligence tool that employs facial recognition to correlate social media profiles across a number of different sites on a large scale. The software automates the process of searching the most popular social media sites for names and pictures of individuals in order to accurately detect and group a person’s presence. The results are then compiled in a report that can be quickly viewed and understood by a human operator.

How Does It Work?

Social Mapper works in 3 phases. Firstly, it is provided with names and pictures of people, e.g. via links in a CSV file, images in a folder, or via people registered to a company on LinkedIn.

Secondly, in a time-consuming phase, it uses a Firefox browser to log in to social media sites and search for its targets by name. When it finds the top results, it downloads profile pictures and uses facial recognition checks to try to find a match. The social media sites it searches are LinkedIn, Facebook, Twitter, Google+, Instagram, VKontakte, Weibo, and Douban.

Finally, it generates a report of the results.
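The facial recognition check in the second phase boils down to comparing face-embedding vectors and accepting the closest candidate under a distance threshold. The sketch below is a simplified illustration of that idea, not Social Mapper’s actual code: it assumes embeddings have already been extracted by a face recognition model (dlib-style models produce 128-dimensional vectors, with ~0.6 commonly used as the match threshold), and toy 3-dimensional vectors stand in for real embeddings.

```python
import math

# Two face embeddings are usually considered the same person when the
# Euclidean distance between them falls below a threshold (~0.6 for
# dlib-style 128-d embeddings).
MATCH_THRESHOLD = 0.6

def euclidean_distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def best_match(target_embedding, candidate_embeddings):
    """Return (index, distance) of the closest candidate profile picture,
    or None if no candidate is within the match threshold."""
    scored = [
        (i, euclidean_distance(target_embedding, emb))
        for i, emb in enumerate(candidate_embeddings)
    ]
    index, distance = min(scored, key=lambda pair: pair[1])
    return (index, distance) if distance < MATCH_THRESHOLD else None

# Toy vectors standing in for real 128-d face embeddings.
target = [0.1, 0.2, 0.3]
candidates = [[0.9, 0.9, 0.9], [0.12, 0.21, 0.29], [0.5, 0.5, 0.5]]
print(best_match(target, candidates))  # closest candidate is index 1
```

In the real tool, each downloaded profile picture would be run through the face recognition model to produce a candidate embedding, and any profile scoring under the threshold would be recorded as a match for the report.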

What’s The Report Used For?

The report is designed to give the user a starting point to target individuals on social media for phishing, link-sharing, and password-snooping attacks.

For example, a user can create fake social media profiles to ‘friend’ targets and send them links to credential-capturing landing pages or downloadable malware; trick users into disclosing their emails and phone numbers (e.g. using vouchers and offers to tempt them into phishing traps); create custom phishing campaigns for each social media site; or even physically look at photos of employees to find access card badges or to study aspects of building interiors.

What Does This Mean For Your Business?

In the right hands, Social Mapper sounds as though it could ultimately help businesses to improve their online security, because it helps to create much better quality and more realistic testing scenarios on a larger scale that could uncover loopholes and shortcomings that current testing may not be able to find.

The worry, however, is that in the wrong hands it could be used by cyber-criminals to quickly gather information about a target business and its employees, thereby enabling potentially very effective phishing and password-snooping campaigns to be created. This detailed information could also be shared among and sold to other criminals, which could mean that individuals could be subjected to a number of attacks over time through multiple channels.

The obvious hope is, therefore, that enough checks and security measures will be put in place by its creators to prevent the software from falling into the wrong hands in the first place and being used by criminals against the very businesses and organisations that it was designed to help.

Microsoft To Launch App-Testing Sandbox ‘InPrivate Desktop’ Feature

It has been reported that Microsoft is to launch InPrivate Desktop for a future version of Windows 10, a kind of throwaway sandbox that gives Admins a secure way to operate one-time tests of any untrusted apps / software.

Like A Virtual Machine

Although the new feature is still a bit hush-hush, and has actually been removed from the Windows 10 Insider programme, it is believed to act like a kind of in-box, speedy VM (virtual machine) that is refreshed for re-use after it has been used to test a particular app.

Why?

The reason for the new feature, in the broader sense, is that it fits with moves announced by Microsoft in June 2017 to introduce next-generation security features to Windows 10.

ATP & WDAG

Back in June 2017, Microsoft specifically mentioned the integration of Windows Defender Advanced Threat Protection (ATP) as one of the next-generation security measures. ATP, for example, was designed to isolate and contain the threat if a user on a corporate network accidentally downloaded malicious software via their browser.

A security feature that some commentators have likened InPrivate Desktop to, and that was also specifically mentioned last June, is Windows Defender Application Guard (WDAG). Interestingly, WDAG isolates potential malware and exploits downloaded via a user’s browser and contains the threat using virtualisation-based security.

Spec Needed For InPrivate Desktop

Although the exact details of InPrivate Desktop are sketchy, we know that it is likely to be aimed at enterprises rather than individual users and that, as such, it is likely to need a reasonable spec to operate. It has been reported that, in order to run the new feature / app, at least 4GB of RAM, at least 5GB of free disk space, and two CPU cores will be needed.
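For Admins wondering whether existing machines would clear those reported minimums, the check is a simple comparison. The sketch below is a minimal illustration only: the figures are the reported ones, not final published requirements from Microsoft, and the helper name is our own.

```python
# Reported minimums for InPrivate Desktop (per early coverage;
# Microsoft has not published final requirements).
MIN_RAM_GB = 4
MIN_FREE_DISK_GB = 5
MIN_CPU_CORES = 2

def meets_reported_minimums(ram_gb, free_disk_gb, cpu_cores):
    """Compare a machine's resources against the reported minimums."""
    return (ram_gb >= MIN_RAM_GB
            and free_disk_gb >= MIN_FREE_DISK_GB
            and cpu_cores >= MIN_CPU_CORES)

# On Linux, the current machine's own figures can be read from the
# standard library, e.g.:
#   ram_gb       = os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES") / 1024**3
#   free_disk_gb = shutil.disk_usage("/").free / 1024**3
#   cpu_cores    = os.cpu_count()
print(meets_reported_minimums(8, 120, 4))   # True
print(meets_reported_minimums(2, 120, 4))   # False: below 4GB of RAM
```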

When?

There is also still some speculation as to exactly when the InPrivate Desktop feature will make it to Windows 10. Some commentators have noted that it may not make it into Windows 10 ‘Redstone 5’, and that it looks likely to be rolled out in a subsequent Windows 10 update which has been codenamed 19H1.

What Does This Mean For Your Business?

With support stopping for previous versions of Windows, and with all of us being forced into using Windows 10’s SaaS model, it makes sense that Microsoft adds more features to protect users, particularly businesses.

Adding malicious code to apps is a method increasingly used by cyber-criminals to sneak under the radar, and having a secure space to test and isolate dubious / suspect apps will give Admins an extra tool to protect their organisation from evolving cyber-threats. It is extra-convenient that the testing feature / app sandbox will already be built into Windows 10.