Your Latest IT News Update

Facial Recognition In The Classroom

A school in Hangzhou, capital of the eastern province of Zhejiang, is reportedly using facial recognition software to monitor pupils and teachers.

<More>

Data Breach Fine For UK University

The Information Commissioner's Office (ICO) has imposed a fine of £120,000 on the University of Greenwich for a data breach that left the personal details of thousands of students exposed online.

<More>

TalkTalk Super Router Security Fears Persist

An advisory notice from software and VR company IndigoFuzz has highlighted the continued potential security risk posed by a vulnerability in the WPS feature of TalkTalk's Super Router.

<More>

BYODs Linked To Security Incidents

A study by SME card payment services firm Paymentsense has shown a positive correlation between bring your own device (BYOD) schemes and increased cyber-security risk in SMEs.

<More>

Slack ‘Actions’

Chat app 'Slack' has announced the introduction of a new 'Actions' feature that makes it easier for users to create and finish tasks without leaving the app, by giving them access to more third-party tools.

<More>

Tech Tip – Enable ‘Do Not Track’ In Microsoft Edge

If you want the added privacy of asking websites not to track you while you browse, without having to switch to Edge's InPrivate mode, here's how to enable 'Do Not Track' in Microsoft Edge:

<More>

Facial Recognition In The Classroom

A school in Hangzhou, capital of the eastern province of Zhejiang, is reportedly using facial recognition software to monitor pupils and teachers.

Intelligent Classroom Behaviour Management System

The facial recognition software is part of what has been dubbed the "intelligent classroom behaviour management system". The system is reportedly intended to supervise both the students' learning and the teachers' teaching.

How?

The system uses cameras to scan classrooms every 30 seconds. These cameras are part of a facial recognition system that is reported to be able to record students’ facial expressions, and categorize them into happy, angry, fearful, confused, or upset.

The system, which acts as a kind of ‘virtual teaching assistant’, is also believed to be able to record students’ actions such as writing, reading, raising a hand, and even sleeping at a desk.

The system also measures levels of attendance by using a database of pupils’ faces and names to check who is in the classroom.

As well as giving the school an extra means of monitoring pupils, the system may also motivate pupils to modify their behaviour to suit the rules of the school and the expectations of staff.

Teachers Watched Too

In addition to monitoring pupils, the system has also been designed to monitor the performance of teachers in order to provide pointers on how they could improve their classroom technique.

Safety, Security and Privacy

One other reason why these systems are reported to be increasing in popularity in China is to provide greater safety for pupils by recording and deterring violence and questionable practices at Chinese kindergartens.

In terms of privacy and security, the vice principal of the Hangzhou No.11 High School is reported to have said that the privacy of students is protected because the technology doesn’t save images from the classroom, and stores data on a local server rather than on the cloud. Some critics have, however, said that storing images on a local server does not necessarily make them more secure.

Inaccurate?

If the experience of the facial recognition software used by UK police forces is anything to go by, there may be questions about the accuracy of what the Chinese system records. For example, following an investigation by campaign group Big Brother Watch, the UK's Information Commissioner, Elizabeth Denham, recently said that the police could face legal action if concerns over accuracy and privacy with facial recognition systems are not addressed.

What Does This Mean For Your Business?

There are several important aspects to this story. Many UK businesses already use their own internal CCTV systems as a softer way of monitoring and recording staff behaviour, and as a way to modify that behaviour, i.e. simply through staff knowing they're being watched. Employees could argue that this is intrusive to an extent, and that a more positive way of encouraging the right kind of behaviour would be to pair such monitoring with a system that rewards good behaviour and good results.

Using intelligent facial recognition software could clearly have a place in many businesses for monitoring customers / service users e.g. in shops and venues. It could be used to enhance security. It could also, as in the school example, be used to monitor staff in any number of situations, particularly those where concentration is required and where positive signals need to be displayed to customers. These systems could arguably increase productivity, improve behaviour and reduce hostility / violence in the workplace, and provide a whole new level of information to management that could be used to add value.

However, it could be argued that using these kinds of systems in the workplace could make people feel as though ‘big brother’ is watching them, could lead to underlying stress, and could have big implications where privacy and security rights are concerned. It remains to be seen how these systems are justified, regulated and deployed in future, and how concerns over accuracy, cost-effectiveness, and personal privacy and security are dealt with.

Data Breach Fine For UK University

The Information Commissioner's Office (ICO) has imposed a fine of £120,000 on the University of Greenwich for a data breach that left the personal details of thousands of students exposed online.

What Happened?

The breach was discovered back in February 2016, but actually dates back to 2004 and concerns a microsite that was made for a training conference. In the incident, which the University attributed to "unauthorised access to some data on the university's systems", the personal details of around 96,000 students were accidentally uploaded to the university's website, along with minutes from the university's Faculty Research Degrees Committee. The microsite on which the student details were left was never secured or closed down.

What was most shocking and distressing to many of those affected by the breach was the very personal nature of some of the data. For example, as well as the names, addresses, dates of birth, mobile phone numbers and even signatures of students, data concerning medical and other personal issues was also posted. Reports at the time indicated that, in some cases, information concerning the mental health and other medical problems of some students was mentioned to explain why they had fallen behind with their work. It was also reported that comments about the students' progress, and even emails between staff and students, were revealed.

Made Without The University’s Knowledge

It has been reported that the main reason that the breach was not noticed earlier is that the training microsite was made by one of the University’s departments without the knowledge of the University, which is the data controller.

Fine

Bearing in mind the seriousness and nature of the breach, and the number of people affected, the ICO has imposed a fine of £120,000, reduced to £96,000 for early payment. It is understood that the University will not appeal against the decision.

Changes Made

The ICO saw no need for enforcement action in this case because the University of Greenwich is reported to have made a number of changes to upgrade security. These include investing in new security architecture, tools and technologies, hiring new dedicated internal security experts, conducting daily vulnerability testing across the entire organisation, making information security training mandatory for all staff, reforming the system of internal IT governance, and developing a rapid incident response capability to tackle threats as they arise and learn from incidents.

What Does This Mean For Your Business?

Even though this incident dates back many years to a time when online security was given less priority by many businesses and organisations, it is an illustration of how things can easily slip through the net with regards to security, particularly in larger organisations and / or where full checks / audits are not carried out and where there is no clear line of responsibility for data matters e.g. data controllers and DPOs.

This story is particularly timely given the introduction of GDPR on Friday, and should be another reminder to companies that, as well as the distress caused to victims of breaches, the ICO will take breaches seriously and can impose stiff penalties.

In this case, the University (which had also suffered another high profile data breach after this one) took the opportunity to seriously upgrade its security, and this will no doubt go a long way to making it GDPR compliant, as all businesses now need to be in order to retain the trust of customers, maintain supplier relationships, protect the business reputation, avoid fines, and deter and protect against attacks by cyber-criminals.

TalkTalk Super Router Security Fears Persist

An advisory notice from software and VR company IndigoFuzz has highlighted the continued potential security risk posed by a vulnerability in the WPS feature of TalkTalk's Super Router.

What Vulnerability?

According to IndigoFuzz, the WPS connection is insecure because the WPS pairing feature is permanently switched on in the router, even when the WPS pairing button is not used.

This could mean that an attacker within range could potentially hack into the router and steal the router’s Wi-Fi password.

Tested

It has been reported that, in tests involving consenting parties, IndigoFuzz successfully used a method of probing the router to steal the Wi-Fi passwords of multiple TalkTalk Super Routers.

The test involved using a Windows-based computer, wireless network adapter, a TalkTalk router within wireless network adapter range, and the software ‘Dumpper’ available on Sourceforge. Using this method, the Wi-Fi access key to a network could be uncovered in a matter of seconds.

Scale

The ease with which the Wi-Fi access key could be obtained in the IndigoFuzz tests has prompted speculation that the vulnerability could be on a larger scale than was first thought, and a large number of TalkTalk routers could potentially be affected.

No Courtesy Period Before Announcement

When a vulnerability is discovered and reported to a vendor, normal protocol is to allow the vendor 30 days to address the problem before those who discovered it announce it publicly.

In this case, the vulnerability had already been reported to TalkTalk back in 2014 and remained unfixed, so IndigoFuzz chose to issue its advisory immediately.

Looks Bad After The October 2015 Breach

News that a vulnerability has remained unpatched four years after it was reported to TalkTalk looks bad on top of the major cyber-attack and security breach the company suffered back in October 2015. You may remember that the much-publicised cyber-attack on the company resulted in an estimated loss of 101,000 customers (some have suggested the true figure was twice as high). The attack saw the personal details of between 155,000 and 157,000 customers (reports vary) hacked, with approximately 10% of these customers having their bank account number and sort code stolen.

The trading impact of the security breach in monetary terms was estimated to be £15M with exceptional costs of £40-45M.

What Does This Mean For Your Business?

It seems inconceivable that a widely reported vulnerability that could potentially affect a large number of users may still not have been addressed after 4 years. Many commentators are calling for a patch to be issued immediately in order to protect TalkTalk customers. This could mean that many home and business customers are still facing an ongoing security risk, and TalkTalk could be leaving itself open to another potentially damaging security problem that could impact its reputation and profits.

Back in August last year, the Fortinet Global Threat Landscape Report highlighted the fact that 9 out of 10 businesses are being hacked through unpatched vulnerabilities, many of which are 3 or more years old and already have patches available. This should remind businesses to stay up to date with their own patching routines as a basic security measure.

Last year, researchers revealed how the 'Krack' method could take advantage of the WPA2 standard used across almost all Wi-Fi devices to potentially read messages, see banking information and intercept sensitive files (if a hacker was close to a wireless connection point and the website doesn't properly encrypt user data). This prompted fears that hackers could turn their attention to what may be fundamentally insecure public Wi-Fi points in, for example, shopping centres / shops, airports, hotels, public transport and coffee shops. This could in turn generate problems for businesses offering Wi-Fi.

BYODs Linked To Security Incidents

A study by SME card payment services firm Paymentsense has shown a positive correlation between bring your own device (BYOD) schemes and increased cyber-security risk in SMEs.

BYOD

Bring your own device (BYOD) schemes / policies have now become commonplace in many businesses, with the BYOD and enterprise mobility market projected to grow from USD 35.10 billion in 2016 to USD 73.30 billion by 2021 (marketsandmarkets.com).

BYOD policies allow employees to bring in their personally owned laptops, tablets and smartphones and use them to access company information and applications, and to solve work problems. This type of policy has also fuelled a rise in 'stealth IT', where employees go outside of IT and set up their own infrastructure without organisational approval or oversight, and can, therefore, unintentionally put corporate data and service continuity at risk.

Positive Correlation Between BYOD and Security Incidents

The Paymentsense study, involving more than 500 SMEs polled in the UK, found a positive correlation between the introduction of a BYOD policy and cyber-security incidents. For example, 61% of the SMEs said that they had experienced a cyber-security incident since introducing a BYOD policy.

According to the study, although only 14% of micro-businesses (up to 10 staff) reported a cyber-security incident since implementing BYOD, the figure rises to 70% for businesses of 11 to 50 people, and to 94% for SMEs with 101 to 250 employees.

Most Common Security Incidents

The study showed that the most common types of security incident in the last 12 months were malware, which affected two-thirds (65%) of SMEs, followed by viruses (42%), distributed denial of service (DDoS) attacks (26%), data theft (24%), and phishing (23%).

Positive Side

The focus of the report was essentially the security risks posed by BYOD. There are, however, some very positive reasons for introducing a BYOD policy in the workplace. These include convenience, cost saving (company devices and training), harnessing the skills of tech-savvy employees, perhaps finding new, better and faster ways of getting work done, improved morale and employee satisfaction, and productivity gains.

Many of these benefits are, however, inward-focused i.e. on the company and its staff, rather than the wider damage that could be caused to the lives of data breach victims or to the company’s reputation and profits if a serious security incident occurred.

What Does This Mean For Your Business?

This is a reminder that, as well as the benefits of BYOD to the business, if you allow employees or other users to connect their own devices to your network, you will be increasing the range of security risks that you face. This is particularly relevant with the introduction of GDPR on Friday.

For example, devices belonging to employees but containing personal data could be stolen in a break-in or lost while away from the office. This could lead to a costly and public data breach. Also, allowing untrusted personal devices to connect to SME networks or using work devices on untrusted networks outside the office can put personal data at risk.
Ideally, businesses should ensure that personal data is either not on the device in the first place, or has been appropriately secured so that it cannot be accessed in the event of loss or theft e.g. by using good access control systems and encryption.

Businesses owners could reduce the BYOD risk by creating and communicating clear guidelines to staff about best security practices in their daily activities, in and out of the office. Also, it is important to have regular communication with staff at all levels about security, and having an incident response plan / disaster recovery plan in place can help to clarify responsibilities and ensure that timely action is taken to deal with situations correctly if mistakes are made.

Slack ‘Actions’

Chat app 'Slack' has announced the introduction of a new 'Actions' feature that makes it easier for users to create and finish tasks without leaving the app, by giving them access to more third-party tools.

What Is Slack?

Slack, launched way back in 2013, is a Silicon Valley-produced, cloud-based set of proprietary team collaboration tools and services. It provides mobile apps for iOS, Android, Windows Phone, and is available for the Apple Watch, enabling users to send direct messages, see mentions, and send replies.

Slack teams enable users (communities, groups, or teams) to join through a URL or invitation sent by a team admin or owner. It was intended as an organisational communication tool, but it has gradually been morphing into a community platform i.e. it is a business technology that has crossed over into personal use.

In March 2018, Slack and financial and human capital management firm Workday formed a partnership that allowed Workday customers to access features from directly within the Slack interface. Slack is believed to have 8 million daily active users.

What Is ‘Actions’ and How Does It Help?

The new tool / feature, dubbed 'Actions', will bring enterprise developers deeper into Slack, because it allows deeper integration with enterprise software from third-party software providers e.g. Jira, HubSpot, and Asana.

Slack knows that many users now like to choose what software they use to get their job done, and the Actions feature will, therefore, be of extra value to the 90% of Slack's 3 million paid users who regularly use apps and integrations.

Actions can be accessed using a click or tap of any Slack message, require no slash commands, and are being made available to all developers using the platform to deploy bots and integrations. To begin with, Actions will be displayed based on what individuals use most frequently.

What Does This Mean For Your Business?

If you or your business uses Slack, the interoperability that results from integration with third-party software means that you have greater choice in what software you use to complete your tasks without having to leave Slack. This offers time and cost-saving benefits, as well as a considerable boost in convenience.

Slack knows that there are open source and other alternatives out there, and the addition of Actions will help Slack to provide more valuable tools to users, thereby helping it to retain loyalty and compete in a rapidly evolving market.

Tech Tip – Enable ‘Do Not Track’ In Microsoft Edge

If you want the added privacy of asking websites not to track you while you browse, without having to switch to Edge's InPrivate mode, here's how to enable 'Do Not Track' in Microsoft Edge:

– For Microsoft Edge, click on the three horizontal dots at the top right.

– Click on ‘Settings’ at the very bottom.

– Click on ‘View advanced settings’ at the bottom.

– Scroll down to the Privacy and Services section, and toggle on the ‘Send Do Not Track requests’ option.

– This should mean that all HTTP and HTTPS requests sent by the browser will include a ‘Do Not Track’ header.
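Under the hood, the setting simply attaches a 'DNT' header to every outgoing request (the W3C convention is 'DNT: 1' for opting out); whether a website honours it is entirely up to that site. As a rough sketch (the helper function and header values here are illustrative, not real browser or server code), a server-side check for the header might look like this:

```python
# Sketch: detect the "Do Not Track" signal in incoming request headers.
# "DNT: 1" means the user has opted out of tracking (W3C convention).

def respects_dnt(headers: dict) -> bool:
    """Return True if the client has asked not to be tracked."""
    # HTTP header names are case-insensitive, so normalise first.
    normalised = {k.lower(): v for k, v in headers.items()}
    return normalised.get("dnt") == "1"

# A request sent with the Edge setting enabled would carry the header:
with_dnt = {"User-Agent": "Mozilla/5.0", "DNT": "1"}
without_dnt = {"User-Agent": "Mozilla/5.0"}

print(respects_dnt(with_dnt))     # True
print(respects_dnt(without_dnt))  # False
```

Note that this is a preference signal only: a tracking script that ignores the header will still track the user, which is why 'Do Not Track' should not be mistaken for a security feature.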

Your Latest IT News Update

AI Drones: Smaller and Smarter

Researchers from ETH Zurich in Switzerland and the University of Bologna have built the smallest completely autonomous quadrotor nano-drone yet, one that uses AI to fly itself and doesn't need human guidance.

<More>

Police Face Recognition Software Flawed

Following an investigation by campaign group Big Brother Watch, the UK's Information Commissioner, Elizabeth Denham, has said that the police could face legal action if concerns over accuracy and privacy with facial recognition systems are not addressed.

<More>

Efail – Encryption Flaw

A German newspaper has released details of a security vulnerability, discovered by researchers at Munster University of Applied Sciences, in PGP (Pretty Good Privacy) data encryption.

<More>

Fewer Shop Visits Due To Digital. But More Spending

British Retail Consortium (BRC) figures show that footfall in retail stores fell by 3.3% in April 2018 compared to last year, marking a further shift in consumer behaviour towards digital adoption.

<More>

Handy Location Tracker

A peanut-shaped, hand-held, smart, long-range tracking device called LynQ has been launched that can tell you how far and in what direction your friends are, all without the need for a data connection, and without monthly fees.

<More>

Tech Tip – Play Almost Any File Format

If you sometimes have trouble opening and playing certain file formats e.g. for videos, free and open-sourced VLC software makes it easy to play almost any file format you throw at it.

<More>

AI Drones: Smaller and Smarter

Researchers from ETH Zurich in Switzerland and the University of Bologna have built the smallest completely autonomous quadrotor nano-drone yet, one that uses AI to fly itself and doesn't need human guidance.

Neural Network

The technology at the heart of the Crazyflie 2.0 Nano Quadcopter is the DroNet neural network. This processes incoming images from a camera at 20 frames per second. From these, the nano-drone works out how to steer and calculates the probability of a collision, giving it the ability to know when to stop.
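As a rough sketch of how these two network outputs could drive the drone (this is illustrative logic only, not the actual DroNet code; the constants and function names are assumptions for the example), the collision probability can throttle forward speed while the steering prediction is smoothed across frames:

```python
# Illustrative control step: map one frame's neural network outputs
# (a steering angle and a collision probability) to velocity commands.
# V_MAX and ALPHA are assumed values for the sake of the example.

V_MAX = 1.0    # maximum forward speed (m/s), assumed
ALPHA = 0.7    # low-pass filter weight for smooth steering, assumed

def control_step(steer_pred, coll_prob, prev_steer):
    """Turn one frame's network outputs into (forward_v, steer) commands."""
    # Slow down as the predicted collision probability rises;
    # at coll_prob = 1.0 the drone stops entirely ("knows when to stop").
    forward_v = (1.0 - coll_prob) * V_MAX
    # Low-pass filter the steering so the drone doesn't jitter
    # between consecutive 20 fps predictions.
    steer = ALPHA * prev_steer + (1.0 - ALPHA) * steer_pred
    return forward_v, steer

# Example: mild left turn with a 25% collision probability.
v, s = control_step(steer_pred=0.4, coll_prob=0.25, prev_steer=0.0)
print(v, s)
```

The key design point is that the network never outputs motor commands directly: it outputs interpretable quantities (steering and collision risk) that a simple, predictable controller turns into motion.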

Fully On-Board Computation

Because all computation is carried out fully on-board thanks to the PULP (Parallel Ultra Low Power) platform, the drone needs no external sensing or computing. This makes it truly autonomous, and therefore a real first in terms of how a small drone can be controlled.

The new autonomous version is an improvement on the first test version, which involved putting the DroNet neural network in a larger commercial off-the-shelf Parrot Bebop 2.0 drone and using radio contact with a laptop to control it.

Trained Using Images

Since AI requires training so that it can learn to become better at a task, the drone’s neural network was trained using thousands of images taken from bicycles and cars driving along different roads.

Only Horizontal Movement

One major drawback at the current time is that, because it was trained using images from a single plane, the drone can only move horizontally and cannot yet fly up or down.

Even Smaller

Technologies involved in making drones have evolved to such a degree that even a robot ‘fly’ has now been built.

As the successor to RoboBee, the so-called RoboFly is so small (the size of a fly) that it can't support the weight of a battery to power it. The power for flight is currently delivered by a laser trained on an attached photovoltaic cell.

The tiny device has wings that are flapped by sending a series of pulses of power in rapid succession and then slowing the pulsing down as it gets near the top of the wave (with the whole process in reverse for the downward flap).

The RoboFly, developed by a team of researchers at the University of Washington, can only just take off and travel a very short distance at present. Future plans for RoboFly reportedly include improving the on-board telemetry so that it can control itself, and making a steered laser that can follow the bug's movements and continuously beam power in its direction.

What Does This Mean For Your Business?

Up until now, the main uses for drones have been specialist applications such as within the military, in construction (viewing and mapping sites), film and TV, leisure use, and even for delivery of parcels (Amazon tests). All of these involve the use of larger drones that are remotely controlled.

The ideas that a drone can be made in a miniature size, and / or can control itself using AI could open up many more new areas of opportunity for businesses and other organisations. Such drones could be used in confined spaces or in very specialised situations.

The idea of an AI drone has, however, led to some alarm being expressed by some commentators. Even though AI autonomy could help drones to, for example, monitor environments, be used in surveillance, and develop swarm intelligence for military use, some have expressed worries that they could become better at delivering lethal payloads, and could pose other unforeseen security risks.

Police Face Recognition Software Flawed

Following an investigation by campaign group Big Brother Watch, the UK's Information Commissioner, Elizabeth Denham, has said that the police could face legal action if concerns over accuracy and privacy with facial recognition systems are not addressed.

What Facial Recognition Systems?

A freedom of information request sent to every police force in the UK by Big Brother Watch shows that the Metropolitan Police used facial recognition at the Notting Hill Carnival in 2016 and 2017 and at a Remembrance Sunday event, and that South Wales Police used facial recognition technology between May 2017 and March 2018. Leicestershire Police also tested facial recognition in 2015.

What’s The Problem?

The two main concerns with the systems (as identified by Big Brother Watch and the ICO) are that the facial recognition systems are not accurate in identifying the real criminals or suspects, and that the images of innocent people are being stored on 'watch' lists for up to a month, which could potentially lead to false accusations or arrests.

How Do Facial Recognition Systems Work?

Facial recognition software typically works by taking a scanned image of a person's face (from the existing stock of police mug shots from previous arrests) and then using algorithms to measure 'landmarks' on the face e.g. the position of features and the shape of the eyes, nose and cheekbones. This data is used to make a digital template of the person's face, which is then converted into a unique code.

High-powered cameras are then used to scan crowds. The cameras link to specialist software that can compare the camera image data to data stored in the police database (the digital template) to find a potential ‘match’. Possible matches are then flagged to officers, and these lists of possible matches are stored in the system for up to 30 days.
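The matching step described above can be sketched as comparing a numeric template extracted from the camera image against the stored templates, flagging any that fall within a distance threshold. This is a minimal illustration only: real systems use high-dimensional learned embeddings, and all the names, values and the threshold below are made up for the example.

```python
import math

# Hypothetical watch list: each entry maps a name to a short face
# template (illustrative "landmark" measurements, not real data).
WATCH_LIST = {
    "suspect_001": [0.31, 0.72, 0.18, 0.55],
    "suspect_002": [0.90, 0.12, 0.40, 0.66],
}
THRESHOLD = 0.1  # maximum distance still counted as a possible match

def distance(a, b):
    """Euclidean distance between two face templates."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def possible_matches(camera_template):
    """Return the watch-list entries close enough to flag to an officer."""
    return [name for name, tmpl in WATCH_LIST.items()
            if distance(camera_template, tmpl) <= THRESHOLD]

# A camera template very close to suspect_001's stored template:
print(possible_matches([0.30, 0.70, 0.20, 0.55]))  # ['suspect_001']
```

The sketch also shows why threshold choice matters: set it too loose and innocent faces end up on the 'possible match' list, which is exactly the accuracy concern raised below.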

A real-time automated facial recognition (AFR) system, like the one the police use at events, incorporates facial recognition and ‘slow time’ static face search.

Inaccuracies

The systems used by the police so far have been criticised for simply not being accurate. For example, of the 2,685 “matches” made by the system used by South Wales Police between May 2017 and March 2018, 2,451 were false alarms.
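Those figures translate into a strikingly high false alarm rate, which can be checked directly from the numbers reported above:

```python
# False alarm rate from the South Wales Police figures quoted above.
matches = 2685        # total "matches" flagged by the system
false_alarms = 2451   # flagged matches that turned out to be wrong

false_alarm_rate = false_alarms / matches
print(f"{false_alarm_rate:.1%}")  # 91.3%
```

In other words, more than nine in ten of the system's flagged 'matches' were wrong.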

Keeping Photos of Innocent People On Watch Lists

Big Brother Watch has been critical of the police keeping photos of innocent people that have ended up on lists of (false) possible matches, as selected by the software. Big Brother Watch has expressed concern that this could affect an individual’s right to a private life and freedom of expression, and could result in damaging false accusations and / or arrests.
The police have said that they don’t consider the ‘possible’ face selections as false positive matches because additional checks and balances are applied to them to confirm identification following system alerts.

The police have also stated that all alerts against watch lists are deleted after 30 days, and faces in the video stream that do not generate an alert are deleted immediately.

Criticisms

As well as accusations of inaccuracy and possibly infringing the rights of innocent people, the use of facial recognition systems by the police has also attracted criticism for not appearing to have a clear legal basis, oversight or governmental strategy, and for not delivering value for money in terms of the number of arrests made vs the cost of the systems.

What Does This Mean For Your Business?

It is worrying that there are clearly substantial inaccuracies in facial recognition systems, and that the images of innocent people could be sitting on police watch lists for some time, and could potentially result in wrongful arrests. The argument that ‘if you’ve done nothing wrong, you have nothing to fear’ simply doesn’t stand up if police are being given cold, hard computer information to say that a person is a suspect and should be questioned / arrested, no matter what the circumstances. That argument is also an abdication from a shared responsibility, which could lead to the green light being given to the erosion of rights without questions being asked. As people in many other countries would testify, rights relating to freedom and privacy should be valued, and when these rights are gone, it’s very difficult to get them back again.

The storing of facial images on computer systems is also a matter for security, particularly since they are regarded as ‘personal data’ under the new GDPR which comes into force this month.

There is, of course, an upside to the police being able to use these systems if it leads to the faster arrest of genuine criminals, and makes the country safer for all.

Despite the findings of a study from YouGov / GMX (August 2016) showing that UK people still have a number of trust concerns about the use of biometrics for security, biometrics represents a good opportunity for businesses to stay one step ahead of cyber-criminals. Biometric authentication / verification systems are thought to be far more secure than password-based systems, which is why banks and credit companies are now using them.

Facial recognition systems have value-adding, real-life business applications too. For example, last year, a ride-hailing service called Careem (similar to Uber but operating in more than fifty cities in the Middle East and North Africa) announced that it was adding facial recognition software to its driver app to help with customer safety.