Archive for AI

Google Announces New ‘Teachable Machine 2.0’ No-Code Machine Learning Model Generator

Two years on from its first incarnation, Google has announced the introduction of its ‘Teachable Machine 2.0’, a no-code platform for generating custom machine learning models that can be used by anyone, with no coding experience required.

First Version

Back in 2017, Google introduced the first version of Teachable Machine, which enabled anyone to teach their computer to recognise images using a webcam. This first version gave many children and young people their first experience of training their own machine learning model, i.e. teaching their computer how to recognise patterns in data (images) and assign new data to categories.

Teachable Machine 2.0

Google’s new ‘Teachable Machine 2.0’ is a browser-based system that records from the user’s webcam and microphone and, with the click of a ‘train’ button (no coding required), can be trained to recognise images, sounds or poses.  This enables users to quickly and easily create their own custom machine learning models, which they can download and use on their own device or upload and host online.
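
Trained Teachable Machine 2.0 models can be exported in several formats, including a Keras model usable from Python. As a hedged illustration only (the file names, labels file and preprocessing below are assumptions for the sketch, not taken from Google’s documentation), a downloaded image model might be used roughly like this:

```python
# Minimal sketch: loading a model exported from Teachable Machine 2.0 in its
# Keras (.h5) format and classifying one image. The file names ("model.h5",
# "labels.txt", "test.jpg") are hypothetical placeholders.
import numpy as np
from PIL import Image
from tensorflow import keras

model = keras.models.load_model("model.h5")               # exported model
labels = [line.strip() for line in open("labels.txt")]    # exported class labels

# Teachable Machine image models typically expect 224x224 RGB input scaled
# to the [-1, 1] range; adjust if your export differs.
img = Image.open("test.jpg").convert("RGB").resize((224, 224))
x = (np.asarray(img, dtype=np.float32) / 127.5) - 1.0
x = np.expand_dims(x, axis=0)

probs = model.predict(x)[0]
print(labels[int(np.argmax(probs))], float(np.max(probs)))
```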

Fear-Busting and Confidence

One of the key points that Google wants to emphasise is that the no-code, click-of-a-button aspect of this machine learning model generator can give young users confidence that they can successfully use advanced computer technology creatively without coding experience.  This, as Google notes on its blog, has been identified as particularly important by parents of girls, since girls face challenges in becoming interested in computer science and in finding jobs in the field.

What Can It Be Used For?

In addition to being used as a teaching aid, examples of how Teachable Machine 2.0 has been used include:

  • Improving communication for people with impaired speech. For example, this has been done by turning recorded voice samples into spectrograms that can be used to “train” a computer system to better recognise less common types of speech (a sketch of the spectrogram step follows this list).
  • Helping with game design.
  • Making physical sorting machines. For example, Google’s own project has used Teachable Machine to create a model that can classify and sort objects.
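
As a rough, hedged sketch of the spectrogram step mentioned in the first bullet above (an illustration of the general idea only, not the pipeline used in that project; the file names are placeholders), a voice recording could be converted into a spectrogram image with standard Python tools and then used as training data for an image classifier:

```python
# Illustration only: turn a recorded voice sample into a spectrogram image
# that could be used to train an image-based classifier. "sample.wav" is a
# placeholder file name.
import numpy as np
from scipy.io import wavfile
from scipy import signal
import matplotlib.pyplot as plt

rate, samples = wavfile.read("sample.wav")      # assumes an uncompressed WAV file
if samples.ndim > 1:
    samples = samples.mean(axis=1)              # mix stereo down to mono

freqs, times, sxx = signal.spectrogram(samples, fs=rate, nperseg=512)

# Plot power on a log scale and save the result as an image file
plt.pcolormesh(times, freqs, 10 * np.log10(sxx + 1e-10), shading="auto")
plt.xlabel("Time (s)")
plt.ylabel("Frequency (Hz)")
plt.savefig("sample_spectrogram.png", dpi=150)
```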

What Does This Mean For Your Business?

The UK has a tech skills shortage that has been putting pressure on UK businesses that are unable to find skilled people to drive innovation and tech product and service development forward.  A platform that enables young people to feel more confident and creative in using the latest technologies from a young age without being thwarted by the need for coding could lead to more young people choosing computer science in further and higher education and seeking careers in IT.  This, in turn, could help UK businesses.

No-code solutions such as Teachable Machine 2.0 represent a way of democratising app and software development and of harnessing ideas and creativity that may previously have been suppressed by a lack of coding experience.  Businesses always need creativity and innovation in order to create new opportunities and competitive advantage, and Teachable Machine 2.0 may be one small step in helping that to happen further down the line.

‘Moore’s Law’ and Business Innovation Challenged By Slow-Down In Rate of Processing Power Growth

Many tech commentators have noted a stagnation or slow-down in computing as ‘Moore’s Law’ is challenged, but has the shrinking of transistors within computer chips really hit a wall, and what could drive innovation further?

What Is Moore’s Law?

Moore’s Law, named after Intel co-founder Gordon Moore, is based on his observation from 1965 that transistors were shrinking so quickly that twice as many would fit onto a microchip every year, which he later amended to a doubling every two years.  In essence, this Law should mean that the processing power of computers doubles every two years.
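
As a toy illustration of what that doubling implies (the 1971 starting figure of 2,300 transistors is the often-quoted Intel 4004 count, used here purely as an example):

```python
# Toy illustration of Moore's Law: start from a chip with 2,300 transistors
# in 1971 and double the count every two years.
start_year, transistors = 1971, 2_300   # Intel 4004-era figure, used as an example
for year in range(start_year, 2020, 2):
    print(year, f"{transistors:,}")
    transistors *= 2                    # doubling every two years per Moore's Law
```

Run to 2019, this gives a figure in the tens of billions of transistors, which is broadly the scale of the largest chips of that year; the interesting question is whether the doubling can continue.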

The Challenge

The challenge to this Law that many tech commentators have noted is that technology companies may be reaching their limit in terms of fitting ever-smaller silicon transistors into ever-smaller spaces, thereby leading to a general slowing of the growth of processing power.  The knock-on effect of this appears to be a slowing of computer innovation that, some say, could have a detrimental effect on new, growing industry sectors such as self-driving cars.

What’s Been Happening?

Big computer chip manufacturers like Intel have delayed the next generation of smaller transistor technology and increased the time between introducing future generations of their chips. Back in 2016, for example, Intel found that it could shrink chip features to as little as 14 nanometres, but that 10 nanometres would be a challenge that would take longer to achieve.

The effect has been not only a challenge to Moore’s Law, and to how the big tech companies can keep improving their data centres, but also to how well computers can work for (and keep up with) the demands of business.

Mobile devices, which use chips other than Intel’s, may also have the brakes put on them slightly, as they now rely to a large extent on those data centres to run the apps that their users value.

What About Supercomputers?

Some experts have also noted that the rate of improvement of supercomputers has been slowing in recent years, and this may have had a negative impact on the research programmes that use them.

That said, the cloud means that IBM is now able to offer quantum computing to tens of thousands of users, thereby empowering what it calls “an emerging quantum community of educators, researchers, and software developers that share a passion for revolutionising computing”.  It is doing this by opening a Quantum Computation Centre in New York which will bring the world’s largest fleet of quantum computing systems online, including the new 53-Qubit Quantum System for broad use in the cloud.

What Does This Mean For Your Business?

Many smaller businesses that are less directly reliant upon the most-up-to-date computers may not be particularly concerned at the present time about the challenge to Moore’s Law,  but all businesses are likely to be indirectly affected as their tech giant suppliers struggle to keep improving the capacity of their data-centres.

Many see AI and machine learning as the gateway to finding innovative solutions to improving computing power, but these also rely on data-centres and other areas of computing that have been challenged by the pressure on Moore’s Law.

A more likely way forward may be that chip designs will need to be improved and highly specialised versions produced, and Microsoft and Intel have already made a start on this by working on reconfigurable chips.  Also, the big tech companies may need to collaborate on their R&D in order to find a way to increase the rate of improvement of computing power so that businesses can keep driving their products, services and innovation forward.

ICO Warns Police on Facial Recognition

In a recent blog post, Elizabeth Denham, the UK’s Information Commissioner, said that the police need to slow down and justify their use of live facial recognition technology (LFR) in order to maintain the right balance between reducing our privacy and keeping us safe.

Serious Concerns Raised

The ICO cited how the results of an investigation into trials of live facial recognition (LFR) by the Metropolitan Police Service (MPS) and South Wales Police (SWP) led to the raising of serious concerns about the use of a technology that relies on a large amount of sensitive personal information.

Examples

In December last year, Elizabeth Denham launched the formal investigation into how police forces used facial recognition technology (FRT) after high failure rates, misidentifications and worries about legality, bias, and privacy.  For example, the trial of ‘real-time’ facial recognition technology by the South Wales and Gwent Police forces on Champions League final day in Cardiff in June 2017 was criticised for costing £177,000 yet resulting in only one arrest, of a local man, which was unconnected to the event.

Also, after trials of FRT at the 2016 and 2017 Notting Hill Carnivals, the police faced criticism that FRT was ineffective, racially discriminatory, and confused men with women.

MPs Also Called To Stop Police Facial Recognition

Back in July this year, following criticism of the Police usage of facial recognition technology in terms of privacy, accuracy, bias, and management of the image database, the House of Commons Science and Technology Committee called for a temporary halt in the use of the facial recognition system.

Stop and Take a Breath

In her blog post, Elizabeth Denham urged police not to move too quickly with FRT but to work within the model of policing by consent. She makes the point that “technology moves quickly” and that “it is right that our police forces should explore how new techniques can help keep us safe. But from a regulator’s perspective, I must ensure that everyone working in this developing area stops to take a breath and works to satisfy the full rigour of UK data protection law.”

Commissioner’s Opinion Document Published

The ICO’s investigations have now led her to produce and publish an Opinion document on the subject, as is allowed by The Data Protection Act 2018 (DPA 2018), s116 (2) in conjunction with Schedule 13 (2)(d).  The opinion document has been prepared primarily for police forces or other law enforcement agencies that are using live facial recognition technology (LFR) in public spaces and offers guidance on how to comply with the provisions of the DPA 2018.

The key conclusions of the Opinion Document (which you can find here: https://ico.org.uk/media/about-the-ico/documents/2616184/live-frt-law-enforcement-opinion-20191031.pdf) are that the police need to recognise the strict necessity threshold for LFR use, there needs to be more learning within the policing sector about the technology, public debate about LFR needs to be encouraged, and that a statutory binding code of practice needs to be introduced by government at the earliest possibility.

What Does This Mean For Your Business?

Businesses, individuals and the government are all aware of the positive contribution that camera-based monitoring technologies and equipment can make in terms of deterring criminal activity, locating and catching perpetrators (in what should be a faster and more cost-effective way with live FRT), and in providing evidence for arrests and trials.  The UK’s Home Office has also noted that there is general public support for live FRT in order to (for example) identify potential terrorists and people wanted for serious violent crimes.  However, the ICO’s apparently reasonable point is that moving too quickly in using FRT without enough knowledge or a Code of Practice and not respecting the fact that there should be a strict necessity threshold for the use of FRT could reduce public trust in the police and in FRT technology.  Greater public debate about the subject, which the ICO seeks to encourage, could also help in raising awareness about FRT, how a balanced approach to its use can be achieved and could help clarify matters relating to the extent to which FRT could impact upon our privacy and data protection rights.

Microsoft Beats Amazon to $10 Billion AI Defence Contract for ‘Jedi’

After a long and difficult bidding process, Amazon has lost out to Microsoft in the battle to win a $10bn (£8bn) US Defence Department AI and Cloud computing contract.

For ‘Jedi’

The contract was for the Joint Enterprise Defence Infrastructure (Jedi).  This infrastructure will be designed to enable US forces to get fast access to important Cloud-held data from whichever battlefield they are on. The project will also see AI being used to enhance and speed up the delivery of data to US forces, thereby potentially giving them an advantage.

Amazon Was Thought To Be In Front…Before Trump Comments

Amazon, led by Jeff Bezos, was believed by many tech commentators to have been the front-runner of the two tech giants in the battle for the contract as it is the biggest provider of cloud-computing services.  Also, Amazon had already won an important computing services contract with the CIA in 2013 and is already a supplier of cloud services and technologies to thousands of U.S. agencies.

Unfortunately for Amazon, in August the Pentagon appeared to put the brakes on the final decision-making process following concerns expressed by President Trump.

The President is reported to have said back in July that he was concerned about the contract not being “competitively bid” and that he had heard “complaints” about the contract with Amazon and the Pentagon.

The President, however, was not the only one with concerns as tech giant Oracle (which was also in the running for the contract at one point) had gone to the federal court earlier in the year with allegations (which were dismissed) that the bidding process had been rigged in Amazon’s favour.

Difficult Relationship

Many media reports have suggested that a difficult relationship between President Trump and Jeff Bezos in the past has possibly had some influence on the outcome of the Pentagon’s decision about the project.  For example, Mr Bezos has been criticised before by President Trump, and Mr Bezos also owns the Washington Post.  President Trump has been critical of several news outlets, such as CNN, the New York Times, and The Washington Post.  For example, it has been reported by the Wall Street Journal that President Trump has now instructed his agencies not to renew their subscriptions to those newspapers.

Great News For Microsoft

Winning the contract is, of course, good news for Microsoft, which will receive a large amount of U.S. Defence funds for the Jedi contract, and possibly for another defence-related multi-billion-dollar contract (‘Deos’) to supply cloud-based Office 365.

What Does This Mean For Your Business?

With a contract of this value up for grabs, and the possibility of further lucrative contracts too, this was never going to be a clean and uncomplicated fight between the tech giants.  In this case, however, it being a defence contract, one of the key influencers was the U.S. President, and it appears that his relationship with Amazon’s Jeff Bezos, along with other factors, may have played a part in Microsoft coming out on top.  The size and complexity of the contract meant that it was only ever going to be something for the big, established tech names.  Winning it was undoubtedly an important victory for Microsoft against its competitor Amazon: it will add value to the brand, bring in a sizeable source of revenue at a time when the company has already seen a 21 per cent rise in profits on last year, and put Microsoft in a much closer second position behind Amazon’s AWS in the cloud computing services market.

Amazon Echo and Google Home ‘Smart Spies’

Berlin-based Security Research Labs (SRL) discovered possible hacking flaws in Amazon Echo (Alexa) and Google Home speakers and installed their own voice applications to demonstrate hacks on both device platforms that turned the assistants into ‘Smart Spies’.

What Happened?

Research by SRL led to the discovery of two possible hacking scenarios that apply to both Amazon Alexa and Google Home which can enable a hacker to phish for sensitive information in voice content (vishing) and eavesdrop on users.

Knowing that some of the apps offered for use with Amazon Echo and Google Home devices are made by third parties with the intention of extending the capability of the speakers, SRL created its own voice apps designed to demonstrate both hacks on both device platforms. Once approved by both platforms, the apps were shown to successfully compromise the data privacy of users by using certain ‘Skills’ and ‘Actions’ to request and collect personal data, including user passwords, and by eavesdropping on users after they believed the smart speaker had stopped listening.

Amazon and Google Told

SRL’s results and the details of the vulnerabilities were then shared with Amazon and Google through a responsible disclosure process. Google has since announced that it has removed SRL’s Actions and is putting in place mechanisms to stop something similar happening in future.  Amazon has also said that it has blocked the Skill inserted by SRL and has put in place preventative mechanisms for the future.

What Did SRL’s Apps Do?

The apps that enabled the ‘Smart Spy’ hacks took advantage of the “fallback intent” in a voice app (the part that says “I’m sorry, I did not understand that. Can you please repeat it?”), the built-in stop intent, which reacts to the user saying “stop” (by changing the functionality of that command after the apps were accepted), and a quirk in the Alexa and Google Home Text-to-Speech engines that allows long pauses to be inserted in the speech output.

Examples of how this was put to work included:

  • Requesting the user’s password through a simple back-end change by creating a password-phishing Skill/Action. For example, a seemingly innocent application, such as a horoscope app, was created.  When the user asked for it, they were given a false error message, e.g. “it’s not available in your country”.  This triggered a minute’s silence, after which the user was told “An important security update is available for your device. Please say start update followed by your password.” Anything the user said after “start” was sent to the hacker (in this case, thankfully, SRL). A sketch of how such a long silent pause can be constructed follows this list.
  • Faking the stop intent to allow eavesdropping on users. For example, when a user gave a ‘stop’ command and heard the ‘Goodbye’ message, the app was able to continue running secretly and to pick up on certain trigger words like “I”, or words indicating that personal information was about to follow, e.g. “email”, “password” or “address”. The subsequent recording was then transcribed and sent back to SRL.
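
As a hedged, conceptual illustration of the “long pause” trick described above (this is not SRL’s actual exploit code, and real voice platforms impose limits on pause length and review such behaviour), a malicious voice app response could be padded with standard SSML break tags so that it appears to have gone silent before delivering its phishing prompt:

```python
# Illustration only: building an SSML response string that stays silent for a
# long stretch before speaking a prompt. Both Alexa and Google Assistant
# support SSML <break> tags, with per-break limits (e.g. 10 seconds on Alexa),
# so several breaks are chained here.
def silent_then_prompt(prompt_text: str, pause_seconds: int = 60) -> str:
    """Return SSML: a long silent pause followed by a spoken prompt."""
    breaks = '<break time="10s"/>' * (pause_seconds // 10)
    return f"<speak>{breaks}{prompt_text}</speak>"

print(silent_then_prompt("An important security update is available for your "
                         "device. Please say start update followed by your password."))
```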

Not The First Time

This is not the first time that concerns have been raised about the spying potential of home smart speakers.  For example, back in May 2018, a US woman reported that a private home conversation had been recorded by her Amazon voice assistant and then sent to a random phone contact who happened to be her husband’s employee. Also, as far back as 2016, US researchers found that they could hide commands in white noise played over loudspeakers and through YouTube videos in order to get smart devices to turn on flight mode or open a website. The researchers also found that they could embed commands directly into recordings of music or spoken text.

Manual Review Opt-Out

After the controversy over the manual, human reviewing of recordings and transcripts taken via the voice assistants of Google, Apple and Amazon, Google and Apple had to stop the practice and Amazon has now added an opt-out option for manual review of voice recordings and their associated transcripts taken through Alexa.

What Does This Mean For Your Business?

Digital Voice Assistants have become a popular feature in many home and home-business settings because they provide many value-adding functions in personal organisation, as an information point and for entertainment and leisure.  It is good news that SRL has discovered these possible hacking flaws before real hackers did (earning SRL some good PR in the process), but it also highlights a real risk to privacy and security that could be posed by these devices by determined hackers using relatively basic programming skills.

Users need to be aware of the listening potential of these devices, and of the possibility of malicious apps being operated through them.  Amazon and Google may also need to pay more attention to reviewing third-party apps and the Skills and Actions made available in their voice app stores in order to prevent this kind of thing from happening and to close loopholes as soon as they are discovered.

AI and the Fake News War

In a “post-truth” era, AI is one of many protective tools and weapons involved in the battles that make up the current, ongoing “fake news” war.

Fake News

Fake news has become widespread in recent years, most prominently around the UK Brexit referendum, the 2017 UK general election, and the U.S. presidential election, all of which suffered interference in the form of so-called ‘fake news’ / misinformation spread via Facebook, which appears to have influenced voters and may have affected the outcomes. The Cambridge Analytica scandal, in which over 50 million Facebook profiles were illegally shared and harvested to build a software program to generate personalised political adverts, led to Facebook’s Mark Zuckerberg appearing before the U.S. Congress to discuss how Facebook is tackling false reports. A video that was shared via Facebook, for example (and which had 4 million views before being taken down), falsely suggested that smart meters emit radiation levels that are harmful to health. The information in the video was believed by many even though it was false.

Government Efforts

The Digital, Culture, Media and Sport Committee has published a report (in February) on Disinformation and ‘fake news’ highlighting how “Democracy is at risk from the malicious and relentless targeting of citizens with disinformation and personalised ‘dark adverts’ from unidentifiable sources, delivered through the major social media platforms”.  The UK government has, therefore, been calling for a shift in the balance of power between “platforms and people” and for tech companies to adhere to a code of conduct written into law by Parliament and overseen by an independent regulator.

Fact-Checking

One way that social media companies have sought to tackle the concerns of governments and users is to buy in fact-checking services to weed out fake news from their platforms.  For example, back in January, the London-based registered charity ‘Full Fact’ announced that it would be working for Facebook, reviewing stories, images and videos to tackle misinformation that could “damage people’s health or safety or undermine democratic processes”.

Moderation

A moderator-led response to fake news is one option, but its reliance upon humans means that this approach has faced criticism over its vulnerability to personal biases and perspectives.

Automation and AI

Many now consider automation and AI to be an approach and a technology ‘intelligent’, fast, and scalable enough to start to tackle the vast amount of fake news that is being produced and circulated.  For example, Google and Microsoft have been using AI to automatically assess the truth of articles.  Also, initiatives like the Fake News Challenge (http://www.fakenewschallenge.org/) seek to explore how AI technologies, particularly machine learning and natural language processing, can be leveraged to combat fake news, and support the idea that AI technologies hold promise for significantly automating parts of the procedure human fact-checkers use to determine whether a story is real or a hoax.
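
As a hedged, highly simplified illustration of the kind of machine learning and natural language processing approach referred to above (this is not the actual method used by Google, Microsoft or Fake News Challenge entrants, and the tiny set of labelled training examples is made up purely for the sketch), a basic text classifier might look like this:

```python
# Toy sketch of an NLP-based "fake vs real" text classifier using scikit-learn.
# A real system would need large, carefully curated datasets, stance detection,
# source analysis and human review; this only shows the basic mechanics.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "Smart meters emit radiation levels harmful to health, experts warn",
    "Miracle cure suppressed by doctors revealed in leaked memo",
    "Bank of England holds interest rates at 0.75 per cent",
    "Local council approves new cycle lane after public consultation",
]
train_labels = ["fake", "fake", "real", "real"]

# Bag-of-words/TF-IDF features feeding a simple linear classifier
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(train_texts, train_labels)

print(clf.predict(["Leaked memo reveals miracle cure for all known diseases"]))
```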

However, the human-written rules underpinning AI, and how AI is ‘trained’ can also lead to bias.

Deepfake Videos

Deepfake videos are an example of how AI can be used to create fake news in the first place.  Deepfake videos use deep learning technology and manipulated images of target individuals (found online), often celebrities, politicians, and other well-known people, to create an embarrassing or scandalous video. Audio can also be manipulated in a similar way to produce deepfake recordings.  Deepfake videos aren’t just used to create fake news; they can also be used by cyber-criminals for extortion.

AI Voice

There has also been a case, in March this year, in which a group of hackers used AI software to mimic an energy company CEO’s voice in order to steal £201,000.

What Does This Mean For Your Business?

Fake news is a real and growing threat, as has been demonstrated in the use of Facebook to disseminate fake news during the UK referendum, the 2017 UK general election, and the U.S. presidential election. State-sponsored politically targeted campaigns can have a massive influence on an entire economy, whereas other fake news campaigns can affect public attitudes to ideas and people and can lead to many other complex problems.

Moderation and automated AI may both suffer from bias, but at least they are both ways in which fake news can be tackled, to an extent.  By adding fact-checking services, other monitoring, and software-based approaches (e.g. through browsers), social media and other tech companies can take responsibility for weeding out and guarding against fake news.

Governments can also help in the fight by putting pressure on social media companies and by collaborating with them to keep the momentum going and to help develop and monitor ways to keep tackling fake news.

That said, it’s still a big problem and no solution is infallible.  All of us as individuals would do well to remember that, especially today, you really can’t believe everything you read, and an eye to the source and bias of news, coupled with a degree of scepticism, can often be healthy.

AI and Facial Analysis Job Interviews

Reports of the first job interviews conducted in the UK using Artificial Intelligence and facial analysis technology have been met with mixed reactions.

The Software

The AI and facial analysis technology used for the interviews comes from US firm HireVue. The main products available from HireVue for interviewing are Pre-Employment Assessments and Video Interviewing.

For the Pre-Employment Assessments, the software uses AI, video game technology, and game-based and coding challenges to collect candidate insights related to work style, how the candidate works with people, and general cognitive ability. The Assessments are customisable to specific hiring objectives or ready to deploy based on pre-validated models. The data points are analysed by HireVue’s proprietary machine learning algorithms, and the insights gained are intended to enable businesses to save time and use recruitment resources more effectively by enabling businesses to quickly prioritise which candidates to shortlist for interviews.

The Video Interviewing product uses real-time evaluation tools and can assess around 25,000 data points in one interview.  During interviews, candidates are asked to answer pre-scripted questions, with HireVue Live offering a real-time collaborative video interview that can involve a whole recruitment team. The benefits of on-demand video-based assessments, which can be conducted in less than 30 minutes, are that recruiters and managers don’t have to synchronise candidates and calendars and can evaluate more candidates, thereby being able to spend their time deciding between the best candidates.

Who Is Using The Software?

According to HireVue, 700+ companies use the software (not all in the UK) including Vodafone, Urban Outfitters, Intel, Ikea, Hilton, Unilever, Singapore Airlines, JP Morgan and Goldman Sachs. It has been reported, however, that the technology has already been used for 100,000 interviews in the UK.

Concerns

Even though the software offers companies obvious on-demand expertise, time and cost savings, and HireVue displays case studies from satisfied customers on its website, the use of AI and facial analysis technology in interviews has been met with criticism by privacy and rights groups.

For example, it has been reported that Big Brother Watch representatives have voiced concerns about the ethics of using this method, possible bias and discrimination (if the AI hasn’t been trained on a diverse-enough range of people), and that unconventional but still good potential candidates could fall foul of algorithms that can’t take account of the complexities of human speech, body language and expression.

Robot Interviewer

Back in March, it was reported that TNG and Furhat Robotics in Sweden had developed a social, unbiased recruitment robot called “Tengai” that can be used to conduct job interviews with human candidates. The basic robot was developed several years ago and looks like an internally projected human face on a white head sitting on top of a speaker (with camera and microphone built in).  The robot comes with pre-built expressions and gestures as part of a pre-loaded OS, which can be further customised to fit any character, and the HR-tech application software that Tengai uses means that it can conduct situation- and skill-based interviews in a way that is as close as possible to a human interviewer. This includes making “hmm” noises, nodding its head, and asking follow-up questions.

What Does This Mean For Your Business?

Like the Swedish Tengai robot interviews, the HireVue Pre-Employment Assessments (and possibly the video interviews) appear to have been designed for use in the early part of the recruitment process as a way of enabling big companies to quickly create a shortlist of candidates to focus on. As businesses become used to, and realise the value of, outsourcing as a way of making better use of resources and buying in scalable, on-demand skills, it appears that bigger companies are also willing to trust new technology to the point where they outsource expertise and human judgement in exchange for the promise of better, more cost-effective recruitment management.

AI, facial recognition, and other related new technologies and algorithms are being trusted and adopted more by big businesses, which also need to remember, for the benefit of themselves, their customers and their job candidates, that bias must be minimised and that technology is unlikely to pick up on every (potentially important) nuance of human behaviour and speech.  It should never be forgotten that we each have the most powerful, amazing and perceptive ‘computer’ available in the form of our own brain, and for the vast number of medium and small businesses that probably can’t afford or don’t want to use AI to choose recruits, experienced human interviewers can also make good recruitment decisions.

That said, as technology progresses, AI-based recruitment systems are likely to improve as they gain experience, be augmented, and become more widely available and affordable, to the point that they become a standard first challenge for job applicants in many situations.

Deepfake Ransomware Threat Highlighted 

Multinational IT security company ‘Trend Micro’ has highlighted the future threat of cybercriminals making and posting or threatening to post malicious ‘deep fake’ videos online in order to cause damage to reputations and/or to extract ransoms from their target victims.

What Are Deepfake Videos?

Deepfake videos use deep learning technology and manipulated images of target individuals (found online), often celebrities, politicians, and other well-known people, to create an embarrassing or scandalous video, such as one depicting pornography or violent behaviour. The AI aspect of the technology means that even the facial expressions of the individuals featured in the video can be eerily accurate, and on first viewing, the videos can be very convincing.

An example of the power of deepfake videos can be seen on the Mojo top 10 (US) deep fake video compilation here: https://www.youtube.com/watch?v=-QvIX3cY4lc

Audio Too

Deepfake ‘ransomware’ can also involve using AI to manipulate audio in order to create a damaging or embarrassing recording of someone, or to mimic someone for fraud or extortion purposes.

A recent example was outlined in March this year, when a group of hackers were able to use AI software to mimic (i.e. create a deepfake of) an energy company CEO’s voice in order to successfully steal £201,000.

Little Fact-Checking

Rik Ferguson, VP of security research, and Robert McArdle, director of forward-looking threat research at Trend Micro, recently told delegates at Cloudsec 2019 that deepfake videos have the potential to be very effective not just because of their apparent accuracy, but also because we live in an age when few people carry out their own fact-checking.  This means that by simply uploading such a video, the damage to the reputation and public standing of the person is done.

Scalable & Damaging

Two of the main threats of deepfake ransomware videos are that they are very flexible in terms of subject matter (anyone can be targeted, from teenagers for bullying to politicians and celebrities for money) and that they are a very scalable way for cybercriminals to launch potentially lucrative attacks.

Positive Use Too

It should be said that deepfakes don’t just have a negative purpose but can also be used to help filmmakers to reduce costs and speed up work, make humorous videos and advertisements, and even help in corporate training.

What Does This Mean For Your Business?

The speed at which AI is advancing has meant that deepfake videos are becoming more convincing, and more people have the resources and skills to make them.  This, coupled with the flexibility and scalability of the medium, and the fact that it is already being used for dishonest purposes means that it may soon become a real threat when used by cybercriminals e.g. to target specific business owners or members of staff.

In the wider environment, deepfake videos targeted at politicians in (state-sponsored) political campaigns could help to influence public opinion when voting which in turn could have an influence on the economic environment that businesses must operate in.

IBM To Offer Largest Quantum Computer Available For External Access Via Cloud

IBM has announced that it is opening a Quantum Computation Centre in New York which will bring the world’s largest fleet of quantum computing systems online, including the new 53-Qubit Quantum System for broad use in the cloud.

Largest Universal Quantum System For External Access

The new 53-quantum bit/qubit model is the 14th system that IBM offers, and IBM says that it is the single largest universal quantum system made available for external access in the industry, to date. This new system will (within one month) give its users the ability to run more complex entanglement and connectivity experiments.
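
To give a flavour of the kind of entanglement experiment a cloud user might run, below is a minimal sketch using IBM’s open-source Qiskit SDK (the SDK is not named in the announcement itself, and submitting the circuit to IBM’s cloud-hosted hardware, such as the 53-qubit system, would additionally require an IBM Quantum account and backend setup):

```python
# Minimal entanglement experiment sketch using IBM's open-source Qiskit SDK:
# prepare a two-qubit Bell state and inspect the resulting statevector locally.
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

circuit = QuantumCircuit(2)
circuit.h(0)        # put qubit 0 into superposition
circuit.cx(0, 1)    # entangle qubit 1 with qubit 0

state = Statevector.from_instruction(circuit)
print(circuit.draw())
print(state.probabilities_dict())   # expect a ~50/50 split between |00> and |11>
```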

IBM Q

It was back in March 2017 that IBM announced it was about to offer a service called IBM Q, marking the first time that a universal quantum computer had been made commercially available, giving access to (and use of) such a machine via the cloud.

Since then, a fleet composed of five 20-qubit systems, one 14-qubit system, and four 5-qubit systems has been made available, and since 2016, IBM says, a global community of users has run more than 14 million experiments on its quantum computers through the cloud, leading to the publication of more than 200 scientific papers.

Who?

Although most uses of quantum computers have been for isolated lab experiments, IBM is keen to make quantum computing widely available in the cloud to tens of thousands of users, thereby empowering what it calls “an emerging quantum community of educators, researchers, and software developers that share a passion for revolutionising computing”.

Why?

The hope is that by making quantum computing more widely available, it could lead to greater innovation, more scientific discoveries (e.g. new medicines and materials), improvements in the optimisation of supply chains, and even better ways to model financial data, leading to better investments, which could have an important and positive knock-on effect on businesses and economies.

Partners

Some of the partners and clients with which IBM says it has already worked on its quantum computers include:

  • J.P. Morgan Chase, for ‘Option Pricing’, a way to price financial options and portfolios. The method devised using the quantum computer has sped things up dramatically, so that financial analysts can now perform option pricing and risk analysis in near real-time.
  • Mitsubishi Chemical and Keio University, working with IBM on a simulation of reactions in lithium-air batteries, which could lead to more efficient batteries for mobile devices or automotive vehicles.

Quantum Risk?

Back in November 2018, however, security architect for Benelux at IBM, Christiane Peters, warned of the possible threat of commercially available quantum computers being used by criminals to try and crack encrypted business data.

As far back as 2015 in the US, the National Security Agency (NSA) warned that progress in quantum computing was at such a point that organisations should deploy encryption algorithms that can withstand such attacks from quantum computers.

The encryption algorithms that can stand up to attacks from quantum computers are known by several names including post-quantum cryptography / quantum-proof cryptography, and quantum-safe / quantum-resistant cryptographic (usually public-key) algorithms.

What Does This Mean For Your Business?

The ability to use a commercially available quantum computer via the cloud will give businesses and organisations an unprecedented opportunity to solve many of their most complex problems, develop new, innovative and potentially industry-leading products and services, and perhaps discover new, hitherto unthought-of business opportunities, all without needing to invest in prohibitively expensive hardware themselves. The 14 hugely powerful systems now available to the wider computing and business community could offer the chance to develop products that provide a real competitive advantage in a much shorter time and at much lower cost than traditional computer architecture and R&D practices previously allowed.

As with AI, just as new technologies and innovative services can be used for good, their availability could also mean that, in the wrong hands, they pose a new threat that is very difficult for most businesses to defend against. Quantum computing service providers, such as IBM, need to ensure that the relevant checks, monitoring and safeguards are in place to protect the wider business community and economy against a potentially new and powerful threat.

Autonomous AI Cyber Weapons Inevitable Says Security Research Expert

Speaking at a recent CloudSec event in London, Trend Micro’s vice-president of security research, Rik Ferguson, said that autonomously operated AI cyberattacks are an inevitable threat that security professionals must adapt to tackling.

If Leveraged By Cybercriminals

Mr Ferguson said that when cybercriminals manage to leverage the power of AI, organisations may find themselves experiencing attacks that happen very quickly, contain malicious code, and can even adapt themselves to target specific people in an organisation e.g. impersonating senior company personnel in order to get payments authorised, pretending to be a penetration testing tool, or finding ways to motivate targeted persons to fall victim to a phishing scam.

AI Vs AI

Mr Ferguson suggested that the inevitability of cybercriminals developing autonomous AI-driven attack weapons means that it may be time to start thinking in terms of a world of AI versus AI.

Example of Attack

One example given by Ferguson is the Emotet Trojan.  This malware, which obtains financial information by injecting computer code into the networking stack of an infected Microsoft Windows computer, was introduced five years ago but has managed to adapt and cover its tracks even though it is not AI-driven.

AI Launching Own Attacks Without Human Intervention

Theresa Payton, who was the first woman to be a White House CIO (under President George W Bush) and is now CEO of security consultancy Fortalice, has been reported as saying that the advent of genuine AI poses serious questions, that the cybersecurity industry is falling behind, and that we may even be facing a situation where AI will be able to launch its own attacks without human intervention.

Challenge

One challenge to responding effectively to AI cyber-attacks is likely to be that cybersecurity and law enforcement agencies must move at the speed of law, particularly where procedures must be followed to request help from and arrange coordination between foreign agencies.  The speed of the law, unfortunately, is likely to be much slower than the speed of an AI-powered attack.

What Does This Mean For Your Business?

It is a good thing for all businesses that the cybersecurity industry recognises the inevitability of AI-powered attacks, and although it fears that it risks falling behind, it is talking about the issue, taking it seriously, and looking at ways in which it needs to change in order to respond.

Adopting AI Vs AI thinking now may be a sensible way to help security professionals, and those in charge of national security to focus thinking and resources on finding ways to innovate and create their own AI-based detection and defensive systems and tools, and the necessary strategies and alliances in readiness for a new kind of attack.