Archive for AI

Facial Recognition For Border Control

It has been reported that the UK Home Office will soon be using biometric facial recognition technology in a smartphone app to match a user’s selfie against the image read from their passport chip, as a means of self-service identity verification for UK border control.

Dutch & UK Technology

The self-service identity verification ‘enrolment service’ system uses biometric facial recognition technology that was developed in partnership with WorldReach Software, an immigration and border management company, with support from Dutch contactless document firm ReadID.

Flashmark By iProov

Flashmark technology, which will be used to provide the biometric matching of a user’s selfie against the image read from a user’s passport chip, was developed by a London-based firm called iProov. The idea behind it is to be able to prove that the person presenting themselves at the border for verification is genuinely the owner of an ID credential, and not a photo, screen image, recording or doctored video.

Flashmark works by using a sequence of colours to illuminate a person’s face and the reflected light is analysed to determine whether the real face matches the image being presented.
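As a rough illustration of the challenge-and-response idea described above, the sketch below generates a one-time colour sequence and accepts a user only if the observed reflections match it. The function names and the colour-matching step are invented stand-ins, not iProov’s actual implementation:

```python
import random

# Illustrative sketch of the idea behind a colour-flash liveness check:
# the screen flashes a one-time random colour sequence at the user's face,
# and the reflected light picked up by the camera must match that sequence.
# A pre-recorded video or photo cannot predict the sequence in advance.
COLOURS = ["red", "green", "blue", "yellow"]

def generate_challenge(length=8, seed=None):
    """Produce a one-time colour sequence to flash at the user."""
    rng = random.Random(seed)
    return [rng.choice(COLOURS) for _ in range(length)]

def verify_reflection(challenge, observed):
    """Accept only if every observed reflected colour matches the challenge.

    In a real system, 'observed' would come from analysing camera frames;
    here it is simply a list of colour names standing in for that analysis.
    """
    return len(observed) == len(challenge) and all(
        c == o for c, o in zip(challenge, observed)
    )
```

Because the sequence is random and used only once, a replayed recording made against an earlier challenge fails the check.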

iProov is a big name in the biometric border-control technology world, having won the 2017 National Cyber Security Centre’s Cyber Den competition at CyberUK, and having won a contract from the US Department of Homeland Security (DHS) Science and Technology Directorate’s Silicon Valley Innovation Program. In fact, iProov was the first British and non-US company to be awarded a contract by the DHS to enable travellers to use self-service document checks at border crossing points.

Smartphone App

The new smartphone-based digital identity verification app from iProov has been developed to help support applications for the EU Settlement Scheme. This is the mechanism by which resident EU citizens, their family members, and the family members of certain British citizens can apply, on a voluntary basis, for the UK immigration status that they will need to remain in the UK beyond the end of the planned post-exit implementation period on 31 December 2020.

It is believed that the smartphone app will help the UK Home Office to deliver secure, easy-to-use interactions with individuals.

What Does This Mean For Your Business?

Accurate, secure, automated biometric identification and ID verification systems have many business applications and are becoming more popular. For example, iProov’s technology is already used by banks (such as ING in the Netherlands) and governments around the world, and banks such as Barclays already use voice authentication for telephone banking customers.

Biometrics are already used by the UK government. For example, under the biometric residence permit (BRP) system, those planning to stay longer than 6 months, or to apply to settle in the UK, need a biometric permit. This permit includes details such as name, date and place of birth, a scan of the applicant’s fingerprints and a digital photo of the applicant’s face (this is the biometric information), immigration status and conditions, and information about access to public funds (benefits and health services).

Many people are already used to using some biometric element as security on their mobile device, e.g. facial recognition, a fingerprint, or even Samsung’s iris scanner on its Note ‘phablet’. Using a smartphone-based ID verification app for border purposes is therefore not such a huge step, and many of us are used to having our faces scanned and matched with our passports anyway as part of UK border control’s move towards automation.

Smartphone apps offer obvious cost and time savings as well as convenience benefits, and biometrics provide a more reliable and secure verification system for services than passwords or paper documents. There are, of course, matters of privacy and security to consider, and as well as an obvious ‘big brother’ element, it is right that people should be concerned about where, and how securely, their biometric details are stored.

Facial Recognition For Buyers Of Alcohol & Cigarettes

A pilot scheme involving NCR, the US maker of self check-out machines for Asda, Tesco and other UK supermarkets, and Yoti’s digital identity app will use an integrated camera linked to facial recognition software to help improve, simplify and speed up age approval at self check-outs.

Speed & Frustration Reduction

The system is intended to tackle the frustration and delays caused when customers wait for approval when buying alcohol at self check-outs, to ease the challenges faced by supermarket employees who must judge a shopper’s age before accepting or refusing a sale of alcohol or cigarettes, and to help supermarkets stay on the right side of the law.

How Will The System Work?

An AI-equipped camera will be integrated in the vicinity of the checkout and the facial recognition software will use AI to help it estimate the age of shoppers when they are buying age-restricted items. The Yoti app does, however, require a customer to register their ID and face with the company beforehand.
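To illustrate the kind of decision logic such a checkout could apply, an age estimate well clear of the legal limit could be approved automatically, while borderline cases are still referred to staff. The threshold and margin below are hypothetical, not figures published by NCR or Yoti:

```python
# Hypothetical approval policy for an age-estimating checkout camera.
# A wide safety margin (akin to a 'Challenge 25' policy) compensates for
# the uncertainty inherent in AI age estimation.
LEGAL_AGE = 18
SAFETY_MARGIN = 7

def age_check(estimated_age, margin=SAFETY_MARGIN):
    """Approve only if the estimate clears the legal age by a safe margin."""
    if estimated_age >= LEGAL_AGE + margin:
        return "approve"
    return "refer to staff"
```

So a shopper estimated at 30 sails through, while one estimated at 20 still gets a staff check, keeping the retailer on the safe side of the law.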

What About Privacy and Data Security?

Wherever facial recognition software is used, there are always concerns about how the processing and storage of those images (that count as personal data under GDPR) is managed in terms of privacy and security. Yoti is reported to have said that its system will not retain any visual information about users after they have made a purchase.

Where and When?

There are no confirmed details as yet about exactly which supermarket(s) will be involved in the pilot, although some media reports appear to indicate that Tesco, Morrisons and Asda could be the most likely candidates for piloting the technology at some point later this year.

Face Scanning Used For Adverts

A face-scanning system, made by Lord Alan Sugar’s company Amscreen, is known to have been used already by Tesco at petrol station tills in order to target advertisements at customers depending on their estimated age.

What Does This Mean For Your Business?

Anything that reduces customer frustration, speeds up and simplifies the passage through tills, and leverages staff resources by saving employees from constantly having to go to different tills to approve purchases is likely to be good news for the supermarkets. If the system proves to be effective, accurate and successful, it could also be used in other age-restricted contexts, e.g. venue and event entry, the purchase of certain dangerous or restricted products, and the gambling industry.

While it may make perfect economic and practical sense for companies to use this kind of system, it could be a double-edged sword with some customers. For example, whereas some customers may see the practical and responsible side of the system, others may consider it an unnecessary intrusion with the potential to impact on their privacy and security.

Ubicoustics Overhears Everything You Do … And Understands

Researchers in the US have presented a paper describing a real-time activity-recognition system that interprets collected sounds and could well be used by home smart speakers.

Identify Other Sounds, and Issue Responses

Researchers at Carnegie Mellon University in the US claim to have discovered how the ubiquity of microphones in modern computing devices, combined with software that uses a device’s always-on built-in microphone, could be used to identify all sounds in a room, thereby enabling context-related responses from smart devices. For example, if a smart device such as an Amazon Echo were equipped with the technology and could identify the sound of a tap running in the background in a home, it could issue a reminder to turn the tap off.


The research project, dubbed ‘Ubicoustics’, identified how an AI / machine-learning-based sound-labelling model, drawing on sound effects libraries, could be linked to the microphone (as the listening element) of a smart device, e.g. smart-watches, computers, mobile devices, and smart speakers.

As Good As A Human

The sound-identifying, machine-learning model used in the research system was able to achieve human-level performance in recognition accuracy and false-positive rejection. The reported accuracy level of 80.4%, with around one sound in five misclassified, makes it comparable to a person trying to identify a sound.

As well as being comparable to other high-performance sound recognition systems, the Ubicoustics system has the added benefit of being able to recognise a much wider range of activities without site-specific training.


The researchers noted several possible applications of the system used in conjunction with smart devices, e.g. sending a notification when a laundry load finishes, promoting public health by detecting frequent coughs or sneezes, and enabling smart-watches to prompt healthy behaviours after tracking the onset of symptoms.

Privacy Concerns

The obvious worry with a system of this kind is that it could represent an invasion of privacy and could take eavesdropping to a new level, meaning that we could all be living in what is essentially a bugged house.

The researchers suggest a potential privacy protection measure could be to convert all live audio data into low resolution Mel spectrograms (64 bins), thereby making speech recovery sufficiently difficult, or simply running the acoustic model locally on devices so no audio data is transmitted.
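For those curious what that protection looks like in practice, here is a minimal sketch of reducing audio to a 64-bin log-mel spectrogram using plain NumPy. The frame sizes and sample rate are illustrative choices, not the researchers’ exact parameters:

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_mels=64, n_fft=512, sr=16000):
    """Triangular filters mapping FFT bins down to n_mels mel bands."""
    mels = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mels) / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(1, n_mels + 1):
        l, c, r = bins[i - 1], bins[i], bins[i + 1]
        for k in range(l, c):                 # rising slope of the triangle
            fb[i - 1, k] = (k - l) / (c - l)
        for k in range(c, r):                 # falling slope
            fb[i - 1, k] = (r - k) / (r - c)
    return fb

def low_res_mel_spectrogram(signal, sr=16000, n_fft=512, hop=256, n_mels=64):
    """Reduce audio to a 64-bin log-mel spectrogram. The fine spectral
    detail needed to reconstruct intelligible speech is discarded."""
    fb = mel_filterbank(n_mels, n_fft, sr)
    frames = []
    for start in range(0, len(signal) - n_fft + 1, hop):
        power = np.abs(np.fft.rfft(signal[start:start + n_fft])) ** 2
        frames.append(np.log(fb @ power + 1e-10))
    return np.array(frames).T   # shape: (n_mels, n_frames)

# One second of a 440 Hz tone as a stand-in for captured room audio.
t = np.arange(16000) / 16000.0
S = low_res_mel_spectrogram(np.sin(2 * np.pi * 440.0 * t))
```

The point is that only these coarse 64-band energy summaries (or a locally-run model’s labels) would ever leave the device, not the raw audio.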

What Does This Mean For Your Business?

The ability of a smart device to recognise all sounds in a room (as well as a person can) and to deliver relevant responses could be valued if used in a responsible, helpful, and not an annoying way. This doesn’t detract from the fact that having a device with these capabilities in the home or office could represent a privacy and security risk, and has more than a whiff of ‘big brother’ about it. Indeed, the researchers recognised that people may not want sensitive, fine-grained data going to third parties, and that operating a device with this system but without transmission of the data could provide a competitive edge in the marketplace.

Nevertheless, it could also represent new opportunities for customer service, diagnostics for home and business products / services, crime detection and prevention, targeted promotions, and a whole range of other possibilities.

New Tech Laws For AI Bots & Better Passwords

It may be no surprise to hear that California, home of Silicon Valley, has become the first state to pass laws to make AI bots ‘introduce themselves’ (i.e. identify themselves as bots), and to ban weak default passwords. Other states and countries (including the UK) may follow.

Bot Law

With more organisations turning to bots to help them create scalable, 24-hour customer services, together with the interests of transparency at a time when AI is moving forward at a frightening pace, California has just passed a law to make bots identify themselves as such on first contact. Also, in the light of the recent US election interference, and taking account of the fact that AI bots can be made to do whatever they are instructed to do, it is thought that the law has also been passed to prevent bots from being used to influence election votes or to incentivise sales.


The ability of Google’s Duplex technology to make the Google Assistant AI bot sound like a human and potentially fool those it communicates with is believed to have been one of the drivers for the new law being passed. Google Duplex is an automated system that can make phone calls on your behalf and has a natural-sounding human voice instead of a robotic one. Duplex can understand complex sentences, fast speech and long remarks, and is so authentic that Google has already said that, in the interests of transparency, it will build in the requirement to inform those receiving a call that it is from Google Assistant / Google Duplex.

Amazon, IBM, Microsoft and Cisco are also all thought to be in the market for highly convincing and effective automated agents.

Only Bad Bots

The new bot law, which won’t officially take effect until July 2019, is only designed to outlaw bots that are made and deployed with the intent to mislead people about their artificial identity for the purpose of knowingly deceiving them.

Get Rid of Default Passwords

The other recent tech law passed in California and making the news is one banning easy-to-crack but surprisingly popular default passwords, such as ‘admin’, ‘123456’ and ‘password’, in all new consumer electronics from 2020. In 2017, for example, the most commonly used passwords were reported to be 123456, password, 12345678 and qwerty (SplashData). ‘Admin’ also made number 11 on the list of the top 25 most popular passwords, and it is estimated that 10% of people have used at least one of the 25 worst passwords on the list, with nearly 3% having used the worst, 123456.

The fear is, of course, that weak passwords are a security risk anyway, and leaving easy default passwords in consumer electronics products and routers from service providers has been a way to give hackers easier access to the IoT. Devices that have been taken over because of poor passwords can be used to conduct cyber attacks e.g. as part of a botnet in a DDoS attack, without a user’s knowledge.

Password Law

The new law requires each device to come with a pre-programmed password that is unique to each device, and mandates any new device to contain a security feature that asks the user to generate a new means of authentication before access is granted to the device for the first time. This means that users will be forced to change the unique password to something new as soon as the device is switched on for the first time.
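A minimal sketch of that required behaviour might look like the following. The class and method names are invented for illustration and are not from any real firmware:

```python
import hashlib
import secrets

class Device:
    """Hypothetical smart device complying with the Californian law:
    a unique factory-set password, plus a forced change on first use."""

    def __init__(self):
        # Unique per-device default, generated at the factory.
        self.default_password = secrets.token_urlsafe(12)
        self._hash = self._digest(self.default_password)
        self.must_change = True   # forced change before first real use

    @staticmethod
    def _digest(pw):
        # Unsalted SHA-256 keeps the sketch short; real firmware should
        # use a salted, slow password hash such as bcrypt or Argon2.
        return hashlib.sha256(pw.encode()).hexdigest()

    def login(self, pw):
        if self._digest(pw) != self._hash:
            return "denied"
        return "change-password-required" if self.must_change else "ok"

    def set_password(self, old, new):
        """Replace the default credential; access is granted only after this."""
        if self._digest(old) != self._hash or new == old:
            return False
        self._hash = self._digest(new)
        self.must_change = False
        return True
```

The key property is that no two devices share a credential, and the factory credential cannot be kept indefinitely.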

What Does This Mean For Your Business?

For businesses using bots to engage with customers, if the organisation has good intentions, there should not be a problem with making sure that the bot informs people that it is a bot and not a human. As AI bots become more complex and convincing, this law may become more valuable. Some critics, however, see the passing of this law as another of the many reactions and messages being sent about interference by foreign powers, e.g. Russia, in US or UK affairs.

Stopping the use of default passwords in electrical devices and forcing users to change the password on first use of the item sounds like a very useful and practical law that could go some way to preventing some hackers from gaining easy access to and taking over IoT devices e.g. for use as part of a botnet in bigger attacks. It has long been known that having the same default password in IoT devices and some popular routers has been a vulnerability that, unknown to the buyers of those devices, has given cyber-criminals the upper hand. A law of this kind, therefore, must at least go some way in protecting consumers and the companies making smart electrical devices.

Businesses Set For Augmented Reality

A report based on research by IT Consultancy Group Capgemini has predicted a big shift towards the use of virtual reality and augmented reality by businesses over the next 3 years.

Mainstream Soon

The results of a survey of 700 business executives across multiple sectors show that 46% think that VR and AR technologies will become a major part of their organisation in the next 3 years. Nearly 40% also said that VR and AR would be mainstream in just 5 years.

Based on the findings of its survey, Capgemini thinks that half of all businesses not already using AR and VR technology will start using it as they accept the value-adding and cost-saving benefits that it brings.

Good Results, So Far

The report showed that 82% of businesses already using AR and VR tech said it is either exceeding or meeting their expectations in terms of enhancing productivity, efficiency and safety in the workplace.

The optimism and positive predictions for AR and VR being used by businesses are being driven not just by the positive reinforcement of those already using them, but also by the impressive evolution of immersive technology in a short space of time.


Some companies may be struggling to see how AR and VR could be applied to their businesses now unless it makes up part of a product, but tech commentators believe that some of the most popular areas where they will be used are in offering remote real-time support to customers and in training staff.


Two of the key challenges to the growth of the use of AR and VR by businesses in the UK are a shortage of skilled people (the UK has a tech skills gap) and a shortage of investors.

What Does This Mean For Your Business?

The results of the Capgemini survey show promise and optimism for AR and VR being used by businesses to add value and gain a competitive edge in the marketplace, in much the same way that AI is being embraced and is producing good results.
It is unfortunate that UK businesses are still facing a challenge to their use of technology for growth because of a skills gap that has been exacerbated by Brexit fears. As far as this challenge goes, the UK government, the education system and businesses need to continue to find ways to work together to develop a base of digital skills in the UK and to make sure that the whole tech eco-system finds effective ways to address the skills gap and keep the UK’s tech industries and businesses attractive and competitive. This can only help to boost AR and VR development in business.

It is also a shame that the UK, which wants to be a technology centre, is at a disadvantage in terms of investors compared to places such as the US and China. Capgemini suggests that UK businesses can meet this challenge by streamlining investment to seize the long-term growth potential of AR and VR technology. Also, Capgemini’s report suggests that in order to leverage the business value of AR and VR, UK companies should adopt a centralised governance structure, as well as proofs of concept that are aligned with business strategy, and that they should work on employee change management in order to be able to drive innovation in these new fields.

Microsoft Introduces AI Automated Audio and Video File Transcription

Microsoft’s new AI tool in OneDrive and SharePoint automatically transcribes the contents of video, audio, and image files, thereby making it much faster and easier to find specific topics and references made in those files.

No More Lengthy Transcribing

The growth of digital content, particularly in rich file types such as image, video, and audio files has made things particularly challenging when trying to search through them to find specific references, details, topics or quotes.

Up until now, it’s been a case of physically watching and listening, and transcribing the file into text to get what you want.

Also, if you need to track down lost screenshots, snapshots and receipts, or if you have to categorise images by keywording them, or if you’re trying to search for images relating to a certain subject, this too has been a time-consuming challenge, up until now.

Search Through Audio or Video By What’s Said

The new AI-based automatic transcription system that’s been added to OneDrive and SharePoint means that users can now search through audio or video by what’s said in the file, and users can quickly find images by conducting searches using keywords based on the content.

How Does It Work?

According to a post on the Microsoft website by Omar Shahine, Partner Director of Program Management for OneDrive and SharePoint, AI can be used to extract the content from an audio or video file, and provide a full transcript which is shown in a viewer, which supports over 320 different file types.

Where automatic photo transcripts are concerned, native, secure AI is used to determine where photos were taken, recognize objects, and extract text in photos and images.
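Conceptually, once transcripts exist, searching media ‘by what’s said’ reduces to keyword lookup over timestamped text segments. The sketch below uses invented file names and transcript data, not Microsoft’s actual API:

```python
# Toy index of machine-generated transcripts: each media file maps to a
# list of (timestamp_in_seconds, transcribed_text) segments. The data
# here is invented purely to illustrate the search step.
transcripts = {
    "meeting.mp4": [(12.5, "budget review for next quarter"),
                    (94.0, "action items and deadlines")],
    "interview.wav": [(3.0, "welcome to the show"),
                      (41.2, "the budget was the main obstacle")],
}

def search(keyword):
    """Return (file, timestamp, text) for every segment mentioning keyword."""
    hits = []
    for name, segments in transcripts.items():
        for ts, text in segments:
            if keyword.lower() in text.lower():
                hits.append((name, ts, text))
    return sorted(hits)
```

A search for “budget” would land the user at the right second of the right file, rather than forcing them to scrub through the recordings.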

What Does This Mean For Your Business?

With the web, email, text and chat apps now regularly used as part of business, with digital and rich-format files being favoured, displayed, shared and stored, and with the rise of collaborative online working, this new feature could prove very useful to users of OneDrive and SharePoint.

The many benefits it could bring include saving the cost and time of searching and manual transcription, helping to leverage existing content and improve productivity, improving accessibility, and making life a lot easier for anyone who regularly transcribes audio files, e.g. content writers, journalists and anyone involved with archiving and categorising different media types. It’s only a matter of time until other technology, e.g. facial recognition, is bolted on to features like this.

Also, for Microsoft this is a feature that can help it to compete in the collaborative working platform market.

Apple’s Autonomous Car Involved in Crash

Apple’s new autonomous vehicle, part of its ‘Project Titan’, has joined an expanding list of self-driving car prangs.

What Autonomous Vehicle?

Apple is reported to have been working on vehicle projects since 2015 under the name of ‘Project Titan’. Ever since the early reports, there has been much speculation about when an iCar will come onto the market.

The evidence that this would be likely came in the form of reports of hundreds of Apple employees working on a car project, hints during an interview with CEO Tim Cook back in June 2017, reports of two Apple computer scientists publishing research about a 3D detection system (that could be used in an autonomous car), and, in July this year, news that an ex-Apple employee had been charged with stealing trade secrets from Apple to take to a Chinese car start-up.

Apple is also reported to be working with VW on a driverless vehicle to shuttle its employees to and from work.

The Apple car involved in the recent accident is a Lexus SUV that is being used as part of the testing for its autonomous car project.

In Driverless Mode, But Rear-Ended

In this case, even though the Apple autonomous vehicle was in driverless mode at the time, the cause of the crash (last month) is thought to be that the driver of a Nissan Leaf rear-ended the Apple car while it was doing less than 1 mph, trying to find a safe space to merge onto Lawrence Expressway in California.

Most Autonomous Vehicle Crashes Caused By Humans

It’s tempting to think that testing autonomous vehicles on public roads is bound to result in crashes caused by faults with the technology. In fact, the statistics tell a different story and indicate that human error has been the main cause of accidents involving autonomous vehicles.

For example, Axios research shows that only 8% of these types of crashes were caused by a vehicle fault, and only one such crash happened when the vehicle was in autonomous mode. In fact, six out of seven accidents happened while a human was driving, and only one of a total of 57 accidents to date involving moving autonomous vehicles was caused by a fault with the AI. One more left-field statistic is that self-driving vehicles have actually been attacked by humans three times!

That said, and joking aside, it’s worth acknowledging that there has already been one fatality related to driverless cars. It happened when a woman was hit by a driverless car that was being tested by Uber while she was crossing the street in Tempe, Arizona.

What Does This Mean For Your Business?

Although we may not be entirely convinced yet, or used to the idea of cars, lorries, and even planes operating autonomously on our roads and above our heads, the fact is that all have been tested, and look likely to become a more regular reality. At this time, it is still relatively early days for autonomous vehicles which means that there are still many untapped opportunities to use autonomous vehicles commercially, and there are of course many challenges and issues to consider around safety, insurance, regulations and reliability. For the time being, autonomous vehicles are, therefore, likely to be adopted more quickly on closed sites but operators who decide to adapt such sites to work for autonomy could expect significant improvements in productivity and safety.

As the technology to operate these vehicles becomes more advanced, prices decrease, and technical and operational problems are ironed out, their potential to add value to businesses, organisations and cities, e.g. for distribution and logistics, public transport, and many other uses, will become apparent. They may also offer cost savings, greater reliability and easier management and planning, which are appealing benefits to businesses.

Diabetes Eye Disease Diagnosing System Needs No Doctors

An AI system for diagnosing eye disease caused by diabetes that has been approved for use in the US works autonomously and doesn’t need a doctor to interpret its results.

New Way To Solve Old Problem

Diabetic retinopathy, a leading cause of blindness among adults, is caused by high blood sugar levels damaging the blood vessels of the light-sensitive tissue at the back of the eye (the retina). The condition affects up to eight out of 10 people who have had diabetes for 10 years or more.

Given the extent of the problem, Google and DeepMind are reported to have been working on building machine-learning algorithms for detecting diabetic retinopathy for some time.

The new AI-based device from Iowa diagnostics company IDx LLC is the first FDA-approved AI system for diagnosing this particular eye disease.

No Doctors Required For Diagnosis

The system can be used to spot signs of mild diabetic retinopathy in scans of people’s retinas. This would normally be a job requiring human input, and as such, the new device is a first in eye care.

Although the system can diagnose the disease on its own, and therefore, doesn’t require a doctor’s input for diagnosis, it cannot recommend treatment plans, as this requires human doctors.

How Does It Work?

The system uses two convolutional neural networks.

The first one studies and analyses the image quality of retinal scans; from this, it can determine whether the focus, colour balance, and exposure are good enough to pass the photos to the diagnostic algorithm.

The second stage / network looks for common signs of damage related to the disease, e.g. haemorrhages from burst blood vessels, which may be caused by unstable blood sugar levels.

From these processes, the system is able to make a diagnosis.
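The two-stage gate described above can be sketched as follows, with trivial stand-in functions in place of the real convolutional neural networks. The thresholds and the ‘lesion signal’ heuristic are invented for illustration:

```python
# Hedged sketch of the two-stage screening pipeline: a quality gate
# decides whether a retinal image is usable, and only usable images
# reach the diagnostic stage. Images here are just lists of pixel
# intensities in [0, 1]; both stages are crude stand-ins for CNNs.

def quality_gate(image):
    """Stand-in for CNN #1: reject images that are too dark or too bright."""
    mean = sum(image) / len(image)
    return 0.2 < mean < 0.8

def diagnose(image):
    """Stand-in for CNN #2: flag images whose 'lesion signal' is high."""
    lesion_signal = max(image) - min(image)   # crude proxy for haemorrhages
    return "refer to specialist" if lesion_signal > 0.5 else "no retinopathy"

def screen(image):
    """Run the full pipeline: quality check first, then diagnosis."""
    if not quality_gate(image):
        return "retake image"
    return diagnose(image)
```

The design point is that the quality gate protects the diagnostic stage from images it was never trained to judge, which is why a poor photo yields “retake image” rather than an unreliable diagnosis.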

How Accurate Is It?

Given the complicated nature of the medical condition, the accuracy of the system has been tested (using 900 subjects) in terms of its sensitivity, specificity and imageability. The device is reported to have scored 87% sensitivity, i.e. identifying patients who have a mild version of the condition; 90% specificity, i.e. identifying those with no eye damage; and 96% imageability, i.e. producing images of high enough quality to achieve a diagnosis.
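For readers unfamiliar with the terminology, sensitivity and specificity fall straight out of a confusion matrix. The sketch below shows the arithmetic with invented counts, not IDx’s actual trial data:

```python
def sensitivity(true_positives, false_negatives):
    """Share of genuinely diseased patients the system flags."""
    return true_positives / (true_positives + false_negatives)

def specificity(true_negatives, false_positives):
    """Share of healthy patients the system correctly clears."""
    return true_negatives / (true_negatives + false_positives)

# Invented example: 87 of 100 diseased patients flagged -> 87% sensitivity;
# 90 of 100 healthy patients cleared -> 90% specificity.
```

Quoting both figures matters because either one alone can be gamed: a system that refers everyone scores 100% sensitivity with 0% specificity, and vice versa.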

What Does This Mean For Your Business?

AI is being incorporated in more value adding and innovative ways to solve many problems across all industries and sectors, and as such, represents an opportunity for those businesses developing devices and systems with an AI element.

Not only does this device perform an important part of a service that hitherto required expert human input, it also frees up time that the human expert would have spent on diagnosis, thereby allowing valuable medical resources to be extended and allocated elsewhere. This demonstrates how AI can add value, save time / costs, and allow more leverage to be gained from existing services.

We already trust devices / machinery to handle many important aspects of medical care, and with this in mind, there should be no real reason to mistrust the accuracy and fitness for purpose of this system, particularly given that it has been tested, and that there will be human input at the treatment plan stage that may help to spot any errors.

AI in medical care represents an important step into the future that could bring some incredible benefits.

AI, ML & ‘Robot’ Business Spending Will Hit $232bn by 2025 Says Report

A recent KPMG report claims that while business spending on artificial intelligence (AI), machine learning (ML) and robotic process automation (RPA) technologies is $12.4bn this year, it will increase to $232bn by 2025.

Ready, Set, Fail?

The report, entitled “Ready, set, fail? Avoiding setbacks in the intelligent automation race”, highlights how the potential of AI technology is already being examined by 37% of enterprises, and how its uptake is expected to accelerate over the next three years, with 49% of enterprises expected to use it at scale and 29% to use it selectively. Currently, 13% of enterprises are missing out altogether on the opportunity of using AI to add value to their business.

Can’t All Be Like Leading ‘Digital First’ Companies

The report accepts that while most businesses can’t realistically expect to be leading ‘digital first’ companies, such as Amazon with its one-click experience linked to a complex back-end and digital supply chain, they can make good ground from now on by acting quickly, understanding the need for urgency, and defining and executing a comprehensive AI strategy.

What Is Digital First?

A ‘digital first’ / digital-by-default approach involves giving priority to new media channels and technologies to improve the business by bringing it into line with the needs and behaviours of today’s consumers. It involves adopting a whole new way of looking at the business in order to add the necessary skills, and to change the culture and mindset, to make it more effective.

What Is Robotic Process Automation (RPA)?

While many of us are now familiar with the terms artificial intelligence (AI), and machine learning (ML), the report also focuses on ‘robotic process automation’ (RPA). This refers to an emerging form of business process automation technology that uses software robots or artificial intelligence (AI) workers.

Instead of software developers producing a list of actions to automate a task and interfacing with the back-end system using internal application programming interfaces (APIs) or a dedicated scripting language, RPA systems develop the action list by watching the user perform the task in the application’s graphical user interface (GUI), and then perform the automation by repeating those tasks directly in the GUI.
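The record-and-replay idea can be sketched as follows; the FormGUI class is an invented stand-in for a real application interface, which actual RPA tools would drive through the screen rather than in code:

```python
class FormGUI:
    """Invented stand-in for an application's graphical interface."""
    def __init__(self):
        self.fields = {}
        self.submitted = False

    def type_into(self, field, value):
        self.fields[field] = value

    def click_submit(self):
        self.submitted = True

class Recorder:
    """Watches the user work and stores each step as (action, args)."""
    def __init__(self):
        self.actions = []

    def type_into(self, field, value):
        self.actions.append(("type_into", (field, value)))

    def click_submit(self):
        self.actions.append(("click_submit", ()))

def replay(actions, gui):
    """Repeat the recorded steps directly against the GUI."""
    for name, args in actions:
        getattr(gui, name)(*args)

# A user fills in a form once while the recorder watches...
rec = Recorder()
rec.type_into("invoice_no", "INV-001")
rec.click_submit()
# ...and the software robot can now replay those steps on demand.
```

No API integration is written; the robot simply repeats what it saw, which is precisely why RPA can automate legacy applications that expose no API at all.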

Expectations High But Readiness Low

The KPMG report shows that even though managers’ expectations are high for AI use in their companies in the coming years, readiness to implement AI is low. The reasons for this include the fact that two-thirds of enterprises lack the in-house talent, and half of businesses are still struggling to define goals and objectives for AI.

Also, 33% of respondents in KPMG’s study said that management lacks readiness to implement AI because of concern over AI’s impact on employees.

Investment Available

According to the report, even though readiness is low, the investment needed for intelligent automation is available, and is expected to increase over the next 3 years, with 32% of organisations having approved more funding for robotic process automation, and 40% saying that they will increase spending on artificial intelligence by at least 20% over the next three years.

What Does This Mean For Your Business?

Artificial Intelligence holds many opportunities for businesses, and those businesses that have moved successfully to a digital first approach appear to be reaping the benefits in terms of competitive advantage and profitability in the modern marketplace.

There are many ways in which businesses can meet high marketplace expectations for AI. These include:

– Long-term planning with a sequence of steps, beginning with prioritised projects that can realise scale in one or two years, with the help of C-level buy-in and sponsorship. This can lead to a successful transformation built on new blueprints and architectures for operating models and business models.
– Taking a comprehensive and holistic approach to automating the service delivery model.
– Taking another look at the whole operating model and how AI can be best adopted and applied to the core business. This involves looking at the operational and technology infrastructure, organisational structure and governance, and people culture. This can be supported by measurement and incentive systems, and implemented in a way that causes minimum disruption to existing business processes.

Now You Can Search eBay Via A Photo

Ebay has launched Image Search in the UK, an AI-based technology that means you can now enter a photo into the search box to help find the product you’re looking for.

Smart Phone Camera Search

With so many of us now using smart-phones, this innovative new feature means that users can take a photo on their phone of a product they’re inspired by and interested in, and use the machine learning technology that’s been added to eBay’s 1.1 billion item catalogue to quickly search for that product.
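Image search features of this kind are typically built on learned image embeddings: a deep model maps each catalogue photo to a numeric vector, and a query photo is matched against its nearest catalogue vectors. eBay’s actual model is not public, so the sketch below is only an illustration of that matching step, with made-up three-dimensional embeddings standing in for a real model’s output.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def search(query_embedding, catalogue, top_k=3):
    """Rank catalogue items by visual similarity to the query photo."""
    scored = [(cosine_similarity(query_embedding, emb), item)
              for item, emb in catalogue.items()]
    scored.sort(reverse=True)
    return [item for _, item in scored[:top_k]]

# Made-up 3-dimensional embeddings; a real system would use vectors
# produced by a trained image model over the full item catalogue.
catalogue = {
    "red trainers": [0.9, 0.1, 0.0],
    "blue handbag": [0.1, 0.8, 0.3],
    "red dress":    [0.8, 0.2, 0.1],
}

query = [0.85, 0.15, 0.05]   # embedding of the user's photo
print(search(query, catalogue, top_k=2))
```

At eBay’s scale (a catalogue of over a billion items), the linear scan above would be replaced with an approximate nearest-neighbour index, but the ranking principle is the same.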

Technology Push at eBay

This latest addition to eBay’s search is part of a general push by eBay to bolt-on more technologies and forge alliances to increase the reach of its platform and to take the fight to competitors.

For example, eBay recently collaborated with ‘Mashable’, the worldwide media and entertainment company for culture and tech, so that an eBay widget could be introduced into Mashable. The widget allows Mashable’s audience to see and use a small eBay shop window overlaid on the page, populated by products that are featured in Mashable articles, thereby allowing people to instantly buy what they’re reading about. The benefit for eBay (according to eBay) is that eBay’s marketing team will be able to use it to better understand the factors that matter most to buyers making purchases off the eBay platform e.g. seller reputation and delivery time, and to use learned consumer insights from the pilot to deliver scalable solutions that accelerate eBay’s growth.

Smart Search Benefits

The sheer size of eBay’s catalogue means that it can sometimes take a long time for users to find the item they’re looking for, particularly if that item is very difficult to describe. Also, the watching and waiting aspect of eBay, its reputation as an auction site, and its lack of ability to actively engage have appeared to put it slightly at odds with a generation who simply want to quickly find what they’re looking for via their smart-phone, and purchase it. eBay also needed to find a way to get the most out of the vast number of user-generated images and item data that they’d accumulated through the years, and to capitalise on the instant product inspiration that people get e.g. from their social media feeds.

It is believed that the Image Search feature will be able to address all of these challenges, and will allow users to quickly find what they’re looking for while on the move. It may also encourage more sellers to take to the platform.

What Does This Mean For Your Business?

This is another illustration of how AI / machine learning is being put to practical and value-adding use as a medium for brand / company growth and user convenience. For retail businesses, such as those in fashion and apparel, this new feature could bring increased sales and brand recognition, and could help new lines to generate sales rapidly.

For eBay, this innovative search feature could kill several birds with one stone, helping to deliver the kind of scalable solutions that can accelerate eBay’s growth.

Visual search is a growing trend, particularly in retail e.g. ASOS, Zalando and John Lewis have adopted visual search into their apps to save customers time, to make themselves more socially discoverable, to drive up-sell activity, and to ultimately increase app revenue. Visual search technology is likely to find its way onto many more platforms, retail websites and apps yet.