
Rahul Rao

Introducing Iris

Updated: Jun 15, 2021

Can AI lighten the load on doctors by diagnosing disease automatically?
Can we improve primary food production by automatically recognising and treating invasive weeds?

We have previously discussed the many applications of machine learning across industries, such as retail sales forecasting and machinery failure prediction. To many, numbers and computers seem a natural fit: numerical operations follow well-defined rules, and following well-defined rules is something computers have excelled at from day one. Data - generally numbers or categories - that can be put into a table is known as structured data.


On the other hand, text, pictures, and audio - collectively termed unstructured data - are comparatively difficult for machines to understand. A human can look at a picture of a dog and know instantly that it is a dog, regardless of the lighting or the positioning of the dog relative to the camera. What rules can one develop for a computer to follow that will enable it to determine what is and what isn't a dog? Similarly, what rules can one develop to distinguish bank (of a river) from bank (the financial institution) in a piece of text? To put this shortcoming into perspective, Gartner estimates that 80% of enterprise data today is unstructured. Must we discard 80% of our data for want of a good algorithm?


An estimated 80% of enterprise data today is unstructured, making it difficult to apply machine learning to.

Deep learning says no! The latest advances in deep learning have blurred the boundary between structured and unstructured data, enabling us to put that 80% to good use. Using deep neural networks, computers can learn such rules from a set of labelled examples and apply them to new inputs in fractions of a second, detecting speech, translating between languages, conducting a conversation or identifying objects in images.
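To make that concrete, the sketch below shows supervised learning at its simplest: a small neural network that is given no explicit rules, only labelled examples, and learns its own. It uses Python with TensorFlow/Keras and the public MNIST handwritten-digit images as a stand-in for unstructured data; the dataset, network size and training settings are illustrative assumptions, and this is not Iris itself.

```python
# A minimal sketch of supervised deep learning: no hand-written rules,
# only labelled examples. Dataset and settings are illustrative only.
import tensorflow as tf

# Labelled examples: 28x28 greyscale images of handwritten digits (MNIST),
# a standard public stand-in for "unstructured" image data.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0  # scale pixels to [0, 1]

model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),  # one output per digit class
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

model.fit(x_train, y_train, epochs=3)   # learn the rules from labelled inputs
model.evaluate(x_test, y_test)          # apply them to inputs it has never seen
```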


When you say "Hey Siri, call John" or "OK Google, what's the weather today?", a trained machine learning model listens to your voice, extracts important features from the audio clip, and determines what words are in the clip to take the appropriate action. When text similarity engines such as Deep Blue AI's Elbrus determine that the sentences "I liked the pasta" and "The pasta was great" are similar, but "I liked the pasta" and "I liked the fish" are different, a machine learning model is computing word meanings in the background. These models have reached near human-level understanding in these fields.
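As a rough illustration of the general technique (this is not Elbrus, whose internals are not described here; the open-source library and model name below are assumptions chosen for demonstration), sentences can be mapped to vectors and compared with cosine similarity:

```python
# A generic embedding-based similarity example, not Deep Blue AI's Elbrus.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # a small public embedding model

sentences = ["I liked the pasta", "The pasta was great", "I liked the fish"]
embeddings = model.encode(sentences)  # one vector per sentence

# Cosine similarity: closer to 1 means closer in meaning.
print(util.cos_sim(embeddings[0], embeddings[1]))  # paraphrases -> higher score
print(util.cos_sim(embeddings[0], embeddings[2]))  # different meaning -> lower score
```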


Computer vision is another exciting field where computers are approaching human-level accuracy. The academic world abounds with research papers on new computer vision models, and the proliferation of GPUs in cloud computing makes ever more complex models possible. However, a lack of general understanding has, until recently, left industrial computer vision applications with untapped potential. Due in large measure to increasing public awareness, this exciting field is forecast to grow at 6.1% annually between 2020 and 2025.


The manufacturing and automotive sectors are expected to hold the lion's share of the market during this time, with increasing demand for picking and positioning of parts. Medical imaging is also expected to make use of AI-driven computer vision, using complex neural networks to diagnose illness or injuries through X-rays, CT scans or MRIs in hitherto-impossible timeframes.


Computer vision is approaching human-level accuracy in several fields.

Deep Blue AI is proud to announce the launch of Iris, its computer vision framework for image classification. Iris uses cutting-edge architectures and our experience in both research and industry to take computer vision from the lab to production in compressed timeframes. Iris has the power and the flexibility to consider the most minute details in images to determine which group they fall into. Read our case studies below on diagnosis of lung conditions using chest X-rays and on identification of weeds to see Iris in action.


Case Study: Diagnosing lung conditions using chest X-rays

This case study makes use of an open dataset comprising 3886 chest X-rays from three categories of patients:

  • Healthy individuals

  • Individuals diagnosed with viral pneumonia

  • Individuals diagnosed with COVID-19

A sample of images from each category is shown below:


Iris was exposed to 80% of this dataset to learn from and was evaluated on the unseen 20%.

Based on the unseen dataset, Iris correctly classifies 96.4% of images.


96.4% of images are correctly classified into normal, viral pneumonia, and COVID-19.
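For readers curious about the evaluation protocol, the sketch below shows the general hold-out approach described above: split the labelled data 80/20, train on the 80%, and score accuracy on the unseen 20%. It uses a generic scikit-learn classifier on synthetic data as a stand-in, since Iris's own API is not shown in this post.

```python
# A generic hold-out evaluation sketch; the classifier and synthetic data
# stand in for Iris and the X-ray images, which are not shown here.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for 3886 labelled images across three classes
# (normal / viral pneumonia / COVID-19).
X, y = make_classification(n_samples=3886, n_features=20, n_informative=8,
                           n_classes=3, random_state=0)

# 80% for learning, 20% held out and never seen during training.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
predictions = model.predict(X_test)
print(f"Accuracy on unseen data: {accuracy_score(y_test, predictions):.1%}")
```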

Currently, the lung conditions visible in a chest X-ray can only be identified by a medical professional trained in this area of medicine. While such professionals do exist, their time is valuable and does not scale - to get twice the output, twice as many medical professionals must be hired. With the COVID-19 pandemic overloading the healthcare systems of several badly affected countries, any reduction in workload for medical professionals leaves them more time to care for patients.


An artificial intelligence model that could use chest X-rays to accurately identify lung conditions has significant value. With the observed level of accuracy, Iris can either augment human judgement or replace it entirely.


Case Study: Identification of weeds from pictures

This case study makes use of an open dataset comprising pictures of plants in two categories:

  • Broad-leaved docks (a common species of invasive weed)

  • Different types of grass

A sample of images from both categories is shown below:


Iris was exposed to 90% of this dataset to learn from and was evaluated on the unseen 10%.

Based on the unseen dataset, Iris correctly classifies 94.8% of images.


94.8% of images are correctly identified as weeds or not weeds.

A machine that can distinguish between these two categories of images at this level of accuracy enables the automation of spraying weeds with herbicide, which would be of immense utility to farmers. Last year, John Deere unveiled "See and Spray", an automatic weed spraying system with an AI algorithm at its core; the system has yet to reach production. Their vision is of a tractor driving across cultivated fields while a towed See and Spray device sprays targeted patches of weeds with small quantities of herbicide.

John Deere's vision of automated weed spraying
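Purely as a hypothetical sketch of how a weed classifier's output might drive such a sprayer (the threshold, names and frame data below are illustrative assumptions, not John Deere's or Iris's actual design):

```python
# Hypothetical spray-decision logic driven by a weed classifier's confidence.
# Threshold and interfaces are illustrative assumptions only.
SPRAY_THRESHOLD = 0.9  # only spray when the model is highly confident

def decide_spray(weed_probability: float) -> bool:
    """Return True if the predicted weed probability warrants spraying herbicide."""
    return weed_probability >= SPRAY_THRESHOLD

# Example probabilities a classifier might output for frames from a field camera.
for frame_id, weed_probability in [(1, 0.97), (2, 0.12), (3, 0.88)]:
    action = "spray" if decide_spray(weed_probability) else "skip"
    print(f"frame {frame_id}: weed probability {weed_probability:.2f} -> {action}")
```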


Summary

Computer vision technology has come a long way in the last decade, largely driven by the increase in available computing power. Industry adoption, however, lags well behind the capabilities demonstrated in research labs. With frameworks such as Iris, cutting-edge computer vision and image classification are now accessible to small-to-medium businesses, accelerating your AI project from conception to production.


The base version of Iris, as demonstrated here, is sufficiently flexible to meet the demands of many customers. If required, we can customise architectures for your particular application to achieve even higher accuracy, based on an initial proof-of-concept. Our other computer vision capabilities include object detection and facial recognition. Contact us at contact@deep-blue.ai to discuss your specific situation.


