Varun Rao

Why AI/ML, and why now?

Artificial intelligence (AI) and machine learning (ML) are this season's buzzwords, with all manner of grandiose projections attached to the endeavour (see this and this). Companies of all sizes and flavours are falling over themselves to jump on the AI/ML bandwagon, lured by promises of super-human intelligence and profound insights.


What is not always obvious is how rooted machine learning techniques are in traditional, boring mathematics. Most feed-forward neural networks can be constructed from scratch with undergraduate-level linear algebra, calculus and statistics; decision trees are even simpler. While the implementation details of more complex architectures such as convolutional neural networks (CNNs) and recurrent neural networks (RNNs) are fairly involved, the mathematical foundation required to construct and understand these techniques is surprisingly modest.
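To illustrate the point, here is a minimal sketch (in Python, using only NumPy) of a feed-forward network with a single hidden layer, trained from scratch on the toy XOR problem. Nothing in it goes beyond undergraduate mathematics: the forward pass is matrix multiplication, and back-propagation is simply the chain rule written out by hand. The layer sizes, learning rate and iteration count are arbitrary illustrative choices, not taken from any particular library or paper.

```python
# A minimal, illustrative one-hidden-layer feed-forward network trained on
# the XOR problem with plain gradient descent. Only NumPy is used; every
# step is undergraduate linear algebra and calculus (matrix products and
# the chain rule). All hyperparameters below are arbitrary choices.
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: XOR, which no single linear layer can separate.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Weights and biases for a 2 -> 8 -> 1 network.
W1 = rng.normal(size=(2, 8))
b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1))
b2 = np.zeros(1)

lr = 0.5
for _ in range(10000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)        # hidden activations, shape (4, 8)
    y_hat = sigmoid(h @ W2 + b2)    # network outputs, shape (4, 1)

    # Back-propagation: gradients of the squared error via the chain rule.
    d_out = (y_hat - y) * y_hat * (1 - y_hat)   # error at the output layer
    d_hid = (d_out @ W2.T) * h * (1 - h)        # error pushed back to the hidden layer

    # Gradient-descent parameter updates.
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_hid
    b1 -= lr * d_hid.sum(axis=0)

# Predictions should end up close to the targets [0, 1, 1, 0].
print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2).ravel(), 2))
```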


It may surprise readers to learn how old some of the building blocks of machine learning models truly are. Isaac Asimov, one of the author's heroes, introduced his seminal "Three Laws of Robotics" in a short story published in 1942. In yet another case of life imitating art, the first paper in artificial intelligence was published shortly thereafter by McCulloch and Pitts in 1943. Alan Turing devised his eponymous test in the seminal paper he published in 1950. Early experiments in game-playing algorithms and machine translation were conducted in the 1950s. Perceptrons, early precursors to feed-forward neural networks, were also first studied by Frank Rosenblatt in the same decade. Back-propagation, the crucial technique used to help a model learn from its mistakes, was invented in 1969 by Bryson and Ho. IBM's Deep Blue computer beat the then-world chess champion in 1997, before many of this blog's readers were born.

Given these facts, the obvious question is: why has the AI/ML field exploded only in the recent past? This meteoric rise can be ascribed to two factors:


Availability of data: ML algorithms are data-hungry beasts. Happily, as we discussed in our previous blog post, there has been a recent explosion in the quantity (if not quality) of data available to ML researchers and practitioners. It is difficult to overstate the phenomenal influence of the internet in making this data so readily available at negligible cost. The ML community has also whole-heartedly adopted the open-source philosophy, resulting in very large repositories of open-source images, text and numerical data that anybody can use for free.


Increasing computing power: Much like the petrolhead truism that "there's no replacement for displacement", notoriously greedy ML algorithms operate on the simple principle that more computing power is better. The widespread use of these algorithms would not have been possible without the incredible one-trillion-fold increase in computing performance between 1956 and 2015. One staggering statistic puts this in perspective: a modern smartphone has over 100,000 times the processing power of the Apollo 11 guidance computer that first carried humans to our nearest celestial neighbour.



These pioneering astronauts got to the moon with a tiny fraction of the processing power of the device you are reading this article on. Picture courtesy NASA.



By and large, these two factors explain the remarkable ascendance of ML. It is now so widely used as to be virtually ubiquitous: every major tech company uses some form of ML in its operations. We characterise these as first-tier applications - companies for which AI/ML is the obvious weapon of choice.


What is less obvious is how extensively AI can be used for what we call second-tier applications: companies and business environments that have hitherto used traditional analytical techniques centred on human decision-making, but that have large amounts of data to work with. Typically, such organisations follow well-established processes that lean heavily on subject-matter expertise and deep institutional knowledge. The rapid emergence of AI/ML may be just the quantum leap they are looking for.


Some unusual examples of ML applications include:

Wildlife research: wildbook.org allows researchers to perform large-scale analysis of wildlife populations using image recognition algorithms, including automated tagging of individual animals in their natural habitats.


Agriculture: PEAT has developed image recognition technology that identifies plant diseases based on photographs taken by individual farmers.


Sexual orientation: Researchers at Stanford University recently published a paper claiming to have developed an image recognition algorithm capable of determining the sexual orientation of a person based on a facial photograph.


Synthesising music: NSynth uses neural networks to synthesise music based on a large set of training data.




A word of caution

As a parting shot, students of the history of AI would do well to heed the cautionary tale of the so-called 'AI Winter'. Observant readers will have noticed a gap in the brief history of AI sketched earlier, roughly corresponding to the 1970s and 1980s, when public opinion turned deeply negative about the prospects of usable AI.


"It is not my aim to surprise or shock you - but the simplest way I can summarize is to say that there are now in the world machines that think, learn and that create. Moreover, their ability to do these things is going to increase rapidly until - in a visible future - the range of problems they can handle will be coextensive with the range to which human mind has been applied"

Herbert Simon (1957)


Bombastic predictions such as Simon's proved extremely optimistic, and disappointment inevitably set in. Researchers had written cheques their algorithms could not cash, and sky-high expectations were repeatedly belied. Funding for AI projects dried up, and public opinion turned markedly pessimistic. A pivotal moment was Sir James Lighthill's 1973 report for the British Science Research Council, which delivered the damning assessment: "In no part of the field have the discoveries made so far produced the major impact that was then promised".


Already, concerns are growing about the applicability of AI to research fields, particularly with respect to accuracy and reproducibility. Prospects for self-driving cars are being quietly downgraded (see this and this). Medical applications of AI have been found wanting. Promises of high-performing chatbots have yet to come to fruition.


Most worryingly, researchers have shown that it is possible to maliciously trick ML algorithms into nonsensical behaviour, as in the famous example below. The image on the left is clearly a panda, and an image recognition algorithm correctly identifies it as one with a confidence of 58%. A perturbation that is imperceptible to the human eye, shown in the middle, is added to the image of the panda to yield the image on the right. The result is still clearly a panda to the human eye, but the algorithm classifies it as a gibbon with over 99% confidence.


Adversarial perturbation (middle) added to the original image (left), resulting in a misclassified image (right). From Goodfellow et al. (2015).
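For readers curious about the mechanics, the sketch below illustrates the fast gradient sign method (FGSM), the attack behind the panda/gibbon example in Goodfellow et al. (2015). The tiny linear "classifier" and the random input image are stand-ins invented purely so the snippet is self-contained; the published attack targets a large image classifier, and with this toy model the predicted label may or may not actually flip. The point is only the idea: take the gradient of the loss with respect to the input pixels, and nudge every pixel a tiny amount in the direction that increases the loss.

```python
# Illustrative sketch of the fast gradient sign method (FGSM) from
# Goodfellow et al. (2015). The model and "image" below are toy stand-ins
# so the example runs on its own; epsilon is an arbitrary small budget.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in classifier: flatten a 3x32x32 image and map it to 10 classes.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
model.eval()

image = torch.rand(1, 3, 32, 32)   # placeholder input with pixel values in [0, 1]
label = torch.tensor([3])          # placeholder ground-truth class index
epsilon = 0.01                     # perturbation budget: imperceptibly small per pixel

# Gradient of the loss with respect to the *input pixels*, not the weights.
image.requires_grad_(True)
loss = nn.functional.cross_entropy(model(image), label)
loss.backward()

# FGSM step: move every pixel by epsilon in the direction that increases the loss.
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

with torch.no_grad():
    print("clean prediction:      ", model(image).argmax(dim=1).item())
    print("adversarial prediction:", model(adversarial).argmax(dim=1).item())
```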



It would behove AI/ML researchers and users to learn from the mistakes of the past. As machine learning moves from research into business applications, it is crucial that its practitioners are brutally honest about its limitations and common pitfalls.


History is not doomed to repeat itself. The foundations on which the current generation of algorithms is built are subject to an unprecedented amount of scientific, regulatory and end-user scrutiny, and these approaches are being applied with tremendous efficacy to a wide range of applications. Far greater public awareness of the breadth of AI applications also helps hold the high priests of the AI/ML movement to account. The result is a nuanced, moderate view of the field, with all the caveats that should rightfully apply to the uncertainties and vagaries of an exciting technological advance.


In our view, another AI Winter of the same scale as before is unlikely (a view shared by other researchers), but as responsible practitioners of the art, we must remain on our guard.

