Artificial Intelligence can be traced back to the 1950s, where it remained in relative isolation, barring a few ‘boom’ phases, until the term surged in prevalence over the past few years to become the buzzword of the 21st century. Advances in computing power helped machine learning grow in leaps and bounds, neural networks became more prevalent, and companies that invested early began leveraging the benefits of these technologies. This has ultimately led to the hype around AI, with people and companies now promising to solve a myriad of the world’s problems through this technology. The biggest issue is that the term ‘AI’ is widely misunderstood, and is easily thrown around with promises of solving problems without a true understanding of its meaning and capabilities.
So what is Artificial Intelligence?
“Artificial Intelligence (AI) is the part of computer science concerned with designing intelligent computer systems, that is, systems that exhibit characteristics we associate with intelligence in human behaviour – understanding language, learning, reasoning, solving problems, and so on.” 1
Let’s pause and reflect on that for a moment, as in essence it means that true AI aims to artificially mimic the human brain in computer systems.
AI is not there yet; in fact, no one has achieved the goal of creating a self-aware AI system that can think, understand, interact and make decisions like a human. AI research is advancing in many ways, and is already capable of some amazing feats, such as beating humans at various games and perusing data faster than humans can, but there is still a long road ahead. In particular, understanding human motivations and drawing nuanced conclusions from data remain outside the grasp of current systems. However, AI research has also spawned many useful technologies that are relevant today and impact our lives – some of which we don’t necessarily realise are actually offshoots of AI.
Here are some examples of how AI is already impacting our lives:
- Personal assistants that hear, understand and respond to us such as Siri and the Google Assistant.
- Businesses are using chatbots at their frontlines to offer automated or first level support.
- Medical research is leveraging neural networks and machine learning to diagnose medical conditions with greater accuracy than before, and even to create mechanical hands that can ‘touch’ for amputees.2
- Systems exist that can recognise and classify objects, people, faces and text from images and videos. Most smartphones today already use this to offer up smart scene suggestions as well as run facial recognition to categorise our photos by person.
- Machine learning is also widely deployed and utilised in real-time for traffic reporting in multiple navigation systems such as Google Maps, as well as serving up personalised ads and recommendations.
- Netflix utilises machine learning to analyse your watch history, your ‘likes’ and what is in your watch list to understand your tastes and offer up what are essentially machine-picked suggestions. Interestingly, Netflix estimates that by providing better search results it is avoiding cancelled subscriptions that could reduce its revenue by $1 billion annually.3
- AI and machine learning are also playing a big part in helping industries combat fraud, and some companies such as Kount4 are building entire fraud prevention systems around machine learning.5
- Other major industries benefiting from AI are banking and retail, where machine learning is helping companies target their respective audiences better, offer more personalised services, and leverage their vast amounts of data to respond to competition in the marketplace.3
The above examples mention machine learning, neural networks, personal assistants and chatbots, data analysis, and personalised recommendations – but how do they fall into AI? Simply put, all of the above fall under the broader scope of AI research and most of the examples provided leverage multiple sub-fields of that same research. Let’s take a look at some of the primary sub-fields:
Machine learning – uses algorithms to learn autonomously from data and information in order to make decisions or provide insights into data.
Robotics – automates systems to complete complex tasks that would ordinarily require human input.
Natural Language Processing – is aimed at getting systems to understand input from human speech as well as generating responses in human languages.
Computer Vision – seeks to develop techniques to help computers see and understand the content and context of the world around them.
Neural Networks – are designed to recognise patterns and are loosely modelled on the human brain.
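To make the machine learning idea above concrete, here is a minimal, illustrative sketch (not a production framework): a single perceptron, one of the simplest learning algorithms, that learns the logical OR function purely from labelled examples rather than hand-written rules. All names here are our own; the code assumes nothing beyond plain Python.

```python
# A toy example of machine learning: a perceptron learns the logical OR
# function from labelled examples instead of being explicitly programmed.

def train_perceptron(samples, epochs=20, lr=0.1):
    """Learn two weights and a bias from (inputs, label) pairs."""
    weights = [0.0, 0.0]
    bias = 0.0
    for _ in range(epochs):
        for (x1, x2), label in samples:
            # Predict, then nudge the weights toward the correct answer.
            prediction = 1 if (weights[0] * x1 + weights[1] * x2 + bias) > 0 else 0
            error = label - prediction
            weights[0] += lr * error * x1
            weights[1] += lr * error * x2
            bias += lr * error
    return weights, bias

def predict(weights, bias, x1, x2):
    return 1 if (weights[0] * x1 + weights[1] * x2 + bias) > 0 else 0

# Labelled training data for logical OR: inputs and the expected output.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(data)
```

Nothing in the code encodes the OR rule directly; the behaviour emerges from the data, which is the essence of machine learning. Neural networks take this same idea and stack many such units into layers.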
These are just a few of the many ways AI is touching our lives, and even though none of the above are classified as true AI, if it weren’t for the important research being done in this field we would not have any of these technologies today.
Data scientists mainly approach AI from a mathematical viewpoint, devising the models and algorithms necessary to interpret and extract meaning from their data. Developers are fundamentally no different: they approach their work from what they know will work and, where required, find and utilise existing frameworks, improve them, or develop new code to solve the problems they face when building solutions.
Getting started in AI may sound daunting, as many people assume that one requires a computer science degree specialising in AI, together with a good understanding of linear algebra, in order to write new algorithms. The examples mentioned earlier may have started with data scientists, but they were all built by developers. For a developer looking to build solutions in the AI space, the initial learning curve is quite steep, especially around understanding the various frameworks, but the big secret is that in all those examples, and others, there is a vast body of work one can leverage without being required to write a single algorithm. This ranges from machine learning and neural network frameworks right through to the underlying algorithms that can be incorporated.
There are various open source projects on GitHub, as well as tutorials, where one can look at the code, test out the technology and, from there, see how it can be leveraged going forward. While many of the AI-type projects on GitHub are great for getting started, most are far from production ready. For those projects, be prepared to put in the effort to create stable, production-ready code that supports threading and databases, handles exceptions properly, and in most cases will need to be ported from the original language to your programming language of choice. If we think of these as just new frameworks and tools that we can learn and introduce into our projects, then developers have plenty to sink their teeth into to get started.
While many developers aim to work in the AI space thanks to the hype around the term, developers should actually be looking at how they can leverage current AI research to solve existing problems. This will open up a whole new world of opportunity to create even more advanced solutions for our clients, ones that are smarter, faster and relevant in today’s AI-centric market.
Written by Jason Elder, Senior Technical Consultant at Saratoga.
1 – The Handbook of Artificial Intelligence Volume 1, Barr & Feigenbaum, 1981.
2 – Neurotechnology Provides Near-Natural Sense – DARPA, 2015.
3 – Artificial Intelligence: The Next Digital Frontier – McKinsey Global Institute, 2017.
5 – Top 9 Ways Artificial Intelligence Prevents Fraud – Forbes, 2019.