In 2017, I visited one of the world’s first and most innovative artificial intelligence companies, DeepMind Technologies, on Pancras Square in London. The visit was eye-opening: it sharpened my understanding of what the best people in the field were working on and of how far the technology had actually reached beyond the hype. I came face to face with applications that demonstrated the immense possibilities of artificial intelligence, like AlphaGo, which famously beat the human world champion of the complex board game Go, as well as great examples of how machine learning is helping revolutionise areas like healthcare through DeepMind’s work with hospitals and the United Kingdom’s NHS.
My observation at the time, however, was that artificial intelligence was at best aspirational, with a lot of future potential but not quite there yet, since the best examples were games, toys or social media influence.
Since my visit, DeepMind has gone from AlphaGo to AlphaZero and now to AlphaFold, which is far more than a game-winning machine learning system: AlphaFold accurately predicts the shape of proteins. Not protein in the context of nutrition, but the self-assembling nanomachines that do almost everything in the body. Our cellular processes, everything that can be said to make us alive, are tasks carried out by proteins. Solving protein folding (one of the toughest problems in science) could thus help scientists understand the biological processes of every living thing, because a protein’s shape is closely linked with its function: the ability to predict the structure unlocks a greater understanding of what the protein does and how it works. This means drugs could be discovered more rapidly, diseases treated faster and many great mysteries unlocked, thanks to machine learning and artificial intelligence. But just what are these terms that get thrown around so much, and how do the techniques work?
In global terms, 2021 heralded a new decade of exciting advancements in technology. One of the most talked-about is artificial intelligence, or AI. Because the field is constantly evolving and transforming, the definition of AI is complicated and depends on who you ask. Hollywood movies have long used sentient humanoid robots to define it, while other corners of popular culture have popularized self-driving cars and chess-playing bots. In simple terms, AI refers to machines that can reason and act on their own: like humans and animals, artificially intelligent machines can make decisions for themselves when faced with new situations.
The quest of artificial intelligence and its builders is the creation of machines that can reason, learn, and act intelligently. Most of the AI advancements you hear about today are based on machine learning, a type of artificial intelligence that uses statistics to find patterns in huge sets of data. Data here means anything in digital form fed into a machine learning algorithm: numbers, words, pictures, likes, clicks and so on. Popular examples of machine learning applications are the recommendation systems on YouTube and Netflix, search engines like Google, social network feeds like Facebook’s and voice assistants like Siri. Within all these examples is the close observation of your behaviour, whether it is what you watch, click, listen to and like, or what you ignore, dislike or say in the case of voice assistants. The algorithm then finds a pattern in that behaviour and makes an educated guess about what you may like or do next on the platform.
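To make the "find a pattern, then make an educated guess" idea concrete, here is a minimal sketch of a recommendation-style pattern finder. The viewing histories and show names are entirely hypothetical, and real recommendation systems are vastly more sophisticated; this only illustrates the principle of learning from co-occurring behaviour.

```python
from collections import Counter

# Hypothetical viewing histories: each inner list is one user's watched shows.
histories = [
    ["Stranger Things", "Dark", "Black Mirror"],
    ["Dark", "Black Mirror", "The OA"],
    ["Stranger Things", "Dark"],
]

def recommend(watched, histories):
    """Suggest shows that co-occur with what this user has already watched."""
    counts = Counter()
    for history in histories:
        # Only learn from users who share at least one show with us.
        if set(history) & set(watched):
            counts.update(show for show in history if show not in watched)
    # Most frequently co-occurring shows first: the "educated guess".
    return [show for show, _ in counts.most_common()]

print(recommend(["Stranger Things"], histories))  # ['Dark', 'Black Mirror']
```

The pattern here is simply "people who watched what you watched also watched X"; the statistics involved are nothing more than counting, which is why scale (huge sets of data) matters so much in practice.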
Which brings me to a type of machine learning called deep learning. Imagine a tremendously more capable form of machine learning that can identify and amplify even the smallest pattern. It is also referred to as a deep neural network because of its multi-layered computational nodes, which work together to make relatively accurate predictions from the data they are fed. In essence, deep learning is neural networks, which take their name and inspiration from the inner workings of the human brain: the nodes act like neurons and the network acts like a brain.
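The idea of layered nodes can be sketched in a few lines. The weights below are made up for illustration (a real network learns them from data during training); the point is only to show what a "node" and a "layer" are.

```python
import math

def neuron(inputs, weights, bias):
    """One node: a weighted sum of its inputs squashed by a sigmoid activation."""
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))

def layer(inputs, weight_rows, biases):
    """A layer is just several neurons reading the same inputs."""
    return [neuron(inputs, w, b) for w, b in zip(weight_rows, biases)]

# Illustrative, untrained weights; two hidden nodes feeding one output node.
hidden = layer([0.5, 0.8], [[0.9, -0.4], [0.2, 0.7]], [0.1, -0.1])
output = layer(hidden, [[1.0, -1.0]], [0.0])
print(output)  # a single value between 0 and 1
```

Stacking many such layers, and automatically adjusting the weights so the output matches known examples, is what makes the network "deep" and able to pick up very small patterns.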
You now have an idea of artificial intelligence, you know that machine learning is by far its most prevalent application, and you understand how machine learning and deep learning work. To complete the picture, you should know that machine learning (and deep learning) comes in three varieties: supervised, unsupervised and reinforcement learning. Supervised learning is the most prevalent: the data is labelled to tell the machine what to look for, so it can find similar patterns. Every time you watch a show on Netflix, the algorithm remembers it and tries to find similar shows based on that reference.
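A minimal sketch of supervised learning is a nearest-neighbour classifier: the training data carries labels, and a new example is classified by copying the label of the most similar labelled example. The features and genres below are invented purely for illustration.

```python
def nearest_neighbour(sample, labelled_data):
    """Classify a sample by copying the label of the closest training example."""
    def distance(a, b):
        # Squared Euclidean distance between two feature tuples.
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(labelled_data, key=lambda item: distance(sample, item[0]))[1]

# Hypothetical labelled data: (episodes watched, avg. episode minutes) -> genre.
training = [
    ((10, 60), "drama"),
    ((12, 55), "drama"),
    ((2, 25), "comedy"),
    ((3, 22), "comedy"),
]
print(nearest_neighbour((11, 58), training))  # drama
```

The labels ("drama", "comedy") are the supervision: they tell the algorithm exactly what kinds of pattern it should look for.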
Unsupervised learning, by contrast, lets the machine look for whatever patterns it can find. Here the data has no labels; the algorithm combs through everything and sorts or groups it based on whatever parameters it identifies in the patterns. This variety is far less common than supervised learning, but it has strong applications and is gaining a lot of ground in cybersecurity.
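The classic illustration of unsupervised learning is clustering. The sketch below is a stripped-down one-dimensional k-means: note that the numbers carry no labels at all, yet the algorithm discovers the two natural groups on its own. The data and the naive initialisation are assumptions for illustration only.

```python
import statistics

def k_means_1d(points, k=2, iterations=10):
    """Cluster unlabelled numbers by repeatedly re-assigning them to the nearest mean."""
    centres = points[:k]  # naive initialisation: first k points
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for p in points:
            # Assign each point to its nearest cluster centre.
            nearest = min(range(k), key=lambda i: abs(p - centres[i]))
            clusters[nearest].append(p)
        # Move each centre to the mean of its assigned points.
        centres = [statistics.mean(c) if c else centres[i]
                   for i, c in enumerate(clusters)]
    return sorted(centres)

# No labels anywhere: the algorithm still finds the two groups.
print(k_means_1d([1, 2, 3, 50, 52, 55]))
```

In a security setting the same idea applies to, say, network traffic: unusual activity ends up far from every normal cluster, which is one reason this approach is gaining ground in cybersecurity.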
Lastly, there is reinforcement learning, in which the algorithm learns by trial and error to achieve a clear objective. Many consider this the latest frontier of machine learning. The algorithm tries various things in pursuit of the set objective and is rewarded or penalized depending on how much each behaviour helps or hinders it.
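The reward-and-penalty loop can be sketched with a toy problem: three possible actions with payouts the learner does not know in advance. The action names and reward values are invented for illustration, and real reinforcement learning (as in AlphaGo) deals with sequences of decisions, not a single choice, but the core loop of act, observe reward, update is the same.

```python
# Hypothetical environment: a fixed payout per action, unknown to the learner.
true_rewards = {"left": 0.2, "middle": 0.5, "right": 0.9}

def learn(actions, episodes=30):
    """Trial and error: estimate each action's value from observed rewards."""
    estimates = {a: 0.0 for a in actions}
    counts = {a: 0 for a in actions}
    for _ in range(episodes):
        # Explore each action once, then exploit the best current estimate.
        untried = [a for a in actions if counts[a] == 0]
        action = untried[0] if untried else max(estimates, key=estimates.get)
        reward = true_rewards[action]  # feedback from the environment
        counts[action] += 1
        # Incremental average: nudge the estimate toward the observed reward.
        estimates[action] += (reward - estimates[action]) / counts[action]
    return max(estimates, key=estimates.get)

print(learn(["left", "middle", "right"]))  # right
```

After a few trials the algorithm's reward estimates converge on the truth, so it settles on the most rewarding behaviour, exactly the reward-driven learning described above.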
These are the basic concepts around artificial intelligence, and they are the areas likely to receive the most attention from this decade onwards, as we move into a world where artificial intelligence takes on new definitions and becomes much more practical, rather than aspirational as it has been in past decades.
This article was first published on January 4, 2021