We’ve described before the nuanced distinction between AI and machine learning: machine learning is a subset of AI, encompassing cutting-edge work that primarily uses neural networks to mimic how the brain computes, harnessed to the far greater raw processing capacity of computers, enabling them to learn.
In his BETT talk, Joel Hellermark, founder of Sweden’s Sana technologies and all-round AI prodigy (not to be ageist, but at 22 his clear insight and success are quite astounding), defines machine learning via neural networks using a simple comparison:
Deep Blue versus AlphaGo
In May 1997 Deep Blue, an IBM computer, won a six-game chess match against Kasparov, then the reigning world chess champion. It was an astounding feat at the time, but one that has subsequently been eclipsed by AlphaGo, and then AlphaZero.
A quote from Wikipedia is instructive here: “Deep Blue employed custom VLSI chips to execute the alpha-beta search algorithm in parallel, an example of GOFAI (Good Old-Fashioned Artificial Intelligence) rather than of deep learning, which would come a decade later.” It was a brute force approach, and one of its developers even denied that it was artificial intelligence at all.
As Hellermark succinctly explains in his BETT talk, and mirroring the comments above, as impressive as Deep Blue’s achievement was, it in no way showcased what we understand today as Artificial Intelligence. Deep Blue was a large, complex brain capable of memorizing and retrieving - at speed - any one of some 10^70 possible chess moves, but it was essentially a brute chess force, muscling Kasparov out with sheer speed of calculation and data retrieval.
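To make the “brute force” point concrete, here is a minimal sketch of the alpha-beta search the Wikipedia quote refers to. It is purely illustrative: the toy tree, the `children` and `evaluate` placeholders, and the function signature are all invented for this example, and Deep Blue’s actual implementation ran a massively parallel, hardware-accelerated version of the idea.

```python
def alphabeta(node, depth, alpha, beta, maximizing, children, evaluate):
    """Return the best achievable score from `node` by exhaustive lookahead.

    `children(node)` yields successor positions and `evaluate(node)` scores
    a leaf -- both are stand-ins for a real game engine.
    """
    kids = children(node)
    if depth == 0 or not kids:
        return evaluate(node)
    if maximizing:
        value = float("-inf")
        for child in kids:
            value = max(value, alphabeta(child, depth - 1, alpha, beta,
                                         False, children, evaluate))
            alpha = max(alpha, value)
            if alpha >= beta:  # prune: the opponent will never allow this line
                break
        return value
    else:
        value = float("inf")
        for child in kids:
            value = min(value, alphabeta(child, depth - 1, alpha, beta,
                                         True, children, evaluate))
            beta = min(beta, value)
            if alpha >= beta:
                break
        return value

# Toy "game": a fixed tree encoded as nested lists; leaves are scores.
tree = [[3, 5], [2, 9]]
children = lambda n: n if isinstance(n, list) else []
evaluate = lambda n: n
best = alphabeta(tree, 2, float("-inf"), float("inf"), True, children, evaluate)
print(best)  # 3: the maximizer picks the subtree whose worst case is best
```

Nothing here “learns”: the program simply searches every line of play it has time for, which is exactly why this approach collapses when the search space explodes, as it does in Go.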
AlphaGo, on the other hand, was a whole different beast. The game of Go has 10^170 possible board configurations! Essentially, Go cannot be won using brute computing power, but only through what even Go experts and computer scientists simply refer to as instinct and experience. How does one imbue a computer with intuition and experience? You teach it to learn: not by inputting every possible move into the computer (a physical impossibility), but by designing it with artificial neural networks (ANNs).
ANNs are seriously fascinating pieces of computing technology: designed as a mesh of progressive, tiered layers of nodes, they input and process information in a way that mirrors the human brain. Inputs from, say, our optic nerve while reading letters on a screen are filtered through increasingly “smart” layers of deeper neural networks until an intelligent output is achieved, such as comprehension of the text. Each node sends and receives inputs to and from a variety of other nodes and, based on the rules it is programmed to follow, can weigh the output of other nodes depending on what those rules define as success, or “a win”. As the system operates it “learns” which of its nodes are trustworthy and which have misinterpreted results or rules, and can assign a higher value to the output of some nodes over others.
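The weight-adjusting behaviour described above can be sketched with a single artificial “node” - a perceptron, the building block that ANNs stack into layers. Everything here (the data, the learning rate, the task of learning logical AND) is invented purely for illustration:

```python
# One artificial node that learns to weight its inputs.
weights = [0.0, 0.0]
bias = 0.0
lr = 0.1  # learning rate: how strongly each mistake adjusts the weights

# Training data: the node should learn the logical AND of two inputs
# ("a win" = output 1 only when both inputs are 1).
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

def predict(x):
    # Weighted sum of inputs plus bias, squashed to a 0/1 decision.
    s = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if s > 0 else 0

# Repeated passes over the data let the node nudge its weights toward
# values that produce correct outputs -- this is the "learning".
for _ in range(20):
    for x, target in examples:
        error = target - predict(x)
        weights = [w + lr * error * xi for w, xi in zip(weights, x)]
        bias += lr * error

print([predict(x) for x, _ in examples])  # [0, 0, 0, 1]
```

A real ANN connects thousands or millions of such nodes in layers, but the core mechanism is the same: outputs are compared against a definition of success, and the weights between nodes are adjusted accordingly.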
It may seem like I have made quite a meal of this distinction (brute force computing vs. neural networks); however, it has huge consequences for how we design learning software and management systems going forward.
AI in the classroom: Are we there yet?
Currently, most LMSs by and large take the “brute force” approach. Content, even massive data sets, is encoded into the software and served to learners based on a number of immovable rubrics. By “immovable rubrics” I mean that the system itself does not adapt to how it is being used, or to how the learner is accessing and responding to the information. The content can of course be personalized, to a degree, by hard-coding various pathways through it, but the content does not adapt on the fly to the learner’s outputs.
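A minimal sketch of what such an “immovable rubric” looks like in practice. The course names, score thresholds, and function are all invented for illustration; they do not come from any real LMS:

```python
# Hard-coded pathways: an author wrote these branches in advance,
# and the routing rule below never changes at runtime.
PATHS = {
    "remedial": ["fractions-review", "fractions-quiz", "decimals-intro"],
    "standard": ["decimals-intro", "decimals-quiz"],
    "advanced": ["decimals-quiz", "percentages-intro"],
}

def next_modules(quiz_score):
    """Pick a pre-authored pathway from a fixed threshold rubric.

    Note the system learns nothing about the learner here; it just
    routes them down a branch someone wrote ahead of time.
    """
    if quiz_score < 50:
        return PATHS["remedial"]
    elif quiz_score < 85:
        return PATHS["standard"]
    return PATHS["advanced"]

print(next_modules(40))  # ['fractions-review', 'fractions-quiz', 'decimals-intro']
print(next_modules(90))  # ['decimals-quiz', 'percentages-intro']
```

However sophisticated the content behind each branch, the thresholds themselves are frozen at authoring time, which is exactly the limitation a learning system built on neural networks would not have.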
Read more: The role of the LMS in designing personalized learning paths for students
While there are certainly many AI applications already at work in classrooms and blended learning environments — machine learning powers text-to-speech software, for instance — most of what is termed “AI” in blog posts and articles hoping to herald the dawn of AI in education in fact still describes rudimentary, rule-based programs (Good Old-Fashioned Artificial Intelligence): grading tools, interactive math games, robotics, and so on.
The age of “true” AI in the classroom is still dawning, but we are swiftly moving towards a point where the neural networks of AI systems will begin to mirror the neural networks of learners, anticipating every nuance of difficulty, skills development, application and input, and bringing about personalized learning at a scale and depth we have never seen before.
The race is on to see which of the edtech behemoths (or indeed which of the smart, agile startups) will capture this potential and leverage it to design revolutionary yet practical learning environments destined to dominate the classrooms of the future.
Join me next time as we explore the front runners in this space, and what makes their ideas and products distinctive.