I had some time to myself recently, so I was able to do things that I like, such as thinking about machine learning. What started off as tinkering, puttering, and being mildly spontaneous led to following up on some of my own curiosities. Freeing myself even briefly from my predictable, repetitive life is satisfying enough, but I also found myself watching documentaries and movies that no one I share a household with would ever tolerate.
In two days, with the help of a cold and a house-sitting stint, I’ve watched:
The narratives of these movies provided a broad context within which I could rethink programming, and the questions they raise are interesting. In my day-to-day work, by contrast, programming has a very narrow context: programming in education is still very much focused on finding efficiencies in workflow. No objection to that; it can be fun, and at least it's useful. I also don't kid myself that it's particularly groundbreaking, at least from a computing perspective. Automating repetitive tasks has been a staple of computing since its inception, and distributing educational materials broadly, in different formats, or through different media channels is not a hot research topic. One area that is notable, if only because it's hard not to see the money flowing towards it, is finding answers to questions about how machines (by which we really mean programs) help us learn. Machine learning, adaptive learning, and artificial intelligence are different terms used to describe what is essentially programming.
Understanding how humans learn informs how programs are written for machines to 'learn'. Displaying 'intelligence' requires that these programs be able to predict future behaviour based on previously acquired data. Advertising, surveillance, and education are at least three areas where the ability to predict human behaviour has some perceived value; all pique either commercial or government interest. There are certainly many more, and most could make an argument that they will benefit humanity, or at least an entrepreneur. Putting aside for a moment the question of whether all of these areas are ethical, they are similar in implementation.
The problem to be solved in education is how a machine can analyze what it 'knows' about you, predict what your next move will be, receive your next move, analyze, adjust, and repeat. The goal is to bring you, the learner, closer to a learning outcome through the application of predictive analytics. The online course you might be taking could be tied into a program on a machine that is 'learning' how you learn, adapting what it presents to you in order to maximize the probability that the learning outcome will be achieved. Sounds not too bad; clever even.
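That analyze-predict-present-adjust loop can be sketched in a few lines of code. The following is a toy illustration only, not how any real adaptive-learning product works: the topic names, the target success rate, and the simple moving-average update are all my own assumptions, standing in for the far heavier statistical machinery such systems actually use.

```python
class AdaptiveTutor:
    """Toy sketch of the adaptive-learning loop: track an estimated
    probability of success per topic, present the topic predicted to
    be most useful, then adjust the estimate from the learner's answer.
    """

    def __init__(self, topics, target=0.7, rate=0.2):
        # Prior assumption: a 50% chance of success on every topic.
        self.estimates = {t: 0.5 for t in topics}
        self.target = target  # assumed 'sweet spot' success probability
        self.rate = rate      # how quickly estimates move toward outcomes

    def next_topic(self):
        # Predict: pick the topic whose success estimate is closest to
        # the target difficulty (neither too easy nor too hard).
        return min(self.estimates,
                   key=lambda t: abs(self.estimates[t] - self.target))

    def record(self, topic, correct):
        # Adjust: nudge the estimate toward the observed outcome.
        outcome = 1.0 if correct else 0.0
        est = self.estimates[topic]
        self.estimates[topic] = est + self.rate * (outcome - est)


# One turn of the loop, with hypothetical topic names.
tutor = AdaptiveTutor(["fractions", "decimals", "percentages"])
topic = tutor.next_topic()          # machine predicts and presents
tutor.record(topic, correct=True)   # learner responds; machine adjusts
```

Each call to `record` changes what `next_topic` returns, which is the whole point: the program's model of the learner is rewritten by the learner's own behaviour, turn after turn.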
Speculation about future possibilities
A rather optimistic speculation about the future is that there will come a time when machines and humans physically merge. Whether this is threatening or desirable is a live debate, and the documentary 'Transcendent Man' came down firmly on the desirable side of things. I liken the pleasure I got from watching 'Transcendent Man' to reading any piece of science fiction. Speculation about the future can be pleasant, especially if it's coming from a millionaire. A millionaire who stands to benefit financially and emotionally from that speculative narrative…well then, bring out the popcorn!
Yeah, I'll side with the U.S. Court of Appeals, civil rights groups, business leaders, journalists, lawyers, activists and many others on this one. Indiscriminate mass surveillance, the way it's currently being implemented, is illegal and completely unjustifiable. The role I see predictive analytics playing in this world is troubling, not only because of the consequences if the programming makes an inaccurate determination about you or your future behaviour, but because of how the data sets are collected (involuntarily). Not least of all is the concern that you can never know what lawful behaviour will be deemed 'wrong' or 'undesirable'.
It's about ethics. The likes of Stephen Hawking, Elon Musk and Nick Bostrom have made strong, cautionary statements about AI, and acknowledging this gives me pause. While I'm not actively contributing to 'adaptive learning' from a programming perspective, it seems that if I did, I would also be contributing to the other questionable objectives that stand to benefit from it. Then again, I'm sure farmers don't worry about who eats the food they grow; I have at least that much of a reason not to worry about it.
[update – July 13, 2015. Linus Torvalds pooh-poohs fears over artificial intelligence: "Unending exponential growth? What drugs are those people on?"]
Machine Learning by Brad is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.