Artificial Intelligence is a multi-faceted term. According to Wikipedia: “Colloquially, the term ‘artificial intelligence’ is applied when a machine mimics ‘cognitive’ functions that humans associate with other human minds, such as ‘learning’ and ‘problem solving’.” Others would argue that these days, anything gets the label “artificial intelligence” until you understand how it works… then it becomes simply “software.”
An important approach used in AI is machine learning. Machine learning, however, is not synonymous with AI, even though many equate the two (see What is Artificial Intelligence for some clarification). Its definition is commonly attributed to Arthur Samuel: a technique that “gives computers the ability to learn without being explicitly programmed.” It is machine learning that lets us teach a computer to find a cat in an image, or to make sense of what it “sees” in a street when steering a self-driving car. It is machine learning that analyzes which movies and series people watch in order to recommend to me what to watch next. One could argue that there is no intelligence in that – it was human intelligence that judged the material for similarity, after all; the algorithm merely aggregates what humans came up with.
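To make that concrete, here is a minimal sketch of such a recommender, with entirely hypothetical viewing data: the system never judges the shows itself, it only counts which titles humans chose to watch together.

```python
from collections import Counter

# Hypothetical watch histories: each set is what one person watched.
# All the "judgment" lives in these human choices.
histories = [
    {"Stranger Things", "Dark", "Black Mirror"},
    {"Dark", "Black Mirror", "The OA"},
    {"Stranger Things", "Dark"},
]

def recommend(seed, histories, top_n=2):
    """Suggest the titles most often watched alongside `seed`.

    The algorithm merely aggregates human viewing decisions;
    it has no idea what any of these shows are about.
    """
    counts = Counter()
    for watched in histories:
        if seed in watched:
            counts.update(watched - {seed})
    return [title for title, _ in counts.most_common(top_n)]

print(recommend("Dark", histories))  # the two most frequent co-watches of "Dark"
```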
And relying on this particular AI technique comes at a cost. Things get problematic when developers over-promise AI performance and the product under-delivers at launch. If it gets better with use, how good is it when I start using it? Does it provide enough value out of the box that I am willing to keep using it long enough to reap the rewards? I want to argue that many solutions fail at this because they run, at scale, a software release practice that some blame on Microsoft: a public beta test with what users assume is a mature product.
Take Siri. When iOS 9 launched, Apple promised a self-learning product that gets better the more you use it. Yet here I am, over a year later, using Siri’s app predictions on a daily basis, still experiencing situations where it acts dumb. I type the first letter of an app I want to launch – an app I use frequently – and it instead suggests one that I rarely use. I only start using my phone at 5am when I have an early flight to catch, and I practically ALWAYS use Uber to get to the airport. Yet does Siri offer me Uber among the eight apps I’d likely need next when I take the phone off airplane mode at 5 in the morning? Does it tell me whether my flight is still on time? It does not. I also practically ALWAYS use Uber to pick me up from my arrival airport. If Siri detects that I have been offline for a while, come back online, and am suddenly in a completely different location, shouldn’t it conclude that I was flying and will likely need Uber next? It should, yet it doesn’t.
Why does the software industry rely so religiously on a technique that clearly under-delivers for the average user? What happened to good old thoroughness and user-centric design? If we want personal assistants and similar technology to succeed, we cannot rely on machine learning alone, not right from the start. Software developers (rather: designers) need to “digitize” our lives, meticulously defining rules that describe how we go about our day, so that these little helpers can truly assist us through it, the way a human would. What is the process of organizing a barbecue with friends? What is the process of going on a business trip, buying life insurance, making a dentist appointment? Each and every step needs to be designed and considered, converted into rules and decision trees. Machine learning can then come on top to improve those processes over time – and yes, maybe “get to know me over time” – over time, but not on day one.
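As an illustration of what such hand-designed rules could look like, here is a minimal sketch in Python, reusing the airport scenario from above. Every name, field, and threshold here is hypothetical – real assistants would draw on far richer context signals – and a learned model could later re-rank what the rules propose.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

@dataclass
class Context:
    """Hypothetical snapshot of what the assistant knows right now."""
    now: datetime
    offline_since: Optional[datetime]  # when the phone last went offline
    km_moved_while_offline: float      # distance between last and current location

def flight_landed(ctx: Context) -> bool:
    """Rule: offline for over an hour plus a large location jump => just landed."""
    return (
        ctx.offline_since is not None
        and ctx.now - ctx.offline_since > timedelta(hours=1)
        and ctx.km_moved_while_offline > 100
    )

def early_travel_morning(ctx: Context) -> bool:
    """Rule: activity at 5am, which for me means a flight to catch."""
    return ctx.now.hour == 5

# Each recognized situation maps to the apps a human helper would hand me.
RULES = [
    (flight_landed, ["Uber", "Maps"]),
    (early_travel_morning, ["Uber", "FlightStatus"]),
]

def suggest_apps(ctx: Context) -> list[str]:
    suggestions: list[str] = []
    for rule, apps in RULES:
        if rule(ctx):
            suggestions.extend(a for a in apps if a not in suggestions)
    # Machine learning could reorder these suggestions as it gets to know me,
    # but the rules already behave sensibly on day one.
    return suggestions

# Landed after a five-hour offline stretch, 800 km from where I started:
ctx = Context(datetime(2017, 3, 1, 14, 0), datetime(2017, 3, 1, 9, 0), 800.0)
print(suggest_apps(ctx))  # ['Uber', 'Maps']
```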
Until that is the case, launch with a reasonably mature solution, but allow me to enhance it myself, combined with controllable machine learning. Let me write “macros”, like we still can in our desktop operating systems or in MS Word. Let me configure my own rules and the assistant’s behavior. I know my life and preferences best.
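Continuing the sketch above, such user-written “macros” could be as simple as declarative entries evaluated against the same Context; the format and field names here are, again, purely hypothetical.

```python
# Hypothetical user-defined macros: plain data, no learning required.
# I state the condition and the desired behavior explicitly, because
# I know my life and preferences best.
MY_MACROS = [
    {
        "name": "airport pickup",
        "when": flight_landed,                    # reuses the rule defined above
        "do": ["open Uber", "show arrival gate"],
    },
    {
        "name": "early flight",
        "when": lambda ctx: ctx.now.hour == 5,
        "do": ["open Uber", "check flight status"],
    },
]

def run_macros(ctx: Context) -> list[str]:
    actions: list[str] = []
    for macro in MY_MACROS:
        if macro["when"](ctx):
            actions.extend(macro["do"])
    return actions
```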