It’s impossible to spend a day watching the news, browsing the net or visiting conferences without hearing about AI. Proponents promise wonderful opportunities, but there are also skeptics: people who believe AI will soon surpass human intelligence and eventually take over the world. Popular media is quick to jump on such doom scenarios, as they make for great stories to scare people.
Now, let’s look at reality. Where the big money is being spent is a good clue as to where the biggest progress is being made. So let’s take the most visible area for us all: the world of online advertising.
Last week, I bought a new leather laptop bag online. Naturally, I browsed the web for one that suited my taste, and indeed I found one (happy with it, BTW). Now, a week later, nearly every website I visit still shows me ads from the very webshop I bought the bag from. Utterly useless and quite annoying. Anyone without an adblocker has surely seen this a hundred times before. It’s really not that intelligent.
This obvious flaw is not the result of bad AI, or bad algorithms. It is simply a lack of data: the system doesn’t know I’ve already made a purchase from the shop. The issue is that decisions are taken based on the limited information the system has. That’s perfectly fine for well-defined problems like chess and Go; it’s somewhat acceptable (though annoying) for online ads; but it is far more problematic when it comes to career-deciding recruitment, loan approvals and other potentially life-changing events. At the level of ads, we needn’t be afraid, merely annoyed. In other applications, however, we should be more concerned.
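The flaw above can be sketched in a few lines. This is a deliberately simplified, hypothetical model of a retargeting rule (the event names and function are invented for illustration, not any real ad platform’s API): the engine only ever sees browsing events, so the purchase that would switch the ads off simply isn’t in its data.

```python
# Hypothetical sketch of a retargeting rule working on incomplete data.
# Event names and structure are invented for illustration.

def should_retarget(user_events):
    """Show a shop's ad if the user browsed its products and --
    as far as this system knows -- never bought anything there."""
    browsed = any(e["type"] == "viewed_product" for e in user_events)
    purchased = any(e["type"] == "purchase" for e in user_events)
    return browsed and not purchased

# What the ad network actually observed: browsing only.
observed = [{"type": "viewed_product", "shop": "bagshop"}]
print(should_retarget(observed))   # True: the ad keeps showing

# The missing datum that would stop the annoyance.
complete = observed + [{"type": "purchase", "shop": "bagshop"}]
print(should_retarget(complete))   # False: no more ads
```

The rule itself is perfectly reasonable; it is the incomplete event stream that produces the annoying behaviour.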
People who don’t have all the data also make wrong decisions; AI is no different. Just as people have an obligation to inform themselves before taking a decision, they equally have a responsibility to adequately ‘inform’ AI systems so that these help us make good decisions. Having said that, note that unlike human beings, AI typically needs a lot of data to be effective. And equally typically, this data tends to be privacy sensitive. So adding all these extra data sources to make AI more effective is not without costs and risks.
Say the online ad engine had presented me with leather-treatment products because, by tapping into my financial data, it ‘knew’ I had just bought a leather bag. That would certainly make for a somewhat better online experience, but would it be a good idea? I don’t think so.
Of course, people are working to improve AI so it can make educated guesses as to whether I bought the bag or not, but what if those guesses are wrong? Perhaps not much of a problem for ads, but what about the more consequential decisions AI may help make? You wouldn’t want your loan denied because of some ill-founded assumption.
AI really needs to be managed at a higher and broader level than just the algorithmic component.