Machine learning researchers are not solely concerned with improving the accuracy of models. They also want to know how those models can be corrupted and undermined, a research agenda that warrants more attention. Techerati spoke to an IBM researcher at the heart of it
It’s no exaggeration to say that artificial intelligence (AI) has been the most transformative technology this side of the millennium. Although the field has its origins in the 1950s, techniques developed in the last few years in machine learning, and in related areas such as neural networks and natural language processing, have produced remarkable results that would have been unthinkable to those early researchers.
Today, AI surrounds us. We can now interact with retailers and other organisations via convincing chatbots, instantly translate most languages into our native tongue, and make smart investment decisions with a few taps of an app. But as AI encroaches further into our lives, and we allow it to drive our cars, make our recruitment decisions, or predict crime, the technology community and society at large are rightly insisting that autonomous systems are not just intelligent, but trustworthy.
Trustworthy AI
It is largely agreed that AI trustworthiness involves satisfying four conditions: fairness, transparency, explainability (and thus accountability) and robustness. To understand these conditions, just think of a political body, a comparison that gets to the nub of the debate surrounding autonomy, sovereignty and legitimacy.
We can think of a trustworthy AI as a legitimate government. We want governments to heed the concerns of all the citizens over whom they preside and pass equitable laws (fairness), we want to know how they arrive at the laws that affect us (transparency), laws that ought to be intelligible to us (explainability), and we want to ensure governments are resilient to the intrusion of nefarious or incompetent actors who might encourage the production of dangerous or counterproductive laws (robustness).
Since as early as 2016, the IBM Thomas J. Watson Research Center has investigated ways to bring trust to AI, so that the users who rely on it, whether humans, organisations or countries, can trust the decisions made on the basis of its models. To that end, Big Blue churns out papers, hosts annual conferences and releases evaluative software that enables users to gauge the trustworthiness of their AI systems and develop more trustworthy ones. They are by no means alone in this effort, as society gradually wakes up to the implications of an increasingly automated world.
Pin-Yu Chen is a research member of the Trusted AI Group & MIT-IBM AI Lab at the IBM Watson Research Center. He primarily focuses on the robustness of neural networks and is one of the field’s most prolific researchers. Speaking to Techerati, Chen explained the state of AI robustness research and guided us through the field’s most pressing research questions.
Neural Networks
Neural networks have come to dominate AI. They are essentially very large mathematical models containing millions of parameters. If you flood them with an ocean of raw data, such as images labelled with the task you want them to learn (e.g. which face belongs to whom), anchoring each example to a class label (this face belongs to Bob), neural networks transform into an end-to-end learning system as if by magic. Neural networks are used not just to recognise images but also speech, text and symbols, famously enabling self-driving cars to comprehend the array of visual objects through which they navigate.
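To make that end-to-end learning loop concrete, here is a minimal sketch in Python using PyTorch. The framework choice and the synthetic data are assumptions made for illustration, not details from Chen’s work: a tiny network is trained on randomly generated “images” whose class labels stand in for real faces.

# Minimal supervised-learning sketch: a small neural network learns to map
# labelled "images" to classes. Synthetic tensors stand in for real photos.
import torch
import torch.nn as nn

num_classes = 10                                 # e.g. ten different faces
images = torch.randn(512, 3, 32, 32)             # 512 fake RGB images, 32x32 pixels
labels = torch.randint(0, num_classes, (512,))   # "this face belongs to Bob"

model = nn.Sequential(                           # a small end-to-end network
    nn.Flatten(),
    nn.Linear(3 * 32 * 32, 128),
    nn.ReLU(),
    nn.Linear(128, num_classes),
)

optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):                           # a few passes over the data
    optimiser.zero_grad()
    loss = loss_fn(model(images), labels)        # how wrong are the predictions?
    loss.backward()                              # gradients flow end to end
    optimiser.step()                             # nudge the parameters
    print(f"epoch {epoch}: loss {loss.item():.3f}")

In a real system the random tensors would be replaced by a labelled dataset of actual photographs, but the training loop is essentially the same: feed in examples, measure the error against the labels, and adjust the parameters until the network’s predictions improve.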