
The road to AI – speedy and full of blind spots

Mon 24 Feb 2020 | Rachel Roumeliotis

Before organisations rush to exploit AI, they must be aware of the blind spots that can develop

It is fair to say that artificial intelligence (AI) is everywhere. Newspapers and magazines are littered with articles about the latest advancements and new projects launched on the back of AI and machine learning (ML) technology. In the last year, it seems all of the necessary ingredients – powerful, affordable computing, advanced algorithms, and the huge amounts of data required – have come together. Consumers, businesses, and regulators alike are even beginning to accept the technology. It has been speculated that over the next few decades, AI could be the biggest commercial driver for companies and even entire nations.

However, as with any new technology, adoption must be thoughtful, both in how the technology is designed and in how it is used. Organisations also need to make sure they have the people to manage it, which is often an afterthought in the rush to achieve the promised benefits.

Before jumping on the bandwagon, it is worth taking a step back, looking more closely at where AI blind spots might develop, and what can be done to counteract them.

Blind spots in ML development

As the pace of AI and ML development intensifies alongside heightened awareness of cybercrime, organisations must ensure they take any potential liabilities into account. Despite this, survey evidence shows that security, privacy, and ethics are low-priority issues for developers when building their machine learning models.

According to O’Reilly’s recent AI Adoption in the Enterprise survey, security is the most concerning blind spot within organisations. In fact, nearly three-quarters (73 percent) of senior business leaders admit that they don’t check for security vulnerabilities during model building.

Additionally, more than half of organisations don’t consider fairness, bias, or ethical issues during machine learning development. Privacy is similarly neglected, with only 35 percent keeping it top of mind during model building and deployment.
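For teams that do want to build such checks in, even a lightweight fairness audit at evaluation time can surface problems early. Below is a minimal sketch, assuming a binary classifier and a hypothetical evaluation table with a protected-attribute column named "group"; it is illustrative only, not a method prescribed by the survey.

```python
import pandas as pd

def demographic_parity_difference(df: pd.DataFrame,
                                  prediction_col: str,
                                  group_col: str) -> float:
    """Gap in positive-prediction rates across groups.

    A value near 0 means the model predicts positives at similar
    rates for every group; a large gap warrants closer review.
    """
    rates = df.groupby(group_col)[prediction_col].mean()
    return float(rates.max() - rates.min())

# Hypothetical evaluation data: model predictions plus a protected
# attribute recorded purely for auditing purposes.
eval_df = pd.DataFrame({
    "prediction": [1, 0, 1, 1, 0, 1, 0, 0],
    "group":      ["a", "a", "a", "a", "b", "b", "b", "b"],
})

gap = demographic_parity_difference(eval_df, "prediction", "group")
print(f"Demographic parity difference: {gap:.2f}")  # 0.50 in this toy data
```

A check like this is cheap to run at evaluation time; the harder part, as the survey suggests, is making it a priority.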


Despite the lack of attention paid to security and privacy, the majority of development resources are focused on ensuring AI projects are accurate and successful. For example, 55 percent of developers mitigate against unexpected outcomes or predictions, which still leaves a large minority who don’t. Furthermore, 16 percent of respondents don’t check for any risks at all during development.
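Mitigating unexpected outcomes need not be elaborate either. Here is a minimal sketch, assuming a model that emits class probabilities, of asserting basic invariants on every batch of predictions before they reach downstream systems; the threshold is a placeholder, not a recommendation.

```python
import numpy as np

def check_predictions(probs: np.ndarray,
                      confident_frac_limit: float = 0.99) -> None:
    """Basic sanity checks on a batch of class-probability outputs."""
    if np.isnan(probs).any():
        raise ValueError("NaN in model output")
    if (probs < 0).any() or (probs > 1).any():
        raise ValueError("Probabilities outside [0, 1]")
    if not np.allclose(probs.sum(axis=1), 1.0, atol=1e-5):
        raise ValueError("Probability rows do not sum to 1")
    # Flag batches where nearly every prediction is maximally
    # confident -- often a symptom of a data or training bug.
    if (probs.max(axis=1) > 0.999).mean() > confident_frac_limit:
        raise ValueError("Implausibly confident batch")

check_predictions(np.array([[0.7, 0.3], [0.2, 0.8]]))  # passes silently
```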

This lack of due diligence likely stems from numerous internal challenges, but a big part of the problem is a shortage of the skills and resources needed to complete these critical aspects of the development process. Indeed, the most chronic skills shortages in technology are centred on ML modelling and data science. To make progress on security, privacy, and ethics, organisations urgently need to address this.

What can be done?

AI maturity and usage have grown exponentially in the last year. However, considerable hurdles remain that keep the technology from reaching critical mass. To ensure that AI and ML represent everyone and can be used safely, organisations need to adopt certain best practices.

One of these is making sure the technologists who build AI models reflect the broader population. This can be difficult from both a data-set and a developer perspective, especially in the technology’s infancy. It is therefore vital that developers are aware of the issues relevant to the diverse set of users expected to interact with these systems. If we want to create AI technologies that work for everyone, they need to be representative of all races and genders.

As machine learning inevitably becomes more widespread, it will become even more important for companies to adopt and excel in the technology. The rise of machine learning, AI, and data-driven decision-making means that data risks now extend far beyond breaches to include deletion and alteration. For certain applications, data integrity may end up eclipsing data confidentiality.
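One concrete, if simple, way to protect integrity rather than just confidentiality is to fingerprint training data and verify it before each run. The sketch below assumes the dataset is a single local file (the path and workflow are hypothetical); real pipelines would keep per-shard hashes in a manifest.

```python
import hashlib
from pathlib import Path

def file_sha256(path: Path) -> str:
    """Stream a file through SHA-256 so large datasets need not fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # 1 MiB chunks
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical workflow: record the hash when the dataset is approved,
# then re-check it before every training run to detect deletion or
# alteration of the data.
dataset = Path("train.csv")            # placeholder path
approved_hash = file_sha256(dataset)   # stored at approval time

if file_sha256(dataset) != approved_hash:  # re-run before training
    raise RuntimeError("Training data changed since it was approved")
```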

As AI and ML become increasingly automated, it’s essential that organisations invest the necessary time and resources to get security and ethics right. To do this, enterprises need the right talent and the best data. Closing the skills gap and taking another look at data quality should be their top priorities in the coming year.

Experts featured:

Rachel Roumeliotis

Vice President of Content Strategy
O'Reilly Media

Tags:

artificial intelligence, ethics, privacy, security