
Finding the key to the missing link in AI

Fri 1 Feb 2019 | Kyle J. Davis

Practical AI is here. Image identification, voice assistants and even the lowly thermostat are being infused with AI. People who aren't remotely interested in technology are taking advantage of products and services that leverage AI. However, there is a missing link in how we process and serve AI-produced data.

Non-AI data has been with us for a long time, but AI-sourced data is relatively new and immature from a software perspective. Libraries of AI models have dramatically lowered the barrier to building software that can serve predictions; however, this is generally implemented at the application layer. Today this is the reality, but if it were any other form of data, it wouldn't be acceptable.

Database-level abstraction

Imagine a scenario where all of a user's data was kept in a database, except one piece that lived only in the application layer. A software architect would look at this and ask many questions. What happens if that service goes down? How do you scale it if the database and application layer scale differently? How do you ensure anything is written to both layers in an unbreakable way (data atomicity)? But for AI predictions, we don't consider this… yet.

When you think about how data predictions work from the perspective of a database rather than from a data science or developer background, your expectations shift. Let's consider an image classification problem: does a picture contain a head of cabbage? You accept the photo as a field of bits, run the classification operation and get back a confidence score on whether it's a head of cabbage. The image here can be thought of as a query rather than an input for processing, the classifier as the table and, of course, the output as the result of the query.
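
To make the analogy concrete, here is a minimal sketch in Python. Everything in it (the DataLayer class, its predict method and the stubbed cabbage classifier) is hypothetical and exists only to illustrate the image-as-query, model-as-table, score-as-result framing; it is not a real database API.

    from dataclasses import dataclass
    from typing import Callable, Dict

    # Hypothetical stand-in for a data layer that can serve predictions.
    # "predict" is an illustrative method name, not a real library call.
    @dataclass
    class DataLayer:
        models: Dict[str, Callable[[bytes], float]]  # model name -> scoring function

        def predict(self, model: str, subject: bytes) -> float:
            # The subject (the image bytes) plays the role of the query,
            # the named model plays the role of the table,
            # and the confidence score that comes back is the result.
            return self.models[model](subject)

    # Usage sketch: the application treats classification like any other query.
    db = DataLayer(models={"cabbage_classifier": lambda image: 0.97})  # stubbed model
    confidence = db.predict("cabbage_classifier", b"<jpeg bytes>")
    print(f"Cabbage confidence: {confidence:.2f}")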

Without this kind of database-level abstraction, the application becomes vastly more complicated, requiring the developer to account for many data handling scenarios that are unusual for applications:

  • What happens if the image classifier dies before the entire image is accepted?
  • Do you have to run the prediction every time the same image is presented?
  • How do you make sure the application serving the prediction isn't a single point of failure?
  • How do you scale the image classifier?
  • How do you manage changes to the image classifier?
  • How do you manage multiple versions of the image classifier?
  • Can you store the result so you don't have to run the classifier again? (A sketch of this appears after the list.)
The list goes on. To produce a stable, reliable and fast cabbage identification service, you'd need good answers to all of these.
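
As an illustration of what even one of those questions costs at the application layer, here is a rough Python sketch of caching prediction results keyed by a content hash so the classifier isn't re-run for the same image. The classify callable and the in-memory dictionary are purely illustrative stand-ins; a data layer that served predictions natively could absorb this bookkeeping entirely.

    import hashlib
    from typing import Callable, Dict

    # In-memory stand-in for a result store; in a real system this bookkeeping
    # would have to live somewhere durable, alongside the rest of the data.
    _prediction_cache: Dict[str, float] = {}

    def classify_with_cache(image_bytes: bytes,
                            classify: Callable[[bytes], float]) -> float:
        # Key the cache on a hash of the image content so that identical
        # images are only ever classified once.
        key = hashlib.sha256(image_bytes).hexdigest()
        if key not in _prediction_cache:           # cache miss: run the model
            _prediction_cache[key] = classify(image_bytes)
        return _prediction_cache[key]              # cache hit: skip the model

    # Usage sketch with a stubbed classifier.
    score = classify_with_cache(b"<jpeg bytes>", lambda image: 0.97)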

These happen to be exactly the questions that databases are well suited to answer; any good database already has answers to all of them, because databases require this type of rigour to satisfy enterprise-grade demands.

Talent gap

Part of the problem has been that developing AI solutions is not a run-of-the-mill development activity. Years ago it was a highly exotic skill held only by the upper echelons of the developer community, yet today everyone needs an AI strategy and execution plan. To put that plan and strategy into motion, there is simply a talent gap: even wild salaries can't always recruit the right AI talent.

Databases, however, are an essential part of any developer's portfolio. Virtually any useful service requires some use of a database. Indeed, building a Create, Read, Update and Delete (CRUD) application is an exercise most developers are tasked with early in their careers. Learning how to connect to a database, request data and get that data back is therefore essential.

The AI talent gap is a complex, long-term problem; however, part of it can be mitigated by moving AI predictions into the database layer.

What once required a deep understanding of algorithms and obscure libraries and toolkits can be boiled down to treating AI predictions as you would any database query: take your subject (an image, raw data, etc.) and treat it like a query.

Your model takes the role of the table, and what you get back is your prediction. Predictions become no more complicated than a CRUD application.

The final factor is performance. For an entire class of applications, AI is only a novelty unless it can respond within the 'instant' threshold. While human reaction time varies, developers know that most people perceive anything under around 100ms as instantaneous, while 500ms is an eternity when it comes to reaction flow.
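
As a rough, illustrative check of that budget, the Python sketch below times a single prediction against the thresholds quoted above; the predict callable is a hypothetical stand-in for whatever actually serves the prediction.

    import time
    from typing import Callable

    INSTANT_MS = 100   # roughly where a response still feels instantaneous
    ETERNITY_MS = 500  # roughly where a response starts to feel like an eternity

    def timed_predict(predict: Callable[[bytes], float], subject: bytes):
        # Time one prediction and compare it against the perception thresholds.
        start = time.perf_counter()
        result = predict(subject)
        elapsed_ms = (time.perf_counter() - start) * 1000
        verdict = ("instant" if elapsed_ms <= INSTANT_MS
                   else "noticeable" if elapsed_ms <= ETERNITY_MS
                   else "an eternity")
        return result, elapsed_ms, verdict

    # Usage sketch with a stubbed classifier.
    score, ms, verdict = timed_predict(lambda image: 0.97, b"<jpeg bytes>")
    print(f"score={score:.2f} latency={ms:.1f}ms ({verdict})")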

Take voice assistants, for example. While they are getting better, interacting with one still isn't a fluent conversation; it's more akin to barking commands. Until we pass the instant data threshold, we won't really advance past the novelty phase of AI applications. Scaling any service to do anything non-trivial within instant response-time thresholds is tricky, much less an AI prediction.

In the database world, performance optimisation can be baked in: rather than building a bespoke high-performance application, the performance work can be generalised so that operations complete within very tight envelopes.

The need is clear – AI, or at least the prediction end of AI, should live in the database layer with all the rest of the data. This means that AI predictions can be commoditised, built by sharp but not specialised developers, and scaled to meet the next generation of AI performance demands.

Experts featured:

Kyle J. Davis

Head of Developer Advocacy
Redis Labs
