A novel legal construct is seeking to balance the dual needs of fast-paced innovation and privacy
AI and analytics have a “seesaw” problem. Innovation in AI requires access to rich, expansive datasets, data that a raft of entities are creating all the time. But personal and corporate data, if not handled correctly, risks exposing individuals or organisations. Limit the amount of data entering the innovation funnel, however, and the seesaw tips the other way, hamstringing innovation.
A new legal construct is seeking to balance the dual needs of fast-paced innovation and privacy by allowing data owners to cross-pollinate data securely and ethically. It’s called a Data Trust, and interest in these trusts is rising wherever data sharing is important, such as in smart cities, health data ecosystems or collaborations between business and academia.
“A Data Trust is essentially a legal construct that enables various data providers to share their data in a secure, compliant and ethical way,” explains George Zarkadakis, digital lead at Willis Towers Watson, where he is leading the development of a Data Trust to connect a number of the global advisory firm’s clients.
The business demand for Data Trusts, a term first coined by the UK Open Data Institute, can be traced to AI’s centrality to the 21st-century economy. This economy is built on data, but the foundations are rocky. Good quality and varied data is scarce, and the data landscape is fragmented. Where data is up to scratch, a range of regulatory, technological and ethical issues keep it siloed, shielded and unable to blossom.
“To survive and thrive in the AI economy, companies must make considerable investments in how they collect, store and handle data in a collaborative way,” says Zarkadakis, who offers the example of an airline, a hotel chain and an insurer: