The career of Big Data World Frankfurt speaker Bruno Kramm has spanned gaming, music and politics. Ahead of his appearance at Messe Frankfurt on November 14, Techerati spoke to Bruno about his new organisation Digitalgrid, which focuses on the cultural and social effects of machine learning, artificial intelligence, and the comprehensive automation and digitisation of our society.
Do you remember your first encounter with AI?
My first contact with AI was in my childhood – very romantically, I have to admit – through science fiction authors like Stanisław Lem and Isaac Asimov's "Three Laws of Robotics", not to forget the immense impression HAL 9000 made on me in 2001: A Space Odyssey.
My first computer was a Commodore PET 3016, which my father – a renowned professor of mathematics – brought home from university when I was 13. He was deeply engaged in game theory and chaos theory, both of which had a significant impact on early AI research, and he conveyed that to me rather spectacularly. I went on to attend his entire lecture series on chaos theory and its influence on game theory.
But as an artist who specialises in electronic music, I was intrigued early on by Google's TensorFlow technology and the interesting possibilities artificial intelligence opens up for music composition and new forms of sound synthesis, such as the NSynth algorithm.
What are the biggest challenges AI brings to the table for the economy and society?
In this regard, the economy and society are often diametrically opposed. AI should be a common good per se: machine learning, the basic prerequisite of AI, depends on our data to successfully complete a learning process that we humans define according to our ethical understanding.
But more often than not, the economy treats the proprietary possession of data and knowledge under the traditional paradigm of production: products made from raw material. This contradicts not only elementary transparency and traceability, but also the copyright of data and the desire to protect privacy. In Europe especially, GDPR has defined some new thresholds.
Furthermore, in the hands of a few economic organisations, AI is open to misuse. Control requires transparency and distribution across a broad base of economic players, from the mid-market to large industry. Self-commitments such as Google's responsible AI practices, or joint ethics committees formed by companies like Amazon, Google, IBM, Microsoft and Apple, are a start.
Civil society too often associates AI with images of fear that stem far more from Hollywood dystopias than from startup labs. But the industry also needs society's acceptance, which can only be achieved through knowledge, if it is to apply new AI-based technologies in the interest of all mankind. Beyond the ethical aspects, it is also about mitigating the future social tensions caused by blazingly fast automation.