
A brief history of intelligence, and what it means for the future of AI

Thu 21 Mar 2019 | Dagmar Monett | Colin W.P. Lewis

In order to develop better AI technologies, we first need to understand what “intelligence” is, say Dagmar Monett and Colin W.P. Lewis of the AGI Sentinel Initiative

Understanding intelligence is one of the major scientific challenges of our time; however, the science of intelligence is very much in its infancy. By working closely with scientists and leading thinkers from multiple disciplines, we can start to help humanity to better understand intelligence.

A better understanding of intelligence will not only help us continue to develop artificially intelligent machines; it will also improve individuals’ situational awareness, decision making, and values, deepen people’s knowledge of each other and of our world, and thereby improve the quality of life for society overall.

An important distinction should be made, however – defining is not the same as understanding.

Seeing clearly

Although many definitions of intelligence have been proposed, our understanding of what intelligence is remains limited by the lack of breakthrough advances in intelligence research, especially in neuroscience and cognitive science.

The more we understand the functioning of the human brain, where intelligence is “located,” and how it relates to exogenous and endogenous factors such as social environment and genetics, the better we can envision methods and policies that enhance intelligence and, with it, improve human well-being. More intelligent individuals will be more successful citizens, capable of living more meaningful, conscious lives.

Our understanding of intelligence would be strengthened if we had well-defined definitions of it. The advantages would include a better-educated general public, more synergy between politics and society (since understanding what (machine) intelligence is will diminish fears of technological innovation), and a clearer grasp of other capabilities that could contribute to the development of advanced intelligent systems and human-computer interaction.

Complexities of consensus

Over the last 100-plus years the concept of intelligence has been defined on numerous occasions, by different fields, both formally and informally. There is a myriad of informal definitions of both human and machine or artificial intelligence (AI). AGISI research analysed more than one hundred informal definitions from the literature, and Shane Legg and Marcus Hutter (2007a) collected 71 definitions divided into three broad categories: collective definitions, psychologist definitions, and AI researcher definitions. Despite many attempts and suggestions, there is still no generally accepted definition of intelligence.

Defining intelligence has been a rather controversial topic in the AI community, and it has remained one of the field’s fundamental problems since its creation. This is a perceived stumbling block in the pursuit of understanding intelligence and building machines that replicate and exceed human intelligence, as addressed by Rodney A. Brooks (1991).

More than 25 years later, Michael Wollowski, Peter Norvig, and others (Wollowski et al., 2016) outlined in their study a stark difference of opinion with respect to the definition of AI. Very little has changed since. This stark difference is further reflected in our study “Defining (machine) Intelligence” (Monett & Lewis, 2018).

Competing conceptions

Two mainstream perspectives divide the conceptions into definitions of human intelligence and of artificial (or machine) intelligence. But there are also definitions of animal intelligence (non-human animals) and even intelligence in plants. And then there are socio-cultural meanings of intelligence (Ema et al., 2016), too.

In the AI field, there is a distinction between narrow (or weak) and general (or full) artificial intelligence, the former being focussed on solving specific problems without the capacity to generalize to other contexts and situations, a capacity that the latter has. The main debate in recent years, however, has centred not on a definition per se but on whether machines or systems can be developed that replicate or even surpass the most intelligent humans, a capability that has been called superintelligence.

There are also many definitions of machine intelligence. Although there have been numerous attempts, there is no agreed-upon working definition.

Legg and Hutter (2007b), for example, introduce a definition of intelligence that is “rigorously formalized in mathematical terms.” Our research, however, revealed that their definition is less widely accepted in the research community than others.

“The well-being of all humanity will improve as our understanding of what intelligence is develops”

Many definitions have failed to find wider acceptance because the intelligence they define cannot be measured. The question is not only how we define intelligence but also how we transfer the definition into practice: intelligent to what degree, and how do we measure it?

Other definitions are too anthropocentric and exclude non-human kinds of intelligence. Others are too machine-centred and give the impression that human intelligence in all its manifestations can easily be modelled, simulated, replicated, or even surpassed by machines. This, however, has repeatedly proven not to be possible in practice: for some researchers, it is a dubious, unattainable goal.

Societal implications

AI has a perception problem in the mainstream media, even though many researchers insist that supporting humanity must be the goal of AI. Clarifying the known definitions of intelligence and the research goals of machine intelligence should help us and other AI practitioners send a stronger, more coherent message to the mainstream media, policymakers, and the general public, and so help dispel myths about AI.

Our research has shown that reaching consensus on defining AI (and by default the goal of AI) is extremely challenging but not insurmountable. For example, 58.6 percent of respondents to the AGISI survey disagreed with the proposition “It will never be possible to reach an agreement upon a definition of AI.” Although this is a narrow majority, it gives grounds for rational optimism that a consensus on defining machine intelligence can be reached.

Furthermore, in response to the proposition “A definition of intelligence is self-evident,” respondents were highly sceptical, with some 80.6 percent indicating their disagreement. It was encouraging to note some favour towards the proposition “A definition of intelligence should differentiate between human and machine intelligence,” with a slim plurality of 48.2 percent of respondents in agreement versus 41.9 percent disagreeing.

A direct implication of the study’s findings, especially for the AI discourse, could be the need to disseminate beneficial applications and possible future uses of AI that help counteract the misleading news and unwarranted fears surrounding it.

Perhaps more importantly, the cognitive biases present in the arguments respondents gave when justifying their level of agreement with definitions from the literature imply an added responsibility for developers and marketers when designing and rolling out their systems, especially regarding how these systems might be received and used by, and might affect, their end users.

With respect to the definitions of intelligence, there are, as may be expected, strong arguments in support of and against different definitions. Experts clearly view, understand, react to, and judge definitions of machine and human intelligence differently.

Cutting edge

We collected 338 new definitions of intelligence suggested by survey participants that are helping us to shape the boundaries of the current discourse on intelligence.

One of the big challenges of our research is applying text analysis tools (Lewis & Monett, 2018), together with text mining and natural language processing techniques, to the large body of expert opinions we received, in order to extract meaningful information from the unstructured data.
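As a minimal, purely illustrative sketch of what such a text-mining step can look like, the snippet below surfaces recurring terms in free-text definitions using TF-IDF with scikit-learn; the sample definitions and parameters are placeholders, not the actual AGISI pipeline.

```python
# Illustrative sketch only: this is not AGISI's actual pipeline, and the sample
# "definitions" below are hypothetical stand-ins for respondents' free-text answers.
from sklearn.feature_extraction.text import TfidfVectorizer
import numpy as np

definitions = [
    "Intelligence is the ability to achieve goals in a wide range of environments.",
    "Intelligence is the capacity to learn from experience and adapt to new situations.",
    "Machine intelligence means solving problems that would otherwise require human intelligence.",
]

# Weight uni- and bi-grams by TF-IDF so that terms shared across many answers,
# but not mere stop words, rise to the top.
vectorizer = TfidfVectorizer(stop_words="english", ngram_range=(1, 2))
tfidf = vectorizer.fit_transform(definitions)

# Rank terms by their total TF-IDF weight across all answers.
scores = np.asarray(tfidf.sum(axis=0)).ravel()
terms = vectorizer.get_feature_names_out()
for term, score in sorted(zip(terms, scores), key=lambda t: t[1], reverse=True)[:10]:
    print(f"{term}: {score:.3f}")
```

In practice, more elaborate techniques (topic modelling, clustering, or qualitative coding) would be layered on top of such a step to map the discourse rather than merely count terms.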

For example, we would like to find answers to the main research questions that drive our research: (i) What makes the most accepted definitions of human and machine intelligence the best agreed upon and why? (ii) Should a definition of intelligence differentiate between human and machine intelligence and, if so, do the opinions provided when respondents justify their selection support this differentiation? (iii) Which are the cognitive and behavioural capabilities that shape the current discourse on both intelligence and AI that should be considered when defining intelligence?

[Figure: Three research questions that drive AGISI’s research, and the pool of expert opinions about different definitions of human and machine intelligence (HI and MI, respectively).]

Hence our research is focused on finding the boundaries of the current discourse on intelligence by analysing people’s opinions, studying where there is consensus and where there is not, and providing a map of the capabilities that might be important to consider when defining (machine) intelligence.

Furthermore, we have developed a working framework of quality criteria for definitions that should guide researchers and practitioners when defining intelligence. These quality criteria are inspired by an exhaustive literature review of the properties that definitions should fulfil in order to be considered “good” definitions.

As a result, we are suggesting a catalogue of quality criteria intended to serve as guidelines or best practices when formulating and evaluating definitions of intelligence. For example, well-defined definitions of intelligence should be clear and easy to understand; they should not include contradictory statements; and, ideally, they should be ostensive, in that they exemplify cognitive abilities or functions that indicate intelligence.
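As a purely illustrative sketch, such criteria could be encoded as a simple checklist applied to each candidate definition; the structure and scoring below are hypothetical assumptions, not the published catalogue itself.

```python
# Hypothetical encoding of a definition-quality checklist. The criterion names echo
# those discussed above (clarity, consistency, ostensiveness); the structure and
# scoring are illustrative assumptions, not the actual AGISI catalogue.
from dataclasses import dataclass

@dataclass
class DefinitionAssessment:
    text: str
    is_clear: bool        # easy to understand for a non-specialist reader
    is_consistent: bool   # contains no contradictory statements
    is_ostensive: bool    # exemplifies observable cognitive abilities or functions

    def satisfied_criteria(self) -> int:
        """Count how many of the three criteria the definition meets."""
        return sum([self.is_clear, self.is_consistent, self.is_ostensive])

# Example with a placeholder definition.
candidate = DefinitionAssessment(
    text="Intelligence is the ability to learn, reason, and adapt to novel situations.",
    is_clear=True,
    is_consistent=True,
    is_ostensive=True,
)
print(f"{candidate.satisfied_criteria()}/3 quality criteria satisfied")
```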

The inclusion of this criterion is a direct result of our analysis of the responses to the AGISI survey and perhaps one of the most important findings from the collection: survey respondents indicated more agreement with ostensive definitions of intelligence. If the quality criteria we are proposing were consistently followed, it would be much easier not only to convey what is meant by intelligence but also to reach a common public understanding of what it is.

Looking forward

The response from the community has been highly positive, with a clear recognition that it is important to define machine intelligence and, by default, the goals of AI. Indeed, the well-being of all humanity will improve as our understanding of what intelligence is develops. The question remains how to define intelligence so that, along with a consensus among experts from different fields and backgrounds, it helps develop better AI-based technologies to improve well-being.

We carried out a thorough analysis of thousands of experts’ opinions about defining human and machine intelligence. Our research suggests that “good” definitions of (machine) intelligence fulfil certain quality criteria that allow for both better insights into intelligence and a wider understanding of the current discourse on AI. Ultimately, improving the quality of life for society overall will necessarily mean getting clarity about what AI is and is not.

References

Brooks, R. A. (1991). Intelligence without representation. Artificial Intelligence 47:139–159.

Ema, A. et al. (2016). Future Relations between Humans and Artificial Intelligence: A Stakeholder Opinion Survey in Japan. IEEE Technology and Society Magazine 35(4):68–75.

Gottfredson, L. S. (1997). Mainstream science on intelligence: An editorial with 52 signatories, history, and bibliography. Intelligence 24:13–23.

Legg, S. and Hutter, M. (2007a). A Collection of Definitions of Intelligence. In B. Goertzel and P. Wang (eds.), Advances in Artificial General Intelligence: Concepts, Architectures and Algorithms 157:17–24, IOS Press, UK.

Legg, S. and Hutter, M. (2007b). Universal Intelligence: A Definition of Machine Intelligence. Minds and Machines 17(4):391–444, Springer.

Lewis, C.W.P. and Monett, D. (2018). Text Analysis of Unstructured Data on Definitions of Intelligence. In Proceedings of The 2018 Meeting of the International Association for Computing and Philosophy, IACAP 2018, pp. 1–12, Warsaw, Poland.

Monett, D. and Lewis, C. W. P. (2018). Getting clarity by defining Artificial Intelligence–A Survey. In Müller, Vincent C. (Ed.), Philosophy and Theory of Artificial Intelligence 2017. SAPERE 44:212–214. Berlin: Springer.

Wollowski, M., et al. (2016). A Survey of Current Practice and Teaching of AI. In D. Schuurmans and M. Wellman (eds.), Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, AAAI’16, Phoenix, Arizona, pp. 4119–4124, Palo Alto, CA: AAAI Press.

Experts featured:

Dagmar Monett

Co-founder
AGI Sentinel Initiative

Colin W.P. Lewis

Co-founder
AGI Sentinel Initiative
