Within the domains of science and technology, no field grapples with philosophical problems more than artificial intelligence. AI researchers attempt to physically express human attributes such as intelligence, the mind, decision making, interaction, a sense of self and so on. In doing so, they have inadvertently made the greatest recent strides in ontology and the philosophy of mind. That’s great! But there are still many miles to go.

“Intelligence” is turning out to be a remarkably more complex concept than originally thought—which isn’t a huge surprise. But AI researchers have proven themselves quite capable of working with complexity. The problems lie in other, more difficult descriptors of human intelligence such as, say, ambiguous, fluid, enigmatic, diffuse, contextual and idiomatic.

Cut to the emergence of another popular field of scientific study at the moment: neuroscience. Here, many scientists have concluded that humanity is nothing more than neurons firing. We are machines. We are flesh and bone rather than steel and silicon but we are machines nonetheless.

Naturally, this is an appealing idea to many AI researchers. They hope that as neuroscience’s mechanistic materialist worldview advances, it will prop up the sinking foundation of Artificial Intelligence. Spurred on by this, many AI researchers’ answer to any hurdle is simply: more computing power! More code! Yet the amount of computing power required to approach even an infinitesimal speck of all that it is to be human grows ever more astronomical, despite smaller, faster processors and more efficient memory technology. This alone is almost enough evidence to support the notion that “the mind” and “the self,” though certainly aware and a part of the physical world, are still distinct from it somehow.

None of this is to say that a comprehensive ontology cannot be structured—quite the opposite, actually. AI researchers are simply approaching it from the wrong angle. Their first step should be to acknowledge a reality emerging from the physical, something my TOP colleagues and I call psychosocial reality. This will be difficult for them, and understandably so. Scientists and technologists’ domain is the physical world. And their philosophy that the physical is all that exists has found a rather strong foothold in the social consciousness (slightly ironic, as this foothold is, itself, a psychosocial entity). Opposing views in the sciences are often rejected outright (see Rupert Sheldrake’s censored TED talk), leaving funding and social support in short supply for those who might like to inquire into anything counter to mechanistic materialism.

Beyond that, it would take a realistic inquiry into what it is to be human, as being human is the necessary infrastructure and context. But mechanistic materialists have constructed a completely inaccurate facsimile of what human intelligence is, and they will not acknowledge that intelligence cannot be taken out of the context of other features of living. Furthermore, we still seem to be mired in the thinking that “intelligence” is the ability to think logically and mathematically, which it is not.

Perhaps it would be best for AI researchers to supplement their notion of intelligence. Perhaps it would be better for pioneers in the field of artificial intelligence to consider something more like artificial intuition—a nearly synonymous phrase would be “artificial awareness.” Aware machines are what we’re after, aren’t they?

Intuition and subconscious understanding of context; the ability to change one’s frame of reference based on the changing situation; making decisions despite a lack of information—or fuzzy information; the ability to interact with and respond to emotions; to understand values—or not understand them; to seek out or understand hidden agendas and subtext—or the truth that is not shown—and scores of other similar everyday phenomena are what constitute human intelligence. It can be difficult for humans, as self-aware as we are, to understand these things for ourselves. Making a machine that can do it seems a near-impossible task.

It isn’t as though AI researchers have completely avoided or neglected to take these things into consideration. Have a look at Wikipedia’s entry on Artificial Intelligence. Brief synopses of the challenges facing AI research are given, including common sense, planning, learning, language processing, perception, social intelligence and creativity. And some encouraging advances have been made in artificial general intelligence, which, according to Wikipedia, “does not attempt to simulate the full range of human cognitive abilities.”

Still, more understanding of psychosocial reality is needed. It is the reality inhabited by the mind. The Taxonomy of Human Elements in Endeavour (THEE) has already done a good bit of the legwork here. Giant steps have been taken to structure psychosocial reality, to order and group and show the relationship between the thousands of elements that combine and interact to make up what it is to be human beyond the physical. AI research would most certainly have to interact with a structure of this sort to grasp what has eluded it all this time. We certainly hope they will.

Scientific inquiries into the nature of THEE’s structures have only just begun, though more patterns emerge all the time and a wider view becomes more possible every day. For AI to make use of THEE, it will likely be necessary that the THEE architecture be better understood. Perhaps a combined effort would produce results, and THEE and AI could evolve together. We welcome any input.
