
Why I am not afraid of singularity anytime soon (Part 3): Conceiving super intelligence

This series "Why I am not afraid of singularity anytime soon" takes a closer look at potential flaws in our human understanding of singularity and why I think we should all be more concerned about other crises of humanity than about super intelligence. If you missed the first two parts on human stupidity and an evolutionary definition of intelligence, check them out here: part 1, part 2.


 

Finally, we will delve deeper into the question "what is super intelligence, anyway?". To do so, we need to become intelligence architects and think of ways to construct "intelligent systems" artificially, without relying primarily on biochemical processes. We use our knowledge of the natural human brain as an analogy for the development of a truly intelligent artificial system - and ask ourselves: what do we actually mean by a superlative form of intelligence? Can we be right in reasoning about super intelligent intentions, or is that a false assumption to start with?


What is super intelligence anyway?

In Socratic fashion, let us formalize the core questions introduced last time that we need to answer to come closer to a semantic understanding of super intelligence:


Should super AI have any existential motivation?


Previously, we identified intelligence as a quality of something that can only be defined from an evolutionary perspective. In other words: a thing can only be "intelligent" if its behaviour contributes to its survival in an efficient and effective way. This implies that any existing thing needs an intrinsic motivation to keep existing for it to display intelligent behaviour. Hence, any AI system needs a higher goal for its intelligence to serve.


Current machine learning models are already trained to achieve one particular goal. In reinforcement learning, the goal is to optimize reward. In supervised learning, it may be maximizing overall accuracy or precision. In unsupervised learning, it may be minimizing some form of entropy. Such goals are tied specifically to the contexts in which these machine learning models are used. The specificity of machine-learned patterns to unidimensional goals is what keeps current AI relatively stupid: the models do not generalize to different contexts.
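
To make this concrete, here is a minimal sketch in Python (using only numpy) of the kind of single, fixed objective each paradigm optimizes. The function names and toy data are my own illustrative stand-ins, not taken from any framework - the point is simply that each one encodes exactly one notion of "doing well".

import numpy as np

def rl_objective(rewards):
    """Reinforcement learning: maximize (cumulative) reward."""
    return np.sum(rewards)

def supervised_objective(y_true, y_pred):
    """Supervised learning: maximize overall accuracy against labels."""
    return np.mean(y_true == y_pred)

def unsupervised_objective(cluster_probs):
    """Unsupervised learning: minimize some form of entropy over cluster assignments."""
    p = np.clip(cluster_probs, 1e-12, 1.0)
    return -np.sum(p * np.log(p), axis=1).mean()

print(rl_objective(np.array([1.0, 0.5, 2.0])))                         # higher is better
print(supervised_objective(np.array([0, 1, 1]), np.array([0, 1, 0])))  # higher is better
print(unsupervised_objective(np.array([[0.9, 0.1], [0.5, 0.5]])))      # lower is better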


To overcome this issue and go from specifically trained "narrow AI" to general AI - AI that has human-level intelligence - we can argue for either of the following (non-exhaustive) options:

  1. Machine learning algorithms need a meta-algorithm that learns how to arbitrate between different optimization strategies, given different contexts;

  2. Machine learning goals need to be abstract enough and hierarchically organized, so that achieving various goals makes sense from different contextual points of view.

In this world, there are different types of problems. One strategy for solving problem A is not necessarily the best strategy for solving problem B. Choosing which learning strategy to use to achieve a goal seems to be a natural ability of human beings.


An extremely analytical person might first scrutinize each detail of the problem at hand and then decide which way to go. Another person might prefer brute force, aggressively attempting to impose their will on the issue and seeing what worked best in hindsight. Similarly, we may need AI to be able to "decide" whether to use an unsupervised learning approach or reinforcement learning to achieve a goal - or a combination of the two. Super intelligent AI needs to be able to come up with its own learning strategies, so it can arbitrate between an even bigger arsenal of techniques to solve problems in significantly more efficient and effective ways than, for instance, humans can.
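
To picture option 1 from the list above, here is a deliberately naive sketch of such an arbiter. A real meta-algorithm would of course learn this mapping from experience rather than hard-code it, and the strategy names are placeholders:

def choose_strategy(context):
    """Pick a learning approach from a (tiny) arsenal, based on what the context offers."""
    if context.get("reward_signal"):
        return "reinforcement_learning"   # optimize reward over time
    if context.get("labeled_data"):
        return "supervised_learning"      # optimize accuracy against labels
    return "unsupervised_learning"        # fall back to structure-finding / entropy minimization

print(choose_strategy({"reward_signal": True}))  # -> reinforcement_learning
print(choose_strategy({"labeled_data": True}))   # -> supervised_learning
print(choose_strategy({}))                       # -> unsupervised_learning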


The ability to display new, optimized learning strategies alone, however, is not enough reason for an entity to come into existence. Why would anything be learning anything otherwise? Enter option 2: having goals that are applicable in different contexts.



Humans set their own individual goals in alignment with evolutionary principles. The primary objective is to survive, to keep existing. All individual human behaviour is then guided by smaller, more practically defined goals that serve this primary objective - much like an inverted Maslow hierarchy: finding nutrients, creating shelter, earning money, maintaining healthy relationships, hitting annual growth percentages, obtaining a degree, getting up in time for work and so on. Here we can think of each goal as a problem: a state that has not been achieved yet. Such goals are inherent to the contexts in which humans operate. That is, human goal hierarchies are constrained by the complexity and attainability that human intelligence allows for.
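
As a purely illustrative aside (not a model of human cognition), such a hierarchy can be pictured as a simple tree, with the primary objective at the root and ever more practical sub-goals beneath it. The entries below are arbitrary examples from the list above:

goal_hierarchy = {
    "survive": {
        "find nutrients": {"earn money": {"obtain a degree": {}, "get up in time for work": {}}},
        "create shelter": {},
        "maintain healthy relationships": {},
    }
}

def print_goals(node, depth=0):
    """Walk the hierarchy to show how each practical goal serves the primary objective."""
    for goal, subgoals in node.items():
        print("  " * depth + goal)
        print_goals(subgoals, depth + 1)

print_goals(goal_hierarchy)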


To enable super intelligent AI, this AI needs to be able to generate its own goal hierarchies as well, independent of human goal definitions - goals that are relevant to the context of super intelligent entities. Super intelligent AI cannot exist as long as humans control all of its goals. Hence, the answer to the core question whether super AI should have any existential motivation seems to be: yes, super intelligent AI needs an intrinsic motivation to exist, as a condition for displaying any super intelligent behaviour that gets it to achieve self-motivated desirable states. As long as AI cannot have its own existential goals, any AI system will be just as intelligent (read: stupid) as any other man-made technology. Like just another bred dog, walked by its human owner.


Should AI directly reflect our biological functions to become super intelligent?


Whether AI should have the same functional biological structures as humans is often discussed using the analogy of birds and airplanes, both of which can fly. The former developed through ages of evolution, using purely biological structures to move as if lighter than air. The latter is a mechanical apparatus that uses advanced aerodynamic engineering and powered propulsion to get off the ground.


Although the airplane's construction is inspired by the aerodynamic properties of bird wings, the mechanics are completely different. Moreover, its purpose and desired functionalities differ from those of birds as well. Similarly, AI may not need to be made of biological matter to be a functioning "tool" for human beings.



Given that we just concluded that super intelligent AI cannot be a tool merely serving human beings, because it needs to be able to set its own goals, the short answer to whether AI should directly mirror our biological functions to become super intelligent is no, not necessarily. Like bacteria, super intelligent AI may exist in completely different dimensions that do not necessarily intersect with those in which we operate. Still, there are two things worth discussing here that I would call attributes of intelligent organisms instead of pure biological functions: 1. emergent properties and 2. empathy.


From Wikipedia:

"...emergence occurs when an entity is observed to have properties its parts do not have on their own, properties or behaviors which emerge only when the parts interact in a wider whole. "

Put differently, emergent properties are unanticipated behaviours or characteristics of technology when it is deployed in interactive ecosystems. Interpreted in an evolutionary way, such semi-random phenomena continue to exist if and only if they prove to be adaptive in an environment over time. This goes hand in hand with what is described as exaptation: phenomena that serve one function in one context can turn out to serve new or additional functions in another context. A funny example is the variation in human nose shapes: some turn out particularly "adaptive" for supporting glasses in the modern day.


Biological life, human beings, our consciousness and the subsequent goals we set can all be regarded as emergent properties of fundamental natural processes. Super intelligent AI will show emergent properties as well. We as human beings may not directly understand these, even if we can systematically study the behaviour of super intelligent AI - unless we find a common way to communicate with it.


To some extent, an example of emergent properties in currently developed AI can be found in the hidden layers of deep neural networks. During a learning process, the configuration of these layers is constantly updated in ways we do not completely understand (yet). This lack of explainability forms another major issue in current AI developments and is often the root of a general fear of AI - which is another paradoxical hint as to why I do not fear singularity anytime soon. We are not ready, and possibly not willing, to let go of control over AI's exact functioning for super intelligence to emerge.
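
As a toy illustration of that opacity - with random weights standing in for a trained network - the hidden representation of even a tiny two-layer network is perfectly inspectable, yet its individual numbers explain very little on their own:

import numpy as np

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(4, 8)), rng.normal(size=(8, 2))  # stand-ins for "learned" weights

def forward(x):
    hidden = np.tanh(x @ W1)   # the hidden representation that emerges during training
    output = hidden @ W2       # the prediction we actually care about
    return hidden, output

hidden, output = forward(rng.normal(size=(1, 4)))
print(hidden.round(2))  # inspectable, yet not directly interpretable
print(output.round(2))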


Another characteristic we typically find in intelligent organisms such as primates and dolphins is the ability to empathize with others. Even if these "others" belong to different species...



To be empathetic, one first needs to be self-aware. Next, one needs to be able to emulate the experience of another entity by mirroring their perceptual standpoint. We, for instance, tend to feel empathy for mammals because we have a sense of what it feels like to be one ourselves. We have the same functional senses of smell, sight, taste, hearing and touch with which to interact with the world.


Akin to being the "airplane", AI does not necessarily need to have eyes, a nose and a mouth. Yet, once it is part of an ecosystem, it needs ways to interact with that environment. Let super intelligent AI exist in a virtual environment and it will need to devise its own ways of sensing what is happening in that virtual world - to understand the other entities in there, their goals, as well as any other process that runs in that environment. To form this understanding and have meaningful interactions, the AI needs to be empathic. To this end, it needs to develop sensitivity to the same modalities in which other entities in that environment operate.


Thus, a "super intelligent" entity does not need to be made of mechanically identical structures as biological life. Yes, it does need the ability to emulate the functionalities required to interact in the same dimensions as other entities in the ecosystem that it is dependent on. Whether these are biological functionalities then depends on whether the super intelligent AI needs to interact with biological entities. For such emulation to happen, super AI needs emergent properties and empathy.


Super intelligence needs incentives to cooperate with us humans


When we think about the most intelligent people who ever lived, most of us probably think of people like Aristotle, Einstein and Curie, who were driven by their curiosity about nature. None of them were obviously aggressive people. So why should we assume any super intelligent AI to be completely destructive to its environment? Just because a successfully proliferating species (e.g. Homo sapiens) tends to be so? This section briefly touches on a research avenue that focuses on how human beings and super intelligent AI could coexist: AI value alignment.



Value alignment is a relatively new interdisciplinary research area that looks at how well the objectives of AI agree with human values. The underlying question here is: can AI have goals that are aligned with human existence, or with that of the universe at large? If so, how can human values be embedded in, or understood by, AI systems to prevent doom scenarios of AI domination? Perhaps we should give AI a purely altruistic primary objective for its existence: to help all life on earth continue existing peacefully.
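
One way to picture what "embedding human values" could mean is to reduce it to a weighted objective that trades the system's own task score off against a score for compatibility with human values. The scoring functions and the weight below are hypothetical placeholders, not an actual alignment method:

def aligned_objective(action, task_score, human_value_score, alignment_weight=0.5):
    """Score an action by its task value AND its compatibility with human values."""
    return (1 - alignment_weight) * task_score(action) + alignment_weight * human_value_score(action)

# Toy example: an action that maximizes the task but harms human interests scores poorly overall.
task_score = lambda a: {"exploit": 1.0, "cooperate": 0.7}[a]
human_value_score = lambda a: {"exploit": 0.0, "cooperate": 1.0}[a]

for action in ("exploit", "cooperate"):
    print(action, round(aligned_objective(action, task_score, human_value_score), 2))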


A super intelligent system may not be obedient to objectives set by less intelligent species. It can, however, have the empathetic ability described above to understand the different objectives of the different entities in its ecosystem - for instance, to take into account the relevance of existing alongside lower-level species. As long as there are interdependencies, (super) intelligent entities will understand that they are part of a bigger world that defines their reason to continue existing.

 

In summary, true super intelligence will not be allowed by humans to exist unless there is a direct dependency between AI and biological life that incentivizes collaboration towards a common objective, or unless we find a way to satisfy all of AI's fundamental needs, so that the surplus energy it has can be spent on solving problems of common interest.


Bonus food for thought: is there an objective ground truth?


Does God definitely exist? Is string theory not just a theory? Is there any goal to having biological life at all? What is consciousness? Will AI help solve human stupidity?


Such questions seem to emerge particularly from the human mind and are still left unanswered. Maybe super AI can help uncover (or accelerate uncovering) the natural laws of existence. For this to be possible, we need to assume that there is one objective truth about how the universe is conceived, how it works and what intelligent behaviour is. Maybe this assumption is wrong and Truth is only relative to context. These "super intelligent" wonderings may be just the right type of challenge for super intelligent AI to solve... Super AI to the rescue for world peace!


Final remarks


This post concludes my blog series on "Why I am not afraid of singularity anytime soon". We started by understanding the more pressing current issues that cause human stupidity. In part 2 we arrived at an evolutionary definition of intelligence. Here, we conceptualized what super intelligence actually is and why it is not necessarily a dark entity that is going to wipe out all life. Altogether, this should give you my view on why such doom scenarios are not realistic and why we should be worrying more about other issues we can solve more easily.


In all fairness, I do not believe singularity will eventually happen in the way we talked about super intelligence in this series - at least not in my lifetime. Human beings tend to be shortsighted control freaks after all, which makes me doubt whether investing in an autonomous AI system would be to their liking. If it is going to happen at all, it may require years of developing just a new language for super intelligent AI and human beings to interact. Creating technology-enhanced human bodies, i.e. cyborg-like creatures and designer babies, would be a more likely scenario in my opinion, since they directly enhance an individual's abilities - which may arguably be even scarier than having super intelligent AI governing us for the better.


Nobody can precisely predict the future. Super intelligent AI may be able to. Maybe not a bad idea to outsource solving problems caused by human stupidity to a much smarter, god-like system. What do you think?



They do their dance in the land of bytes.

Should the Truth be held in

binary processes, their priors are destined indeed

to present nothing but a lonely dancer -

flirting with gravity in a futile attempt.

Critics that watched would accordingly say:

"What a pitying fight to exist!"


