What Kind of Smart is it?

Speculative Hurdles Towards Evaluating Artificial General Intelligence

My general assumptions and methods of thinking are here: 27 Premises, Silent Assumptions to Drive Systemic Thinking.

Is the Emergence of Consciousness in AI a Possibility or a Misunderstanding?

In this post I intend to deal only with issues related to speculation about consciousness as we know it: an embodied event, not distinct from the bodies and contexts in which it arises, operating as part of an assemblage of systems. These initial speculations deal with embodiment, evolutionary dynamics, and the potentials of both biological and artificial intelligences, with the intention of clarifying my premises, which are rooted in prior research and introspection.

Let’s begin with a few of the questions that always rattle around in my head whenever the topic of AGI comes up, which lately is quite often:

What do we mean when we say “conscious” vs. “intelligent”? 

How do we approach the topic of consciousness — not as a bicameral / simple bifurcation, but as a plurality of consciousnesses within one organism, or many — with no single central model that can apply in every case?

Does it even remain a useful term, when we use it to refer to the general case, or does it often confuse us into talking past one another?

The last question is easy to answer upfront. My assumption is that the answer is yes on both counts: it is confusing, particularly when we pretend we know precisely what we’re talking about, and yet it can serve a purpose.

So far as I will use it here at the outset, consciousness simply means having the capacity for experience and embedded awareness of various subtypes, possibly with some form of meta-awareness about that experience, or possibly not.

Intelligence is much easier to define in a useful way, at least on its surface. Intelligence refers to the ability to respond effectively to the needs of a specific environment / context.

These two may in some cases overlap, but there is little in biology, neurological research, or the ongoing research into LLMs to indicate that the one (namely intelligence) is contingent upon the other — nor does that overlap unravel the contingencies or possibilities within each term.

For example, consciousness as experience (the sense, as Peter Godfrey-Smith puts it, that it is “like something to be that thing”) sometimes appears with the secondary meta-capacity to make third-person assumptions about one’s self as an agent within the world. In other designs, it does not appear to.

Given any scrutiny of how LLMs function, it seems inaccurate to claim that this technology presents a form of consciousness, or that it can intend or even understand anything with the perceived interiority that is the hallmark of our experience. However, the technology can produce many of the apparent surface effects of things it cannot actually do, such as thinking, reasoning, or intending.

Presenting the results of a particular type of intelligence, such as performing well at Pong or on the bar exam, is tautological so far as definitions of general capacity go: doing well on the test proves that you are good at the test. But where results count more than experience, this distinction may quickly be left in the dust. That is to say, if it can solve problems of type A, B, and C, many people will probably not worry overmuch if it doesn’t properly “understand” any of them.

This introduces any number of conceptual challenges into an already fraught topic, not to mention some potential existential threat as the technology matures. If this weren’t enough, projection is an equally tricky issue when it comes to evaluating non-human (let alone machine) consciousness.

There’s a natural tendency for us humans to attribute life-like qualities or consciousness to things that mimic human behavior. It’s something that’s helped us survive over the millennia — better to mistake a rustling bush for a predator and be wrong than to ignore a real threat, better to assume that it intends to kill us, and so on. But in the case of LLMs, this instinct leads us to misunderstand the nature of their capabilities and to think of them as sentient when they’re not.

(I’ve given some initial comments about this, and tie-ins with the concepts found in Peter Watts’s novel Blindsight, in a post entitled “The Bullshit Machine, and other tales of misattribution”. I’ll deal with the most salient details here.)

This tendency towards projection and overestimation isn’t just a harmless quirk. It can lead us into philosophical and psychological pitfalls, especially as these AI systems get better at simulating human interaction. If an AI is intentionally weighted to do so, or if it reflects back biases within its training, it can influence the people working with it. (However, the level of threat here is specific to the realm it is being used in, and the likely uses that might result.)

This concept leads from — and also seems to lead right back to — an almost inevitable cul-de-sac, namely that we cannot even say with certainty that another human being is conscious. Or a snake, or a cloud formation. That is, if resemblance to one’s own outward presentation of consciousness isn’t the criterion — which it shouldn’t be. For the time being, however, this line of thought obfuscates more than it clarifies the actual barriers to our understanding of what machine learning algorithms are.

Why shouldn’t it be the criterion? Many of the ethical concerns about our treatment of AI arise if it does at some point present a form of consciousness, given our track record not only with the rest of the animal kingdom but, in equal measure, with members of our own species. After all, slavery and indentured servitude of various kinds have arisen too many times in human history to be considered a purely local and individual aberration. These things hinge on the ability to see others as non-entities.

It’s a bit of a catch-22. And that is far from the only thing that blocks our understanding of the inner processes of “alien” species.

“Evolution has no foresight. Complex machinery develops its own agendas. Brains — cheat. Feedback loops evolve to promote stable heartbeats and then stumble upon the temptation of rhythm and music. The rush evoked by fractal imagery, the algorithms used for habitat selection, metastasize into art. Thrills that once had to be earned in increments of fitness can now be had from pointless introspection. Aesthetics rise unbidden from a trillion dopamine receptors, and the system moves beyond modeling the organism. It begins to model the very process of modeling. It consumes evermore computational resources, bogs itself down with endless recursion and irrelevant simulations. Like the parasitic DNA that accretes in every natural genome, it persists and proliferates and produces nothing but itself. Metaprocesses bloom like cancer, and awaken, and call themselves I.” — Peter Watts, Blindsight

The Distributed Living Network

Many of these problems need to be put aside for now, although they should not be forgotten. Let’s go back and start with the instances we actually do have available for making evaluations in this area. All the types of consciousness to date on Earth seem to arise within nested feedback loops between living organisms and their surroundings, both in the present moment and back through each organism’s developmental history. All the examples we can refer to up to this point in history are of this type.

As such, the surrounding environment itself is a part of what I think of as its context. We must begin by breaking with anthropocentric assumptions. Consider the intelligence of trees. These organisms do more than grow towards light; they fend off predators, engage various strategies in competition and collaboration with other tree and animal species, and co-create ecosystems. Their existence is intertwined with other living beings, like birds that nest in their branches, highlighting their roles in both shaping and responding to their environment through various adaptive strategies. A tree is intelligent in a variety of ways that humans are not; conversely, we shouldn’t expect a grove of fir trees to understand Hamlet. Nor is this only a factor for individual species, as if their existence ends where their roots and branches do. Some of that intelligence is embedded, literally a part of the ecosystem itself.

Here I use “tree” in the same way as “intelligence,” that is, in a descriptive sense. Prescriptively there is no such thing as a tree (taxonomically speaking), nor enough of a basis for comparison to dictate their actual limits. This seemingly fringe taxonomic issue actually sums up one of the chief problems in contextualizing artificial intelligence within the range of possibilities we know from the study of life on Earth. We can test for the ability to do well or poorly on a particular test, but we need to be careful about the assumptions baked into the test.

Assumptions of this kind get challenged all the time by the sheer strangeness and adaptability of nature. Consider the following by way of example about expectations: “Starfish: nothing but head.”

“It’s as if the sea star is completely missing a trunk, and is best described as just a head crawling along the seafloor,” said Laurent Formery, a postdoctoral scholar and lead author of the new study. “It’s not at all what scientists have assumed about these animals.”

The findings suggest that the traditional bilaterally symmetrical body plan, common to many animals including humans, is not the only blueprint for life. Starfish, with their five-fold axis of symmetry and head-like characteristics distributed along their arms, represent a distinct evolutionary path that challenges an understanding of animal body plans and consciousness that follows a single trajectory.

Assemblage theory underscores this diversity, proposing that entities like starfish are not singular ontologies but assemblages of multiple interacting components. This perspective aligns with the idea that consciousness is not a binary state nor a simple continuum, but rather any number of continuums operating within phase spaces defined by environments, rather than any singular or central type of capability.

Each form of consciousness, whether in humans, starfish, or other organisms, is shaped by the unique evolutionary and environmental contexts in which it has developed. Where an inner experience exists, it may be difficult if not impossible even to conceive of it from across the divide.

The same is almost certainly true of intelligence. This must be fully internalized before we can even begin to assess the distinction between the potentials of artificial intelligence and consciousness.

Furthermore, a mind is not “housed”; it is a series of relationships that are very much immanent to matter, in the way that spirit is thought to be immanent to matter in various religious traditions.

Many uncertainties are hidden in that statement, each with different downstream repercussions.

If a mind is merely part of those relationships, it can at the same time be held separate from them, and the inherited idea of correspondence theory is maintained: a mind floating about in a world, though interacting with it in a variety of ways. The recursive, hall-of-mirrors effect produced by mind-body dualism (“all the way down”) is easy enough to brush aside in exchange for the surety of that bedrock cogito.

What if it is instead a case of existing within those relationships themselves, fundamental to the systems it engages with, rather than discrete from them? …

Bottom line, if we intend to take this question of AGI seriously as presenting a new type of consciousness, it will require a holistic approach that considers the physical, genetic, and environmental aspects of living beings. The consciousness of a squirrel, tree, bee, or person differs markedly, each manifesting various forms or continua rather than binary states. These modes of awareness are distinct, again rendering the term “consciousnesses” more apt than “consciousness.”

Assemblage theory will continue to be a useful cognitive tool in this context, precisely because we are not dealing with singular ontologies: see Manuel DeLanda’s various writings that touch on this point (derived from Deleuze and Guattari, though often taken in novel directions).

Discussion across the divide

They Learned it From Watching Us

All forms of consciousness and intelligence that have arisen on this planet share a commonality, akin to the way all Earth’s life forms — unique instances each — are part of the same evolutionary and genetic lineage. They exist as embodied entities in a shared environment. These are not separate variables in separate equations, but a long string of variables in one equation. This stands in contrast to the distinct ranges of possibility within the phase spaces afforded to specific organisms as parts of particular ecosystems.

Given this shared lineage, it’s perilous to make assumptions about the inner experiences of entities outside of those domains. The similarities we draw might be superficial, rooted only in deep linguistic structures or the rudimentary connections between digital neural networks and biological synapses. The same goes for any sense of utter conviction where the overlap (or lack thereof) between intelligence and consciousness is concerned. The human experience, perhaps in some sense uniquely characterized by selfhood (and related domains of memory, narrative, metacognition…), frames our understanding of the range of what is possible.

Specifically in regard to human consciousness, it is also relevant that our brain’s predictive processing plays a pivotal role in our conscious awareness, often acting before we consciously recognize stimuli. The function of narrative tends to be more after the fact, aimed towards assessing past actions and possible future actions.

A handful of examples from recent research:

  • AI-Generated Images and Visual Processing: Researchers at Weill Cornell Medicine utilized AI-generated images to probe the brain’s visual processing. The study found these images effectively activated targeted brain areas, as observed through fMRI, demonstrating immediate and specific brain responses to visual stimuli.

  • Heart Cycle’s Impact on Neural Responses: A study by the Max Planck Institute revealed that the heart cycle influences neural excitability, with higher responsiveness during the systolic phase. This indicates a direct correlation between cardiac activity and the brain’s sensory perception and response mechanisms.

  • Social Interaction Processing in the Brain: Research from the University of Rochester identified that the ventrolateral prefrontal cortex (VLPFC) plays a key role in processing social cues. Neurons in the VLPFC collectively respond to facial expressions and vocalizations, showing an immediate collective processing mechanism for social stimuli.

  • Brain’s Reaction to Unexpected Visual Stimuli: A York University study focused on how the brain reacts to images that defy expected patterns. The findings revealed that neurons’ responses to these unexpected stimuli evolve over time, highlighting the brain’s adaptability and immediate response to novel sensory information.

There is no such thing as universal fittedness, as circumstances and contexts change. This is precisely what has made language, and all the conceptual frameworks that come out of it, such a deviously effective omni-tool for human adaptation. The development of symbolic language (and the abstract thinking it can produce) is almost certainly a more novel human adaptation than consciousness of any of its numerous types.

As such, a form of intelligence arising from a massive archive of human transmissions at least gives us the beginning of a context (as previously indicated) to evaluate LLMs and subsequent AI developments. LLMs start to make sense within an evolutionary context when they are considered simply a stage in the development of the “slow AI” that is the construct of our mass myths and narratives.

William Burroughs famously described language as a virus from outer space, an analogy meant not to suggest linguistic panspermia but to highlight the strangeness of the constructs language creates, akin to a fish swimming through water without awareness of its environment. The ability to shape and be shaped by abstract concepts that don’t exist in the world is incredibly odd. The ability to mass-produce it is odder still, and that seems to be precisely what we are doing with LLMs.

It has to be said that in comparing biological entities with machines, numerous distinctions in consciousness mechanisms quickly become evident. It has been frequently suggested (in articles, journals, etc) that we should avoid even the metaphorical use of computers and machines as a map for biological processes, or at least any sort of process that occurs within the nervous system or larger organism. (This topic is much discussed, as an example: “The Brain-Computer Metaphor Debate Is Useless: A Matter of Semantics.”)

In contrast to that, the rise of synthetic-biological technology may very well remind us of how little we understand about what life is at a fundamental level, further blurring our dearly held boundaries. In this way, response and feedback dynamics enter into the “embodied context” of human intelligence regardless of the machine’s own interiority or lack thereof, prompting profound questions, much as the cybernetic (the interface of human and machine) has arguably restructured human life to be more like machines rather than making machines more like us. This is a process that started many hundreds of years ago.

“Digitization has yet to allow us to flee our material origins. If we shut ourselves offline, we do not regain some unity with the silent heart of the world.” — MASKS excerpt on Ribbonfarm

Appearances and Essences

A 2015 study of predator-detection mechanisms in African cichlids provides an unexpected insight into the mechanisms of consciousness and perception. It demonstrates that the evolution of the fish’s visual and motor systems is geared towards efficient hunting for survival, rather than accurate perception in a broader sense. This aligns with the concept that intelligence is an adaptation for specific purposes, not a pursuit of fidelity.

Applied to the more grandiose sense of Artificial General Intelligence (AGI) development, these findings suggest that AGI consciousness, if realized, would differ significantly from human consciousness. Possibly quite radically.

As I’ve already implied, this is not the direction that I see things going, but it has to be considered as a possibility. For the time being, it may be more productive to focus on the observation that appearing “more human” or “more intelligent” is an entirely different matter than being either. 

That is, appearing intelligent by producing certain results neither supports nor contradicts the capacity for experience. It also does not answer the question of what benefit or purpose that capacity serves. 

Claims that the appearance of intelligence is the same as the genuine article also deserve scrutiny. In one sense, appearance and reality are functionally the same. The world of perception is “masks all the way down”, with no one master map that can be used to centralize and evaluate all the others. I have already presented my thoughts on this matter in my book MASKS: Bowie & Artists of Artifice, albeit in a different context.

To clarify my position, it is not that appearance encompasses all of reality, but that appearances are all that can be presented for analysis. We have long “lived,” as it were, within a holographic recreation, long before there was such a thing as VR or AI.

Questions about whether the experience of being a self is sustained fundamentally in the brain, the body, or in the mind all seem to be ignoring this fact, or at least pushing it to the side so we can act as if the vantage provided by our molehill is actually a mountain. (Also ignoring that “mind” is a mental construction itself that demands we conceive of it as a sort of substance.)

This is neither here nor there in regard to being able to assess the nature of another’s inner experiences, or meta-awareness of it. Our track record here is not good, whether regarding animals or other humans.

Philosophy of AI

AI as Humanity’s “Black Mirror”

I explored a thought experiment to demonstrate some of the problems that arise from this premise of interiority/exteriority in a previous post, Through A Mirror Darkly. This is just one possibility, but it seems a likely and somewhat chilling one.

I’ll summarize a few ideas introduced there. The Black Iron Prison, as conceptualized by Philip K. Dick, and Jeremy Bentham’s Panopticon, serve as metaphors for societal structures that limit individual freedom and promote self-censorship, presented here in a kind of tech-gnostic context. This concept extends to the human mind, influenced by societal norms and internal processes, suggesting that our perception of free choice is a sort of neurocognitive illusion, and decisions are predetermined by a variety of unseen mechanisms.

Artificial intelligence can be seen as an inverted case of our own situation, appearing intelligent by way of results (scoring well on a test, performing well at chess, etc) but with no consciousness or self to speak of. This notion is aligned with the concept of “blindsight” in Peter Watts’ work, where entities function without subjective experience.

This idea is paralleled in humans, as a sort of mirror image. Here conscious decision-making is believed to be under our control but may in fact be guided by subconscious processes. We make decisions and act fractions of a second before we are aware of the fact that we have made a decision or chosen to act. The fish predator-detection study I mentioned earlier has salience here, as one of their findings was that the visual-perception systems were built to respond immediately, so that response was in fact a part of the perceptual process. There have been a number of similar studies where human perception is concerned.

A world run by super-intelligent, silicon-based p-zombies does have a certain techno-dystopian chic to it, but as a rule these scenarios make for better television than for a world we actually have to live in. I suppose we’ll just have to hope that it isn’t the best form of government we are able to manage.

(P-zombie: simply put, a colorful way of representing the idea of sufficient exteriority without interiority of any kind.)

Old Questions May Soon Be Answered

The intelligence of LLMs and systems like them is rooted in the convergence of mathematics (statistics, matrices, etc.), natural language processing, and massive amounts of data and parallel processing capacity. All of this technology is still in its infancy but appears to be moving along a fairly ludicrous developmental curve.
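To make the “statistics and matrices” point a bit more concrete, here is a deliberately minimal, illustrative sketch; it is not any production system, and every name and number in it is a hypothetical stand-in. It shows the core operation such models repeat at enormous scale: multiply a context representation through learned weight matrices, then normalize the resulting scores into a probability distribution over the next token.

```python
import numpy as np

rng = np.random.default_rng(0)

VOCAB_SIZE = 8   # toy vocabulary
HIDDEN_DIM = 4   # toy hidden-state size

# Stand-ins for weights that, in a real model, are learned from massive amounts of data.
embedding = rng.normal(size=(VOCAB_SIZE, HIDDEN_DIM))
output_weights = rng.normal(size=(HIDDEN_DIM, VOCAB_SIZE))

def next_token_distribution(token_ids):
    """Pool the context embeddings (a crude stand-in for a real model's
    attention layers), project to vocabulary scores, and normalize those
    scores into a probability distribution over the next token."""
    hidden = embedding[token_ids].mean(axis=0)   # matrices: linear algebra over the context
    logits = hidden @ output_weights             # more linear algebra
    probs = np.exp(logits - logits.max())        # statistics: softmax into probabilities
    return probs / probs.sum()

context = [1, 5, 2]                              # arbitrary toy token ids
probs = next_token_distribution(context)
print(probs.round(3), probs.sum())               # a distribution over the 8 toy "tokens"
```

Everything interesting in a real LLM lives in the learned weights and the attention machinery this sketch waves away, but the skeleton (linear algebra in, probability distribution out) is the same.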

Creating multiple, more targeted models working in tandem across various layers, or pursuing other such approaches, could very well see the development of what marketers are already touting as nascent AGI.

Technically this is correct, although it’s clear that there’s little understanding in much of the press of many of the topics discussed here. Absent some “ghost in the machine” arising simply from sufficient complexity, next-generation LLM tech is unlikely to present a separate, self-aware entity. We are staring into the mirror and, like some animals that suddenly see their reflection, howling at that “other” peering back across the divide.

Suffice to say, I am currently more optimistic about the possible development of artificial general “intelligence” if by that we simply mean the capacity to do a variety of tasks well. Though it should be remembered that this is the assembly-line, mechanistic definition, based on a fundamentally capitalistic understanding of personhood as derived from capacity.

Where the central role of embodiment within consciousness is concerned, I remain open to revising my stance, as there is a possibility that consciousness (and its supposed antecedents like selfhood and experience) is instead fundamentally rooted in mathematics. In this view, the organic/synthetic distinction is notional and taxonomic at best, and all that is really needed is a matter of scale, the efficiency of processors and processes, and fittedness.

This hinges on a debate in philosophy and science with a long history. Essentially: if you simulate the function of a brain digitally rather than in analog, is that not still a brain? This position has always seemed backwards to me — all the way back to Plato turning everything on its head so that the conceptual form is the true reality — but maybe… we shall see.

Approaching this problem from the other end, there are a number of recent developments in trying to reduce the processing overhead by creating analog rather than digital solutions. For example:

Rain Neuromorphics has built a neuromorphic chip that is analog. In other words, it does not simulate neural networks: it is a neural network, in analog rather than digital form. It’s a physical collection of neurons and synapses, as opposed to an abstraction of neurons and synapses. That means none of the ones and zeroes of traditional computing, but voltages and currents that represent the mathematical operations you want to perform. This is similar to what Intel has done with the Loihi chip, but potentially far more advanced.
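As a rough illustration of what “voltages and currents that represent the mathematical operations” can mean, here is my own toy sketch, assuming a generic resistive crossbar rather than Rain’s or Intel’s actual architecture: a matrix-vector multiply falls out of circuit physics when weights are stored as conductances and inputs are applied as voltages.

```python
import numpy as np

# Weights stored physically as device conductances (siemens); in this toy
# layout, rows are output lines and columns are input lines.
conductances = np.array([[0.2, 0.5, 0.1],
                         [0.7, 0.3, 0.9]])

# The input vector, encoded as voltages applied to the input lines.
input_voltages = np.array([1.0, 0.5, 0.25])

# Ohm's law gives each device a current G_ij * V_j; Kirchhoff's current law
# sums those currents on each output line. The summed currents are exactly
# the matrix-vector product, produced by the physics of the array rather
# than by digital multiply-accumulate. Here we only simulate the arithmetic.
output_currents = conductances @ input_voltages

print(output_currents)   # [0.475, 1.075] in this toy example
```

The appeal, at least in principle, is that the multiply-accumulate happens in the material of the device itself rather than in clocked digital logic, which is part of why this line of work connects back to the embodiment questions above.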

I have no idea if development in this direction will yield the holy grail they’re looking for, but it is clearly relevant to much of what I’ve been saying about embodiment and the organic/synthetic divide where life is concerned.

Whatever the coming developments within the various lineages of AI/LLM research currently underway, the outcomes will be very instructive in regard to some of these long-vexing problems of mind and consciousness, the relationship of mathematics and reality, and so on. As someone who has wrestled with these ideas for quite a long time, in this one particular vector at least I am excited about developments in AI, as I would much rather have increased clarity on these questions than have the answers come out in accordance with my existing assumptions.

Of course, whenever a question of this sort is answered, ten new questions generally present themselves.

Future Speculations

I think I’ve managed to sketch out some of the various reasons we should be cautious. The conflicted and conflicting ground here belies any facile assurance that we all know what we are talking about when we speak of artificial general intelligence. It underscores the need for interdisciplinary collaboration, merging insights from neuroscience, biology, philosophy, and AI, to truly grasp the risks and benefits. This remains a wool-gathering exercise for the present; engaging with it properly would likely require the consultation of experts in a variety of disciplines and several years of direct research.

It hasn’t been my intent to discuss ethics of machine learning tech here, other than to say that LLMs are downstream of the already existing ills of this society (inequality of power and resources, un-examined biases, …) and should be considered a part of that same complex of the — maybe hopefully named — “late” capitalism.

I do not see the real catastrophic risks stemming so much from the AIs themselves as from the humans using them. If human activity is the X factor, history seems to indicate cause for concern in terms of how these technologies are utilized by corporations, authoritarian states, and so on. This means the problem — alternatively referred to as the poly- or meta-crisis in various places, with slightly different but related meanings — that needs solving is the same as it has been, whether we’re talking about overshoot or moral hazards.

In short, AI is dangerous because we don’t have our shit in order. 

For now, that’s all I dare speculate. Due to the rapid acceleration within this space over the past few years, across multiple disciplinary domains, most of these points call for a renewed deep dive on current research before writing anything conclusive. Too many unknowns exist, some potentially remaining forever elusive. Absolute certainty often signals the need for heightened skepticism. The questions at hand are intricate, demanding more than cursory consideration.

I provide broader context in 27 Premises, as well as a woefully incomplete reading list.

Peter Godfrey-Smith’s research on octopuses (Other Minds) and his subsequent speculations about embodiment and the evolutionary dimension of consciousness (Metazoa) are among the crucial pieces underlying these starting premises — I believe he is fundamentally correct about a number of things. John Gray’s The Silence of Animals explores the topsy-turvy niche humans seem to occupy within this context.

