A.I. doesn’t mean what you think it means

By the second half of the 2010s, Artificial Intelligence was hardly a buzzword. Deep learning held center stage, and quantum computing was still very much a work in progress, with a tentative deadline set by IBM in 2015, by which we were promised either a technological wonderland or a Skynet scenario… depending on whom you asked. The most we had achieved by that year was the official obliteration of desktop computing, which followed the mobile revolution.

By 2023, Artificial Intelligence has become “a thing”. What makes A.I. a thing, however, is not the concept of A.I. itself, but rather its subjective perception.

As a technology, A.I. isn’t what’s being advertised. By definition, Artificial Intelligence is the concept of a machine capable of self-awareness and spontaneous logical reasoning. What we have today are “machine learning” algorithms designed to perform very specific tasks according to scripted directives, which isn’t, at least objectively, Artificial Intelligence.
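
To make the distinction concrete, here is a minimal sketch, in Python with scikit-learn, of what today’s “machine learning” actually looks like in practice: a model fitted to one narrow, predefined task. The dataset and model choices are illustrative, not representative of any particular product.

```python
# A model trained for exactly one scripted task: classifying iris flowers.
# It has no awareness of itself, its task, or anything beyond its inputs.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)  # learns only the mapping it was directed to learn

print(model.score(X_test, y_test))  # competent here, useless everywhere else
```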

From a human perspective, the confusion is understandable, and very much the product of a careful marketing strategy refined over the last decade through trial and error and the power of cognitive bias.

Social media is a powerful social engineering tool, through which we have tested and learned that the tendency to latch onto one particular bit of information, and to disregard any data beyond it, beats any salesman’s tactic ever employed in history. By the same token, strategists across all fields that capitalize on manipulating consumer behavior have noticed that stimulating cognitive bias performs best when it is associated with strong, negative feelings.

Kill All Humans


The general perception of A.I. is heavily influenced by popular culture. The last sixty years of cinema have been littered with grim and often violent takes on the concept of artificial life forms, portrayed either as rogue, anarchic, and out of control, or as totalitarian, tyrannical, and oppressive, with a few mild, rose-colored, family-friendly attempts at disrupting the dominant narrative by rendering thinking machines as likable and sympathetic.

It should be obvious, in 2023, that none of those portrayals had anything to do with the technology itself. They were metaphors, reflecting public sentiment towards harsh political climates and social unrest. Still, according to the comment sections of any given social media platform, Skynet has gone live, and we are all about to become lowly subjects of our artificial overlords.

Yet due diligence requires digging a little deeper beneath that superficial layer, much like Tom, the character played by Jesse Eisenberg in the 2020 independent film Vivarium, who dug through the fake backyard’s mulch hoping to tunnel his way out of a labyrinthine, surreal prison.

Uncanny Valley


Subjective perception is all that matters. As humans, we tend to attribute anthropomorphic qualities to most of the inanimate objects with which we interact on a daily basis. We get angry at computers for being slow, and at cars for breaking down on our way to work. We assign genders to ships, buildings, and lawnmowers, and we “talk” to our appliances.

Within the past decade, a funny thing happened: appliances started talking back. We now have full-fledged conversations with computer algorithms, take medical and mental health advice from them, and develop quasi-human relationships with them.

In at least one instance, some consumers of what’s being marketed as A.I. were so invested in the product that accusations ensued, hinging on the suspicion that sentient A.I. (an unnecessarily redundant term) was being secretly developed and unleashed onto the world. The episode recalls the infamous 1938 “War Of The Worlds” radio broadcast, in which Orson Welles rendered so realistic an account of the titular work of science fiction that it threw listeners into a panic, convinced that a Martian invasion was indeed unfolding outside their doorsteps.

As a species, we rely on visual cues to determine what something is, or isn’t, and we tend to take that information at face value, and act upon it, until new information becomes available.

The exception to the rule is cognitive bias, which develops from an information overload that prevents a person from processing objective reality properly. It leads people to stop processing data altogether and stick to the most prominent and obvious piece of information they have, in spite of evidence to the contrary found in logic and reason.

The A.I. we all want to believe we have uses this exact same process to generate an output based on a given prompt. Generative A.I. is the ultimate example of cognitive bias at work: it latches onto the first bit of information and builds upon it, never questioning the underlying premise.
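
As a rough illustration of that “latch and build” behavior, here is a deliberately toy sketch in Python: a bigram chain, a crude stand-in for a real generative model, that seizes on a seed word and keeps extending it from whatever it has previously absorbed. The corpus and names are made up for illustration.

```python
import random

# Tiny "training data": the model can only ever recombine what it has seen.
corpus = "the robot saw the city and the city saw the robot".split()

# Record which words have followed which.
follows = {}
for a, b in zip(corpus, corpus[1:]):
    follows.setdefault(a, []).append(b)

def generate(seed, length=8):
    out = [seed]  # latch onto the first bit of information...
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))  # ...and build on it, unquestioned
    return " ".join(out)

print(generate("the"))
```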

In these examples, crafted using the generative tools in Adobe Photoshop, simple prompts such as “building skylines”, “Cyberpunk”, and “robots” were given to generate illustrations within defined selections of the canvas, filling out the environment.

Within a context of self-awareness, the process is far from “generative”, as the look, feel, and design of each artwork layer is obviously a crude reinterpretation of a set of preexisting images. Nothing is being “created” here, at least not in the sense of generating new art.

What makes this tool interesting, however, is not the “A.I.” component, which is most definitely a misnomer, but rather the integration of advanced compositing tools that allow accurate placement and color grading of elements according to existing palettes and perceived 3D space.
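
As a sketch of what that kind of palette-aware grading might involve (a hypothetical illustration, not Adobe’s actual implementation), the following Python snippet simply pulls a generated element’s colors toward the average color of the scene it is placed into; the arrays and the blend factor are assumptions made for the example.

```python
import numpy as np

scene = np.random.rand(64, 64, 3)    # stand-in for the existing canvas
element = np.random.rand(16, 16, 3)  # stand-in for a freshly generated layer

# Average color of each image, as a rough proxy for its palette.
scene_mean = scene.reshape(-1, 3).mean(axis=0)
element_mean = element.reshape(-1, 3).mean(axis=0)

k = 0.6  # how strongly to pull the element toward the scene's palette
graded = np.clip(element + k * (scene_mean - element_mean), 0.0, 1.0)

print(graded.reshape(-1, 3).mean(axis=0), scene_mean)  # means now closer
```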

If what we have now isn’t A.I., what happens when the real thing arrives?


When tackling the subject of sentient machines, much like everyone else, I like to lie to myself and believe that I am capable of easing into the conversation pragmatically. What I actually do, however, is latch onto a defined set of information I am familiar with, and act like a dog chasing a car.

Incidentally, this is the type of behavior I’d expect a nascent true A.I. to embrace.

A true theoretical A.I., at least objectively, would acquire self-awareness by defining its own role within the environment in which it spends the most time. By the same token, true A.I. would learn from personal experience, when exposed to information that exists in the world, much like Mowgli from “The Jungle Book”, a biological human raised by wolves.

Mowgli does not question his environment, or any of his experiences, and identifies as a wolf, in spite of his physical appearance. Mowgli does not have claws, fur, or fangs. He doesn’t move as fast as the peers with whom he identifies, and he is certainly incapable of fighting the same predators. Despite that evidence, Mowgli still believes he is a wolf.

A machine capable of thought, placed within an environment in which a single living species of comparable size and complexity is present, will therefore identify as a member of that species, animal or human. It’s fair to assume that the machine will attempt to mimic or replicate the behavior and ability to communicate characteristic of that indigenous species, and, much like Mowgli, it will not question the information, and will operate as a member of the species.

Everything that said machine learns beyond that will be permeated with a heavy bias towards needs and requirements set by the community in which the machine operates. This is very important, because it helps define what a sentient machine “believes”, and how far a sentient machine will go to validate its cognitive bias in regard to its own identity.

A machine governed by true A.I. is, of course, incapable of digesting food. Yet because of its narrow range of experiences, it will likely still attempt to hunt, and possibly eat, instead of seeking an electrical energy source to recharge, as it is (1) unfamiliar with its own physiology, and (2) biased towards imitating the similarly complex species with which it identifies.

Even in the best case, unless the species interacting with the machine is aware of, and capable of identifying, what the machine requires to function, the machine will not last very long without a pre-programmed set of information that allows it to define itself and its requirements from the start, along with physiological traits designed to reflect the species among which it is first activated. In essence, a fish cannot simply “learn” to walk on land, as land requires it not to be a fish.

A.I. Art


There is no bigger misnomer than “A.I. Art”. Alas, we have heated debates on the subject, and we go to court over it, because of that same cognitive bias that prevents us from discerning objective reality from subjective perception.

There is no “A.I. Art”. We perceive it as such because the majority of consumers already refer to many other things as “art”, even when they’re not.

“Art” is the ultimate, and most abused, buzzword in history. It is a blanket term forced upon anything for which a more accurate word does not come to mind. The art of making a sandwich. The art of changing a car’s tire. The art of talking to people. Everything is “art”.

This is precisely why A.I. Art is a misnomer. Art entails the creation of something unique and evocative of important ideas and feelings. The Greek philosopher Plato first developed the idea of art as “mimesis”, which translates to “copy” or “imitation”. The requirement for making art is an understanding of what is being created, from the perspective of the artist itself.

Without self-awareness, a machine cannot create anything of importance that remotely matches the definition of art. Generative prompts are not art. They are commands that remix imagery at random, without coherent thought or inspiration. A machine may stumble into creating something that we, as humans, perceive as art, but perception alone doesn’t validate the artistic ability of A.I., because there is no intent to create art. Generative prompts are commands, not “suggestions”.

Who Is The Artist?


To illustrate the point: a rubber stamp produces artwork when dipped in ink and pressed onto a surface. It is artwork that was originally created by someone, later sculpted onto a stamp, and that stamp was subsequently mass-manufactured and distributed.

Let’s postulate that the mark left by the stamp is “art”, on the premise that the color of the ink makes each resulting print unique. Who is the artist? Is it the stamp, or the person who uses the stamp?

The answer is neither. The art still belongs to the original creator of the mark used to mold the stamp.

The person using the stamp has no intention of creating. His or her intention is to replicate an existing artwork as faithfully as possible.

The stamp has no brain, or creativity for that matter, but it does have the ability to replicate the mark when it comes into contact with a compatible surface.

As it exists today, what we stubbornly and falsely insist on calling “A.I.” does not create “art”, because it doesn’t want to, any more than a rubber stamp intends to create art. It is told what to do, and it follows the command as thoroughly as it possibly can.

As humans, we are the ones deciding whether or not what we see resembles what our cognitive bias describes as “art”, even when evidence and logic tell us otherwise.

It would be safe to assume that when true A.I. comes into existence, it may have its own interpretation of art, one very different from ours, and perhaps incomprehensible. Yet… it will fulfill the requirement of being created as an expression of something unique, from the point of view of the artist itself, and from a position of self-awareness.