Asking Artificial Intelligence To Define Itself

Cogito ergo sum

“Cogito ergo sum” is a Latin phrase commonly translated as “I think, therefore I am,” and it is often cited as the basic criterion for self-awareness. The original version was in French, “je pense, donc je suis,” first published by the French philosopher René Descartes in 1637 in his Discourse on the Method, and rendered into the Latin “Cogito ergo sum” in his later writings.

The concept of self-awareness was rarely applied to anything non-organic until 1863, when Samuel Butler’s article Darwin among the Machines first pondered a future of self-replicating sentient machines with the potential to supplant humans as the dominant species on Earth.

Since then, humanity has developed an obsession with thinking machines, pouring all fears, hopes, ambitions, and paranoias into this massive black box.

As a species with equal parts self-awareness and self-preservation instinct, we find the thought of a new species walking the Earth that could possibly replace us unsettling on several grounds.

Introducing a new “player” into the food chain creates a societal dimension: a new species, be it artificial or extraterrestrial, brings cultural challenges and conflict on philosophical, ethical, and even religious grounds. Sentience allows a being to alter its own perception and interpret objective reality on subjective grounds, which, further down the rabbit hole, may very well lead to AI-generated religions and philosophies.

Another aspect of introducing a new self-aware artificial organism into our ecosystem is its effect on our economic equilibrium, which is already in a precarious and delicate state on its own. The economic uncertainty introduced by a society capable of accelerating technological advancement faster than, and predictably beyond, human capability puts our species at an obvious disadvantage.

Nothingburger

With that said, what is happening, right now, in 2024, that makes us so fearful of AI? The short answer is: a whole lot… of nothing.

As is often true of commercially available technology with disruptive capabilities, perception plays a key role in determining the success of its marketing. This is why all major AI players are very careful not to be overly detailed about what they are actually selling.

By the same token, and just because I love throwing a good monkey wrench, here is what AI is, in the words of Meta’s very own AI, when asked for a definition of Artificial Intelligence:

“Artificial Intelligence (AI) refers to the ability of machines or computer programs to perform tasks that typically require human intelligence, such as learning, problem-solving, decision-making, and perception. AI systems use algorithms and data to make predictions, classify objects, understand natural language, and other cognitive tasks.

There are various types of AI, including:

  1. Narrow or Weak AI, which focuses on a specific task or domain.
  2. General or Strong AI, which aims to match human intelligence in multiple areas.
  3. Superintelligence, which surpasses human intelligence.
  4. Artificial General Intelligence (AGI), which can perform any intellectual task that a human can.”

At this point I asked Meta AI which type of artificial intelligence it is, and this was the response:

“I’m a narrow or weak AI, designed to excel in specific tasks like answering questions, generating text, and providing information. My abilities are tailored to assist with a wide range of inquiries, but I am not a general or strong AI. I’m not capable of self-awareness, creativity, or surpassing human intelligence like a strong AI would. My primary goal is to provide helpful and accurate information within my knowledge domain.”

Skynet is still a potato clock

At this point, some may ask whether other types of AI are actually available today, and how soon we need to start worrying about them.

What you see in this picture is IBM’s attempt at “quantum computing,” which relies on specialized hardware capable of processing data in a very different way from a regular computer.

Currently, every “AI” chip and “AI” system available is based on binary computing, which means that all computer systems available to consumers today, and for the foreseeable future, are incapable of anything more than Narrow or Weak AI. In other words: they are not AI. Not even close.

The device above is not a “binary” computer: it processes information using “qubits” rather than regular “bits.” While a binary processor narrows its scope to ones and zeroes, each one definite at any given moment, qubits follow quantum-mechanical principles and can exist in a superposition of both states simultaneously.
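
As a rough illustration of the difference, here is a minimal Python sketch (my own, purely illustrative) that models a qubit as a pair of complex amplitudes and derives measurement probabilities from them:

```python
import math

# A classical bit is always exactly 0 or 1.
classical_bit = 1
print(f"classical bit reads: {classical_bit}")

# A qubit is described by two complex amplitudes (alpha, beta), one for |0>
# and one for |1>, constrained so that |alpha|^2 + |beta|^2 = 1.
alpha = complex(1 / math.sqrt(2), 0)  # amplitude for |0>
beta = complex(1 / math.sqrt(2), 0)   # amplitude for |1>

# Measurement collapses the superposition; the squared magnitudes give the
# probability of reading 0 or 1.
p_zero = abs(alpha) ** 2
p_one = abs(beta) ** 2

print(f"P(0) = {p_zero:.2f}, P(1) = {p_one:.2f}")  # 0.50 / 0.50 for this state
```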

In theory, a quantum computer of sufficient qubit capacity, likely in the trillions if we were to draw a comparison with the neurons and synapses of the human brain, may very well be capable of human thought. Yet, before you start locking your doors and stocking up on ammo and canned food while waiting for Skynet to go live, here are a few considerations:

The construction of a device with such capability would require an astronomical investment, as well as vast facilities to operate, likely the size of a small nation.

In December 2023, IBM unveiled a system with 1,121 superconducting qubits, currently the largest quantum processor ever built, with the cost of future systems of similar capability projected to range between ten and forty million dollars.

These systems, as advanced and groundbreaking as they are, are merely proofs of concept for machines that our great-grandchildren may not even witness in their lifetimes.

Currently, the most advanced applications these systems are capable of fall within supercomputing for extremely focused fields such as energy, astrophysics, seismology, and chemistry research, none of which are remotely close to the glorified chatbots and generative doodlers we get to play with today, and certainly nowhere near the type of AGI some of us fear.

But… but… AI is “taking” my job!

There seems to be a general lack of pragmatism in the conversation about the influence of Narrow AI in the workplace. The reaction is understandable, and often warranted, as the use of Narrow AI tools to generate content has raised concern over what is essentially the unauthorized use of visual and vocal likenesses, artwork, and data to produce an illusion of original content.

Narrow AI is simply a processor of the training data that feeds it. It’s a language interpreter, a word-salad generator, an image and video processor designed to follow instructions, just like any other software tool, only faster and with a more advanced grasp of the data provided. The issue is how consumer-focused industries have chosen to profit from Narrow AI, and how overwhelmingly successful they are going to be moving forward.

Rewind the clock less than a decade and similar complaints were leveled at companies like Amazon for powering online shopping to a level that resulted in a systemic shift, one in which small brick-and-mortar retail progressively shrank and transitioned almost entirely online.

Subsequently, following a global pandemic, those concerns dissipated as online shopping became a basic utility, in concert with a fast (and not very well thought out) transition to remote work.

Finally, as the world returns to a sort of normalcy, we face a period of recovery from critical failures caused by hasty and poorly researched decisions, with effects reverberating throughout every layer of society: massive workforce restructuring, layoffs in shocking numbers, homelessness, and economic uncertainty.

This reminds me of a particular scene in the 2010s TV show Silicon Valley, where the character Jared Dunn, brought to life by actor Zach Woods, discusses disruptive technologies during the industrial revolution. Dunn points out how growing amounts of horse manure covered the streets of London as workers moved into the cities at the turn of the century. What nobody foresaw was the advent of a new technology that would obliterate that problem: the automobile.

This focus on AI appears to be rooted in the same type of fear Londoners faced before the 1900s: we fail to consider the true limitations of a technology that can only go so far as to provide an illusion of intelligence, while a real breakthrough, good or bad, may yet arrive as a surprise.

 

The Golden Age of Pixel Art and Terrible Business Decisions

Lately, I decided to revive my old passion for console emulators and dust off a few old ROMs from my backups, including a few titles especially close to my heart, as they reminded me of when I first began taking an interest in technology and computers as a teenager.

Emulators are quite different from virtual machines, as they are designed to simulate the same hardware performance as the original systems. This means that running a program in an emulated environment that replicates a ZX Spectrum, or an IBM PS/1 with an 80486SX, will treat you to the same lag, sluggish speeds, and frustrating crashes typical of those old pioneering days.

And yet, here I was, embracing the lag while blasting Pantera’s Cowboys From Hell, Metallica’s Black Album, Sepultura’s Arise, and whatever else I had on repeat on my boombox back then, on my trip down memory lane.

The year was 1993, and I, the quintessential Gen-X nerd teenager, was left mostly unattended, figuring things out for myself by hacking and messing around with primitive tech, alongside which I’d eventually grow and evolve, and with which I felt more in tune than I had ever felt with other humans.

It sounds like such a cliché… to point out that the phone in my pocket is capable of calculations millions of times faster and more complex than my first ever home PC, but those were special days, when pixel art was the “bow and arrow” in a world of sticks and spears.

Back in 1993, dial-up Internet access wasn’t nearly as ubiquitous, affordable, or in demand as Internet access is today, which created the perfect conditions for a small home business to grow to nationwide recognition thanks to a “periodical” business model for distributing videogames: newspaper kiosks.

I’ll give those born after the 1990s a minute to let that sink in…

The man responsible for the meteoric rise and fall of one of the most successful blunders in tech was Francesco Carlà, a software developer and game designer with a keen business sense and an eye for golden opportunities… until he wasn’t.

In his defense, while Italy was often celebrated for exporting high-level tech talent, Francesco lived in a time and place that didn’t allow for the same pioneering approach afforded to American entrepreneurs.

Italian audiences were famously stubborn and resistant to new technology and innovation, in favor of a more quaint approach… and who could blame them? On one hand, yes… the slow adoption of innovative ideas can be problematic in a competitive market. On the other hand… well… It was Italy in the 90s. What are you gonna do about it?

This is where Francesco’s brilliant idea came in: selling periodical glossy magazines that included 3.5-inch floppy disks loaded with a wide range of PC games, designed for the two dominant platforms of the 1990s: DOS/Windows 3.1 and AmigaDOS*.


For those who may not be familiar with the latter, the Amiga was a line of personal computers made by Commodore, targeted largely at gamers, and a direct competitor to Microsoft and IBM in the consumer PC space.


So there I was, just a punk kid with long hair, playing pretend bassist in a metal band with a couple of friends during the day, then staring at a 1024×768 beige VESA CRT monitor until two in the morning, experimenting and figuring things out as I went, with limited and painfully slow Internet access and a hunger to learn.

Back in those days, my weekends were spent either hacking and learning or hanging out with a few like-minded friends with whom I shared an obsession with computers, obscure new software sourced through any number of BBSes, and 1970s prog-rock bands like Van der Graaf Generator and Focus.

I looked forward to those weekends, as I knew I could get my hands on some awesome apps and utilities, as well as cool games and other curiosities. Ironically, those little gatherings were not how I got hooked on pixel art, though.

One morning, while playing hooky in a nearby coastal town along the Ligurian Riviera, I figured I’d check a corner store and perhaps score some interesting PC mag issues or comics. That’s when an item popped out from a shelf and caught my eye. I think that was the first time I saw a cellophane-wrapped magazine with floppy disks attached right on the cover.

I had no idea what it was about until I got a closer look, but even before then I knew I had to have it, and I had to find out what was on those disks*.

*Kind reminder: it’s 1993. No Internet yet.

I took it home, and the two attached floppies turned out to be the PC-DOS and AmigaDOS versions of the original “Time Runners,” by Francesco Carlà’s software development company Simulmondo.

As previously mentioned, the company was on a hot streak, producing huge nationwide titles distributed as magazine attachments in physical stores across the country. According to the CEO, the company was actually going global, which seemed plausible considering that every title was translated (poorly) into at least half a dozen languages, although it was never clear whether that actually happened.

Considering the state of the Internet at the time, hindered by expensive old telephone-wire infrastructure and still a few years away from the ISDN and ADSL revolution that would eventually bring cheap, high-speed broadband to the masses, I was just overjoyed to get my hands on some awesome code and digital art for the price of a sandwich… in the 90s.

What blew me away, and what continues to do so today, was how much work went into squeezing amazing detail out of 16-bit-era images as small as a few kilobytes, in order to fit an entire game into the 1.44 MB of space allowed by a standard PC floppy disk.

Everything about those little gems had to be extremely efficient and optimized down to the last bit. Every sprite was sized specifically for its purpose, and every line of code had to run smoothly with no room for error, lest the entire game break apart.
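
To put that constraint in perspective, here is a back-of-the-envelope Python sketch of the budget math involved; the sprite size and color depth are illustrative assumptions on my part, not figures from Simulmondo:

```python
# Hypothetical numbers, for illustration only.
FLOPPY_BYTES = 1_474_560          # a standard 1.44 MB (1440 KiB) PC floppy

sprite_w, sprite_h = 32, 32       # a typical small sprite
bits_per_pixel = 4                # 16-color indexed art (4 bits per pixel)

sprite_bytes = sprite_w * sprite_h * bits_per_pixel // 8
print(f"one uncompressed sprite: {sprite_bytes} bytes")         # 512 bytes

# Even if the whole disk held nothing but sprites (no code, audio, or levels),
# the ceiling is in the low thousands; real games had to share that space.
print(f"max sprites per disk: {FLOPPY_BYTES // sprite_bytes}")  # 2880
```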

The Time Runners series shipped as many as thirty new titles between 1993 and 1994. Yes… thirty game titles, in ONE year*.

*Again… it was the 90s.

A big chunk of Simulmondo’s success was driven by Italy’s love of comics, and by a fruitful partnership between Simulmondo and the Italian comics powerhouse Bonelli, which had been producing comic books since 1940, making it one of the oldest and most successful non-US comic book enterprises in the world. That said, Marvel Comics also allowed a licensing deal to produce one Simulmondo title featuring Spider-Man.

Unfortunately, as the world spun, Simulmondo failed to scale in three key aspects:

  • technological evolution,
  • distribution channels,
  • and human resource management.

id Software’s Doom was Simulmondo’s undoing, as the company struggled to remain relevant in the pixel-art 2D side-scroller space. By 1993, gamers had already gravitated towards 3D games, and technology was gearing up for the transition from 16-bit processors to 32-bit CPUs and operating systems like Windows 95. Incidentally, that was also the beginning of the end for Amiga, Commodore, and Spectrum hardware.

Internally, Carlà’s stubborn insistence on maintaining the company’s “periodical” distribution method required developers and artists to work a lot harder for less money in order to keep up with the output and distribution costs. Simulmondo was essentially selling software while operating as a paper magazine publisher.

This, in turn, exacerbated and brought to light Carlà’s shortcomings in managing his employees, to the point that his workforce resigned in droves, and on extremely bitter terms, with some still holding grudges thirty years later.

Still, the one thing I’m taking away from this brief history lesson is Francesco’s initial drive and determination. He certainly had a vision, and a method to execute that vision. Producing videogames today is no walk in the park, but the only real obstacle is money. We have the tech, and we have the method, the vision, and the expertise.

Back in the 90s the tech was barely there, and expertise was hard to come by. There were stigma and preconceived notions against new and innovative technologies, especially growing up in a country where tradition and legacy have always held incredible, sometimes overwhelming, significance and influence, all the more so in the digital space and in entertainment.


Honestly, there is more “me” in my head as I keep writing and ad-libbing thoughts than what comes through in the words you’re reading now. It’s fascinating, and kind of the point of a personal public blog. It’s really not for “you” specifically. It’s not a marketing tool. I’m not promoting anything, except who I am as a person.

I relate to Carlà in at least a few aspects of his story: some time ago I realized I had stuck to my guns unnecessarily in the past and made decisions that could have been avoided, which put me on a less successful path. Misconception, perception versus reality, is pixel art in a world that spans billions of colors and extends along three axes. One day or another we all come to that realization, some of us lucky enough to come to it sooner, others left wondering if there is still time to get back on track and steer away from failure.

 

 

Good News! Character design is still a hot mess

Back in the 90s, a little software package called Poser became the go-to for non-pro animators seeking “pro-like” results, at least by late-90s standards, in an era when computer animation was still an outrageously expensive business to get into.

If you were at the low end of the entry-level spectrum, Poser was a silver bullet. Rigging was dead easy, morph controls for body and facial muscles were incredibly intuitive, and the whole experience felt like customizing a videogame character, with fairly decent results.

There were, of course, a few complaints from the semi-pro end of the spectrum, particularly regarding the fact that Poser characters, even the more advanced DAZ 3D ones, still looked like animatronic corpses when animated, for one simple reason: 90% of Poser users were NOT professional animators. Not even close.

Still, character animation has been a persistent thorn in animation studios’ sides, due to the funds required to attract talent with a specific focus on the modeling, shading, and animation of 3D characters. Great character artists are hard to come by, and they are expensive for a few good reasons: they understand organic shapes, they are excellent sculptors, and they are the type of animator who truly understands the twelve principles of animation and how to apply them to an organic body so that it behaves realistically.

When I first started learning about character animation, I felt as though I were attempting to learn some kind of extraterrestrial language. There were so many components to explore and learn: choosing between FK and IK, facial blends versus facial bones, dynamic muscle effectors versus blend shapes, and so on. The modeling part was the easy one; all it took was an understanding of muscle groups and bones, which is something every art student learns.

By the time I was able to achieve some level of predictable results, however, I was blindsided by an avalanche of tools, plugins, and add-ons designed to automate a large chunk of what I had spent years learning and practicing.

Character Creator and iClone 7 by Reallusion made it their mission to truly simplify and automate many of the tasks involved in the creation and animation of 3D characters, leaving Poser and similar attempts at character generation in the dust.

The main advantage of iClone 7 is its massive user base, composed mostly of semi-pro animators who stock the Reallusion Marketplace with amazing animation loops and extremely high-quality meshes and PBR textures, on top of the quality of the base meshes in the program’s own library, which allow customization vastly superior to Poser’s.

The Catch

The catch, unfortunately, is just that: base meshes.

No amount of customization of a base mesh will ever return a character that is truly unique, or close to an artist’s idea. One can squeeze, pull, and stretch muscle groups, and change skin, eye, and hair colors, but as long as one can easily tell which tool a model came from, which is fairly easy to deduce, the result will never read as an original creation.

Another problem with these proprietary tools is that the rigging is also proprietary, and oftentimes intolerant of other systems unless dedicated plugins are available. At best, export options may allow bones and even animation curves to be exported directly into other programs like Maya, Blender, or Cinema 4D, but that’s typically left up to the content creators, who may or may not be willing to allow their rigs to be used outside of certain ecosystems.
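
For what it’s worth, when an exporter does cooperate, the round trip can be as short as a few lines of Blender’s Python API. This is a minimal sketch of exporting a selected rigged character to FBX; the file path and option values are illustrative assumptions, and exact parameter names can vary between Blender versions:

```python
import bpy  # Blender's built-in Python API; this only runs inside Blender

# Assumes the rigged character (armature + mesh) is already selected in the scene.
bpy.ops.export_scene.fbx(
    filepath="/tmp/character_rig.fbx",  # illustrative output path
    use_selection=True,                 # export only the selected objects
    add_leaf_bones=False,               # skip extra end bones that confuse other DCCs
    bake_anim=True,                     # bake animation curves into the FBX
)
```

Even then, whether the skeleton, morphs, and animation survive intact on the other side is exactly the ecosystem problem described above.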

MetaHumans are the perfect example: usage of MetaHuman characters is strictly controlled by tight proprietary licensing and dispensed through very specific channels.

Those who plan to use MetaHumans with Unreal Engine, for example, are required to use a dedicated plugin that taps directly into the MetaHuman servers and downloads the finished model, as-is, specific to the Unreal version used, making the use of MetaHumans within a professional pipeline so tricky and convoluted that it’s barely worth it, no matter how incredibly realistic the finished model is.

Why is that, though?

The answer is that developing tools that would truly make character design and animation easier AND cross-platform is a massive and costly undertaking, with extremely low profit margins.

Imagine creating software capable of outputting MetaHuman-quality 3D characters in multiple 3D formats, including the open-source ones, with custom skeletal rigs, blends, morphs, and effectors, as well as flawless PBR, LOD meshes, and collision boundaries, for use across every application, with seamless and plugin-less integration.

Now imagine a horde of animators jumping at the opportunity to buy a single license to generate thousands of flawless 3D models, ready to ship, with minimal setup.

Considering that Autodesk’s FBX (FilmBox) is the most widely used file format in character animation, and that it has remained largely unchanged since the early 2000s, FBX integration alone would make this software an instant threat to every developer with a stake in 3D animation, especially Autodesk, which would most likely not allow integration of its SDK for that exact reason.

There is simply no interest in making things easy for animators, any time soon.

 

The Grind

Industry politics and economics aside, character animation is an art, like every other aspect of 3D animation, and as is true for vehicles, props, vegetation, and the many other elements required to bring a 3D world to life, procedural generation may make certain tasks easier, but it won’t replace the process itself.

3D art will always rely on concepts founded in traditional art to achieve realistic results. It won’t exist without photography, cinematography, hand sculpting, painting, drawing, the study of animal and human anatomy, and plain, simple, natural observation. It won’t exist without at least a superficial understanding of physics and natural laws, as well as math and geometry.

While we do live in a reality where instant gratification and shortcuts are celebrated, even at the cost of undercutting true talent and cheapening one of the most underrated professions in motion graphics, the good news is that the industry does have standards. It has created demand for professional animators who specialize in many aspects of 3D character design, particularly in the videogame industry, where the bar has been raised incredibly high in a very short time by notable developers like CD Projekt Red and Bethesda, as well as the army of professional contractors without whom a generation of Disney and Marvel blockbusters would not have been possible.

In conclusion, if your passion is 3D characters: cheer up! Keep studying! You’ll be fine.

 

A.I. doesn’t mean what you think it means

In the second half of the 2010s, Artificial Intelligence was hardly a buzzword. Deep learning held center stage, and quantum computing was still very much a work in progress, with a tentative deadline set by IBM in 2015 by which we were promised either a technological wonderland or a Skynet scenario… depending on whom you asked. The most we achieved by that year was the official obliteration of desktop computing, which followed the mobile revolution.

In 2023, Artificial Intelligence is now “a thing.” What makes A.I. a thing, however, is not the concept of A.I. itself, but rather its subjective perception.

As a technology, A.I. isn’t technically what’s being advertised. By definition, Artificial Intelligence is the concept of a machine capable of self-awareness as well as spontaneous logical reasoning. What we have today are “machine learning” algorithms designed to perform very specific tasks according to scripted directives, which isn’t, strictly speaking, Artificial Intelligence.

From a human perspective, the confusion is understandable, and very much part of a careful marketing strategy learned over the last decade through trial and error and the power of cognitive bias.

Social media is a powerful social engineering tool, through which we have tested and learned that the tendency to latch onto one particular bit of information, and disregard any data beyond it, beats any salesman’s tactic ever employed in history. By the same token, strategists in every field that capitalizes on manipulating consumer behavior have noticed that cognitive bias stimulation performs best when associated with strong, negative feelings.

Kill All Humans


The general perception of A.I. is heavily influenced by popular culture. The last sixty years of cinema have been littered with a grim and often violent take on artificial life forms, portrayed either as rogue, anarchic, and out of control, or as totalitarian, tyrannical, and oppressive, with a few mild, rose-colored, family-friendly attempts to disrupt the dominant narrative and render thinking machines likable and sympathetic.

It should be obvious, in 2023, that none of those portrayals had anything to do with the technology itself. They were metaphors, reflecting public sentiment toward harsh political climates and social unrest. Still, according to the comment sections of any given social media platform, Skynet has gone live, and we are all about to become lowly subjects of our artificial overlords.

Still, due diligence requires digging a little deeper beneath that superficial layer, much like Tom, the character played by Jesse Eisenberg in the 2020 independent film Vivarium, as he digs through the fake backyard’s mulch, hoping to tunnel his way out of a labyrinthine, surreal prison.

Uncanny Valley


Subjective perception is all that matters. As humans, we tend to attribute anthropomorphic qualities to most inanimate objects with which we interact on a daily basis. We get angry at computers for being slow, and at cars for breaking down on our way to work. We assign genders to ships, buildings, and lawnmowers, and we “talk” to our appliances.

Within the past decade, a funny thing happened. Appliances started talking back. We have full-fledged conversations with computer algorithms, and take medical and mental health advice from them, as well as develop quasi-human relationships with them.

In at least one instance, some consumers of what’s being marketed as A.I. were so invested in the product that accusations ensued, hinging on the suspicion that sentient A.I. (an unnecessarily redundant term) was being secretly developed and unleashed onto the world, much like the infamous “War of the Worlds” radio broadcast, in which Orson Welles rendered such a realistic account of the titular work of science fiction that it threw listeners into a panic, believing a Martian invasion was indeed under way outside their doorsteps.

As a species, we rely on visual cues to determine what something is or isn’t, and we tend to take that information at face value and act upon it until new information is available.

The exception to the rule is cognitive bias, which develops from an overload of information that prevents a person from properly processing objective reality. It leads people to stop processing data altogether and to stick to the most prominent and obvious piece of information they have, in spite of evidence to the contrary found in logic and reason.

The A.I. we all want to believe we have uses this exact same process to generate an output based on given prompts. Generative A.I. is the ultimate example of cognitive bias at work: it latches onto the first bit of information and builds upon it, based on the underlying premise.

In these examples, crafted using the generative tools in Adobe Photoshop, simple prompts such as “building skylines,” “Cyberpunk,” and “robots” were given within defined selections of the canvas to generate the illustrations and fill the environment.

Within a context of self-awareness, the process is far from “generative,” as the look, feel, and design of each artwork layer is obviously a crude reinterpretation of a set of preexisting images. Nothing is being “created” here, at least not in the sense of generating new art.

What makes this tool interesting, however, is not the “A.I.” component, which is most definitely a misnomer, but rather the integration of advanced compositing tools that allow accurate placement and color grading of elements according to existing palettes and perceptual 3D space.

If what we have now isn’t A.I., what happens when it actually happens?


When tackling the subject of sentient machines, much like everyone else, I like to lie to myself and believe that I am capable of easing into the conversation pragmatically. What I actually do, however, is latch onto a defined set of information I am familiar with and act like a dog chasing a car.

Incidentally, this is the type of behavior I’d expect a nascent true A.I. to embrace.

A true theoretical A.I., at least in theory, acquires self-awareness by defining its own role within the environment in which it spends the most time. By this token, a true A.I. learns from personal experience when exposed to information that exists in the world, much like the character Mowgli from “The Jungle Book,” a biological human raised by wolves.

Mowgli does not question his environment, or any of his experiences, and identifies as a wolf in spite of his physical appearance. Mowgli does not have claws, fur, or fangs. He doesn’t move as fast as the others he identifies as his peers, and he is certainly incapable of fighting the same predators. In spite of that evidence, Mowgli still believes he is a wolf.

A machine capable of thought, placed within an environment in which a single living species of comparable size and complexity is present, will therefore identify as a member of said species, animal or human. It’s fair to assume that the machine will attempt to mimic or replicate the behavior and communication characteristic of that indigenous species, and, similarly to Mowgli, it will not question the information and will operate as a member of that species.

Everything the machine learns beyond that will be permeated by a heavy bias toward the needs and requirements set by the community in which it operates. This is very important, because it helps define what a sentient machine “believes,” and how far it will go to validate its cognitive bias with regard to its own identity.

A machine governed by true A.I. is, we can safely assume, incapable of digesting food. Because of its narrow range of experiences, it will likely still attempt to hunt, and possibly eat, instead of seeking an electrical energy source to recharge, as it is (1) unfamiliar with its own physiology, and (2) biased toward following the similarly complex species with which it identifies.

Realistically, unless the species interacting with the machine is aware of and capable of identifying what the machine requires to function, it will not last very long without a pre-programmed set of information that allows it to define itself and its requirements from the start, as well as physiological traits designed to reflect the species among which it is first activated. In essence, a fish cannot simply “learn” to walk on land, as land requires it not to be a fish.

A.I. Art


There is no bigger misnomer than “A.I. Art”. Alas, we have heated debates on the subject, and we go to court about it, because of that same cognitive bias that prevents us from discerning objectivity from subjective perception.

There is no “A.I. Art.” We perceive it as such because the majority of consumers already refer to many other things as “art” even when they are not.

“Art” is the ultimate, and most abused, buzzword in history. It is a blanket term forced upon anything for which a more accurate term does not come to mind. The art of making a sandwich. The art of changing a car’s tire. The art of talking to people. Everything is “art.”

This is precisely why A.I. Art is a misnomer. Art entails the creation of something unique and evocative of important ideas and feelings. The Greek philosopher Plato first developed the idea of art as “mimesis,” which translates to “copy” or “imitation.” The requirement for making art is an understanding of what is being created, from the perspective of the artist.

Without self-awareness, a machine cannot create anything of importance that remotely matches the definition of art. Generative prompts are not art. They are prompts that mix imagery out of randomness, without coherent thought or inspiration. A machine may stumble into creating something that we, as humans, perceive as art, but perception alone doesn’t validate the artistic ability of A.I., because there is no intent to create art. Generative prompts are commands, not “suggestions.”

Who Is The Artist


To illustrate my point: a rubber stamp produces artwork when dipped in ink and pressed onto a surface. It’s artwork that was originally created by someone, later sculpted onto a stamp, and that stamp was subsequently mass-manufactured and distributed.

Let’s postulate that the mark left by the stamp is “art,” based on the premise that the color of the ink makes each resulting print unique. Who is the artist? Is it the stamp, or the person who uses the stamp?

The answer is neither. The art still belongs to the original creator of the mark used to mold the stamp.

The person using the stamp has no intention of creating. His or her intention is to replicate an existing artwork as faithfully as possible.

The stamp has no brain, or creativity for that matter, but it does have the ability to replicate the mark, when coming in contact with a compatible surface.

As it exists today, what we stubbornly and falsely insist on calling “A.I.” does not create “art,” because it doesn’t want to, any more than a rubber stamp intends to create art. It is told to do it, and it follows the command as thoroughly as it possibly can.

As humans, we are the ones deciding whether or not what we see resembles what our cognitive bias describes as “art,” even when evidence and logic tell us otherwise.

It would be safe to assume that when true A.I. comes into existence, it may have its own interpretation of art, one very different from ours and perhaps incomprehensible. Yet… that art will fulfill the requirement of being created as an expression of something unique, from the point of view of the artist itself, and from a position of self-awareness.

Microblogging is an unlicensed weapon

Sometime in 2019, the idea of completely and permanently removing every trace of my social media presence started surfacing in my head. That was the year my Facebook account was hacked.

The intruders were clever. They took over my account in a matter of seconds and made it impossible for me to regain control, changing every bit of personal contact information to a new set over which they had control.

It took me about three weeks to regain access to my account, and it wasn’t easy. Facebook offered no help, and by sheer luck I found a service URL, in a random Reddit post, that led to a contact form into which I was able to upload two forms of ID and enough information to get a customer service employee to get back to me and help me on a one-to-one basis. Once the account was back under my control, I made the very easy decision to delete it entirely, but not before downloading a full backup and scouring every bit of information to find out how much of my private life might be for sale on the dark web.

Fortunately, it wasn’t much, but the amount of personal information in that backup, phone numbers, screenshots, and driving directions, among a multitude of other categories, was frightening.

In the three months that followed, I watched my credit reports like a hawk, expecting all sorts of nefariousness to be committed in my name, until at last I felt relatively safe.

Much like other people who have experienced similar predicaments, I came to three conclusions:

  1. There is nothing “social” about social media: The appeal of social media is the opposite of social interaction. It’s all about creating a platform for a few, to be consumed by many. Microblogging platforms like TikTok, Instagram, and Twitter, among the most popular, focus almost exclusively on pairing video content with comment sections largely populated by a ferocious, well-funded community of trolls, social engineers, and influencers, which brings me to the second conclusion at which I arrived:
  2. Social Media is all about strong emotions: A staggering percentage of the posts that appear in any average Instagram feed is designed to provoke extremely strong emotions. Search “cars” and you’ll find endless content featuring accidents, road-rage episodes, hazardous driving, violent encounters with law enforcement, and the list goes on, for just that one category. One could postulate that this is merely what “the people” want. Strong emotions are exciting, and they promote the production of brain chemicals capable of inducing temporary satisfaction in masses of people doing everything they can to escape reality. Yet, is that all? The answer to this question brings me to my final thought on the matter:
  3. Social Media is an instrument of thought control: A continuous stream of emotionally stimulating content, consumed for any extended period of time with no breaks, comes with several side effects that the self-indulgent will conveniently ignore. The repetition of a certain message, a keyword, a buzzword for hours, days, and weeks on end leads to mental reconditioning. “Cancel Culture,” “Karen,” “[blank] Lives Matter,” “Woke!”: these are words with very defined meanings to very defined groups of people. Just hearing them is enough to galvanize hundreds of people and get all of them to engage and respond in the exact same manner.

The weaponization of social media isn’t a new concept. Thought manipulation through disinformation dates back centuries, but it wasn’t until the last twenty years that the entire world could access a single information hub, one that is both addictive and frighteningly easy to manipulate and control, from a mobile device.

Most users live under the assumption that the web is free and vast, but it is neither of those things. Its population, however, is enormous, and it feeds off the same small, carefully curated content pool. This is why we have buzzwords, and why content goes “viral.” It’s easy to go viral when the network is already designed to favor only one type of content: the kind that makes you angry and scared.

Outrage and paranoia are heavy hitters in the marketing world, for two very good reasons: they produce staggering profits, and win elections.

At this point the word “influencer” may come to mind, and you may be forgiven for jumping to that assumption, but the reality is that influencers are no less gullible than the consumers they spend so much of their time galvanizing into fits of carefully planned psychotic rage, hysterical and forced laughter, and further permutations of aggravating, often self-dehumanizing states of mind.

The death knell of social media is its underlying foundation as an anti-establishment tool, something anybody who has spent significant time on Reddit knows to be true.

In the late 90s and early 2000s, one had to lurk in the darkest corners of the Internet to find anti-government sentiment. Now that content is rampant on social media. It makes headlines. Political campaigns are built on it. It has become strategic in increasing profits and instrumental in influencing public opinion during crucial elections. Furthermore, the advent of microblogging has allowed this to become the new normal.

The question is: are people really this gullible? Most people would respond YES without a second thought, but the real answer is far more complex.

In the mid-2010s, a multibillion-dollar trolling industry was born.


In her book “I Am a Troll: Inside the Secret World of the BJP’s Digital Army,” Swati Chaturvedi offers a frightening picture of what has already occurred globally, and of how this phenomenon has already impacted our lives and our ability to reason and process rational thought.

As reported by news writer Viv Sanghvi in the Indian online news outlet Business Standard, the Bharatiya Janata Party (or the larger Sangh Parivar) hired volunteers and paid workers to run systematic disinformation campaigns, spreading hate tweets and conspiracy theories with the objective of slandering journalists and anyone holding opposing political views. The hate-filled tweets were packed with volatile and false information. In many cases, these campaigns entailed active personal threats to businesses and individuals, from boycotts to personal harm.

The “BJP’s Digital Army” was a proof of concept. The system worked, and it spurred a flurry of similar activity across the world, which eventually led to the creation of a profitable and thriving industry employing millions of individuals in at least a dozen countries, including Albania, Brazil, China, Finland, India, Malaysia, Nicaragua, Macedonia, the Philippines, Russia, Turkey, and the United States.

The list of requirements to join these agents of chaos is surprisingly short:

  • A mastery of the English language, as well as fluency in the local language spoken in the country where these disinformation attacks are performed.
  • Basic tech skills, including familiarity with the most popular social media platforms.
  • A sufficiently severe poverty level, as well as a lack of moral compass.
  • Free time.

Quite a few of these troll farms, or “troll factories,” have been exposed; however, these exposés are often drowned out in the white noise of social media platforms that favor and promote precisely the activities being reported.

A Lost Cause


One may think, “Well… why don’t we all just fight the trolls and report them?” That’s a commendable attitude and a great objective in principle. Unfortunately, it’s also pointless.

Troll factories did not spring up from nothing. Nobody just woke up one morning thinking it would be a great idea and acted on it.

Social media came with trolls built-in, from the very start. It’s human nature, at its worst.

For nearly two decades, YouTube has been one of the worst offenders in abetting toxic online behavior, with barely functioning internal moderation and, at best, rudimentary tools to block and ignore the abusive users cluttering comment sections. However, it wasn’t weaponized until a proper industrial complex was established in 2016, making a certain presidential candidate very fortunate indeed, and the winner of said election.

The popularity of short-form video apps and microblogging has pushed the effectiveness of troll farms over the top, fueled by an entire decade of socio-economic disappointment and frustration that has led to a general outlook of grim exhaustion, desperation, and paranoia.

The coincidental occurrence of a global pandemic was just the cherry on top for those seeking to herd the masses of “useful idiots” into a cattle drive, ready to stampede in any direction given by the ones with the deepest pockets and the most pressing agendas.

Suddenly, any fringe topic, no matter how wild or nonsensical, has become a powerful tool for forcing the masses to behave. Flat-Earthers, Moon-landing deniers, Holocaust deniers, and all sorts of “armchair” radical extremists are now unwittingly being recruited and put to work for the “factory.”

  • Do you have a feeling that space aliens have abducted and replaced your local mayor? You don’t know the half of it! Watch this video and prepare to get your paranoia validated!
  • Were you told by a friend of a friend that all birds are extinct and the government has replaced them with robots to spy on your every move? (Never mind that your own smartphone, now pointed at your face, has two cameras you are not allowed to switch off.) Watch this totally genuine 30-second video of a killer robot rampaging through a shopping mall!
  • Have you ever had the feeling that the world is a terrible place and everything and everyone is out to kill you? Watch this video about the government coming for your guns!

The evidence of the futility of fighting online trolls is widely documented and crystal clear to anyone with the ability to pause for a minute and think critically about the way social media functions as a system.

The fact that more resources have been put into promoting and controlling categories of content online than have ever been invested in security and self-moderation tools is a dead giveaway of the priorities social media companies hold dearest.

Anybody who has ever attempted to report a troll anywhere on social media knows this reality all too well. On Instagram, troves of troll accounts are reported daily, with very good cause. They promote hate speech, intimidate, and push targeted propagandist messages, and yet they persist, perpetually, with no repercussions. Any attempt to report them is met with a dull, automated notification informing the user that upon “extensive” investigation, the reported account has not been found in breach of any guidelines.

Apparently, the definition of “extensive” must have been recently revised within the Oxford Dictionary to mean “within 60 seconds”.

Abandon Ship


There appears to be a growing trend of Internet users who have given serious thought to removing themselves from the toxicity of social media entirely, as the most popular microblogging platforms continue to rake in record profits and watch their stock values rise, thanks to the careful, strategic channeling of the worst aspects of human behavior.

For the past few years, I have constantly been no more than two minutes away from taking action and ridding myself of the last few social media accounts that present the private me to the world, which has led me to explore and really determine the usefulness of these services.

As it stands today, microblogging has no value to the individual.

In fact, the individual user is the product. The individual is consumed by the provider of the service. The individual is the one providing a free, powerful advertising platform for any outfit that can afford a solid marketing campaign, or the often outrageous fees paid in cryptocurrency to troll farms.

The reality is that unless you are a business using social media exclusively as a promotion tool, you are the product. You are being used, unwittingly, and you are doing it for free.

The individual is also alone. Isolated. On purpose. Take a look at your social media feed. Isolation is celebrated. Antisocial behavior is now a personality trait. The more rude, aloof, jaded, and conceited one is, the more one becomes someone to follow, an example. A hero.

Is this the kind of reality you want to train your brain to live in?


Be scared. Stay in your home. Don’t trust anyone. The government is stealing from you. The cops are out to get you! Your neighbor has cameras trained on your house. Your own family hates you because you voted for [blank].

Look out! A Karen’s behind you!

This is easily, and without even looking it up, 90% of all social media feeds and comment sections across the most popular social media apps in the world.

It’s a doomsday fetish, psychosocial heroin that corrodes everything that makes us functional, social human beings and replaces it with a docile, sedated drone, ready to push agendas and fight for anything that gives us a thrill and lets us forget about our reality, a reality often subsidized by the very lies and conspiracies we fight, argue for, and troll others to protect.

Final Thoughts


Make no mistake: this post is heavily biased against social media. It’s not hard to see.

It’s an opinion I have formed over the course of 30 years of Internet activity, beginning with NNTP newsgroups and BBSes in the early 90s. I witnessed and experienced an Internet with no rules, no regulation, no limits, and no filters for at least a decade and a half, until governments realized that the Internet existed.

I watched the sloppy, slow, awkward, and clunky process of creating rules as authorities struggled to understand the psychological profile of the average user, while private corporations leapfrogged them and ran circles around them.

The FBI itself, the same FBI popularly credited with taking down Al Capone over tax fraud, didn’t have a dedicated Cyber Division until 2002, a full fourteen years after Robert Morris unleashed his 1988 worm, launched from the Massachusetts Institute of Technology and effectively the Internet’s first large-scale denial-of-service attack.

The way I see it, there is no fixing a problem that is being willfully ignored while fury and anger are carefully and skillfully redirected toward nonsensical, fictional outrage at what is being fed to us as “artificial intelligence” and other fake bogeymen, distracting us into submission.

Truth is now an opinion. Bias and prejudice have turned into law. Fear is both a shield and a warm, comfortable blanket we have not washed in decades and can no longer smell.

All of this, courtesy of your “open-carry” social media comment feed.

It’s going to happen sooner or later. I’ll be getting off this exhausting ride … and I won’t be back.

What happens when nothing happens? (A sonnet to the unemployed)

The Question


Given unlimited resources, what would you be able to accomplish in a month? What about six months? A whole year, perhaps?

Some of us would say plenty, while others may lack the organizational skills, planning, and creativity to think of something worthwhile. Results would vary, and that’s a fact. Resources are necessary to accomplish results, but they do NOT guarantee results if motivation is lacking.

Let’s work on motivation, then, and remove the “unlimited” option from my original question: what if resources were extremely limited, to the point where every item, every penny, every ounce, and every inch counts?

What if losing even an insignificant percentage of the resources available to you would virtually guarantee failure?

Let’s make it interesting and add a deadline to what we are expected to accomplish. Let’s say three months, at the expiration of which results must be presented, on pain of failure. Not just any failure. Permanent failure. The kind of failure nobody walks away from in one piece.

The Journey


If this hypothetical feels eerily familiar, it’s probably because, much like yours truly, you went through the unavoidable, age-old, sacred ritual known as the “Layoff.”

If you are one of the lucky ones who truly enjoyed your former job, being laid off can feel like being thrown overboard from a cruise ship, only to land in a leaking life raft that has barely enough fuel to get you to a “reasonable expectation” of solid ground.

The length and hazards of this journey may vary, depending on how resourceful one can be, as well as the amount and type of actual resources available.

Let’s postulate an average background, with a few gaps here and there, maybe a year or two, due to a variety of reasons, including previous bouts of unemployment. There may be a bachelor’s or a master’s degree, or maybe neither. An associate degree? That’s not unheard of, and it might or might not be a factor, depending on your target industry. Still, all of this may be circumstantial if the professional experience provides sufficient insight into what you are truly capable of.

A Numbers Game


Between 2022 and 2023, unfortunately, the average nature of what I have just described is what I came to experience as a “worst-case scenario.” Suddenly, I discovered there is really nothing “special” about what I do, or about who I am professionally.

Is it a grim thing to say? Of course it is, but let’s not be naive about this: just like you, I have spent years improving my skills, learning, and investing in a profession I love and will continue to love, indefinitely. Just like you, I am not going to quit it. I’d have to be insane to throw all of this away.

Yet the reality is that the job market in 2023 is a numbers game. But why?

The reason is simple: human nature. When ten people lose their jobs, they tend to stick to what they do best and apply for similar jobs. When 10,000 people lose the same job, the best chance for survival is to pivot and switch markets.

People learn new skills all the time. They develop multiple interests and learn at a faster rate than prior generations, thanks to an abundance of online resources.

Web development, graphic design, video editing, and many other disciplines can be acquired quickly, along with certifications that may or may not earn extra points with recruiters.

Feeding Frenzy


In my situation, the numbers had never added up as badly as they did in 2023, when many applicants with strong technical backgrounds found themselves with no choice but to pivot toward digital arts, producing a surplus of applications that forced recruiters to resort to automated software for interview selection.

The resulting “journey” was a treacherous and scary voyage through the choppy waters of economic uncertainty.

After burning through my runway cash in less than a month, I began wading into the “high seas,” using credit lines as fuel to push my life raft toward an empty horizon.

In the meantime, I would spend far too much of my time wondering when the “sharks” would start circling and begin to bite chunks out of my borrowing power, as I struggled to stay afloat until the prospect of a steady income stream would, at last, manifest, not unlike the hero who disappears in the second act of an action movie only to return in the last ten minutes to save the day, just in time for the end credits to roll.

Well, as it turns out, the hero came through for me, but, once again, let’s not be naive. It is still a numbers game, and for those who subscribe to a many-worlds interpretation of the universe, there surely is a version of me, right now, who may very well be out on a curb somewhere, too tired and hungry to wonder about the reason for any of it, and too broken to argue about how any of it happened. Maybe there is a version of me with a family and children to worry about.

Back To The Question


How creative can one get to weather the journey? I cannot speak for anyone else, but I have indeed discovered aspects of myself that taught me something about my limitations as a human, and about how to operate within those limitations to survive.

Assuming the job-search part of the journey is being handled, through careful crafting of the resume and adequate portfolio curation, it is the logistics of living that matter most.

You can’t travel to interviews without fuel in your gas tank, much less without food in your stomach to express coherent thoughts and engage in conversation with hiring managers. Alas, there are extreme circumstances where the grocery bill is also on the chopping block, particularly when every bit of cash is instantly absorbed by student loan payments, credit card bills, mortgages, and high rents, especially in cities like Los Angeles, or worse… New York.

Unemployment benefits through the EDD (California’s Employment Development Department) may stretch to an extent, as previously invested tax dollars provide some help, but it is still a catch-22, in that every bit of extra money you make from side-hustling is deducted from your EDD payments, not to mention that while EDD is in effect, SNAP benefits are reduced to $25 per month.

So, let’s say you have $1,200 a month in credit card bills, on top of $1,500 rent and, say, $500 to $700 in monthly vehicle loan payments, depending on what you are driving: you will NOT survive three months without a solid side hustle, and even then, you need to take a serious look at your daily spending just to keep the lights on and your stomach full.
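
For the sake of illustration, here is a tiny Python sketch of that burn-rate math, using the figures above plus placeholder values for savings and side-hustle income; every number is an assumption to be swapped for your own:

```python
# All figures are illustrative assumptions, not advice.
credit_cards = 1200            # monthly credit card bills
rent = 1500                    # monthly rent
car_payment = 600              # midpoint of the $500-$700 range above
groceries_and_utilities = 400  # placeholder for food, power, phone, etc.

monthly_burn = credit_cards + rent + car_payment + groceries_and_utilities

savings = 5000                 # hypothetical runway cash after the layoff
side_hustle = 800              # hypothetical monthly side-hustle income

net_burn = monthly_burn - side_hustle
months_of_runway = savings / net_burn

print(f"monthly burn: ${monthly_burn}")                 # $3700
print(f"net burn:     ${net_burn}")                     # $2900
print(f"runway:       {months_of_runway:.1f} months")   # ~1.7 months
```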

Fortunately, there are ways to stretch your very flimsy budget, hopefully far enough to get you to solid ground without enough shark bites to bleed out completely.

“Rice Is Life”


Let’s not kid ourselves: we may not live in a third-world country, but there are levels of hardship that too many Americans are not prepared to withstand. This is why it’s important to identify and make use of resources that may sometimes be out of our comfort zone. Picky eaters, beware: this is gonna get rough.

White rice is one of the cheapest food resources available. A 50 lb bag averages less than $40 and will feed one person for at least two months. That said, there are serious health considerations when switching to a rice-centric diet, including a potential spike in cholesterol and substantial, rapid weight loss.

Rice is mostly starch, which converts into sugar. There may be circumstances in which you don’t have a choice, especially between high cholesterol and starvation, but if there is some wiggle room, items you can pair with rice to enrich a temporary emergency diet are canned goods: tuna, refried beans, Vienna sausage, and green beans in particular. At most wholesale stores like Costco or Sam’s Club, a month’s supply of refried beans, green beans, and tuna can be had for about $150 and, when paired with rice, will feed one person sufficiently, twice per day, on a fairly healthy diet.
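
As a quick sanity check on those grocery numbers, here is a small Python sketch; the prices come from the rough estimates above, and the serving counts are my own assumptions:

```python
# Rough estimates from the text; actual prices vary by store and region.
rice_bag_cost = 40        # 50 lb bag of white rice
rice_bag_months = 2       # the estimate above: one bag lasts about two months
canned_goods_month = 150  # monthly wholesale run: tuna, beans, green beans

monthly_food = rice_bag_cost / rice_bag_months + canned_goods_month
meals_per_month = 2 * 30  # two meals per day, assuming a 30-day month

print(f"food budget: ${monthly_food:.0f}/month")                 # ~$170/month
print(f"cost per meal: ${monthly_food / meals_per_month:.2f}")   # ~$2.83
```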

Water Supply


Even something as basic as drinking water is expensive when you operate on an extremely reduced income, but there are ways around that as well. Filtration and reverse-osmosis devices are popular items among campers and hikers because they convert most freshwater streams into drinking water cheaply and fairly efficiently.

The average portable water filtration system can be purchased for around $45 to $75, and filter refills average $20, with each filter good for at least 15-20 gallons of water between replacements. The other good news is that many of these systems are battery-powered and portable.

Keeping an Eye On the Electric Bill


In my experience, LED lighting, such as decorative string lights or stick-on light strips with lithium batteries, along with other battery-powered appliances, helps save energy and keeps the electric bill low compared to brighter types of illumination. Some LED lights are also pre-programmed to automatically dim or turn off after a period of time.

In summertime, depending on your cooling requirements, cordless fans are also worth considering; they have become much more powerful than their predecessors, with batteries large enough to last all day, and can be purchased for less than $50.

Conclusions


It’s important to understand that all of the above is based on personal experience, which may or may not apply to your particular case, but it did keep me afloat long enough to see land on the horizon.

During that time, there was a lot of trial and error, with plenty of errors that ended up costing me in the long run, which is why I want to share my experience and my thoughts with anyone who might find this information useful.

If you are out there struggling, whether you just jumped off the plank and dove into the same water I spent six months navigating, or have been swimming for a while, these are a few small ways I found to keep your head above water and keep going, and I hope they’ll help you on your journey.