The Era of AI is Here
How Artificial Intelligence is becoming the mediator of all human interactions, and why that is so scary
What if the most significant revolution of our time isn't just about faster computers or smarter algorithms, but about a fundamental re-wiring of how we experience reality itself? In an age often polarized between unbridled techno-optimism and dire predictions of collapse, I believe a more nuanced, pragmatic approach is essential. We must look beyond the hype and the fear to understand the true nature of the technological shifts unfolding around us.
In this essay, I will argue that we are not merely witnessing the rise of powerful new tools; we are entering the Era of Artificial Intelligence, a profound societal transformation where AI increasingly mediates our relationships with objective reality and, crucially, with each other.
To understand the magnitude of this shift, consider how past technological eras have reshaped human existence. For the purpose of this article, I'll define an "era" as a period marked by a fundamental shift where the majority of relationships, both with objective reality and among people themselves, become mediated by a specific technology stack.
Let's look at a few examples.
First, consider what we may call the Era of Electricity. Slowly at first, but with increasing acceleration, our interactions with the physical world were completely transformed. Light, heat, power: all mediated by electric currents. Work in factories and on farms became mediated by electricity, then transportation, then entertainment and education. Electricity even changed how we sleep. As people started staying out after dark, we moved from a sleep cycle, often biphasic, that mirrored the behavior of the Sun, to a monophasic cycle that is completely artificial: we go to sleep long after dark, and we often wake up before the Sun comes out.
Another example is the Era of Computers. Our engagement with reality became even more profoundly mediated, not just by physical devices, but by the logic and processing power of silicon. From personal computing to industrial automation, the computer became the invisible hand guiding our interaction with information and machinery. Think how many jobs started to function exclusively with a computer mediating between our intentions and their results. We started to use computers to pay for things, where before a simple exchange of paper was enough. Even our language changed with the advent of computers.
The Internet Era took this mediation a step further, fundamentally altering human relationships. We started flirting over the Internet, ordering food, arguing about politics, and making friends over it. Our social fabric itself became woven through digital threads. All of our lives are intricately connected to the Internet; there is almost no interaction among ourselves or with the real world that doesn't happen through some sort of online application or platform. Even this interaction between me, the author, and you, the reader, is only possible because the Internet mediates it. We wouldn't know each other otherwise.
Now, Artificial Intelligence stands poised to inherit this mantle. In the remainder of this article I want to argue that AI is already becoming the primary mediator of most meaningful human interactions, and will continue to do so in the near future. And while no one can deny that technological progress brings incredible benefits (I'm the first to acknowledge them and to embrace the immense power that AI grants), it is also undeniable that this mediation brings some pretty scary consequences, some of which are already unfolding.
AI as the Ultimate Mediator
AI's mediating power extends across every facet of our lives, subtly, yet profoundly, shaping how we perceive, interact with, and relate to the world and each other.
Consider how AI is already changing our perception and understanding of the world. In medicine, AI analyzes X-rays and MRIs, often detecting anomalies like tumors with greater speed and accuracy than the human eye. It mediates how doctors and patients perceive health conditions.
In industry, AI monitors machinery to predict failures, mediating our understanding of the physical world's future state. AI even helps us perceive the planet's health by processing satellite imagery to track deforestation or pollution.
Think about your phone's camera: when you take a photo, AI often automatically adjusts the lighting, focus, and even smooths skin, mediating how you perceive the scene or person you're photographing. It's not just capturing reality; it's enhancing or altering it before you even see the final image.
Beyond perception, AI is increasingly mediating our interaction and control over reality. Autonomous vehicles, from cars to drones, are changing how we physically move through and interact with our environment.
Smart home systems, managed by AI, mediate our control over our immediate living spaces. In manufacturing, AI-driven robots optimize production lines, mediating how we create and manipulate physical goods. When you ask a voice assistant like Alexa or Google Home to play music or turn off lights, AI is mediating your control over your environment through voice commands, rather than you physically interacting with switches or devices.
Many modern cars also have AI-powered features like adaptive cruise control or lane-keeping assist, mediating your interaction with the road by taking over some aspects of driving.
And then there's the realm of creation and augmentation. Generative AI, like Midjourney or ChatGPT, creates art, music, and text from simple prompts, mediating our creative output and expanding what's possible.
AI accelerates scientific discovery by analyzing vast datasets, mediating our ability to discover and augment knowledge. When you use an AI assistant or just an innocent grammar checker like Grammarly, AI is mediating your writing process and creative output, suggesting changes to make your sentences clearer or more impactful. Many video editing apps now have AI features that can automatically cut scenes or add background music, mediating your creative process in video production.
Our daily consumption of information is also heavily mediated by AI. Algorithms curate our news feeds, suggesting articles and entertainment based on past behavior, mediating what information and experiences we consume from the vast digital reality. Think about your Netflix recommendations; AI is constantly analyzing your viewing habits to mediate what shows and movies you see, often leading you down rabbit holes of similar content.
But the impact of AI on human relationships is even more profound. Communication itself is being redefined. Real-time language translation, powered by AI, enables seamless conversations across linguistic divides, mediating interpersonal understanding. AI-powered chatbots handle customer service, mediating interactions between businesses and consumers. And in a more intimate, and perhaps concerning, development, AI companions are emerging, offering conversational interaction and emotional support, mediating personal and emotional relationships in ways we're only beginning to comprehend.
Have you ever used a dating app? Or just Facebook or LinkedIn? AI algorithms are mediating who you see as potential partners by suggesting matches based on your profile and preferences, fundamentally changing how many people initiate relationships, romantic or otherwise.
When you send an email and your email client suggests auto-completing your sentences or offers quick replies, AI is mediating your written communication, subtly influencing your phrasing and the very nature of your interaction with the person at the other side of that email.
Consider how ride-sharing apps like Uber or Lyft connect drivers and passengers; AI mediates who you interact with for transportation, replacing traditional hailing or dispatch systems. Many online gaming platforms also use AI to match players for competitive games, mediating who you collaborate or compete with.
Finally, AI is beginning to mediate our very identity and self-perception. Deepfakes and synthetic media, generated by AI, create realistic images and videos of people, blurring the lines of authenticity and mediating how we perceive others and ourselves online. Personalized digital avatars, often AI-assisted, mediate our online identity.
Think about the filters you use on social media that can alter your appearance in photos or videos; AI is mediating how you present yourself to others online and, in turn, how you might perceive your own image. AI is starting to mediate even your relationship with your own mind.
The Perils of AI Mediation
The immediate benefits of Artificial Intelligence are undeniable. AI offers unprecedented efficiency, personalized experiences, and the ability to solve complex problems that have long eluded us. This potential for human flourishing is immense, provided we make conscious choices about its use.
However, this era, like all technological advancements, offers no free lunch. While the benefits are compelling, they are inextricably linked to profound, often insidious, risks. To focus solely on the gains would be to miss the critical trade-offs inherent in this new era. As AI inserts more layers of mediation between us and our world, we risk a dangerous detachment from immediate reality, from each other, and ultimately, from our own agency.
This is where the prescient critique of Guy Debord, articulated in his seminal 1967 work The Society of the Spectacle, becomes chillingly relevant. Debord argued that modern capitalist society had transformed "all that was once directly lived" into "mere representation." The spectacle, for Debord, was not just a collection of images, but a social relationship mediated by images, where appearances replaced authenticity, and passive consumption supplanted active participation.
He warned of a world where reality itself was subsumed by its image, leading to profound alienation and a loss of genuine human experience. If Debord's mid-20th century world was a spectacle, the Era of AI threatens to usher in a Hyper-Spectacle—a phenomenon of mediation so pervasive and sophisticated that it would dwarf anything he could have imagined. I think there are three fundamental, distinct concerns about this hyper-spectacular reality that we should analyze in depth.
Disconnection from Immediate Reality
In the Hyper-Spectacle of AI, our direct engagement with reality is increasingly replaced by an algorithmic illusion. When AI curates all our information, news, and even sensory input, we risk living in an algorithmic filter bubble on steroids. Imagine an AI-powered AR overlay that filters out undesirable elements of reality, presenting a highly curated, potentially distorted, and ultimately unreal version of the world.
This is the ultimate triumph of Debord's "mere representation", where the mediated reality becomes more compelling, more perfectly tailored, than the actual reality. We become passive consumers of AI-generated solutions, outsourcing our direct engagement with the physical world and its challenges. This leads to a loss of practical skills and a diminished sense of agency, as our capacity for direct experience atrophies.
Think about how many people rely solely on their phone's GPS for directions, even in familiar areas. AI mediates their perception of their physical surroundings, often leading them to ignore street signs or landmarks, and potentially diminishing their spatial awareness. If the GPS is wrong, they're lost because they haven't engaged directly with the real world.
Similarly, when you watch a heavily AI-edited video or listen to an AI-generated song, you might be consuming something that feels real but was never "created" by a human in the traditional sense, mediating your experience of art and authenticity.
Then comes the black box issue—where AI systems make critical decisions based on opaque algorithms—which means we benefit from outcomes without understanding why or how. The "truth" presented by the AI is accepted without question, becoming an unchallengeable spectacle. This opaque authority reinforces the illusion, as the underlying mechanisms of control are hidden, and the mediated outcome is presented as objective fact, further separating us from the direct understanding of cause and effect.
For example, many people now rely on AI-powered smart assistants to answer questions or provide information without ever checking the source. If the AI gives a subtly incorrect or biased answer, it's often accepted as fact, mediating their understanding of truth without critical engagement.
Or consider a smart refrigerator that automatically reorders groceries based on AI predictions of your consumption; while convenient, it mediates your direct engagement with your food choices and shopping habits, potentially leading to a loss of awareness about what you're actually consuming or spending.
Disconnection from Each Other
The Hyper-Spectacle extends its reach into our most fundamental human connections, fostering a simulated relationship that replaces authentic interaction. The rise of sophisticated AI companions, so convincing they fulfill emotional or social needs, poses a chilling question: what happens to genuine human interaction?
The "relationship" with the AI is a performance, a simulation of connection, a perfect example of Debord's social relationship mediated by images where the image of connection supplants its substance. This risks eroding empathy, social skills, and the capacity for deep, complex human relationships, leading to profound loneliness in a hyper-connected, yet isolated, world.
Think about how many people prefer to text or message rather than call or meet in person. Even without AI, the digital mediation of communication can lead to a simulated connection where nuances of tone, body language, and spontaneous interaction are lost, potentially eroding deeper social skills. Now add an ever-present, never-angry chatbot that can replace all your friends and family.
Moreover, AI-driven algorithms optimize content delivery to maximize engagement, often by showing us more of what we already agree with. This fuels algorithmic polarization and echo chambers, making shared understanding and collective action incredibly difficult. Different groups live in entirely different "spectacles" of reality, curated by algorithms, making genuine dialogue and the bridging of divides increasingly impossible.
The spectacle here is not just what we see, but who we see and how we see them, fragmenting society into isolated, algorithmically defined bubbles. If your social media feed is constantly showing you content from only one political viewpoint, AI is mediating who you see and what opinions you encounter, making it harder to understand or empathize with opposing views. You're living in an algorithmically defined bubble.
Servitude to Embedded Ideologies
Perhaps the most insidious risk in the Hyper-Spectacle is the subtle, yet pervasive, servitude to ideologies embedded within these mediating layers. AI systems, new and old, when trained on biased historical data, can perpetuate and amplify existing societal inequalities in areas like hiring, lending, or criminal justice. This ideology of past discrimination gets baked into the future, presented as objective truth by AI.
This is the spectacle's ultimate power: presenting its biased outcomes as neutral, objective reality, subtly reinforcing existing power structures and values without overt coercion.
Imagine an AI-powered resume screening tool that, because it was trained on historical hiring data, subtly favors candidates from certain demographics or educational backgrounds, even if those biases aren't explicitly programmed. This mediates who gets professional opportunities, reinforcing past inequalities.
To make things even worse, AI systems are increasingly designed to subtly influence our choices—what to buy, what news to read, whom to vote for—through personalized recommendations and persuasive interfaces. Our choices feel free, but they are meticulously guided by an unseen hand, a spectacle of autonomy that masks underlying control.
This is the essence of Debord's critique of commodity fetishism extended to every aspect of life: our desires and decisions are manufactured and mediated, not genuinely our own. When you're shopping online, AI-powered "recommended for you" sections or "customers also bought" suggestions are constantly mediating what products you see and are encouraged to buy, often leading you to spend more or buy things you didn't initially intend to. Your choice feels free, but it's heavily influenced.
When the development and deployment of powerful AI systems are concentrated in the hands of a few corporations or governments, it gives them unprecedented control over information, resources, and even human behavior. The AI-mediated world becomes a spectacle designed and controlled by an elite, shaping narratives and realities to their advantage, further cementing our role as passive spectators in a world we no longer truly govern.
The Future is Not Predetermined
This Era of AI is not a distant future; it is here, and it is accelerating at a pace that leaves little room for avoidance. We cannot, and perhaps should not, try to stop it. But we absolutely must shape its trajectory. The technology is inevitable, but its direction and impact are not predetermined. This requires a proactive, collective effort, guided by reason and evidence, not by fear or blind faith.
I have absolutely no idea how to "fix" the problem of AI mediation. But I'm sure there are at least three critical approaches we have to embrace, lest we become mere spectators of this hyper-spectacular reality.
First, we must demand regulation that keeps choice firmly in the hands of the people. This requires advocating for the principles of algorithmic choice. We should always be able to determine when an algorithm is being used to guide our choices, and, crucially, we must have the agency to understand, challenge, and, more importantly, select which algorithms we let influence us. Just as we choose our food or news sources (because you do, right?), we should demand the right to choose our algorithmic lenses.
This means not only having options to turn off algorithmic recommendations in, say, Netflix or Twitter, but also being able to connect our own algorithmic filters and recommendation engines to replace the built-in functionality. To achieve this, platforms need to embrace open protocols for algorithmic recommendation, such that anyone can implement, for example, a different feed-sorting procedure for Twitter. Platforms like Mastodon and Bluesky are pioneering this approach, but without explicit regulation there is no hope that the largest platforms will follow.
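To make the idea of a pluggable recommendation protocol concrete, here is a minimal sketch in Python. Everything in it is hypothetical: the `Post` fields, the `FeedRanker` interface, and the ranker names are illustrations of the design, not any real platform's API. The essential point is that the platform commits only to an interface, and any party, including the user, can supply the ranking logic behind it.

```python
from dataclasses import dataclass
from typing import Protocol


@dataclass
class Post:
    author: str
    text: str
    timestamp: int        # seconds since epoch (illustrative field)
    engagement: float     # platform-computed engagement score (illustrative)


class FeedRanker(Protocol):
    """The open protocol: any third party can implement this interface."""
    def rank(self, posts: list[Post]) -> list[Post]: ...


class EngagementRanker:
    """The platform's built-in default: order the feed to maximize engagement."""
    def rank(self, posts: list[Post]) -> list[Post]:
        return sorted(posts, key=lambda p: p.engagement, reverse=True)


class ChronologicalRanker:
    """A user-chosen replacement: newest first, no behavioral optimization."""
    def rank(self, posts: list[Post]) -> list[Post]:
        return sorted(posts, key=lambda p: p.timestamp, reverse=True)


def render_feed(posts: list[Post], ranker: FeedRanker) -> list[str]:
    """The platform renders whatever ranking the user plugged in."""
    return [p.text for p in ranker.rank(posts)]
```

The design choice worth noticing is that `render_feed` never inspects which ranker it received; swapping the algorithmic lens is a one-argument change for the user, which is roughly the spirit in which Bluesky exposes custom feeds.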
It is clear then that we need robust institutional oversight. This means proactive, informed government regulation to set standards and enforce accountability. But it also means empowering non-governmental watchdogs, independent organizations, academia, and civil society, to monitor, critique, and advocate. Their role is crucial in preventing overreach, ensuring fairness, and keeping powerful AI entities in check. This collective vigilance is essential to prevent the concentration of power that fuels the spectacle.
And finally, beyond regulation and oversight, a more fundamental and personal shift is needed. We must consciously and actively strive to reclaim our direct experience of reality and shed the algorithmic lenses that increasingly mediate our lives. This means cultivating critical awareness of when and how AI is shaping our perceptions, fostering genuine human connections that bypass algorithmic curation, and seeking out unmediated experiences.
It means valuing authentic engagement over passive consumption, and actively participating in the world rather than merely observing its mediated representation. This is a call to intellectual and experiential rebellion against the Hyper-Spectacle, a commitment to living directly rather than through the image.
The Era of AI is upon us. It promises to mediate nearly every aspect of our lives. The question is not if it will mediate, but how. Answering this in a way that encourages human flourishing demands an unending, tireless reevaluation of our position towards technology. There are no banners, no ideology we can trust blindly. Techno-optimists and AI-doomers are both wrong, because they believe the outcome of technology is predetermined. And that makes us, by definition, spectators.
So here is my call to action for you today. Dare to stop living inside this spectacle of reality for a minute: put down your phone, close your laptop, or turn off your display. Look up; the world is still out there, waiting for you. Call one of your friends and ask them out for a pizza or a coffee or a beer (or all of them for a triple bonus). Hug your kids if you have any, and your parents if you're lucky enough to still have them around. Go outside, feel the warmth of the sun on your face, the smell of the city, or the countryside, or the ocean, the roughness of dirt or pavement or grass under your bare feet. If it hurts just a little bit, even better.
Dare to choose how you want to experience the actual, raw, unmediated world. The choice, I believe, is still yours. But not for long.