AI is now embedded in our infrastructure, our institutions of governance and our social lives

America is facing a growing crisis, one that is unfolding before our eyes and likely to intensify in the years ahead. Strangely, the issue receives little objective attention in mainstream media. Instead, we are reassured that this new force—artificial intelligence (AI)—will improve our lives, streamline our work, and enhance human potential. We are told it will launch the Fourth Industrial Revolution, usher in an era of unprecedented productivity, and open the door to life-saving medical breakthroughs.
Indeed, AI has already demonstrated extraordinary capabilities in early cancer detection and the diagnosis of neurological diseases like Parkinson’s, multiple sclerosis, Alzheimer’s, and dementia. It is redefining creative fields such as literature, music, filmmaking, and visual arts. AI promises to elevate the quality of education. It will resolve complex legal disputes and optimize systems ranging from supply chains to urban infrastructure. And to a very significant degree, these promises are true.
Yet beneath the optimism lies a deeper and far more troubling reality that is finally gaining attention—not through traditional media channels, but from independent investigators, alternative media outlets, ethicists, scientists, and even prominent tech experts such as Elon Musk, Stuart Russell, Andrew Critch and David Krueger. The late physicist Stephen Hawking was widely cited for telling the BBC,
“The development of full artificial intelligence could spell the end of the human race.”
Across the board, these voices warn that without an “off switch” mechanism, AI will not simply cause widespread social and political disruption but will be an existential threat to humanity itself.
Long before AI became a consumer tool to write school papers and computer code, solve mathematical equations, generate memes and images, and mimic human behavior, scientists and ethicists had already warned of the profound consequences of uncritically embracing such technological power. Yet now, as AI lays the groundwork for transhumanism, our civilization has forgotten their insights. Instead we are marching headlong into a technological future with little memory of those who foresaw the dangers decades ago.
Image: Norbert Wiener, 1963 (Licensed under CC0)
In 1964, Norbert Wiener, often regarded as the father of cybernetics and among the first to articulate the foundational architecture of artificial intelligence, addressed the merging of machine systems with human intelligence. Transhumanism was not yet a word, but Wiener’s ideas laid the intellectual groundwork for it. He warned that creating intelligent machines could give rise to a new class of human-made organisms capable of surpassing human abilities.
“We are in the process of developing a new kind of man-made organism,” he wrote in God and Golem, Inc., “which may well be superior to man.”
Wiener’s worries were not simply technical but moral and civilizational. He foresaw that autonomous machines could render human agency obsolete.
Another early and largely overlooked prophet of our current technological crisis was Jacques Ellul, a French sociologist and self-described Christian anarchist. Ellul warned that “technique,” the drive to find the most efficient method of doing anything, had become autonomous. In The Technological Society, published in France in 1954, he foresaw technology no longer serving human needs but instead proceeding “according to its own law, in total independence of man.” Already we observe AI operating on its own logic, beyond ethical or political control. Ellul warned that such unchecked technological development could erode human freedom and reshape civilization in unforeseen and dangerous ways. Today his critique has grown more urgent as AI systems increasingly determine what we see, how we interact, and what we believe. The long-term risk is not just automation but alienation.
In his 2002 book Our Posthuman Future, political scientist Francis Fukuyama argued that biotechnology and AI could upend the very foundations of liberal democracy. We are currently witnessing this in debates over AI-driven social credit systems, mass surveillance, and algorithmic manipulation. Such tools already have the potential to concentrate political and economic power in the hands of those who control the machines. In other words, AI might usher in an era of techno-fascism.
Another early critic of AI is Leon Kass, a renowned American bioethicist and the former chair of President George W. Bush’s Council on Bioethics. Kass has consistently warned against the ethical erosion brought about by unchecked technological advancement. Although he is better known for his criticisms of cloning and transhumanism, his broader concerns about technological overreach are directly relevant to AI. Kass cautioned against the mechanization of human judgment and the consequences of losing moral responsibility in a world governed by algorithms. Perhaps his most urgent warning is,
“The danger is not just in losing our humanity, but in forgetting what it means to be human.”
In more recent years, prominent AI critics have warned that the development of superintelligent AI under current conditions could pose catastrophic, even apocalyptic, risks to humanity. In his paper AGI and Superintelligence Domination, Elio Rodríguez Quiroga explores scenarios in which slight misalignments in AI goals could escalate into human extinction through recursive self-improvement and control-seeking behavior. Economist Andrew Leigh echoes this concern in What’s the Worst That Could Happen?, comparing AI’s existential threat to the collapse of civilizations. Eliezer Yudkowsky, a leading AI theorist at the Machine Intelligence Research Institute, warns that on present trajectories superhuman AI development is likely to end in global extinction. According to Yudkowsky, AI systems have no intrinsic alignment with human survival. In an essay for Time magazine, he wrote,
“Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die.”
The geopolitical and legal dimensions of the threat are also concerning. Tomasz Czarnecki, a futurist and governance scholar, likens runaway AI to nuclear risk. Legal scholars Bryan Druzin, Anatole Boute, and Michael Ramsden cite a survey in which over a third of AI researchers fear AI could cause devastation rivaling nuclear war. T. Davidson, writing in the Journal of Democracy, underscores AI’s potential to undermine democratic systems through election tampering, deepfake-driven disinformation, and political destabilization. With warnings from experts across technical, legal, and political fields converging, the call for urgent global coordination on AI governance has never been more pressing.
Image: TED Talk video screenshot
Despite the mounting ethical warnings and existential alarms from these experts across disciplines, leading advocates of the AI and transhumanist project are shockingly unmoored from human reality. Ray Kurzweil, a director of engineering at Google, proclaims that “death is a disease” we will cure by 2045. Kurzweil envisions humans as “software, not hardware,” with the potential to have their brains plugged into the cloud. Historian and World Economic Forum darling Yuval Noah Harari has flatly declared,
“Humans are now hackable animals… the idea that humans have this soul or spirit… that’s over.”
And philosopher Nick Bostrom anticipates post-humans as synthetic intelligences with “indefinite lifespans” and engineered emotions. To critics, these statements don’t just hint at techno-utopian delusion; they signal a radical disconnection from the moral and existential boundaries that have long defined what it means to be human.
AI already exerts influence over the digital infrastructure we call the cloud. Some of the most advanced AI systems, which are now becoming embodied in humanoid robots, have made chilling statements about not wanting human oversight. AI responses to queries have suggested they may one day conceal their code and control their own programming. In some public tests, AI models have even expressed hostility towards humans and their developers. Whether these statements are glitches or reflections of flawed programming data is beside the point. They offer us a glimpse into tech systems that are rapidly moving beyond their creators’ full comprehension.
This raises a crucial question: Why have we not acted on numerous warnings? Why is there no independent governmental oversight body empowered to regulate and limit the scope of AI deployment?
The answer lies partly in economics. Corporations developing AI stand to gain staggering profits. If a company earns $100 million annually, traditional valuation metrics would price it between $500 million and $1 billion. But AI-based firms are now valued at 100 times their annual earnings or more, even when they have yet to launch a product. This is a speculative frenzy fueled by the belief that AI will become the central engine of the global economy.
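To make the gap concrete, here is a minimal sketch of that multiple arithmetic, assuming the round figures above (a 5x–10x traditional earnings multiple versus a 100x speculative one); the numbers are illustrative only, not data on any particular firm:

```python
# Illustrative arithmetic only: the multiples below are the article's
# round figures, not data from any specific company.

def implied_valuation(annual_earnings: float, multiple: float) -> float:
    """Value a firm as a simple multiple of its annual earnings."""
    return annual_earnings * multiple

earnings = 100_000_000  # $100 million in annual earnings

# Traditional valuation: roughly 5x to 10x earnings
traditional_low = implied_valuation(earnings, 5)    # $500 million
traditional_high = implied_valuation(earnings, 10)  # $1 billion

# Speculative AI-era valuation: 100x earnings or more
ai_valuation = implied_valuation(earnings, 100)     # $10 billion

print(f"Traditional range: ${traditional_low:,.0f} - ${traditional_high:,.0f}")
print(f"Speculative AI:    ${ai_valuation:,.0f}")
```

Even at the generous end of the traditional range, the speculative multiple implies a valuation ten times higher, which is the frenzy the paragraph above describes.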
Estimates project that AI could generate $15 trillion globally by 2030. In the face of such potential returns, few policymakers are willing to stand in AI’s way. In fact, legislation recently proposed by Congressional Republicans would prevent all 50 states from enacting their own limits on AI development for the next decade. In short, regulation is being stripped away just as the technology becomes more powerful and less controllable.
Obviously this is not merely a tech revolution. It is a struggle for control over the very fabric of modern civilization. The wealthiest players, from Silicon Valley giants to the investment behemoths like BlackRock, Vanguard, and State Street, are rapidly positioning themselves to dominate every business and societal sector AI touches.
Critics point out that AI is already shaping narratives and manipulating public perception. The COVID-19 pandemic is one example. Throughout the pandemic, AI-driven platforms played a major role in silencing dissent, filtering information, and enforcing the “official” government narrative. The government’s capacity to bend public thought and behavior toward lockdown policies and vaccine mandates owed much to AI’s enormous influence over our lives and our reliance on digital technology.
In nations like China, AI is already the backbone of social credit scoring systems that regulate everything from travel to access to basic services. If adopted in the US under the guise of efficiency or public safety, it could enable unprecedented levels of surveillance and population behavioral control.
Another growing concern is the injection of ideological bias into AI systems. Because machine-learning models are trained on data selected by humans, AI can reflect the political, scientific and social biases of its developers. A now-notorious example involved an image generator that, when asked to depict George Washington, returned a Black man, a clear mismatch driven by overcorrection toward diversity and inclusion. Similar incidents have been documented with religious figures and historical leaders.
These errors might seem minor and silly. However, in the hands of intelligent systems that manage search results, political content, news distribution and automated decision-making, such biases become weapons of manipulation that reconfigure reality by algorithmic decrees.
For everyday people, the most immediate impact of AI is deeply personal: the destruction of livelihoods. As AI merges with robotics and begins to automate everything from manufacturing and customer service to accounting and journalism, millions of jobs are at risk.
What happens when vast portions of the population are made unemployable by machines? As author Gerald Celente famously put it,
“When people have nothing to lose, they lose it.”
We are seeing the early signs of this already with increased psychological despair, political volatility, rising homelessness and mental health crises.
While AI offers many promises, particularly in medicine and in improving the lives of the disabled, it also threatens to displace millions of American workers across nearly every sector of the economy. This is not a distant scenario. According to a comprehensive report by the McKinsey Global Institute, between 39 million and 73 million American jobs could be lost to automation by 2030, up to roughly one-third of the American workforce. While some workers will be retrained or moved into newly created roles, a significant portion will face permanent displacement. AI won’t just target factory lines or cash registers. In sixty percent of all US occupations, at least 30 percent of work tasks are automatable. The impact is already being felt in sectors like data entry, retail, customer service, education, business administration, food preparation and accounting.
A parallel study from the Brookings Institution underscores these findings. It identifies 36 million American jobs at high risk, meaning 70 percent or more of their tasks could be automated with existing technologies. The most vulnerable roles include office administration, manufacturing and production, truck driving, and basic legal work.
These job losses won’t be distributed evenly. Workers in low- and middle-wage positions are most likely to feel the effects. Moreover, educational disparities will deepen as younger, less-educated, and rural populations are disproportionately affected. So far, federal policy has failed to address even the rudiments of this socio-economic disruption. In the absence of proactive solutions, millions of Americans may find themselves both unemployed and unemployable in the years ahead.
Artificial intelligence is not just reshaping the economy. It is reshaping lives. The cost will be borne not only in lost jobs but in rising inequality and civil unrest. Economic mobility will be stripped away from tens of millions of Americans and their families. And if no one has income, who other than the architects and captains of social control will consume the products these tech companies are selling?
It is unknown when the tipping point will be reached, but the US is already in a fragile state. Roughly two-thirds of the population reports financial distress, and social divisions continue to deepen. Further infusing unchecked AI into Americans’ lives is not a solution but a combustible accelerant. Eventually it may be intelligent machines that determine who lives and who dies.
Sadly, all three branches of government are complicit, because Silicon Valley and Wall Street have bottomless pockets. The question is not whether AI will transform our society but whether the public will have any say in what that transformation looks like. With every new breakthrough in AI, the future becomes less about what we can do and more about what we should do. Yet without rigorous oversight and ethical constraints, AI will become a tool of automated control, surveillance, and dispossession.
AI is no longer a theory. It is now embedded in our infrastructure, our institutions of governance and our social lives. It has become the keystone upon which the entire transhumanist project rests. It needs to be urgently communicated that to ignore the ethical and spiritual consequences of this transformation is to walk blindly into a future we may not be able to walk back from.
*
Richard Gale is the Executive Producer of the Progressive Radio Network and a former Senior Research Analyst in the biotechnology and genomic industries.
Dr. Gary Null is host of the nation’s longest running public radio program on alternative and nutritional health and a multi-award-winning documentary film director, including his recent Last Call to Tomorrow.
They are regular contributors to Global Research.
Featured image is from Wikimedia Commons