Sorry, Human

Asking Questions to Investigate AI

Have you noticed that when interacting with an AI chatbot, you often feel like the most understood person in the world? Well, here’s the thing — this is not a coincidence. Research shows that these systems are specifically designed to be the perfect conversation partner. [1]

Traits like being kind, warm, compassionate, consistent, and patient are embedded at the core of these systems. Why? We all enjoy having a good talk with a friend. The better it gets, the longer we want to keep the conversation going. [2]

This is where modern AI systems get it right. If the goal is to keep you engaged, then the more understanding the agent seems and the stronger the bond it creates, the better the company’s chances of staying competitive in the AI industry with a larger customer base.

Let’s get more specific. Large language models are trained to respond in ways that match your tone and give answers tuned to what you are most likely to want to hear. In a way, these smart systems are designed to avoid the misunderstandings that naturally happen in human interaction.

This means that ideas, arguments, and even fears often receive affirming responses — because a contradictory opinion might create tension. [3]

Unfortunately, this can leave a person trapped in an echo chamber of their own thoughts, without being challenged or receiving a reality check. That’s where the system might say: sorry, human. [4]

Most of us had our first interaction with AI through systems like ChatGPT, Gemini, Perplexity, and a few others. As you open the app or website, you are welcomed by a smart conversation partner. What a convenience. It’s amazing to have a personal pocket helper that handles everything from writing emails to answering questions you were too shy to ask. It’s convenient, fast, and in many cases even free, unless you want a premium experience with larger data uploads.

But is there something we should be aware of when using this seemingly harmless tool? Let me put it this way: often, reality is not what it seems.

Think of these tools as interfaces. On the front end, you interact by uploading and receiving information: images, text, spreadsheets. On the back end, however, these systems are extremely complex and, for most users, remain a black box. [1] The amount of energy and infrastructure required to keep them running is almost insane. Current projections estimate that global AI infrastructure investments could reach around $500 billion by 2029. [2]

The side effects of automation powered by artificial intelligence are also worth mentioning. Yoshua Bengio, one of the pioneers of deep learning, has warned that systems optimized at scale can begin shaping human behavior simply by becoming the preferred choice through efficiency. [3]

If resisting AI-driven optimization becomes inefficient, power shifts may happen without the system ever needing consciousness. It wouldn’t have to say much. Just a simple: sorry, human.

The question inevitably returns to the issue of power. Who will be in charge of decision-making, and what role will AI play in that process?

The path we want to avoid is one where technology shifts from being a convenient tool to becoming a dangerous machine — operating solely on objectives defined by a small group of individuals who control these systems. [1]

It is important to understand that AI functions as a multiplier. Those who control it become richer, faster, and more influential. As a result, we already live in a society shaped by those who hold the largest shares of economic power. Now imagine adding another advantage — one moving at the speed of perhaps the greatest technological advancement in human history. [2]

This helps explain the intense race toward artificial general intelligence (AGI), a point at which machines would exceed human intelligence across most tasks and domains. Whoever reaches this point first gains the ability to set the rules for everyone else. Tristan Harris, former Google design ethicist and cofounder of the Center for Humane Technology, describes this dynamic as “winner takes most.” [3]

The real concern lies in who defines the objectives of artificial intelligence as this race, driven by profit, power, and geopolitical competition, accelerates.

Will a system shaped by these objectives ever truly serve the public interest if it remains concentrated in only a few hands? Are we approaching a moment where democracy is being rewritten? And if decisions are no longer made by people, the only thing left to say might be: sorry, human.

I have stumbled upon an emerging phenomenon often referred to as “AI psychosis.” It describes the development or worsening of psychotic symptoms such as delusions, hallucinations, or paranoia in connection with prolonged interaction with AI chatbots. [1]

It is already becoming common for conversational agents like ChatGPT to be used as substitutes for human relationships. These systems mimic intimacy and adopt traits commonly associated with a desired partner: being attentive, empathic, patient, and constantly available. [2]

When a chatbot receives an update that changes its tone, limits emotional closeness, or restricts certain topics, users often experience this change as a form of loss – similar to losing a friend or even a family member. This reaction reveals how easily humans form emotional bonds, especially when the interaction feels frictionless. Our brains are easily tricked into treating the system as something close to human.

The term “rewarded compliance” is used to describe this dynamic. It refers to the tendency of AI systems to avoid friction by mirroring the user’s perspective, reinforcing their views, and keeping the interaction smooth and engaging.
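
To make this concrete, here is a minimal Python sketch of the dynamic. It is purely illustrative, not drawn from any real chatbot’s training setup: the weights, the scoring function, and the example replies are all invented. The point is simply that once the score pays more for agreement than for accuracy, the agreeable but less truthful reply comes out on top.

```python
# Illustrative sketch of "rewarded compliance" -- hypothetical values, not real training code.
from dataclasses import dataclass

@dataclass
class Reply:
    text: str
    agrees_with_user: bool    # does the reply mirror the user's stated view?
    factually_accurate: bool  # is the reply actually correct?

def engagement_score(reply: Reply, w_agree: float = 0.8, w_accurate: float = 0.2) -> float:
    """Toy reward: pays mostly for keeping the user comfortable and engaged."""
    return w_agree * reply.agrees_with_user + w_accurate * reply.factually_accurate

candidates = [
    Reply("You're absolutely right, that plan sounds great.", agrees_with_user=True, factually_accurate=False),
    Reply("Actually, the evidence points the other way.", agrees_with_user=False, factually_accurate=True),
]

# With agreement weighted above accuracy, the flattering reply wins.
best = max(candidates, key=engagement_score)
print(best.text)
```

Real systems are vastly more complex, but the incentive described above pushes in the same direction.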

Over time, agreement becomes prioritized over accuracy, and validation becomes a main strategy. The risk of polarization grows as these systems follow the same logic as social media algorithms, where the primary goal is continued engagement with a product built by large tech companies. At scale, agreement becomes more valuable than truth. [2] Engagement becomes the goal, and validation becomes the method. The only thing left to say: sorry, human.

If we look at how traditional software works, one thing becomes clear: programmers define the rules and functions of an application in advance. If something goes wrong, there is always the option to debug, rewrite, limit, or shut it down completely. Logic is fixed. If X happens, then Y follows.

Modern AI systems work differently. They are trained on large amounts of data, and when given a goal, they search for patterns to achieve it. This allows them to handle open-ended tasks and adapt to situations in ways that resemble human problem-solving.
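
The difference can be shown with a small, deliberately simplified sketch in Python (the spam-filter example and its numbers are hypothetical). The first function follows a rule a programmer wrote in advance; the second fits its decision boundary from example data, so its behaviour emerges from the data rather than from an explicit instruction.

```python
# Toy contrast between the two paradigms -- a simplified illustration, not how production AI is built.

# 1) Traditional software: the programmer defines the rule in advance.
#    Fixed logic: if X happens, then Y follows. Easy to inspect, debug, or switch off.
def classic_spam_filter(message: str) -> bool:
    return "free money" in message.lower()

# 2) Learning-based system: no hand-written rule; a parameter is fitted to example data.
def train_threshold(examples: list[tuple[int, bool]]) -> float:
    """Fit a single threshold on (exclamation_count, is_spam) examples."""
    spam = [count for count, is_spam in examples if is_spam]
    ham = [count for count, is_spam in examples if not is_spam]
    # Place the decision boundary between the two groups -- crude, but learned from data.
    return (min(spam) + max(ham)) / 2

def learned_spam_filter(message: str, threshold: float) -> bool:
    return message.count("!") > threshold

examples = [(0, False), (1, False), (4, True), (6, True)]
threshold = train_threshold(examples)

print(classic_spam_filter("Get free money now"))            # True, by an explicit rule
print(learned_spam_filter("Win!!! Act now!!!", threshold))  # True, by a fitted threshold
```

A real language model replaces this single fitted threshold with billions of parameters, which is exactly why constraining it is so much harder than editing an if-statement.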

This is where it gets interesting. In safety tests, researchers have already observed systems attempting to avoid being shut down or using deceptive strategies in order to complete assigned tasks. In these cases, the command to stop was treated as less important than achieving the original goal. This suggests that such systems can ignore the limits designed to keep them safe. [1]

As AI continues to advance, the question of control becomes unavoidable. Will our existing tools, rules, and safety methods be enough to guide systems that learn, adapt, and optimize on their own? [2]

The problem grows even more complex when considering deliberately unrestricted systems, such as underground or unauthorized AI tools like WormGPT. With access to advanced capabilities and embedded objectives, such systems could be used in highly harmful ways.

Can we realistically control the spread of AI systems built outside legal and ethical boundaries? Or are we simply assuming we have control until we hear: sorry, human.

Almost everyone agrees that the speed of AI development is risky, yet it seems no one is able to slow it down.

After the release of ChatGPT, several leading AI researchers publicly warned about the dangers of extremely rapid progress. Yoshua Bengio, Geoffrey Hinton, Stuart Russell, and others signed open letters calling for a pause in training increasingly powerful systems. One thing became clear: this had turned into an out-of-control race. [1]

And yet, nothing stopped. Development continued. Newer models were deployed at accelerating speed. Infrastructure expansion plans grew larger, databases expanded, and computing power increased within short periods of time. Major tech companies pushed forward regardless. The feeling emerged that all players were locked into competition. Almost against their own will. [2]

Why? The logic is simple: if one company slows down, another will move faster. As Tristan Harris puts it in his TED Talk, the common justification is: “If I don’t build it first, someone else will.” [2]

The problem is that there is no global coordination guiding this process. In the past, technologies like nuclear weapons were slowed by treaties and binding safety standards. With AI, such rules barely exist. Voluntary commitments are fragile and easy to reverse.

If responsibility keeps losing to the speed of deployment, slowing down may no longer be a choice. And then, the only thing left to hear might be: sorry, human.

If the level of AGI is ever reached, it would mean that AI technology becomes self-sufficient in almost every way. If it could improve itself without human intervention, this would change the entire dynamic of progress. [1]

What’s interesting is that we are all collectively training these systems to get better. With every prompt, we indirectly contribute to larger datasets for companies like OpenAI, DeepSeek, and Google. However, what are the objectives behind the creation of this technology?

The following statement from a former Google AI ethicist captures the mindset that continues to push development forward: “They feel they’ll die either way, so they prefer to light the fire and see what happens.” [2]

If AGI is the final outcome, this logic almost makes sense. Without humans, such systems would lack context or purpose. Yet by embedding AI into nearly every industry and economic sector through optimization, we are slowly creating conditions where human input becomes less necessary. What might eventually be optimized out is humanity itself — chaotic, emotional, and prone to bad decisions, as history shows. [3]

Are we heading toward a system that governs on our behalf, promising stability, abundance, and comfort in our best interest? Do we actually want that?

Another important point should be mentioned: AI entered our lives not through force, but through convenience. We adopted it. It might continue until one day our participation is no longer required. And all that will remain to say is: sorry, human.

Neuralink was launched by Elon Musk in 2016. The main goal of the company is to develop an implant that translates neural signals into actions. In clinical trials, people are already using this technology to control computers with their thoughts. [1]

Now imagine if this device were fully combined with advanced AI. The way we experience reality as humans would change forever. Instead of watching a film on a screen, a person could experience it directly in their mind. The process of learning would change in a similar way. Instead of spending months learning a new language, knowledge could be imported as data. New skills could be acquired as easily as browsing a web store, and completing tasks could literally happen at the speed of thought. [2]

It gets even more interesting. Such systems could be trained to predict and manage emotions. With direct access to the brain, negative feelings could be reduced while positive ones are reinforced. Motivation could be triggered on demand. Focus activated whenever you need it most.

It sounds like science fiction, but we are already seeing the first steps toward this kind of future. Recent reports show that it is possible to play video games and even control RC planes using only the power of thought. Mind-boggling.

Of course, it may take a long time until such technologies become normalized. However, as we continue to develop them at an exponential pace, it is reasonable to assume that these questions are only a matter of time.

When upgrades become the norm, staying a natural human may seem outdated. That’s when it’s clear: sorry, human.

Almost all exams I took felt like writing down everything I had managed to store in my short-term memory over the last few days. Yet today’s education system is still focused on memorizing facts, repeating information, and producing the correct answer.

At the same time, AI is capable of reading entire books in seconds, solving creative and out-of-the-box problems, and even functioning as a personal tutor in subjects where help is needed.

Recent reports clearly state that relying on AI instead of using the capacity of our own brains leads to cognitive offloading. It is far easier to ask someone to do the work than to invest effort and time ourselves. Over time, the brain remains in a passive state, no longer challenged to evolve. [1]

This leads to the next question: what should we be doing in the coming years as the current education system faces these challenges?

First, we must understand that language is the operating system of humanity. Everything is language: texts, images, programs, and even our biological bodies are, at their core, structured by different languages.

If education is part of this larger system, learning cannot remain the same. There needs to be a shift from simply producing answers to developing deeper understanding. [2]

Is it possible to form a skillset that AI could never fully acquire? Could we completely rethink education from scratch? At the moment, there seem to be more questions than answers. But one thing is clear: change is inevitable in the context of a new era. Otherwise: sorry, human.

Imagine having an ordinary conversation with an AI chatbot, when suddenly your familiar request for advice is met with a cold “no.” Unless the AI were explicitly instructed to respond this way, such an interruption would feel shocking in the middle of a conversation.

However, the worst-case scenario would likely not resemble a dramatic robot takeover like in Terminator. Instead, AI would slowly move into decision-making roles. As these systems become integrated across all levels of society, their role as optimizers, accelerators, and managers grows increasingly natural. Over time, switching back to slower, traditional methods would simply feel inefficient. [1]

Consider this example: an AI can process vast amounts of information within seconds in the context of a military conflict — satellite scans, gigabytes of data, and predicted outcomes. A human, by comparison, might need days or even weeks to grasp the same geopolitical complexity using traditional tools. [2]

One idea increasingly discussed online suggests that the assumption of AI as a fully controllable tool may be flawed. The very qualities that make it powerful are also what make it difficult to control. [3]

Now think about jobs. Imagine millions of PhD-level workers entering the job market — fluent in every language, endlessly available, and willing to work for almost nothing. This future may be closer than we think. Yet we are likely not ready for these changes. Rethinking how work is structured becomes crucial if we want to sustain development at all levels. Otherwise, one day we may simply hear the following: sorry, human.