Why do some people think about AI the wrong way?

I’ve noticed a recurring pattern in how we talk about AI:

We often measure it against ourselves.

By "ourselves" I mean humanity collectively. We compare its "intelligence" to human intelligence. We judge its "performance" by how closely it mimics human reasoning. We even criticize it for not feeling emotions the way we do. But what if this is the wrong framework altogether?

Human thinking is not a universal benchmark; it's just one form of intelligence shaped by many constraints: our bodies, our senses, our cultures, our experiences. Everything about how we reason and solve problems is conditioned by these factors.

So why should we expect AI to think the same way? And more importantly, why should we assume that our way is the optimal one?

At a recent dinner, someone said: "AI isn't as intelligent as humans! It doesn't feel emotions!"
That statement reveals two deeper issues.

First, we still don't have a clear, universally accepted definition of intelligence. It has been debated in philosophy for centuries. So claiming that one system is "less intelligent" than another is, at best, subjective.

Second, why should AI feel emotions at all? And even if it did, why would or should it experience them like humans? Do all living beings experience emotions the same way? And is emotion always an advantage in reasoning or decision-making?

These questions quickly move us into philosophical territory, and we won't go there right now.

What I want to point out is that evaluating AI through a purely human lens may not be fully relevant. It's like asking a fish to climb a tree and concluding it lacks ability. Different systems operate under different rules, constraints, and strengths.

Large language models, for example, don't "think" like humans. Their form of reasoning is fundamentally different. But different doesn't mean inferior. Just as two people can solve the same problem in completely different ways, machine learning systems may reach valuable outcomes through paths that are unfamiliar to us.

What's truly fascinating is that AI challenges us to rethink and reevaluate our own intelligence. For the first time, we may need to seriously consider that human intelligence is not the reference point, and certainly not the most optimized one in every context.

This isn't about being pro- or anti-AI. It's about recognizing a simple idea:

If something doesn't think like you, it won't behave the way you expect. And that doesn't necessarily make it wrong: it makes it different.

If you’re interested in how AI might evolve over the next few years, I recommend exploring https://ai-2027.com/. It’s not about precise predictions, but it raises important questions about where we’re heading.

Because beyond performance and capabilities, the real challenge is ahead of us:

How do we regulate, integrate, and coexist with systems that don’t think like we do?

That's a conversation we all need to be part of to ensure a brighter future for all of us.

Don't be misled:

I like AI, just not everything it does. Just like I like humans, but not everything they do.