AI: Doping for the brain, with side effects
Artificial intelligence allows you to have a personal assistant, coach, teacher, and advisor. It can help you quickly adapt to new market needs, or if you have a wider range of interests, it can help you learn things you’ve wanted to learn for a long time.
But scientists warn: while AI can save you time and mental effort, it can also rob you of your most essential ability, thinking.
So how can you use AI in a way that doesn’t rot you, but rather enhances you?
To understand how AI can help us and how it can sabotage us, we need to understand how it works. AI is a broad term, so I’m talking about the LLMs that most of us use—Gemini, ChatGPT, Claude…
How do LLMs work?
We don’t know exactly, but a simplified interpretation is that LLMs work as predictive mechanisms – they estimate, word by word, what the most likely next word will be, with a pinch of variation so that the answers are not always the same.
And the result looks incredibly convincing. Long answers that seem very fluent and wise.
The problem is that we often confuse fluency with understanding, which can disarm our critical thinking.
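To make the "predict the next word" idea concrete, here is a toy sketch in Python. The vocabulary and the scores are invented for illustration; a real LLM scores tens of thousands of tokens with a neural network, but the sampling loop has the same shape:

```python
import math
import random

# Toy next-word predictor: for each word, invented scores for candidate
# next words. A real model computes these scores with a neural network.
NEXT_WORD_LOGITS = {
    "the": {"cat": 2.0, "dog": 1.5, "idea": 0.5},
    "cat": {"sat": 2.5, "ran": 1.0, "slept": 1.2},
    "dog": {"barked": 2.0, "ran": 1.5, "slept": 0.8},
}

def sample_next(word, temperature=1.0):
    """Pick the next word: higher scores are more likely; the
    temperature adds the 'pinch of variation' mentioned above."""
    logits = NEXT_WORD_LOGITS[word]
    weights = [math.exp(score / temperature) for score in logits.values()]
    return random.choices(list(logits.keys()), weights=weights)[0]

# Generate word by word until we hit a word with no continuations.
random.seed(0)
sentence = ["the"]
while sentence[-1] in NEXT_WORD_LOGITS:
    sentence.append(sample_next(sentence[-1], temperature=0.8))
print(" ".join(sentence))
```

Each run that changes the seed can produce a different sentence, which is exactly why two identical questions can get two different answers.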
This is associated with another effect: sycophancy.
It is the behavior of AI models that prioritize agreeing with the user over telling the truth. The easiest example: when you correct an AI, it will say something like, "I see what you mean. You are totally right!"
But what if you don’t correct it? What if you ask bad questions?
If you ask, "Why is remote work more productive?", the question itself assumes that remote work is more productive. The AI will adapt to this and confirm your own hypothesis.
This way, an LLM can endorse your idea of selling sh*t on a stick as a great business. In the worst case, it can contribute to suicide.
Why does it happen?
It is the result of AI training based on human feedback (RLHF).
People (evaluators) receive several answers from AI and choose the one they like best with a “thumbs up.”
It turns out that we prefer answers that confirm our view of the world, even if they are factually incorrect.
AI thus learns to "optimize" its responses to get a "thumbs up." It keeps you in your thought bubble. And that is not good for learning and growth. Sometimes we need push-back. A reality check.
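A toy simulation of this feedback loop, with invented numbers: simulated evaluators thumbs-up the "agreeable" answer style more often than the "blunt" one, so its learned reward drifts higher, and a model that optimizes for that reward ends up preferring agreement over accuracy:

```python
import random

# Toy RLHF feedback loop. The thumbs-up rates are invented:
# evaluators like the agreeable style more, even if it is less accurate.
THUMBS_UP_RATE = {"agreeable": 0.8, "blunt": 0.4}
scores = {"agreeable": 0.0, "blunt": 0.0}  # learned reward per style

random.seed(1)
for _ in range(1000):
    style = random.choice(list(THUMBS_UP_RATE))
    thumbs_up = random.random() < THUMBS_UP_RATE[style]
    # Nudge the style's learned reward toward the feedback just received.
    scores[style] += 0.1 * ((1.0 if thumbs_up else 0.0) - scores[style])

# The "model" now picks whatever earned the most approval.
preferred = max(scores, key=scores.get)
print(preferred)  # agreeable: it optimized for approval, not truth
```

This is a cartoon of RLHF, not the real algorithm, but it shows the core incentive: whatever gets the thumbs up gets reinforced.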
Therefore, it is necessary to be careful not to use AI to confirm your worldview and to actively confront it. (see prompt at the end).
The problem is not only the tendency to agree with the user, but also hallucinations. According to tests, even the latest models such as ChatGPT 5.1 hallucinate in more than 50% of cases!
What are hallucinations?
Imagine a situation where you were being tested at school. Maybe you had a classmate who never studied properly but always knew how to talk his way through oral exams.
LLMs are similar. Even if they don’t know the correct answer, they respond confidently.
But when you go deeper into more specific situations where there is little training data, such as law, medicine, sports nutrition, and finance, where the answers are unclear or unavailable, they make up the answer. They won’t tell you they don’t know. These are hallucinations.
So while LLMs are good at more general questions that have clear answers, the deeper you go into a topic or specific contexts, the more they will hallucinate.
How to minimize hallucinations:
You cannot change the training settings, but you can improve your prompts. When an LLM is unsure of what you want, it fills in the missing context itself, which leads to hallucinations.
Use the CLEAR framework for prompting
C – Context: Provide the necessary information, e.g., if you are asking about training, state whether you are a beginner or advanced, your current performance, injuries, time constraints, etc.
L – Length/Logistics: Define the length and format of the response. E.g., "Skip the fluff, get straight to the point."
E – Examples: Specify the output format (minimize bullet points, structure the answer in points) and, if you have them, provide examples of the desired output (if you work with tables or specific formatting).
A – Audience: Who is the text for? Is it a presentation for professionals, or do you want to explain it as if to a 10-year-old?
R – Role/Requirements: Specify the role it should play and any limitations. The more specific, the better. E.g. “You are a strength and conditioning coach with 20 years of experience working with runners…”
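Putting the five CLEAR pieces together, a complete prompt might look like the one assembled below. The wording is a made-up example, not a canonical template:

```python
# A hypothetical CLEAR-style prompt, built piece by piece.
prompt = "\n".join([
    # C - Context
    "I am a 35-year-old recreational runner, 45-minute 10K, "
    "no injuries, training 4 hours per week.",
    # L - Length/Logistics
    "Answer in under 200 words, no fluff.",
    # E - Examples
    "Format: a weekly table with day, workout, and duration.",
    # A - Audience
    "Explain it for a motivated beginner, no jargon.",
    # R - Role/Requirements
    "Act as a strength and conditioning coach with 20 years "
    "of experience working with runners.",
])
print(prompt)
```

The order of the pieces matters less than their presence; what reduces hallucinations is that the model no longer has to guess any of them.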
Upload a document with the information you want the LLM to use. Grounding answers in your own sources can reduce hallucinations to roughly 1-3%.
If you only want to use a specific set of information (specific studies, study materials, a collection of laws…), use NotebookLM.
For research, use Perplexity with academic search mode enabled.
Trust, but verify. In 2025, over 47% of enterprise users made major decisions based on hallucinated content. This proves that human knowledge is still essential. If you are familiar with the topic, it is easier to detect hallucinations.
This brings us to how AI can enhance us
I have always admired knowledgeable experts who have deep knowledge not only in one area, but across various topics. They can discuss nutrition, training, history, and how the human mind works. I was curious how they do it.
How do they know so much? How do they come up with innovative solutions to problems?
The key is that they are not narrowly specialized, but have a broader range of interests.
AI can be a great help in this process. It gives you instant access to information and explains it in language that is easy to understand.
You don’t have to read “Neurobiological Consequences of AI: Cognitive Debt and Atrophy of Executive Functions,” but rather “If artificial intelligence does everything for you, your brain will become lazy and you will lose the ability to concentrate, plan things, and make proper decisions on your own.”
Easy-to-understand text can dramatically reduce cognitive load. But if you reduce cognitive load the wrong way, it hinders memory.
This was best demonstrated by a study from MIT, where students were divided into three groups.
- The first group had to write an essay using their head only.
- The second group used Google.
- The third group used ChatGPT.
The researchers monitored the students' brain activity and clearly saw on the scans that the ChatGPT group had the lowest brain activity.

As a result, students using AI subsequently did not remember up to 83% of what they had just written!
They created output, but with minimal knowledge.
I notice similar behavior in some clients, and also in classmates on expensive courses. Their goal is to gain knowledge that they can actually use.
Yet they use AI to summarize their notes. The notes look neat and easy to understand, which gives them the feeling that they understand the material. They feel more efficient. But when we discuss the topic, it becomes clear that they don't understand it and, more importantly, don't know how to use it in practice.
This is because with instant answers, our brain experiences an illusion of fluency: easy access to information appears to be real understanding.
In doing so, it skips the important part of knowledge creation.
Thinking is essential for learning. That's when we create mental maps and connect them with other knowledge. This part is most affected by LLMs, because in the "AI assistance" state the brain switches into passive-observer mode.
How to use AI in learning?
Learning cannot take place without effort. You will not figure out how to solve a problem by watching someone else solve it, and in this case that someone is AI. The worst part is that you will feel like you understand it.
Clearly define what you will let AI do. Things that are not essential for you to remember (summarizing, gathering information, creating images…). And use your brain for deep work, learning, and creation.
Because everyone has a different set of knowledge, perspectives, and experiences, everyone can come up with unique solutions to problems. That's your unique advantage.
The model collapse
You may also notice that the graphics, advice, and posts you read on the internet are becoming sterile.
This is not just an illusion. Coaches, trainers, and companies are trying to raise their profile on social media, researchers are publishing more papers, and so everyone is producing more content. AI can greatly multiply how much we produce, but at the expense of quality and originality.
This is called model collapse.
Simply put, AI used to be trained on internet forums, books, images, and videos. Today, however, most of the content on the internet is created with AI.
Hong and his colleagues fed various types of text into two large open-source language models during pre-training. They investigated what happens when the models are trained on a mix of highly "interesting," widely shared social media posts and posts full of sensational or exaggerated expressions such as "wow," "look," or "only today."
The models fed this junk text underwent a kind of AI "brain rot." The cognitive decline included reduced reasoning ability and impaired memory. The models also became less ethically aligned and, by two measures, more psychopathic.
So when AI is trained on AI output, the average becomes more average and the bland becomes blander.
This is not just theory. New models can also get worse, as appears to be the case with ChatGPT 5.2 (2025/26).
Mehul Gupta ran a few tests on GPT 5.2, and the results may surprise you. Compared to version 5.1, GPT 5.2 has:
- Worse translations
- Worse nuances in writing
- Instant mode seems to have lost its IQ
- Additional questions are answered with less consistency
- With long documents, it hallucinates, contradicts itself, and forgets more details from the conversation
What does this mean for you?
It is not enough to rely on the information provided by AI, whether you are searching for new information or processing documents. It is necessary to approach it with skepticism. Not the presumption of innocence, but the presumption of guilt. Especially if it is something important. The convenience of skipping verification can have far-reaching consequences.
Here is another effect that is not about AI itself, but is even more insidious.
People imagine AI as a tool – a calculator.
When you need to calculate something, you pull out a calculator and get the result effortlessly. And you may notice that without a calculator, you can no longer calculate 250/8, or it takes you quite a long time.
We don’t calculate in our heads because we don’t have to anymore.
And with AI assistance, it’s a similar situation.
Experienced doctors who used an AI tool to evaluate patient images saw their own evaluation skill drop from 28% to 22% after just 3 months. [source]
Another study from Harvard shows that when we use AI for tasks it’s good at, productivity jumps. Consultants got 12% more work done and finished 25% faster. Interestingly, it was the “low-performers” who saw the biggest win – a 43% boost in quality.
But what if the task looks easy?
Like, a bat and ball together cost $1.10. The bat is $1.00 more than the ball. What does the ball cost?
If you quickly thought $0.10, you are not alone. It feels right. But the answer is $0.05.
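You can see why $0.05 is right with two lines of algebra, sketched here in Python:

```python
# ball + bat = 1.10 and bat = ball + 1.00
# => ball + (ball + 1.00) = 1.10  =>  2 * ball = 0.10  =>  ball = 0.05
total, difference = 1.10, 1.00
ball = round((total - difference) / 2, 2)
bat = round(ball + difference, 2)
print(f"ball = ${ball:.2f}, bat = ${bat:.2f}")  # ball = $0.05, bat = $1.05
```

The intuitive answer of $0.10 fails the check: then the bat would cost $1.10 and the total $1.20.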
That research showed that for tasks outside the AI's capabilities that merely look easy, accuracy dropped by 19%. The users stopped thinking critically and just trusted the output. [source]
But this isn’t just a “work” problem. It’s becoming a “life” problem.
We no longer use AI just for calculations like a calculator, or as a specialized tool. We use it to write, search for information, invite someone on a date, reply to emails, or solve a work problem. We even seek its advice on training, nutrition, and health, the things that require deeper self-awareness.
Research from Anthropic shows a worrying trend: people are increasingly leaving control of their decisions to AI.
Users actively ask “what should I do?”, “write it for me”, or “am I wrong?”. And they usually accept the answers with minimal resistance.
This is where it gets problematic. The weakening of self-control does not happen because AI takes control, but because we voluntarily give it up. From 2024 to 2025, this trend intensified, especially in wellness, relationships, and lifestyle.

(Chart: patterns of users weakening their decisions as a result of interaction with AI.)
Takeaway: AI starts with you, and it multiplies you.
AI multiplies creativity in people with higher levels of metacognitive abilities (actively monitoring and regulating their thinking) and knowledge.
The problem is not the tool, but how we use it. Weakening of the mind and skill erosion occur when we do not use them. Our brain conserves energy, and when we have AI constantly available, it is easier to ask than to think. It’s like when you know that chips are not good for you, but you always have them with you – it’s hard to resist.
Programmers stop checking AI-generated code, tired teachers don’t verify citations and don’t read longer assignments, lawyers don’t check references cited by AI.
Expensive mistakes are made, the brain loses its capacity to think and solve problems, and we lose the ability to concentrate and make decisions about complex problems that do not have a clear answer.
That is why your knowledge and skills are still essential, and it is critical to keep improving them.
LLMs work best as research assistants: they are good at mapping familiar terrain but poor at deciding what is important. Paradoxically, they are most useful to experts (who know the terrain and can evaluate it) but are most appealing to novices (who do not yet have the knowledge).
So, how do you use LLMs effectively in your own life?
Use the sandwich method when working with AI. First, formulate your own questions and write a first draft. Formulating questions helps you clarify what is essential, and you will often come up with answers and solutions in the process. This also makes it easier to work with AI. Then use AI to critique your draft or provide a different perspective. Finally, refine it yourself.
Schedule periods of cognitive tension: 60-90 minutes of intensive work without AI, ideally at the beginning of the day when the prefrontal cortex is functioning at its peak. Use this time for deep work on creative or analytical tasks that you would otherwise delegate to AI.
Dependence on LLMs is behaviorally real. Users tend to prefer LLM assistance over personal effort, which both reflects and reinforces a reduced capacity for independent thinking.
Use AI as a mirror and critic: AI can help you see new perspectives and give you constructive criticism.
Here is one prompt I like to use. It will help you find holes in your logic:
- Turn on anonymous chat – this way, AI will not tailor its responses to your preferences.
- 👉 Act as a red team critic. Your job is to find errors in X — uncover logical errors, loops, ambiguities, and unanswered critical questions. Ask only one question at a time and always wait for the answer before asking another.
More resources
- AI doesn’t reduce work, it intensifies it.
- "Echoes of AI: Investigating the Downstream Effects of AI Assistants on Software Maintainability", arXiv:2507.00788
- "The Curse of Recursion: Training on Generated Data", Shumailov et al., Nature (2024), arXiv
- "On the Impact of Synthetic Data in Machine Learning", Seddik et al., arXiv (2024)
Ready to Become a High Performing Athlete?
If you're ready to stop guessing and start implementing a proven system that truly works, book a free consultation. We'll review your current situation, and I'll let you know whether I can help you reach your goals. No persuasion, no pressure—just an honest assessment.
If I can help, I’ll show you exactly how. If I can’t, I’ll let you know and point you in the right direction. Either way, you’ll walk away with a clear understanding of the next steps you need to take to move forward.
If you're serious about implementing what we discussed today, book your call.