Paul Sava


on writing is thinking

Thu, 24 July 2025

The Red Ink Experience

Back when I was writing my Bachelor's thesis, I hated how everything sounded. As JP put it, it was "too emotional", not exactly the adjective you would want associated with your scientific paper.

Until then, all my points of reference for scientific writing came from articles in "Știință & Tehnică" ("Science & Technology: everything about science, technology, and inventions!"), one of my favourite Romanian magazines. These weren't exactly academic writing, but for me they counted as such.

In any case, my point is that it was very hard for me to write in this new, short-sentences-so-that-everyone-can-understand-NOT-INTERPRET-WHAT-YOU-WANTED-TO-SAY style.

My first draft, which of course had to be saved with double spacing so that JP could write comments between the lines on his reMarkable tablet (because that's who JP is) instead of just adding comments to the shared Overleaf project, was more red than black.

I still can't get over the "too emotional" written next to what appeared to be an almost entirely red page, with all but a few lines crossed out completely in red ink.

This whole experience forced me to adapt. I basically had to relearn how to write academically.

Enter the Post-ChatGPT Era

When I had to write my Master's thesis, this was no longer an issue. I already knew what was expected of me. However, a new problem had appeared: the post-ChatGPT era was already here.

Nowadays, it's easy to write. You just have an idea and you tell ChatGPT or Claude or DeepSeek or Qwen or Gemini to "make this sound nice. correct all the mistakes". You don't even need to spell the words right; the LLM already knows what you wanted to say.

And most of the time, it produces texts that are probably the best things you have ever seen. You read them and think, "okay wow, even if I tried, I could not have written it so well". Or maybe you don't, in which case, WHO ARE YOU?

Or maybe you feel bad for delegating the complete writing task to the AI, so you scribble down some thoughts and instruct it to put everything together in a nice way. And correct all the mistakes. And make it read nicely and flow well. This kind of makes you feel better, because you tell yourself, "okay, you know what, it did write the stuff for me, but I came up with the ideas".

Which might be true, but from my experience, it's probably not entirely true. Chances are that in order to come up with those initial thoughts, you already had 2-3 chats with ChatGPT, brainstorming the idea, asking questions, formalizing the concept, and so on.

I am already rambling here, but the main thing I am trying to say is that once you start using LLMs for writing, and once you see how well they perform, you have probably already started debating ideas with them too. You probably use them for debugging your code. Maybe you don't even google anymore; you just ask the LLM to do it for you.

Once you start using LLMs, it becomes frictionless to use them for basically everything. And at that point, you have delegated so many of the tasks that used to keep your brain active that I believe the LLM has actually been doing most of the active thinking for you.

I don't really know how to explain this, but you stop thinking about stuff. I studied computer science. I write or work with code for most of my week. And at some point, I came to realize that when my code was not working, it had become easier for me to copy-paste the code into ChatGPT and ask "what's this", "solve this", "solve".

I didn't even think of trying to figure out what was wrong. I just delegated this to the LLM.

If I had had Claude clean up my first draft of the Bachelor's thesis, I would probably have ended up with a better thesis, one that would no longer have more red ink than black. But, at the same time, I would have learned less about academic discourse. More than that, it probably would not have felt like me anymore. It would have had neat academic prose and proper transitions and would have sounded authoritative, but it would not have been mine.

The Nature of Cognitive Offloading

This blog post came about after I read this editorial in Nature (https://www.nature.com/articles/s44222-025-00323-4) on the value of human-generated scientific writing in the post-GPT era.

The authors fear that we are indeed delegating more and more writing to LLMs, and that this not only changes the way we communicate and read things on the internet (like how we now get a feeling for whether an article was written by AI based on whether it uses "delve"), but also how we think.

In my opinion, we are in the middle of something we don't really understand, let's call it cognitive offloading, and we do not seem capable of recognizing what we are actually giving up.

Building Mental Models Through Struggle

I studied computer science. I work in ML & IT Security. For the better part of my week, I write code or work with code. And at some point, I realized that I had become so dependent on coding assistants that whenever I had to debug something in my code, I would just copy-paste it into ChatGPT and ask "what's this".

I would not even think about trying to figure out what went wrong. That thought did not even appear in my head. But luckily, I recognized this behavior and I am working on fixing it. Not because using coding assistants does not work; they probably work better and much more efficiently than I do.

But the fact is, manual debugging forces you to build causal models. If the error is here, the root cause is probably there. You start to learn various patterns and understand failure modes better.
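
To make the "error here, root cause there" point concrete, here is a tiny hypothetical Python sketch (the function names and the config format are made up purely for illustration): the traceback points at the formatting line, but the actual bug is the parser that silently returns None a step earlier.

```python
def parse_port(config_line):
    # Bug: silently returns None when "port=" is missing, instead of failing loudly
    if "port=" in config_line:
        return int(config_line.split("port=")[1])
    return None

def format_address(host, port):
    # The traceback points here ("unsupported format string passed to NoneType"),
    # but this line is only where the earlier mistake finally surfaces.
    return f"{host}:{port:d}"

if __name__ == "__main__":
    port = parse_port("host=localhost")        # quietly yields None
    print(format_address("localhost", port))   # crash shows up one step later
```

A quick patch would guard the f-string against None; tracing it yourself leads you back to the parser. That chain from symptom to cause is exactly the kind of causal model the struggle builds.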

When you outsource this to an AI, you get the fix but miss the learning. Just like in the case of my Bachelor's thesis, except luckily for me, there was no ChatGPT back then.

It's the difference between knowing that 4*3 is 12 because you memorized it and understanding that 4*3 is 4 + 4 + 4, i.e., that multiplication is just iterated addition. Both get you the answer, but only one builds mathematical reasoning.
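
If you want the "iterated addition" view spelled out in code, here is a toy Python sketch (purely illustrative, not anything from the post):

```python
def multiply(a, b):
    # Multiplication as iterated addition: add `a` to itself `b` times.
    result = 0
    for _ in range(b):
        result += a
    return result

# 4 * 3 really is 4 + 4 + 4
assert multiply(4, 3) == 4 + 4 + 4 == 12
```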

And this is very important, because expertise is not about knowing facts, it's about having the right mental models to generate insights.

And since I have recently started playing chess more often, I will try to make an analogy with chess grandmasters. When a GM looks at a board, they probably don't see individual pieces; they see patterns, threats, and developments, all of which emerge after years and years of practice. That practice was never about memorizing moves, which happens as a side effect, but about building the cognitive architecture that makes pattern recognition automatic.

The Paradox of Productive Struggle

This is why I believe that we are now finding ourselves in a very difficult position. Let's call it "The Paradox of Productive Struggle".

Modern AI tools are so damn good that they eliminate productive struggle without us even realizing it. In theory, learning is always a friction-based process. It happens when we are at the edge of our abilities, outside of our comfort zone, when we are working hard enough to make mistakes but not so hard that we are overwhelmed. This is the zone where our neural pathways become stronger and mental models start to form. If you do this long enough, you end up getting better at the task.

But LLMs are frictionless. They give you the answer on a platter before you have even had time to struggle with the question. From a user perspective, this is fantastic. I can fix my bike without even understanding what I am actually doing; I just send pics of the broken part and follow what ChatGPT tells me. If that sounded too specific, don't ask me how I know this.

From a cognitive perspective, this is bad.

The Calculator Analogy

I am not arguing here that we should return to stone tools and making fire by rubbing sticks. Calculators are also cognitive prosthetics, right? And they did transform how we deal with math problems, but in a very specific way.

Yeah, I can compute the square root of 3434.4 in a second, but that was not really the point. With a calculator, I can focus on the conceptual reasoning, while it executes the procedures I (hopefully) already understand.

The problem with LLMs is that they are general enough to handle both cases, the routine procedures and the conceptual work itself.

This creates a spectrum of offloading, or delegation, from the clearly beneficial, such as:

  • Translating foreign languages
  • Grammar checks

to the arguably harmful, like:

  • Having the LLM write your thesis

And in between, there is an enormous gray area where the boundaries are not that obvious.

What's the Solution?

So what's the solution? I really don't know.

It's not abstinence; that's never a good answer. The productivity gains from AI assistance are way too significant, and the competitive pressures are too strong.

Instead, we must become more intentional about which cognitive processes we preserve.

However, "delegate the execution, preserve the reasoning" is easier said than done. Take writing code: you might have a clear idea of what you are trying to build, but if you delegate the writing entirely to an LLM, you end up with a system you don't really know, and should a problem or a bug occur, good luck finding its source.

But perhaps most importantly, we need to get better at recognizing when we are crossing the line from tool use to tool dependence.

The Real Issue: Wrong Metrics

This brings me to what I think is the real issue.

We are optimizing for the wrong metrics. Efficiency and correctness are easier to measure, but they are not, and must not become, the only things that matter.

The process of struggling with a problem, making mistakes, and figuring things out is also valuable, not just for the solution it produces, but for the cognitive changes it triggers.

If we lose the ability to think through complex problems independently, we don't just become dependent on AI, we become less capable of the creative, integrative reasoning that the AIs still can't replicate. We risk creating a world where humans become supervisors of AI systems they no longer truly understand.

The Ironic Trap

And this is ironic, right?

The very cognitive capabilities that let us build these remarkable AI systems might be the ones we are actively undermining by using them.

Which means we are in danger of creating tools that make us less capable of creating the next generation of tools.

So maybe the question isn't whether AI will replace humans, but whether we'll accidentally automate away the cognitive processes that make human intelligence worth preserving in the first place.


