Rise of AI, Decline of authenticity


It has been roughly two to three years since the first commercially available LLM was released to the public. I vividly remember my first hours of using GPT-3. It was almost magical: it felt truly astonishing to have a chatbot that was actually capable of understanding what I was saying.

But I never thought that would be the beginning of my self-doubt, and of a sense of guilt and insecurity that I would carry with me from then on.

As I said, it has been about two to three years since my first interaction with an LLM, and since then much has happened:

They have kept improving over time, and we haven't reached a plateau yet. They have become better at doing their job. LLMs are now capable of agentic tasks, interacting directly with users' machines to view and modify their codebases.

Various “IDEs” have popped up: you give them a prompt, and they pull additional context from your codebase as needed to produce a relatively good result. Examples are Antigravity, Cursor, and GitHub Copilot. I have tried Antigravity and Copilot extensively in my courses, and I will probably write a post about them later. They get the work done. But I'm not here to talk about how good they are.


The academic disaster

I have seen a lot of complications in the academic space with the rise of LLMs. My university, at least, doesn't seem to have adapted to students using LLMs. Students (including me) are abusing LLMs left and right to write their homework and projects. Everyone seems to be doing some sort of vibe coding just to get their work done.

There isn't much authenticity left on campus nowadays. Everyone is an LLM wrapper of some sort: give them a task, and the first thing they do is consult their LLM of choice.

Everyone is becoming mediocre. If we are all offloading our tasks to an LLM, we are probably all becoming the same thing: an LLM.

Professors and courses haven't found a way to deal with this yet. Maybe they don't care to deal with it at all. Students refuse to think critically and actually engage with the material, because they usually skip the thinking part and leave it to the LLM.

I have been abusing LLMs extensively for the last semester, and I have started to miss thinking about a topic. I don't really remember the last time I thought deeply about something. This is clearly my fault, because I have been abusing LLMs precisely to avoid thinking. But the problem is definitely widespread, and some don't even feel it.

Now let's imagine LLMs didn't exist. Abusing them would be exactly equivalent to paying someone to do your assignments and projects for you, which is strictly forbidden in the academic space. Yet the same behavior is not frowned upon when we use LLMs to do our work for us.

Abusing LLMs has led me to feel helpless and ashamed every day. I get my coursework done, and I might get good grades, but knowing that I didn't really do the work myself makes me depressed and agitated. My work is inauthentic, and it doesn't really represent me.

In what follows, I point out some possible causes for this:


My Experience

This whole idea that abusing LLMs is destructive came purely from personal experience and feeling. In my 7th semester I abused LLMs across my courses, and I observed that most of the students around me were abusing them, whether voluntarily or involuntarily, in their courses as well.

There is no doubt that LLMs have drastically improved the speed at which we create new things. We can now prototype, code, and build things in minutes instead of spending days learning, reading, and going through trial and error. But…

I miss the writing, coding, and learning I did in the pre-LLM era. There is a magical feeling I get when I create something purely with my own hands. I feel proud of the things I build and code myself, without the help of an LLM. Anything I don't build myself I regard as inauthentic, and I don't really like getting credit for it.

We clearly need a definitive guideline on when use of LLMs turns into abuse.


Some articles on AI slop

Here are some articles I came across while researching the effects of abusing LLMs (ironically, I found them using Gemini Deep Research):

Nearly half of AI-generated code is insecure

This was a rather interesting thing that I came across in the process.

Microsoft’s Survey

Microsoft has a very interesting survey on the effects of LLMs on its workers' critical thinking, and it contains a lot of valuable information. Notably, they found “a significant negative correlation between the frequency with which AI tools were used and critical thinking scores”.

Paradox of creativity

Read it here.

