
Sentenced to Paradise

I remember the certainty I felt, sitting on those university benches just a few years ago. AI was abstract, academic. A clever way to optimize functions. Yes, these techniques could produce stunning results on narrow problems, the way Lagrange interpolation predicts a curve. But it felt nothing like real intelligence. Nothing like our intelligence.

And yet, in just a few months, everything changed.

First Contact

The first time I caught a glimpse of this alien mind was with GitHub Copilot, around 2021. This auto-complete tool was far more powerful than anything I’d ever seen before. However, I was still a long way from understanding the scale of what was coming. This was a kind of programming sorcery, something that seemed to know the solutions to a lot of problems and how to apply them to situations that were very close to its training data. Nothing more. It was impressive, but far from being truly useful.

In March 2022, things accelerated. OpenAI announced GPT-3.5, which was only accessible via an API, and I started playing with it. But in November, ChatGPT arrived… At that point, it became more than just a gadget. It was a tool I started using regularly, for drafting emails or writing simple but tedious bits of code.

The adoption rate was staggering. In a few weeks, every student around me started using it to do their homework. A few months later, GPT-4 was released, and that’s when it became indispensable. Practically all of my code was, at the very least, AI-assisted.

This model was so good at playing human that I started to think, without really worrying about it, that with the right tools it could actually take on the role of one. I discovered LangChain, started prototyping an assistant… and it worked.


It works by starting with a prompt that tells the model its role is to help me using a list of tools at its disposal. When it says "I want to use this tool," it is interrupted, the tool's output is automatically appended to the conversation, and it resumes its completion.
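The loop above can be sketched in a few lines of plain Python. This is a minimal illustration of the pattern, not the actual assistant: the `calculator` tool, the `TOOL:` syntax, and the `fake_model` are all hypothetical stand-ins (a real setup would call an LLM API and use a framework like LangChain).

```python
# Minimal sketch of a tool-use agent loop: the model is prompted with its
# available tools, and whenever it asks for one, it is interrupted, the
# tool's output is appended, and the model resumes from there.
import re

def calculator(expression: str) -> str:
    # Hypothetical tool: evaluate a simple arithmetic expression.
    return str(eval(expression, {"__builtins__": {}}))

TOOLS = {"calculator": calculator}

SYSTEM_PROMPT = (
    "You are an assistant. To use a tool, reply exactly with:\n"
    "TOOL: <name>(<input>)\n"
    "Available tools: calculator\n"
)

def run_agent(model, question: str, max_steps: int = 5) -> str:
    transcript = SYSTEM_PROMPT + f"User: {question}\n"
    for _ in range(max_steps):
        completion = model(transcript)
        match = re.match(r"TOOL: (\w+)\((.*)\)", completion)
        if not match:
            return completion  # no tool requested: this is the final answer
        name, arg = match.groups()
        # Interrupt the model, run the tool, append its output, resume.
        result = TOOLS[name](arg)
        transcript += f"{completion}\nObservation: {result}\n"
    return "step limit reached"

# A fake model standing in for the LLM, just to exercise the loop:
def fake_model(transcript: str) -> str:
    if "Observation:" not in transcript:
        return "TOOL: calculator(6*7)"
    return "The answer is 42."

print(run_agent(fake_model, "What is 6*7?"))  # → The answer is 42.
```

The key design point is that the model never executes anything itself: the surrounding program detects the tool request, runs the tool, and feeds the result back as context, so from the model's perspective the observation simply appears in its transcript.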

The Replacement

A year later, we parted ways with a developer. With the arrival of new tools like Cursor that made better use of LLMs as a "raw material," his role had lost its meaning, above all for him. His strength was writing code quickly, but he lacked the patience for the finer details… just like GPT, actually. His job had morphed into acting as their human interface.

Eighteen months later, our team has gone from 6 people to 2. I barely write code anymore. For a long time, my job was to describe, precisely, the approach to take for adding a new feature or designing a piece of software for our developers. Now, I ask the agents that have taken their place directly. And they are slowly starting to take mine, too, requiring less and less direction.

I’ve believed for a while that AGI, in the sense of an AI that can perform as well as an average human on everyday cognitive tasks, is here. But now, I think no one is safe anymore.

Given that progress remains staggering, even if it's no longer exponential, I find it reasonable to expect that within my lifetime, and truthfully maybe in just a few years, I will become completely useless.

That every task I accomplish will just be a clunkier, more resource-intensive version of what our machines could have done.

It’s a shame. I really like the feeling of being useful, even more than the idea of leaving a mark, out of some selfish desire. But it’s not the end of the world. I suppose there are worse curses than having no other problems in life except for the lack of them.

So here’s a reason to work. Not for the grind, but because the window to feel useful is closing. Fast.