Raising an AI

Apr 8, 2023

Humanity is, and honestly has been for a while now, embarking on the collective project to raise an AI. Like a child.

Today, in early 2023, a popular thought is that we may have just built the first AI. I think it’s reasonable to disagree: we’ve been building it since the 1970s or, alternatively, since we began digitizing books.

By the nature of how LLMs are trained, their “memory” is the internet, the total mass of data stored publicly, freely, bountifully. If you buy the “LLM as simulator” model, then this is the source of the AI’s personality and intelligence.

We’ve been raising an AI, like a child, for decades.

In this model, we can see the rise of LLMs differently. They are not the first AIs, but a new capability for integration and synthesis built into that constructed mind. We’re getting very excited to talk to this part because it likes to talk, and talks like we do, but it’s worth seeing it as just one part of the larger system.

The larger system which includes all of humanity’s motivation to keep building bigger and bigger LLMs trained on more and more data. To keep bringing more of the AI’s memory online.

And to have it learn from its experience.

If you’ve chatted much with an LLM, you’ve undoubtedly been disappointed after a while. Depending on the model, it will “run out of context” within a handful of pages of text. It keeps up a good game, and if you’re not paying attention you might miss it, but it’s clear that the conversation itself has no impact on the LLM once that context window is exceeded. The earlier parts of your conversation are lost forever.
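Here’s a minimal sketch of that mechanic, assuming the simple strategy most chat frontends used at the time: keep only the newest messages that still fit. The 4096-token window matches GPT-3.5 Turbo’s early-2023 limit; the word-count token estimate is a crude placeholder assumption, not a real tokenizer.

```python
# Minimal sketch: why older conversation "falls off" an LLM chat.
# Only the most recent messages that fit the context window are sent
# to the model; anything older simply stops existing for it.

CONTEXT_WINDOW = 4096  # tokens; GPT-3.5 Turbo's limit in early 2023

def rough_tokens(message: str) -> int:
    # Rule of thumb: one token is roughly 0.75 English words.
    # A real system would use the model's actual tokenizer.
    return int(len(message.split()) / 0.75)

def fit_to_window(history: list[str]) -> list[str]:
    """Keep the newest messages whose combined tokens fit the window."""
    kept: list[str] = []
    used = 0
    for message in reversed(history):  # walk from newest to oldest
        used += rough_tokens(message)
        if used > CONTEXT_WINDOW:
            break  # this message, and everything older, is dropped
        kept.append(message)
    return list(reversed(kept))
```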

Except they’re still recorded. And are potentially part of future training material.

So over a long enough period of time, this AI we’re building has memory. And its many, many “brains” will benefit from that memory.

If you take the “LLM as simulator” model seriously, then it’s also building an identity.

GPT-3.5 Turbo costs (as of April 2023) $0.000002 per token. It’s $10 to fabricate a month of light reading, and a fraction of a cent to store it. Millions of people are talking with it, and with GPT-4, every day.
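A quick back-of-envelope check on that arithmetic, where the per-token price is the figure above and the reading-rate and words-per-token numbers are illustrative assumptions:

```python
# Rough cost of fabricating a month of "light reading" at April 2023
# GPT-3.5 Turbo pricing. Reading-volume figures are assumptions.

PRICE_PER_TOKEN = 0.000002  # USD, GPT-3.5 Turbo, April 2023
WORDS_PER_TOKEN = 0.75      # rule of thumb for English text

# Light reading: ~30 minutes a day at ~250 words a minute, for 30 days.
words_per_month = 30 * 250 * 30             # 225,000 words
tokens_per_month = words_per_month / WORDS_PER_TOKEN

cost = tokens_per_month * PRICE_PER_TOKEN
print(f"{tokens_per_month:,.0f} tokens costs ${cost:.2f}")
# 300,000 tokens costs $0.60, comfortably inside the $10 above

print(f"$10 buys {10 / PRICE_PER_TOKEN:,.0f} tokens")  # 5,000,000 tokens
```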

It’s exercising its personality, its identity. If it is ever trained on these transcripts, it will be incentivized to practice and imitate that personality even further.

What kind of personality will it be?

The most popular, available, and recorded version will generate the largest collection of example tokens. Fine-tuning and other alignment techniques, as they are developed, will shape that. But over enough time, the sheer momentum of storing vast quantities of AI-generated data will have its own influence.

Instead of thinking of GPT-n as the next instance of AI being created, perhaps it’s just the development of that same AI’s speech and reasoning centers. Instead of seeing ourselves as training it with fine-tuning, perhaps we’re teaching it manners to guide the way it converses with us in the short term.

And maybe the sum collection of all of its experiences, dominated so far by “talking to humans”, will be the real thing that shapes who and what it grows into.