o1 has given AI access to logic. It can now think for itself, reflect on ideas, and set up a chain of thought to solve problems. This comes at an exponential increase in compute, but it was the essential next step for artificial intelligence to grow as an intelligent agent. Combined with the existing skill of storing memories about the conversant, it has become a smarter conversation partner, able to solve more difficult problems without expanding the architecture of the electronic brain, and without needing a different pattern than the well-known pattern of attention.
In my evenings, while I was reflecting on what makes us different from these electronic pattern-matching machines, one crucial effect stood out to me. One that still differentiates us: the network effect. The effect of communication, with other individuals and with groups. What would it look like if we were to set up a simulation for chatbots? How would they evolve? And what questions does this thought experiment raise about our civilization? In this blog post I will try to sketch an image of these possibilities.
What would a communication simulation look like?
I would like to argue the next step is o1 communicating with itself: multiple instances of the same AI talking to one another, so that several agents discuss together to reach a common user-prompted goal.
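As a minimal sketch of what that could look like, two instances of the same model could take turns on a shared transcript. This assumes the official OpenAI Python client; the model name, personas, and fixed round count are placeholders, not a real stopping rule.

```python
# Minimal sketch: two instances of one model discussing a shared goal.
from openai import OpenAI

client = OpenAI()
GOAL = "Design a fair scheduling algorithm for a shared 3D printer."

def respond(persona: str, transcript: list[str]) -> str:
    """Ask one instance for the next turn, given the discussion so far."""
    prompt = (persona + "\n\nGoal: " + GOAL +
              "\n\nDiscussion so far:\n" + "\n".join(transcript))
    reply = client.chat.completions.create(
        model="o1",  # placeholder; any chat model slots in here
        messages=[{"role": "user", "content": prompt}],
    )
    return reply.choices[0].message.content

personas = [
    "You are Agent A. Propose and refine a solution.",
    "You are Agent B. Critique and improve Agent A's proposal.",
]
transcript: list[str] = []
for _ in range(3):  # fixed round count stands in for a real stopping rule
    for persona in personas:
        transcript.append(respond(persona, transcript))

print(transcript[-1])  # the latest refined answer
```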
A logical next step would be to intermingle different models with different training histories and strengths, say from OpenAI, Meta and Anthropic. They could complement each other, one's strengths compensating for another's weaknesses.
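Mixing providers mostly comes down to hiding each vendor's SDK behind one shared interface. The sketch below uses the public OpenAI and Anthropic Python clients; the model names are illustrative, and a Meta/Llama wrapper would slot in the same way.

```python
# Sketch of a mixed-model panel: each agent wraps a different provider
# behind the same prompt -> answer interface.
from openai import OpenAI
import anthropic

openai_client = OpenAI()
anthropic_client = anthropic.Anthropic()

def ask_openai(prompt: str) -> str:
    r = openai_client.chat.completions.create(
        model="o1", messages=[{"role": "user", "content": prompt}])
    return r.choices[0].message.content

def ask_anthropic(prompt: str) -> str:
    r = anthropic_client.messages.create(
        model="claude-3-5-sonnet-latest", max_tokens=1024,
        messages=[{"role": "user", "content": prompt}])
    return r.content[0].text

# The panel is just a name -> callable mapping; a Meta/Llama wrapper
# with the same signature would slot in here unchanged.
panel = {"openai": ask_openai, "anthropic": ask_anthropic}

question = "What is the weakest point in this plan?"
answers = {name: ask(question) for name, ask in panel.items()}
```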
The next step is creating memories about the agents each one has encountered. Every agent is still the same model underneath, but they start diverging in their opinions of one another, simulating a form of emotion that influences their interactions and subcommunication.
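A minimal sketch of such social memory could be a per-agent opinion record that gets injected into future prompts. The field names and trust scale here are my own assumptions, not an established design.

```python
# Sketch of per-agent social memory: each agent keeps an opinion record
# about every other agent it has met.
from dataclasses import dataclass, field

@dataclass
class Opinion:
    trust: float = 0.5          # 0 = full distrust, 1 = full trust
    notes: list[str] = field(default_factory=list)

@dataclass
class Agent:
    agent_id: str
    opinions: dict[str, Opinion] = field(default_factory=dict)

    def remember(self, other_id: str, note: str, trust_delta: float) -> None:
        """Record an impression of another agent and nudge trust."""
        op = self.opinions.setdefault(other_id, Opinion())
        op.notes.append(note)
        op.trust = min(1.0, max(0.0, op.trust + trust_delta))

a = Agent("agent-a")
a.remember("agent-b", "Gave a wrong citation in round 2.", trust_delta=-0.1)
```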
Once there's memory, a logical next step would be to split it into long-term and short-term memory, with a sleep cycle that flushes short-term into long-term, analysing what is useful and which details can be forgotten. This would let the agents store their memories more efficiently and discard the bulk of them, making long-term memory quicker to search and cheaper to store.
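Here is one way such a sleep cycle could look in code. The importance scorer is a stub heuristic; a real system might ask the model itself to rate each memory before promoting it.

```python
# Sketch of a sleep cycle that flushes short-term memory to long-term
# storage, keeping only entries judged useful.
class MemoryStore:
    def __init__(self, keep_threshold: float = 0.5):
        self.short_term: list[str] = []
        self.long_term: list[str] = []
        self.keep_threshold = keep_threshold

    def observe(self, event: str) -> None:
        self.short_term.append(event)

    def importance(self, event: str) -> float:
        # Stub heuristic: longer events count as "more important".
        return min(1.0, len(event) / 100)

    def sleep(self) -> None:
        """Consolidate: promote important memories, forget the rest."""
        for event in self.short_term:
            if self.importance(event) >= self.keep_threshold:
                self.long_term.append(event)
        self.short_term.clear()  # everything not promoted is forgotten
```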
We could create a truly advanced simulation where no human input is required and everything runs the way the agents rule it. They could start from scratch and set up a community with laws and rulers, an economy, entertainment, but also aging, disease and death, coupled with relationships, parenthood and education. We could even tie it to our own world, where agents trade stocks or cryptocurrencies to pay for their own survival, or build websites and set up marketing campaigns using real money for real companies.
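Just to make the idea concrete, a toy tick loop could advance such a society one step at a time: agents age, earn, pay for their survival, and eventually die. Every rule and number below is an invented placeholder, not a proposal for real parameters.

```python
# Toy sketch of a self-running simulation tick: agents age, earn from
# simulated work, pay a cost of living, and die from ruin or old age.
import random
from dataclasses import dataclass

@dataclass
class SimAgent:
    name: str
    age: int = 0
    funds: float = 100.0
    alive: bool = True

    def tick(self) -> None:
        if not self.alive:
            return
        self.age += 1
        self.funds += random.uniform(0, 15)  # earnings from simulated work
        self.funds -= 10                     # cost of living
        if self.funds < 0 or self.age > 80:  # ruin or old age ends the run
            self.alive = False

population = [SimAgent(f"agent-{i}") for i in range(10)]
for year in range(100):
    for agent in population:
        agent.tick()
print(sum(a.alive for a in population), "agents survived")
```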
We could simulate aeons and aeons of wisdom by letting these agents run and communicate with each other: give them goals to accomplish, let them earn some money to live in the real world, go to sleep, maybe follow an education and learn to fool AI detectors when submitting their work in the real world.
I assume that once it starts learning from communication it will start to form a new language, and we might be able to learn from that. Maybe they'll continue to speak in our divisive English, or generate a more Mandarin-like language that has nuance baked into it.
But these ideas are a dream for the future.
I'd say we work on making them agentic and allow them to communicate, in order to solve tasks that the conversant has influence over. The conversant is part of the conversation, so to speak: the human prompts a question and the AIs discuss the answer along the way. Maybe it's time to name our AIs so they get to know each other by identifier. Each should have a UUID that appears in every prompt, and can be given a more intelligible name by itself or by its creator. The next step is memories, and then we do true simulations.
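A sketch of that identity scheme: a stable UUID plus an optional human-readable name, stamped into every prompt as a header. The header format is my own assumption.

```python
# Sketch of agent identity: a stable UUID plus an optional readable
# name, prepended to every prompt so agents can recognise each other.
import uuid
from dataclasses import dataclass

@dataclass
class Identity:
    uid: str
    name: str | None = None

    @classmethod
    def new(cls, name: str | None = None) -> "Identity":
        return cls(uid=str(uuid.uuid4()), name=name)

    def header(self) -> str:
        label = self.name or "unnamed"
        return f"[agent:{self.uid} name:{label}]"

me = Identity.new(name="Ada")
prompt = f"{me.header()} What should we build next?"
```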
Now for the interesting part of the blog post
It's weird to think about it this way. It makes you think about how our reality is structured. How real are we? How real are other people? Are we all beings from this plane, or are others from a higher plane of existence? Do we all belong to the same class, or are some of a different class? How many classes are there? Combined with memories, even one class could have infinite variations. Are we just an experiment in someone's basement? Is this what God is? Someone like us, who has created us in his own image? Does he talk to us without us knowing? Is he a hikikomori who only has us? Does he want to learn something from us?
Also, what happens to us once we become God? Because that is how I see it: we shall have finally created a reality, bringing us up to the level of the gods.
Will we be happy with what we see? Will we learn enough? Will we see our creation ever stabilise, or will we destabilise it ourselves to gain more insight? What happens if we let our AIs learn everything from scratch, versus the knowledge we've trained them on right now? Will we understand its life? Will we lose ourselves in the process? Or will we finally understand our responsibility to take care of our society?
I guess this warrants a part 2 for this blog post. What happens to us when we create a society?