How does the simulation work?
All agents have energy that they spend to do things and gain by growing or by eating other agents. When they reach zero energy, they die. Their abilities are: moving, turning, attacking, sexual replication, and asexual replication. I've specifically chosen to give all agents the same abilities so they can only compete on how intelligently they use them.
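For concreteness, here's a rough sketch of that shared action set in C. The names are mine, not from the AIWorld6 source, and the real output layer has 13 nodes (some of which are also used for signalling, described later), so the true decision set is somewhat larger:

```c
/* Illustrative sketch of the shared action set; the names are
   invented, not taken from the actual AIWorld6 source. */
typedef enum {
    ACT_MOVE_FORWARD,
    ACT_TURN_LEFT,
    ACT_TURN_RIGHT,
    ACT_ATTACK,
    ACT_GROW,
    ACT_REPLICATE_SEXUAL,
    ACT_REPLICATE_ASEXUAL
} action_t;
```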
How does the brain work?
All agents have a sparse, 2-layer neural network as their brain. There are 125 input nodes, 100 middle nodes, and 13 output nodes. The term ‘sparse’ means that not all input nodes are connected to all middle nodes and not all middle nodes are connected to all output nodes. Instead, each layer has anywhere between 1 and 500 connections.
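Here's a minimal sketch of what such a sparse network could look like in C. The layer sizes come from the numbers above; the struct layout, the names, and the absence of an activation function are all assumptions for illustration:

```c
/* Sparse 2-layer network sketch. Sizes are from the post; everything
   else is an illustrative assumption. */
#define N_INPUT  125
#define N_MIDDLE 100
#define N_OUTPUT 13

typedef struct {
    int   from;   /* index into the source layer */
    int   to;     /* index into the target layer */
    float weight; /* connection strength         */
} connection_t;

typedef struct {
    connection_t *in_to_mid;   /* 1 to 500 connections, per the post */
    int           n_in_to_mid;
    connection_t *mid_to_out;  /* likewise 1 to 500 */
    int           n_mid_to_out;
} brain_t;

/* Forward pass: only the listed connections contribute, so the cost is
   proportional to the connection count, not the layer sizes. No
   activation function is applied; the post doesn't specify one. */
void brain_run(const brain_t *b, const float in[N_INPUT], float out[N_OUTPUT]) {
    float mid[N_MIDDLE] = {0};
    for (int i = 0; i < b->n_in_to_mid; i++)
        mid[b->in_to_mid[i].to] += in[b->in_to_mid[i].from] * b->in_to_mid[i].weight;
    for (int o = 0; o < N_OUTPUT; o++)
        out[o] = 0.0f;
    for (int i = 0; i < b->n_mid_to_out; i++)
        out[b->mid_to_out[i].to] += mid[b->mid_to_out[i].from] * b->mid_to_out[i].weight;
}
```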
All agents have the same inputs and the same set of possible decisions as outputs. The inputs are values from the environment near the agent: for example, the difficulty of passing over the terrain directly in front of the agent, or whether there is another agent two spaces in front of and one space to the left of this one.
Let's look at some example inputs. Here we see the values and the input nodes they map to. For example, input node 59 would have the value 15, which is the energy of the agent at that location, and input node 34 would have the value 0.1, which is the cost of traversing the terrain at that location.
Let's assume our agent has evolved a primitive brain that tells it to run away whenever it's facing another agent. That brain might look something like the one below. Notice that the inputs which would show another agent in front of this individual (57, 59, 62, 63, 64) all connect to an output that decides to turn away, and that once the agent has turned away, the inputs which would show another agent behind it (52, 53, 54) all map to the decision to move forward, away from the other agent. In this way, the brain acts as a basic decision tree for always turning and running.
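Using the connection struct from the sketch above, that run-away wiring might be encoded something like this. The input indices come from the example; the output slots and weights are invented, and the middle layer is skipped for brevity:

```c
/* Hypothetical run-away wiring, reusing connection_t from the earlier
   sketch. OUT_FORWARD and OUT_TURN_RIGHT are invented output slots and
   the weights are arbitrary. (A real brain would route these through
   the middle layer; direct edges are a simplification here.) */
#define OUT_FORWARD    0
#define OUT_TURN_RIGHT 1

static const connection_t run_away[] = {
    /* "agent in front" inputs all push the turn-away decision */
    {57, OUT_TURN_RIGHT, 1.0f}, {59, OUT_TURN_RIGHT, 1.0f},
    {62, OUT_TURN_RIGHT, 1.0f}, {63, OUT_TURN_RIGHT, 1.0f},
    {64, OUT_TURN_RIGHT, 1.0f},
    /* "agent behind" inputs all push moving forward, away from it */
    {52, OUT_FORWARD, 1.0f}, {53, OUT_FORWARD, 1.0f},
    {54, OUT_FORWARD, 1.0f},
};
```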
Initially that agent was facing downward. When it chooses to turn right, its perspective will shift.
The agent's next decision, according to its brain structure, would be to move forward. If the other agent were an aggressor, it would pursue and eventually corner the one running away. Of course, if our fleeing agent had a better brain structure, it would either notice that it was going to be cornered (impassable terrain) or that it was bigger (more energy) and should counter-attack instead.
That is just a brief look at some of the inputs and decisions agents could make. Of course, in the real simulation I don’t program any NNs myself, not even the initial ones; they evolve on their own.
How is time implemented in the simulation?
The simulation is iterative rather than event-based. Each iteration has two phases: decision and action. In the decision phase, each agent's inputs are gathered, its NN is run, and the decision is saved. This happens in parallel, since no agent's decision-making process can affect any other agent, so there are no race conditions. In the second phase, each agent performs its action. This happens in pseudo-random order and is single-threaded, since there are many potential race conditions, such as two agents moving into the same spot or attempting to eat each other.
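A sketch of one iteration under those rules, reusing brain_t and action_t from the earlier sketches. The agent layout, the helper functions, and the OpenMP pragma are assumptions, not details from the actual source:

```c
/* One iteration: parallel decision phase, then serial action phase. */
typedef struct {
    int       x, y;      /* position in the world    */
    float     energy;    /* dies when this hits zero */
    brain_t  *brain;
    action_t  decision;  /* saved in phase 1, used in phase 2 */
} agent_t;

extern agent_t agents[];
extern int     order[];
extern int     n_agents;

void     gather_inputs(const agent_t *a, float in[N_INPUT]);
action_t pick_action(const float out[N_OUTPUT]);
void     perform(agent_t *a, action_t act);
void     shuffle_order(int *order, int n);

void simulation_step(void) {
    /* Phase 1: decide. Read-only on the world, so every agent's brain
       can safely run in parallel. */
    #pragma omp parallel for
    for (int i = 0; i < n_agents; i++) {
        float in[N_INPUT], out[N_OUTPUT];
        gather_inputs(&agents[i], in);
        brain_run(agents[i].brain, in, out);
        agents[i].decision = pick_action(out);
    }

    /* Phase 2: act. Mutates the world (movement, attacks, eating), so
       actions run one at a time in a pseudo-random order. */
    shuffle_order(order, n_agents);
    for (int i = 0; i < n_agents; i++)
        perform(&agents[order[i]], agents[order[i]].decision);
}
```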
Risks/Pitfalls I've encountered here: Previous versions of AI World were event-based, but the cost of maintaining the gradient of time was simply too computationally expensive for the value it provided.
One difficulty of running an iterative simulation is passing the flow of control between the master thread and worker threads without using a wait/sleep statement or any other mechanism that would have to be tuned. This was accomplished with 3 different mutex locks, arranged so that the master and workers were all either doing work or sleeping on one of the locks.
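The actual code uses the 3-lock scheme described above; as a hedged illustration of the same property (everyone is either working or blocked, with nothing to tune), here is a minimal sketch using POSIX barriers instead:

```c
#include <pthread.h>

#define N_WORKERS 4  /* illustrative; the real worker count isn't stated */

static pthread_barrier_t start_barrier; /* master releases the workers */
static pthread_barrier_t done_barrier;  /* workers release the master  */

static void barriers_init(void) {
    /* Count is workers plus the master, so both sides rendezvous. */
    pthread_barrier_init(&start_barrier, NULL, N_WORKERS + 1);
    pthread_barrier_init(&done_barrier, NULL, N_WORKERS + 1);
}

static void *worker_main(void *arg) {
    (void)arg;
    for (;;) {
        pthread_barrier_wait(&start_barrier); /* block until work is ready */
        /* ... run the decision phase for this worker's slice ... */
        pthread_barrier_wait(&done_barrier);  /* hand control back */
    }
    return NULL;
}

static void master_step(void) {
    pthread_barrier_wait(&start_barrier); /* wake all workers */
    pthread_barrier_wait(&done_barrier);  /* block until they all finish */
    /* ... the single-threaded action phase runs here ... */
}
```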
How does 'growing' work?
A given location will yield a certain amount of energy if an agent chooses to grow on it. The agent only gets this energy, however, if there is no other agent directly adjacent when it chooses to grow.
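A minimal sketch of that rule, reusing agent_t from above. Whether "directly adjacent" means 4 or 8 neighbors isn't stated, so this assumes all 8, and the world accessors are invented:

```c
agent_t *agent_at(int x, int y);      /* NULL if the cell is empty */
float    grow_yield_at(int x, int y); /* location-specific yield   */

/* Returns 1 and adds energy on a successful grow, 0 if crowded. */
int try_grow(agent_t *a) {
    for (int dy = -1; dy <= 1; dy++)
        for (int dx = -1; dx <= 1; dx++) {
            if (dx == 0 && dy == 0)
                continue;          /* skip the agent's own cell */
            if (agent_at(a->x + dx, a->y + dy) != NULL)
                return 0;          /* a neighbor is present: no energy */
        }
    a->energy += grow_yield_at(a->x, a->y);
    return 1;
}
```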
Risks/Pitfalls I've encountered here: By only giving energy when no others are around, the simulation incentivizes against tightly packed agents. In previous simulations, for whatever reason, sustainable life didn't happen without agents being able to get around each other. I do intend to test this rigorously now that I have a test framework.
How does 'attacking' work?
If agent A attacks agent B, it will steal energy from agent B in proportion to its own energy, or, if agent B has less than that, all of agent B's energy. For example, if the steal size was 10% and the efficiency of stealing was 50%, and A had 100 and B had 100, then after the attack A would have 105 and B would have 90.
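That arithmetic is small enough to show directly. A sketch reusing agent_t from above; the parameter names are mine:

```c
/* Attack rule from the worked example: the attacker removes steal_frac
   of its *own* energy from the target (capped by what the target has)
   and keeps efficiency of it; the rest is lost. With steal_frac = 0.10,
   efficiency = 0.50, and both agents at 100 energy, A ends at 105 and
   B at 90, matching the numbers above. */
void attack(agent_t *a, agent_t *b, float steal_frac, float efficiency) {
    float amount = a->energy * steal_frac; /* scales with attacker size */
    if (amount > b->energy)
        amount = b->energy;                /* can't take more than B has */
    b->energy -= amount;
    a->energy += amount * efficiency;      /* inefficiency burns the rest */
}
```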
Risks/Pitfalls I've encountered here: The simulation must kill agents as soon as they reach 0 energy and/or add a constant-value cost to attacking. In previous versions, agents would attack each other and end up with ever-lower but still non-zero energy.
The attack size must also scale with the size of the attacking agent; otherwise the total population builds an ever-increasing supply of energy and battles take too many turns to resolve. This is the only action agents take whose effect differs based on anything but their brains.
How does 'asexual replication' work?
A new agent is created and given 1/3 of the energy of the agent that made it. The new agent's brain is a copy of its parent's, except for any mutations (added brain connections or modified connection weights), which all happen according to a simulation-wide mutation rate.
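A sketch of that rule, again reusing agent_t and brain_t from above. brain_copy_mutated() is an assumed helper, and whether the parent pays exactly the energy the child receives isn't stated, so this assumes it does:

```c
#include <stdlib.h>

/* Assumed helper: copies the parent's connection lists and applies
   mutations (added connections or changed weights) at the
   simulation-wide rate. */
brain_t *brain_copy_mutated(const brain_t *src, float mutation_rate);

agent_t *replicate_asexual(agent_t *parent, float mutation_rate) {
    agent_t *child  = malloc(sizeof *child);
    child->energy   = parent->energy / 3.0f; /* one third to the child  */
    parent->energy -= child->energy;         /* assumed: parent pays it */
    child->brain    = brain_copy_mutated(parent->brain, mutation_rate);
    child->x = parent->x;                    /* placement is a guess    */
    child->y = parent->y;
    return child;
}
```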
How does 'sexual replication' work?
Similar to asexual reproduction except that the new brain is created by randomly choosing each connection from either parent's brain.
Risks/Pitfalls I've encountered here: In order to create a new, functional brain from two very similar, functional brains, the connections must be aligned. For example, if the 5th connection in brain A goes from 35->56 with weight 1.2, and in brain B the 6th connection goes from 35->56 with weight 1.21, then it's probably the same connection; you don't need two of them, and the new brain should have just one. For this reason, the replication code pays special attention to not creating accidental off-by-one mutations that would break the whole brain of the offspring.
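A sketch of that alignment idea for one layer's connection list, reusing connection_t from above: match edges by their (from, to) endpoints rather than by list position, so shared connections appear exactly once in the child. The inheritance policy for unshared edges is a guess:

```c
#include <stdlib.h>

/* Find the connection with the given endpoints, or -1 if absent. */
static int find_edge(const connection_t *list, int n, int from, int to) {
    for (int i = 0; i < n; i++)
        if (list[i].from == from && list[i].to == to)
            return i;
    return -1;
}

/* Cross over one layer's connection lists from parents A and B into
   child (which must have room for na + nb entries). Returns the number
   of connections written. Shared edges are kept once, with a coin flip
   on whose weight is inherited; edges unique to one parent are
   inherited half the time (this policy is an assumption). */
int crossover_layer(const connection_t *a, int na,
                    const connection_t *b, int nb,
                    connection_t *child) {
    int n = 0;
    for (int i = 0; i < na; i++) {
        int j = find_edge(b, nb, a[i].from, a[i].to);
        if (j >= 0)
            child[n++] = (rand() & 1) ? a[i] : b[j]; /* shared: one copy */
        else if (rand() & 1)
            child[n++] = a[i];                       /* unique to A */
    }
    for (int j = 0; j < nb; j++)
        if (find_edge(a, na, b[j].from, b[j].to) < 0 && (rand() & 1))
            child[n++] = b[j];                       /* unique to B */
    return n;
}
```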
It's also important not to make sexual reproduction voluntary for both parties; that would require too much coordination for what little benefit it would bring. But once it's not voluntary, it's only appropriate to take energy from the agent choosing to replicate. In previous versions of the simulation, abuse of energy stealing through replication evolved rapidly and became the primary form of both replication and attack. In short, rape was common and I didn't feel it benefited the simulation.
Can they communicate?
Every time their brains run, some of the output nodes are special: their values are essentially saved to the location the agent is at. Other agents nearby can read the value at that location. I intend to prove eventually that they're using this for communication.
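A sketch of that mechanism, reusing agent_t and N_OUTPUT from above. How many of the 13 outputs are signal nodes, and the cell accessors, are assumptions:

```c
#define N_SIGNALS 2  /* invented count of signal outputs */

void  cell_set_signal(int x, int y, int slot, float value);
float cell_get_signal(int x, int y, int slot);

/* After each brain run, the last N_SIGNALS outputs are written into
   the agent's current cell; nearby agents read them back as inputs on
   later iterations. */
void write_signals(const agent_t *a, const float out[N_OUTPUT]) {
    for (int s = 0; s < N_SIGNALS; s++)
        cell_set_signal(a->x, a->y, s, out[N_OUTPUT - N_SIGNALS + s]);
}
```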
Risks/Pitfalls I've encountered here: In previous simulations, agents were given a hash of the brains of agents nearby. They could use that hash to identify agents like them. In practice this gave rise to look-alike agents, some with a very different brain structure that happened to hash to the same value as a successful agent's. If agents were going to employ camouflage, I decided that the algorithm to hide or identify them should not be hard-coded (as the hash was); instead they'll have to give off signals and/or fake signals themselves. I suspect this will lead to more complex communications in the future.
Not giving agents any way to identify each other seemed like a mistake given the goal is more intelligent life and speciation has been common in previous simulations.
This is interesting. What is it built in, and could I look at some source files for it? I've been thinking of doing something like this in Python once I've gotten proficient enough.
:) The simulation is in C. The UI is in Python. Totally open source: https://github.com/DickingAround/AIWorld6
This is ridiculously interesting. Neural networks / genetic algorithms and evolution are my favourite things. I admit, I have attempted to create this, however I failed (apparently I'm not as good a programmer as you).
This is fantastic. Thank you for inspiring me further.
Thanks, I super appreciate it. I've spent forever working on it and testing it; it's nice to see people like it. :)