Saturday, July 5, 2014

Hydro electric power for a house

Question: I have a hill in a rainy part of the world; can I build my own mini hydro-electric plant?

We get about 40 inches of rain per year and I have a roof about 40' by 20' that will be about 50' in the air. That works out to 2,667 cubic feet of water every year.

Potential energy is E = m × g × h, with g = 9.8 N/kg. We have 75.5 cubic meters (75,500 kg) of water at 15.24 meters of height. That works out to 11.3E6 joules, or 3,130 watt-hours. Even at 100% efficiency I'd only get about 3 kWh per year.
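For anyone who wants to check the arithmetic, here's the whole back-of-envelope calculation in one place (values from the post, unit conversions standard):

```python
# Yearly hydro potential of rain falling on the roof (values from the post).
FT3_TO_M3 = 0.0283168  # cubic feet to cubic meters
FT_TO_M = 0.3048       # feet to meters
G = 9.8                # N/kg

rain_in = 40.0               # inches of rain per year
roof_ft2 = 40.0 * 20.0       # roof area in square feet
height_ft = 50.0             # roof height

volume_ft3 = roof_ft2 * rain_in / 12.0   # ~2,667 cubic feet per year
volume_m3 = volume_ft3 * FT3_TO_M3       # ~75.5 m^3
mass_kg = volume_m3 * 1000.0             # 1 m^3 of water is ~1,000 kg
height_m = height_ft * FT_TO_M           # 15.24 m

energy_j = mass_kg * G * height_m        # ~11.3e6 J
energy_kwh = energy_j / 3.6e6            # ~3.1 kWh per year
print(round(energy_kwh, 2))
```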

Which is really unfortunate, because it would have been fun to run some pipes and I could have easily 3D printed a little hydro engine.

Monday, June 16, 2014

My house has too many fucking railings

My house is essentially done now. Little thanks to me. But I did have to build the railings. Or at least I chose to, because I'm both a cheapskate and a dumbass. Here are some random pictures of that.

But what are random pictures without random stats?

  • 7.8 ft of 1/4" holes drilled
  • 116 square inches (0.81 square feet) of steel cuts on the saw
  • 65 ft of welding
  • 13 drill bits (1/4") destroyed
  • 6 band saw blades destroyed
  • ~70 cubic feet of argon lost
  • 2,394 individual acts of machining (not including installation)

Sunday, May 11, 2014

Experiment: How much light penetrates a welding helmet?

Summary

Problem: How much light does a welding helmet block? They come in shades like '9' or '12', but the only documentation I can find says which shade to use for which jobs. It never seems to say what the shades actually mean.

Conclusion: A basic 'shade 10' welding glass will block all but one part in 20,000 of low-frequency light.

A given shade number seems to allow 1/3 the light of one shade below it. So, for example, a shade 10 will allow 1/3 the light of a shade 9. Also, a shade 8 will allow 3x the light of a shade 9. A 'shade 1' appears to be equivalent to 'no shade'.
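That rule of thumb can be written as a tiny formula, assuming shade 1 passes everything and each additional shade divides the light by 3:

```python
def transmission(shade):
    """Fraction of light passed by a welding glass of the given shade.

    Based on the measured rule of thumb: each shade passes 1/3 the
    light of the shade below it, and shade 1 is equivalent to no shade.
    """
    return (1.0 / 3.0) ** (shade - 1)

# A shade 10 then blocks all but ~1 part in 3^9 = 19,683 (~20,000).
print(round(1 / transmission(10)))
```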

This appears to hold true only for light of wavelength green or longer. When viewing blue laser light through the glass, it appeared (anecdotally) much dimmer than green laser light of the same unshaded intensity.

The experiment

Big thanks to Woody and Erin for helping design and do this experiment.


Original data:

Setup: We measured the shades of a Hobart variable-shade welding helmet and a regular no-brand fixed-shade welding helmet. We used an LX1010B light meter inside the helmet, with cloth packed around it to block stray light. Without any of the lights on, the sensor read '0' on its most sensitive setting.

For each measurement, we took one of two flashlights and shone it through the helmet, centered on the light meter sensor. We also shone them directly on the sensor to get a no-shade measurement. For the variable-shade helmet we also needed an extra flashlight to activate the shade (normally it stays low-shade so the welder can see before they start welding, then darkens when welding begins; it uses a light sensor on a different part of the helmet's face to do this).

We were only able to take a few measurements, but they form a highly linear pattern, are consistent from one light to the next, and the measurements from the variable-shade helmet closely matched the fixed shade-10 helmet.


Results

The relationship between lux and shade looks very linear when lux is plotted on a log scale. At a glance, it's about 1/3 the lux for every additional unit of shade.


What's most interesting is that if we extrapolate this 1/3 relationship back down the shade levels, we predict roughly the 'unshaded' lux measurement when we reach shade level 1. This makes sense, and it reinforces that we've done the measurement at least approximately correctly, since I can imagine an engineer/scientist choosing shade level 0 or 1 to mean 'unshaded'.

Sunday, April 20, 2014

AIWorld6: The Tree Of Life

The Mechanics

I almost want to get immediately sentimental. But let's for a moment talk about how this tree of life was created and thus how to read it.

To explain how this is built, let's talk about what happens with a single creature. Each creature has a number that represents its species. When the creature replicates, its offspring gets a number that's either +1 or -1 from the parent's number. Using this simple scheme, over many generations different species' numbers drift away from each other.

By chance there will also be a lot of crossover as species' numbers wander. To fix this I bias the probability of picking -1 or +1 based on where the other creatures are. So, for example, if there are 1,000 creatures with the number 40 and this creature has the number 41, its offspring will likely get a +1 and become 42. This has the effect of spreading out the numbers.
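The biased ±1 scheme might look something like this sketch (the post doesn't give the exact bias function, so this one just compares the two neighboring numbers):

```python
import random

def offspring_species(parent, population_counts):
    """Pick the offspring's species number: parent +1 or -1,
    biased away from whichever neighboring number is more crowded.

    population_counts maps species number -> how many creatures carry it.
    (A sketch of the scheme described in the post; the bias is a guess.)
    """
    below = population_counts.get(parent - 1, 0)
    above = population_counts.get(parent + 1, 0)
    total = below + above
    if total == 0:
        # No neighbors either way: drift at random.
        return parent + random.choice((-1, 1))
    # More creatures below means we more likely step up, and vice versa.
    p_up = below / total
    return parent + (1 if random.random() < p_up else -1)
```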

Next I want to use these species numbers to see which creatures are which on the map, so I assign each number a color. On a 0-255 RGB scale there are 1,530 distinct full-brightness colors, so I just mod the species number by 1,530 and map it onto that color wheel.
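A sketch of that mapping, walking the six 255-step segments of the RGB hue wheel (the segment order here is my assumption; any fixed order gives the same 1,530 colors):

```python
def species_color(n):
    """Map a species number to one of 1,530 full-brightness RGB colors.

    Walks the hue wheel in six 255-step segments:
    red -> yellow -> green -> cyan -> blue -> magenta -> (back to red).
    """
    h = n % 1530
    seg, t = divmod(h, 255)
    if seg == 0:
        return (255, t, 0)        # red -> yellow
    if seg == 1:
        return (255 - t, 255, 0)  # yellow -> green
    if seg == 2:
        return (0, 255, t)        # green -> cyan
    if seg == 3:
        return (0, 255 - t, 255)  # cyan -> blue
    if seg == 4:
        return (t, 0, 255)        # blue -> magenta
    return (255, 0, 255 - t)      # magenta -> red
```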

Now I have colors that represent species. How to build the tree? Every X turns I take a census of the species present and plot a single line where the intensity is proportional to the percentage of the population with that color. Stacking those slices on top of each other for tens of millions of turns gives the full and complete tree of life.
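One census slice of the tree can be sketched like this (function name and row layout are mine):

```python
from collections import Counter

def census_row(species_numbers, width=1530):
    """One horizontal slice of the tree of life.

    Each column is a species color; intensity is the fraction of the
    current population carrying that (modded) species number.
    """
    counts = Counter(n % width for n in species_numbers)
    total = len(species_numbers)
    return [counts.get(x, 0) / total for x in range(width)]
```

Stacking one such row per census, top to bottom, produces the full tree image.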

The meaning

Now for the fun part. More accurately, now for the amazing part.

At a high level, it's the first time I've ever seen every single species ever in the tree of life. No stumpy branch or leaf in this tree is left un-turned. We don't have to rely on some spotty fossil record that we piece together by comparing physical structures. We don't even have to parse the DNA. We have a tree of life based on real heritage data. And crazier yet, this is a tree of life playing out just like all our evidence said ours did. Right in front of our eyes in a repeatable way.

Look at the long stretch of yellow and orange. Notice how the yellow is longer-lived (avg. age ~5,000 turns vs. less than 1,000 for orange) and branches less often because its replication rate is slower. If we look at the real-time video of the world we can actually see yellow being killed off by the more aggressive pink, who are themselves overtaken by the more stable red species.

Look at the times when a branch fans out and then suddenly tightens. That's a small set of individuals or even a single individual which turned out to be more fit than the others and out-competed them. That property alone is what keeps the lines of the species from drifting farther from each other.

I built this system with the intention of building true causational (not correlational) studies of life. The first step of that is getting life. This looks a lot like life to me. Maybe not life just like ours. Maybe without temperature-based homeostasis and carbon. But are we so sure that's the right definition of 'life'? :)

Sunday, March 23, 2014

AIWorld6: Without the ability to be functionally different, there is no predation

Normally a predator species evolves quite quickly (within the first 1,000 generations), but this world runs to 50,000 generations and we never see it.

The difference is a change I made. It's clear that predation relied on a feature I just removed: when a creature attacks, it steals a certain amount of energy from the creature it attacks. Normally that amount is a function of how much energy the attacker already has (I was trying to model the idea that being bigger lets you eat more). In this world, I made it a constant (20 energy).
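Side by side, the two rules look roughly like this (the old rule's proportionality constant isn't given in the post, so the fraction here is a placeholder):

```python
def stolen_energy_old(attacker_energy, fraction=0.25):
    """Old rule: the amount stolen scales with the attacker's energy.
    (fraction is a placeholder; the post only says it was a function
    of how much energy the attacker already has.)"""
    return attacker_energy * fraction

def stolen_energy_new(attacker_energy):
    """New rule: a flat 20 energy per attack, regardless of size."""
    return 20
```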

In retrospect, I was foolishly doing something I said I wanted to avoid: allowing the agents to be functionally different in any way. I want them to compete on brain power alone.

But it does make me wonder if somehow forcing them to compete on brain power will actually result in a less complex and less advanced world. Luckily, I have a simulator to test such things. :)

First then, I'll try to generate speciation without allowing them to be functionally different in any way; I'm going to modify the terrain. Some places will be harder to move over. Some will have less food. We'll see if that causes it. (The simulation for this is running now)

Monday, March 17, 2014

AIWorld6 GreenVsPurple

AIWorld6: New UI and abilities

New abilities

I fixed a bug that prevented them from using the signal/communication system, so now you'll see a lot more signaling than before. As a reminder, here's how it works: an agent can write three floating-point numbers to the location it's standing on. Those numbers persist until another agent overwrites them, and any agent nearby can read the numbers from that location.
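A minimal sketch of that signaling scheme as described (the dict layout and function names are mine; the real system is C over the world grid):

```python
# Each map cell can hold three floats, written by whoever stands there.
signals = {}  # (x, y) -> [f0, f1, f2]

def write_signal(x, y, values):
    """Overwrite the three floats at a location; they persist until
    some other agent writes over them."""
    assert len(values) == 3
    signals[(x, y)] = list(values)

def read_signal(x, y):
    """Any nearby agent can read a location; unwritten cells read as zeros."""
    return signals.get((x, y), [0.0, 0.0, 0.0])
```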

Also, I gave all the agents memory: twenty of their outputs are saved and fed back in as inputs on the next turn.

In both cases I haven't figured out whether they're actually using those features. Even if they are writing to memory and signaling, it could just be noise.

The new UI

The new UI shows a lot of things. For starters, I sized the world down to 200x200 and then made the UI map 600x600 so it's easier to see what each individual agent is doing.

I also gave them shapes that represent their decisions. So a 'C' shape is attacking, 'O' is replicating, '+' is growing, 'x' is turning, '^' is moving. I made the growing/turning/moving shapes small and the attacking/replicating shapes big because I want to highlight those less common activities.

The speciation is also dramatically improved. Before, I'd parse their brains and make a species judgement from that. Now I simply give each agent a number; when it replicates, its spawn gets that number plus or minus 1. It turns out there's usually so much selection going on that the current generation is only 10 or 20 generations away from a single common ancestor, so the numbers stay quite tight within a species. I then map that number to a color to make it easy to see.

This speciation improvement also made parsing species stats a lot easier. You'll notice the system now differentiates species and gathers statistics on predatory behaviour far more easily.

Monday, February 24, 2014

AIWorld6 - The Joy of Small Data - Follow up

I got some great feedback from people; props go to rolisz for suggesting a Stack Exchange conversation that led me to kernel density estimation.

You can see from the plot of the evaluated KDE, and the plot of its derivative, that it'll be easy to pick out the species. A naive parser that treats local maxima as species and local minima as the boundaries between species did quite well on this data. I would have guessed [14900, 15050, 15300, 15500, 15650, 15900, 16100], and the KDE with this naive parsing went with [14732, 14861, 15054, 15674, 15725, 16087]. With the exception of 15674 and 15725 being too close to each other, I think it was a total win. I'll be implementing this algorithm shortly, and perhaps even showing the histogram to let a viewer second-guess the automatic stats gathering if they feel the need.
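The naive parser might look something like this sketch, using scipy's gaussian_kde (grid size and the default bandwidth are my choices, not from the post):

```python
import numpy as np
from scipy.stats import gaussian_kde

def find_species(brain_sums, grid_points=512):
    """Treat local maxima of a KDE over brain sums as species centers.

    A sketch of the naive parser described above; bandwidth is left to
    scipy's default (Scott's rule) and may need tuning for real data.
    """
    data = np.asarray(brain_sums, dtype=float)
    kde = gaussian_kde(data)
    xs = np.linspace(data.min(), data.max(), grid_points)
    ys = kde(xs)
    # Interior grid points larger than both neighbors are local maxima.
    is_peak = (ys[1:-1] > ys[:-2]) & (ys[1:-1] > ys[2:])
    return xs[1:-1][is_peak]
```

The local minima between peaks would then serve as the species boundaries for gathering per-species stats.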

Sunday, February 23, 2014

AIWorld6 - The joy of small data

I'm happy because I've failed at clustering.

I do programming in my free time and I'm working on a system that evolves artificial life. This might seem like cool-ass AI programming, but it's mostly the regular drudgery of C arrays and test-case writing. After all, I'm building the world, not making them smart. It's up to them to evolve smartness.

As a result, I was super psyched today when I realized I'd failed at my naive approach to detecting species. I knew they'd speciated because in every world I start from scratch, there's a clear moment when almost everyone dies, and after that I see a lot of agents with low energy and a small number with 100x as much. That die-off is the start of predation. Those high-energy agents are predators. And from then on, those two different types of agents are probably evolving differently. So how can I prove this? I want to gather stats on the two different kinds.

Of course, I could just check how often agents attack each other and label the ones that do it most as predators, but that's differentiating on the same thing I want to measure, which is pointless. It also doesn't scale; if there are other species doing other behaviours, I want to find them too.

The first clustering algorithm I tried was simply taking the sum of all the connections in each brain and assigning a color to that number. The idea is that if agents have similar brains, they should get similar colors; so as the brain construction of two species diverges, they turn different colors. When I see a bunch of them around a given color, I know I have a species. This works okay, but it takes a lot of tuning to keep the color spectrum from being so wide that it's washed out or so narrow that everyone appears the same. Worst of all, it still relies on my visual ability to discern where one color band ends and the next begins.

Then I tried what any thinking-inside-the-box, data-too-big programmer would do: I wrote a simple clustering algorithm that runs in linear time. Essentially, if there's a gap of more than X spaces containing fewer than Y agents, it's probably a gap between species; then I call all the parts that aren't gaps different species. It's a shit algorithm. It takes a ton of tuning and I'm still missing the really uncommon species.
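Under my reading, that gap-based pass looks roughly like this (X and Y become `gap` and `min_count`; the thresholds are placeholders and, as noted, need a ton of tuning):

```python
def gap_clusters(values, gap=50, min_count=3):
    """Linear-time gap clustering over brain sums.

    Sort the values, then split wherever two consecutive values are
    more than `gap` apart; runs smaller than `min_count` are dropped,
    which is exactly how the rare species get missed.
    """
    values = sorted(values)
    clusters, current = [], [values[0]]
    for v in values[1:]:
        if v - current[-1] > gap:
            if len(current) >= min_count:
                clusters.append(current)
            current = [v]
        else:
            current.append(v)
    if len(current) >= min_count:
        clusters.append(current)
    return clusters
```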

At that point, I figured it was time to call in the big guns of a real algorithm and use a real comp-sci library for it. I looked up clustering algorithms, chose one arbitrarily, and spent an hour prototyping k-means clustering using scipy. The results are shit.

For the histogram of brain sums, even if I give k-means my personal guesses ([14900,15050,15300,15500,15650,15900,16100]), it still gives me results I'd describe as not helpful ([14828,15072,15451,15575,15694,15825,16056]). And it doesn't even automatically give me the min/max of those means.

Which brings me to what makes me so happy. I'm reminded of all those comp-sci classes where they talked about picking the right algorithm for the problem. I have a small-data problem; there are never more than about 40k agents in the world, so I can pick almost anything. I can think about why my problem is different from the generic clustering problem: outliers matter, because they're under the same selection pressure as everyone else and they're still alive. Also, small clusters are just as important as big ones, since they must still be a species. I'm hoping to find an algorithm that works for me. And if I don't, I'll invent one and code it myself.

For someone who spends most of their time crunching big data and having to use simple algorithms to do it, I'm gonna love picking just the right one for my small data.

Tuesday, February 4, 2014

AIWorld6: Real Science

It's time to set up the first experiment. The system runs. The lifeforms do crazy things. Now the question is: what to start with first? Some options:

  • Is life more successful with a concept of age?
  • Is life more resilient when the population is genetically diverse?
  • Is life more resilient when the terrain is diverse?
  • Do more species occur when the terrain is diverse?

So far, the interesting things I've observed are die-offs from very aggressive predators, specialization into predators and prey, and specialization based on terrain. The most interesting of these is the speciation. I think the next step will be seeing what causes a diverse set of species and then seeing if that prevents die-offs.

Tuesday, January 7, 2014

AIWorld6: Finally proof of predators

I've been working on making the system automatically detect new species: to show a species tree and then also to detect those species and show stats about them. In the course of testing, it seems like predatory behaviour evolves very often (always?), as can be seen in the faint red species tree. In the later screenshot, I've also added some code to detect them and show stats. We can see here that the predators do in fact have much more energy and attack far more often, just like a predator would.