
A learn more AI

Yesterday, I spoke of learning in the computer sense. I want to continue talking about something I'm finding really interesting that has to do with computers learning, just not as much. How do humans learn? It's simple enough to say that we learn by trial and error, positive reinforcement, and a whole series of other methods… but what in the world taught us these methods? What are they based on? Why are certain things hard coded?

These are all very interesting questions. Let's start by talking about what's hard coded at birth. To be clear, I'm not referring to actions that have nothing to do with the brain, for example, the knee-jerk response or the reflex that automatically pulls your hand away from a fire. Those are all handled basically at the spinal cord, never truly involving anything in your brain (it would take too long).

What I'd like to focus on is the idea of path finding. When a baby is born, the first thing you see them doing is flailing their arms and legs. According to my professor, they honestly don't realize that their body parts are attached. They can see them, hear them, and feel them hitting things like their face. But it doesn't click with them for a good period of time that these things are actually being controlled by them. Once they figure it out, though, they start trying to do more and more complex things, mostly by imitation: crawling, walking, peekaboo, clapping, and about half a dozen other things your parents took pictures of when you were a baby.

So obviously, knowledge of what's attached to our bodies, and of how to use it, isn't entirely programmed. Knowledge of what all our inputs are, and even more so our outputs, is limited… And yet…
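You can sketch that flailing-to-discovery process in code. Roboticists sometimes call it "motor babbling": fire random motor commands, watch which sensors respond, and build up a map of your own body. This is just a toy sketch of the idea, with a made-up three-limb body (the `true_mapping` dictionary is hidden from the learner, which only ever sees sensor readings):

```python
import random

def motor_babble(true_mapping, steps=2000, seed=0):
    """Learn which motor channel drives which sensor by issuing random
    commands and counting which sensor responds ('motor babbling').
    true_mapping plays the role of the world; the learner never reads it
    directly, it only sees which sensor fired after each command."""
    rng = random.Random(seed)
    n = len(true_mapping)
    # counts[m][s]: how often sensor s fired right after motor m was driven
    counts = [[0] * n for _ in range(n)]
    for _ in range(steps):
        m = rng.randrange(n)               # flail a random "limb"
        sensors = [0] * n
        sensors[true_mapping[m]] = 1       # the world responds
        for s, fired in enumerate(sensors):
            counts[m][s] += fired
    # the learned body schema: each motor -> the sensor it most often moved
    return {m: max(range(n), key=lambda s: counts[m][s]) for m in range(n)}

# hypothetical body: motor 0 moves sensor 2, motor 1 moves 0, motor 2 moves 1
learned = motor_babble({0: 2, 1: 0, 2: 1})
```

With clean sensors the learner recovers the mapping exactly; a baby gets noisy, delayed feedback, which is presumably why it takes months rather than two thousand trials.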

We can pathfind/track.

Right out of the womb, you can get a baby to follow your finger with its eyes, and normally its head. It will follow it around and around. It will focus on whatever moves. It's believed this behavior is encoded in the brain, primarily going back to our predator-spotting instincts, where the two questions you're always asking yourself are "what just moved?" and "can it kill me?" But still, it seems this is also some sort of basis or requirement for learning. If we can't see our environment and focus on parts of it, how can we learn? Well, it turns out we have a lot more inputs besides just our eyes; even blind people pathfind…
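A minimal sketch of that "what just moved?" reflex: compare two frames of a toy retina, find where they differ most, and turn the gaze one step at a time toward that spot. Everything here (the 5x5 grid, the frame-differencing approach) is my own illustrative assumption, not how the brain or any particular robot actually does it:

```python
def find_motion(prev_frame, frame):
    """Return the (row, col) where two frames differ most --
    a crude answer to 'what just moved?' via frame differencing."""
    best, best_pos = -1, None
    for r, (row_a, row_b) in enumerate(zip(prev_frame, frame)):
        for c, (a, b) in enumerate(zip(row_a, row_b)):
            diff = abs(a - b)
            if diff > best:
                best, best_pos = diff, (r, c)
    return best_pos

def step_gaze(gaze, target):
    """Nudge the gaze one cell toward the target, like a head turning."""
    r, c = gaze
    tr, tc = target
    return (r + (tr > r) - (tr < r), c + (tc > c) - (tc < c))

# a 5x5 'retina': blank, then a bright spot appears at row 1, col 3
prev = [[0] * 5 for _ in range(5)]
cur = [row[:] for row in prev]
cur[1][3] = 9
gaze = (2, 0)
target = find_motion(prev, cur)
while gaze != target:
    gaze = step_gaze(gaze, target)   # track until the spot is centered
```

The interesting part is how little machinery the reflex needs: no model of the object, no memory beyond one previous frame, just "orient toward the biggest change."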

MIT, of course, has been on this issue for some time. In fact, they have a robot called Cog.


This robot learned to track people, repeat their movements, and, as I understand it, achieved roughly the level of an eight-month-old. I'm not sure I entirely understand how, but it's crazy what they managed to get it to learn. I also enjoy Kismet, which is quite famous, much like Cog. If you get a chance, you should check this stuff out.

Well, since my AI 2 homework didn't quite kill me, I suppose I should be heading off to bed now. After all, I have a meeting at 9:30 on the other side of the Twin Cities. I also hope to do something fun tomorrow night, seeing as my honey bunny will be gone on some church event. Maybe I'll get a haircut and do some custom programming and homework catch-up. My last month at the U might actually be the hardest and most fun one to date. Hopefully all my friends don't get bored with me.
