NTAFTAWT Future
Last update: 1-13-2008
The Present
Before talking about the future, there needs to be some explanation of
where things currently stand.
First of all, I am starting a new job, and so I most likely will not be
updating this as fast as I was previously. I would have liked to have
been further along by this point.
Currently, the charpatt program can distinguish between hundreds of
patterns with just a few non-unique identifications. I think that
changing the neuron configuration would fix this problem, and that
there is no fundamental flaw in the program that prevents better
identification.
I first experimented with this idea about 15 years ago, and found that
computers were too slow and the graphics were not good enough to allow
easy development. At that time, I was just using DOS, with the standard
CGA graphics. There was nothing like the windowing libraries that are
available today.
Even though computers have increased in speed thousands of times, it
appears that current computers still aren't fast enough, or don't have
the proper configuration, to run a huge neural network at a reasonable
speed. At the moment, it is possible to run a few thousand neurons and
get some learning in seconds. Of course the brain takes a few years to
be able to recognize many patterns, but I am just not that patient.
Before I started this, I thought it would not be worth optimizing just
to get a factor of two or so. I was mistaken. Compiling in release mode
gives a factor of 10 or so, and further optimization gives a few more.
So most likely we can run the next types of simulations that I have
in mind.
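Just to be concrete about what "release mode" means here: it mostly
comes down to turning on the compiler's optimizer. The exact flags
depend on the compiler, and the file name below is only a placeholder,
but roughly:

    cl /O2 charpatt.cpp                  (Visual C++ "release"-style build)
    g++ -O2 -o charpatt charpatt.cpp     (gcc equivalent)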
I don't think that this model is the same as how the brain actually
works. Normal neurons most likely have stronger and weaker connections
in various parts of the neuron. I think this model is mathematically
similar, but that multiple neurons are required in place of a single
neuron in the brain. And of course, this first stage of simulation is
still only a binary neural network. I am not saying that this is a
complete model, just describing how some portions may be similar.
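As a minimal sketch of what I mean by a binary neuron here (just an
illustration of the general idea, not the actual charpatt code, and the
integer weights are an assumption): each neuron has weighted input
connections and fires when the inputs that fired push the sum past a
threshold.

    // Minimal sketch of a binary (threshold) neuron.  Integer weights
    // and 0/1 inputs are assumptions for illustration only.
    #include <vector>

    struct BinaryNeuron {
        std::vector<int> weights;   // one weight per input connection
        int threshold;              // fire when the weighted sum reaches this

        // inputs[i] is 1 if the connected neuron fired, 0 otherwise
        bool fire(const std::vector<int>& inputs) const {
            int sum = 0;
            for (int i = 0; i < (int)inputs.size() && i < (int)weights.size(); ++i)
                sum += weights[i] * inputs[i];
            return sum >= threshold;
        }
    };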
What is a Line?
I think this is a simple question about what counts as an analogy in
visual space. What gets us to think that a line on one part of the
retina is the same line when it is projected onto another part of the
retina? When an object moves across the retina (either because the head
or eyes are moving, or because the object is moving), how does the brain
know that it is the same object?
The object can be moved across the eye, and it can be scaled as it is
moved closer to or farther from the eye. It can also be rotated. For
all of these effects, the neurons along a line of the object will fire
closely in time as the image moves across the eye.
I have thought for many years that maybe the only thing that is
happening is that neurons that fire closely in time get wired together.
The next phase of development I would like to incorporate is neurons
that fire when their dendrites are fired closely in time.
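A rough sketch of the kind of neuron I have in mind, assuming discrete
time steps; the window size and required count below are just
placeholders:

    // Sketch of a coincidence-detecting neuron: it remembers when each
    // of its inputs last fired and fires itself when enough of them
    // have fired within a small time window.
    #include <vector>

    struct TimeNeuron {
        std::vector<int> lastFired; // time step at which each input last fired
        int window;                 // "closely in time" = within this many steps
        int needed;                 // how many coincident inputs cause a fire

        TimeNeuron(int numInputs, int window_, int needed_)
            : lastFired(numInputs, -1000000),   // "never fired" start value
              window(window_), needed(needed_) {}

        void inputFired(int which, int now) { lastFired[which] = now; }

        bool fired(int now) const {
            int count = 0;
            for (int i = 0; i < (int)lastFired.size(); ++i)
                if (now - lastFired[i] <= window) ++count;
            return count >= needed;
        }
    };

The wiring-together part would then be a separate step that strengthens
the connection between two neurons whenever both fire within the same
window.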
The feature that could easily be added to the charpatt program is moving
the characters across the "eye". Then the neurons and layers would have
to be modified to be able to handle time. I don't have the complete
solution in mind yet.
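Still, here is a sketch of the moving-character part, assuming the "eye"
is a 2-D grid of on/off pixels and the character is shifted one column
per time step (the sizes and names are made up for illustration):

    // Place a small 0/1 character bitmap into a wider "eye" grid at a
    // given horizontal offset.  Sweeping the offset over time moves the
    // character across the eye, one input frame per time step.
    #include <vector>

    typedef std::vector< std::vector<int> > Grid;   // 0/1 pixels

    Grid placeAt(const Grid& ch, int eyeRows, int eyeCols, int offset) {
        Grid eye(eyeRows, std::vector<int>(eyeCols, 0));
        for (int r = 0; r < (int)ch.size() && r < eyeRows; ++r)
            for (int c = 0; c < (int)ch[r].size(); ++c)
                if (offset + c >= 0 && offset + c < eyeCols)
                    eye[r][offset + c] = ch[r][c];
        return eye;
    }

    // Usage idea: for each time step t, feed placeAt(character, rows,
    // cols, t) to the network along with the time t.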
So what is happening when something is rotated 360 or 180 degrees? There
is some sort of mathematical beauty in the fact that something can be
moved by a certain amount and that this concept applies to all shapes.
It seems like a few more layers of neurons are needed to be able to
understand this problem.
What are Relationships and Other Patterns?
So what is happening with shapes that are related to each other, for
example when one shape is to the right of the other? This problem is
most likely similar to "What is a Line?". The brain has some good
abstraction going on. Is this
just from the number of neurons, or is something more special going on?
For more information, search for "Bongard problems".
Sound
I have always been interested in sound and have wondered whether this
network could be useful for analyzing sound. Basically, the ear uses
hairs of different lengths that vibrate when they are stimulated.
I was thinking of using some sort of sampling system to emulate this.
It seems like this could be simpler than using an FFT.
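One way the sampling might work, as a sketch only (the tuning values
are placeholders): model each hair as a simple resonator tuned to one
frequency, and feed the audio samples through a bank of them, so each
resonator's output says how strongly "its" hair is vibrating.

    // Sketch: a two-pole resonator tuned to one frequency.  A bank of
    // these, tuned across the audible range, would stand in for the
    // hairs of different lengths; their outputs would feed the network.
    #include <cmath>

    struct Resonator {
        double a1, a2;   // coefficients derived from frequency and decay
        double y1, y2;   // previous two output samples

        Resonator(double freqHz, double sampleRate, double decay)
            : y1(0), y2(0) {
            const double PI = 3.14159265358979323846;
            double w = 2.0 * PI * freqHz / sampleRate;
            a1 = 2.0 * decay * std::cos(w);   // decay just under 1.0
            a2 = -decay * decay;
        }

        double step(double sample) {   // feed one audio sample
            double y = sample + a1 * y1 + a2 * y2;
            y2 = y1;
            y1 = y;
            return y;
        }
    };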
Higher Level Thinking
So when dreaming, is the brain wiring things together? Since we seem to
think visually, are there times when the input stimulus circuitry is turned
off and the brain fires neurons randomly? Is this where the rambling
thoughts come from? What happens when something new is learned, and
a name is attached to many common things? Is this as simple as the fact
that the unnamed neurons were already sort of tied together, and then
a name was attached, or is there something more interesting happening
where a concept somehow links things together?
There are so many more questions, and I don't even have time to write
them all down, but all of this will have to wait for now.