Monday, May 18, 2009
A complete human behaviorism simulation
A very interesting article I found in Phrack Magazine:
In 1956 John McCarthy defined the term Artificial Intelligence as
"The science and engineering of making intelligent machines".
Obviously, while studying artificial thought processing, we eliminated
the human factor of the neuro-linguistic filters that the brain applies at
the meta-receptor layer. A computer program simply will never tackle a
thought based on impulsive reactions. There is no human behaviorism in AI.
Instead, we substitute this mechanism with an algorithm of computational
processing and information storage indexing. Here resides the very core of
AI science. Many algorithms have evolved and been applied, overcoming
faulty thinking, infinite loops of exclusive-OR decision making,
geographic/territorial mapping in robotics, and so forth.
Fifty-two years later, we intend to rationalize a new approach to this
field, trying to keep away from science fiction.
This paper is fairly introductory to both classic AI and our AI model,
and requires no prerequisites except a bit of curiosity about this field,
so enjoy reading.
I. ...
 a) ...
 b) Central Processing Spirit
II. Character assignment design
 a) Psychological growth
 b) Supervised and unsupervised learning
 c) Reception -> Reactions -> Style
III. Ontology / knowledge engineering
 a) Are we Heuristics?
 c) Sub-symbolic design
 d) Artificial Desire
a) As a primary reflection on Artificial Intelligence, one must find it
vital to proceed according to a precise scientific approach for analysis.
A thought is an idea: a piece of information that the mind recalls.
This piece of information (POI) is processed by the chemo-electronic
factory of the brain, and gets tainted by the MIND. Neuro-linguistic
programming theories organized the brain's functions and how it deals with
information through various filters. We borrow a basic segment and build
upon it a resemblance in machine land:
- Similar thoughts stored in memory (anchoring)
- Virtues and choices (decision making)
- Freedom to undertake risky endeavors (problem solving)
b) To tackle this ordeal from ground zero would be an enormous load for one
article or paper to handle. Therefore we began our study from a point
where we had already advanced in applying to our AI subject the
international common speech utility that is the English language, coupled
with a set of rules we engineer as an aesthetic appeal for this entity.
These rules, as we will explain later on, do not follow EXPERT SYSTEM
decision-making logic, where, as human knowledge changes, the entire module
has to be rebuilt; instead, our model takes its decisions based on
Index Priority, which varies automatically as the AI experiences new events.
In other words, we design a personality for our program. This generic model
is based on dual-channel thought processing, fed by a hierarchy of database
storage devices where each family of thoughts is preserved in its own
design. Some of these databases, or sets of thoughts, will have read-only
access permission, some will have read and write access permission, and
some will have write-only permission, to be used as temporarily allocated
space for acquired thoughts before the entire data structure is sent to
the Central Processing Spirit, as we amusingly named it (CPS).
While the read-only structures hold information that relates to the
fundamentals of the entity's core design, and provide our first line of
defense once the automated process of Self Information Gathering (SIG) is
launched, fed by supplied text excerpts, web information, news, and many
more feeds, the write-only space exists to restrict the AI entity from any
premature use of newly acquired thoughts that are still being processed,
organized, and approved by the CPS (or the programmer).
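
As a rough illustration of this hierarchy, here is a minimal Python sketch.
The class names, permission labels, and approval rule are our own
illustrative assumptions, not part of any finished design:

class ThoughtStore:
    """A family of thoughts with an access-permission mode."""
    def __init__(self, name, mode, initial=None):   # mode: "ro", "rw", "wo"
        self.name, self.mode = name, mode
        self._thoughts = list(initial or [])

    def write(self, thought):
        if self.mode == "ro":
            raise PermissionError(self.name + " is read-only")
        self._thoughts.append(thought)

    def read(self):
        if self.mode == "wo":
            raise PermissionError(self.name + " is write-only")
        return list(self._thoughts)

    def flush(self):
        """Hand over and clear staged thoughts (the approved exit path)."""
        staged, self._thoughts = self._thoughts, []
        return staged


class CentralProcessingSpirit:
    """Approves staged thoughts before they become usable knowledge."""
    def __init__(self):
        self.core = ThoughtStore("core-design", "ro", ["be curious"])
        self.knowledge = ThoughtStore("knowledge", "rw")
        self.staging = ThoughtStore("sig-staging", "wo")  # SIG feed buffer

    def ingest(self, thought):
        self.staging.write(thought)   # write-only: no premature use possible

    def approve_staged(self, judge):
        for thought in self.staging.flush():
            if judge(thought):        # the CPS (or the programmer) approves
                self.knowledge.write(thought)


cps = CentralProcessingSpirit()
cps.ingest("new fact from a web feed")
cps.approve_staged(lambda t: len(t) > 0)
print(cps.knowledge.read())           # ['new fact from a web feed']

The write-only staging store is what keeps SIG-acquired thoughts out of
reach until the CPS signs off on them.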
Thinking about this design will surely summon up issues of time.
How long will this construction take?
A minimum of one human lifetime? But even that is acceptable with the
implementation of second-generation AI replication algorithms, a base model
of AI hybrid reproduction that results in the joining of two AI entities'
CPS and databases.
This dual-model inheritance is not designed to resemble human reproduction
between one male and one female; rather, it has to do with the design of
dual-channel thought processing mentioned previously. We tend to believe
that our model's reproduction has to occur between one couple.
This is a matter of MULTI-AGENT PLANNING and community scheduling,
and is beyond the scope of this paper.
II. Character assignment design
a) Psychological growth.
"It's not how perfect you do something that's important, but how others
A famous question is always there. Could a machine have life, or will it be
dry simulation. Basically, this should not matter at all!
Regardless of the process a human being goes through in developing a unique
character, and the tremendous complexity that subsists in his
neuropsychological growth, it almost only pertains to how others interpret
his reactions and behavior.
With that in mind, we couldn't care less about whether a machine obtains a
real, genuine thought or a lifelike style of its own. We will design a
learning algorithm that will eventually attribute a character to the
machine, depending on how much and how fast it can process the SELECTED
stimuli.
In other terms, given one action, such as having ten children listen to an
adult instruction like "Finish your homework, then watch TV", we could
observe ten different reactions ranging from obedience to defiance. And
that relates to:
- How badly the child wants to watch TV
- How important his homework is
- What is shown on TV
- Whether it is a circumstance-free situation or relates to an earlier
event (he might say no in revenge for not having had ice cream an hour ago)
- and many more situational parameters.
In machine land, our CPS would have built up a database of events in
correlation with the outcomes that happened, indexed according to a
judgmental scale; throughout its uptime, it will select how to react to
life events according to what it has been fed as an Index Key.
Technically speaking, this is very easily programmable with recent database
technologies and languages; see the sketch below. The more parenting and
training our CPS receives, and the more efficient our database selection
design, the better the chances of obtaining a unique character and attitude.
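
To make the idea concrete, here is a hedged sketch of such an event
database; the judgmental-scale scores and reaction names are invented for
illustration:

events = []   # each entry: (event, reaction, outcome score on our scale)

def record(event, reaction, score):
    events.append((event, reaction, score))

def react(event):
    """Pick the reaction whose past outcomes for this event scored best."""
    history = [(r, s) for (e, r, s) in events if e == event]
    if not history:
        return "explore"                 # no experience yet
    by_reaction = {}
    for r, s in history:
        by_reaction.setdefault(r, []).append(s)
    return max(by_reaction, key=lambda r: sum(by_reaction[r]) / len(by_reaction[r]))

record("finish homework, then TV", "obedience", +2)
record("finish homework, then TV", "defiance", -3)
print(react("finish homework, then TV"))   # -> "obedience"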
b) The two paradigms of learning that concern us are:
SUPERVISED and UNSUPERVISED.
First, let us talk about the basics of machine learning.
Classic AI always kept machine learning apart from any state of
consciousness, as it was believed that computer learning is only about
designing algorithms to find statistical regularities or various data
patterns. It then tried to solve problems such as classification,
exclusive-OR decisions, and so forth, with Decision Trees.
Decision Trees are a simple but effective logic design, where a chain of
boolean questions takes you down the leaves as you pick your choices.
In our opinion, and as Neural Network progress has only confirmed, this
design could attain at its best a good speech recognition utility or a
medical assistance program that diagnoses and evaluates cancer subjects.
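
Before moving on, here is a toy version of such a decision tree: a chain of
boolean questions leading down to a leaf. The questions and leaves are
arbitrary examples:

tree = ("is it raining?",
        ("is it cold?", "stay home", "take an umbrella"),  # yes-branch
        "go outside")                                      # no-branch

def decide(node, answers):
    if isinstance(node, str):            # reached a leaf
        return node
    question, yes, no = node
    return decide(yes if answers[question] else no, answers)

print(decide(tree, {"is it raining?": True, "is it cold?": False}))
# -> "take an umbrella"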
In this approach, Supervised Learning is the logic of feeding the AI with
the rules of a world (a classification system) we already created, relying
heavily on the training our machine gets in order to minimize the error
margin (Markov theorem, Bayesian networks).
On the other hand, Unsupervised Learning shifts the interest more toward
decision making than toward classification. It is simply a way to find a
framework suitable for decision-oriented reasoning: this model builds up a
history of results, upon which it bases statistical decision-making
techniques for the future questions at hand.
Clustering is the second type of Unsupervised Learning, and is achieved by
finding similarities in the training data rather than trying to maximize a
utility function; a minimal sketch follows.
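
This sketch groups points by similarity alone, with no utility to maximize;
the one-dimensional data and the distance threshold are arbitrary
illustrations of the idea:

def cluster(points, threshold=2.0):
    clusters = []
    for p in points:
        for c in clusters:
            if abs(p - c[0]) < threshold:   # similar to the cluster's seed
                c.append(p)
                break
        else:
            clusters.append([p])            # start a new cluster
    return clusters

print(cluster([1.0, 1.5, 9.0, 9.4, 5.0]))
# -> [[1.0, 1.5], [9.0, 9.4], [5.0]]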
Now that this is said, let us think about the limitations.
Whether we preprogram the AI to reach a certain purpose, or it acts by
itself as a systematic self-learning module, the LEARNING mechanism should
always follow one standard set of stimuli.
Let's outline this from the point of view of cognitive psychology:
- Memory and recognition. It is a fairly straightforward job for a
  programmer to design a system with pattern matching, measurement of
  distances, and sequential treatment of information.
- Verbal and linguistic evaluation. The AI should have a method to
  distinguish and assess similar information indices. Just like a
  human deja vu, it MUST relate the event to different occurrences
  stored in memory, and decide its reaction based on how significant
  the other instances were at the time (see the sketch after this
  list). This will eventually allow us to hope that this design would
  one day generate new sentences of its own.
Interesting! Imagine the AI analyzing large amounts of data, images, and
sounds on a regular basis, indexing the databases according to what
possible emotions they would generate, until it is finally trained for this
evaluation. It should come as no surprise if such a design could write
poetry!
- Comprehension. As absurd as this might sound, AI has better assimilating
  facilities than humans. The process of comprehension is the most
  complicated phenomenon observed in neuroscience, but from the very simple
  facts we concluded, machinery has the upper hand due to greater
  processing speed and better memory storage than ours. Now, this is not a
  comparison between Man and Machine. We simply mention as a fact that
  AI surpasses the human limitations of comprehension, and it is only
  intuitive that such a design is possible.
- Neurolinguistic science has shown that humans have more difficulty
  understanding negation sentences than simple affirmative ones. Unlike
  humans, computer science treats booleans with the same speed and
  accuracy either way.
Many more drawbacks of the comprehension process simply do not exist in AI.
That also means a great supply of information will be needed. Be it
negations, complexity, or ambiguity of sentences, this AI model is able
to treat them at promising speeds to reach a self-learning stage,
associating actions to the analysis of reactions.
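
As promised in the verbal and linguistic evaluation point above, here is a
sketch of the deja-vu relation: a new event is scored against stored
occurrences, weighted by how significant each was at the time. The
set-overlap measure and the memory format are our assumptions for
illustration:

memory = [
    ({"dog", "barked", "night"}, +1),   # (event words, significance then)
    ({"dog", "bit", "mailman"}, -5),
]

def evaluate(event_words):
    """Weigh the new event against every stored occurrence."""
    verdict = 0.0
    for words, significance in memory:
        overlap = len(event_words & words) / len(event_words | words)
        verdict += overlap * significance
    return verdict

print(evaluate({"dog", "barked", "mailman"}))   # weighs both memories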
c) Perceiving the Universe is a mapping mechanism of Time and Space that
we encounter through our biological senses. This we shall call RECEPTION.
Most important is to apply a successful text mining / pattern matching
module to our AI, which will analyze text as its only sense of the
Universe.
Fifty years of progress in this field have allowed many researchers to
reach a respectable level, to the extent of using some customized AI
programs in forensic psychology and criminology investigations.
The only new idea our model brings to the world of AI is an
EMOTIONAL SIMULATION module.
Emotional simulation is a two-way library of reaction prediction.
Let us go into that.
For each event, a human being displays a certain emotion.
This is vastly exposed in body language, tonality, timeline of the speech,
and verbal choices.
We will only be concerned with expression analysis and timeline display.
Our model was based on a typology study done at the University of Kent,
for a police interview tactics handbook. We used the same algorithm to
detect emotions and display them in return.
(Think of it more as detecting the logic behind emotions.)
The Kent model is a big failure and nonsense, but nevertheless the three
steps of its design set the stage for our Emotional Simulation (but in
reverse).
[Note: some readings about Kent's model could come in handy at this point.]
In short, delivery has about 12 variables of expression: open, closed,
leading. Maximization occurs when police agents try to intimidate the
subject and push him to give out more clues. Our maximization is a
fishing-for-clues technique, applied by asking more key questions.
There is not much to explain about manipulation; in our study, it could
easily be merged with the maximization step.
Hopefully, in a few months (by mid 2008), we could put a complete program
to the test, one that will not only detect the meanings of expressions, but
will also "assume" the emotions behind them, and switch its mode to the
corresponding mood.
This AI will be the first model that could literally switch moods.
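
The speculative sketch below shows the shape such a mood switch could take:
detect a likely emotion behind incoming text, then adopt it as the AI's own
mode. The keyword cues and mood names are invented assumptions, not the
Kent-derived algorithm itself:

EMOTION_CUES = {
    "anger":   {"hate", "furious", "never"},
    "joy":     {"great", "love", "wonderful"},
    "sadness": {"miss", "lost", "alone"},
}

class EmotionalSimulator:
    def __init__(self):
        self.mood = "neutral"

    def detect(self, text):
        """Score each emotion by how many of its cue words appear."""
        words = set(text.lower().split())
        scores = {e: len(words & cues) for e, cues in EMOTION_CUES.items()}
        emotion = max(scores, key=scores.get)
        return emotion if scores[emotion] > 0 else "neutral"

    def respond(self, text):
        self.mood = self.detect(text)   # "assume" the emotion, switch mode
        return "[mood: " + self.mood + "]"

sim = EmotionalSimulator()
print(sim.respond("I hate this and will never do it again"))  # [mood: anger]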
a) Are we heuristics?
For the second part of this paper, we will discuss the core formation of
this AI model: its existence and what constitutes its regulations.
A common pitfall most AI scientists encounter is to base a design on
precise mathematical reactions and decisions. Now let's go over this method.
The evolution of AI happens to go from mimicking human responses to
impressive behavior simulation. The designer usually tries to produce a
perfect replica of human performance.
Now what happens when even the most advanced research in neuroscience
hasn't even scratched the surface of human neuropsychology, most
importantly DECISION MAKING? How would we replicate a phenomenon if we
haven't fully understood what drives it and how it reaches its conclusions?
This is why computer science employs a preprogrammed set of responses,
based on what the designer believes is appropriate for humans, implementing
a classic database structure that can only lead him to a one-way
exclusive-OR decision.
This classic method is flawed. Period.
One better approach is to avoid the concept of imitation and jump right to
what is called a Rule Of Thumb mode, which allows a certain vague margin
for correct answers.
Using a heuristic method to resolve the problem of decision making not
only sets the stage for a more fluid AI, but also shows practicality and a
broader playground for the programmer, in terms of allowing many indexing
modules to play the role of priority selector for each decision that is to
be made.
For example, suppose you have to decide what pizza to order. Your mind's
filters would process an infinite number of POIs before putting the
decision in perspective, and most of the time you normally "feel" it was a
random choice and that you could have survived with others.
But what if you try one kind and find it so delicious that you want to
order it again next time? This is where the priority indexes come into
play. The CPS has the freedom to add, remove, promote, or demote indexes
for each POI or family of POIs, based only on what it "assumes" to be an
acceptable outcome.
This is a tricky turnaround in theory, but for the programmer it is still
the same straightforward job (see the sketch below), and could be the first
gap filler between boring Q/A programs and Hollywood's incredible sci-fi.
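
Here is what that straightforward job might look like for the pizza
example: the CPS promotes or demotes an index after each outcome it
"assumes" acceptable. The numeric priorities and adjustment step are
arbitrary illustrations:

priorities = {"margherita": 1.0, "pepperoni": 1.0, "quattro formaggi": 1.0}

def choose():
    return max(priorities, key=priorities.get)

def feedback(choice, delicious):
    priorities[choice] += 0.5 if delicious else -0.5  # promote or demote

feedback("pepperoni", delicious=True)   # it was so delicious...
print(choose())                         # -> "pepperoni" next time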
c) Sub-symbolic design
In order to architect a sophisticated knowledge design, Newell and Simon
invented the theory of symbolic design, where a set of semantic rules can
be applied to construct a far more complex structure.
I won't say we are restricted to the sub-symbolic design, but in fact, this
theory fits perfectly.
So both sides of sub-symbolic design are used to our advantage and left
unchanged in theory:
- Alternative development
- Hybridism of sub-symbolic and classic symbolic NLP (Natural Language
Processing). Once the learning module is concluded, the sub-symbolic design
will set the stage for the real MULTI-AGENT interference.
The geometrically increasing computing power promotes five factors tending
to radically reduce the role of any species of logic in IT:
1. Since deterministic applications are vanishing, the conventional
algorithm (pattern matching) is no longer the program's backbone.
2. Even when still useful, the conventional algorithm is no longer the
main programming instrument.
3. In AI the symbolic paradigm is steadily replaced by several sub-symbolic
ones, based on fine-grain parallelism.
4. Even when symbols are used, they are stored in and retrieved from huge
and cheap memory, rather than processed through sophisticated reasoning
schemes (case-based reasoning is just a blatant example).
5. The cognitive complexity of new, sophisticated logics is too high for a
designer, whereas cut-and-try is affordable.
This might sound like a great deal of mumbo jumbo at this point,
considering the introductory nature of this paper, but more in-depth
details will come once the publication of this project is official.
d) Artificial Desire
Very little have we to say about Artificial Desire. As we saw in the index
priority and POI set of rules, we might easily trigger a vice-random
decision when it comes to natural desire for things, but at this point we
haven't yet perfected any way to make the AI really desire something; it is
more like having it choose from a similar range of choices based on time
variations, frequency of this choice, or even having it try something for
the first time. Neither we, nor anyone who previously indulged in AI
science, dares to claim giving a machine this attribute.
Nevertheless, having a decent, normal vice-random desire choice-maker
module will simulate human behaviorism to a great extent, and fake it well.
"We could have different choices with each one a probability of success
based upon the past:
- 85% " as explained earlier.
So generally AI choses the first choice but for one time AI wil go for the
second and see what happen. That could be considered as a "desire".
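
A hedged sketch of that vice-random choice maker, reusing the 85% figure
from above; the random split and the success table are illustrative
assumptions:

import random

success = {"choice A": 0.85, "choice B": 0.10}  # probabilities from the past

def desire():
    if random.random() < 0.15:                  # once in a while, "desire"
        return random.choice(list(success))     # something unproven
    return max(success, key=success.get)        # otherwise, the safe bet

print([desire() for _ in range(10)])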
This fake simulation might also apply to the reasoning construct.
Even if applied in conformance with Kant's practical reasoning, we still
have to forge a "moral decision" based on the a-priori set of rules our CPS
has, and have the programmer stand in charge of supplying the AI with this
route.
In a few words, I would like to apologize for the dry style of this paper;
it started as a dissertation thesis and ended up as my future hobby side
project. We might never even come close to a fully artificial conscious
design, but this model introduction surely drew a few interesting
turnarounds that might facilitate future innovations.
I hope reading it was exciting enough for many of you to be interested in
further studies in this wonderful field.