… with all things unknown held constant, of course.
– – –
August 29th, 2008
In a recent discussion with someone about human rationality and the economic theory of decision making, it occurred to me that perhaps this simple philosophy could lead to AI (adaptable artificial intelligence). The whole postulate rests on the idea of creative, interactive software and hardware (which may or may not be possible).
The theory is based on the principle of rationality. Humans are considered rational because we suppose that we can understand why we interact with our environment, while most other animals cannot. Thus, while we can understand the rationale of a dog well enough to train it, the dog is not a rational being. In this sense, however, we must concede that all rationality rests on one irrational principle. Just as math is built on the fundamental 0/1, yes/no, so too rationality is provided only by the notion of good/bad. From this principle we develop (rather, our brains develop), and we make decisions based on it for the rest of our lives. So too, if we wish to create a system of artificial intelligence, that system should be built the same way.
Here, then, is a brief outline of the process by which I propose this intelligence is built, and can be built artificially.
All environmental interactions (e.g. light, pressure, temperature, humidity, chemical reactions, taste) need to be categorized by two factors. (The factors are named here for illustrative purposes; only the concept is important.)
Factor 1 – Charge: Is this interaction positive or negative? (Preference)
Factor 2 – Degree: To what degree is it charged? (absolute scale) (Influence)
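The two factors above can be collapsed into a single signed number: the sign is the charge and the magnitude is the degree. A minimal sketch in Python, where the class and field names are illustrative assumptions, not part of the idea itself:

```python
from dataclasses import dataclass

@dataclass
class Preference:
    """One interaction category, e.g. a taste or a temperature."""
    name: str
    value: float  # sign = charge (positive/negative), magnitude = degree

    @property
    def charge(self) -> int:
        # Factor 1: is the interaction positive or negative?
        return 1 if self.value >= 0 else -1

    @property
    def degree(self) -> float:
        # Factor 2: to what degree is it charged? (absolute scale)
        return abs(self.value)

tomatoes = Preference("tomatoes", -2.0)  # initial dislike, set by chance
print(tomatoes.charge, tomatoes.degree)  # -1 2.0
```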
Charge is determined initially by chance (e.g. I hate tomatoes, I like potatoes).
Charge is then modified by future interactions (e.g. I hate tomatoes, but then I had them on a potato and liked that, so now I hate tomatoes less, or I like tomatoes, or I dislike potatoes with tomatoes, or I dislike potatoes altogether). How much the degree changes depends on the new interactions: did the interaction occur within a positive or a negative environment? Degree (and consequently charge) can therefore be changed via training with positive and negative reinforcement. Thus, as we make our decisions, the most heavily charged preference wins.
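The modification rule can be sketched as a simple nudge toward the value of the surrounding environment. The learning rate of 0.25 is an arbitrary assumption of this sketch, not anything fixed by the idea:

```python
def reinforce(value: float, environment: float, rate: float = 0.25) -> float:
    """Move a preference part of the way toward the value of the
    environment it was just experienced in, i.e. positive or
    negative reinforcement."""
    return value + rate * (environment - value)

# I hate tomatoes (-2), but I ate them in a potato dish I liked (+2):
tomatoes = reinforce(-2.0, +2.0)
print(tomatoes)  # -1.0: I don't hate tomatoes as much
```

Repeated positive experiences keep pulling the value up, so enough training can flip the charge itself, just as the entry describes.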
Say I have a choice of two meals. Meal one is a vegetable platter, which I like (+2), followed by a fruit dessert, which I also like (+1). Meal two is fish, which I don't like (-2), followed by chocolate cake, which I like very much (+5). I would choose the second meal: the cake is the most heavily charged preference involved, and it pulls the decision its way.

Now, if moon grits, which I've never had before, are included with both meals, I would need to build a preference for moon grits. This new preference would be built on past preferences (textures, tastes, color, smell, temperature) as well as on the environment. Depending on how strongly I feel about all the factors involved in the decision, moon grits are then assigned their degree. Because I dislike fish, I would expect my preference to be lower when the grits are paired with fish than when they are paired with the vegetables, which I enjoy and which therefore add to the environment.

Therefore, as we experience more interactions, decisions become easier, because they are based less on chance and more on history. And so all decisions are made, with initial preferences set by chance and future preferences shaped by environment. It is identifying the initial conditions as random yet malleable that I believe is vital to this idea.
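Read as code, the meal example looks something like the sketch below. Note that a plain sum would tie the two meals at +3, so this sketch uses the "most heavily charged preference wins" rule instead; the `new_preference` function and its 0.5 weight are my own illustrative assumptions for how moon grits might inherit a value from their company:

```python
meal_one = {"vegetable platter": +2.0, "fruit dessert": +1.0}
meal_two = {"fish": -2.0, "chocolate cake": +5.0}

def strongest(meal: dict) -> float:
    # The single most heavily charged preference dominates the choice.
    return max(meal.values(), key=abs)

choice = "meal two" if strongest(meal_two) > strongest(meal_one) else "meal one"
print(choice)  # meal two: the cake's +5 outweighs everything else

def new_preference(chance: float, paired_with: float) -> float:
    # A never-before-seen item (moon grits) starts from a chance value,
    # shaded by what it is paired with; the 0.5 weight is arbitrary.
    return chance + 0.5 * paired_with

# Same chance component, different company:
with_fish = new_preference(0.0, -2.0)        # -1.0
with_vegetables = new_preference(0.0, +2.0)  # +1.0
```

With identical chance, the grits come out lower beside the fish than beside the vegetables, which is exactly the environment effect described above.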