Being a biologist, I am not an expert in the field (disclaimer or cop-out =)), but I know that in computer science, most concepts are built by logical deduction from a set of axioms. This got me wondering whether, in the field of artificial intelligence (AI), there is some sort of standard, rigorous definition of intelligence. With such a definition, one could then develop a metric for how intelligent a particular computer system or algorithm implementation is.
If intelligence requires the ability to adapt to new situations, then the trivial but fundamental implementation of artificial intelligence might be the infinite rule set. This is the stupid way to solve the AI problem: if you're building a chess computer, why not program it with the best response for every possible scenario? That set may be infinite, or it may be finite if scenarios eventually become degenerate. Still, this doesn't really solve the AI problem, because an infinite rule set could, for any given instance, take an infinite amount of time to search and apply.
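To make the idea concrete, here is a minimal sketch of such a rule-set "intelligence": a lookup table mapping every enumerated state to a precomputed response. The states and moves are hypothetical placeholders, not a real chess engine.

```python
# A toy "rule set" agent: a table mapping every game state it knows
# about to a precomputed best move. A true infinite rule set would
# enumerate every reachable state; this table holds only three.
RULES = {
    "opening": "e4",
    "opening:e5": "Nf3",
    "opening:c5": "Nf3",
}

def rule_set_agent(state):
    """Play purely by table lookup; fail on any state not enumerated."""
    if state not in RULES:
        # The weakness of the approach: no rule means no ability to adapt.
        raise LookupError(f"no rule for state {state!r}")
    return RULES[state]
```

The table answers instantly for the states it contains, but it has no behavior at all outside them, which is exactly the adaptability problem described above.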
Starting with the infinite rule set as the axiomatic definition of intelligence, the challenge for computer scientists in this field becomes implementing a meaningful subset of that infinite rule set and finding a way to traverse it in a workable amount of time. These are two separate challenges. By limiting the rule set, you limit the options the algorithm can access, and therefore its overall knowledge. Its adaptability will also depend on how fast it can find the right bit of knowledge.
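The two challenges can be sketched separately in code: pruning the rule set down to a meaningful subset, and organizing what remains so the right rule is found quickly (here, by indexing rules by situation rather than scanning a list). All rule names are illustrative.

```python
# A hypothetical full rule set as (situation, action) pairs.
FULL_RULE_SET = [
    ("hungry", "eat"),
    ("tired", "sleep"),
    ("threatened", "flee"),
    ("bored", "explore"),
]

# Challenge 1: limit the rule set to a meaningful subset
# (here we arbitrarily drop a low-priority rule).
subset = [rule for rule in FULL_RULE_SET if rule[0] != "bored"]

# Challenge 2: traverse it quickly -- indexing by situation turns a
# linear scan over all rules into a constant-time dictionary lookup.
index = dict(subset)

def act(situation):
    # Fast lookup; a missing key is the price of limiting the set.
    return index.get(situation, "no rule: cannot adapt")
```

The pruned-out rule shows the trade-off from the paragraph above: a smaller set is faster to search but leaves the agent helpless in situations it no longer covers.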
Creativity may also be considered a subset of, or defining parameter of, intelligence. Empirically, creativity is the impression that one has developed a new idea that is either logically unrelated but important, or reached through a path of logic unlikely to be traversed from standard experience. The first could be implemented with rule-set searching algorithms that have a stochastic component. Akin to Monte Carlo methods for finding the solution to a problem, a stochastic component to rule searching could allow the best answer to be reached by a random jump to a new logic path unrelated to the one originally being pursued.
These are just some thoughts on quantifying intelligence for the sake of developing computer models of intelligent systems. Currently available definitions, such as the Turing Test, are in many ways unsatisfying because they lack the axiomatic rigor found in other areas of computer science. For further reading by someone who has thought a lot more about this issue than I have, look at the work of Marvin Minsky.
Source: Bar conversation at the Club Charles in Baltimore.