
Formally, Gell-Mann and Lloyd define the Algorithmic Information Content (AIC) as “… the length of the shortest program that will cause a given universal computer U to print out the string and then halt” (Gell-Mann & Lloyd, 2004, p. 388). Note that Kolmogorov (1965) and Chaitin (1966) had also proposed AIC as a measure of complexity. Strictly speaking, however, AIC is not itself computable; it serves only as a theoretical tool to support the reasoning.
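Because AIC cannot be computed exactly, illustrations typically fall back on computable proxies. The Python sketch below (an illustration, not part of Gell-Mann and Lloyd's argument) uses the length of a zlib-compressed string as a crude upper bound on its AIC: a highly regular string admits a short description, while data with no exploitable pattern does not.

```python
import random
import zlib

def description_length(data: bytes) -> int:
    """Length of the zlib-compressed data: a crude, computable upper bound
    on its algorithmic information content. The true AIC is uncomputable,
    so this serves only as an illustrative proxy."""
    return len(zlib.compress(data, 9))

regular = b"01" * 500                                               # a short program suffices: "repeat '01' 500 times"
incompressible = bytes(random.getrandbits(8) for _ in range(1000))  # no pattern for the compressor to exploit

print(description_length(regular))         # far below 1000 bytes
print(description_length(incompressible))  # roughly the raw length, plus overhead
```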
Still according to Gell-Mann and Lloyd, the scientist's intention when proposing a theory is to minimize its complexity, so that the theory remains simple while staying accurate with respect to the data. The scientist's tradeoff is therefore to include as much detail as possible in order to make more precise predictions, at the cost of making the theory more complex.
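One way to make this tradeoff concrete is a toy two-part score in the spirit of minimum description length (an illustration of the idea, not Gell-Mann and Lloyd's formalism): the "theory" is a polynomial, its complexity grows with the number of coefficients, and its accuracy is measured by the residual error on the data. The data-generating process and the error weighting below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 50)
y = 4.0 * (x - 0.5) ** 2 + rng.normal(scale=0.05, size=x.size)  # a simple quadratic "law" plus noise

def total_cost(degree: int, bits_per_coefficient: float = 32.0) -> float:
    """Toy two-part score: cost of stating the theory (its coefficients)
    plus cost of the data given the theory (weighted squared residuals)."""
    coeffs = np.polyfit(x, y, degree)
    residuals = y - np.polyval(coeffs, x)
    theory_cost = bits_per_coefficient * (degree + 1)
    data_cost = float(np.sum(residuals ** 2)) * 1e3   # arbitrary weighting, for illustration only
    return theory_cost + data_cost

for d in range(0, 7):
    # With this weighting the quadratic theory wins: lower degrees are
    # inaccurate, higher degrees pay for coefficients they do not need.
    print(d, round(total_cost(d), 1))
```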
II - Cellular Automata and Artificial Intelligence

“The pioneers of computer science — Alan Turing, John von Neumann, Norbert Wiener and others — were all motivated by the desire to use computers to simulate systems that develop, think, learn and evolve” (Mitchell, p. 209).
Turing (1952) worked with chemical substances called morphogens, which diffuse and react with the medium in which they are located, and used this work to propose the concept of pattern formation. Turing further demonstrated that, under very simple premises, homogeneous systems whose components diffuse at different rates can become heterogeneous after receiving small exogenous shocks or under the influence of irregularities in the structure of neighboring cells. Other small changes in the morphogens, such as alterations in their chemical concentrations, the presence of catalysts interfering with the growth of other cells, or changes in temperature or in the rates of diffusion, may also drive the system toward heterogeneity.
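The mechanism can be pictured with a minimal reaction-diffusion sketch. The code below does not use Turing's original morphogen equations; it is the well-known Gray-Scott system with two substances diffusing at different rates, started from a nearly homogeneous state plus a small local perturbation and tiny random irregularities, in the spirit of Turing's argument. The parameter values are typical textbook choices, not drawn from the paper.

```python
import numpy as np

# 1-D Gray-Scott reaction-diffusion sketch with two "morphogens" u and v.
n, steps = 200, 10000
Du, Dv, F, k = 0.16, 0.08, 0.035, 0.060   # common demonstration parameters

u = np.ones(n)
v = np.zeros(n)
# A small local perturbation of the otherwise homogeneous state...
u[n // 2 - 5 : n // 2 + 5] = 0.50
v[n // 2 - 5 : n // 2 + 5] = 0.25
# ...plus tiny random irregularities.
rng = np.random.default_rng(1)
u += rng.normal(scale=1e-3, size=n)

def laplacian(a: np.ndarray) -> np.ndarray:
    """Discrete Laplacian with periodic boundaries."""
    return np.roll(a, 1) + np.roll(a, -1) - 2 * a

for _ in range(steps):
    uvv = u * v * v
    u += Du * laplacian(u) - uvv + F * (1 - u)
    v += Dv * laplacian(v) + uvv - (F + k) * v

# With these parameters the perturbation typically grows instead of decaying,
# leaving a spatially structured v-profile rather than a uniform one.
print(np.round(v[::10], 2))
```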
Marvin Minsky (1961) reviews the state of the art in artificial intelligence at the time. He focuses on the classes of activities that a generic computer can be programmed to perform and that lead to a higher level of processing, such as learning and problem solving. According to Minsky, the first stage in building artificial intelligence consists of search: in order to solve a problem, a machine can be programmed to test a vast space of possible solutions. For trivial problems this process is effective, but it is highly inefficient. Search heuristics consist in finding techniques that use the incomplete results of the analysis to make the search more efficient; among the methods for this are linking objects to models or prototypes and testing each one to identify the relevant heuristic characteristics. For complex problems, however, Minsky (1961) notes that it may be challenging to divide complex objects into parts and to describe the intricate relationships among them.
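Minsky's contrast between exhaustive and heuristic search can be sketched in a few lines. In the toy example below, both procedures look for a goal cell on an empty grid; the blind breadth-first version examines nearly every state, while a best-first version guided by a simple distance estimate (an illustrative stand-in for the heuristic information Minsky describes) expands only a small fraction of them. The grid, sizes, and names are invented for illustration.

```python
import heapq
from collections import deque

N = 30
START, GOAL = (0, 0), (N - 1, N - 1)

def neighbours(cell):
    x, y = cell
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nx, ny = x + dx, y + dy
        if 0 <= nx < N and 0 <= ny < N:
            yield (nx, ny)

def blind_search() -> int:
    """Breadth-first search: systematic and complete, but uninformed."""
    seen, frontier, expanded = {START}, deque([START]), 0
    while frontier:
        cell = frontier.popleft()
        expanded += 1
        if cell == GOAL:
            return expanded
        for nxt in neighbours(cell):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)

def heuristic_search() -> int:
    """Best-first search guided by the Manhattan distance to the goal."""
    h = lambda c: abs(c[0] - GOAL[0]) + abs(c[1] - GOAL[1])
    seen = {START}
    frontier = [(h(START), 0, START)]   # (estimated total cost, -depth, cell)
    expanded = 0
    while frontier:
        _, neg_depth, cell = heapq.heappop(frontier)
        expanded += 1
        if cell == GOAL:
            return expanded
        g = -neg_depth + 1              # cost of reaching each neighbour
        for nxt in neighbours(cell):
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(frontier, (g + h(nxt), -g, nxt))

print("states expanded by blind search:    ", blind_search())
print("states expanded by heuristic search:", heuristic_search())
```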
The machine could also increase its efficiency in problem solving if it could draw on its own previous experience. Learning heuristics imply that the machine applies methods similar to those that previously worked on similar problems. The implementation of such systems is based on decision models that positively reward past successes. In complex systems, however, it may be hard to tell which decisions were actually relevant to finding the solution.
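A minimal way to picture such a decision model is a running success score per method, with the machine preferring whatever has worked before while occasionally trying alternatives. The method names and success probabilities below are invented, and the example sidesteps the credit-assignment difficulty Minsky points out by assuming the outcome of each attempt is directly observable.

```python
import random

# Invented methods and (hidden) success rates, for illustration only.
methods = {"hill_climbing": 0.0, "pattern_matching": 0.0, "exhaustive": 0.0}
true_success_rate = {"hill_climbing": 0.7, "pattern_matching": 0.5, "exhaustive": 0.9}
counts = {m: 0 for m in methods}

def choose_method(explore: float = 0.1) -> str:
    """Mostly pick the method with the best recorded score; occasionally explore."""
    if random.random() < explore:
        return random.choice(list(methods))
    return max(methods, key=methods.get)

random.seed(0)
for _ in range(500):
    m = choose_method()
    solved = random.random() < true_success_rate[m]   # outcome of applying the method
    counts[m] += 1
    # Reward past successes: keep a running average of observed outcomes.
    methods[m] += (float(solved) - methods[m]) / counts[m]

print({m: round(score, 2) for m, score in methods.items()})
```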
Minsky also mentions the cost of unlearning. The machine builds up its memory from past experience and, if better procedures exist than the ones already experienced, it incurs the cost of correcting its old ‘beliefs’.