
ARTIFICIAL INTELLIGENCE
(Next Level)



INTRODUCTION:

       Artificial intelligence started as a field whose goal was to replicate human-level intelligence in a machine. Now it has become a field that creates "intelligent assistants" for human workers. No one talks about replicating the full capacity of human intelligence anymore. Human-level intelligence is too complex and too little understood to be correctly decomposed into the right sub-pieces at the moment, let alone interfaced. Furthermore, we will never understand how to decompose human-level intelligence until we have had a lot of practice with simpler levels of intelligence.

In this paper I therefore argue for a different approach to creating artificial intelligence:

       •  We must incrementally build up the capabilities of intelligent systems.

       •  At each step we should build complete intelligent systems that we let loose in the real world with real sensing and real action.

       •  When we examine very simple levels of intelligence we find that explicit representations and models of the world simply get in the way; it turns out to be better to use the world as its own model.

THE EVOLUTION OF INTELLIGENCE:


        We already have an existence proof of the possibility of intelligent entities: human beings. Evolution has taken place over the past 4.6 billion years.

       Single-cell entities (3.5 billion years ago) -----> photosynthetic plants (2.5 billion) -----> fish and vertebrates (1 billion) -----> insects (450 million) -----> reptiles (370 million) -----> dinosaurs (330 million) -----> mammals (250 million) -----> primates and apes (120 million) -----> man (2.5 million)

       Man then invented agriculture a mere 19,000 years ago, writing less than 5,000 years ago and "expert" knowledge only over the last few hundred years. This suggests that problem-solving behavior, language, knowledge and its application, and reason are all pretty simple once the essence of being and reacting is available. That essence is the ability to move around in a dynamic environment, sensing the surroundings to a degree sufficient to achieve the necessary maintenance of life and reproduction. I believe that mobility, vision and the ability to carry out survival-related tasks in a dynamic environment provide a basis for the development of true intelligence.

REPRESENTATION – THE KEY TO AI:

       It is common to say that when nobody has any good idea of how to solve a particular sort of problem (e.g. playing chess), it is known as an AI problem. The principal mechanism of AI, abstraction, is a self-delusion mechanism. In AI, abstraction is usually used to factor out all aspects of perception and motor skills. Yet these are the hard problems solved by intelligent systems, and the shape of the solutions to these problems greatly constrains the correct solutions to the small pieces of intelligence which remain.

       Early work in AI concentrated on games, geometrical problems, symbolic algebra, theorem proving, and other formal systems. In each case the semantics were fairly simple. The key to success was to represent the state of the world completely and explicitly. For example, when we observe an object or read a book, there are basic, obvious features we remember; these features come to represent the object.
Soon there was a new slogan: "Good representation is the key to AI". The idea was that by representing only the pertinent facts explicitly, the semantics of a world (which on the surface was quite complex) were reduced to a simple closed system once again. Abstraction to only the relevant details thus simplified the problems.

       Consider a chair, for example. The characterization "a chair is an object that you can sit on" is true, but there is much more to the concept of a chair. Chairs have some (possibly flat) sitting place, perhaps with a back support. They have a range of possible sizes, requirements on strength, and a range of possibilities in shape. They often have some sort of covering material, unless they are made of wood, metal or plastic. They are sometimes soft in particular places. They can come in a range of possible styles. In short, the concept of what is a chair is hard to characterize simply. There is certainly no AI vision program which can find arbitrary chairs in arbitrary images; at best such programs can find one particular type of chair in carefully selected images.
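To make this brittleness concrete, here is a minimal sketch in Python of the sort of explicit, feature-based definition such a program might rely on; the feature names and values are entirely hypothetical and chosen only for illustration. Any chair that departs slightly from the listed features falls outside the definition.

```python
# A naive, explicit representation of "chair": a fixed checklist of features.
# All feature names and thresholds are hypothetical; the point is only how
# brittle such closed-world definitions are.

CHAIR_DEFINITION = {
    "has_flat_seat": True,
    "has_back_support": True,
    "leg_count": 4,
    "material": {"wood", "metal", "plastic"},
}

def is_chair(obj: dict) -> bool:
    """Return True only if the object matches every listed feature."""
    return (
        obj.get("has_flat_seat") is True
        and obj.get("has_back_support") is True
        and obj.get("leg_count") == CHAIR_DEFINITION["leg_count"]
        and obj.get("material") in CHAIR_DEFINITION["material"]
    )

# Perfectly ordinary chairs fail the test:
print(is_chair({"has_flat_seat": False, "has_back_support": False,
                "leg_count": 0, "material": "fabric"}))    # False (bean bag)
print(is_chair({"has_flat_seat": True, "has_back_support": True,
                "leg_count": 5, "material": "plastic"}))    # False (swivel base)
```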

       But this abstraction is the essence of intelligence and the hard part of the problem being solved. Under the current scheme the abstraction is done by the researchers, leaving little for the AI programs to do but search through the pre-digested input for the right answer. The only input to most AI programs is a restricted set of simple assertions deduced from the real data by humans. The problems of recognition, spatial understanding, dealing with sensor noise, partial models, and so on are all ignored.

       For example, MYCIN [13] is an expert at diagnosing human bacterial infections, but it really has no model of what a human (or any living creature) is, how one works, or what things can plausibly happen to one. If told that the aorta is ruptured and the patient is losing blood at the rate of a pint every minute, MYCIN will try to find a bacterial cause of the problem.
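To illustrate the point (this is a toy sketch, not MYCIN's actual rule base or inference engine), consider a rule system whose only hypothesis space is bacterial causes: whatever findings it is given, it can only ever answer with a bacterium, because nothing else exists in its model.

```python
# Toy illustration of a diagnosis system whose only hypothesis space is
# bacterial causes. The rules and findings are hypothetical examples.

RULES = [
    # (required findings, concluded bacterial cause)
    ({"fever", "stiff_neck"}, "meningococcus"),
    ({"fever", "cough"}, "pneumococcus"),
]

def diagnose(findings: set) -> str:
    for required, bacterium in RULES:
        if required <= findings:          # all required findings present
            return f"likely cause: {bacterium}"
    return "likely cause: unknown bacterium"   # still a bacterium!

# A ruptured aorta is simply outside the system's vocabulary:
print(diagnose({"ruptured_aorta", "rapid_blood_loss"}))
```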

       Thus, because we still perform all the abstractions for our programs, most of the intelligent work is being done by us. It could be argued that performing this abstraction (perception) for AI programs is merely the normal reductionist use of abstraction common in all good science, but then one cannot really refer to the program as truly intelligent. In the real world there is no clean division between perception (abstraction) and reasoning through representation; the brittleness of current AI systems attests to this fact.

       Now let us consider a different approach. Imagine yourself in the following situation: you are tied to a chair and asked to obtain a book placed on a table out of reach. How would you get the book? What image of the chair comes to mind? A wheelchair? A swivel chair? Or maybe even a remote-controlled chair which you could move towards the table (while positioned in it) to reach the book?

       I have come to realize that when the human mind is given a situation or problem to solve, its thinking tends to broaden (note that our imagination broadens from a simple chair to a high-tech mobile one when we are given a situation rather than specific details, as in the representational method). Imagination and creativity set in implicitly. In the above problem we find that, more than the object (the chair) by itself, greater importance is given to the object's surroundings (the table with a book on it). Hence our thinking is based not only on an isolated object but also on the world around it. A truly intelligent program would do the same: study the situation, perform the abstraction and solve the problem itself, rather than having us feed it prioritized inputs. A higher level of intelligence would therefore be achieved.

       But we can put forth a question: how is an artificially created system capable of learning?
Some insects demonstrate a simple type of learning that has been dubbed "learning by instinct". It is hypothesized that honey bees, for example, are pre-wired to learn how to distinguish certain classes of flowers, and to learn routes to and from a home hive and sources of nectar. Other insects, such as butterflies, have been shown to be able to learn to distinguish flowers, but in an information-limited way. If they are forced to learn about a second sort of flower, they forget what they already knew about the first, in a manner that suggests the total amount of information they know remains constant. Hence we require a network of finite state machines which can perform learning, as an isolated subsystem, at levels comparable to the above example, as sketched below.
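As a rough illustration of such a fixed-capacity learning subsystem (a hypothetical sketch, not a model of real insect behaviour), the machine below can hold exactly one flower pattern; learning a second pattern overwrites the first, so the total amount of stored information stays constant.

```python
# A toy learner with a fixed one-slot memory, loosely modeled on the
# butterfly example: learning a new flower pattern erases the old one.

class FlowerLearner:
    def __init__(self):
        self.known_pattern = None   # the single pattern this machine can hold

    def learn(self, pattern: str) -> None:
        """Store a flower pattern, overwriting whatever was known before."""
        self.known_pattern = pattern

    def recognizes(self, pattern: str) -> bool:
        """React only to the currently stored pattern."""
        return pattern == self.known_pattern


insect = FlowerLearner()
insect.learn("blue, five petals")
print(insect.recognizes("blue, five petals"))   # True

insect.learn("yellow, radial")                  # forced onto a second flower
print(insect.recognizes("blue, five petals"))   # False: the first is forgotten
print(insect.recognizes("yellow, radial"))      # True
```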

 
INCREMENTAL INTELLIGENCE:

       I wish to build systems that co-exist in the world with humans and are seen by those humans as intelligent beings in their own right. If this goal can be met, then the range of applications for such systems will be limited only by our (or their) imagination.
For the moment, then, consider the problem of building a robot as an engineering problem. We will develop an engineering methodology for building such systems.

First, let us consider some of the requirements for our systems:


       * It must cope appropriately with changes in its dynamic environment.

       * It should be robust with respect to its environment; minor changes in the properties of the world should not lead to total collapse of the system.

       * It should be able to maintain multiple goals and, depending on the circumstances it finds itself in, it should be able to adapt.

       * It should do something in the world; it should have some purpose in being.

       
        Now, let us consider some of the valid engineering approaches to achieving these requirements. As in all engineering problems, it is necessary to decompose a complex system into parts, build the parts, and then interface them into a complete system.


DECOMPOSITION BY ACTIVITY:

       
Perhaps the strongest traditional notion of intelligent systems has been that of a central system, with perceptual modules as inputs and action modules as outputs. The perceptual modules deliver a symbolic description of the world, and the action modules take a symbolic description of desired actions and make sure they happen in the world. The central system is then a symbolic information processor (much like a brain to a human). In such a system all the interfaced parts are connected to and dependent on the central system. Failure of the central system can lead to total collapse of the entire system (similar to the collapse of the entire human body due to brain failure). This proves to be a great disadvantage.
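A minimal sketch of this traditional decomposition, with every module a hypothetical placeholder, makes the single path of dependence visible: every decision flows through the central system, so its failure leaves perception and action useless.

```python
# Sketch of the traditional central-system decomposition: perception feeds a
# single symbolic core, which feeds action. All modules are placeholders.

def perceive() -> dict:
    """Perceptual module: deliver a symbolic description of the world."""
    return {"obstacle_ahead": True, "goal_bearing": 0.3}

def central_system(world: dict) -> str:
    """Symbolic core: every decision flows through here."""
    if world["obstacle_ahead"]:
        return "turn-away"
    return "move-towards-goal"

def act(command: str) -> None:
    """Action module: carry out the symbolic command."""
    print("executing:", command)

# If central_system fails, perception and action are both useless:
act(central_system(perceive()))
```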

        An alternative approach is decomposition of the entire system into parts, or layers, and then interfacing them. This procedure implies layer independence: each layer works with respect to its own sensing and action, and the decomposition makes no distinction between peripheral systems, such as vision, and central systems. Rather, it is a slicing up of an intelligent system into activity-producing subsystems. Each activity-producing or behavior-producing subsystem individually connects sensing to action. An activity-producing subsystem is a layer. An activity is a pattern of interactions with the world. The layers must decide when to act for themselves; they are not subroutines to be invoked at the call of some other layer.

       The advantage of this approach is that it gives an incremental path from very simple systems to complex intelligent systems. At each step of the way it is only necessary to build one small piece and interface it to an existing, working, complete intelligence. The idea is to first build a very simple, complete autonomous system and test it in the real world. An example of such a system is a mobile robot which avoids hitting things: it senses objects in its immediate vicinity and moves away from them, halting if it senses something in its path. It is still necessary to build this system by decomposing it into parts. There may well be two independent channels connecting sensing to action (one for initiating motion, and one for emergency halts), so there is no single place where "perception" delivers a representation of the world in the traditional sense, as in the sketch below.
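A minimal sketch of such a first-level system, assuming a hypothetical proximity sensor and motor interface, might look as follows: two independent channels connect sensing to action, one proposing motion and one issuing emergency halts, and neither builds a representation of the world.

```python
# Sketch of a first-level "avoid" system with two independent sense-to-act
# channels and no shared world model. Sensor and motor names are placeholders.

import random
from typing import Optional

def read_proximity() -> float:
    """Stand-in for a proximity sensor; distance to the nearest object in metres."""
    return random.uniform(0.0, 2.0)

def motion_channel(distance: float) -> str:
    """Channel 1: keep moving, veering away when something comes near."""
    return "turn-away" if distance < 0.5 else "move-forward"

def halt_channel(distance: float) -> Optional[str]:
    """Channel 2: independent emergency halt when something is directly in the path."""
    return "halt" if distance < 0.2 else None

def step() -> str:
    distance = read_proximity()
    # Both channels read the sensor directly; neither builds a representation.
    return halt_channel(distance) or motion_channel(distance)

for _ in range(5):
    print(step())
```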

        Next we build an incremental layer of intelligence which operates in parallel to the first system. It is pasted onto the existing debugged system and tested again in the real world. This new layer might directly access the sensors and run a different algorithm on the delivered data. The first-level system continues to run in parallel, unaware of the existence of the second level. The second layer injects commands into the motor-control part of the first layer, directing the robot towards a goal, but independently the first layer still causes the robot to veer away from previously unseen obstacles, as illustrated below.
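Continuing the sketch above (again with purely illustrative names), a second, goal-seeking layer can run in parallel and inject commands into the motor stage, while the unchanged first layer still takes precedence when an obstacle appears.

```python
# Sketch of pasting a second, goal-seeking layer on top of the avoidance layer.
# The first layer is unchanged and unaware of the second; the second merely
# injects suggestions into the motor stage. All names are illustrative.

from typing import Optional

def avoid_layer(distance: float) -> Optional[str]:
    """Layer 1: veer away from nearby obstacles, otherwise offer no command."""
    return "turn-away" if distance < 0.5 else None

def goal_layer(heading_error: float) -> str:
    """Layer 2: steer towards the goal heading."""
    if heading_error > 0:
        return "turn-left"
    if heading_error < 0:
        return "turn-right"
    return "move-forward"

def motor_stage(distance: float, heading_error: float) -> str:
    # When layer 1 has an opinion it wins, so the robot still veers away
    # from unseen obstacles while layer 2 keeps nudging it towards the goal.
    return avoid_layer(distance) or goal_layer(heading_error)

print(motor_stage(distance=1.5, heading_error=0.0))   # layer 2 drives: "move-forward"
print(motor_stage(distance=0.3, heading_error=0.4))   # layer 1 overrides: "turn-away"
```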



