Artificial intelligence, or AI, seems to be the word on many scientists' minds at the moment. The concept is tantalisingly close: close enough that they cannot ignore it, but not close enough to grasp. And so we are left to contemplate it, which is arguably a good thing, because there would be nothing worse than creating it blindly. The question now is whether we should do it, rather than whether we can. The idea of an artificial being intelligent enough to rival us raises many questions of all kinds. What about jobs? What about armies of them? What about religion, ethics and God? But are we getting ahead of ourselves? Is it really such a close reality? Can we really create a being of human intelligence, or are we merely speculating? In this assignment, 'artificial intelligence' refers to an artificially created being that can operate in a manner comparable to a human.

It is effectively a human, just made from machine: it walks, talks and does everything that an ordinary human can do.

The first question that should be asked of advanced AI is whether it is even possible, because if it is not, then there is no use in questions concerning its existence in society. Scientists generally say that within the next twenty years we will be able to build a computer that processes a single piece of information as fast as the brain does. However, this is far from matching the overall effectiveness of the brain. The brain does not process just one stimulus at a time; it processes billions simultaneously, without our even trying, instructing everything in the body on what to do, all at once. A conventional computer, by contrast, processes essentially one instruction at a time.

The second question that needs to be addressed is whether we really know how to make an artificial being. Physically, perhaps: we would most likely make it in our image, so that it is more readily accepted. But more to the point, once we have a being of similar brain function, can we make a robot human mentally? Every human is different, and so, rightfully, we should make every robot that is going to be labelled human different too. How would we then teach it to let its personality grow? How would we condition it? Would we teach it the difference between right and wrong, or only logic? If so, then what would happen when it reached the real world, only to find that the world is not as it was taught, that not everyone is 'a good person', that not everything can be explained by logic? If taught in this way, couldn't the whole world simply take advantage of it? Humans in general are a poor example of how to be; we cheat, lie and worse. This is not how we want to raise intelligent robots, but could one cope with being taught one thing and then discovering otherwise? Or should we simply give them a parent, let nature take its course, and see how things turn out?

The truth is that we are no more ready to deal with intelligent robots than they are to deal with us. If, hypothetically, we did create an artificial being as intelligent as us, could it then be called human? Just because it is intelligent, does that make it human? Or does it have to feel, both emotions and physical pain? The question then is whether it is really feeling pain at all, or merely registering electrical impulses to the brain. Humans feel not only the impulse itself, but also the hurt of knowing that someone wanted to harm them, or that they were foolish enough to harm themselves.

Would a robot simply say 'ouch' and move on, or would it dwell on what had happened, as a human would? What about feelings: love, hate and fear? How can we hope to create these when we do not even understand them in ourselves; we simply feel them. How could we teach a robot to feel, when emotions differ for every person and the possibilities are too vast for us to design? Suppose we once again solve that problem and come to some conclusion. Then what? By many people's values, the robot is still not human.

One of the biggest Christian ethical questions that can be asked is whether or not this being has a soul. Can we create something that has a soul? If not, do we have the right to create it at all: to create something that has nothing to live for, nothing after death? And if we were to make a robot fully human, isn't one of the requirements that it is not immortal, that it can die? If this robot does have to die, and it is without a soul, then we go to heaven while it simply ceases to exist. There is no point to that; we as humans live and die in the knowledge that death is not the end, so what comfort will an artificial being have? If we do create an artificial being, it has nothing to live for but its life itself, and what if that life is made a hell? The biggest ethical problem with creating robots is not whether they will love us but whether we can love them, or whether we will just create them as slaves and treat them as inferior. Can we deal with the fact that they are better than us in some ways? And if we are to treat them as inferior, how do you think they will feel about us?

Consider, for example, the Christian view that God created us: how many people want nothing to do with him? Or will we simply not give our creation free will? If the robots have nothing after death to live for, and nothing in life itself to live for, what is the use of creating them other than for our own gain? Is it right to do something like that? Did God enslave us? No. And so, to begin with, the likelihood of our being able to create a race equal to or better than ourselves is extremely optimistic.

Chances are that eventually we might, but is it worth it? Why spend so much money now, rather than on something closer at hand? And if we ever do create artificial intelligence equal to our own, then for their own sake I would not wish them to live long. They would live in a corrupt world that they did not corrupt, and even if they were good, it would get them nowhere. We need to take a long look at ourselves and decide whether we are fit to create such a being, whether it is our right. Because if we were to do this now, there would be no bigger mistake in history.