The most difficult part of this is that nobody agrees on what consciousness is, or even if it exists. Many people believe that consciousness is illusory, that it is an artifact of other things and isn't real.
My take on it is that you can't prove that anything is real. The only evidence that you have is what you sense, and what you perceive from those senses. This means that what you take as reality is only a model that is formed in your brain based upon your perceptions.
There is an old analogy that talks about the brain-in-a-jar concept: if you were only a brain in a jar, and all of your sensory stimuli were artificially fed to you, how would you know?
My answer is that it wouldn't matter. Those things that are being fed to you would be your reality, and you can choose to either accept or reject it. Accepting it means taking it on faith that there is a reality "out there", and that you can function within its framework. Rejecting it pretty much means a trip to the loony bin.
So to have a framework to hang all of these theories on, we accept that there is a hard reality "out there" that is relatively consistent. This is no small thing, for it means accepting not only the hard physical existence of objects, but also the existence of other entities with a similar ability to discern their own personal realities.
And that is key. There is an acceptance of "self" vs. "other" that is to me the very seat of consciousness. Somewhere along the way, we develop this sense that there is that separation, that there is an outside world and an inside world, there is a sense of "I".
Part of this is the study of semiotics, which is essentially the study of symbols, signs, and icons, and how the brain recognizes them. For me, the essential point is that for an icon or a symbol to represent something, there must be an entity that exists as a target for that representation: there must be a "somebody" that understands that symbol.
It also broadens out into the study of artificial life, with some wonderful pioneering work by the MIT Artificial Insect lab, where "insects" were modeled with simple shelled behaviors, but their actions grew far more complex than what the simple programmed behaviors would predict. This is a lovely thing called emergent behavior, where complexity emerges from simplicity.
Now I don't think that insects as such have a consciousness. They are closer to reaction machines, relatively simple in the scheme of things, with no real centrally located "brain"; rather, they have collections of nodes that handle specific behaviors such as walking (move leg), finding food, and running from light.
This puts them very close to automatons, the subject of automata theory.
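To make the automaton view concrete, here is a toy sketch in Python of an insect as a finite state machine. The state names and transitions are invented for illustration, not taken from any real insect model; the point is only that the next behavior depends on nothing but the current state and the stimulus sensed, with no central "brain" doing any reasoning.

```python
# A toy finite state machine: (current state, stimulus) -> next state.
# States and transitions are hypothetical, purely for illustration.
TRANSITIONS = {
    ("wander", "food"):  "feed",
    ("wander", "light"): "flee",
    ("feed",   "light"): "flee",
    ("flee",   "none"):  "wander",
}

def react(state, stimulus):
    """Look up the next state; by default, keep doing what we're doing."""
    return TRANSITIONS.get((state, stimulus), state)

def run(stimuli, state="wander"):
    """Feed a stream of stimuli through the machine, recording each state."""
    trace = [state]
    for s in stimuli:
        state = react(state, s)
        trace.append(state)
    return trace

print(run(["food", "none", "light", "none"]))
# → ['wander', 'feed', 'feed', 'flee', 'wander']
```

A single machine like this is utterly predictable, which is exactly why it feels like an automaton rather than a mind.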
A single insect is pretty boring, behaviorally speaking, but colonies of insects end up developing structure, with specialties and cooperation that emerge. Taken as a whole, the colony has a form of intelligence, a behavior as a whole unit.
Emergent behavior arises out of the interactions of systems. One of the best examples of this that I have ever seen is The Game of Life (not the Milton Bradley one) developed by John Conway in 1970.
Here is an excellent version.
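If you want to poke at it directly, Conway's rules also fit in a few lines of Python. This is a minimal sketch using a set of live cell coordinates (my own formulation, not any particular published implementation): a live cell with two or three live neighbors survives, a dead cell with exactly three live neighbors is born, and everything else dies.

```python
from collections import Counter

def step(live):
    """Advance one generation; `live` is a set of (x, y) live cells."""
    # Count how many live neighbors each cell on the board has.
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for x, y in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth on exactly 3 neighbors; survival on 2 or 3.
    return {
        cell
        for cell, n in neighbor_counts.items()
        if n == 3 or (n == 2 and cell in live)
    }

# A "glider": five cells whose pattern reappears shifted by (1, 1)
# every four generations, crawling across the grid forever.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
cells = glider
for _ in range(4):
    cells = step(cells)
print(cells == {(x + 1, y + 1) for x, y in glider})
# → True
```

Nothing in those two rules mentions gliders, guns, or oscillators, yet all of them appear; that gap between the rules and the behavior is emergence in miniature.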
So some things that I believe:
1.) Consciousness is a real thing, whether it is an artifact or not.
2.) Consciousness is emergent.
3.) Consciousness is something that can be synthesized.
I don't believe that it is something that will appear on your PC anytime soon. I think that it is something that will emerge from the chaos of a massively distributed system, much more massive than anything that we have today.
Now here's the fun part: I know people in the AI community who look at emergent behavior as a nuisance. Their feeling is that intelligence is something that requires hard design, rather than something that evolves out of chaos.
Doesn't that sound similar to another debate?