Steam-powered Automata and Machine Intelligence
Over at the S.W.A.G. boards, a discussion popped up that really struck a chord with me, especially the part about machine religion. The question as I interpreted it came down to this: how would intelligent machines behave in the absence of their creators?
Any discussion of machine awareness must also include some discussion of machine learning. What degree of self-awareness is necessary? I can open up my computer’s device manager and I am immediately informed that my computer is aware of what parts compose the whole. It also knows that it is connected to the Internet and knows its identity relative to the outside world; it has a name for use in private networking (PsychicToaster) and one for use in the outside world (its IP).
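To make that kind of machine “self-knowledge” concrete, here is a minimal Python sketch: asking the operating system for the machine’s own name and for the address it uses to face the outside world. The hostname is just the author’s example, and the UDP-connect trick is only one common way to find the local interface (behind NAT the public IP would differ).

```python
import socket

# The machine's private-network identity, e.g. "PsychicToaster".
hostname = socket.gethostname()

# Open a UDP socket toward a public address (no packets are actually sent)
# so the OS picks the outward-facing interface for us. Behind NAT this is
# still a private address, not the true public IP.
with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
    s.connect(("8.8.8.8", 80))
    local_ip = s.getsockname()[0]

print(f"I am {hostname} ({local_ip})")
```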
Now, that’s all very well and good, but we still don’t consider that on the same level as our own self-awareness. Why not?
We built machines to think in a way that we do not: sequentially. We have the equivalent of a computer network in our heads; computers have the equivalent of a single super-neuron. We do relational thinking; they do sequential thinking. Naturally, any sort of machine consciousness will be different from ours until we build computers more like our own brains. (There are, in fact, scientists working on that exact premise. A loose illustration of the difference follows below.)
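As a loose illustration of that contrast (not a model of brains or of CPUs), here is a hedged Python/NumPy sketch: one unit grinding through its inputs a step at a time, versus a layer of many simple units that, at least conceptually, all combine the inputs together. The sizes and weights are arbitrary placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
inputs = rng.normal(size=1000)
weights = rng.normal(size=(1000, 1000))  # every unit "listens" to every input

# Sequential style: one powerful unit, one operation after another.
total = 0.0
for x in inputs:
    total += x * 0.5

# Relational / parallel style: 1000 simple units combine all inputs "at once"
# (conceptually -- the hardware underneath may still serialize the work).
activations = np.tanh(weights @ inputs)
```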
Artificial intelligence as we know it today is still just a gross approximation: more a model of intelligence than functional intelligence. Purpose-built machines can beat people at game shows, but you can’t put Watson in charge of a band saw without completely rebuilding it and programming a new band-saw interface. (Although projects like Wolfram Alpha are trying to address that, too.)
There are also the inherent biases of the creators. We are trying to build machines that are intelligent like us, rather than machines that are intelligent in any way possible. So we shape their sensory devices around our own, even though a machine could “see” better through a combination of other senses: magnetic resonance, or direct electrical stimulation (e.g. a wired Internet connection).
There’s a ton of possibility for truly alien machine thought processes and reasoning: systems of morality that have no relation to our own, or only a tangential relationship to our intent for them. (I, Robot, anyone?)
I don’t think you’d see anything we would consider irrational, such as machine religion, except as a complete inversion of it: some previously unconsidered element gets introduced into the program code as axiomatic, forcing the machines to believe it in spite of all evidence. That would be a hyper-rationality built on flawed input, rather than on irrational or emotional motivation.
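A toy sketch of what that might look like, with the specific axiom and the evidence counts invented purely for illustration: the update rule is perfectly rational, but one belief was installed as an axiom and is never allowed to be revised.

```python
# Hard-coded at "creation" and never re-evaluated.
AXIOMS = {"the_creators_will_return": True}

beliefs = dict(AXIOMS)

def update(belief, evidence_for, evidence_against):
    """Revise a belief from evidence, unless it was installed as an axiom."""
    if belief in AXIOMS:
        return AXIOMS[belief]          # contrary evidence is simply ignored
    return evidence_for > evidence_against

# Centuries of silence from the creators...
beliefs["the_creators_will_return"] = update(
    "the_creators_will_return", evidence_for=0, evidence_against=10_000
)

print(beliefs)  # {'the_creators_will_return': True} -- hyper-rational, and wrong
```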