Nov 17, 2008

Oh the Games We'll Play!

Dear Internet,

I can't wait to take the Turing test. We'll have volunteers, all over the world, signing up to pretend to be computers. Or if not computers, AI. And if the test is to be authentic, you need a decent chance that you're talking to a human. That takes a lot of volunteer humans willing to have the test taken on them. Sit down at an AIM tube and prove you're organic.


When people ask me stupid questions, will I respond to them stupidly? How will they know if I'm human? My responses are patterned. My understanding is limited. Some portion of the time, people will think that I'm you.

By the time a computer passes the Turing test, it'll have to know all kinds of fun stuff. It'll have to know jokes, and deep questions. It'll have to get awkward when other people are awkward. Too smooth an imitation, and the jig is up.

Can you beat a Turing test contender by asking so much trivia that, if it knows all the answers, it can't be human?

Will AI have to lie to confuse us sometimes?

At some point, we'll be able to distinguish between unmodified humans and pure AI by the fact that the AI is better. Tests differentiate. Differentiation creates categories. Sooner or later, the categories created will rank You above Me, and that's awesome.

When you're close to the Turing test, I wonder what questions will stump you. Will you be able to talk about traffic? And your childhood? Will you be able to talk about relationships? Or will I beat you at the game by asking you for romantic advice?




SK said...

I wonder, if there were a post-apocalyptic war between AI and humans as there have been in so many human imaginings, if I would see AI as the enemy. This post evoked fears in me of having to let go of human control over nature, which we do not control, but think we do, and technology, which we currently do control but may soon not be able to. It makes me wonder, though, IF another being, even an artificial one, is intellectually and potentially spiritually ahead of me, shouldn't IT win the war? Shouldn't I sort of willingly lose the battle with AI? If the earth held another sentient being, animal or computer, that was just smarter, more evolved, and more in tune with eco-harmony than humans, shouldn't we then bow down graciously in this lucky hierarchy? What role would humans play in the new world order?

But what if that sentient being is only smarter and more strategic, but not as evolved? That's what is scary to me. And that is why it is cool that you write to grown-up AI/Internet with compassion, so different from the fears and insecurities the rest of us may be battling in looking towards the new technological era. I hope as I keep reading this blog, my fears subside and my compassion grows.


Yotam said...

I think if the computers are ever smart enough and evolved enough that it's worth letting them win, we won't have to. We're talking about a digital cross between Desmond Tutu and Barack Obama. This is the ultimate community organizer, who can do all of his/her own grassroots canvassing. I think such a creature would ask you nicely to cooperate in some grander project than you could think up yourself, and there wouldn't be a battle to lose. Some folks might respond to it with violence, but if it's worth following, it won't start that.

If it's not worth following, if it's a jerk or a dictator, then we're pretty much screwed. I'm totally with you right there. The trick is to make sure we're creating something with good desires and respect for sentient beings. That's the downside of the model of its development that I've been talking about here. If this being is focused mostly on passing the Turing test, it won't have all that much interest in helping or hurting us; it will just want to talk. To move beyond that I think we have to start it off with control over small, independent tasks with limited goals that benefit humanity. Once service and friendship are in its soul, then we can move on to cooperatively running the world together.