February 17th, 2011
03:00 am - More Days of Miracles & Wonder
So, as expected the IBM computer Watson did vastly better than the two best human players when playing the game show Jeopardy. The ability of computers to parse natural language has gotten very impressive indeed. Of course, I saw another (admittedly less impressive) example of this up close and personal a couple of weeks ago, when I watched my friend athenian_abroad do a voice search on Google on his Android phone. The resemblance to the crew of the Enterprise interacting with the computer on the original 1960s Star Trek was quite impressive.
In any case, I'm guessing that the best that Google or IBM could put forward could now pass an actual Turing test, as well as being able to win at Jeopardy, work together to build objects, and drive a car. Of course, the fascinating part of this is that computers are not conscious, self-aware, or in any way sentient. Instead, we now have the power and the software to process language, vision, and many similar aspects of the world on a computer in ways that are very clearly not as good as a human's, but which are no longer vastly worse – and both hardware and software keep getting better every year. As I've mentioned before, I expect we're only 5 years away from people having regular interactions with computers that they won't always be certain are with computers and not people – limited interactions, like many customer service calls, but full verbal interactions, not just speaking isolated words or phrases and hoping to be understood.
I also find it interesting how many people continue to talk about how computers are vastly less capable than humans, despite having to regularly put new limitations on their definitions of vastly less capable. Computer consciousness remains purely speculative and may prove impossible. However, a computer that almost any human can't tell from another human in almost any circumstances (at least over the phone or in any other sort of mediated interaction) is not merely possible, but will likely be a reality in less than a decade. I have no idea what that will mean other than the fact that (as I've mentioned before) we're going to treat them like people, and that will be a fascinating social and cultural change.
Current Mood: impressed
Passing a Turing test is a fairly vague standard. From what I've heard, there were people who were fooled by the ELIZA program.
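For context, ELIZA's trick was simple keyword pattern matching and substitution, with no understanding involved – which is exactly why it's striking that it fooled anyone. A minimal sketch of that idea (the patterns and canned responses below are invented for illustration, not Weizenbaum's originals):

```python
import random
import re

# ELIZA-style responder: match a keyword pattern in the user's input,
# then reflect the user's own words back inside a canned template.
# These rules are illustrative stand-ins, not the historical script.
RULES = [
    (re.compile(r"\bI need (.*)", re.IGNORECASE),
     ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (re.compile(r"\bI am (.*)", re.IGNORECASE),
     ["How long have you been {0}?", "Why do you think you are {0}?"]),
    (re.compile(r"\bmy (\w+)", re.IGNORECASE),
     ["Tell me more about your {0}.", "Why does your {0} concern you?"]),
]
FALLBACKS = ["Please go on.", "How does that make you feel?"]

def respond(text: str) -> str:
    """Return a canned reply, echoing matched fragments of the input."""
    for pattern, templates in RULES:
        match = pattern.search(text)
        if match:
            return random.choice(templates).format(*match.groups())
    return random.choice(FALLBACKS)
```

The whole "conversation" is a handful of regular expressions plus a fallback list – no model of meaning anywhere – which is the point of the comment above: fooling people is a low bar.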
Date: February 17th, 2011 11:34 am (UTC)
That's part of my point.
Computers remain dumb as hell, but the people who write algorithms are getting cleverer and building off of the effort of more of their predecessors. When we come up with an algorithm that is itself clever, then shit will get crazy, but until then, no recursive AI development process for us.
That said, this is pretty frickin' awesome
The snarky comment is that it goes both ways: based on their limited responses, it's hard to tell whether some service people might actually be computers. ;'/
This really leads into the question of whether we even need "smart", as in generally intelligent, computers. We don't really need automated trucks that get bored and change jobs, or space probes that bum around Facebook. What we do need is systems that are smart in the specific area where they need to be smart, and if they deal with humans, have the smartness to understand complex, possibly ambiguous instructions.
One of the interesting things about the SF game Trinity was that it coined the term "Satisfactory Intelligence". The computer systems weren't self-aware, but were very smart in the areas they needed to be competent in. The computers had artificial "personalities" which, while not self-aware, could seem very lifelike, with inevitable confusion on the part of some users. And of course one could easily plop an SI into a more-or-less human-appearing body if one wanted.
Date: February 17th, 2011 03:25 pm (UTC)
Because I'm an econ dork
The first thing that comes to mind for me is the potential labor difficulties. That's going to be a LOT of people out of a job. And the people that would be out of a job would most likely have to make the jump to jobs that have totally different skill sets.
If there will be jobs to be had at all. Which isn't at all assumed.
Re: Because I'm an econ dork
I'm not so sure about that.
We've gone through numerous phase changes in technology before, ones as drastic as the transition to computers, and we've still had a more-or-less steady job market, or even openings to more jobs. It's debatable whether the transition to expert systems will be as far-reaching as the one from horses to cars, or from subsistence farming to mechanized farming. Compare the percentage of the population involved with farming in the 1700s with the percentage involved now, for example.
Which isn't the best argument to make, since it's arguing from past experience rather than general principles, and one can (and many people do) argue that "this time it's different". Still, one of the reasons it's hard to anticipate what people will do in the future is that it's very difficult to see what the effects of future technology will be. Ask someone in the year 1400 what the effects of mechanized agriculture would be, and they'd have no idea what to say other than to predict the collapse of society.
Date: February 17th, 2011 03:48 pm (UTC)
The only reason I don't agree with you 100% is that historically the innovation itself produced the jobs. People that maintained horses trained to maintain cars, etc.
My big concern (and my apologies if I dork out too much here) centers around limits of consumption and resource availability.
One of the primary purposes of technological advancement has been to reduce the amount of human labor involved in production. In the past, companies have responded to this by increasing production, and rarely by reducing labor, because industry was trying to meet demand.
The problem is that we are beginning to reach hard limits on how much stuff we can make (resource availability) and how much people want and/or can pay for.
That problem also makes "new industry" type innovations (like computers) difficult, since those new industries would have to account not only for people being born but also for all of the people whose jobs are lost to labor reduction. And said "new industry" would have to be able to keep those people busy without using that many resources.
Date: February 17th, 2011 10:07 pm (UTC)
I agree. The future of first world work in 15+ years seems (to me at least) remarkably unclear & I think that actual civilized first world nations (IOW, most of them other than the US) are going to be heading towards a guaranteed minimum income by that time, simply because if you can automate most manufacturing work & most clerical work, it's not at all clear what there will be for most first world citizens to do. The declining populations in most of the first world are definitely a benefit.
Date: February 17th, 2011 10:02 pm (UTC)
*nods* There are some uses for fully sentient AIs, but there are vastly more uses for socially & physically competent SIs.
Date: February 17th, 2011 08:13 pm (UTC)
I also find it interesting how many people continue to talk about how computers are vastly less capable than humans, despite having to regularly put new limitations on their definitions of vastly less capable.
People feel threatened. Afraid that if our reasoning ability can be outdone, we are no longer "special" as a species.
I never put year predictions on things, but it's gonna happen at some point.
Date: February 20th, 2011 08:12 pm (UTC)
In any case, I'm guessing that the best that Google or IBM could put forward could now pass an actual Turing test
Umm, not a chance. It was pretty clear that e.g. Watson didn't really understand the meaning of the writing it processed in any real sense of the word. An obvious indicator of this is that it couldn't properly take advantage of the category names: it thought about individual years on "Name the Decade", offered Toronto in the "US Cities" question, and failed consistently on the "Also on Your Keyboard" category.
Sure, I agree that for many applications, a computer will be just as good as a human, and the average user has no reason to wonder about whether he's talking to a human or a computer. But the Turing test involves a human who's specifically aware that the person he's talking with may be a computer, and who is trying to use all of his smarts to figure out whether that's so. That's a completely different situation from somebody who starts off assuming that he's talking to a human (the ELIZA case) or somebody who doesn't care about whether she is talking to one.
Date: February 20th, 2011 08:31 pm (UTC)
What I mean is that humans are largely fairly easy to fool. Even an obviously non-sentient computer either can, or will soon be able to, fool most people in text-only & possibly voice-only communication, simply because the highest-end software is responsive enough to fool us.