Musings on the alleged risks of intelligent AIs - Synchronicity swirls and other foolishness


January 2nd, 2016


12:19 am - Musings on the alleged risks of intelligent AIs
Breakthroughs in artificial intelligence have begun making the news, and while anything close to human intelligence (or, for that matter, the intelligence of any vertebrate) is still a ways off, the recent advances are fairly impressive, in ways they haven’t previously been.

It is therefore unsurprising that concern about the dangers of AI is in the news for the first time. I share a few of these concerns – autonomous weapons (which the US Air Force is considering) are, from my PoV, an astoundingly stupid and terrible idea, not because intelligent machines would use them to kill us all, but because a single software glitch can result in lots of dead humans.

However, I’ve always been deeply suspicious of the sort of fear, and occasionally even panic, about human- and superhuman-level AI found on sites like Less Wrong, or described somewhat more sensibly here, and in greater detail, here.

I’ve read counterarguments against the risk of AI by Charles Stross and in this interesting and excellent piece. However, none of them felt like they fully addressed my feeling that the entire debate was silly and pointless. Then, when reading the “Should AI be Open” article linked to above, I had an epiphany – for any of the “AI Risk” arguments about the inherent dangers of superhumanly intelligent AI to make sense, you need to posit a hard-takeoff singularity.

Without that, none of the arguments make sense, because instead of a runaway superintelligence swiftly becoming unknowable and unstoppable, you have a slow and difficult process: teams of humans and one or more human-intelligence AIs working to find ways to increase AI intelligence. Many months, or more likely at least several years, after creating an AI as intelligent as an average human, you have one as intelligent as one of the smartest humans, and then at least a few years after that (if not significantly longer) someone finally learns how to make an AI more intelligent than any human who has ever lived. Given that every other recent technological advance required considerable effort and time, it seems impressively unlikely that AI will prove any different, especially since it’s already proven to be exceedingly difficult. It’s not like a human-level AI is going to have a much better idea about how to make a more intelligent AI than the people who created it. Also, many of the “AI Risk” scenarios require even more than a hard-takeoff singularity: they also require self-replicating nanotechnology of the sort that can swarm over the planet, which breaks a few physical laws and would likely end up being eaten by far older and more determined nanotechnology (i.e. existing microscopic lifeforms). It seems to me that the basis of the fear of AI by intelligent well-educated IT professionals comes down to seeing a sort of AI that is more at home in a grim version of Disney’s The Sorcerer’s Apprentice, rather than anything that anyone has any actual evidence will or even could exist.
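
To make that distinction concrete, here is a minimal toy sketch (Python, with entirely made-up numbers – nothing below is a prediction, just an illustration of how much the takeoff assumption alone changes the timeline). Under diminishing returns each improvement step gets harder and the climb to a 100x-baseline capability takes centuries of simulated time; under compounding returns the same loop blows through that threshold in about a decade.

```python
# Toy model only: capability growth under two assumptions about the returns
# on recursive self-improvement. All numbers are illustrative assumptions.

def years_to_threshold(step, start=1.0, threshold=100.0, max_years=10_000):
    """Count simulated years until capability crosses the threshold."""
    capability, year = start, 0
    while capability < threshold and year < max_years:
        capability = step(capability)
        year += 1
    return year if capability >= threshold else None

def soft(c):
    # "Soft takeoff" assumption: progress gets harder as capability rises
    # (diminishing returns), so each gain takes longer than the last.
    return c + 1.0 / (1.0 + 0.1 * c)

def hard(c):
    # "Hard takeoff" assumption: smarter systems improve themselves
    # proportionally faster (compounding returns), giving exponential growth.
    return c * 1.5

print("soft takeoff:", years_to_threshold(soft), "years to 100x baseline")
print("hard takeoff:", years_to_threshold(hard), "years to 100x baseline")
```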

In any case, I suspect that in less than five years we’ll have software that will not be in any way conscious or intelligent, but which can fool most people into thinking it is, since humans are easy to fool. Eventually – perhaps in 20-50 years – something like true human-level artificial intelligence will exist, but creating it will be a slow and difficult process, as will creating something smarter than it.


Comments:


From: mrteufel
Date: January 2nd, 2016 08:58 am (UTC)
I'm more worried that they'll become sufficiently "intelligent" to serve instead of human workers.
From: heron61
Date: January 2nd, 2016 09:30 am (UTC)
That's already happening, and looks to be getting a lot more common. I expect peak employment to occur within 15 years, and the only remotely humane path forward will involve guaranteed minimum income.
From: drplokta
Date: January 3rd, 2016 12:32 pm (UTC)
Everyone thought employment would decline when machines were able to replace humans for most physical tasks. And it didn't happen, although there were some short-term dislocations. It won't happen with mental tasks either.
From: kalimac
Date: January 3rd, 2016 04:28 pm (UTC)
I didn't die when I jumped off the roof of a one-story building. Therefore I won't die when I jump off a twenty-story building either.
From: heron61
Date: January 3rd, 2016 08:20 pm (UTC)
If most factory and office work is automated, then other than a relatively small number of people working in IT, keeping the automation going, we're left with jobs in science, art, and construction work, and people are working on automating construction work. What the heck do you think most people will do for jobs? It's not like there's any sort of rule that says new jobs must suddenly appear once we've automated others.
From: drplokta
Date: January 3rd, 2016 08:32 pm (UTC)
Consider someone in 1750 asking what most people would do for jobs if we employed 5% of the workforce in agriculture rather than 95%. What would those people do? Now answer for 2015 in the same way, but make sure you use the benefit of 250 years of speculative hindsight.
From: heron61
Date: January 3rd, 2016 09:33 pm (UTC)
Except that we're not talking about 250 or even 50 years in the future, we're talking about 20 years in the future. Some of the new office automation is starting to be used now and most of it will be in place in 10-15 years. So, I'm talking about what most people will be doing for work in 20-30 years.
From: drplokta
Date: January 4th, 2016 06:52 am (UTC)
No, we're talking about the rest of time. If "peak employment occurs within 15 years", that means that employment never again reaches that level. Not in 250 years, not in 2,500 years, not in 2,500,000 years.
From: resonant
Date: January 5th, 2016 04:07 am (UTC)
That's actually a good example of technological disruption causing suffering for a generation. In the late 1700s, the introduction of more efficient ploughs and rotary threshers led to widespread rural unemployment in parts of Europe and the UK. Poverty became widespread, and unemployed labourers migrated to towns. A generation later their descendants got factory jobs, but there was much suffering in the short term.
From: heron61
Date: January 5th, 2016 04:16 am (UTC)
That's an excellent point. Also, while it's possible that most people will find new jobs 30 years from now, I think it's far from guaranteed. At this point, it's clear that everything from low-end service jobs, to even more factory work than is automated now, to more than half of all office work will be automated in a decade or two, and that really doesn't leave many options for everyone who is out of work, or even their children.
From: drplokta
Date: January 5th, 2016 06:25 am (UTC)
And I acknowledged that there may be short-term dislocations. The claim to which I am responding is that employment will never again rise to the same level.
From: ciphergoth
Date: January 2nd, 2016 01:37 pm (UTC)
I read everything I can on this subject, and this is among the best anti-risk arguments I've read, so thank you. I'll try to write a proper response at some point, but in the meantime here are some links you might find interesting. Your argument seems to me to have a lot in common with Robin Hanson's arguments. Hanson's debate on this with Eliezer Yudkowsky was turned into a book with commentary by Kaj Sotala. Eliezer followed that up with his paper, Intelligence Explosion Microeconomics. It's worth noting, though, that even Hanson believes the risk from AI is enough to be worth funding safety research.

From: siderea
Date: January 2nd, 2016 07:42 pm (UTC)
Ah, see, the fact that all these scenarios posit a hard-takeoff singularity is so taken for granted that nobody mentions it, but it's actually where all this concern comes from: it arose out of people taking the hard-takeoff singularity scenario seriously.

I think taking the hard-takeoff singularity scenario (HTSS) seriously is a good thing. I'm not sure how likely it is – I think I think it's more likely than you think it is – but if it happens, the crucial problem is it will be much faster than we will be able to react to it. It's definitely a potential catastrophe that would need to be addressed by foresight.

> Given that every other recent technological advance required considerable effort and time, it seems impressively unlikely that AI will prove any different, especially since it’s already proven to be exceedingly difficult.

I disagree with your reasoning in two places. First, the digital revolution basically happened in my lifetime; the pace of technological innovation is radically accelerating. Predicting the pace of technological innovation from the past is not justified. We don't know the slope of the curve we're on, and we have evidence that it's getting steeper faster.

Second, as the HTSS link you provide mentions, the issue with HTSS is that it involves a recursive process of crude AIs working on less crude AIs. The whole issue with recursive processes is that they're terrifically explosive. There are no other technological developmental processes that I can think of that have the same potential for recursion that AI does. That's what makes AI different, and potentially much, much more dangerous, via a HTSS.

> It seems to me that the basis of the fear of AI by intelligent well-educated IT professionals comes down to seeing a sort of AI that is more at home in a grim version of Disney’s The Sorcerer’s Apprentice, rather than anything that anyone has any actual evidence will or even could exist.

Even poorly educated IT professionals see real-world Sorcerer's Apprentice type problems on the regular. It's something computers are super prone to. What you see as a failure pattern associated only with fantasy, we see as a basic fact of life.
From: autopope
Date: January 2nd, 2016 07:49 pm (UTC)
> I think taking the hard-takeoff singularity scenario (HTSS) seriously is a good thing. I'm not sure how likely it is – I think I think it's more likely than you think it is – but if it happens, the crucial problem is it will be much faster than we will be able to react to it. It's definitely a potential catastrophe that would need to be addressed by foresight.

I think it's actually very unlikely, and when we get close enough to appreciate the minutiae we'll discover reasons why it ain't going to happen (much as Drexler's "grey goo" basically got exploded by thermodynamics a few years ago, AIUI).

This doesn't mean it's not worth studying, though. It's a low probability event with very damaging potential side-effects if it happens, much like three nuclear reactors melting down simultaneously ... oh, look! Putting some effort into contingency planning is therefore sensible. But by the same token, making other plans on the assumption that it's going to happen is also unwise.

As for design patterns, one of the things that makes me deeply suspicious of the whole hard take-off singularity and the mind-uploading rapture of the nerds scenario is that it follows almost exactly the same pattern as pre-millenarian Christian apocalyptic eschatology. If something walks like a religion and talks like a religion, it's probably a religion. And the word for a religion one doesn't give credence to is "superstition" ...
From: siderea
Date: January 2nd, 2016 08:26 pm (UTC)
> As for design patterns, one of the things that makes me deeply suspicious of the whole hard take-off singularity and the mind-uploading rapture of the nerds scenario is that it follows almost exactly the same pattern as pre-millenarian Christian apocalyptic eschatology. If something walks like a religion and talks like a religion, it's probably a religion. And the word for a religion one doesn't give credence to is "superstition" ...

That's illogical. Disbelief in Zeus doesn't prevent one from believing in electricity. We have 4,000 year old eclipse records thanks to Babylonian theocrat astrologers; their religion was false, but their celestial observations accurate. Resemblance to or role in a myth doesn't mean an idea is factually incorrect.

By all means be suspicious, but judge cases by their actual merits.
From: autopope
Date: January 2nd, 2016 08:57 pm (UTC)
> Disbelief in Zeus doesn't prevent one from believing in electricity.

False analogy.

Disbelief in Zeus does preclude belief that Zeus causes lightning; but we have no equivalent of lightning as a phenomenon in search of an explanation for which AI is the only option.
From: heron61
Date: January 2nd, 2016 10:18 pm (UTC)
I see two especially big problems with believing in a HTSS. The first is that it looks to me like a reification of Moore's Law, and of exponential growth in technology in general: treating these as in some way inherent and inevitable physical laws rather than the products of large amounts of hard work which will eventually slow down, since everything that looks like continuous exponential growth has so far turned out to be yet another S curve.

The second ties into my comment about how these scenarios work much better if they are accompanied by the sort of self-replicating nanotechnology that actually looks to be impossible, because without that there's a serious hardware issue. It's pretty clear that AI involves more than just software, and so going from a human-level AI to a superhuman one is going to involve adding hardware, possibly lots of hardware, and almost certainly more hardware with each recursive improvement.

Assuming that there are no nano-assembler vats where the AI can simply grow more specially made components at will, those parts will need to be ordered, fabricated, and installed with each recursion, which makes the whole transition from human-level AI to vastly superhuman AI happening in a few hours, days, or even weeks look pretty darn unlikely.
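
As a back-of-the-envelope illustration of that hardware bottleneck (all of these numbers are guesses made up for the sketch, not measurements), even generous assumptions about how few recursive improvement steps are needed still stretch the transition out over years once each step has to wait on physical hardware:

```python
# Toy sketch with made-up numbers: if each recursive improvement step needs
# new hardware to be ordered, fabricated, and installed, the wall-clock time
# for the whole transition is set by procurement, not by thinking speed.

def transition_years(steps_needed, months_per_step):
    """Total wall-clock years for the given number of recursions."""
    return steps_needed * months_per_step / 12.0

# Guessing 10 recursions from human-level to vastly superhuman, and a
# 6-18 month hardware turnaround per recursion.
for months in (6, 12, 18):
    print(f"{months:2d} months/step -> {transition_years(10, months):.1f} years")
```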
From: steer
Date: January 3rd, 2016 03:42 pm (UTC)
We're already in a world where a software glitch can lead to a lot of dead humans, though -- plane guidance, nuclear power station controls, nuclear missile controls, control systems in vehicles (ordinary vehicles, never mind self-guiding ones). The dangers to human life from autonomous guidance on a drone carrying cruise missiles are surely lower than the dangers we already accept in software (even in a worst case where it deploys the ordnance on its own side indiscriminately).
From: xuenay
Date: January 4th, 2016 04:18 pm (UTC)
I think that a serious problem for this topic is that people are treating hard/soft takeoff as a binary distinction, when it's actually much more of a continuum.

Like, say that we have a soft takeoff for about 20 years during which automation and early AI continues its steady march to take over increasing parts of the economy, and then at some point there's some research breakthrough that allows for a much more advanced AI, which manages to take over the global and totally networked infrastructure that we've conveniently built for it during the last 20 years.

If current trends towards increasingly autonomous, networked systems keep up, it's simply not true that a hard takeoff would require anything like molecular nanotech. We've already seen computer viruses (worms, technically) that managed to infect most of the susceptible networked computers within a matter of hours of initial deployment; theoretical models of worm spread suggest that even those worms underperformed relative to the optimal.
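
(For a sense of how little exotic technology that kind of spread requires: the standard way to model a random-scanning worm is simple logistic growth, which saturates the vulnerable population within hours once it gets going. The sketch below uses made-up parameters -- the host count and scan rate are assumptions for illustration, not figures from any particular incident.)

```python
# Minimal logistic-spread sketch (illustrative parameters only): infections
# grow exponentially at first, then saturate as vulnerable hosts run out.
import math

def infected(t_minutes, total_hosts=350_000, initially=1, rate_per_min=0.1):
    """Closed-form solution of di/dt = rate * i * (1 - i / total_hosts)."""
    a = (total_hosts - initially) / initially
    return total_hosts / (1 + a * math.exp(-rate_per_min * t_minutes))

for hours in (1, 3, 6, 12):
    share = infected(hours * 60) / 350_000
    print(f"after {hours:2d}h: {share:.1%} of vulnerable hosts infected")
```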

The premise about AI risk only being an issue if there's a hard takeoff seems questionable, too. All kinds of oppressive and problematic social structures have built up gradually over an extended time, but that doesn't make them any less pervasive or powerful. Even with a soft takeoff, there could be plenty of incentives for people to keep developing increasingly powerful AI systems while neglecting safety issues, causing there to be an increasing number of agents-with-human-hostile-values in the world, each wielding increasing amounts of power. As we noted in Responses to Catastrophic AGI Risk:

> Current narrow-AI technology includes HFT algorithms, which make trading decisions within fractions of a second, far too fast to keep humans in the loop. HFT seeks to make a very short-term profit, but even traders looking for a longer-term investment benefit from being faster than their competitors. Market prices are also very effective at incorporating various sources of knowledge [135]. As a consequence, a trading algorithm's performance might be improved both by making it faster and by making it more capable of integrating various sources of knowledge. Most advances toward general AGI will likely be quickly taken advantage of in the financial markets, with little opportunity for a human to vet all the decisions. [...]

> Similarly, Wallach [283] discuss the topic of autonomous robotic weaponry and note that the US military is seeking to eventually transition to a state where the human operators of robot weapons are 'on the loop' rather than 'in the loop'. In other words, whereas a human was previously required to explicitly give the order before a robot was allowed to initiate possibly lethal activity, in the future humans are meant to merely supervise the robot's actions and interfere if something goes wrong.

> Human Rights Watch [90] reports on a number of military systems which are becoming increasingly autonomous, with the human oversight for automatic weapons defense systems—designed to detect and shoot down incoming missiles and rockets—already being limited to accepting or overriding the computer's plan of action in a matter of seconds. Although these systems are better described as automatic, carrying out pre-programmed sequences of actions in a structured environment, than autonomous, they are a good demonstration of a situation where rapid decisions are needed and the extent of human oversight is limited. A number of militaries are considering the future use of more autonomous weapons.

> In general, any broad domain involving high stakes, adversarial decision making and a need to act rapidly is likely to become increasingly dominated by autonomous systems. The extent to which the systems will need general intelligence will depend on the domain, but domains such as corporate management, fraud detection and warfare could plausibly make use of all the intelligence they can get. If one's opponents in the domain are also using increasingly autonomous AI/AGI, there will be an arms race where one might have little choice but to give increasing amounts of control [and autonomy] to AI/AGI systems.
From: heron61
Date: January 4th, 2016 09:06 pm (UTC)
All kinds of oppressive and problematic social structures have built up gradually over an extended time, but that doesn't make them any less pervasive or powerful.

I think the issue of humans using AIs (human-level or not) as tools of oppression is an exceptionally valid concern, and as I mentioned above, I think autonomous weapons are a terrible idea, but I'm a lot less worried about any sort of direct threat from AIs that aren't military weapons.
