July 26th, 2007
01:12 pm - This is the way the world changes
Here's an excellent short article on evolutionary algorithms for industrial design. I've been following this for the last 5 or 6 years and it finally looks like it's going mainstream, thanks to increasingly fast and powerful computers. On a practical level, the development of flash memory that lasts 30 times longer is excellent news, and I'm equally certain we'll soon have all manner of similar wonders.
One of the more interesting possibilities involves avoiding patent infringement:
Not content with aiming for top results, however, another group of researchers is using EAs to produce designs that dodge patents on rival inventions. Koza took a 1-metre-tall Wi-Fi antenna made by Cisco and attempted to create another that did a better job without infringing Cisco's patent. He used an EA that bred antennas by comparing offspring with how the Cisco patent works and weeding out ones that worked similarly. "Our genetic program engineered around the existing patent and created a novel design that didn't infringe it," says Koza. Not only would this allow a company to save money on licensing fees; the new design was also itself patentable. I'm certain this will have a major effect on patents, and may well render them irrelevant, which is not necessarily a bad thing if the design process is increasingly computerized.
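As an illustration of the patent-dodging idea (this is not Koza's actual system — real antenna design involves electromagnetic simulation — and every function and figure below is a stand-in I invented for the example), here is a toy genetic algorithm in which candidate designs are parameter vectors and the fitness function rewards raw performance while penalizing behavioral similarity to a "patented" reference design:

```python
import random

# Stand-in for how the patented design behaves (purely illustrative).
REFERENCE = [0.8, 0.2, 0.5, 0.9]

def performance(design):
    # Hypothetical performance metric: higher is better.
    return sum(d * w for d, w in zip(design, [1.0, 0.7, 1.2, 0.9]))

def similarity(design, reference):
    # Behavioral similarity: inverse of Euclidean distance.
    dist = sum((d - r) ** 2 for d, r in zip(design, reference)) ** 0.5
    return 1.0 / (1.0 + dist)

def fitness(design):
    # "Weed out ones that worked similarly": penalize closeness
    # to the patented reference while still rewarding performance.
    return performance(design) - 2.0 * similarity(design, REFERENCE)

def evolve(pop_size=50, generations=100, n_params=4, seed=0):
    rng = random.Random(seed)
    pop = [[rng.random() for _ in range(n_params)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]          # keep the fitter half
        children = []
        for _ in range(pop_size - len(survivors)):
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(n_params)          # one-point crossover
            child = a[:cut] + b[cut:]
            i = rng.randrange(n_params)            # point mutation, clamped
            child[i] = min(1.0, max(0.0, child[i] + rng.gauss(0, 0.1)))
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print(best, fitness(best))
```

The interesting part is just the penalty term in `fitness`: the same machinery that normally optimizes for performance alone can be pointed away from an existing design simply by making resemblance to it costly.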
In any case, I'm amused that some engineers are resistant to or worried about EA design. They should be, in the sense that these tools are going to greatly reduce the impact engineers (or any other humans) have on designing the various devices and products we use, and so some of them will be out of a job. However, I'm not terribly worried about this tech being abandoned, since it's too obviously useful and (more importantly) profitable for companies to ignore – for once, the fact that engineers are not the people in charge of most companies may prove a useful thing indeed.
Most people already have no idea how or why the products they use work, or how they are designed; it looks like this will soon be true of everyone, including the engineers involved in their manufacture. I'm certain that for a while engineers will attempt to figure out how and why various EA-created designs work, but without significant intelligence enhancement, I'm guessing that in a decade or two this will prove impossible, and they will simply test to see how well something works and how durable it is.
Our minds are severely limited things, and what excites me most about this technology is that it represents an entirely new way around these limitations by automating the process of trial and error that is ultimately at the basis of all new design. With evolutionary algorithms and 3-d fabricators, I'm guessing we will soon be looking at completely automated production, which is wonderful in that it is a way to cheaply produce high-quality products and avoid the necessity of exploiting workers in various 3rd world nations.
The fun will really start when this technology is routinely applied both to software and chip design, a process that may well allow computers to improve faster than they currently do, which then allows other designs to be created even faster. It does not seem impossible that in a decade or two humans will no longer write programs or design computers, although I'm certain they will still be trouble-shooting them.
In any case, the adoption and spread of this tech will clearly increase the rate of technological change, which fits perfectly with various predictions of accelerating progress. It does indeed look like we are quite literally on the verge of hitting the fastest part of (to use Charles Stross' term) "The Acceleration".
Of course, like many other advances, this calls into question what jobs most people will have – once again, I'm guessing that in many civilized nations (ie not the US), some sort of general citizen's allowance will eventually become common.
Current Mood: jubilant
this calls into question what jobs most people will have
That is actually something I've been thinking a lot about. I'm a teacher, and I definitely see my profession (at least at the level I'm at) becoming obsolete - mostly because the mindset of public education is firmly entrenched in what used to be as opposed to what is and what will be.
Date: July 26th, 2007 08:43 pm (UTC)
mostly because the mindset of public education is firmly entrenched in what used to be as opposed to what is and what will be.
I can see that; too much teaching seems to be about societal conformity training, which is becoming increasingly silly as society changes and fragments. What do you see replacing current teaching?
I’m expecting to see a resurgence of apprenticeship. It’s how I got my start in the computing industry, and even at my current job, we decided it’s easier to hire fresh-out-of-college engineers and have them learn from the senior engineers than to go out and try to find genuine senior engineers (as opposed to the plethora of people who have the job title, but can’t solve simple problems on a whiteboard during an interview).
Date: July 26th, 2007 09:05 pm (UTC)
That makes a great deal of sense for most professions. In part, I'm annoyed with the "practical" focus of so much education. I'd be much happier with public school and undergraduate education being mostly about learning general knowledge and problem-solving, and with grad school and professional training consisting largely of apprenticeships. Of course, the problem with many professions is that if they are changing rapidly in response to changing tech, apprenticeships start becoming difficult.
“Practical” focus? If anything, what I’ve run into has been too abstract and divorced from real-world problem solving. I do wish they’d teach critical thinking earlier than college; I asked a philosophy professor once about how young you could teach it, and he suggested sixth grade would be a good place to start.
I think apprenticeships would be good for a rapidly changing profession, because the apprentice can learn from a master who’s coping with change, and can also provide their own viewpoint to the master based on learning something without years of habits. When I was 16, I had a job in a computer graphics lab and often found myself helping out the computer science grad students.
Date: July 27th, 2007 08:33 am (UTC)
Date: July 27th, 2007 09:39 am (UTC)
It might be, but I also know that a lot of US education (both public and private) is woefully bad and often definitely not focused on teaching critical thinking, so I'd be more likely to accept this if the same conclusions were also drawn in other first world nations.
I don't know because under the current political system (and NCLB) we have a heavy emphasis on teaching children to remember a list of items that a committee has deemed important - the complete antithesis of the type of education people are going to need.
I think in the future there will be a place for basic instruction in a group of age-level peers for younger people, but once learners hit a certain age (maybe middle school) education will have to be individualized based upon the needs and interests of the person. And education is going to be an ongoing, life-long endeavor, probably highly self-directed. Information will be delivered via technology and the hands on practice will be in the real world - collaborating with people engaged in the same task and learning by doing.
How we get there from where we are now - indoctrination, social control, and lists of important facts from Western Civ - is the big question.
Date: July 26th, 2007 09:02 pm (UTC)
Of course, the important difference is that Asimov clearly favored the Earthers, and it certainly looks like reality would strongly favor the Spacers.
Okay, mainly I like the region on a parallel Earth that happens to be where Arizona is here, but I'll take what I can get.
I'm actually betting that parallel world travel is more likely than interstellar travel, especially FTL travel (or any other form of conventional time travel). Of course, whether the physical laws are similar enough to permit human life is a completely different question.
I expect that some aspects of software engineering are going to become “precisely describe constraints to an evolutionary system and sketch out an initial architecture from which it can begin evolving”. (It’s also a good way to explain botches in high-tech RPG scenarios: your equipment encountered an unplanned-for condition and failed to handle it; rolling again qualifies as retraining the thing...)
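A minimal, purely illustrative sketch of that "describe constraints, let the system evolve" workflow: here the constraints are just input/output test cases, the evolved "programs" are tiny arithmetic expression trees, and everything (the target function, the operator set, the mutation scheme) is invented for the example:

```python
import random

# Available primitive operations for evolved expression trees.
OPS = {'+': lambda a, b: a + b,
       '-': lambda a, b: a - b,
       '*': lambda a, b: a * b}

def target(x):
    return x * x + 2 * x + 1   # the behavior the test cases pin down

# The "precisely described constraints": input/output pairs.
CASES = [(x, target(x)) for x in range(-5, 6)]

def rand_tree(rng, depth=3):
    # A tree is 'x', a small integer constant, or (op, left, right).
    if depth == 0 or rng.random() < 0.3:
        return 'x' if rng.random() < 0.7 else rng.randint(-2, 2)
    op = rng.choice(list(OPS))
    return (op, rand_tree(rng, depth - 1), rand_tree(rng, depth - 1))

def run(tree, x):
    if tree == 'x':
        return x
    if isinstance(tree, int):
        return tree
    op, a, b = tree
    return OPS[op](run(a, x), run(b, x))

def error(tree):
    # How badly the evolved program violates the constraints.
    return sum(abs(run(tree, x) - y) for x, y in CASES)

def mutate(rng, tree):
    if rng.random() < 0.2:
        return rand_tree(rng, depth=2)       # replace a whole subtree
    if isinstance(tree, tuple):
        op, a, b = tree
        if rng.random() < 0.5:
            return (op, mutate(rng, a), b)
        return (op, a, mutate(rng, b))
    return tree                              # leaves mutate only by replacement

def evolve(pop_size=200, generations=60, seed=1):
    rng = random.Random(seed)
    pop = [rand_tree(rng) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=error)
        if error(pop[0]) == 0:               # all constraints satisfied
            break
        pop = pop[: pop_size // 2]           # keep the better half
        pop += [mutate(rng, rng.choice(pop)) for _ in range(pop_size // 2)]
    return min(pop, key=error)

best = evolve()
print(best, error(best))
```

The human's job in this picture is exactly what the comment describes: writing `CASES` (the constraints) and choosing the primitives, then letting the system search for a program that satisfies them.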
Date: July 26th, 2007 09:39 pm (UTC)
Seems to me that in the interests of fairness, when companies fire engineers because computer programs can do their jobs more effectively, those companies should pay for complete college-level retraining for every engineer they fire to enter another field, including all living expenses. However, I do not see that happening. Corporations care nothing for people harmed by their "innovations". All they care about is making more money.
I think it would be better to generalize that by taxing corporations enough to pay for retraining costs for people whose professions have been made redundant. That way, there’s no distinction between “I got laid off because I was replaced by a computer” and “my whole company shut down because it’s obsolete”; either way, the human beings involved have a chance to get a fresh start.
Date: July 27th, 2007 08:49 am (UTC)
On the whole society benefits as it gets cheaper to produce the programs, though. (Assuming uncontrollable AI produced by EAs won't destroy us all, as they are likely to do if somebody actually produces a real AI using those techniques, but that's a separate issue.) Should the inventors of the steam engine have been punished for removing the jobs for a lot of workers? Innovation should rather be encouraged.
In the long run, I hope that machines take over enough jobs that there actually isn't much left for humans to do anymore, forcing governments to implement a guaranteed minimum income good enough to live by. Marshall Brain talks about such a possibility extensively in his Robotic Nation, a plan which seems quite promising. (Either that, or molecular manufacturing renders money redundant - however, molecular nanotechnology again subjects us to those risks of hard AI.)
Date: July 27th, 2007 09:06 am (UTC)
(Assuming uncontrollable AI produced by EAs won't destroy us all, as they are likely to do if somebody actually produces a real AI using those techniques, but that's a separate issue.)
Why would such an AI be likely to destroy us?
In the long run, I hope that machines take over enough jobs that there actually isn't much left for humans to do anymore, forcing governments to implement a guaranteed minimum income good enough to live by.
I completely and totally agree.
Date: July 27th, 2007 11:45 am (UTC)
Why would such an AI be likely to destroy us?
Because we would have only limited control of its goal systems - we might be able to create an AI that behaves as we want it to in a certain environment, but without an understanding of how it worked, we'd have no clue how it would behave when let free.
To quote AI Risk:
Evolutionary programming (EP) is stochastic, and does not precisely preserve the optimization target in the generated code; EP gives you code that does what you ask, most of the time, under the tested circumstances, but the code may also do something else on the side. EP is a powerful, still maturing technique that is intrinsically unsuited to the demands of Friendly AI.
Date: July 27th, 2007 06:30 pm (UTC)
Short of creating an exact digital copy of a human consciousness, I don't see any other method available. I think AI is possible, but given how little we know about intelligence, it seems far more likely to me that we will produce it either accidentally or (far more likely) through some sort of evolutionary mechanism. I can see ways in which this might be done, but I'm fairly certain that neither I nor anyone else on the planet has the slightest clue about how to deliberately program an AI. If we develop significant intelligence augmentation first, this fact might change. However, given that much of the process of socializing a human is essentially guesswork, I don't see this being any different with AI.
So, does this mean that you think that any AI we create is likely to be destructive or do you see some other possible path to AI design?
From my PoV, I see us as almost certain to be able to create an AI through EP well before we could create one deliberately, and it seems very likely that someone will do so. Ultimately, I see it as much the same process of trust as creating a human. The useful difference being that it's possible to keep an AI on a secure server with very limited hardware links to the outside, preventing it from spreading to any other system.
Date: July 27th, 2007 10:07 pm (UTC)
Well, hanging around AI- and Singularity-related mailing lists, there are a bunch of people who claim to have a working design for AI planned, but typically lack the funds to implement it. Novamente is one - I think they claimed to be something like five years away from implementation, if only they'd get the funding for a team that could work full-time on it. As it is, they're trying to squeeze out an operating profit by creating narrow-AI implementations of it. Are they actually anywhere near true AI? Beats me. Their head guy seems quite smart and sane, and they're confident about themselves, but so were all those researchers claiming they were very close to AI 20 years back. I don't really have any clue of how to evaluate those groups properly, but I can only hope that they're on the right track (especially now that Novamente has partnered with the Singularity Institute, who actually understand all the dangers involved with AI). Because I don't really see us as having many other options for survival. I agree that there's a very high chance that EP will get there first... so I try to donate a good amount to SIAI and hope that they'll come up with something good.
Date: July 27th, 2007 06:24 pm (UTC)
What forces governments to act is gross social unrest, riots in the streets, and politicians getting firebombed. Big corporations work hand in glove with governments to oppress the working class.
What makes you think that most corporations won't just keep their fat profits instead of passing savings on to people who buy their products?
Date: July 27th, 2007 06:54 pm (UTC)
If unemployment becomes sufficiently high, preventing social unrest will necessitate some form of guaranteed income. In addition, the entire (highly successful) engine of consumer capitalism relies upon consumers having sufficient income to purchase the goods produced, which will also encourage such a system in order to avoid economic problems.
Also, it's worth considering the fact that the lessons of the US economy and government are not particularly applicable to any other first world nation. The sad fact is that compared to the nations of Western Europe, Australia, Canada, or Japan, the US government cares far less about the welfare of its citizens, and US citizens have far fewer economic protections. In short, some sort of system like this is essentially inevitable in Britain, Sweden, Japan, and many other first world nations, but considerably less so in the US. This nation sucks as a place to live unless you are rich, but most of the rest of the first world does not.
Date: July 27th, 2007 08:52 pm (UTC)
Until the US government falls or changes, policies here do impact other nations.
Given the US record of "helping people," a guaranteed minimum income is likely to be nowhere near sufficient for a decent lifestyle.
To me, it all goes back to corporations having rights as people. Corporations should have no such rights -- no right to give money to political candidates or to seek to influence elections, etc.
Date: July 27th, 2007 08:56 pm (UTC)
The US is having less and less impact on the EU, as the Euro gets stronger and the EU gets increasingly annoyed with US policy.
How do you explain the fact that most of the nations of Western Europe treat their citizens vastly better than the US does?
Date: July 27th, 2007 09:02 pm (UTC)
I suspect in the EU corporations have far less power to control the government and turn the populace into serfs. But I do not know this for a fact as I know nothing about EU politics or law.
Date: July 27th, 2007 09:54 pm (UTC)
It might get trickier in the US, but like Heron said below, corporations have less power around here. Not to mention that giving people extra money that they can spend, rather than having them go out rioting, actually helps the economy, and thus the corporations. (Admittedly some of that money must probably be taxed out of them, but...)
Date: July 27th, 2007 08:37 am (UTC)
The prospect of using evolutionary algorithms to design computer software sounds more worrying than exciting to me... at least when they start applying them to the creation of a real artificial intelligence.
Date: July 27th, 2007 09:22 am (UTC)
I'm betting it will be easier to produce something that can mimic various useful aspects of intelligence than something actually intelligent. I doubt speech recognition software is likely to become intelligent, but I can see EA being used to develop exceedingly good speech recognition.
Also, the only sensible way to develop evolving software (since at minimum, it could easily become destructive or at least entirely take over all available processors) is to create it on a system with no connections to any other computer. So, even if you get something as unlikely as a malicious AI, you merely need to pull the plug to get rid of it.
Also, we have some (admittedly vague) clue about the minimum complexity needed for intelligence, and unless we are wrong by several orders of magnitude, if you want to avoid accidental AI, all you need to do is develop the software on computers insufficiently complex to house an AI. This has the added bonus of also making certain that the software will run easily and rapidly on many systems. Since much of this software will be developed for use on a high end PC, the programs will presumably be developed on PCs, and until high end PCs have the speed and complexity of a human neural network, there's no risk. Also, by the time that happens, I'm hoping we have sufficient intelligence amplification available that AI is not a particular worry.
One of the more exciting ideas I found was using EAs to tune the electrodes in a cochlear implant. With luck, others will have similar success with other implants, and we'll have the first mental prosthetics and amplifiers in a decade or two. I'm guessing that this will happen before AI is developed, and so I don't worry much about dangerous AIs.
Speaking of AI, have you seen this article? I'm a bit dubious, since I have no idea how you would test whether you have successfully duplicated a mouse brain, but it's both interesting and hopeful. Applying Moore's Law to the figures they mentioned gives 20 years until we have computers that can do a real-time simulation of a human brain. I'm guessing an AI won't need quite that much power, but that it will also take some time to do, so AI before 2030 seems quite possible, and perhaps even by 2025. The acceleration is definitely in full swing.
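For what it's worth, the back-of-the-envelope arithmetic behind a 20-year figure looks something like this (the neuron counts and doubling time below are illustrative assumptions on my part, not the article's exact numbers):

```python
import math

# Rough Moore's Law extrapolation from a mouse-scale simulation
# to a real-time human-scale one. All figures are assumptions
# chosen for illustration, not measurements from the article.

mouse_neurons = 8e6    # assumed size of the simulated mouse model
human_neurons = 1e11   # rough order-of-magnitude human neuron count

compute_gap = human_neurons / mouse_neurons   # extra compute needed
doublings = math.log2(compute_gap)            # Moore's Law doublings
years = doublings * 1.5                       # ~18 months per doubling

print(f"need ~{compute_gap:,.0f}x more compute: ~{years:.0f} years")
```

With those assumed inputs the gap is about 12,500x, or roughly 13-14 doublings, which at 18 months apiece lands close to 20 years; change the assumptions (say, if the mouse simulation ran slower than real time) and the estimate stretches accordingly.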
Date: July 27th, 2007 12:00 pm (UTC)
Yes, I agree that it's (probably) unlikely that EA will create a real artificial intelligence by accident. I was more worried about people who would try to use it to build an AI on purpose, with the aim of having it carry out their goals. Such behavior would be exceedingly reckless... but I'd be surprised if people, unaware of all the risks involved, didn't try it out anyway.
The idea of using EA for implants boosting cognitive ability is an interesting one, one which I hadn't thought of. It would certainly have the potential to avoid part of the problems currently involved with such projects (like the immense complexity of the brain). However, for as long as we don't replace our brains with electronic components entirely, the biological components are likely to function as a bottleneck for the rest of the system, preventing us from thinking at the speeds that a real AI would be capable of. Technology built to interface with the brain is also subject to strict regulation, leading to rather long development cycles, as any major changes need to be extensively tested first. Accelerating Future had some interesting discussion about AI vs. IA a while back.
Yeah, I saw the mouse brain article a while back (though thanks for the reminder - I just updated my "AI in our lifetime" article with a mention about it, now that it was brought up again). It does indeed seem likely that AI will be here, soon enough.
Date: July 27th, 2007 06:45 pm (UTC)
Accelerating Future had some interesting discussion about AI vs. IA a while back.
I tend to agree with the first comment to that post. We are blessed with nervous systems that are impressively plastic. Provide a potentially useful input and our brains will create some way of using it, even if the input is as unlikely as a belt of vibrating cellphones. Combine that with EA-based implant design and I think impressive results will occur. The best part is that the initial research will be (and in fact already is) both perfectly legal and well-funded, because it is being used to provide mental prosthetics for people who have suffered serious brain injuries or sensory loss - already a thriving field. The next (and fairly easy) step is for healthy people to get slightly upgraded versions of these same implants. Within 20 years, and likely less, I'm betting memory augmentation, direct neural interfaces to the internet, and improved senses will all exist. Combine that with various mental performance-enhancing drugs which are already being developed and things look fairly hopeful. I'm certain that AI will eventually outstrip IA, but eventually might well be a decade or more after AI is developed.