December 3rd, 2010
03:07 am - Musings on Politics and Morals
I've seen a number of recent references to Jonathan Haidt's moral psychology. If you want to go into detail, take a look at this interesting 18-minute TED talk, and here's a short article about these ideas and how they are reflected in political views. In brief, the 5 dimensions of Haidt's moral psychology are:
- Care for others, protecting them from harm. (He also referred to this dimension as Harm.)
- Fairness, justice, treating others equally.
- Loyalty to your group, family, nation. (He also referred to this dimension as Ingroup.)
- Respect for tradition and legitimate authority. (He also referred to this dimension as Authority.)
- Purity, avoiding disgusting things, foods, actions.
The data indicates that liberals value the first two far more than the latter three, while conservatives value all five approximately equally. This certainly makes sense in light of some of the current political hot issues in the US, and I think it is very clearly reflected in the distinction between liberal Negotiated Commitment families & conservative Inherited Obligation families that I mentioned in a post a few years back.
If you're interested, someone also examined this data, which shows that it looks fairly robust.
In any case, here's the test Haidt mentions in his TED talk. I took it, and my answers were unsurprising:
- Well above the liberal average for Care/Harm (which to me includes actively preventing harm coming to people). To me, this is very obviously the most important aspect of morality, especially when applied to politics or law.
- A bit above the liberal average in Fairness.
- A bit below the liberal average in Loyalty (likely because I value loyalty to people I personally care about very highly but feel essentially no loyalty to abstract entities like nations).
- Well below the liberal average for Authority.
- Far below the liberal average for Purity – there's another test on the site that measures "liberal purity", meaning disgust with things like fast food or genetically modified foods, and I also scored exceedingly low on that scale (junk food is occasionally quite yummy :)
My only real disagreement with Haidt is about the value of the different approaches: I agree with this excellent article that liberal values are inherently superior, because they are inclusive rather than exclusive. In any case, this does seem to be a useful way to look at politics and morality.
On a somewhat related note, I have heard some people (I can't find the sites now) advocate a 6th axis – liberty – one reason being that it would help differentiate libertarians from liberals, which is a valid distinction. I'm fairly certain where I would fall on that axis – it's considerably less important to me than the first two (Harm & Fairness). I'm now reminded of my post about automatic cars, and how I think that if they become safe and effective enough, laws prohibiting non-emergency manual car use in populated areas are both likely and a good idea. I'm curious how other people feel about that – have a poll.
If automatic cars could be made safe and reliable enough that they reduced both fatal and non-fatal accidents by at least 75%, should turning off automatic control in a car equipped with it (and driving it manually) be illegal in populated areas?
If no, would your answer change if they instead reduced accidents by 95%?
Other (please explain)
Current Mood: thoughtful
Date: December 3rd, 2010 08:26 pm (UTC)
Re: There should be an option "C"...
Yes, they can. Computer learning theory isn't a mature field yet, but there are algorithms that allow generalization. Especially if, in our hypothetical, the computers in vehicles form a cloud, there will be a lot of computational power available to work on data and evolve specific solutions. The Boston cloud will be good at managing heavy traffic in snow. The San Francisco cloud will be good at hills in the fog, etc.
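The commenter's idea – regional fleets pooling observations so a new car can borrow the fleet's experience – can be illustrated with a toy sketch. This is purely hypothetical: the data, feature choices, and the nearest-neighbor rule are all invented for illustration, not anything actual vehicle systems use.

```python
# Hypothetical sketch: each regional "cloud" pools driving observations,
# and a car generalizes to an unseen condition by matching it against the
# most similar condition the fleet has already handled (1-nearest-neighbor).
# All numbers below are made up for illustration.

def nearest_speed(observations, condition):
    """Return the safe speed recorded for the most similar known condition.

    observations: list of ((visibility_m, grade_pct), speed_kph) pairs
    condition:    (visibility_m, grade_pct) tuple to look up
    """
    def dist(a, b):
        # Squared Euclidean distance between condition vectors.
        return sum((x - y) ** 2 for x, y in zip(a, b))
    best = min(observations, key=lambda obs: dist(obs[0], condition))
    return best[1]

# Each city's fleet contributes what it has seen locally.
sf_cloud = [((50, 18), 25), ((200, 18), 40), ((500, 5), 60)]    # fog + hills
boston_cloud = [((100, 2), 30), ((400, 2), 55), ((30, 2), 15)]  # snow, flat

# A new SF car in moderate fog on a steep hill borrows the fleet's experience:
# (180, 17) is closest to the recorded (200, 18) condition, so it gets 40 kph.
print(nearest_speed(sf_cloud, (180, 17)))  # -> 40
```

The point of the sketch is only that specialization falls out of the data: the San Francisco pool ends up encoding fog-and-hill behavior while the Boston pool encodes flat snow driving, without either needing a hand-written rule for the other's conditions.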
Computers are bad at this compared to humans, and likely will be for quite some time. And "cloud computing" doesn't simplify the problem; in some ways it complicates it.
"Smart cars" are a good 40-50 years off. We could take all that money, dedicate it to building PRT systems in urban centers and equipping them with vehicles that can operate off the PRT grid, and be in the exact same place by then. PRT wouldn't have nearly the AI problems that navigating the existing open infrastructure would have.
We need to change the INFRASTRUCTURE to make it safer. Changing the VEHICLE is focusing on the wrong side of the problem: it is exactly where the problem is "hard." Adapting the infrastructure to make it simple to navigate (by installing "rails", whether they are literal steel ones or magnetic tracks) and providing RoW separation is something we can do TODAY, with the tech we have now.
I personally would prefer more public transit, especially light rail, with little cars as the vehicles that solve the last-mile problem. Get rid of vehicle overcrowding in cities and you reduce a lot of the problems with urban driving. Hell, just get rid of the "the next six cars go through the red because humans are shit at intuitively understanding optimization" problem and you'll eliminate a lot of the snarls.
In this, we are 100% in agreement.
EDIT in reply to your edit:
I don't think driving is necessarily a "dumb and useless" skill. America (both the country and the continent) is sparsely populated, and we are always going to have people living in remote pockets. Roads are cheap infrastructure compared to transit of any kind, and in these pocket cities of <10,000, private vehicle ownership IS the most sensible solution. Oddly enough, for some of these same areas horseback riding is still a "needed" skill as well.
Not everybody wants to live in an urban core. And there are things society needs people to do in these remote places: power lines, pipelines, and telecom circuits need to be maintained; resource extraction will still be needed; and farming still needs huge open spaces sparsely populated. I don't see private vehicle ownership ever being something that can be completely eliminated.
(Disclaimer: my field of research in college was Artificial Intelligence and Networks. I know exactly how good computers are at learning: trust me, they're still 10-20 years away from reaching the intelligence of many domesticated animals in real-time.)
Edited at 2010-12-03 08:32 pm (UTC)