Google and other companies have ironed out most of the wrinkles in computer-driven cars, and have even done successful tests on public highways. But policy questions remain, including: how would a police officer pull over an autonomous vehicle?
Here's a question to ponder: would you feel safer in a computer-driven car than you do in one you drive yourself? A self-driving car could succumb to software errors or poor programming -- but on the other hand, it also couldn't fall asleep at the wheel, get lost, or fail to see another vehicle in its blind spot.
The choice between the two is hypothetical for now, but maybe not for much longer.
Autonomous cars aren't on the roads yet, but the technological hurdles have mostly been cleared: Google, BMW, Toyota, and other companies have been working on prototype vehicles for years, and they've even been tested on public roads (in fact, BMW showed off a new self-driving 5 Series vehicle a few days ago). The questions surrounding driverless cars now aren't so much "Are they safe?" and "How do they work?" as "Would a driverless car need insurance?" and "How would these cars yield right-of-way to each other?"
Those sorts of questions were the focus of a Silicon Valley symposium last week, at which government regulators and technologists tried to sort through some of the legal and policy challenges posed by autonomous cars. Take, for example, the routine traffic stop: how would a police officer pull over a driverless vehicle? Would that even be necessary, if the vehicles were programmed to always obey traffic laws?
Or, another example: surely everyone reading this has bent traffic laws at one point or another. A rolling stop, perhaps, or cruising a bit above the speed limit. How would a computer, programmed to play by the rules, respond to other drivers who don't always do the same? A car that's too "polite" to go with the flow of traffic might put its passenger at a disadvantage.
There are a lot of very human questions that remain to be answered, and the symposium represented only the first baby steps toward answering them. But Google and other companies have already made a lot of progress toward a driverless future. Lots of vehicles already come with driver-assisting sensor systems that help limit human error -- things such as blind-spot cameras and even infrared systems for detecting pedestrians at night. BMW and Volkswagen both plan to offer semi-autonomous cars in the near future -- models that can perform relatively simple tasks, such as passing slower vehicles, on their own.
There's also a growing body of evidence that driverless cars could cut down on automotive injuries and deaths. Google's autonomous driving program, for example, recently completed 200,000 miles of unassisted driving on public highways without an accident. And computers can reduce the human error that causes the great majority of highway injuries and fatalities.
So how far away is a driverless future? It's tough to say when the policy and legal hurdles -- and lingering technical challenges -- will be cleared. The New York Times' John Markoff quotes Sven A. Beiker, the director of Stanford University's Center for Automotive Research, as saying autonomous vehicles might be available "twenty years from now ... maybe on limited roads." But others, including Google's own engineers, think the cars could be made safe for public use sooner than that. (Markoff also mentions that Google is apparently lobbying for state laws that would allow driverless delivery vans within two or three years.)
Readers, what's your take? Would you welcome a future with autonomous vehicles -- or is driving too much fun for you to relinquish control? Let us know how you feel in the comments.
For more tech news, follow us on Twitter @venturenaut. And don't forget to sign up for the weekly BizTech newsletter.