Has Analytics Failed in Driverless Cars?

Published on 12 July 16
If by the end of 2015 you had not already been aware of how far driverless cars had come in their development and acceptance on public roads, you will surely have heard about them this year, 2016, because of a couple of accidents on public roads involving driverless cars.

The first accident this year involved a Google self-driving car that had a minor collision with a passing bus when it tried to avoid an obstruction (sandbags) in its path. Although Google’s cars had been in several accidents during past trials, this was the first in which its self-driving car was responsible.
The second, well-publicised incident involved a Tesla Model S operating on Autopilot, in which the driver of the autonomous car was killed. This was the first fatality involving a Tesla car. The car was in autonomous mode and failed to brake when a light-coloured truck turned into its path. Tesla did admit that its car bore partial responsibility for the accident.

The brain that continuously analyses all the sensory data and makes driving decisions based on it fundamentally consists of analytics engines. Those who were sceptical about driverless cars becoming a reality had a field day both times and promptly voiced their opinions about how driverless cars are unsafe, claiming that such accidents proved their point that analytics could never replace a human driver.

The truth about whether analytics has failed seems far from this view; in fact, just the opposite. While it would be hard to claim that such technologies are 100% accident-proof, the statistics show that driverless cars have so far had lower accident rates per million miles driven than conventional cars. Governments in many parts of the world recognise this, and instead of putting an end to driverless technologies they are using feedback and data from the extensive trials done so far (including accident investigation reports) to extend existing regulatory frameworks to cover driverless cars. The UK, for example, has recently proposed modifying its insurance rules so that driverless cars can be insured.
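The per-mile comparison that such statistics rest on is just a normalisation of raw accident counts by exposure. The sketch below illustrates the calculation only; the numbers in it are hypothetical placeholders, not real accident data:

```python
def accidents_per_million_miles(accidents: int, miles_driven: float) -> float:
    """Normalise a raw accident count by exposure (miles driven)."""
    return accidents / (miles_driven / 1_000_000)

# Purely illustrative placeholder figures -- not actual statistics.
driverless_rate = accidents_per_million_miles(accidents=3, miles_driven=2_000_000)
conventional_rate = accidents_per_million_miles(accidents=9, miles_driven=2_000_000)

# A fair comparison uses the normalised rates, not the raw counts,
# since the two fleets drive very different total mileages.
print(driverless_rate, conventional_rate)  # 1.5 4.5
```

The point of the normalisation is that a fleet with more raw accidents can still be the safer one if it has driven many more miles.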

While the driverless car control system as a whole, involving sensors, computers and analytics, is not yet perfect, it is certainly quite acceptable already. In fact, arguments can be made that safety will improve when the driver is rendered completely redundant, and the logic behind this argument is best explained by first understanding the graded classification system used (in the US) for driverless technology:

Level 0: The driver is always completely in control, as is the case with the most basic conventional cars. These cars issue warning lights or sounds (at most) but no subsystem is ever in control at any time.

Level 1: Individual vehicle controls are automated, such as auto brakes.

Level 2: At least two automated control systems function in coordination, such as cruise control and lane-keeping.

Level 3: Under defined conditions, the driver can let the car take control of most functions, but with sufficient time and warning, the driver is asked to regain control of the car.

Level 4: The vehicle drives itself completely on its own from start to finish, and does not require human intervention at any time.
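The grading above can be made concrete as a small lookup structure. This is only an illustrative sketch of the levels as described in this article, not an official implementation; the enum and function names are my own:

```python
from enum import IntEnum

class AutomationLevel(IntEnum):
    """US-style grading of driverless technology, as described above."""
    LEVEL_0 = 0  # driver always in full control; warnings only
    LEVEL_1 = 1  # individual controls automated (e.g. auto brakes)
    LEVEL_2 = 2  # two or more systems coordinate (e.g. cruise control + lane-keeping)
    LEVEL_3 = 3  # car drives under defined conditions; driver must be ready to take over
    LEVEL_4 = 4  # fully self-driving, no human intervention at any time

def driver_must_stay_ready(level: AutomationLevel) -> bool:
    """At every level below 4, a human must remain able to intervene."""
    return level < AutomationLevel.LEVEL_4

print(driver_must_stay_ready(AutomationLevel.LEVEL_3))  # True
print(driver_must_stay_ready(AutomationLevel.LEVEL_4))  # False
```

The boolean captures the article's later safety argument in miniature: only at Level 4 does human readiness stop being a factor.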

According to this classification, a Tesla car probably falls somewhere between Levels 2 and 3, because it certainly does not yet meet the Level 4 definition. In terms of safety, however, while it may seem that Level 1 is the safest, the likelihood is that Level 4 would actually be the safest. The reasoning behind this assertion is that a car at that level would have reached a near-perfect standard of driving, whereas a human could always make an error at some point. In fact, one could argue that Levels 2 and 3 are the ones that can never be completely safe, because of the possibility of human complacency, error or lack of readiness to respond when required.

It therefore seems that striving to reach Level 4 is the natural way forward, and perhaps it is not all that far away, if the positive governmental reactions we are witnessing to driverless cars are anything to go by.


