I was just thinking about the driverless Google Car (actually, I was thinking that Apple's lack of interest in this area is probably a terrible mistake). Then I started to think that it is actually very interesting that the computer 'driver' could be tested within a software sandbox/test-harness environment over thousands of scenarios - all that is required is an extremely accurate computer model of a road (different surfaces, weather and so on).
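As a rough illustration of what I mean (everything here is hypothetical - the class names, the scenario parameters and the pass criterion are all made up, and none of it reflects how Google actually tests anything), a scenario-based test harness for a computer 'driver' might look something like this in Python:

```python
import random
from dataclasses import dataclass

@dataclass
class Scenario:
    """One simulated road situation fed to the driver under test."""
    surface_grip: float   # 0.0 (sheet ice) .. 1.0 (dry tarmac)
    visibility_m: float   # clear sight line in metres
    hazard: str           # "tyre_blowout", "stranded_car" or "none"

class BrakingDriver:
    """Toy policy: brake hard whenever any hazard is reported."""
    def decide(self, s: Scenario) -> str:
        return "brake" if s.hazard != "none" else "cruise"

def survives(action: str, s: Scenario) -> bool:
    """Crude stand-in for the physics model that scores the outcome."""
    if s.hazard == "none":
        return True
    # Braking works if there is enough grip and sight line to stop in time.
    return action == "brake" and s.surface_grip * s.visibility_m > 40

def test_driver(driver, n: int = 10_000) -> float:
    """Exercise the driver over thousands of randomised scenarios."""
    passed = 0
    for _ in range(n):
        s = Scenario(
            surface_grip=random.uniform(0.2, 1.0),
            visibility_m=random.uniform(20.0, 300.0),
            hazard=random.choice(["tyre_blowout", "stranded_car", "none"]),
        )
        passed += survives(driver.decide(s), s)
    return passed / n  # pass rate across the sandbox runs

print(f"pass rate: {test_driver(BrakingDriver()):.1%}")
```

The appeal is obvious: unlike a human learner, the same 'driver' can be replayed through ten thousand blowouts before it ever meets a real one.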
BUT then I started to think about something far more thought-provoking - let's take an example: you are in the Google Car enjoying a coffee on your way to work when suddenly, up ahead, a lorry tyre blows out and the lorry begins to lose control; it swerves, loses balance and tips over, sliding along the tarmac. As Google knows everything, it knows that the occupants of the car driving next to you are a single mother with 2 children in the back. Now, in light of this knowledge, does your Google Car act to save your life by swerving to avoid the lorry (at the calculated risk of the loss of life of the mother and children), or does your own car act for 'the greater good' by putting your single life on the line to improve the chances of survival of the mother and children?
There are many ways a driver can react in an emergency, but a human driver is generally unlikely to react in the most logical or calculated way. I remember a long-distance coach driver telling me that his 'mandate' was absolutely the safety of his passengers - if he were to round a bend and find a car stranded in the middle of the road, however awful it might sound, his training would tell him to drive straight through the car rather than attempt to swerve around it, which could potentially result in a far higher loss of life if the coach were to lose control.
So what 'mandate' does the Google Car have? And who gets to choose how your car will react in an emergency situation? Might there even be a dial on the dashboard so YOU can choose where the car's priorities lie?
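To make the 'mandate' question concrete, here is a minimal, entirely hypothetical sketch of how such a dial could be encoded - a single weight setting how much the car values its occupant relative to everyone else. The manoeuvre names and probabilities are invented for the lorry example above; nothing here reflects how Google's software actually works:

```python
from dataclasses import dataclass

@dataclass
class Manoeuvre:
    name: str
    p_own_death: float      # estimated probability the occupant dies
    p_other_deaths: float   # expected deaths in surrounding vehicles

def choose(manoeuvres, self_weight: float = 1.0) -> Manoeuvre:
    """Pick the manoeuvre with the lowest weighted expected loss of life.

    self_weight is the hypothetical dashboard dial:
      1.0  -> the occupant counts the same as everyone else ('greater good')
      10.0 -> the occupant's life is weighted ten times over others ('save me')
    """
    return min(
        manoeuvres,
        key=lambda m: self_weight * m.p_own_death + m.p_other_deaths,
    )

# The lorry scenario from above, with made-up numbers:
options = [
    Manoeuvre("brake_in_lane", p_own_death=0.6, p_other_deaths=0.0),
    Manoeuvre("swerve_right", p_own_death=0.1, p_other_deaths=0.9),  # towards the family's car
]

print(choose(options, self_weight=1.0).name)   # -> brake_in_lane
print(choose(options, self_weight=10.0).name)  # -> swerve_right
```

Notice that the entire moral argument collapses into that one parameter - which is precisely why the question of who gets to set it matters so much.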
Imagine the first time it happens - a Google Car is involved in an accident and CCTV replay footage shows the Google Car swerving to save its single occupant as it careers into a car containing young children... Hmm, perhaps the Google PR team would have preferred that the single male adult was sacrificed and that the Google Car acted to save the young children? So perhaps you won't get the choice after all...
...and a cynical person might even suppose that the Google Car acting for the greater good by saving the greatest number of human lives would also conveniently fit with Google's business model.
But really, it would be fascinating to know exactly how the Google Car would react in these situations and exactly who has decided 'who gets to die?'.
Sunday, August 31, 2014
Interesting thought exercise; who decides?
Computers never decide; that's a purely human notion, and that's why it's difficult for US to decide how WE should be held to account. We are the creators of this software and of these situations, so who should pay the ultimate price for our eventual mistakes? Computers don't make mistakes; people do, and often. Should computers be held accountable for human errors? Of course not. Computers were never the solutions; they are the accelerators of our experiences. Like a rocket, it will take off or it won't. The extent of the fallout depends on how big the risk is. We take a risk in our vehicles every day that we may not return home with our life or limb. We obtain insurance to offset the extreme costs. So the question really should be: who will insure 'smart' cars?