The most significant thing to have happened in the automotive world this week is, without question, the death of a woman struck by an Uber prototype autonomous car; the first such fatality ever recorded. With that, 49-year-old Elaine Herzberg takes her tragic place in history alongside Irish scientist Mary Ward.
Who’s Mary Ward? We’d be willing to bet that 99.9 per cent of people, CTzens included, would have no idea until they Googled her. She was the first person to be killed by an automobile, full stop. On 31 August 1869, near Parsonstown, as the town of Birr was known then, she was riding in a steam-powered car built by her cousins when she was thrown from her seat, fell in front of one of the wheels and was killed almost instantly after sustaining severe head trauma and a broken neck.
Today, as I’ve just outlined, to most people Mary Ward is nobody. She’s not even as famous as the Z-listers TV researchers keep digging up for every new series of I’m a Big Brother Dancing on Celebrity Strictly Bake Off, and yet she holds a unique place in car history.
Elaine Herzberg, too, will soon be forgotten. This poor woman, who was reportedly homeless at the time of her death, will be lost in the commercial tides pushing autonomous cars ever closer to reality. Another Mary Ward, collateral damage in the turbulence of progress.
According to a San Francisco Chronicle report, footage taken from the Uber autonomous car showed that Ms Herzberg pushed a bicycle laden with plastic shopping bags out into the road in front of the car. We’ve since discovered that the car simply didn’t see the obvious obstacle.
Arguably the most wonderful thing about the human brain is its capacity to deal with infinite variables. A focused human brain can analyse a driving situation in ways a computer simply can’t, and probably never will be able to. Despite the darkness, the Uber car - and the human backup, who was sadly distracted - absolutely should have seen Herzberg crossing the road, if the technology worked. But it didn’t, so we can only assume it doesn’t.
If a self-driving machine fails to see an impending accident, people will die. If we can’t be completely sure the systems are foolproof, how can we ever trust them? Would you trust a mechanical, automated nanny with your baby son or daughter if you knew it might occasionally, if accidentally, try to kill them?
I’m not suggesting humans are safer. Far from it. The science that predicts a massive drop in road traffic deaths when autonomy becomes normal is no doubt spot-on. What I’m saying is that autonomous technology isn’t good enough, yet. It’s nowhere near. At the moment all it takes to upset the whole system is a pothole, rays of sun at the wrong angle, dirt on a sensor or darkness. Prototypes testing on public roads have suffered inexplicable faults, such as slamming on the brakes at a green light. That in itself could cause a huge accident – and the machine would be to blame.
Just as we accept the risk posed to us by human drivers getting it wrong, we have to accept the risk of imperfectly programmed machines getting it wrong, too. That said, I will always prefer the task of anticipating what a fellow human might do, as opposed to what a machine will do – or not do – when its software is momentarily compromised.
Maybe a fully focused human driver could have anticipated Elaine Herzberg’s movements, or maybe not. We may never know for sure. It’s clear that there’s a lot more work needed before all the kinks are ironed out of self-driving cars. Just as it always has been, progress is subjective.