Is that because vehicles controlled by artificial intelligence systems, using a network of onboard sensors and in continuous communication with nearby vehicles and other external entities, are less infallible than humans, or is it because you can’t call such systems to the witness stand?
Liability is certainly one of their concerns.
“People are still perceived as being more flexible, adaptable and creative than machines, and better able to respond to changing or unforeseen conditions”.
That perception is beginning to reverse.
“Pilots are able, therefore, to wrest control from fly-by-wire technology when key computers fail.”
Strictly untrue. A pilot can’t fly an F-16, the Space Shuttle, or the Eagle lunar lander by himself, without computer assistance.
Very interesting essay.
Your “Moving Forward” points are all good, I think; liability and safety are definitely well understood. It would be like a railway system, but more complex, because a) the units are smaller and vastly more numerous, and b) each unit has more variability in its motions and decisions than a train.
Here is what caught my eye, and my response to the essay.
You mentioned “Regular Check on User Competency”.
Who is “User” in the context?
The essay mentions Operators, Manufacturers, and Regulators; are these actors distinct from the User?
Every existing form of transportation automation can already be overridden by a human operator at any time; there is no reason it should be different for automobiles.
Full automation (Level 5) does not mean full control.
With a Bachelor of Engineering and a Master of Systems Engineering, Matthew consults professionally on system safety and risk. He also teaches and writes on these subjects.