If a system is not armed, which I believe should be the rule, and human oversight is maintained at all times, more than 12 years of practical history indicates that not all that much goes wrong. Remove human error from maintenance, add extensive vetting of the software, and "not all that much" becomes much, much less. The odds of getting in a car accident on the way to the grocery store are higher.
In the Tesla event the driver removed himself from active observation, electing to play video games. Something similar has occurred during autonomous flight, where the person tasked with oversight elected to turn off audible alarms and read a book. An aircraft that could have been easily guided to a safe landing was allowed to descend unobserved to an altitude where little time was left to arrange a good landing. A fault certainly occurred in the system, but that fault could have been offset by taking advantage of the aircraft's high glide ratio to permit a better outcome. That could lead into a discussion of why multirotors are lousy for BLOS ops, but that can be saved for another day.
The better autonomous systems are amazingly good. It usually takes a person at some stage to make them go bad. Training is a key element of success.