
Between anti-Musk sentiment, competition in self driving and the proven track record of Lidar, I think we’ll start seeing jurisdictions from Europe to New York and California banning camera-only self-driving beyond Level 3.




Nah, you don't need to ban anything. Just enforce a rule that if a company sells self driving, it also takes full liability for any damage the system causes.

Why is it preferable to wait for people to die and then sue the company instead of banning it in the first place?

People die in car crashes all the time. Self driving can kill a lot of people and still be vastly better than humans.

But who gets the ticket when a self-driving car is at fault?

> who gets the ticket when a self-driving car is at fault?

Whoever was in control. This isn’t some weird legal quagmire anymore; these cars are on the road.


Apparently it IS still a legal conundrum: https://www.motortrend.com/news/who-gets-a-ticket-when-a-way...

And will continue to be until every municipality implements laws about it.


> it IS still a legal conundrum

It’s not a conundrum as much as an implementation detail. We’ve decided to hold Waymo accountable. We’re just ticking the boxes around doing that (none of which involve confusion around Waymo being responsible).


So how many violations before Waymo's driver's license is suspended?

The point of self driving is that the car is in control. Are you going to send the car to car prison?

Personally, I'd argue that if the AI killed someone due to being incompetent (as in, a human in a fit state to drive would not have made this mistake), the punishment should go to the corporation that signed off on the AI passing all relevant tests.

The nature of the punishment does not necessarily follow the same rules as for human incompetence, e.g. if the error occurs due to some surprising combination of circumstances that no reasonable tester would have thought to test. I can't really give an example, because anything I can think of is absolutely something a reasonable tester would have thought to test, but for the sake of talking about it without taking this too seriously: consider a celebrity crossing a road while a large poster of their own face is right behind them.


Let me reiterate my original caution: human drivers are really bad; more than 40,000 people die in car crashes every year! If self-driving cars make mistakes in some cases that humans would not, but overall would cause only 30,000 deaths per year, then I want self driving required. Thus I want liability to reflect not that perfection is required, but that they are better than humans.

Don't get me wrong, perfection should be the long term goal. However I will settle for less than perfection today so long as it is better.

Though "better" is itself hard to figure out - drunk (or otherwise impaired) drivers are a significant factor in car deaths, as is bad weather, where self driving currently doesn't operate at all. Statistics need to show that self-driving cars are better than non-impaired drivers in all situations where humans drive before they can claim to be better. (I know some data is collected, but so far I haven't seen any independent analysis. The potentially biased analysis looks good, though - but again, it is missing all weather conditions.)
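
A minimal back-of-the-envelope sketch of the "better than humans, in comparable conditions" test the comments above describe. Every figure here is an illustrative assumption (total miles, the share of deaths and miles in comparable conditions, the fleet numbers), not real data; only the 40,000 deaths figure comes from the comment.

    # Compare fatality rates per mile, restricted to the conditions a
    # self-driving fleet actually operates in (good weather, sober baseline).
    # All numbers are illustrative placeholders, not statistics.

    HUMAN_DEATHS_PER_YEAR = 40_000     # rough US figure cited above
    HUMAN_MILES_PER_YEAR = 3.2e12      # assumed total vehicle-miles travelled

    # Assumed share of human deaths/miles that occur in conditions the
    # self-driving fleet would also face -- pure placeholders.
    COMPARABLE_DEATH_SHARE = 0.5
    COMPARABLE_MILE_SHARE = 0.7

    SDC_DEATHS = 30                    # assumed fleet deaths in a year
    SDC_MILES = 3.0e9                  # assumed fleet miles in the same year

    human_rate = (HUMAN_DEATHS_PER_YEAR * COMPARABLE_DEATH_SHARE) / \
                 (HUMAN_MILES_PER_YEAR * COMPARABLE_MILE_SHARE)
    sdc_rate = SDC_DEATHS / SDC_MILES

    print(f"human baseline: {human_rate * 1e8:.2f} deaths per 100M miles")
    print(f"self-driving:   {sdc_rate * 1e8:.2f} deaths per 100M miles")
    print("better than human" if sdc_rate < human_rate else "not better yet")

The point of the conditioning terms is exactly the caveat above: a raw deaths-per-year comparison flatters self driving if it never drives in snow or next to drunk drivers.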


These are marginal numbers. This would make AI worse than the safe driver.

The benefits of self-driving should be irrefutable before requiring it. At least x10 better than human drivers.


The AI's benefits should be irrefutable, but this isn't as simple as "at least x10 better than human drivers", or any fixed factor. It's that for whatever mistakes they do make, if you show the video of a crash to the general public, the public generally agrees they'd have also crashed under those conditions.

Right now… Tesla likes to show off stats that suggest accidents go down while their software is active, but then we see videos like this, and go "no sane human would ever do this", and it does not make people feel comfortable with the tech: https://electrek.co/2025/05/23/tesla-full-self-driving-veers...

Every single way the human vision system fails, if an AI also makes that mistake, it won't get blamed for it. If it solves every single one of those perception errors we're vulnerable to (what colour is that dress, is that a duck or a rabbit, is that an old woman close up facing us or a young woman from a distance looking away from us, etc.) but also brings in a few new failure modes we don't have, it won't get trusted.


x10 improvement is the minimum bar after which a conversation can start. We should not even have a conversation until this threshold is reached.

They don't have to die first. The company can avoid the expense by planning how not to kill people.

If you charged car makers $20m per pedestrian killed by their cars regardless of fault you'd probably see much safer designs.
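
For what it's worth, a quick sketch of the incentive that rule would create. Only the $20m figure comes from the comment; the death count and fleet size below are made-up placeholders.

    # Expected liability under a flat $20m-per-pedestrian-death charge.
    # All figures besides the $20m are assumptions for illustration only.

    PAYOUT_PER_PEDESTRIAN_DEATH = 20_000_000   # the flat charge proposed above

    pedestrian_deaths_per_year = 7_000         # assumed deaths involving this maker's cars
    cars_on_road = 50_000_000                  # assumed size of the maker's fleet

    expected_liability = pedestrian_deaths_per_year * PAYOUT_PER_PEDESTRIAN_DEATH
    liability_per_car_per_year = expected_liability / cars_on_road

    print(f"annual liability: ${expected_liability / 1e9:.1f}B")
    print(f"per car per year: ${liability_per_car_per_year:,.0f}")

    # Any design change that cuts pedestrian deaths by 10% is now "worth"
    # this much per car per year to the manufacturer -- that's the lever.
    print(f"value of a 10% reduction: ${0.10 * liability_per_car_per_year:,.0f} per car")

The design choice being argued for is simply moving the cost of a pedestrian death from roughly zero to something large enough to show up in the hood-height spreadsheet.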


By this logic, we should also create a rule for regular, non-self-driving cars that says: if you have a car accident that kills someone, all your wealth is taken away and given to the victim's family. If we had a rule like this, then "you'd probably see much safer driving". Are you willing to drive under those circumstances? I am sure you will say yes, but it does not make your suggestion any less ridiculous.

> They don't have to die first. The company can avoid the expense by planning how not to kill people.

This is an extremely optimistic view on how companies work


I can think of one example where something similar works. The requirements from insurance companies on airline pilots are considerably tougher than the government ones because they are on the hook for ~$200m if they crash.

A big reason car companies don't worry much about killing pedestrians at the moment is it costs them ~$0.


You clearly haven't lived in my city :).

About half our road fatalities are pedestrians. About 80% of those are intoxicated with alcohol. When you're driving at 40mph, at night, and some drunk guy chooses to cross the road, no amount of safety features or liabilities can save him.

Sure, cars can be safer for light collisions with pedestrians where the car is going slowly. Especially in the US where half the cars have a very high hood. But where I live the problem is not safer cars, it's drunk pedestrians.
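
Putting the two rough fractions above together (both are the commenter's estimates, not official statistics):

    pedestrian_share = 0.5    # "about half our road fatalities are pedestrians"
    intoxicated_share = 0.8   # "about 80% of those are intoxicated"

    print(f"{pedestrian_share * intoxicated_share:.0%} of all road fatalities "
          "are intoxicated pedestrians, under these estimates")  # -> 40%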


I wonder how a Waymo would do with your drunks? Really the answer for that is probably more a different road layout so the drinking is separate from the traffic. I live near Soho in London which is full of drunk people in the streets but most traffic is blocked off there or doing 10 mph.

I’ve been paying more attention to Waymos recently.. and noting that they stop to let people cross that I didn’t even see at first.

And sometimes at places that aren’t even a crosswalk.

I’m in DTLA frequently, and I am almost developing a secondary instinct to cover my brake and have an extra look around when a Waymo stops in a street.

Because it may be dropping off or picking up a rider or it saw something or someone I didn’t. Just happened Saturday in fact. I saw it do an abrupt stop when I was yielding to it at a “T” intersection and expected it to have the right of way and keep going. I didn’t proceed until I could figure out WHY it had just stopped, like “okay WHERE’S the passenger”

and then five or so people started running across the street in front of it that I would not have seen if that Waymo wasn’t there and I was clear to turn left.

As an added bonus it stayed stopped after they all crossed and I decided to be a jerk and turn left in front of it. It stayed stopped for me too. There’s no driver in it. It ain’t mad. XD

I have a good eye for spotting Uber drivers who are about to load or unload too, especially if they have some common sense and are trying to line up so their passenger can get on or off curbside. A Waymo is just.. so much more immediately identifiable that I can react that much faster to it, or just be like.. alright, I’ll take a cue from it, it’s usually right.

And hell even if it’s wrong, maybe this isn’t a good time to pull out in front of it anyway!


Why are you doing 40mph in a built up area at night?

Here in the UK we have a standard 30mph built up area limit, dropping to 20mph in most residential area.

Result - a massive reduction in serious injuries and fatalities, especially in car-pedestrian collisions.


Not so much a built up area. We're talking about main roads, or even motorways.

This doc from 1999 has an answer: https://www.youtube.com/watch?v=SiB8GVMNJkE

Usually it's capitalism, because in America, they can just buy carveouts after the fact.

We cannot even properly ban asbestos; expecting people to die first is just having a realistic perspective on how the US government works WRT regulations.

> if company sells self driving, they are also taking full liability for any damages of this system

This is basically what we have (for reasonable definitions of full).


That's a legal non-starter for all car companies. They would be made liable for every car incident where self-driving vehicles were spotted in close vicinity, regardless of whether the suit is legit. A complete nightmare and totally unrelated to the tech. Makers would spend more time and money covering their asses in court than building safe cars.

Mercedes explicitly accepts liability when Drive Pilot L3 is active and used as intended.

That's... Not how it would work.


