
I don't disagree with that characterization of the technical details. However, I felt the task those drivers set out on was asking a different question: how good would the FSD system be at completing a coast-to-coast trip? I don't think that question can be answered after a single, highly unlikely accident without a lot more trials.

Imagine there was a human driver team shadowing the Tesla, and say they got T-boned after 60 miles. Would we claim that human drivers suck and level the same criticism at them? I don't think that would be fair either.



If you don't disagree with the characterization of the technical details, then you must realize how fair it is for us to criticize the system for failing in exactly the way it's predicted to fail. We don't need 1,000 more trials to know that the system is technically flawed.


What if there is no debris the other 999 times, and the system works fine? The video does not give me that information as a prospective Tesla customer. This looks like a fluke to me.


Those 999 other times, the system might work fine for the first 60 miles.

This is a cross-country trip: LA to New York is about 2,776 miles of driving, not counting charging stops. It crashed on the first attempt, within the first 2% of the journey. And it was not a small intervention or minor accident, either.

How you could possibly see this as anything other than FSD being a total failure is beyond me.


> asking a different question: how good would the FSD system be at completing a coast-to-coast trip?

> They made it about 2.5% of the planned trip on Tesla FSD v13.9 before crashing the vehicle.

This really does need to be considered preliminary data based on only one trial.

And so far that's 2.5% of what you would need to make it one way, one time.

Or 1.25% of what you need to make it there & back.

People will just have to wait and see whether they do anything to bring that average up.

That's about 100:1 odds against getting there & back (back-of-the-envelope arithmetic at the end of this comment).

One time.

Don't think I would want to be the second one to try it.

If somebody does take the risk and makes it without any human assistance though, maybe they (or the car) deserve a ticker-tape parade when they get there, like Charles Lindbergh :)
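For what it's worth, here is the arithmetic I'm leaning on, as a quick Python sketch. It just reuses the 2.5% figure quoted above and naively treats "fraction of the route covered" as a stand-in for a success probability, which is obviously not a rigorous model:

    # Toy arithmetic only: treats "fraction of the planned route covered before
    # crashing" as if it were a success probability, which it is not in any
    # rigorous sense. The 2.5% figure is the one quoted earlier in this thread.
    frac_one_way = 0.025                  # ~2.5% of the planned one-way trip
    frac_round_trip = frac_one_way / 2    # same distance covered, but twice the route
    odds_against = (1 - frac_round_trip) / frac_round_trip
    print(f"one way:      {frac_one_way:.2%} of the route")
    print(f"there & back: {frac_round_trip:.2%} of the route")
    print(f"naive odds against the round trip: about {odds_against:.0f}:1")

That lands around 79:1, which I'm rounding up to "about 100:1".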


> This really does need to be considered preliminary data based on only one trial.

Statistically yes, but look at the actual facts of the case.

A large object on the road, not moving, perfect visibility. And the Tesla drives straight into it.

Not hitting static objects in perfect visibility is pretty much baseline requirement #1 of self-driving. And Tesla fails to meet even that.


It does look like worse performance than a first-time driving student.

I really couldn't justify a figure like 1,000:1 from such sparse data, but I do get the idea that the odds of making it back in one piece compound in a non-linear way.

It seems like it could easily be 1,000,000:1 and the data would look no different at this point.
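If anyone wants that "the data would look no different" point made slightly more concrete, here is a hedged toy calculation. It assumes a constant per-mile failure rate and pulls the round-trip distance and crash mileage loosely from this thread; it is in no way a real reliability model:

    import math

    # Toy constant-hazard model with made-up inputs: if the chance of completing
    # D miles is q, then under a constant per-mile failure rate the chance of
    # crashing within the first m miles is 1 - exp(m * ln(q) / D).
    def p_crash_within(m, D, q):
        return 1 - math.exp(m * math.log(q) / D)

    D = 2 * 2776   # rough round-trip mileage (assumption)
    m = 60         # roughly where the crash happened (assumption)
    for odds in (1_000, 1_000_000):
        q = 1 / odds
        print(f"{odds}:1 against the round trip -> "
              f"P(crash in first {m} miles) ~ {p_crash_within(m, D, q):.0%}")

A crash in the first 60 miles comes out roughly 7% likely under the 1,000:1 hypothesis and roughly 14% likely under the 1,000,000:1 one, so a single observation really can't tell those two worlds apart.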


As a prospective Tesla customer, this one test tells you that Tesla's FSD is not always able to identify or avoid objects in the road large enough to significantly damage your car, even in situations where humans can spot the object from a long way off. Running 999 other tests where there are no objects in the road does not improve your understanding of Tesla's ability to handle objects in the road. Ideally you'd actually want to run 999 more tests with objects in the road to see if Tesla fails every time. If it identifies and avoids the object 99.9% of the time, then you could say this particular test was a fluke.

Now you can certainly argue that "objects in the road" is a category of failure mode you don't expect to happen often enough to care about, but it's still a technical flaw in the FSD system. I'd also argue it points to a broader problem with FSD because it doesn't seem like it should have been all that hard for the Tesla to see and avoid the object since the humans saw it in plenty of time. The fact that it didn't raises questions for me about how well the system works in general.
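Here's a minimal sketch of that point, assuming a uniform prior on P(avoid | obstacle present) and purely illustrative trial counts (nothing here comes from real test data):

    # Toy Bayesian illustration with made-up counts: the estimate we care about
    # is P(avoid | obstacle present), so only trials where an obstacle is
    # actually present can move it.
    def beta_mean(successes, failures, prior_a=1, prior_b=1):
        # posterior mean of a uniform Beta(1, 1) prior after the observed trials
        return (prior_a + successes) / (prior_a + prior_b + successes + failures)

    # One observed trial with an obstacle, and the car failed:
    print(beta_mean(successes=0, failures=1))     # ~0.33, with huge uncertainty
    # 999 more trials with no obstacle present leave that estimate untouched.
    # 999 more trials *with* obstacles, all avoided, would actually say something:
    print(beta_mean(successes=999, failures=1))   # ~0.998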


Tesla in 2016: "Our goal is, and I feel pretty good about this goal, that we'll be able to do a demonstration drive of full autonomy all the way from LA to New York, from home in LA to let's say dropping you off in Times Square in New York, and then having the car go park itself, by the end of next year," he said on a press call. "Without the need for a single touch, including the charger."

Roboticists in 2016: "Tesla's sensor technology is not capable of this."

Tesla in 2025: coast-to-coast FSD crashes after 2% of the journey

Roboticists in 2025: "See? We said this would happen."

The reason the robot crashed doesn't come down to "it was just unlucky." It crashed because it's not sufficiently equipped for the journey. You can run it 999 more times; that will not change. If it's not an object in the road, it's a tractor trailer crossing the road at the wrong time of day, or some other failure mode that would have been avoided if Musk were not so dogmatic about vision-only sensors.

> The video does not give me that information as a prospective Tesla customer.

If you think it's just a fluke, consider this post by the person directing Tesla's sensor strategy:

https://www.threads.com/@mdsnprks/post/DN_FhFikyUE/media

Before you put your life in the hands of Tesla autonomy, understand that everything he says in that post is 100% wrong. The CEO and part-time pretend engineer removed radar thinking he was increasing safety, when really he has no working knowledge of sensor fusion or autonomy, and he ended up making the system less safe, leading to predictable jury decisions such as the recent one: "Tesla found partly to blame for fatal Autopilot crash" (https://www.bbc.com/news/articles/c93dqpkwx4xo).

So maybe you don't have enough information to put your life in the hands of one of these death traps, but controls and sensors engineers know better.
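To make the sensor-fusion point concrete without pretending to know Tesla's internals, here is a toy inverse-variance fusion sketch. Every number in it is invented for illustration; the only point is that an independent radar return pulls a badly wrong vision range estimate back toward reality, and removing the radar removes that check:

    # Toy inverse-variance fusion with made-up numbers. This is not a claim
    # about how Tesla's (or anyone's) perception stack works; it only shows
    # that a second, independent sensor bounds the damage a bad estimate can do.
    def fuse(est_a, var_a, est_b, var_b):
        # minimum-variance combination of two independent estimates of one quantity
        w_a = (1 / var_a) / (1 / var_a + 1 / var_b)
        fused = w_a * est_a + (1 - w_a) * est_b
        fused_var = 1 / (1 / var_a + 1 / var_b)
        return fused, fused_var

    camera_est, camera_var = 140.0, 400.0  # vision badly overestimates range (hypothetical)
    radar_est, radar_var = 82.0, 4.0       # radar returns a tight direct range measurement

    fused, fused_var = fuse(camera_est, camera_var, radar_est, radar_var)
    print(f"camera alone: {camera_est:.0f} m, fused: {fused:.1f} m (variance {fused_var:.1f})")

With the camera alone you get 140 m and no second opinion; with fusion the estimate snaps to about 82.6 m.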


> What if there is no debris the other 999 times, and the system works fine?

This argument makes no sense. I take it that you're saying that if we provide the Tesla a road which contains nothing to hit, it won't hit anything?

Well, sure. Also, not interesting.

In a real-world drive of almost 3,000 miles there will nearly always be things to avoid along the way.


> I take it that you're saying that if we provide the Tesla a road which contains nothing to hit, it won't hit anything?

Not quite. I am saying that basing the judgment on a rare anomaly is a bit premature. It's a sample size of 1, but I base this on my own driving record of 30 years and much more than 3000 miles where I never encountered an obstacle like this on a highway.

> Also, not interesting

I would have liked to see the planned cross-country trip completed; I think that would've provided more realistic information about how this car handles with FSD. The scenario where there's a damn couch or half an engine on the highway is what's not interesting to me, because it is just so rare. Seeing regular traffic, merges, orange cones, construction zones, and so on: now that would have been interesting.


Here's an edge case I'm sure most people have seen only rarely, but it's still not as uncommon as you think. Watch the video by Martinez if the top video is not the right one:

https://abc13.com/post/loose-spool-involved-in-crash-near-be...

This was the 5th time in two months.

Now, 2018 might have been a record year, but there have been a number of other incidents since then.

Fortunately for us all, drivers don't have to go through Houston to get from CA to NY, but you're likely to encounter unique regional obstacles the further you go from where everything is pre-memorized.

As we know, 18-wheelers are routinely running between Houston and Dallas most of the way autonomously, and a couple of weeks ago I was walking down Main and saw one of the Waymos right at one of the traffic lights; they are diligently memorizing the downtown area right now.

I'll give Tesla the benefit of the doubt, but they are not yet in the same league as some other companies.


> It's a sample size of 1, but I base this on my own driving record of 30 years and much more than 3000 miles where I never encountered an obstacle like this on a highway.

How is that relevant? You may not have personally encountered this precise circumstance, but that doesn't mean anything.

If you were to encounter this same scenario, however, it is a near certainty that you wouldn't crash into the obstacle. And yet the self-driving car did. That's what matters.

> I would have liked to see the planned cross-country trip completed

I mean, once the car has significantly damaged itself, it's not like it can continue.

Big credit to the people running the experiment for letting it play out and showing the failure. Many vloggers would have just intervened manually to avoid the accident, edited that part out of the video, and continued on to claim success.



