TGower's comments | Hacker News

The author repeatedly states that they stayed within the scope of the VDP, but publishing this clearly breaks this clause: "You agree not to disclose to any third party any information related to your report, the vulnerabilities and/or errors reported, nor the fact that a vulnerabilities and/or errors has been reported to Eurostar."

Should be about half a millisecond of round-trip difference: 70 km / speed of light ≈ 233.5 microseconds one way.
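Roughly (straight-line distance at the vacuum speed of light; fiber and routing would add more):

  distance_km = 70
  c_km_per_s = 299_792.458                          # speed of light in vacuum
  one_way_s = distance_km / c_km_per_s
  print(one_way_s * 1e6, "microseconds one way")    # ~233.5
  print(one_way_s * 2e6, "microseconds round trip") # ~467, about half a millisecond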

Seconded. The pitch is "oh, I'll always have a charged battery," but the real experience is spending much more time swapping batteries than I used to spend plugging in. More frequent and longer interruptions.


I did a quick analysis and it actually matched the ~1.5 degree Celsius rise pretty accurately. It required a bunch of incorrect simplifying assumptions, but it was still interesting how close it comes.

I estimated the energy production from all combustion and nuclear power from the industrial revolution onwards, assumed that heat was dumped into the atmosphere evenly and all at once, and calculated the temperature rise based on the atmosphere's makeup. This ignores some of that heat sinking into the ground and ocean, and the increased thermal radiation out to space over that period. But in general, heat flows from the ground and ocean into the atmosphere rather than the other way around, and the rise in outgoing thermal radiation isn't that large.
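Concretely, the napkin version looks something like this (the cumulative-energy figure is just a placeholder picked to land near the ~1.5 K number above; plug in whatever estimate you prefer; the atmosphere's mass and specific heat are standard values):

  # Back-of-envelope version of the calculation described above.
  E_total = 8e21       # J, assumed cumulative energy from combustion + nuclear (placeholder)
  m_atm = 5.15e18      # kg, total mass of the atmosphere
  cp_air = 1005        # J/(kg*K), specific heat of air at constant pressure

  delta_T = E_total / (m_atm * cp_air)
  print(f"rise if all that heat stayed in the atmosphere: {delta_T:.1f} K")  # ~1.5 K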

On the other hand, this isn't something that the smart professionals ever talk about when discussing climate change, so I'm sure the napkin math coming this close to explaining the whole effect has to be missing something.


Your math is completely wrong.

We use ~20 TW, while solar radiation is ~500 PW, and the heating from global warming alone is ~460 TW (that is, the rate at which heat is accumulating as increased Earth temperature).


Well, the math is correct; it's the methodology that has obvious flaws, some of which I pointed out. If you took all the energy that has been released by humanity burning things since the industrial revolution and dumped it into a closed system consisting of just the atmosphere, it would rise by about 1.5 °C.


The discussion thread (and original question) you are participating in is about heat being rejected to the atmosphere through vapor-compression refrigeration or evaporative cooling, not CO2 or emissions from combustion. Reread the top level comment.

The amount of heat rejected to the atmosphere from electronic devices is negligible.


"AI coding is so much better now that any skepticism from 6 months ago is invalid" has been the refrain for the last 3 years. After the first few cycles of checking it out and realizing that it's still not meeting your quality bar, it's pretty reasonable to dismiss the AI hype crowd.


It's gotten ok now. Just spent a day with Claude for the first time in a while. Demanded strict TDD and implemented one test at a time. Might have been faster, hard to say for sure. Result was good.


I think we have a real inflection point now. I try it a bit every year and was always underwhelmed. Halfway through this year was the first time it really impressed me. I now use Claude Code.


But Claude Code costs money. You really want to introduce a critical dependency into your workflow that will simultaneously atrophy your skills and charge you subscription fees?


It's also proprietary software running on someone else's machine. All other arguments for or against aside, I am surprised that so many people are okay with this. Not in a one-time use sense, necessarily, but to have long-term plans that this is what programming will be from here on out.


Another issue with it is IP protection. It reminds me of stories where, the moment physical manufacturing was outsourced to China, exact clones appeared shortly after.

Imagine investing tons of efforts and money into a startup, just to get a clone a week after launch, or worse - before your launch.


Right, we the workers are giving away control over the future of general-purpose computation to the power elite, unless we reject the institutionalization of remote-access proprietary tooling like this.


Any useful new tool must be managed so that one isn’t overly dependent on it:

- google maps

- power tools

- complex js frameworks

- ORMs

- the electrical grid (outages are a thing)

- and so on…

This isn’t a new problem unique to LLMs.

Practice using the tool intelligently and responsibly, and also work to maintain your ability to function without it when needed.


this is why I say "AI is for idiots"


A year ago I could get o1-mini to write tests some of the time that I would then need to fix. Now I can get Opus 4.5 to do fairly complicated refactors with no mistakes.

These tools are seriously starting to become actually useful, and I’m sorry but people aren’t lying when they say things have changed a lot over the last year.


It might even be true this time, but there is no real mystery why many aren't inclined to invest more time figuring it out for themselves every few months. No need for the author of the original article to reach for "they are protecting their fragile egos" style of explanation.


The productivity improvements speak for themselves. Over time, those who can use AI well and those who cannot will be rewarded or penalized by the free market accordingly.


If there’s evidence of productivity improvements through AI use, please provide more information. From what I’ve seen, the actual data shows that AI use slows developers down.


The sheer number of projects I've completed that I truly would never have been able to even make a dent in is evidence enough for me. I don't think research will convince you. You need to either watch someone do it or experiment with it yourself. Get your hands dirty on an audacious project with Claude Code.


It sounds like you're building a lot of prototypes or small projects, which, yes, LLMs can be amazingly helpful with. But that is very much not what many or most professional engineers spend their time on, and generalizing from that case often doesn't hold up, in my experience.


We use both Claude and Codex on a fairly large, ~10-year-old Java project (~1,900 Java files, 180K lines of code). Both tools are able to implement changes across several files, refactor the code, and add unit tests for the modified areas.

Sometimes the result is not great, sometimes it requires manual updates, and sometimes it just goes in the wrong direction and we discard the proposal. The good thing is you can initiate such a large change, go get a coffee, and take a look at the changes when you're back.

Anyway, overall those tools are pretty useful already.


It sounds like you're assuming I'm not a professional engineer and I only work on prototypes.


They're basing it on what you described in your previous comment. I got the same impression.


Finishing projects makes me sound unprofessional?

I've been at it multiple decades. TC $1M+. Forever beginner I guess.


"sheer number" combined with "completed" sounds more like lots of small projects (likely hobbyist or prototypes) than it does anything large/complicated/ongoing like in a professional setting.


Research is the only thing that will convince me. That’s the way it should be.


https://youtu.be/1OzxYK2-qsI a 6-12% increase in pull requests per developer per month.


It is, at this point, rather suspect that there are mountains of anecdata but pretty much no high-quality quantitative data (and what there is is mixed at best). Fun fact: worldwide, over 200 million people use homeopathy on a regular basis. They think it works. It doesn't work.


I don't suppose anything will change your mind, but here you go.

https://youtu.be/1OzxYK2-qsI a 6-12% increase in pull requests per developer per month.

"But those diffs are AI slop and rework" you will object. Oh well, I tried.


> Fun fact; worldwide, over 200 million people use homeopathy on a regular basis. They think it works. It doesn't work.

Homeopathy works for sure. Placebo works. There are many studies confirming that.


That's what it really all comes down to, isn't it?

It doesn't matter if you're using AI or not, just like it never mattered if you were using C or Java or Lisp, or using Emacs or Visual Studio, or using a debugger or printf's, or using Git or SVN or Rational ClearCase.

What really matters in the end is what you bring to market, and what your audience thinks of your product.

So use all the AI you want. Or don't use it. Or use it half the time. Or use it for the hard stuff, but not the easy stuff. Or use it for the easy stuff, but not the hard stuff. Whatever! You can succeed in the market with AI-generated product; you can fail in the market with AI-generated product. You can succeed in the market with human-generated product; you can fail in the market with human-generated product.


What does “can use” mean, though? You just ask it to do things in basic English. Everyone can do that with no training.


Do you have evidence?



0.1x


If only you put half as much effort into learning AI as you do into trolling people who are getting gains from it...


Because it has been true for the last 3 years. Just because a saying is repeated a lot doesn't mean it's wrong.


Being completable in a timely fashion is not a design goal of this game. It's currently being playtested by professional puzzle game designers, and they are over 200 hours in without completing it.


Disappointing that the paper is full of simplifying, and seemingly unreasonable, assumptions instead of a simulation based on the known orbital elements of all these tracked satellites. For example, a collision cross section of 200 square meters when discussing Starlink, even though the satellites are about 4 x 3 meters. Assuming a random distribution of trajectories. I'm also unconvinced that "how fast would a collision occur if all the electronics got fried" is a useful metric; in that scenario I'm much more worried about the situation on the ground and commercial aviation...


I need to do a full read in more depth, but it looks like they used a collision cross section of A = 300 m^2, which is a little conservative but not insane given that the current Starlink v2 Mini has about 90-120 m^2 of total surface area on its solar arrays. [1] The solar arrays are the largest part of these spacecraft by far and are what defines the “collideable” area. A combined cross-sectional area of 2 x 120 = 240 m^2 is in the ballpark for Starlink-on-Starlink collisions.

However, most of the collisions of concern are going to be Starlink-on-debris, which is back down at the 120 m^2 level. Starlink already self-screens for collisions and uplinks the conjunction data messages over the optical inter-satellite link backbone or over their global ground station network.

If they aren’t able to talk to their satellites regularly from somewhere, you’re right we have MUCH bigger things to worry about on the ground.

[1] https://spaceflightnow.com/2023/02/26/spacex-unveils-first-b...


And wouldn’t the solar panels have less cross section than the satellite bodies, so even an apparent collision might just be a very near miss? (Honest question, not rhetorical, could be I’m wrong)


This is confusing terminology in the field, but you generally talk about the cross sectional area in the plane of the conjunction (https://www.space-track.org/documents/SFS_Handbook_For_Opera...) to calculate the probability of collision.

It’s a conservative definition in the field. It’s generally defined via the hard-body radius: take the smallest sphere centered at the center of mass that would entirely enclose the object, then use the maximum cross section of that sphere to define the potential “area” of the colliding object.

Maybe put more simply, it’s the worst-case area and orientation you could be looking at. So yes, solar arrays have a narrow cross section from the side, but looking at them head-on (which is the angle used for Pc calculations) they’ll be very large.
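For a concrete feel, here's a rough sketch of that construction; the dimensions are made up for illustration, not taken from the paper, and the enclosing-sphere radius is approximated as half the bounding-box diagonal:

  import math

  def hard_body_radius(dims_m):
      # Radius of a sphere enclosing a box with these dimensions:
      # half the box's diagonal (a conservative stand-in for the real shape).
      return math.sqrt(sum(d * d for d in dims_m)) / 2

  debris = hard_body_radius((1.0, 1.0, 1.0))      # assumed ~1 m debris object
  sat = hard_body_radius((12.0, 10.0, 0.2))       # assumed deployed solar array

  combined = debris + sat                         # sum of the two hard-body radii
  area = math.pi * combined ** 2                  # worst-case "collideable" area
  print(f"combined radius ~{combined:.1f} m, worst-case cross section ~{area:.0f} m^2")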


Shouldn't they try and take some kind of probabilistic average area, rather than worst-case? I assume this is a statistical analysis.


It depends on what you're going for.

Generally, people really don't want collisions due to cascading effects, so they take the worst-case probability of collision found with bounding assumptions. Additionally, while all these vehicles often have active attitude (orientation) control, sometimes they go into safe mode and are spinning (often spin-stabilized to point at the sun), so the vehicle will sweep the entire potential radius while rotating.

Also, how do you define the probabilistic average area for a space object when you don't know how its control system works or what it's been commanded to do or point at? Yes, we can make some pretty good assumptions for things like Starlink, but even those do take safe modes occasionally.

So it's an engineering judgement call on how to model it. It's hard to get a probabilistic average for attitude that you can confidently test and say is "right"; it's a lot easier, and conservative, to take the worst-case upper bound. That's at least not wrong.


Worth adding that the actual collision avoidance manoeuvres Starlink (and other satellites with propulsion) make are based on more conservative assumptions.

The paper's assumptions lead to the conclusion that with no manoeuvres, we'd see a catastrophic crash between two or more satellites in LEO within 2.8 days. To be on the safe side, Starlink did over 144,000 manoeuvres in the first six months of the year (and based on the historical doubling rate, is probably doing 1,000 per day by now)...
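For reference, converting the filing's totals into rates (December 1 through May 31 is roughly 182 days):

  maneuvers = 144_000
  days = 182                                   # Dec 1 through May 31, roughly

  per_day = maneuvers / days                   # ~790 manoeuvres/day fleet-wide
  minutes_apart = days * 24 * 60 / maneuvers   # ~1.8 minutes between manoeuvres
  print(f"~{per_day:.0f} per day, one every ~{minutes_apart:.1f} minutes across the fleet")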


Yeah the solar array on Starlink is held perpendicular to the velocity vector, so the cross section relative to the colliding body will invariably be smaller than the worst case.


They do verify their analytic calculation using an N-body simulation; that's Section 4.4:

> We verify our analytic model against direct N-body conjunction simulations. Written in Python, the simulation code SatEvol propagates orbits using Keplerian orbital elements, and includes nodal and apsidal precession due to Earth’s J2 gravitational moment. [...] The N-body simulation code used in this paper is open source and can be found at https://github.com/norabolig/conjunctionSim.


The cross section isn't actually all that outrageous; it corresponds to a hard-body radius of 4.5 meters. The hard-body radius is equal to the sum of the radii of the two colliding bodies, so 2.25 meters each, which seems about right for Starlink.


They did an n-body simulation based on the known Keplerian orbital elements. That's exactly what you're asking for, right?

Also, the formalism is the standard way astrophysicists understand collisions in gases or galaxies, and it works surprisingly well, especially when there are large numbers of "particles". There may be a few assumptions about the velocity distribution, but usually those are mild and only affect the results by less than an order of magnitude.


"N-body simulation" doesn't mean what it's normally taken to mean here.

And the colliding-gas models make the huge assumption of random/thermal motion. These satellites are in carefully designed orbits; they aren't going to magically thermalize if left unmonitored for three days.


That's why I mentioned the assumption about the velocity distribution. Sure, the velocities aren't Maxwell-Boltzmann, but that doesn't matter too much for getting a sense of the scale of the issue. The way an astrophysicist thinks (I am one) is that if we make generous assumptions and it turns out to not be a problem, then it definitely isn't a problem. Here they have determined it might be a problem, so further study is warranted. It's also a scientist strategy to publish something slightly wrong to encourage more citations.


Well, sure, they won't be thermally random, but they will be significantly perturbed from their nominal orbits, particularly at the lower orbital altitudes.

Solar flares cause atmospheric upwelling, so drag dramatically increases during a major solar flare. And the scenario envisioned in the paper is basically a Carrington-level event, so this effect would be extreme.


The current "carefully designed orbits" has a starlink sat doing a collision avoidance manuever every 1.8 minutes on average according to their filing for December 1 to May 31 of this year.


Interestingly, the report from which they draw that number is one of the few that they cite but do not link to. Here's a link:

https://www.scribd.com/document/883045105/SpaceX-Gen1-Gen2-S...

It also notes that the collision odds at which SpaceX triggers such maneuvers are 333 times more conservative than the industry standard. Were that not the case (and they were just using the standard criterion), one might naively assume that they would only be doing a maneuver every ten hours or so. But collision probabilities are not linear; they follow a power-law distribution, so in actuality they would only be doing such maneuvers every few days.

It is disingenuous to the point of dishonesty to use SpaceX's abundance of caution (or possibly braggadocious operational flex) as evidence that the risk is greater than it actually is.


Agreed. This reads more like a hit-piece than a good-faith effort to quantify the risks. They make long-tail pessimistic assumptions, explicitly ignore possible mitigating factors, and act as if this "worse than worst case" scenario is a reasonable description of the world we live in.


Even the title "Orbital House of Cards" is unnecessarily editorializing.


  >we introduce the Collision Realization And Substantial Harm (CRASH) Clock
The needless forced backronym is another clue. It's Cargo Cult technical writing.

Why did this need to be a (badly done) acronym at all? It's a countdown to a collision, a collision clock, but of course "crash" (in all caps no less) sounds worse, and science writing needs sciencey acronyms don't ya know...


200 might be more reasonable for the next gen Starlink satellites.


Yeah, they seem to have gotten excited to do the probability math (with bad assumptions, conflating a 300 m^2 (!) cross-section collision with an actual probable collision), and with no consideration that this can actually be trivially simulated.


Also, if a solar storm actually wiped out all satellites in LEO (a huge assumption), who really cares how long it takes them to collide? Realistically it's all dead space until they de-orbit in a couple years.


Using two ESP32-S3 modules, you can get ~6,000 packets per second with CSI data. I'm using this as a cheap replacement for specialty high-G gyroscope modules, but it could see use for this type of motion detection as well.


Are you using two separate S3s as a dedicated Transmitter/Receiver pair, or are both transmitting data simultaneously?


Transmitter/receiver pair.


Sub-1 ms latency is a big deal; currently, some gamers who would otherwise want wireless devices stick to wired for the latency advantage.


It sounds like the dev kit is more of a way to get devices out to devs before the full launch; I'm sure you can develop using the consumer hardware. The Adam Savage Tested video had interviews with the Valve team, and it was pretty clear that "it's your computer" was a core part of the philosophy.


> I'm sure you can develop using the consumer hardware.

It depends, though. Some consoles' dev kits have more memory or VRAM than the consumer device, so they allow unoptimized dev builds of the software to run without crashing (and let you check what went wrong later instead of fixing it immediately). Although you will need to test the production build on a retail device eventually, it makes development easier.


Ah okay, a bit confusing wording there. I really think they should make that clearer...

