Hacker News | withinboredom's comments

Probably. I know a guy who roots phones for older people or friends' parents, installs pirated games and such for them, and makes sure the phone is locked down in certain ways for the older generation.

In other words, the correlation is that older people are more likely to have a rooted phone and are more susceptible to fraud.

Dunno how widespread this is, just something to keep in mind.


Have you seen the roads there?

Yes, the roads, which in some municipalities are paved according to who voted for whom. On some country roads you get perfect asphalt extending the whole way in front of one big farm, ending exactly at the property markers. That, and other things, are what you get from a constitutional monarchy with some of the oddest legal provisions on the planet. I won't say we like it, but we are good at it.

They can be reasoned about from a mathematical perspective, yes. An LLM will happily shim out your code to make a test pass. Most people would consider that “unreasonable”.

I'm not dismissing it. I've been working on something secret-squirrel for over 5 years. It wasn't until November that I made a major breakthrough, resulting in four computer science revelations. At first, I wrote about it in a blog post; people didn't even believe me. Some researchers I wrote to validated it.

I hadn't really used Claude before, but if nobody cares ... then commercialize it, delete the blog post and code from the open source world. In the last month, Claude has helped turn it from a <700 line algorithm into nearly a full-blown product in its own right.

But yeah, the moat is small. The core of everything is less than 5k LoC; and it'd be easy af for my soon-to-be competitors to reproduce. The only thing I've got going for me is a non-technical cofounder believing in me and pounding on doors to find our first customer, while I finish up the technical side.

With the computer science revelations, we can basically stay 6-8 months ahead for the next couple of years. This is the result of years of hard work, but AI has let me take it to market at an astounding speed.


That's basically how the web started. You can serve a ridiculous number of users from a single physical machine. It isn't until you get into the hundreds-of-millions-of-users ballpark that you need to actually create architecture. The "cloud" lets you rent a small part of a physical machine, so it feels like you need more machines than you do. But a modern server? Easily 16-32+ cores, 128+ GB of RAM, and hundreds of TB of space. All for less than $2k per month (amortized). Yeah, you need an actual (small) team of people to manage that; but it will get you so far that it is utterly ridiculous.
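To put rough numbers on that (the per-core throughput and the $2k/month figure below are illustrative assumptions for a back-of-envelope, not vendor quotes):

```python
# Back-of-envelope capacity/cost math for one modern physical server.
# All inputs are assumed, round numbers for illustration.
cores = 32
monthly_cost = 2000  # USD, amortized hardware + hosting + a slice of ops time

# Assume one core comfortably serves ~200 dynamic requests/second
# (a deliberately conservative guess for typical web workloads).
req_per_core_per_sec = 200
req_per_sec = cores * req_per_core_per_sec
req_per_day = req_per_sec * 86_400  # seconds in a day

cost_per_million = monthly_cost / (req_per_day * 30 / 1_000_000)

print(f"{req_per_sec:,} req/s, {req_per_day:,} req/day")
print(f"cost per million requests: ${cost_per_million:.2f}")
```

Even with pessimistic inputs, a single box comes out at hundreds of millions of requests per day for pennies per million.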

Assuming you can accept 99% uptime (that's ~3.65 days a year of downtime), and if you were on a single cloud in 2025, that's basically last year.
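The downtime arithmetic, for reference (nothing assumed beyond a 365-day year):

```python
# Downtime implied by common uptime targets.
HOURS_PER_YEAR = 365 * 24  # 8760

for label, uptime in [("99%", 0.99), ("99.9%", 0.999), ("99.99%", 0.9999)]:
    downtime_h = HOURS_PER_YEAR * (1 - uptime)
    print(f"{label}: {downtime_h:.1f} hours/year (~{downtime_h / 24:.2f} days)")
```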


I agree...there is scale and then there is scale. And then there is scale like Facebook.

We need not assume internet-/FB-level scale for typical business apps, where one instance may support a few hundred users max, or even a few thousand. Over-engineering under such assumptions is likely cost-ineffective and may even increase the surface area of risk. $0.02


It goes much further than that: a single moderately sized VPS web server can handle millions of hard-to-cache requests per day, all hitting the db.

Most will want to use a managed db, but for a really basic setup you can just run Postgres or MySQL on the same box. And running your own db on a separate VPS is not hard either.
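For a sense of scale, "millions of requests per day" is a modest steady rate (the 5M/day figure and the 10x peak factor here are assumptions for illustration):

```python
# Convert a daily request count into a sustained rate.
requests_per_day = 5_000_000
avg_rps = requests_per_day / 86_400  # seconds in a day
peak_rps = avg_rps * 10  # assume peaks run ~10x the average (rule of thumb)

print(f"{requests_per_day:,}/day ≈ {avg_rps:.0f} req/s average, ~{peak_rps:.0f} req/s at peak")
```

A few hundred requests per second at peak is well within reach of one VPS with a local database.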


Why? My router won’t even let me DMZ a single ipv6 device or open all ports to a single ipv6 device. It will only let me open one port at a time.

Different routers have different options, but all of them have come with a pretty strong firewall out of the box, turned on by default, for the last 10 years.


Valgrind won’t show you leaks where you (or a GC) still holds a reference. This could mean you’re holding on to large chunks of memory that are still referenced in a closure or something. I don’t know what language or anything about your project, but if you’re using a GC language, make sure you disable GC when running with valgrind (a common mistake). You’ll see a ton of false positives that the GC would normally clean up for you, but some of those won’t be false positives.
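A minimal Python sketch of that failure mode: memory that is still referenced (here via a closure-captured cache, a made-up example) is "still reachable" to a leak checker rather than "definitely lost", even though it grows without bound:

```python
# A "leak" where a reference is still held: nothing is unreachable, so a
# leak checker reports it (at best) as "still reachable", not as a leak.
def make_handler():
    cache = {}  # captured by the closure below; lives as long as the handler

    def handle(request_id, payload):
        cache[request_id] = payload  # never evicted: grows forever
        return len(payload)

    return handle

handler = make_handler()
for i in range(10_000):
    handler(i, b"x" * 1024)  # ~10 MB now pinned by the closure's cache

# Every byte is still referenced, so tooling won't flag it as lost,
# but it's memory you probably meant to release.
cache = handler.__closure__[0].cell_contents
print(f"entries pinned: {len(cache)}")
```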

Ghostty is written in Zig.

It will, but they will be abbreviated (only the total amount shown, not the individual stack traces) unless you ask it to show them in full (e.g. valgrind's --leak-check=full).

They globally reset the privacy settings for pretty entertaining reasons. Every few years a post goes around making crazy privacy claims and giving instructions to change your privacy settings to only “share with yourself”. If you're dumb enough to do it, you basically shadow-ban yourself. If enough people do it, they have to reset the settings, because those same people will then complain about their aunt/neighbor/dad/sister or whoever not being able to see their posts, with no idea why.

I just honestly feel Zuck's lost the benefit of the doubt, but I'm willing to be proven wrong.

Um. 240 is a multiple of 60.

Yes, so you either get a strobe on/strobe off every two frames if you're in a 60 Hz country, or a slower, crawling flicker in 50 Hz land. Migraine-inducing either way. Also, your phone won't shutter at exactly a multiple of 60.00/50.00 Hz (mains frequency is usually stable to at least the first decimal), so you'll see a jittery, jumpy phase drift on top of that.
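The frame/flicker arithmetic can be sketched like this (lights brighten at twice the mains frequency; 240 fps is the capture rate from the thread above):

```python
FPS = 240  # slow-motion capture rate

def frames_per_flicker_cycle(mains_hz):
    flicker_hz = 2 * mains_hz  # a bulb brightens twice per AC cycle
    return FPS / flicker_hz

# 60 Hz mains -> 120 Hz flicker, exactly 2 frames per cycle:
# alternating bright/dark frames, i.e. a hard strobe.
print(frames_per_flicker_cycle(60))

# 50 Hz mains -> 100 Hz flicker, 2.4 frames per cycle: the flicker
# phase never locks to the frame grid, so it crawls across frames.
print(frames_per_flicker_cycle(50))
```

The integer ratio in the 60 Hz case is what produces the strobe; the non-integer 50 Hz ratio is what produces the slow crawl.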

Yep, and this breaks all sorts of computer vision setups. We had to compensate for it in the cameras that track the Oculus controllers, since folks are often playing under indoor lighting.

Sometimes you just need to know if an idea will even work, or what it would look like. If you have to refactor half the codebase (true story for me, once), it makes the change a much harder sell without showing some benefit first. I.e., it keeps you from discovering better optimizations, because you have to pay the cost upfront.

In Rust, it's a lot easier to refactor half the codebase than it would be in another language. Once you're done fighting the compiler, you're usually actually done, instead of never being sure whether you did enough testing.

I can’t tell if you missed the whole point of “exploratory”…

I don't know either. Personally I can spend days or more on exploratory efforts that end up scrapped. My source code is usually version controlled, so I never have to worry about messing things up. But I suppose not everyone has this kind of time for stuff that isn't guaranteed to pan out.

Sometimes I will prototype an exploration in another crate or module so I can see if there are performance gains in a more limited application. Sometimes these explorations will grow into a full rewrite that ends up better than if I had refactored.


> Sometimes, you just need to know if an idea will even work or what it would look like.

I think what GP is trying to say is that the value of such exploration might be limited if you end up with something incompatible with "proper" Rust anyways.

I suppose it depends on how frequently "transition through invalid Rust while experimenting and end up with valid Rust" happens instead of "transition through invalid Rust while experimenting and end up with invalid Rust", as well as how hard it is to fix the invalid Rust in the latter case.


In my case, I was adding a new admin API endpoint, which meant pulling in a bunch of stuff that was never meant for the API, and I got into a fight with the borrow checker. I just wanted to see if I broke something at the feature level (it was never meant to be exposed by the API, after all), and I didn't care about memory safety at that point. Refactoring it properly to get memory safety, just to see what would have broken, blew past my time-box, so it never saw the light of a merge request. Had I been able to prove the concept worked, I would have opened a PR against the open issue to find the best way to refactor it “properly”. As it was, I would have had to completely guess at the right way without ever knowing whether the idea would work in the first place.

I guess that doesn't neatly fall into the categories I described, though I think it's closer to the former than the latter.

That being said, I think what you describe sounds like a case where relaxed checks could be beneficial. It's the eternal tradeoff of requiring strong static checks, I suppose.


Can't you usually just throw some quick reference-counted cells in there to make the borrow checker happy enough for a prototype, without refactoring the whole code base?
