zamadatix's comments | Hacker News

I think the intent was that making a blog has additional requirements one might not need just to make a website, a la what the "The Hard Way" section tries to argue against, not a claim that a blog is not also a website (anything the page says not to use will also lead to you having a website as well - just with more than minimal work).

E.g. the section covering RSS for your post is longer than the section covering HTML, you don't really need a fixed structure, and you don't need to think of a story to write unless that's what you want to do. You can just post a picture of your cat and try to add googly eyes later if that's what floats your boat. Or just "Hello World" and let your mind go from there.


It's a tricky term, but the definition given towards the top, where "age-adjusted" is clickable, helps clarify. It's introduced in "But an aging population only partially explains the rise in these deaths. Deaths by falls have risen 2.4-fold on an age-<adjusted basis>." The <clickable> part marked in angle brackets (easy to miss on the page) expands to:

> Age-adjusted data helps to compare health data over time or between groups more fairly by accounting for the age differences in populations. For example, suppose Population A has a higher average age than Population B. In that case, age-adjusting ensures that Population A's naturally higher death rate due to age doesn't skew comparisons of overall health between the two. This measurement makes death statistic comparisons more accurate than crude death rates.

An "age group" might be "45-54" or "85+" but a "population" would be "all age groups in 2000" or "all age groups in 2023". Age-adjusted here means the differences in the number of people in each population (2000 vs 2023) are normalized to each so we can compare the absolute numbers from each age group directly, not the other way around.

There is some immediate follow-up text which helps clarify they do not mean to normalize the age groups themselves together within a single population for comparison:

> While they [population age-adjusted fall deaths] have fallen among younger people and only risen slightly among the middle aged, they have risen substantially within every age bracket of the elderly.

This all gave me flashbacks to stats class, and I now need to go relax :).


It was a big factor, but so were things like the way they treated their mobile browser for years and years - the platform from which 2/3 of browser traffic now originates.

According to statcounter's stats, Firefox has never cracked 1% of monthly mobile traffic in any month since stats started in 2009. Even Opera and UC have more than double Firefox's average for the last year, and they are just Chromium forks users download off the stores.


For context, I recall that for years and years, Firefox was the highest ranked mobile browser. Mozilla invested a lot in mobile, Firefox devs had to rewrite the Android linker, invent new ways of starting binaries on Android, etc. just to make Firefox work (all of which were later used by Chrome for Android).

It still didn't make a dent in mobile browser shares.

Sure, Mozilla could have invested even more in Firefox mobile, but at some point, this would have come at the expense of Firefox desktop, which was the source of ~100% of the funding.


What Firefox was doing 4 months after Android 1.0 GA'd would indeed have been unlikely to make a dent in mobile share compared to the effort going on once Android had a billion users. Why put all of that effort in before something is even used, just to then let it rot for years anyways? In the end, they ended up spending the resources to refresh it in 2019 anyways - by which time billions had already decided Firefox was just a battery hog and slow on mobile.

It's a sad story because Firefox was so good on mobile when nobody had a chance to use it, then it was crap when they did. On desktop Firefox is still the #1 non-bundled browser; things went so poorly on mobile that it can't even come close to that today. In a parallel universe where the timings were inverted, Firefox might even have more users on mobile than it does on desktop today.


As far as I understood, both rates ultimately come from trying to map to video standards of the time. 44.1 kHz mapped neatly onto reusing the analog video tape gear of the time (via PCM adaptors), while 48 kHz mapped better to digital clocking and integer relationships with video standards while also leaving a slightly wider margin for oversampling the high frequencies.

44.1 kHz never really went away because CDs continued using it, allowing them to take any existing 44.1 kHz content as well as to fit slightly more audio per disc.

At the end of the day, the resampling between the two doesn't really matter and is more of a minor inconvenience than anything. There are also lots of other sampling rates which were in use for other things too.
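
If anyone is curious what that minor inconvenience looks like in software: 48000/44100 reduces to the exact ratio 160/147, so the conversion is a single polyphase resampling step. A rough sketch, assuming numpy/scipy are available:

    import numpy as np
    from scipy.signal import resample_poly

    fs_in = 44100
    t = np.arange(fs_in) / fs_in            # one second of audio
    x_441 = np.sin(2 * np.pi * 1000 * t)    # 1 kHz test tone sampled at 44.1 kHz

    x_480 = resample_poly(x_441, 160, 147)  # up to 48 kHz (anti-alias/anti-image filter built in)
    x_back = resample_poly(x_480, 147, 160) # and back down to 44.1 kHz

    print(len(x_441), len(x_480), len(x_back))  # 44100 48000 44100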


That's a clear need IMO, but it'd be slightly better if the game had 48 kHz audio files and downsampled them to 44.1 kHz for playback than the other way around (better to downsample than upsample).

They're both fine (as long as the source is band-limited to 20 kHz, which it should be anyway).

The analog source is never perfectly limited to 20 kHz, because very steep filters are expensive and may also degrade the signal in other ways, since their transient response is not completely constrained by their amplitude-frequency characteristic.

This is especially true for older recordings. For most newer recordings the analog filters are much less steep, but that is compensated for by sampling at a much higher frequency than the audio bandwidth requires, followed by digital filters, where it is much easier to obtain a steep characteristic without distorting the signal.

Therefore, normally it is much safer to upsample a 44.1 kHz signal to 48 kHz, than to downsample 48 kHz to 44.1 kHz, because in the latter case the source signal may have components above 22 kHz that have not been filtered enough before sampling (because the higher sampling frequency had allowed the use of cheaper filters) and which will become aliased to audible frequencies after downsampling.

Fortunately, you almost always want to upsample 44.1 kHz to 48 kHz, not the reverse, and this should always be safe, even when you do not know how the original analog signal had been processed.


Yeah, but you can record it at 96 kHz, then resample it perfectly to 44.1 kHz (hell, even just 40 kHz) in the digital domain, then resample it back to 48 kHz before sending it to the DAC.

True.

If you have such a source sampled at a frequency high enough above the audio range, then through a combination of digital filtering and resampling you can obtain pretty much any desired output sampling frequency.


The point is that when downsampling from 48 to 44.1 kHz you get the filtering for "free", since the downsampling is being done digitally with an FFT anyway.

44.1 kHz sampling is sufficient to perfectly describe any analog wave with no frequency component above 22,050 Hz, which is above the roughly 20 kHz limit of human hearing. You can then upsample this band-limited signal (0-22,050 Hz) to any sampling rate you wish, perfectly, because the 44.1 kHz sampling is lossless with respect to the analog waveform. (The 16 bits per sample is not, though for the purposes of human hearing it is sufficient for 99% of use cases.)

https://en.wikipedia.org/wiki/Nyquist%E2%80%93Shannon_sampli...
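
A quick numerical sanity check of that claim (a sketch, assuming numpy/scipy): sample a tone well inside the audio band at 44.1 kHz, upsample to 48 kHz, and compare against the same tone sampled directly at 48 kHz. Away from the edges, the difference is limited by the resampling filter's ripple, not by the 44.1 kHz sampling itself.

    import numpy as np
    from scipy.signal import resample_poly

    f = 15000.0                              # 15 kHz tone, inside the audio band
    t44 = np.arange(44100) / 44100.0
    t48 = np.arange(48000) / 48000.0

    x44 = np.sin(2 * np.pi * f * t44)
    x48_ref = np.sin(2 * np.pi * f * t48)    # "ground truth" sampled at 48 kHz
    x48_up = resample_poly(x44, 160, 147)    # reconstructed from the 44.1 kHz samples

    mid = slice(1000, 47000)                 # ignore the filter's edge transients
    print(np.max(np.abs(x48_up[mid] - x48_ref[mid])))  # small - set by the filter, not the sampling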


22050 Hz is an ideal unreachable limit, like the speed of light for velocities.

You cannot make filters that would stop everything above 22050 Hz and pass everything below. You can barely make very expensive analog filters that pass everything below 20 kHz while stopping everything above 22 kHz.

Many early CD recordings used cheaper filters with a pass-band smaller than 20 kHz.

For 48 kHz it is much easier to make filters that pass 20 kHz and whose output falls gradually until 24 kHz, but it is still not easy.

Modern audio equipment circumvents this problem by sampling at much higher frequencies, e.g. at least 96 kHz or 192 kHz, which allows much cheaper analog filters that pass 20 kHz but do not attenuate the higher frequencies well enough, then using digital filters to remove everything above 20 kHz that has passed through the analog filters, and then downsampling to 48 kHz.

The original CD sampling frequency of 44.1 kHz was very tight, despite the high cost of the required filters, because at that time, making 16-bit ADCs and DACs for a higher sampling frequency was even more difficult and expensive. Today, making a 24-bit ADC sampling at 192 kHz is much simpler and cheaper than making an audio anti-aliasing filter for 44.1 kHz.
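
A sketch of that modern signal chain in code (assuming numpy/scipy; the tones and filter here are only illustrative): capture at 192 kHz with a gentle analog filter, then do the steep filtering digitally and decimate down to 48 kHz.

    import numpy as np
    from scipy.signal import decimate

    fs = 192000
    t = np.arange(fs) / fs
    # Pretend ADC capture: a 10 kHz tone we want, plus 60 kHz junk the cheap
    # analog filter let through.
    x = np.sin(2 * np.pi * 10000 * t) + 0.1 * np.sin(2 * np.pi * 60000 * t)

    # decimate() low-passes below the new Nyquist (24 kHz) before keeping every
    # 4th sample, so the 60 kHz component doesn't alias into the audio band.
    y = decimate(x, 4, ftype="fir")
    print(len(y))  # 48000 samples for the one second of input, i.e. 48 kHz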


You mean average human hearing?

The point is better (and stands on its own) without treading into personal attacks. Don't let a throwaway account bait you into turning it into a battle of name calling instead of sticking to readily available facts, as you do with the link at the end - it's often what they want, and if they were sincere, it severely diminishes the chances they'll take your links in good faith anyways.

Facts matter, as does telling people it is on them to understand them. Otherwise, we will spin in perpetuity refuting people who are not discussing in good faith. I stand by my assertions. I do not believe it is impolite to call out a potential lack of education, or ignoring of facts and reality. Without shared facts and reality, discussion and debate is impossible.

https://en.wikipedia.org/wiki/Brandolini%27s_law


This has every part I agree with, and none of the falling into the trap of looking bad for saying so. They've already edited the comment and posted a new one. Now the insults stand rather than just what you've said here, which was perfectly even-keeled and factual.

If you're going to put the energy into refuting something, why waste it by using personal insults to kick it off? "Uneducated" is at least borderline, if a bit blunt, but "unsophisticated" just drains any value.

I appreciate you deeply for standing up to it, I just don't want to see doing so made to look bad when the facts presented were so solid and good.


I don’t believe asserting that someone is uneducated or unsophisticated is an insult (if true); it is simply a fact and description, and stands regardless of the content of the post. Where you see malice, I see honesty and truth. “If this, then that.”

There are educated people, uneducated people, sophisticated people, and unsophisticated people (and overlap amongst). You will need to tailor your approach accordingly when dealing with each persona.


Hmm, I suppose people can see words in very different ways. If your mother asked why the printer never works for her, would you tell her she's uneducated and unsophisticated for not knowing before sending a link to the manual? It sounds like perhaps you would, but I wonder how many would really agree that's a neutrally worded approach.

Not for me to decide alone any more than anyone else alone I suppose. Thanks for sharing your perspective on it.


Yes, and she would understand why, but that is certainly different than a throwaway account making antagonistic, inflammatory political statements without citations and ignoring facts, no? Context, intent, and nuance, like facts, matter (imho).

It seems likely she would, and we are often similar to our kin I guess, but I still wonder if that's what the average person would consider neutral. I have no better way than the next person to answer that absolutely, though.

I tend to think that's because it doesn't matter who it is: it's always most productive to reply in a way which focuses on substance alone when one can't otherwise be positive. Particularly in pure text, it's so easy for things to come off worse than intended (something which has hit me as hard as anyone in the past). I've always assumed that's why the comment guidelines are so universally worded, mentioning what throwaways should be used for but with no mention of them being exempt from the usual approach. I.e. it's very easy for two people to each feel like they are being neutral in text as the conversation escalates.

I've got to hop off to get ready for work tomorrow. Thanks again for taking the time both to share your perspective and to respond to mal-intended throwaways with solid facts - it matters (thumbs up).


Thanks for telling me to do better. It matters too, and does not fall on deaf ears.

The more obvious something seems the more valuable steelmanning becomes, precisely because if the only steelman arguments you get (if any) are propaganda at best (instead of reasons you just hadn't considered) then you can be that much more confident your outrage is based in reason rather than feelings. My guess is there won't be many coming up with steelman arguments for this one though anyways.

Inviting propaganda is good: let the obviously weak arguments come front and center to be logically considered and ridiculed rather than put in small private group chats where they seem to grow and grow. This only works, in any way, if people stop saying things aren't worth considering just because the answer is obvious to them.


I understand the theory of steelmanning, but in cases like this it's just a high-brow version of the "both sides" style of journalism where you pretend both sides are similarly plausible and deserve equal consideration. At the extremes, the steelmanning can turn into a game of giving the other side more consideration than it deserves.

> Inviting propaganda is good, let the obviously weak arguments come front and center to be logically considered and ridiculed

That's literally what I'm doing: Ridiculing the obviously weak arguments.

And do you know what's happening? My ridicule and dismissiveness are being talked down, while you invite someone to "steelman" the argument instead. This pattern happens over and over again in spaces where steelmanning is held up as virtuous: It's supposed to be a tool for bringing weak arguments into the light so they can be dismissed, yet the people dismissing are told to shush so we can soak up the propaganda from the other side.


An invitation for steelmanning isn't about setting aside equal consideration regardless of the quality of the arguments brought; it's about giving an equal chance to even consider other ideas when one cannot seem to find any on their own. When the merits are poor, that leaves far less than equal consideration of them in the end. Making the consideration brought forth equal in total time spent regardless of quality has nothing to do with the steelmanning process. A weak steelman argument is precisely a confirmation that the opposing view is not worth much consideration, if any.

> That's literally what I'm doing: Ridiculing the obviously weak arguments.
>
> And do you know what's happening? My ridicule and dismissiveness are being talked down, while you invite someone to "steelman" the argument instead. This pattern happens over and over again in spaces where steelmanning is held up as virtuous: It's supposed to be a tool for bringing weak arguments into the light so they can be dismissed, yet the people dismissing are told to shush so we can soak up the propaganda from the other side.

So far all I've seen in this chain is complaint about the possibility that other arguments may be brought up, for fear we'd have to consider them if they were. At no point is the goal supposed to be that everyone ends up agreeing with how one particular person sees things; it's supposed to be that what everyone believes they understand is openly put on the table and given appropriate consideration for the merits of the points presented. There will always be someone upset their position receives ridicule; that's neither here nor there for those wanting to strengthen their understanding of the situation instead of demanding any other discussion can only ever be propaganda and should not be given a single thought. Again, a lot of the time the steelman idea is still bad - and that's still a good signal, which doesn't require giving that position equal weight in the end.


Have you considered that maybe you just want to live in an echo chamber?

Official written statement (same as the speech given) for those who prefer it: https://www.federalreserve.gov/newsevents/speech/powell20260...

It's the way the internet was meant to work, but that doesn't make it any easier. Even when everything is in containers/VMs/separate users, if you don't put a decent amount of additional effort into automatic updates and keeping that context hardened as you tinker with it, it's quite annoying when it gets pwned.

There was a popular post about this less than a month ago: https://news.ycombinator.com/item?id=46305585

I agree maintaining wireguard is a good compromise. It may not be "the way the internet was intended to work" but it lets you keep something which feels very close without relying on a 3rd party or exposing everything directly. On top of that, it's really not any more work than Tailscale to maintain.
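
For anyone weighing the options, a rough sketch of what "maintaining wireguard" amounts to (keys, hostnames, and subnets below are placeholders, not anything real):

    # /etc/wireguard/wg0.conf on the home server
    [Interface]
    Address = 10.8.0.1/24
    ListenPort = 51820
    PrivateKey = <server-private-key>

    # one [Peer] block per device (phone, laptop, ...)
    [Peer]
    PublicKey = <laptop-public-key>
    AllowedIPs = 10.8.0.2/32

    # wg0.conf on the laptop/phone
    [Interface]
    Address = 10.8.0.2/24
    PrivateKey = <laptop-private-key>

    [Peer]
    PublicKey = <server-public-key>
    Endpoint = home.example.net:51820
    # VPN subnet plus whatever of the home LAN you want reachable
    AllowedIPs = 10.8.0.0/24, 192.168.1.0/24
    PersistentKeepalive = 25

After that it's mostly just keeping the package updated and adding a peer block when you get a new device.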


> There was a popular post about this less than a month ago: https://news.ycombinator.com/item?id=46305585

This incident precisely shows that containerization worked as intended and protected the host.


It protected the host itself, but it did not protect the server from being compromised and running malware that mined cryptocurrency.

Containerizing your publicly exposed service will also not protect your HTTP server from hosting malware or your SMTP server from sending spam; it only means you've protected your SMTP server from your compromised HTTP server (assuming you've even locked it down correctly, which is exactly the kind of thing people don't want to have to worry about).

Tailscale hands the protection of the publicly exposed portion of the story to a company dedicated to keeping that portion secure. Wireguard (or similar) limits the exposure to a single service with low churn and a minimal attack surface. It's a very different discussion than preventing lateral movement alone. And that all goes without mentioning that not everyone wants to deal with containers in the first place (though many do in either scenario).


I just run an SSH server and forward local ports through that as needed. Simple (at least to me).

I do that as well, along with using sshd as a SOCKS proxy for web-based stuff via Firefox, but it can be a bit of a pain to forward each service on each host individually if you have more than a few things going on - especially if you have things trying to use the same port and need to keep track of how you mapped them locally. It can also be a lot harder to manage on mobile devices. E.g. say you have some media or home automation services - they won't be as easy to access via port forwarding through a single public SSH host (if at all) as they would be over a VPN, and wireguard is about as easy as a personal VPN gets.

That's where wg/Tailscale come in - it's just a traditional IP network at that point. There's also less to do to quiet bad login attempts from spam bots and such. I once forgot to configure the log settings on sshd and ended up with GBs of logs in a week.

The other big upside (besides not relying on a 3rd party) of putting in the slightly greater effort to run wg/ssh/another personal VPN is that the latency and bandwidth to your home services will be better.
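
For anyone curious what the plain-ssh approach looks like in practice (hypothetical hostnames and ports):

    # forward one local port to a service on the home LAN
    ssh -L 8096:mediabox.lan:8096 user@home.example.net

    # or open a SOCKS proxy on localhost:1080 and point Firefox at it
    ssh -D 1080 -N user@home.example.net

It works fine, it's just one mapping to remember per service rather than one routable network.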


> and wireguard is about as easy a personal VPN as there is.

I would argue OpenVPN is easier. I currently run both (there are some networks I can’t use UDP on, and I haven’t bothered figuring out how to get wireguard to work with TCP), and the OpenVPN initial configuration was easier, as is adding clients (DHCP, pre-shared cert+username/password).

This isn’t to say wireguard is hard. But imo OpenVPN is still easier - and it works everywhere out of the box. (The exception is networks that only let you talk on 80 and 443, but you can solve that by hosting OpenVPN on 443, in my experience.)

This is all based on my experience with opnsense as the vpn host (+router/firewall/DNS/DHCP). Maybe it would be a different story if I was trying to run the VPN server on a machine behind my router, but I have no reason to do so - I get at least 500Mbps symmetrical through OpenVPN, and that’s just the fastest network I’ve tested a client on. And even if that is the limit, that’s good enough for me, I don’t need faster throughput on my VPN since I’m almost always going to be latency limited.


How many random people do you have hitting port 22 on a given day?

Dozens. Maybe hundreds. But they can't get in as they don't have the key.

change port.

After years of cargo-culting this advice—"run ssh on a nonstandard port"—I gave up and reverted to 22 because ssh being on nonstandard ports didn't change the volume of access attempts in the slightest. It was thousands per day on port 22, and thousands per day on port anything-else-i-changed-it-to.

It's worth an assessment of what you _think_ running ssh on a nonstandard port protects you against, and what it's actually doing. It won't stop anything other than the lightest and most casual script-based shotgun attacks, and it won't help you if someone is attempting to exploit an actual-for-real vuln in the ssh authentication or login process. And although I'm aware the plural of "anecdote" isn't "data," it sure as hell didn't reduce the volume of login attempts.

Public key-only auth + strict allowlists will do a lot more for your security posture. If you feel like ssh is using enough CPU rejecting bad login attempts to actually make you notice, stick it behind wireguard or set up port-knocking.

And sure, put it on a nonstandard port, if it makes you feel better. But it doesn't really do much, and anyone hitting your host up with censys.io or any other assessment tool will see your nonstandard ssh port instantly.
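
Something along these lines in sshd_config covers the "key-only + allowlist" part (a sketch - the directive names are standard OpenSSH ones, the user and address are obviously placeholders):

    # key-only auth, no root, explicit allowlist
    PasswordAuthentication no
    KbdInteractiveAuthentication no
    PubkeyAuthentication yes
    PermitRootLogin no
    AllowUsers alice

    # optionally, only listen on the VPN/wireguard interface
    #ListenAddress 10.8.0.1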


Conversely, what do you gain by using a standard port?

Now, I do agree a non-standard port is not a security tool, but it doesn't hurt running a random high-number port.


> Conversely, what do you gain by using a standard port?

One less setup step in the runbook, one less thing to remember. But I agree, it doesn't hurt! It just doesn't really help, either.


I've tried using a nonstandard port, but I still see a bunch of IPs getting banned, with the added downside that when I'm on the go I sometimes don't remember the port.

Underrated reply - I randomize the default ports everywhere I can, really cuts down on brute force/credential stuffing attempts.

or keep the port and move to IPv6 only.

Also, to Simon: I am not sure how iPhone works, but on Android you could probably use mosh and termux to connect to the server and get the same end result without relying on a third party (in this case Tailscale).

I am sure there must be an iPhone app which allows something like this too. I highly recommend more people take a look at such a workflow; I might look into it more myself.

Tmate is a wonderful service if you have home networks behind NATs.

I personally like using the hosted instance of tmate (tmate.io) itself, but it can be self-hosted and is open source.

Once again it has the third-party issue, but luckily it can be self-hosted, so you can even get a mini VPS on Hetzner/UpCloud/OVH and route traffic through that by hosting tmate there, so YMMV.


>It's the way the internet was meant to work, but that doesn't make it any easier. Even when everything is in containers/VMs/separate users, if you don't put a decent amount of additional effort into automatic updates and keeping that context hardened as you tinker with it, it's quite annoying when it gets pwned.

As someone who spent decades implementing and securing networks and internet-facing services for corporations large and small as well as self-hosting my own services for much of that time, the primary lesson I've learned and tried to pass on to clients, colleagues and family is:

   If you expose it to the Internet, assume it will be pwned at some point.

No, that's not universally true. But it's a smart assumption to make for several reasons:

1. No software is completely bug free and those bugs can expose your service(s) to compromise;

2. Humans (and their creations) are imperfect and will make mistakes -- possibly exposing your service(s) to compromise;

3. Bad actors, ranging from marginally competent script kiddies to master crackers with big salaries and big budgets from governments and criminal organizations are out there 24x7 trying to break into whatever systems they can reach.

The above applies just as much to tailscale or wireguard as it does to ssh/http(s)/imap/smtp/etc.

I'll say it again as it's possibly the most important concept related to exposing anything:

   If you expose it to the Internet, assume that, at some point, it will be compromised and plan accordingly.

If you're lucky (and good), it may not happen while you're responsible for it, but assuming it will and having a plan to mitigate/control an "inevitable" compromise will save your bacon much better than just relying on someone else's code to never break or have bugs which put you at risk.

Want to expose ports? Use Wireguard? Tailscale? HAProxy? Go for it.

And do so in ways that meet your requirements/use cases. But don't forget to at least think (better yet script/document) about what you will do if your services are compromised.

Because odds are that one day they will.


This may not be the exact one you were thinking of, but it's a very similar kind of collection: https://attrition.org/misc/ee/protolol.txt
