Extreme Codesign Across NVIDIA Vera CPU, Rubin GPU, NVLink 6 Switch, ConnectX-9 SuperNIC, BlueField-4 DPU and Spectrum-6 Ethernet Switch Slashes Training Time and Inference Token Generation Cost
> But sound quality in general is absolutely objectively measurable
I'm not sure it's straightforward to objectively measure sound quality. I went down this rabbit hole a while ago because I was confused why my 1950s Leak Stereo 20 sounded more pleasant to me than far superior modern equipment. To me the old amplifier makes every track sound like it's being played on a good day, while my more analytical equipment is revealing of shortcomings and becomes harsh over long listening periods. From what I understand, part of the reason for this is that not all harmonic distortion is equally offensive with lower-order and even harmonics being far less psychoacoustically unpleasant, and while the old amp produces much more total harmonic distortion it's of a less fatiguing nature than the lesser distortion present in modern amps.
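For what it's worth, the standard single-number spec here (THD) treats all of that distortion the same, which is part of why the numbers can mislead. A tiny sketch with made-up harmonic levels, not measurements of any real amp:

```python
import math

# Relative amplitudes of harmonics versus the fundamental (=1.0).
# These numbers are purely illustrative, not measurements of real amps.
tube_amp   = {2: 0.030, 3: 0.008, 4: 0.004, 5: 0.002}   # mostly low-order/even harmonics
modern_amp = {3: 0.004, 5: 0.004, 7: 0.006, 9: 0.006}   # less total, but high-order/odd

def thd(harmonics):
    """Total harmonic distortion: RMS sum of harmonic amplitudes over the fundamental."""
    return math.sqrt(sum(a * a for a in harmonics.values()))

print(f"tube-style THD:   {thd(tube_amp):.2%}")    # ~3.1%
print(f"modern-style THD: {thd(modern_amp):.2%}")  # ~1.0%
# THD alone says the second amp is "better", even though its distortion is
# concentrated in high-order odd harmonics, which (per the argument above)
# are the psychoacoustically fatiguing ones.
```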
I think it's quite hard to measure 'sound quality' objectively because it depends so much on quirks of human perception; the most accurate measure would probably be double-blind tests of actual amplifiers and speakers. I'd be fascinated to see how my Leak stacks up in a double-blind test against modern amplifiers, for example, for better or worse.
I've followed the research a little bit. The general sense I get is that, specifically for vehicle control at the edge of traction, software in the lab has far outperformed normal humans for over a decade. The problem is that delivering the "boring" point-A-to-point-B drive reliably in all conditions is still unsolved. Relative safety is also a moving target, because all the advances in the first bucket are directly applicable to human-driven cars as driver aids.
The new Model Y AWD non-Performance does 0-60 in 4.6 seconds! The Performance version is ~3.3; the Ioniq 5 AWD is 4.6 and the N is 2.9 (!!!). For comparison, the latest Corvette does 0-60 in 2.9 and the Z06 in 2.6.
These cars are insanely, incredibly fast. My G70 (gasoline, ~370 HP) does the jaunt in 4.5. That used to be considered a fast car; now it's just average (though the warranty is almost over, and I'll be modifying it to ~450 HP).
TBH, electric cars 100% broke auto enthusiast circles. When a highly modified, very fast car just gets stomped by an electric car hauling a family of 4, it smashes that world to pieces. Especially in the early days, when EV enthusiasts were mostly Tesla techbro fanboys, who didn't really mix well with the oil, grease, and gasoline culture that was there before.
This is a great PR move for Bose in a market that doesn't care about name brands like it used to. Maybe they can win some customers back and be considered cool again.
Really? Are you running multiple agents at a time? I'm on Microsoft's $40/mo plan and even using Opus 4.5 all day (one agent at a time), I'm not reaching the limit.
People have allowed themselves to become so dependent on mobile phones that I'm frankly disgusted. You're talking about a scenario where you're worried about being illegally arrested by the secret police, aided by their tracking of your phone, and yet that still isn't enough to make you consider using your phone less. It's no different than a rat starving to death but continuing to push the lever for the cocaine hit.
> We’re making our technical specifications available so that independent developers can create their own SoundTouch-compatible tools and features. The documentation is available here: SoundTouch API Documentation (https://assets.bosecreative.com/m/496577402d128874/original/...).
AFAIK, the SoundTouch web API was already accessible via some Bose developer portal. It doesn't seem like they are open-sourcing anything. This API just lets you make basic requests to do things like change the volume on the speaker.
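If memory serves, those basic requests go to a small HTTP/XML interface the speaker exposes locally on port 8090. Something along these lines has worked against SoundTouch units, though the endpoint names and payload format here are from memory, so treat them as assumptions:

```python
import requests

# Local IP of the SoundTouch speaker on your LAN (assumption; substitute your own).
SPEAKER = "http://192.168.1.50:8090"

# Read the current volume; the local interface speaks XML, not JSON.
print(requests.get(f"{SPEAKER}/volume", timeout=5).text)

# Set the volume to 25 by POSTing an XML body to the same endpoint.
requests.post(f"{SPEAKER}/volume", data="<volume>25</volume>", timeout=5)
```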
To support the smart features of the SoundTouch speakers, we would need the SoundTouch user management service. Speakers connect to this very frequently, and it's where refresh tokens for music services and presets are stored. The speaker firmware itself has lots of source code, including the bits that handle music services and playback. There is an abstraction layer for music service APIs. There is a process on the speaker that reaches out to a music service registry, which is an index of Bose music service adapters. Each of these adapters essentially proxies a music service like TuneIn, Pandora, and even the "stream a custom station" feature.
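To illustrate the shape of that registry/adapter layer, here's a rough sketch. Every name, endpoint, and field below is made up for illustration, reconstructed from my memory of how the pieces interact; it is not Bose code or a published interface:

```python
# Hypothetical sketch of the registry -> adapter -> music service flow described above.
from flask import Flask, jsonify

app = Flask(__name__)

# The "music service registry": an index mapping each supported service to the
# adapter the speaker should talk to for that service.
REGISTRY = {
    "tunein":         {"adapter_url": "https://adapters.example.com/tunein"},
    "pandora":        {"adapter_url": "https://adapters.example.com/pandora"},
    "custom_station": {"adapter_url": "https://adapters.example.com/custom"},
}

@app.route("/registry")
def registry():
    # The process on the speaker fetches this index to learn where each adapter lives.
    return jsonify(REGISTRY)

@app.route("/tunein/station/<station_id>")
def tunein_adapter(station_id):
    # An adapter's job: translate a service-specific request into something the
    # speaker's playback layer can consume (here, just a placeholder stream URL).
    return jsonify({"stream_url": f"https://streams.example.com/tunein/{station_id}"})

if __name__ == "__main__":
    app.run(port=8080)
```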
If Bose open-sourced the speaker firmware, we could make a firmware build that talks to a third-party user management service and reaches out to a third-party music service registry, much like the sketch above. Then we could add and maintain music service playback for the community. But there is no open-sourcing of any actual code here, and this SoundTouch web API cannot change the URLs of the user management service or the music service registry baked into the existing firmware.
So to my eye this story seems misleading and just some PR nonsense. Disclaimer: I used to work at Bose.
I've been playing with the idea for a bit; can you give me an order of magnitude for "entry-level HiFi"? Even if that's an oxymoron, how many zeroes does it take to get an experience that's noticeably superior to, say, default car speakers or built-in smart TV speakers?
Is there any evidence that the "NSA can turn on your phone even if it's off" or "location toggles on phones don't actually do anything" conspiracy theories are true? Even if the NSA has such capabilities, is there any reason to believe they'll burn them to go after some ICE protester? That's the type of thing you'd save so you can use it to go after bin Laden, not burn on some run-of-the-mill protester.
Thanks! Yes, as the sibling noted, if you limit this to PLP drives it makes sense, but that is also a special case. Outside of the latency hit (which is significant in some cases), FLUSH is also nearly free on those though.
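For anyone following along, a trivial way to see the latency side of this on your own machine (the path is hypothetical, and the number you get is dominated by whether the device actually has to honor the flush or, like a PLP drive, can acknowledge it almost immediately from its protected cache):

```python
import os, time

# Hypothetical path; point it at the filesystem/device you care about (not tmpfs).
path = "/mnt/data/flush_test.bin"

fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o644)
os.write(fd, b"x" * 4096)

start = time.perf_counter()
os.fsync(fd)  # asks the kernel to flush the device's write cache
print(f"fsync took {(time.perf_counter() - start) * 1e6:.0f} us")
os.close(fd)
```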
It's been building for a long time; it's not recent per se, just accelerated.
2025 showed that you can't just go "ok, it's over now, we'll go back to business as usual" (like I know the limp-wristed Dems will want to do), or it'll repeat after every other election until it's successful. You just cannot have this many people constantly being convinced they live in an alternate reality for much longer without civilizational collapse.
But I think it's gone too far and we're witnessing the fall of the empire in real-time. I'm just hoping that fall won't screw up the rest of the world too much, but I'm pretty sure it will.
They have been catching up pretty fast, though. Since the US has banned Nvidia chip exports to them to a certain extent, they are at least coming up with more optimized training, which is why the US should be wary of them. China does more with less.
The simplest thing is not to do anything but defer to the caller to handle it, in languages like Python, Java, and JavaScript. Often that is also the only realistic way to handle an error, especially in library code.
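A minimal Python sketch of what I mean (the filenames and defaults are made up): the library function doesn't catch anything it can't meaningfully handle, and the caller, who actually knows the context, decides what a failure means:

```python
import json

# Library code: don't catch what you can't handle; let errors propagate to the caller.
def load_config(path):
    with open(path) as f:        # FileNotFoundError, PermissionError just bubble up
        return json.load(f)      # likewise json.JSONDecodeError

# Application code: the caller knows what a missing config means in *this* program.
def main():
    try:
        cfg = load_config("app.json")   # hypothetical filename
    except FileNotFoundError:
        cfg = {"debug": False}          # fall back to defaults, or exit with a clear message
    print(cfg)

if __name__ == "__main__":
    main()
```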
Your view of the world we live in is extremely different from mine. I don't think it is remotely correct, but hey, that's just, like, my opinion. I hope you find some peace in the midst of your experience.
Stuxnet v2? Speculation, I know, but wow: IPv4 came back up, but IPv6 is completely out. Looks like 48 million devices, compared to IPv4's 47 thousand (wow, that's insane).
For context: there is a call to action from an opposition leader for people to join the protests today. They normally cut off internet infrastructure on purpose in these cases so people cannot communicate.
Do not use devices that can be trivially tracked through the cell network, or that can be surveilled by big tech. This means a device bought anonymously, a free/libre OS like Graphene, no Google/Facebook/Apple spyware apps, and an anonymous SIM paid for with cash or crypto. This should be done by everyone to avoid the possibility of mass surveillance, not only people who have something to hide from a three-letter agency. If you really have something to hide, then the cellular network shouldn't be used at all.