> [std::move silently copies const values, because] If something is const, you can’t move from it by definition.
Whoever wrote that definition could learn a thing or two from Rust. Different language, I know, but it proves that it wasn't necessary to cause so much confusion and, collectively, so much lost time and performance.
Also, who writes rules like that and ends the day satisfied with the result? It seems hard to feel content leaving huge footguns behind and happily pushing the Publish button. I'd rather not ship a feature than do a half-assed job of it. Comparing attitudes toward language development and additions makes me appreciate the way it's done for Go, even though it has its warts too.
By 2011 Rust has a logo (basically its current logo) and a compiler written in Rust (a distant ancestor of today's main Rust compiler). It's approaching Rust 0.1 (released January 2012, apparently), which is a very different language from Rust 1.0 -- but that's a long way from "there was no Rust in 2011" to my mind.
The point is not a comparison with Rust per se, but the fact that a better implementation of the idea was mathematically and/or technically possible; and the personal opinion that the huge footguns the language accumulates over the years are perhaps signals that features needed more thought before they were considered ready.
E.g., if something as simple as an inconspicuous std::move in the wrong place can break the whole assumption behind move semantics, then make that impossible to do, or at least don't make it the default happy path, before you consider it production-ready. What the heck, at the very least make sure it produces a compiler warning?
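To make the footgun concrete, a minimal sketch (my illustration, not from the article): std::move on a const object compiles cleanly but silently degrades to a copy, because const T&& can't bind to the move constructor's T&& parameter.

```cpp
#include <string>
#include <utility>

int main() {
    const std::string a = "expensive buffer";
    // Looks like a move, but 'a' is const: std::move(a) yields const
    // std::string&&, which can't bind to the move constructor (std::string&&),
    // so overload resolution silently falls back to the copy constructor.
    std::string b = std::move(a);  // copies, with no diagnostic by default

    std::string c = "expensive buffer";
    std::string d = std::move(c);  // actually moves; 'c' is left moved-from
}
```

Compilers accept this silently, though static analyzers can catch it; clang-tidy's performance-move-const-arg check exists for exactly this pattern.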
Hence the mention of Go and how they follow exactly this path of extending discussion as long as needed, even if it takes 10 years, until a reasonable solution is found -- with maybe small gaps, but never huge ones such as those explained in this article (plus tens of others in any other text about the language).
A bit more if we consider the "bugfixing" release that was C++14 :)
But yeah, it makes sense, given how that was the jumpstart of the whole modernization of the language. I believe it was a big undertaking that required the time it took. Still, years have passed and footguns keep accumulating... it wouldn't hurt to have a mechanism to optionally drop the old cruft from the language. Otherwise everything stacks on top in the name of backwards compatibility, but at this pace, what will C++36 look like?
a member of the c++ committee (herb sutter) is writing a compiler for an alternative c++ syntax [0] that compiles down to c++, with the intent of restricting some semantics of the language for less UB, fewer surprises, etc. i think less implementation-defined behavior is incredibly important; rvo vs std::move, dynamic function call optimization... i wish i didn't have to search the asm to check for these.
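On the rvo vs std::move point, a minimal sketch of the pessimization (my example, not the parent's): wrapping a local return value in std::move disables NRVO and forces a move that copy elision would have eliminated entirely.

```cpp
#include <string>
#include <utility>

std::string make_good() {
    std::string s = "local";
    return s;             // NRVO: constructed directly in the caller's storage
}

std::string make_bad() {
    std::string s = "local";
    return std::move(s);  // pessimizing move: the cast disables NRVO, so the
                          // result is move-constructed instead of elided
}
```

This one at least is diagnosable: GCC and Clang warn about it under -Wpessimizing-move.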
While I share the sentiment, compare C# 14 with C# 1.0, Java 25 with Java 1.0, Python 3.14 with Python 1.0.
While C++ might be worse, when you have 300+ people writing proposals every three years, other languages aren't safe from similar churn either, even if at a smaller volume, while also trying to keep backwards compatibility going.
And we all know what happened in Python.
Also, contrary to what many think, Rust editions only cover a specific set of language-evolution scenarios; it isn't anything goes, nor is there support for binary libraries.
As for a better C++: contrary to C, where it is business as usual, there are actually people trying to sort things out in WG21, even if it isn't going as well as we would like.
"Making C++ Safe, Healthy, and Efficient - CppCon 2025"
This is a weird call-out because it's both completely incorrect and completely irrelevant to the larger point.
Rust absolutely supports binary libraries. The only way to use a Rust library with the current Rust compiler is to first compile it to a binary format and then link to it.
More so than C++ where header files (and thus generics via templates) are textual.
Cargo, the most common build system for rust, insists on compiling every library itself (with narrow exceptions - that include for instance the precompiled standard library that is used by just about everyone). That's just a design choice of cargo, not the language.
The story is that it must not matter which edition a library was compiled with - it's the boundary layer at which different editions interoperate with each other.
Provided everything is available in source form, there are no semantic changes at the boundary level, and no standard-library types that changed across editions are used in the library's public API.
What’s the problem? It makes perfect sense to me that a const object cannot be moved from, since moving from it would violate its constness. And since constness goes hand in hand with thread safety, you really don’t want that violation.
There are cases where you would not want to reject such code, though. For example, if std::move() is called inside a template function where the type in some instantiations resolves to const T, and the intent is indeed for the value to be copied. If move may in some cases cause a compiler error, then you would need to write specializations that don't call it.
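A minimal sketch of that scenario (my own hypothetical collect() template, purely for illustration): std::move is just a cast, and whether an instantiation moves or copies depends on T's constness; making the const case a hard error would force specializations.

```cpp
#include <string>
#include <utility>
#include <vector>

// Generic sink: moves from mutable values, copies from const ones.
template <typename T>
std::vector<std::string> collect(T& value) {
    std::vector<std::string> out;
    out.push_back(std::move(value));  // moves if T = std::string,
                                      // copies if T = const std::string
    return out;
}

int main() {
    std::string s = "mutable";
    const std::string cs = "constant";
    auto a = collect(s);   // T = std::string: s is moved from
    auto b = collect(cs);  // T = const std::string: the silent copy is
                           // exactly the intended behavior here
}
```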
It's weird that they made the mistake of allowing this after having so many years to learn from their earlier mistake of copies already being non-obvious (by which I mean that pass-by-reference and pass-by-value look identical at the call site).
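For readers outside C++, a minimal sketch of that earlier mistake (hypothetical functions, just to show the point): at a call site, nothing distinguishes pass-by-value from pass-by-reference, so you can't see which calls copy and which can mutate your object.

```cpp
#include <string>

void by_value(std::string s) { s += "!"; }       // operates on a private copy
void by_reference(std::string& s) { s += "!"; }  // mutates the caller's object

int main() {
    std::string name = "hn";
    by_value(name);      // the two call sites look identical, yet only
    by_reference(name);  // this one actually changes 'name'
}
```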
My first entry-level job, fresh out of university, was writing C++ with Qt for a computer vision app. And that was my actual first contact with C++ (I had seen C and Java in uni).
It was no biggie, just joining the low level of C with the class notions from Java. Pair that with the C++FAQ website, and it was easy.
Are entry-level devs generally not able to do that nowadays? I do not believe people are generally more stupid or less capable, so, is education so much worse or what's going on?
That's a reasonable argument for businesspeople, but it doesn't apply to the general public. Because chances are that, except in a minority of situations, they are on holiday, and during that saved time they wouldn't have been working anyway.
People who could perfectly afford a $2,000 plane ticket still fly with $400 ones (as long as they are within reasonable standards), for example because they have a set budget for a given trip and the expensive option would blow through it, so they don't mind the extra time.
Even most businesspeople aren't really that hyper-scheduled on trips--especially the ones that can't book whatever class they want.
And to your latter point, I can afford higher-class tickets but it comes back to what I could do with the money instead like a nice dinner. I don't tend to have a budget per se but I do recognize tradeoffs.
Have you ever picked a slightly more expensive nonstop flight instead of one with a layover for a vacation?
This is similar. 3.5 hours vs 7 hours is a pretty good difference.
You can take a 3.5-hour flight in the morning and have the energy to see a city for the whole day after that. Maybe not after a 7-hour flight, unless you are a pretty experienced and motivated traveler who can sleep the entire flight and still have the mental energy to enjoy new things afterwards.
I do actually think you're right, but the counterpoint is that airlines have slowed down all their flights to save money, and no one has come in offering a faster flight in exchange for more money.
Maybe the delta just isn't enough to matter? Or maybe people aren't willing to pay for it.
We know the tech is there. It used to take 45 minutes to fly from LAX to SFO. Now it's 70 minutes. That's not a tech problem, it's a logistics/fuel problem. But if people really valued the difference, they would offer a 45 minute flight for more money.
Or when I leave Boston to go to San Francisco, and we leave an hour late but still arrive on time, it's because they were able to go faster. We certainly have the tech to go faster.
So why can't I buy a BOS->SFO flight that is one hour shorter for more money? Probably because of a lack of willingness to pay.
> Or when I leave Boston to go to San Francisco, and we leave an hour late but still arrive on time, it's because they were able to go faster. We certainly have the tech to go faster.
Catching favorable winds and burning more fuel. It is in the airlines' best interest to have the plane in position for the next flight, so they will burn the fuel when they need to. However, committing to a tighter schedule would cause a lot of problems if they were late too often, the kinds of problems that mean they would make less money than with the current schedule.
> However, committing to a tighter schedule would cause a lot of problems if they were late too often, the kinds of problems that mean they would make less money than with the current schedule.
There is always a price at which this isn't the case. My overall point is that that price is still too high and people aren't willing to pay it, and we don't really know if that's true (though the airlines probably do).
That depends entirely on how much "slightly more expensive" is. For the vast majority of the travelling public, they'll choose the cheaper option and we know that because that's what they choose already.
Most major airports are at their physical limit in terms of both airfield and gate traffic and are charging extremely high gate fees. I'm not in airline logistics but I would bet my bottom dollar that is the true constraint in having more traffic fly into hubs.
> The biggest missing piece in Zed for my workflow right now is side-by-side diffs.
> It’s pretty wild how bloated most software has become.
It's a bit ironic to see those two in the same message, but I'd argue that right there is an example of why software becomes bloated. There is always someone who says "but it would be great to have X", asking for something that in spirit might be tangentially relevant, but is a whole ordeal of its own.
Diffing text, for example, requires a very different set of tools and techniques than what a plain text editor would already have. That's why there are standalone products like Meld and the very good Beyond Compare; and they tend to be much better than a jack-of-all-trades editor (at least I was never able to like the diff UI in e.g. VSCode more than the UI of Meld or the customization features of BC).
Same for other tangential stuff like VCS integration; VSCode has something in there, but any special purpose app is miles ahead in ease of use and features.
In the end, the creators of an editor need to spend so much time adding what amounts to supplemental and peripheral features, instead of focusing on the best possible core product. Expectations are so high that the sky is the limit. Everyone wants their own pet sub-feature ("when will it integrate a Pomodoro timer?").
This is a sharp observation, and it goes even further: BeyondCompare easily allows one to hop into an editor at a specific location, while Total Commander, with its side-by-side view of the world, is an excellent trampoline into BeyondCompare.
In this kind of ecosystem (where visual tools strive to achieve some Unix-like collaboration), the super power of editors (and IDEs) is their scripting language, and in this arena it is still hard to beat Emacs (with capabilities that were present maybe 40 years ago).
He's applying kind emotions to his customer relationship with a company, because that gives him emotional leverage from which to feel attacked and be right in the mind of the reader.
Not that he needed to do any of this. He wouldn't be a tiny bit less right if the text were a sterile and objective list of facts about what happened.
Oh, not at all. I appreciate the rhetoric, although it didn't work on me, because I've never experienced Google as childish or even amicable. So the author's excessive predisposition to think of Google as a marvelous friend was a bit jarring for me (and that's why I personally felt the tone veered a bit into emotional manipulation). But, to each their own. Expressing commentary and emotions is good; I do prefer it to cold facts every time :-)
If that is really the case (I don't know the numbers for React), projects with sane security criteria would either only jump between versions that have passed a complete verification process (think industry certifications), or they would decide that such an enormous number of dependencies renders the framework an undesirable tool and simply avoid it. What's not serious is living the easy life and blindly incorporating 15-17K dependencies because YOLO.
(so yes, I'm stating that 99% of JS devs who _do_ precisely that, are not being serious, but at the same time I understand they just follow the "best practices" that the ecosystem pushes downstream, so it's understandable that most don't want to swim against the current when the whole ecosystem itself is not being serious either)
A great power of Firefox is its add-ons, isn't it? Sidebery [1] has been a solid implementation of vertical tabs for a long time. Before that got popular, Tree Style Tab [2] was also a very comprehensive solution.
But nowadays, vertical tabs are native since Firefox v136 [3][4], so at least for the basics you won't need an add-on.
That's a whole lot of people being jerks and victim-blaming. He was deceived by a supposedly reputable company into trusting and using a product that in reality offers nothing more than amateurish levels of reliability.
I'd love to see more reasoning about the decision process for selecting one static site generator in particular. There are a ton of them, and for sure a bunch we could call "the big ones", so anyone deciding to migrate will probably go through the same process of evaluating and choosing; i.e. Hugo, Eleventy (11ty), Jekyll, and a couple more are the best known. Seeing Jeff's decision process could be interesting.
Hugo is very well established, but at the same time it's known for not caring too much about introducing breaking changes. I think any project of that age should respect its large user base and provide a strong backwards-compatibility guarantee for the inputs/outputs it defines for itself, rather than wallowing in an eternal 0.x syndrome, calling itself young enough to still be finding its footing in terms of stability... but I digress. In fact, Hugo hasn't been great in that regard: themes and previously working inputs do break with updates, which, in this house of mine, is a big drawback.
> We did a complete overhaul of Hugo’s template system in v0.146.0. We’re working on getting all of the relevant documentation up to date, but until then, see this page.
I don't mind breaking changes, but it'd sure be nice if the documentation reflected the changes.