Powell's education left him (like all of his peers) unable to comprehend an economy that is not in labor surplus. His one thought is to get back to the familiar ground that was tacitly assumed in all of his textbooks. The known methods of causing a labor surplus will no longer work, but he knows nothing else, and neither does anyone who might be a candidate to replace him.
Silicon Valley used to have engineering managers who managed engineering.
As the money got bigger we got more grifters / professional-manager types. The first thing they do is rebrand middle management as “leaders”; the next is to make management non-technical.
This has even bled into higher-level IC engineering roles being framed as “above” coding: “Staff engineers don’t code, they set high-level architecture.”
This is toxic to an engineering org in many ways. Firstly, you now have a bunch of highly paid technical employees completely removed from how things actually work. But what’s worse is that you’ve created a culture with a perverse incentive: a senior engineer who wants to get promoted should write less code, because coding is associated with being a low-level employee.
The fundamental root cause is a misunderstanding of code as low-level factory work rather than something intrinsically tied to design and architecture. But it’s one of many ways in which traditional business structures and software engineering do not mesh, and you need an extremely strong engineering leader to keep software culture on track, which very few organizations have.
There's another big factor at play: it's become increasingly easy to film yourself in combat, and leadership now encourages it for propaganda reasons.
I've been going to r/combatfootage every day for the past ~3 years; it has exploded in popularity because, like a lot of things in life, war is now recorded and disseminated widely. The Ukraine war, specifically with drones outfitted with explosives, has shown this. GoPros seem almost standard for combat troops.
I have to imagine Israelis are learning about the conflict in real time partly through the footage Hamas is releasing. The fact that the footage is so unprecedented (fighters paragliding in, a Hamas reporter reporting from inside Israel) probably adds to the demand for this content.
I'll summarize this for people who don't want to read through the PDF, or aren't familiar with the jargon.
It's a discussion between two people, Jerry and Anil. Anil seems to be representing the Chrome team, and Jerry the Ads team and/or sales.
7 months prior, some feature related to Chrome's omnibox (url/search box) was rolled out, leading to reduced searches ("SQV"). Jerry is asking for this feature to be rolled back (undone) to restore lost revenue. Anil is trying to keep the feature by finding other ways to make up the lost revenue.
Anil is opposed to rolling back the feature because it is a user-visible change that was approved by all parties and launched months ago, so it will be frustrating to users to lose the feature and to developers to see their work canned.
Anil accelerates the launch of some other features to improve revenue, but Jerry is not satisfied. After some back and forth he sends the final email laying out his case: the revenue impact is too severe, sales is going to miss quota, quarterly earnings will be below forecast, stock price will decline, employees will lose out on stock-based compensation. This last email is cc'ed widely so I think Jerry was trying to build more pressure on Anil/Chrome team.
edit: do read the pdf though, there's a lot more detail in it.
Walter Benjamin wrote about this all the way back in the 1930s. He observed that early art like frescos painted on walls and sculptures in temples require the viewer to travel to them, but they gave way to paintings on canvas and busts that could travel to cities to meet audiences where they were.
Technology continued to push this trend: reproducing art through photography and printing it in books and newspapers let it move even further, meeting people in their own homes.
These current patterns you are seeing are an extension of this: the relationship between art and viewer has inverted, art is now expected to come to us, and the focus has moved to within ourselves.
Marshall McLuhan also expanded on this, and on the idea of technology as extensions of us, in his work "Understanding Media: The Extensions of Man" if you'd like to read more.
It kind of had to get sent back to the drawing board to work well (avoid all the absurd one-at-a-time limitations of generators), but I think async-iteration-helpers will be an incidental huge help in normalizing/nudging the consumer API into "just an async iterable". Updating/writing state might be more complex and speak to the implementation, but seeing things change should have a semi-standard API.
https://github.com/tc39/proposal-async-iterator-helpers
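To make the "just an async iterable" consumer shape concrete, here's a minimal TypeScript sketch; the watchValue name and the /status endpoint are made up for illustration, and the only point is that the consumer sees a plain async iterable no matter what the producer does underneath:

    // A producer exposed as "just an async iterable": consumers for-await it
    // and never care whether events, polling, or a socket sit underneath.
    async function* watchValue<T>(
      poll: () => Promise<T>,
      intervalMs: number,
    ): AsyncGenerator<T> {
      let last: T | undefined;
      while (true) {
        const next = await poll();
        if (next !== last) {
          last = next;
          yield next; // consumers only ever see changes
        }
        await new Promise((resolve) => setTimeout(resolve, intervalMs));
      }
    }

    // Consumption is the same regardless of implementation, and the
    // iterator-helpers proposal would let you .map/.filter this too.
    async function main() {
      for await (const status of watchValue(
        () => fetch("/status").then((r) => r.text()),
        1000,
      )) {
        console.log("status changed:", status);
      }
    }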
One huge sadness I have is that when you have a callback in .then or .next, your handler doesn't get a gratis reference to what you were listening to. I'd love to have this be a better record. DOM is so good about having thick events that say what happened, and it's so helpful, sets such a rich base, but js snubbed that path & left passing data in 100% to each event producer to decide as it will. Consumers can craft closures to seal some data into their handlers but it sucks compared to having an extensible data flow system. We're kind of sort of finally fixing this crime the hard way by standardizing async context/async_hooks, which is a whole new ambient way to create context that we can enrich our async events with, since there was no first class object to extend for having omitted passing in the promise to the promise handler. https://github.com/tc39/proposal-async-context
Also worth pointing out, maybe as a cautionary tale: observables were proposed a long, long time ago. There's still a ton of adoption. But it also feels extremely similar to async iteration, yet notably apart from it and with worse ergonomics. It's imo a good thing it didn't get formalized. https://github.com/tc39/proposal-observable
Alas, one reactivity feature I really wish we had is Object.observe (note: a different API than observables), or something like it. Undoing good specification work & saying, 'yeah, if users want it, they can implement it themselves with proxies' has led to no one doing it. And you can't just observe anything; whoever is the producer has to make the decision up front for all consumers, which turned a great capability into something first boutique, then quickly forgotten. Alas! https://esdiscuss.org/topic/an-update-on-object-observe
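For what it's worth, here is roughly what "implement it themselves with proxies" amounts to; a sketch with made-up names, and it shows exactly the limitation above: the producer has to opt in by wrapping the object up front, and writes to the raw object bypass observation entirely.

    type Listener = (prop: string | symbol, value: unknown) => void;

    // Wrap an object so every successful property write notifies a listener.
    function observe<T extends object>(target: T, onChange: Listener): T {
      return new Proxy(target, {
        set(obj, prop, value, receiver) {
          const ok = Reflect.set(obj, prop, value, receiver);
          if (ok) onChange(prop, value);
          return ok;
        },
      });
    }

    const state = observe({ count: 0 }, (prop, value) =>
      console.log(`changed ${String(prop)} ->`, value),
    );
    state.count = 1; // logs: changed count -> 1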
I ack your ask, and there's no shortage of options that are pretty damned good. But I feel like we are still in a discovery phase, not a deployment phase, still delaying the competition because we have a lot of terrain and idea space to consider before we pick, standardize & ship one.
In the early 90's, Apple had big plans for end-user programming. Larry Tesler, the head of Apple's Advanced Technology Group, gathered people from ATG and elsewhere at Apple who were working on aspects of end-user programming for a project code-named Family Farm. The idea was that most of the pieces that were needed for end-user programming were already working or close, and that with a few months of work they could integrate them and finish the job.
The project sputtered when (1) it became clear that it was going to take more than a few months, and (2) Tesler was largely absent after he turned his attention to trying to save the Newton project.
AppleScript was spun out of the Family Farm project, and William Cook's paper [1][pdf] includes some of the history, including its relationship to Family Farm and HyperTalk, HyperCard's scripting language.
AppleScript shifted the focus from creating something that end users without previous programming experience could easily use to application integration. I was a writer who worked on both Family Farm and AppleScript, and I was both surprised and hugely disappointed when AppleScript was declared finished when it was obviously not suitable for ordinary users. I assumed at the time that there had been no usability testing of AppleScript with ordinary users, but Cook's paper claims there was. All this was even more disappointing in light of the fact that so much more could have been learned from the success of HyperCard and HyperTalk, and that the HyperCard developer and champion Kevin Calhoun was just around the corner.
The Wikipedia article on HyperCard [2] gives the history of the product, including the pettiness of Steve Jobs in cancelling it.
The other anomaly is that Jensen talks all the time with ICs doing the work. I was only a couple of months into working at the company before I got to have a face to face discussion with him about a project I was working on. I have seen many mid-level engineers (IC4-IC5) give him deep dives in these group meetings. It can be very stressful being under Jensen's microscope, but it dramatically reduces the "let's show pretty slides to the CEO to show him everything is good" BS. I was previously at a startup 1/100th of this size where the CEO was far less connected to rank and file engineers, so it has been a really nice change.
DisplayPort has the huge advantage of being incredibly easy to passively convert into the ultra-legacy-weird-difficult HDMI, whereas the legacy-centric-pita-gross HDMI requires absurd active adapters to turn into DisplayPort.
100% more USB-C please, with alt modes. USB4 mandates that every port be able to do DisplayPort output.
I really hope we start to see phones and tablets which have >1 USB port. Lenovo has an absurd beast of a phone, the Legion, with both dual batteries and dual USB-C (USB 3) ports. If we get to 2030 and phones can't plug in to GPUs, something is f-ed and the system is broken; tech has ossified grossly. Hopefully it happens sooner, and hopefully we see dual ports emerge midway to then too. Would be such a great capability set.
From what I understand reading the article it's not about Meta respecting Israel, but Israel actively using the Meta rules to target criticism. This is also what I experienced as a moderator for interreligious dialogue. Certain groups were more aggressive in asserting their truth, not by using arguments, debate and dialogue, but using straw man tactics, meta communication and loopholes in the moderation rules to get their way.
It's a crying shame that it's the EU that upholds a remnant of competition and consumer rights while the “free market” in the US gets more corrupt by the day and the power of the big monopolies gets ever more cemented.
Qualcomm employee here - I agree that we have fallen short of expectations but I don't think it's fair to say "none". I know several folks whose job is kernel development and upstreaming is part of that. It would be great if we could shift from a focus on Android to a focus on Linux instead, though.
I have seen significant changes in the policies established just weeks ago that should enable Qualcomm to better participate in developer communities. There's a clear message from the CEO and several VPs that these changes are necessary.
The whole document is good, but in particular, my favorite part (that I reference not infrequently in conversations) is the priority of constituencies:
> In case of conflict, consider users over authors over implementors over specifiers over theoretical purity.
The exaltation of consumeristic convenience & consumption above all else that surrounds this topic is such a powerful lesson in desire & influence & what gets voiced.
Externally, seeing chaos & churn is easy; it takes no effort. Yes, a lot of screenshot & device-emulation protocols were left undefined. But that doesn't seem like a weakness to me. The bazaar finds interesting, good approaches, slowly, over time, by way of its many voices. X extensions also evolved out of organic growth, but gee, it sure seems like everyone remotely involved says there's basically no more juice to squeeze, that the limitations of the architecture have arrived & are hard & fast (and that getting many modern necessities has been hell after hell).
Wayland's minimalism is such a great guard against running into dead ends. It invents less, uses the OS more (rather than inventing its own controllers), relies on protocols more. X had to be a perfect cathedral, and it got really far & served many well, but issues like hidpi, multiple refresh rates, and screen tearing in video seemed basically unsolvable. No one had purchase on the monolith to keep things moving forward, and something had to be done.
There really wasn't any choice. And it's almost certain that a new monolith that tried to be X, that tried to do everything, would have fizzled. Letting the compositors figure stuff out slowly made success possible. Success, progress, has to mature. And that's hard and takes patience. And maybe people don't appreciate the wins, and this being unfrozen; but we were stuck where we were, we couldn't really budge at all, and we found a really smart way to start. Wayland is much less, and that's huge. The bazaar iterating forwards is glorious & great, an exchange where we prevent crufting ourselves in like we had in the past.
Whatever the technical decisions are here (and I for one see huge wins), there's so many other concerns. Compositors can be remarkably complete while being absolutely tiny, because we actually use the OS now; that's a stunning win that doesn't affect users of a particular env, but actually means so much. Keeping the door open to new futures matters, and Wayland bestows that possibility, something of great importance. The political decision/merit to let protocols work things out over time, to find agreement & allow divergence, is as important and an even greater win, a way to ensure adaptability going forward, a colossal jump over X monocultures & monolith fiddling. Wayland's merit is that it makes sense in the world. That it is a basis for creation. It's easy as an end user to underrate that, to feel upset over a proprietary hardware vendor's lagged adoption and your videoconferencing software's lazy negligence. A viable open-ended future isn't much comfort to present suffering. It's still stunning to me, the vehemence of such avid consumerdom, such strong consumeristic expectation, such unbroken unwillingness to face difficulty, and such staunch tying of oneself to the past. I'm so excited for this future - difficulties and discovery and all - and I want so much to see some respect & appreciation for the technical & political merits of Wayland, of the bazaar, of figuring stuff out, of improving. It matters.
> Mark Lemley observed this happening nearly 20 years ago, in his prescient, seminal article, "Terms of Use": The problem is that the shift from property law to contract law takes the job of defining the Web site owner's rights out of the hands of the law and into the hands of the site owner.
With "contracts" of adhesion proliferating, and how impossible it has become to exist in the modern world without acceding to them (something as simple as buying a new SSD involves agreeing to one), this problem is getting worse by the day.
The law is becoming increasingly irrelevant, and more and more we are ruled by one-sided "contracts" from giant companies that are in a position to push them on us.
Based on my notes from reading The New World: Volume 1, 1939-1946 by Hewlett and Anderson, approximately 1/6th of the uranium used for the Manhattan Project came from Canada, 1/7th from the US, and the rest came from the Belgian Congo (most of that was actually shipped across the Atlantic before the US entered the war; it was stored on Staten Island for safekeeping before US entry and the creation of the Manhattan Project). I don't know much about the US uranium, but I do know that the Belgian Congo uranium mining was also done under a racist colonial system that exploited the locals and gave them inadequate protection or understanding of the risks they were taking. Postwar US uranium mines in Arizona and Utah also employed Diné (Navajo) and other Native peoples under similar levels of racism and exploitation, leaving thousands of people sick, etc.
There are surprisingly strong parallels to the current 'Race for Lithium' and 'Race for Rare Earths'- hard-rock mining is a terrible business all the way around.
I’m not talking about the AfD here, but trying to quickly answer the question of how banning political parties can be seen as justified in principle:
Banning a political party is an important instrument in a wider philosophy known in Germany as “wehrhafte Demokratie” (defensive democracy). This philosophy states that democratic states should have legal tools with which they can defend themselves against people who want to attack the democratic order itself.
Wehrhafte Demokratie is a very well established and accepted concept here, partly because of a wish to avoid repeating the mistakes of the Weimarer Republik. It’s also justified by the belief that democracy is not just a dictatorship of the majority, but that even a majority of voters is limited in what it can do, and that democracy also includes, for example, the protection of minorities.
> biased actors who should already be constrained by the rule of law
Banning a political party works within the rule of law.
The legal barriers for banning a political party are quite high in Germany. Basically, for a ban it must be proven that the party as a whole, not just single members, has the goal of attacking key elements of the democratic order itself, and that there is a real danger that it could succeed.
The last condition can also be a legal reason to only ban a party once it actually gets popular: as long as it is unpopular, judges don’t see the condition that the party presents a real danger as fulfilled, so they won’t ban it. This happened with the Nationaldemokratische Partei Deutschlands (NPD), which was ruled to be verfassungsfeindlich (an enemy of the constitution), but not banned because it was so ineffective and unpopular.
People harp on the "NVIDIA pushing upmarket" but the reality is that NVIDIA is running their lowest operating margins since ~2011 right now, and that's with a secular shift towards high-margin enterprise products being rolled into those topline numbers.
(cool bug facts: for the Q1 numbers, AMD's gaming division actually had a higher operating margin than NVIDIA as a whole including enterprise sales/etc! 17.9% vs 15.7%)
The new generations of goods simply are that expensive to produce and support - TSMC N4 is something like 3-4x the cost per wafer of Samsung 8nm; even with the smaller die sizes and smaller memory buses, these products are actually something on the order of twice as expensive as Ampere to produce. Tapeout/validation costs have continued to soar at an equal rate, and this is simply a matter of physics, not something TSMC controls, and therefore not something that can be changed by pushing profits around on a sheet.
MCM doesn't really affect this either - this is about how many mm2 of silicon you use, not how well it yields. Even if you yield at 100%, 4x250mm2 chiplets is still 1000mm2 of silicon. And the cost of that silicon is increasing 50-75% for every node family you shrink. Memory and PCIe PHYs do not shrink, and cache has only shrunk modestly (~30%) at 5nm and will not shrink at 3nm at all. So at the low end you are getting an additional crunch where the design area tends to be dominated by these large fixed, unshrinkable areas.
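To make the interaction of wafer price and non-shrinking area concrete, here's a back-of-the-envelope sketch; the dollar figures and die areas are illustrative assumptions roughly in the ratios discussed above, not real foundry pricing, and yield/edge loss are ignored:

    // Cost scales with mm^2 of silicon times wafer price; the parts that don't
    // shrink (PHYs, SRAM) blunt the area savings of a node shrink.
    // All numbers are illustrative assumptions, not actual foundry quotes.
    const WAFER_AREA_MM2 = Math.PI * 150 ** 2; // 300 mm wafer, edge loss ignored

    function dieCost(dieAreaMm2: number, waferPrice: number): number {
      return waferPrice * (dieAreaMm2 / WAFER_AREA_MM2); // perfect yield assumed
    }

    // Old node: 250 mm^2 of logic + 150 mm^2 of PHYs/cache at an assumed $6k/wafer.
    const oldCost = dieCost(250 + 150, 6000);

    // New node: logic shrinks ~50%, PHYs/cache barely shrink, wafer assumed ~3x pricier.
    const newCost = dieCost(250 * 0.5 + 150, 18000);

    console.log((newCost / oldCost).toFixed(2)); // ~2.1x, despite the smaller die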
This is the fundamental reason AMD has followed NVIDIA on pricing for years now with products like the 5700XT, 6800XT, Vega, Fury X, etc. TSMC N7 was already around twice as expensive per wafer as Samsung 8nm or TSMC 16FF/14FF. NVIDIA was using cheap nodes and AMD was being forced to use expensive leading-edge nodes just to remain competitive (and despite that, failing to take a commanding lead). They didn't undercut because their cost-of-goods was higher - which is also the reason AMD cut memory bus width in RDNA2, along with PCIe width and things like video encoders on certain products. They were on the leading nodes, so they took the hit to area overhead sooner, and started looking for solutions sooner. It's not oligopoly; consumers just haven't adapted to the reality of cost-of-goods going up.
AMD being willing to cut deeply on inventory at the end of the generation is one thing, but you also have to remember that the Gaming Division (RTG+consoles) is barely in the green right now, and that's even with AMD signing a big deal for next-gen consoles that is more or less straight profit for them. They're losing money on RDNA2 hand over fist right now; this is a clearance sale at the end of the generation, not a sustainable business model.
When they can undercut, they have undercut - like the 7900XT/XTX. The 4080 is fundamentally unsaleable at its current price, and the 4070 Ti isn't exactly great either, and when AMD saw the sales numbers they cut the prices. That's the counterexample to oligopoly - they are willing to do it when they can. When they match pricing, or do a token undercut, it's usually because they can't afford to go drastically lower than NVIDIA themselves, because they're affected by the same costs. For example, Navi 32 (7700XT/7800XT) is going to be pretty unexciting because MCM imposes a big area/performance overhead and they simply can't go way cheaper than NVIDIA, despite the 4060 Ti also being an awful price.
As such, the premise of your post is fundamentally false. AMD generally costs about the same as NVIDIA, and that's part of the reason they've failed to take market share over the last 10 years. It's generally a less attractive overall package with fewer features, other than sometimes having more VRAM (but not always: Vega had less than the 1080 Ti, Fury X had less than the 980 Ti, the 5700XT had the same as the 2070/2070S and less than the 1080 Ti, etc). And this is because AMD is fundamentally affected by the same market economics as NVIDIA. And Intel will be too, when they stop running loss-leader products to build market share.
Consumers don't like it, but this is the reality of what these products cost to design and manufacture today. And you don't have to buy it, and if those product segments don't turn a profit they will be discontinued, like the $100 and $150 segments before this (where are the Radeon HD 7750s and 7850s of yesteryear?). That's how capitalism works: the operating margins have already been reduced for both companies, and they're not going to operate product segments at a loss just so gamers can have cheap toys. Even if they do it for a while, they're not going to produce follow-ups to those money-losing products.
NVIDIA isn't leaving the gaming market either. They'll continue to produce for the segments that it makes sense to produce for. The midrange (which is now $600+) and high-end ($1000+) will both continue to see good gains, because they're less affected by the factors killing the low end. And MCM will actually be incredibly great in the high end - imagine two or four AD102-sized chiplets working together. But it's going to cost a ton - probably the high end will range up to $4k-8k within a decade or two.
The low end will have to live with what it's got. AMD holding the 7600/7500 back on N6 (7nm family) is the wave of the future, and it seems like NVIDIA probably should have done the same thing with the 4060. A 3060 Ti 16GB for $349 or $329 would probably have been a more attractive product than a 4060 8GB at $299 on 4nm. Maybe give it the updated OFA block so it can use the new features, and call it a day.
If $300 is your budget for a GPU, buy a console - the APU design is simply a more efficient way to go. A $300 dGPU is about 90% of the hardware that needs to go into a console, and if you just bolt on some CPU cores and an SSD you're basically there. The manufacturing/testing/shipping costs don't make sense for a low-end product (and $300 is now entry-level) to be modularized like this anymore; integration brings down costs. The ATX design is a wildly inefficient way to build a PC, and consoles eschew it for a reason. A Steam Console could bring costs down a lot, but it will still involve soldered memory and other compromises the PC market doesn't like (but which will be necessary for GDDR6/7 signal integrity etc). Sockets suck, they are awful electrically and ruin signal integrity. The future is either console-style GDDR or Apple-style stacked LPDDR.
Went to his website.
There was an interview of him from 2022.
The answer that really struck me was not about Vim, but about software craftsmanship vs. professional programming:
« I have been working for a company where quite a few managers, educated in physics and mechanics, thought the software was just the same as what they knew and they could decide how to make it. That company went downhill and was eventually taken over. The same happens in places where decision-makers can get away with failure, such as in government. The people writing the code probably just make sure they get paid and then run away from the crime scene. On the other end of the scale are people who want to write beautiful code, spend lots of time on it, and don’t care if it actually does what it was intended to do or what the budget was. Somewhere in between, there is a balance. »
I am not so sure about the last sentence. But the rest is SO true!
The gig economy has worn many out... Services like social media, Uber, political parties, and Airbnb that promised to create wealthy entrepreneurs fell flat on their face after making the people at the top of the pyramids very wealthy.
I think that social media really tipped the balance of fairness in the working world... With social media, suddenly trust-fund babies could fake success and promote schemes that helped them profit. The social media model was set up to raise trust funders and popular individuals far above everyone else, and it killed hopes of upward mobility for people who didn't fall into any fame or wealth category, unless they became famous for negative reasons or for ridicule.
It's not that no one wants to work anymore, it's that people are tired of weak work schemes, and being used and then thrown away in order to elevate others. It's not until real opportunity for growth, entrepreneurship, and excellence returns that things will begin to get back to normal.
Talking about it as "Nobody wants to work anymore" is an injustice... As proof of that, millions of people are working very hard every day on content creation that rivals TV programming, and others are regularly posting their best, fully composed and edited work on Internet sites daily, most without any pay, in HOPES of being discovered for their work.
In a similar vein, this poem by Pedro Pietri:
Telephone Booth (number 905 1/2)
woke up this morning
feeling excellent,
picked up the telephone
dialed the number of
my equal opportunity employer
to inform him I will not
be into work today
Are you feeling sick?
the boss asked me
No Sir I replied:
I am feeling too good
to report to work today,
if I feel sick tomorrow
I will come in early
For better or worse, the wealthy are the thought leaders of society. Their values trickle down to the middle and lower classes. When we see them consume at a high rate (per capita), it sends a message that “what I can afford to consume is ok to consume”. So the masses end up throwing away heaps of cheap single-use plastics because it’s convenient and not expensive.
The issue with mass consumption by the wealthy is how it influences the behavior of the more impactful middle and lower classes.
In the Steve Jobs biography, I was originally shocked that he and his wife had hours-long debates about whether or not to use a drying machine (vs. hanging laundry outside). For a long time I never saw what he was fighting for, but now it makes sense to me.
Pretty much a few months into the pandemic, I realized I would be 100% well informed if I generally just read 3 sources: Ed Yong, Zeynep Tufekci, and Katelyn Jetelina (Your Local Epidemiologist).
Pretty much everything else (not counting public health websites) felt like sensationalist garbage that wasn’t aiming to inform us but rather just get clicks.
I know that if these folks write something, it will be well researched and well thought out, and when they make mistakes (like getting caught up in any political debates), they genuinely care about fixing them in search of knowledge, truth, and a better understanding of the issue.
I’m a huge fan of Ed Yong’s work, and I know that with just about any issue he writes about, I will come away truly learning something new and useful.
> "I don't want to live in place with lots of Muslims, they mess things up".
Devil's advocate: this actually may have more to do with culture generally than religion specifically. Muslim countries may still have more clan-based / tribal social structures:
> Henrich’s ambition is tricky: to account for Western distinctiveness while undercutting Western arrogance. He rests his grand theory of cultural difference on an inescapable fact of the human condition: kinship, one of our species’ “oldest and most fundamental institutions.” Though based on primal instincts— pair-bonding, kin altruism—kinship is a social construct, shaped by rules that dictate whom people can marry, how many spouses they can have, whether they define relatedness narrowly or broadly. Throughout most of human history, certain conditions prevailed: Marriage was generally family-adjacent—Henrich’s term is “cousin marriage”—which thickened the bonds among kin. Unilateral lineage (usually through the father) also solidified clans, facilitating the accumulation and intergenerational transfer of property. Higher-order institutions—governments and armies as well as religions—evolved from kin-based institutions. As families scaled up into tribes, chiefdoms, and kingdoms, they didn’t break from the past; they layered new, more complex societies on top of older forms of relatedness, marriage, and lineage. Long story short, in Henrich’s view, the distinctive flavor of each culture can be traced back to its earlier kinship institutions.
[…]
> Why, if Italy has been Catholic for so long, did northern Italy become a prosperous banking center, while southern Italy stayed poor and was plagued by mafiosi? The answer, Henrich declares, is that southern Italy was never conquered by the Church-backed Carolingian empire. Sicily remained under Muslim rule and much of the rest of the south was controlled by the Orthodox Church until the papal hierarchy finally assimilated them both in the 11th century. This is why, according to Henrich, cousin marriage in the boot of Italy and Sicily is 10 times higher than in the north, and in most provinces in Sicily, hardly anyone donates blood (a measure of willingness to help strangers), while some northern provinces receive 105 donations of 16-ounce bags per 1,000 people per year.
There's a lot that I like about this book. Splitting up the mandate between platform and product teams, eliminating friction, and letting each team be good at their thing is, I think, an efficiency many companies indeed could benefit from.
But I've also seen this book promoted heavily within an org, and the one core strength kept feeling like a core weakness that made me incredibly sad, about how isolated it made the work.
It doesn't insist it has to be so, but the org I saw that was so excited for Team Topologies loved how it continually stressed the independence of teams. And as a direct result, I've seen cooperation, coordination, & cross-team planning plummet, in ways that keep having suboptimal plans get put into action with bad outcomes. Stream-aligned teams would turn into complicated-subsystem teams, after they created something complicated and gnarly while being stream/product aligned, and unchecked.
I think the topologies here are quite useful framings, and as Fowler says, the idea of trying to reduce cognitive complexity is an interesting one we haven't heard well represented before. And it's probably due, given how impractical making each team truly full-stack devops has become as the CI/CD/observability stack complexity has expanded. But I caution so much against the message this book gives management, which is that stream/product-aligned teams just need to be racehorses with blinders on & interference is bad. The book glorifies the stream-aligned team, the value creator, and makes everyone else auxiliary, which is sort of true & great. But the book's glorification of execution speed doesn't leave much space for how and where cross-team wisdom happens, or what kind of processes you have there. Broader cross-team architecture reviews, brainstorming, systems planning, and systems coordination aren't well captured here: most teams need a strong collaboration mode to build good software that fits the architecture well. But the book only really regards a single collaboration need: that of platform teams to get feedback to ease their developers' experience.
The missing element was ubuntu. If you want to go fast, go alone. If you want to go far, go together. - African Proverb
If by "solving" you mean "refuse to do anything at all unless you have the exact schema version of the message you're trying to read" then yes. In a RPC context that might even be fine, but in a message queue...
I will never use Avro again on a MQ. I also found the schema resolution mechanism anemic.
Avro was (is?) popular on Kafka, but it is such a bad fit that Confluent created a whole additional piece of infra called Schema Registry [1] to make it work. For Protobuf and JSON schema, it's 90% useless and sometimes actively harmful.
I think you can also embed the schema in an Avro message to solve this, but then you add a massive amount of overhead if you send individual messages.
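For concreteness, here's a rough sketch using the avsc npm package (the Event record is made up for illustration); it just shows why shipping the schema with every individual message blows up the size, whereas Avro container files amortize it once per file:

    import * as avro from "avsc";

    // Hypothetical record, purely for illustration.
    const schema = {
      type: "record",
      name: "Event",
      fields: [
        { name: "id", type: "string" },
        { name: "amount", type: "double" },
      ],
    };
    const type = avro.Type.forSchema(schema as any);

    // Bare Avro binary: compact, but the reader must already hold the exact
    // writer schema - the message-queue problem described above.
    const bare = type.toBuffer({ id: "abc", amount: 1.5 });

    // Self-describing variant: prepend the schema JSON to each message. Fine
    // for big batches, but for one small record the schema dwarfs the payload.
    const selfDescribing = Buffer.concat([
      Buffer.from(JSON.stringify(schema) + "\n"),
      bare,
    ]);

    console.log(bare.length, selfDescribing.length); // roughly a dozen bytes vs. 100+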
Sigrok support will ensure you have a well-supported, open-specification device that already has alternative software available, in case whatever software is bundled with the device turns out to be substandard or eventually unsupported.
Soon there will be a Plaza Web, for which you'll need an approved device, like a Chromecast with Google TV, and the Old Web of communities, enthusiasts, and the like.
Good question! Unfortunately, the answer to your first question is no. Higher temperature air can "hold" more moisture than cooler temperature air. And for a fun double whammy, higher temperatures also increase ground evaporation, drying out the soil and holding that water vapour in the air.
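For a rough sense of scale: saturation vapour pressure rises roughly exponentially with temperature (the Clausius-Clapeyron relation); using the common Magnus-type empirical fit below, that works out to roughly 6-7% more water-holding capacity per degree Celsius of warming near typical surface temperatures.

    e_s(T) \approx 6.112 \,\exp\!\left(\frac{17.67\,T}{T + 243.5}\right)\ \text{hPa}
    \quad (T\ \text{in}\ ^{\circ}\mathrm{C}),
    \qquad \frac{e_s(T+1)}{e_s(T)} \approx 1.06\ \text{to}\ 1.07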
Now, you could generally assume that all that air eventually has to condense somewhere, but the problem is that we've now raised the potential threshold of what precipitation entails. So we've got drier soil (which is worse at holding moisture when it rains), more potential for precipitation, and a longer buildup time between precipitation threshold events. We've just created a feedback loop that encourages flash flooding, making it more difficult to farm all areas, both old farmland and new, potentially arable land.
It's fun to learn about the complex system dynamics of climate physics; it'd just be more fun if it didn't come with a side serving of impending catastrophe.