I've never done much with Bluetooth under desktop Linux, but that sounds like a woeful pain in the ass compared to the usual steps for Android or Windows:
1. Pair headphones in a couple of clicks/taps; sound comes out.
Since there's a lot of discussion of it feeling expensive for "just push notifications", I think it's probably worth my addressing that directly as a top-level comment.
Zulip Business costs $6.67/user/month. While the only Zulip feature that can't work without purchasing a plan (or doing a huge amount of work to publish your own mobile apps) is mobile push notifications for businesses with 10+ users, https://zulip.com/plans/#self-hosted details dozens of features for which Zulip Business includes expert support.
Zulip is 100% open source. In open core products, those specific features just don't exist at all in the open source version. For example, Mattermost requires the proprietary Mattermost Professional [1] at $10/user/month if you want to use SSO, LDAP sync, user groups, read receipts, etc.
(Note also that $10/user/month is the minimum price at which Mattermost offers push notifications "for use in production environments" [2].)
If you compare Zulip's pricing options to Mattermost's, our offering is substantially more generous.
It would be even if we went open core and moved a few dozen features of Zulip to a proprietary version. But we're not going to do that: we really do care about being a 100% open source project.
If you instead compare Zulip Business to using Zulip for your business's mission-critical communications between paid team members without paying for something, I think that's the wrong question. Skilled humans' time is incredibly valuable, and people spend a LOT of time in team chat tools. It is rational for businesses to pay a tiny fraction of the fully-loaded cost of their employees for the insurance that comes with support and mobile push notifications for their main collaboration tool.
(If you're using Zulip for something other than a server full of paid employees and contractors, see the Community plan and discounts for other use cases).
Finally, on the topic of cost to us: supporting a business that is self-hosting complex, mission-critical software like Zulip is more expensive for us than having that business use Zulip Cloud. Hosting is cheap compared to humans who can debug anything, and there are big economies of scale in terms of human time when managing a large multi-tenant cloud installation.
Helping thousands of different people with varied skill levels self-host your application successfully is not cheap, even for a project like Zulip that is very focused on making self-hosting Just Work.
People are taking the piss out of you every day. They butt into your life, take a cheap shot at you and then disappear. They leer at you from tall buildings and make you feel small. They make flippant comments from buses that imply you’re not sexy enough and that all the fun is happening somewhere else. They are on TV making your girlfriend feel inadequate. They have access to the most sophisticated technology the world has ever seen and they bully you with it. They are The Advertisers and they are laughing at you.
You, however, are forbidden to touch them. Trademarks, intellectual property rights and copyright law mean advertisers can say what they like wherever they like with total impunity.
Fuck that. Any advert in a public space that gives you no choice whether you see it or not is yours. It’s yours to take, re-arrange and re-use. You can do whatever you like with it. Asking for permission is like asking to keep a rock someone just threw at your head.
You owe the companies nothing. Less than nothing, you especially don’t owe them any courtesy. They owe you. They have re-arranged the world to put themselves in front of you. They never asked for your permission, don’t even start asking for theirs.
> Instead of deprecating third-party cookies, we would introduce a new experience in Chrome that lets people make an informed choice that applies across their web browsing, and they’d be able to adjust that choice at any time.
> By comparing the treatment arm to control 1 arm, we observed that removing third-party cookies while enabling the Privacy Sandbox APIs led to -20% and -18% programmatic revenue for Google Ad Manager and Google AdSense publishers, respectively.
For the mysterious "new experience in Chrome" they mention, I'll be keeping an eye on their public planning repositories, but there's no guarantee that the project they're mentioning is related to any of these:
> OPFS doesn’t come with graceful handling of concurrency out of the box. Developers should be aware of this and design around it.
There's a multiple readers and writers proposal [0]. Firefox has marked it "position: positive" [1], Chrome has implemented it [2], and WebKit has ignored it [3] (of course).
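To make the concurrency point concrete, here's a minimal worker-side sketch (the file name and helper function are made up for illustration) of how things behave today: createSyncAccessHandle() takes an exclusive lock on the file, so a second worker opening the same file fails until the first handle closes, which is exactly what you end up designing around (a single writer worker, message passing, retries, etc.).

    // Dedicated-worker context only. Today's sync access handles are exclusive,
    // so concurrent opens of the same file throw until this handle is closed.
    async function appendLine(line: string): Promise<void> {
      const root = await navigator.storage.getDirectory();
      const fileHandle = await root.getFileHandle("app.log", { create: true });

      // Takes an exclusive lock; this is the concurrency you must design around.
      const access = await fileHandle.createSyncAccessHandle();
      try {
        const bytes = new TextEncoder().encode(line + "\n");
        access.write(bytes, { at: access.getSize() }); // append at the current end
        access.flush();
      } finally {
        access.close(); // releases the lock so other workers can open the file
      }
    }

As I understand the proposal, the idea is to let you ask for less-exclusive handles (e.g. multiple concurrent readers) via an option on createSyncAccessHandle, instead of funneling everything through one writer.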
Love the shoutout to Roy Hashimoto. He's been writing VFSs for SQLite-on-the-browser and perf testing them. He recently wrote "IDBMirrorVFS", which "is a new example VFS that keeps all SQLite files in memory while persisting to IndexedDB". It has remarkable performance, of course. https://github.com/rhashimoto/wa-sqlite/discussions/189
This is one of the reasons I'm always singing the praises of Kanban over Scrum. Scrum encourages busy work on too many things. Kanban encourages ruthless focus on the few things that need to be done next and on what is blocking them. Scrum encourages "agile cosplay", with folks taking on things they shouldn't be doing now just to hit the right (imaginary) story point total for the sprint (which of course gets gamed as points get estimated to fit). Kanban encourages teaming up to get the hard stuff out of the way now.
I have led a team through a transition from an overdone Scrum process to a minimal Kanban process and talked to many others who did the same, from small startups to AAA game companies, and they all loved it. I've never heard one dev say they thought it was more productive to have Scrum. As far as I can see, Scrum makes middle managers, "scrum masters", and people who don't care how much work actually gets done happy. Kanban actually helps development go faster.
If anyone's interested, Microsoft Press has a great, light book on it. The WIP limits are a key part.
Impressions from last week’s CVPR, a computer vision conference with 12k attendees - pretty much everyone is using NVIDIA GPUs, pretty much no one is happy with the prices, and everyone would like some competition in the space:
NVIDIA was there with 57 papers, a website dedicated to their research presented at the conference, and a full-day tutorial on accelerating deep learning, and was ever-present with shirts and backpacks in the corridors and at poster presentations.
AMD had a booth at the expo part, where they were raffling off some GPUs. I went up to them to ask what framework I should look into when writing kernels (ideally from Python) for GPGPU. They referred me to the “technical guy”, who, it turns out, had a demo of inference on an LLM. Which he couldn’t show me, as the laptop with the APU had crashed and wouldn’t reboot. He didn’t know about writing kernels, but told me there was a compiler guy who might be able to help; he wasn’t to be found at that moment, and I couldn’t find him when returning to the booth later.
I’m not at all happy with this situation. As long as AMD’s investment in software and evangelism remains at ~$0, I don’t see how any hardware they put out will make a difference. And you’ll continue to hear people walking away from their booth saying “oh, when I win it I’m going to sell it to buy myself an NVIDIA GPU”.
I was part of the Mozilla team that kickstarted the WebVR spec that predates WebXR. I’ve also been maintaining A-Frame (a framework for developing web-based AR/VR experiences) for more than 8 years. This announcement makes me so happy. It’s been a decade-long effort.
I went to school out East and work out west, and consumer tech is way more of a NYC thing than a Bay Area thing because the personas and professional networks are out there.
West Coast is also enterprise or business tech driven, but those founders aren't as media friendly or sexy despite being the majority (hence the Musks and increasingly Altmans hogging the limelight).
Boston has potential, but it honestly isn't leveraging it. The elitism there is rife to a level you don't see in California. A NU, BU, or UMass Amherst founder isn't going to be in the same circles as the Harvard and MIT founders who can leverage the I-Lab or Engine and HBS+Sloan resources, but in the Bay Area, a UCB, UCSC, SJSU, and Stanford kid will all be in the same professional circles. CIC tried, but they are trash. At one point, most startups in Greater Boston were basically Israeli companies using it as a US HQ because of the El Al direct and the large Israeli diaspora (throw a rock and you'll hit a Cafe Landwer).
Everything out East is tied up with elitism and old, established institutions (where did you study?), while out West it's much more output driven (where do you work?).
It works well for the biotech innovation space, though, which Boston is known for.
(ironically, I liked DC except the humidity - way less stick up their butt, but they also have a bohemian streak)
The issue is not lack of raw bandwidth, it's getting the hardware, software, drivers and OS "to do the right thing".
PCIe gives you the building blocks of posted transactions and non-posted transactions but doesn't help you use them effectively. There is no coordinated or designated DMA subsystem to help move data between the root complex ("host") and end-point ("device").
So, if you have to design a new PCIe end-point (target in original PCI terms) using an FPGA or ASIC then trying to actually sustain PCIe throughput in either "direction" isn't trivial.
Posted transactions ("writes") are 'fire and forget' and non-posted transactions ("reads") have a request/acknowledgement system, flow-control, etc.
If you can get your "system" to use ONLY posted writes (fire and forget) with a large enough MPS (payload size), usually >128 Bytes, then you can get to 80%-95% of theoretical throughput (1).
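As a rough back-of-envelope for where that 80%-95% figure comes from (my numbers, not anything official): each posted-write TLP carries something like 20-28 bytes of overhead (framing, 2-byte sequence number, 12-16 byte header, 4-byte LCRC, more if ECRC is on), so link efficiency is roughly payload / (payload + overhead):

    // Back-of-envelope posted-write efficiency, ignoring DLLPs and flow-control
    // credits. The ~24-byte per-TLP overhead is an assumption; exact figures
    // vary by PCIe generation and whether ECRC is enabled.
    function postedWriteEfficiency(payloadBytes: number, overheadBytes = 24): number {
      return payloadBytes / (payloadBytes + overheadBytes);
    }

    for (const mps of [64, 128, 256, 512]) {
      const pct = (postedWriteEfficiency(mps) * 100).toFixed(1);
      console.log(`MPS ${mps}B -> ~${pct}% of the raw link rate`);
    }
    // MPS 64B  -> ~72.7%
    // MPS 128B -> ~84.2%
    // MPS 256B -> ~91.4%
    // MPS 512B -> ~95.5%

Which lines up with the point above: with payloads of 128 bytes and up you're already in the 80%-95% range.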
The real difficulty is when you need to do a PCIe 'read': this breaks down into a read request (MRd) and a Completion with Data (CplD). The 'read' results in a lot of back-and-forth traffic, and tracking the MRds/CplDs becomes a challenge (2).
Often an end-point can use 'posted writes' to blast data to the PCIe root complex (usually the CPU/host), maximizing throughput, since a host usually has hundreds of megabytes of RAM to use for buffers. Unfortunately, to transfer data from the root complex (host) to the end-point (device), the host usually has the device's DMA controller initiate a 'read' from the host's memory, which results in these split transactions, since end-points don't often carry hundreds of MB of RAM. This also means bespoke drivers, tying into the OS PCIe subsystems, and hopefully not losing any MSI-X interrupts.
To reiterate: in the modern "Intel way", the CPU houses the PCIe root complex but does not house ANY DMA controller. So getting "DMA" working means each PCIe end-point's implementation has some kind of DMA "controller" that is different from every other end-point's, rather than Intel having spec'd out an "optional" centralized DMA controller in the root complex.
This is such a good post. I'm pretty humbled by your words about us being "everywhere that's of interest" and that "we're highly respected." It's hard to see that when you're in the weeds, so I just wanted to say I appreciate it.
Regarding proprietary…I get it. I was the CEO of BlazingSQL, and we were fully OSS with an open-core model. The number of Fortune 500 customers that were deploying us at scale but not paying us in money, feedback, or testimonials was honestly heartbreaking.
When Josh (our CEO) and I were in the early days of Voltron Data, we thought maybe we could hold ourselves accountable to the open-source community with a new model, which we now call open-periphery, where, as you said, the interchanges, standards, and protocols are open, allowing companies and developers to build resilient, evolvable data stacks.
Open-periphery also means we don't have to debate what goes back to the community and what goes into the proprietary code because there is such a clear delineation. Open-periphery is our way of thinking about OSS business models, and it's the solution we came up with to ensure we can continue to invest in open-source and next-generation query engines.
Not playing everywhere is an existential risk. Because IT is so integrative, if anyone gets a huge leg up in any sector, Apple risks getting shut out in a lot of other sectors.
Spatial Computing/iVision is, for example, a claim staked in VR. It gives them some exposure to the market, some ability to extend & integrate their existing application/computing ecosystem into this medium. Ditto for all the other pieces; they're all integrative. Smart speakers work with AirPlay. iWatch works with iPhone works with iOS. The close integration lets them rebuff innovation in any other field: no one small can come along and build the next VR headset or the best watch to compete with Apple in any of these sectors, because no one can integrate like Apple. No one else has all the products. You have to have complete over-the-horizon horizontal control to keep your intense market power, and Apple is invested above all in there never being a chink in that armor, in making sure they can completely dictate the shape of all products by producing & owning all the products themselves. This is Apple.
Hence, Apple has to dabble everywhere. It maintains the moat, it prevents real competition from forming, and it earns them a couple % of revenue here and there to boot.
Expressing an entirely different brand of cynicism, I'd also ask: where else could Apple look to expand? None of these sectors has been a runaway success, but it's not like Apple's missing the boat on some massive new tech with a huge new Total Addressable Market. It's been absent from crypto and AI, but generally it's just expanding wherever there's any opportunity, and why not, when you have the cash & when any sector could become huge?
This adds to the evidence that Vulkan / DX12 is a failed API design, when so many graphics engineers are reinventing the wheel by building their own render-graph API on top of them (proprietary triple-A game engines, Unreal Engine, and now Godot...). Instead of real-time graphics APIs providing all these low-level manual synchronization primitives and cumbersome PSOs, maybe they should just provide an official Render Graph API instead? Provide all of the operations and their dependencies in an acyclic graph up-front, and the driver handles synchronization automatically in the most performant manner tailored to the hardware.
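To sketch what I mean (a toy example in TypeScript just to show the shape of the idea, not any real engine's or driver's API; every name here is made up): passes declare what they read and write, and a scheduler derives execution order and barrier placement from those declared dependencies instead of the application hand-placing barriers.

    // Toy render-graph sketch: passes declare reads/writes, the "compiler"
    // walks them and decides where synchronization is needed.
    // (Assumes passes are listed in dependency order; a real graph would topo-sort.)
    type Pass = { name: string; reads: string[]; writes: string[] };

    function schedule(passes: Pass[]): string[] {
      const lastWriter = new Map<string, string>(); // resource -> producing pass
      const plan: string[] = [];

      for (const pass of passes) {
        // A barrier is needed wherever this pass consumes another pass's output.
        for (const res of pass.reads) {
          const producer = lastWriter.get(res);
          if (producer) plan.push(`barrier on ${res} (${producer} -> ${pass.name})`);
        }
        plan.push(`execute ${pass.name}`);
        for (const res of pass.writes) lastWriter.set(res, pass.name);
      }
      return plan;
    }

    const frame: Pass[] = [
      { name: "gbuffer",  reads: [],                   writes: ["gbuffer", "depth"] },
      { name: "lighting", reads: ["gbuffer", "depth"], writes: ["hdr"] },
      { name: "post",     reads: ["hdr"],              writes: ["backbuffer"] },
    ];

    console.log(schedule(frame).join("\n"));

A real implementation would also reorder and cull passes, alias transient resources, and pick hardware-appropriate barrier/layout transitions, but the point stands: the dependency information lives in one place, so the driver (or engine) can do the synchronization instead of every team reinventing it.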
I guess it took about a decade of trial and error in the gamedev world to really nail down a graphics API design that is both nice to use and performant. Vulkan originating from AMD's Mantle API didn't help - it was a low-level, console-style API mainly tailored to AMD's GPUs, and it never really seemed like a fit for a more "general-purpose" API that has to span a huge range of hardware and stand the test of time. And with Microsoft's DX12 hastily copying AMD's initial design, it has all the same issues. (The irony is that DX11 is still the best graphics API you can use for gamedev on Windows in terms of ergonomics and even performance - we keep seeing teams take on the daunting job of building a DX12 backend only to end up performing worse than DX11...)
Nowadays I'm observing that the industry has known about these issues for a while and is experimenting with alternative API designs... there are some experimental render-graph extensions available in both DX12 / Vulkan (albeit in a limited fashion):
Others have already mentioned The Early History of Smalltalk; highly recommended. You'll probably want to read it a couple of times, and revisit it from time to time.
"The key in making great and growable systems is much more to design how its
modules communicate rather than what their internal properties and
behaviors should be."
"I think I recall also pointing out that it is vitally important not just to
have a complete metasystem, but to have fences that help guard the crossing
of metaboundaries."
" I would say that a system that allowed other metathings to be done
in the ordinary course of programming (like changing what inheritance
means, or what is an instance) is a bad design. (I believe that systems
should allow these things, but the design should be such that there are
clear fences that have to be crossed when serious extensions are made.)"
"I would suggest that more progress could be made if the smart and talented
Squeak list would think more about what the next step in metaprogramming
should be -- how can we get great power, parsimony, AND security of meaning?"
The idea of computing as the shared stage to reflect our own intelligence is really what sticks out to me as the best way to frame what interacting with a computer means. It's not new, but Alan did a great job of motivating and framing it here. Thanks for posting this great reminder that what we use as computers today are still only poor imitations of what could truly be done if we could transport our minds to be players more directly on that stage. It's interesting to reflect the other way as well, with us as the actors reflecting a computer to itself. An AGI has to imagine and reflect in a space created of our ideas. To be native, the AI needs better tools - the "mouse" of its body controlling the closed loop of its "graphics". How do we create such a space that is more directly shared? Dynamically trading between actor and audience in an improvisational exchange? This is the human-computer symbiosis I seek.
The argument is in the premise, a powerful tool used by people sophisticated in public messaging:
The premise here is that the status quo beliefs are knowledge, while ideas that disagree are 'activism' - political activity - and are not knowledge. It's fundamentally a highly conservative framing that, obviously and intentionally, protects status quo power.
Once you accept the premise, everything else follows - assuming X is political activity, and school is for learning, not so much for political activity, then obviously .... The effective means is not to make an argument directly for the premise - that makes the premise an issue on the table, something to debate. Instead, assume it in your argument: Stop this politicized liberal activism! See how that works? Even people who disagree have to figure out what they disagree about and construct an argument.
In reality, neither idea is more political than the other, and the university, of all places, is where to explore and develop new ideas. Teaching the status quo idea is just as much indoctrination as teaching any other idea - maybe more, because it doesn't raise the question of challenging itself. The real question is how they are taught. And regardless, I trust the students at Yale are smart enough not to be so easily manipulated - just like the people on HN, of course, who know so well what's good for the students.
It looks cursed because it is cursed. It's the classic "back to the basics" web technology nonsense that is hello world optimized and completely unsuited to any real world business purpose.
These seemingly simple template languages that promise an escape from the "unnecessary" bloat and complexity of the modern web are doomed to fail. Any level of adoption will drive deeper usage, which will drive more complex use cases, which will force more bolted-on general programming language features. Eventually your users discover (the hard way) why all the complexity in modern web frameworks exists, but now they're stuck programming in a shitty template language that is underbaked, awkward to use, has no ecosystem, shitty tooling and some rancid expression DSL you have to memorize. Cursed indeed.
The Tao Te Ching starts with this line, which I adore:
> The tao that can be told is not the eternal Tao
“The Tao” here means “the rules / way to live your life”. Essentially, the point is that if you try to write down a set of complete rules for how to act - either in life generally or at work, well, that ain’t it.
This is always my problem with OKRs. They’ll inevitably lead you away from what you actually should / need to do in a given moment - the thing you’d do if you were in tune with your wisdom. Instead of trying to codify what leaders do, we should practice staying present to what’s actually going on and practice wisdom - for whatever that means in the current context.
I had such a great conversation at a company social night, talking with an engineer who got their start here ~5 years ago.
They were talking about how they just want people to have a well-formed idea of what to build, to be able to hand off clear expectations and let them roll. For curiosity's sake I started asking about other times: has there been a time when you've felt on the line, been the one who has to figure out what to do?
They paused for a bit & then said yeah, actually... They had been battlefield promoted after two higher-ups on a team had left & it was just them running this product. They said they had little idea what they were doing, but the company trusted them & let them hack through it. They loved that time. Finding out what to do, being given problems and the freedom to solve them, was a highlight of their life, they said.
A lot of people don't want to play the game. Especially when we are forced to collaborate with non-technicals, it's incredibly hard to justify and explain ourselves & to share power & compromise with people who lack the competency to judge, assess, and negotiate.
The title here omits the God's truth as an option: we the engineers know & can assess, & you the business/product don't have the technical chops to debate, nor do you understand what is to be built. The premise presented is "debate vs do", as they put it, as though product and product alone understands "do". But I think most engineers live in pain and dissonance and sadness, feel an incredible impedance and struggle, because most businesses/product have only the faintest, fragmentary, propped-up fake idea of what "do" is. It's a fiction. And it's up to engineers to cobble together some vaguely competent rendition of the fairy-tale nonsense product tells itself it's come up with and that engineers need to just do.
The phrasing here could not be more slanted. Run, engineer, run, from the demented terrors who think their product sensibilities have fully fleshed out the idea, who think there is no cause for "debate" or for sussing out how to really do a thing, who think only to "do" what the master product says is necessary.
> but purist progressives in the Obama and now Biden admin pushed them away
Autocratic rulers like MBS deciding to cut up journalists/opposition political figures into tiny pieces with bone saws inside Saudi consulates didn't help matters. The whole Khashoggi incident really illustrated exactly what the Saudi regime thinks of the rule of law and the human rights of its own citizens when it's boiled down to the barest essentials. US senators, congressmen, and foreign service career people have taken note.
It's still worth noting that the Saudi military/air force/other armed forces are extremely large customers of US/NATO spec equipment and UK origin equipment.
It would be worth remembering that something like 85% of the 9/11 hijackers came from Saudi Arabia, and there were very clear financial/funding connections from wealthy persons within the Kingdom to the pre-9/11 training program. Highly reputable journalists and intelligence sources have also extensively documented the Saudi funding sources that supported (and still support to this day) Wahhabist madrassas in Pakistan and Afghanistan, the "V1.0" of the Taliban in the 1990s, and other fundamentalist Salafist jihadi groups.
Invading Saudi Arabia for regime change instead of Iraq in 2003 would have been much more logical, if anyone in the US and UK had had the fortitude to do it. It would also have been vastly messier.
It's well known among people who study foreign affairs that Iran funds and arms Shia and Shia-adjacent armed groups (Houthis, Hezbollah, etc.). But this doesn't happen in a vacuum - to some extent this is the IRGC's and Iran's reaction to the well-documented and widely known Saudi support for Salafist jihadism.
It's also well known and documented that the Saudis have been investing vast amounts of their oil wealth in the US stock market, real estate, and other assets since the mid-1960s, so the financial, interconnected relationship between the US and the Kingdom would be extremely difficult, if not impossible, to disentangle at this point in 2024.
Despite the Khashoggi affair and other problems described above, I think it's pretty clear that US decision makers still consider Saudi Arabia a much more trustworthy regional "partner" than Iran. Ongoing US/UK contractor support for all of their armed forces (and the US/UK relationship with Saudi Aramco) and ongoing exports of munitions to Saudi Arabia back up this theory.
Fundamentally, the 'three round' model was about the risks: the first round was execution risk (can the team actually build what they say they can?), the second was market risk (will the market see enough value in the product to pay enough for it to give the company at least 33% net margins for R&D/growth?), and the third was scaling risk (what is the addressable market for this product? How much has been reached so far? How much more could be reached?).
After those three rounds you are a going concern with, hopefully, 10-20% market share, and an IPO will repay the investors, give you a regular way to raise capital on the open markets, etc.
Way too many 21st century start ups were investors scamming other investors :-)
Great conversation. The best part IMO was Patrick's statement about the importance of the Internet. Agree 100% and never heard anyone express that view so clearly.
"I have an ebullience and love for it in a way that people in our social class in the United States are aggressively socialized out of having ebullience and true love for anything. I think that the internet is the capital G, capital W, Great Work of the human race in a lot of respects, that it is magical. It is an encapsulation of the best things about our society that is also tremendously, instrumentally useful in making all the good things better and ameliorating all the problems over sufficiently long time scales.
It seems natural to me that this is extremely important. This is extremely valuable, and it seems extremely underrated by almost everyone, including people who would consider themselves great fans of the internet but say, “Oh, I’m a great fan of the internet, but I’m a great fan of penicillin, too.”
I think, in aggregate, the internet is obviously more important than penicillin by many orders of magnitude. Is it more important than medicine? I will bite that bullet. The internet is more important than medicine, the entire institution of medicine, from time immemorial to reasonable extrapolations of what we can do right now.
Is it more important than writing? You couldn’t have the internet without writing, so writing was very important to get to the development of the internet. That might be one of the most important things about writing, that writing got us to the internet. That sounds like a little bit of . . . I know people will take that full quote and say, “Oh, this crazy, nonintellectual person,” but I think that there is a reasonable case for it."
It reminds me of an obsession I had when I was young (maybe 12 or 13) where I kept iterating on a design for a mini-sub I had hoped to build. I must have checked out books on the history of the submarine about that time and became obsessed with the simplicity of the original Turtle submarine — operated with hand screws (propellers).
Likely too I saw a homemade sub or scuba tow on the odd Popular Mechanics cover....
I had read enough to incorporate a lead ballast that could be released from inside the sub. I imagined props and motors based around those electric trolling motors you can get for a small fishing boat. I therefore incorporated a car battery into the design. Front and rear ballast tanks allowed me to control the pitch trim. I imagined a small electric automotive tire pump would suffice to force the water out of the ballast tanks.
I obsessed over a mechanism to allow each trolling motor to be gimbaled from a pair of joysticks in the sub. I built mechanical models with paper drinking straws and toilet paper rolls to test the mechanics.
I played with different seating configurations to minimize the size of the sub but keep it "operatable".
It was a weird and impossible fantasy that never had a chance of moving beyond the drawing board stage. You know, especially for a kid with a single mother who was a secretary. But perhaps there was some intellectual and creative stimulation that I was feeding off at the time that made the effort worth it.
Thinking about it now though, how obsessive I was, it might also have spoken to a boredom, isolation, and maybe sadness I felt at the time. The sub might have been an escape for me.
To see someone build a sub for real is kind of cool. But it also makes clear how likely my design would have just collapsed right away at about 10 feet depth. I mean, I planned on using plywood for the hull, ha ha.
"The Making of the Atomic Bomb", by Richard Rhodes, best nonfiction book I ever read. Covers German and Japanese efforts, as well as giving a history of modern physics, beginning way back in the 19th century.
Did you know that Einstein was strikingly muscular, and at one point in his life had been deeply religious, until deciding that much of religion was "lies"?
There is also a frightening history of World War I, of Jews in Europe, and biographies of all the scientists involved in the US nuclear effort. Totally amazing.
I also read "Dark Sun: The Making of the Hydrogen Bomb", and "Masters of Death: The SS-Einsatzgruppen and the Invention of the Holocaust". The latter is deeply, profoundly, sickeningly graphic, and contains more information than you ever knew existed.
Oh boy, an opportunity to flex my AgSci degree on HN! The invention of the HB process is more akin to a planetary credit card loan. It has done absolutely nothing to increase the Earth's *genuine* carrying capacity and (IMO) trapped humanity in insurmountable ecological debt.
It has directly accelerated the mass depletion of countless other finite resources such as water, healthy topsoil, micronutrients, and more. The fact that it has allowed humanity to grow so much without consideration for other finite factors has also contributed to several secondary "loans" which have further trapped us in debt. Most notably, humanity's dependence on mono-culture farming, which ravages biodiversity, creates super pests, and zaps soil health. You simply cannot feed 8 billion people with organic farming methods, and the current energy cost of controlled-climate hydroponics makes it impractical at scale.
As you mentioned, all of this growth also contributes to humanity's collective appetite for, well, everything. Plus the HB process is responsible for 1.4% of global emissions, giant oceanic dead zones caused by runoff, increased acid rain, less nutritious foods overall, and so, so much more. It's a planetary catch-22. Without it billions will starve, but with it we continue to charge towards a mass-extinction event.