This article conveniently omits the reason for the gigantic A-pillars - Other safety regulations that enforce a certain coverage of airbags for the passengers. We can't magically regulate this one away. These kinds of higher order consequences tend to be a really painful, gradual realization.
I would gladly purchase a new vehicle with zero airbags in it if I were allowed to. Especially if the tradeoff is a 50% buff to visibility in the corners. I would also happily sign a form that locks up my vehicle's title for all eternity and prohibits any form of resale to satisfy the safety-at-all-costs extremists who caused this mess in the first place.
Whatever they are using, it is absolutely necessary.
The reason nobody has used high-power electrostatic motors is that they require high electric fields, which would cause the electric breakdown of air and of most fluids. In contrast, the normal electromagnetic motors use high magnetic fields, which do not cause the breakdown of air, so they do not need immersion in an insulating fluid.
It is likely that the fluid they use is some kind of fluorinated hydrocarbon, as those have high breakdown fields. Leaks from such a motor are therefore undesirable, so it would be interesting to know how they prevent leaks between the rotating axle and its bearing. Rotating seals can never be perfect, as users of Wankel engines are well aware; the main reliability problem of Wankel engines has also been the rotating seals.
I assume that nobody has tried before to make such motors because nobody has found a way to prevent the leaks until now.
Perhaps the motors are intended to work only with the axle pointing upwards, in which case gravity would prevent the leaks.
Nice! I'd suggest embedding the simulation in the blog. I had to scroll up and down for a while before finding a link to the actual simulation.
(You might want to pick a value that runs reasonably well on old phones, or have it adjust based on frame rate. Alternatively, just put some links at the top of the article.)
See https://ciechanow.ski/ (very popular on this website) for a world-class example of just how cool it is to embed simulations right in the article.
(Obligatory: back in my day, every website used to embed cool interactive stuff!)
--
Also, I think you can run a particle sim on GPU without WebGPU.
Reminds me of a classic story that makes a good programming parable:
Back in 2011, my girlfriend was working at a catering company that announced shifts via a webpage and workers had to sign up for them. Other workers tended to pick them up very quickly, so it was hard to get too many shifts.
I wrote a quick web scraper to automatically accept any shift offered and email her.
For a couple weeks it was great, suddenly she had all the work she needed.
Then one day she woke up late to find a voicemail telling her she was fired.
Earlier that morning, the script had detected a last-minute job announced just an hour before start time and immediately accepted it, resulting in her not showing up for it. I had not accounted for the possibility that they would announce a job on such short notice, since it had never happened before.
Different types of stretching for different situations. Dynamic (controlled) movements are what you want to use for activities that require you to perform similar movements (e.g. martial arts).
Static stretching is generally not a great way to gain static flexibility. Yes, if you do a lot and hold your stretches for prolonged times you will gain static flexibility but there are much faster methods (e.g. PNF). An important point though is that you need a base of strength as well. I.e. if your muscles are weak you're either not going to gain flexibility or you will increase your chance of injury.
I think the "root" of the lack of knowledge about stretching goes back to high school phys ed classes. The sport science has been there for a long time, though it keeps getting refined, but things like doing quick static stretches as a "warm-up" routine (which does nothing and may increase the chance of injury in the following activities) are just people who don't know the science passing on something they've learnt from people who don't know the science.
I remember reading a book that stated human socialization evolved when the primary interactions were between family groups and small tribal communities. Everyone knew everyone. Socialization with outsiders was formalized and much more time-consuming and awkward.
In the modern era, we are constantly interacting with strangers or people we barely know. Some gregarious people are very good at this, but otherwise it can be a challenge to find common ground for socializing. I don't know if any of this is true, but it strikes me as intuitively true.
Early in my career, with the bravado of youth, I sold myself as a hacker/computer whiz. Now I sell myself as someone who solves interesting problems.
I did game development, then embedded development, more game development, back to embedded, then robotics, then machine vision, then deep learning, then virtual reality, then back to game development (with machine learning), then embedded software in healthcare, then game development again, then vr & computer vision, and back to game development (back-end). And there's probably a few segues into other areas in there I have forgotten to mention.
I maintain multiple online profiles that sell me as a specialist in a specific area, from blogs to single-page websites. And why do I do this? Because when you go to a steak restaurant, you shouldn't expect their pizza selection to be great.
In Feynman's words "specialization is for insects." And I agree, but when selling yourself, specialization closes the deal. Specialize to win the bid, generalize when you've won their trust.
Customers, like patients, usually identify a pain point they have, and they want a specialist to take away that specific pain, be it software or medical. You sell the specialization, you keep that customer coming back with the ability to solve all of their problems.
I liken software development, and especially contract software development, to the first principle of improv (also something I've done): you never say "no", you always follow with "yes, and..."
Yes and it doesn’t make sense to me. They had what would be considered moonshot goals for a car company, a compensation package attached to them, and he hit the numbers. Pay the man!
How they arrived at those numbers is nowhere near as relevant as coming to an agreement and meeting the targets.
It’s the complete opposite of all the situations where the CEO burns down the office tower and jumps out the window with a golden parachute.
In C99 and later you can use a workaround similar to the JS/TS one, with designated initialization for functions that take a struct by value or by pointer.
E.g. for a function:
void my_func(const bla_t* bla);
It would look like this:
my_func(&(bla_t){ .x = 123, .y = 542 });
Designators can be in any order and can also be omitted (missing items are zero-initialized, so the function should be aware of that and replace them with defaults).
C++20 has a much more restricted designated init feature (basically useless for complex structs), but one nice thing in C++ is that default values can be declared right in the struct declaration.
> It's not just Twilio, you would have experienced this with nearly every SMS API provider. Across the different vendors I work with, I received numerous emails from each one advising me to come into compliance or risk my messages being rejected.
From what I can tell, the telcos aren't performing any enforcement, even though the deadlines and extensions are long past. Not to say they won't, but the endeavor seems to have a stall vibe going on.
Some of that might be related to having major mass-campaign requirements placed on small biz who occasionally send a handful of texts (often non-sales) from their workstation phone apps.
At least I hope this is why it has stalled. Sending a few texts during a customer service session shouldn't be regulated as if it were a blast of 1M SMS ads.
This is likely because of industry regulations around A2P (application-to-person) messaging that had a compliance deadline of August 31, 2023.
It's not just Twilio, you would have experienced this with nearly every SMS API provider. Across the different vendors I work with, I received numerous emails from each one advising me to come into compliance or risk my messages being rejected.
The decline of Twilio's position has been pretty clear for the last 18 months now, and has been a topic of conversation longer than that. Twilio never had the margins or control of their environment to nearly the degree you need in order to maintain software-monopoly levels of growth over time.
Most of Twilio's business has been built on reselling network access purchased via SMS consolidators. These are companies who, decades ago, got their hardware installed inside the networks of the major phone carriers. This allows them direct access to send/receive SMSs. Twilio never really tried to own the network layer and these companies continued to demand higher and higher tolls for access. Short codes are a very good example of this. Twilio spent a lot of time and money trying to sew up access to those short codes.
On top of all that, high-volume customers would move directly to these consolidators. Sometimes Twilio would keep some portion of the business if the sender used a round-robin model to distribute sends, but often they didn't. OpenMarket and a handful of others were the go-to providers at scale.
Add to this the overall decline of the utility of SMS. Even as SMS volumes increased, they have not increased at the same rate as messaging overall, i.e. other channels like iMessage and WhatsApp continue to pull volume away.
Cue Twilio's attempt to go upmarket. Flex and the purchase of SendGrid were the best examples of this. Could CPaaS work for Twilio as a business model? I think we've seen so many fits and starts with the Flex product line and the integration of SendGrid that it seemed like perhaps buying their way into that market wasn't panning out.
Finally, as seen in the last year, the culture just seems to have gone off the rails at some point. [1] I don't know Jeff, but my respect for him is sky high. I've read all his writing over the years and he's been hugely influential for an entire generation of founders.
Unfortunately this seems like a bit of a hasty departure. I hope it isn't, but the choice of replacement CEO screams caretaker-CEO rather than shrewd strategic move.
Another commenter mentioned Stripe [2] and how they get a lot of credit for advancing DevTools. I agree that Twilio did it first and I think paved the way. Jeff Lawson and Twilio deserve more credit. "Ask your developer" was not a small thing at all.
I also think Stripe is very likely in a similar position, but they are earlier in their cycle and they've avoided the scrutiny of being a public company. Credit card processors are gatekeepers and fully dominate their markets. Stripe serves a purpose for them and dramatically improves access to their networks via great tooling and abstractions, but there is still a fixed-cost toll to pay, and that toll cannot stay static forever. Is Stripe in the "commerce" market like Twilio is in the "communications" market, or are they in the "payments" market, like Twilio is in the "SMS" market? (or somesuch)
In business, I have four "bad words": just, only, simply, and obviously.
Used in a work context, they are nearly always used in an attempt to diminish the perceived effort of something, so I get very sketched out when anyone (even/especially other programmers) starts throwing them around.
There is one programming language that fascinated me (maybe it was Ada) because it let you put some basic checks inline with the code, by defining constraints on the legitimate results of a function.
For example, you could make a function called `addLaunchThrusterAndBlastRadius` (I know it makes no sense, but bear with me), and then right alongside declaring that it returned an integer, you could put a constraint saying that all results this function returns must be greater than 5, less than 100, and not between 25 and 45. You could also do it when declaring variables - say, `blastRadius` may never be greater than 100 or less than 10, ever, without an exception.
I wish we could go further in that direction. That's pretty cool. Sure, you can get that manually by throwing exceptions, but it was just so elegant that there was no reason not to do it for every function possible.
I'm actually rather negative on automated testing.
Every project that has gotten rid of its QA team to rely on automated tests hasn't exactly gone well. Windows is the obvious example, but there are others. I'm not seeing the quality improvement that should be there. And even when there are tests, elementary mistakes (like the time Windows 10 would delete your Documents folder) slip through the pipeline and screw everything up. How is it that some of the most reliable software in the world (like the moon landing suite, the IRS tax system, stock trading mainframes, or the Windows NT kernel) was written before automated tests, yet some of the buggiest software in the world is the most well-tested (like Windows 10, or Google Drive, or just about every SPA)?
In a team setting, they certainly seem to catch bugs. But is that because the tests have caught so many bugs that would've slipped into production; or is it that the team can afford to be careless now and writes sloppier code to begin with? I'm increasingly suspecting it's the latter, and while the tests catch most of the sloppiness, sometimes the slop passes...
Alternate title: the graph that broke HN's brain. You'll notice that 1) sugar consumption peaked around Y2K and declined after 2) the decline was driven by a decline in consumption of High-Fructose Corn Syrup (the most vilified sugar) specifically, and 3) the average American now consumes about as much added sugar as the average American did in 1970--yet their waistlines are not remotely comparable.
Technically the graph is of per capita added sugar availability and isn't adjusted for loss (due to spoilage, plate waste, etc.), but it meshes with NHANES survey data: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9434277/
>In conclusion, over the 18 year time span, from 2001 to 2018, added sugars intake declined significantly among younger adults (19–50 years) in the U.S., regardless of race and ethnicity (i.e., similar for Black and White individuals), income level, physical activity level or body weight status, and declines were mainly due to reductions in added sugars intake from sweetened beverages (primarily soft drinks and fruit drinks). These trends coincide with the evolving emphasis in the DGA on reducing added sugars intake and the increasing focus on population-level interventions aimed at reducing intakes.
A couple commenters have disagreed with the author saying “That’s one reason for the TCP three-way handshake”. But I’ve been implementing uTP, the UDP protocol that torrents use (https://www.bittorrent.org/beps/bep_0029.html) and it seems to avoid amplification attacks thanks to a three-way handshake.
So the author seems correct to say it’s one (good) reason for the three-way handshake.
(It turns out when you want to spider libgen for all epubs for training data, the most straightforward way to do this, apart from downloading >40TB, is to write your own torrent client that selectively recognizes epub files and fetches only those.)
The uTP spec (BEP 29, linked above) also gives a good overview of why UDP is sometimes the correct choice over TCP in modern times. uTP automatically throttles itself to yield excess bandwidth to any TCP connection that wants it. Imagine trying to write an app that uses all the bandwidth on your network without impacting anyone else on the network. You'd find it quite hard. uTP does it by embedding timestamps in the packets and throttling itself whenever the measured delay rises more than ~100ms over the baseline, which indicates either a connection dropout or bandwidth saturation.
I.e. if your ping suddenly spikes, it’s because someone is hogging all the bandwidth. Normally you have to track down who’s doing it, like a detective hunting a murderer. But uTP knows that it’s the murderer, so it throttles itself back. Presto, perfect bandwidth utilization.
But why bother with UDP? Why can't you do this with a TCP connection? Just measure the time deltas and throttle yourself if they spike, right? Good question, and I don't have a good answer. Perhaps one of you can give a persuasive one, unless you agree that ISPs should just throttle UDP by design.
It’s certainly simpler to solve this at a protocol level, but one could imagine a BEP that adds time deltas to the torrent spec and prevents sending pieces too quickly (“if the deltas spike, send fewer blocks per second"). It might even be simpler than bothering to reimplement TCP over UDP. But perhaps there’s a good reason.
One idea that comes to mind is that the goal is to throttle sends and receives. You control your sends, but you’re at the mercy of the counterparty for receives. You’d need to keep throttle info for every peer, and notice when their pings spike, not just yours. Then you’d send fewer packets. But that’s what uTP does, and that doesn’t answer “Why do it in UDP instead of TCP?”
My two cents, your LinkedIn is pretty bare (~130 connections, few skills/words/fluff). If you want to play the LinkedIn game you need to get 500+ connections (connect with a bunch of recruiters to game the numbers).
Don't advertise that you are looking for a job (recruiters don't like to hire people looking for a job; to them it signals that they can't find one and are inferior to people already employed). Instead, frame yourself as employed and happy.
Message recruiters and let them know you're passively looking at opportunities in the market and wonder what they are looking for.
Beef up the LinkedIn with some recommendations, skills, etc.
Your personal site could use some updates: get a headshot/basic whois/bio on it pointing to your LinkedIn/GitHub/email, and a rundown of some of your projects.
Ageism could be part of the problem as well. A clean shave/haircut/dye wouldn't hurt.
Starfighter was a blast to play, and taught me a lot about the back end of stock trading.
I did a lot of silly things after winning the initial play through.
- Solving the final challenge using a Bayesian spam filter on triples of trade directions.
- Live 3D market visualization in Minecraft
- A buddy of mine built a stock exchange stack speaking the same protocol, and we had PvP contests over who could market-make best on it. I won with a PHP-powered market-making bot with live code reloading.
- Built a cardboard-box controller out of scrap electronics from Adafruit that let me adjust a market-making bot's net position by tilting the box from side to side.
- Was either #2 or #3 to solve all the challenges
On the business side, if I recall correctly, most companies wanted to just shove the winners down their existing bad hiring pipeline process, and a lot of successful contestants who showed up already had great jobs, and so weren't actually looking.
You know, I've been contemplating whether I should lose the pseudonyms and start using my real name for all of my online activity. This story has convinced me that it's something I need to do. That's such a crazy sequence of events that would have never happened if you had been playing under a pseudonym that's hard or impossible to connect back to your resume. All these years, I've been sabotaging my "luck surface area" with my stubborn insistence of online anonymity.
I have a system I use to communicate with about 14,000 connections on LinkedIn. It takes me about six months to work my way through the list, at which point, I start again. I spend about 15 to 20 minutes per day on the outreach. 30 minutes at the outside.
I tend to build little side projects, about once every four months or so, side projects that I can show off to people that are interesting in some way, then use that as my launching point for reaching out. "Hey, I built this interesting thing, it uses the following technologies. This isn't a pitch, I'm not trying to sell you a service, I just thought you might be interested. What are you up to these days? Building anything interesting?"
Leads to an awful lot of potential work, job interviews, availability checks, and coffee meetings. A lot of people ping back, many never respond. This technique has gotten me jobs and work for the past 15+ years, ever since LinkedIn was launched.
My grandfather used to do this. He had notebooks filled with what he did every day of his career. It's really cool to be able to go back and look through his notebooks and see what he was doing in April of 1971.
My notetaking is not as rigorous as my grandfather's, but I do something similar. Every month I make a new manila folder with the month and year on the tab. Each week I write down every project I'm working on and every meeting I have. If someone brings me a new project, I put it on the sheet. Every note I take at meetings and during projects goes in that folder.
At the end of each week I go through the notes from that week and write down anything interesting on that week's note sheet. At the end of the month I go through all the notes and write what I accomplished on the front of each manila folder.
It makes it really easy to keep track of everything you've done, without consciously keeping a journal in the moment. It's also a really easy organizational system. When someone asks, "What did we do for [x] in the past", all I need to do is flip through my folders looking at the front for anything that rings a bell, rather than trying to keep up with a tagged organizational system.
This probably doesn't work if you don't have a file cabinet though.
Side note on escrow - we get asked about this from time to time. We respond that we're happy to provide it on certain plans and quote them for it. No one has ever taken us up on it.
In B2B you never say no; always have an option for what they're asking, and charge more for it.
It's an illusion that 2 can happen without 1. Unless you have financial and entrepreneurial freedom, you will never change anything anywhere.
Whatever social structure you imagine you might navigate, be it business, politics, public opinion, a charitable organization, the world of art, literature or academia; you will always find a pre-existing, entrenched power structure of people calling the shots, controlling key decisions and very unwilling to cut you in, because they either have their own vision to put in practice, or... they simply like the power, status and nice amenities that come with them.
The business of changing the world is the business of power. You either have capital, name recognition, the largest lab, a huge social network of other powerful people in your debt, a massive amount of luck and/or first-mover advantage, etc. Otherwise, the powerful people of the world, often particularly apprehensive about world-changing plans, will just crush you and move on.
Yep, every time I've worked for a manager who wants regular 1:1 meetings "just to touch base", I pretty quickly start feeding them platitudes that I know they want to hear. They are, without exception, holding these meetings because of their own self-confidence issues; if they were concerned about me getting work done, they wouldn't keep pulling me away from it.
Yeah I'm honestly amazed at how many software people don't get that the optimal strategy is cheap fixed price and incredibly high margin change requests.
That being said you need a good spec to do this, and that definitely isn't normally the case in software.
Game development adds another dimension. A musician I know helps small indie artists with music as a side business. His contract states that a fixed number of hours producing the music and effects is free until the game earns some $100k of revenue per month, at which point it starts costing 0.1% of revenue in royalties per month. He has fun with it, it's usually free, but if someone makes the new Minecraft with his music, he will get his share.
You interpret a free opinion article that addresses a general topic as directed at you personally, take offense, then question the author's motives ad hominem. He's doing a little self-promotion, as you do and we all do online, but he's not trying to "change the current consulting landscape" to benefit himself. Who do you think reads his articles? None of my customers will.
It should go without saying to non-amateurs that exceptions happen, no general advice fits every situation, and your mileage may vary.
Then you boast about your own success. You don’t add anything to the conversation, offer no meaningful critique.
Good for you with the high earnings and luxury SUV. For most freelancers (or consultant if you prefer, clients don’t care how you style yourself), especially the amateurs just starting out, the article addresses a very real set of problems. Those less experienced freelancers do face a race to the bottom as commodities. The article might help them think about how they position themselves and structure their projects so they aren’t struggling on freelancer marketplaces.
If readers take nothing else away from the article they might think about adding business value versus selling their time, and having some skin in the game (assuming some risk) as a way to build better relationships with clients and improve their technical and business skills.
Regulators are the reason for this.