Maybe it's spite-driven development, but I'd love to hear about someone who, upon learning that LLMs are suggesting endpoints in their API that don't exist, implements them specifically to respond with a status code[0] of "421: Misdirected Request". Or, for something less snarky and more in keeping with the actual intent of the code, "501: Not Implemented". If the potentially-implied "but it might be, later" of 501 is untenable, I humbly propose this new code: "513: Your Coding Assistant Is Wrong"
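For the spite-driven among us, a minimal sketch of what that could look like, assuming an Express app and some purely hypothetical paths the assistant keeps inventing:

const express = require("express");
const app = express();

// Purely hypothetical endpoints that a coding assistant keeps hallucinating.
const hallucinated = ["/v2/teleport", "/v2/users/mindread"];

for (const path of hallucinated) {
  app.all(path, (req, res) => {
    // 421 Misdirected Request for snark; swap in 501 Not Implemented to be polite.
    res.status(421).json({ error: "Your coding assistant is wrong." });
  });
}

app.listen(3000);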
It's really more about how when I say "I am a teapot", I want people to think "Oh, he's a teapot!" and not "He might be a teapot, or he might be chiding me for misusing LLMs, or he might be signaling that the monkey is out of bananas, or [...]"
What would be an appropriate response code for "He might be a teapot, or he might be chiding me for misusing LLMs, or he might be signaling that the monkey is out of bananas, or [...]"?
Each of those should have a clear, unique response code. There should be no "maybe it's this, maybe it's that". A real-world example is login forms that tell you something like "Invalid e-mail or password".
Are you joking around with me or is my point just not as obvious as I believed it to be?
Edit: Not sure if that last bit sounds confrontational, please know that it's a genuine question.
So we've gone down a bit of a path here, and that's cool :-)
Thank you for taking the time to respond and ask. My original 418 message was very much intended as a light-hearted joke, in the spirit of "if we wanted to return cheeky responses to previously nonexistent APIs that AI invented". I actually like this idea of subverting AI in inventive ways.
Now, to the point we've got to here: yes, I 100% agree that in real-world, production applications, you should return response codes which accurately represent the actual response. But there's also a place for fun, even in production, and 418 represents that for me.
I have this little bookmarklet in my bookmarks bar that I use constantly. It removes all fixed or sticky elements on the page and re-enables y-overflow if it was disabled:
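Something along these lines - a reconstruction of the idea rather than the commenter's exact bookmarklet - saved as a javascript: bookmark and collapsed onto one line:

javascript:(() => {
  // Remove anything pinned to the viewport (fixed or sticky).
  for (const el of document.querySelectorAll("*")) {
    const pos = getComputedStyle(el).position;
    if (pos === "fixed" || pos === "sticky") el.remove();
  }
  // Re-enable vertical scrolling if the page disabled it.
  for (const el of [document.documentElement, document.body]) {
    if (getComputedStyle(el).overflowY === "hidden") el.style.overflowY = "auto";
  }
})();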
Same here. Right-click the page and choose Inspect (or Inspect Element). Click the Console tab, paste this code, and press Enter:
document.getElementById("presence")?.remove();
If you want to know why this is happening in your brain, it's likely a prey/predator identification thing. I would like to think that being so distracted by this just means I have excellent survival instincts :)
Reminded me so much of a game called Chess Royale that I used to play, the avatars and the flags (screenshot [1]). It was really good too; and then Ubisoft being Ubisoft, they killed it even though the game had bots and could have been made single-player.
Isn't this the page that used to have cursors everywhere in the background? I think the distracting design is some intentional running joke at this point.
Same here. I don't have the time or patience to hack the page like the sibling comments suggest. There are more articles on the web than I will ever be able to consume in my lifetime, so I just close the tab and move on when the UX is aggressively bad.
Maybe if the background color on all pages was a heatmap of the current top line of the page, so that you could see where people were reading and how many were reading, it would be better?
Also, what if it played slow and brooding music when fewer people were reading and epic action adventure music when many people were reading it?
How about if the page mined bitcoin, and the first person to enter a page made a higher percentage of the next person's bitcoin and less of the next one, like a multi-level marketing mining strategy?
I literally opened the developer console to delete that element from the page. No surprise somebody who has no idea how to make a readable website is getting bullied by a chatbot.
> We see the same at Instant: for example, we used tx.update for both inserting and updating entities, but LLMs kept writing tx.create instead. Guess what: we now have tx.create, too.
Good. Think of all the dev hours that must’ve been wasted by humans who were confused by this too.
> for example, we used tx.update for both inserting and updating entities, but LLMs kept writing tx.create instead. Guess what: we now have tx.create, too.
If a function can both insert and update, it should be called "put". Using "update" is misleading.
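For illustration, a hypothetical in-memory store (not InstantDB's actual API) showing why the name matters:

// Hypothetical in-memory store, purely to illustrate the naming argument.
const store = new Map();

// "update" reads as "this entity must already exist":
function update(id, fields) {
  if (!store.has(id)) throw new Error(`no entity ${id}`);
  store.set(id, { ...store.get(id), ...fields });
}

// "put" reads as "insert it if it's new, update it otherwise" (upsert):
function put(id, fields) {
  store.set(id, { ...(store.get(id) || {}), ...fields });
}

put("todo-123", { text: "buy milk" }); // inserts
update("todo-123", { done: true });    // updates
// update("todo-456", { done: true }); // would throw: nothing to update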
Implement all of them, with slightly different edge cases that result in glaringly obvious RCE when two or three of them are misused in place of each other.
(New startup pitch: Our agentic AI scans your access and error logs, and automatically vibe codes new API endpoints for failed API calls and pushes them to production within seconds, all without expensive developers or human intervention! Please form an orderly queue with your termsheets and Angel Investment cheques.)
I wrote some joke code long ago for a library that does everything. You wrap all function calls in a try that submits the entire script as a bug report. The animatedBackgroundWithUnicorns() is then implemented... eh, I mean, the bug is fixed.
It might actually work if the subscription is expensive enough.
... it depends on how the given server defines PUT, of course (also how we define upsert, and does upsert make sense outside a DB setting? Well, of course, some DBs have an HTTP/REST API.)
that said, usually PUT is meaningful for a URL (or URI), and in these circumstances it's like upsert, and it cannot blow away other keys, as it operates on one key, given by the URL
of course if we assume a batch endpoint, sure, it then can do non-upsert-like things
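A rough Express sketch of that reading of PUT - upsert keyed by the URL, touching only that one key (hypothetical resource name and in-memory store):

const express = require("express");
const app = express();
app.use(express.json());

const widgets = new Map(); // hypothetical in-memory store keyed by id

app.put("/widgets/:id", (req, res) => {
  const existed = widgets.has(req.params.id);
  widgets.set(req.params.id, req.body); // replaces only this one key, nothing else
  res.status(existed ? 200 : 201).end(); // 201 when the PUT created the resource
});

app.listen(3000);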
Sorry, we will reach the heat death of the universe before I alter a single line of code simply because some LLM somewhere extruded incorrect synthetic text. That is so bonkers, I feel offended I even need to point out how bonkers it is.
Recently I had an interesting chat with my team around coding principles of the future.
I think the way people will write code will not be about following SOLID principles or making sure your cyclomatic complexity is high or low, nor will it be about whether your code is readable or not.
I think future coding principles will be about whether your agentic IDE can index your code well enough to become context-aware, and whether it fits into the context window or not. They will be about the model you use and the code it can generate. We will index less on maintainability, as code becomes disposable and the rate of change increases dramatically. They will be about whether your vibed prompts match the code that's already been generated, to reach some accuracy or generate enough serendipity.
If it were somehow a human that was consistently and confidently handing out made up programming advice about one's products, would companies still respond by just adding whatever imagined feature and writing a vaguely bemused blog post about it?
Maybe I can start pretending I’m an LLM and see if that gets me a pass when I make silly mistakes or hallucinate in entirely the wrong direction. As long as I look confident doing so.
This feels like the beginning of a wonderful friendship between me and the LLMs. I work as a fractional CTO. One of the things that frustrates me is when my clients have various idiosyncratic naming conventions on things, e.g., there's a ”dev” and a ”prod” environment on AWS, but then there's a ”test” and ”production” environment in Expo. It just needlessly consumes brain cycles, especially when you're working with multiple clients. I guess it's the same for the LLMs, just on a massive scale.
In general I think it’s great whenever some weight / synapse strength bits can be reallocated from idiosyncratic API naming / behavior towards real semantics.
As the old joke goes: there are two hard problems in computer science - cache invalidation, naming things and off-by-one errors
Naming things doesn’t get easier just because you bring an LLM to do it based on an incoherent stochastic process.
Have you asked why those environments have not been renamed to align? As a former CTO, I'd see it immediately as a signal of poor communication, poor standards adoption, or both. It's this low-hanging stuff that you can fix relatively easily, and that work is where you actually make the culture better and make people care more.
Don’t outsource things you should care about a lot. Naming things is something you shouldn’t be hand waving away to a model.
Sure, I can spend my days doing that. But I appreciate the help (from the LLMs). And I think we actually have the same goal function: we want to make naming more compressible, less unexpected. You can call that culture (and it is) but you can also see it as pure information theory.
In postmodern societies, reality itself is structured by simulation—"codes, models, and signs are the organizing forms of a new social order where simulation rules".
The bureaucratic and legal apparatus you invoke are themselves caught up in this regime. Their procedures, paperwork, and legitimacy rely on referents—the "models" and "simulacra" of governance, law, and knowledge—that no longer point back to any fundamental, stable reality. What you serve, in effect, is the system of signification itself: simulation as reality, or—per Baudrillard—hyperreality, where "all distinctions between the real and the fictional, between a copy and the original, disappear".
"The spectacle is not a collection of images but a social relation among people, mediated by images." (Debord) Our social relations, governance, and even dissent become performances staged for the world's endless mediated feedback loop.
In this age, according to Heidegger, "everything becomes a 'picture', a 'set-up' for calculation, representation, and control." The machine is not just a device or a bureaucratic protocol—it is the mode of disclosure through which the world appears, and your sense of selfhood and agency are increasingly products (and objects) within this technological enframing.
Is there a general name and framing we could apply to these “AI” that is equally as accurate but sheds all of the human biases associated with the terms?
Like… it’s just a really, really, really good autocomplete and sometimes I find thinking of it that way cleans up my whole mental model for its use.
I like something related to "interns" (artificial interns?) because it keeps the implication that you still always have to double-check, review and verify the work they did.
Does that actually clean up your mental model though? At some number of "reallys" that autocomplete starts to sound like intelligence. Like, what is "taking customer requirements and turning them into working code" if not just really really really really really really really good autocomplete with this mental model?
A lot of people are just doing the job of a really good autocomplete, not being asked to make many, if any, nontrivial decisions in their job.
Taking requirements and making working code is something some models are adequate at. It’s all the stuff around that, which I think holds the value, such as deciding things like when the requirements are wrong.
It's really difficult because many of the task types we use AI for are those that are linguistically tied to concepts of human actions and cognition. Most of our convenient language use implies that AI are thinking people.
I rented a car recently for a trip to Arizona that had lane keeping on by default. The highway I was traveling on was undergoing extensive repair. Not only did the car sound audible alarms with some frequency, since the highway had been rerouted in places using traffic cones, it also constantly tried to veer the car back into “the lane.” Since the lane was in some places just a hole, the consequences would have been bad. I ended up pulling over and fishing through the menus until I found a way to turn it all off.
It appears that there’s a very long tail of exceptional circumstances that must be handled with autonomous driving.
IMHO, lane keeping is a misfeature. I own one car where it is impossible to turn off without also turning off the lane departure warning (arguably a somewhat useful feature).
Yep, I wouldn't like it either - changing lanes requires increased attention, and now, during the maneuver, your steering wheel starts to vibrate out of the blue.
That isn't an argument about using the blinker; it's about the way the assist is implemented in this case - it doesn't help directly with the blinker; instead it punishes you, and thus stresses and conditions you into the instinct to use the blinker next time. Probably a net positive for the driver and society, demonstrating again that forcing individual submission is an effective path to social harmony.
And the blinker is just a very mild use case. LLMs can already, in some cases, and will increasingly be able to, recognize when your behavior isn't legal and/or isn't very moral (say, by hearing what you say and seeing what you text on the phone, and recognizing, for example, a drug purchase - pardon the primitive simplicity, it is just a caricaturish example for illustration purposes only - and we've already established a tendency of LLMs to rat you out to authorities), and thus an LLM can act to warn you about or even prevent your actions, and/or report you to authorities, probably even before you actually commit anything.
No, most of the time you're breaking traffic laws and increasing the chance of a collision, far more often than the off chance of needing to make such a maneuver to avoid an accident. The societal cost of collisions is worth more than your freedoms. Or you should pay higher premiums for turning those safety features off.
> No, most of the time you're breaking traffic laws and increasing the chance of a collision, far more often than the off chance of needing to make such a maneuver to avoid an accident
For all you know I need to exit my lane in a hurry to avoid a collision. The car doesn't have the same context that the driver has. It only cares about staying between two painted lines, it might not have any idea about a truck coming straight at me going the other direction
> The societal cost of collisions is worth more than your freedoms
If a semi is in my lane barrelling toward me I'm not obligated to just accept death so I don't endanger anyone else by accident by swerving to avoid it
The fact is that human drivers have a lot more information and awareness than a handful of sensors installed by idiot engineers that think the only bad thing that ever happens when driving is that someone changes lanes without signalling
It vibrates and tries to gently guide you. It will absolutely not overpower you if you are swerving in an emergency. You are talking hypothetical nonsense.
You are literally too lazy to move a single finger. You are a bad driver. Being "in a hurry" makes no sense either, turning on your blinkers should be ingrained in your muscle memory and take no additional effort.
And I say that makes no sense. If you use your blinkers, lane assist doesn't get in the way. So do what you should be doing anyway and use your blinkers.
To those who still believe that a bunch of data loaded into memory - where the data can be anything from a scientific article to a message between two lovers - triggered to produce output from input with a basic for loop, can represent anything like intelligence, I have some bad news for you: damn, y'all, don't you know git(hub) & huggingface? Of course, the drawback of that is that you are not contributing to AGI KEK!
[0]: https://en.wikipedia.org/wiki/List_of_HTTP_status_codes