
What does "readability" mean? It's mentioned in several of these jokes: "binary readability", etc.

It's a formal certification that you know enough of X to submit code to the codebase, where X can be C++, Java, Python, etc. If you don't have X readability, then, in addition to your main code reviewer, you need a readability reviewer to look at your code; they will focus only on X, not on the logic of your change.

readability grants a code reviewer the power to approve changes in a specific language

A bit more detail: long ago, before the time of the Great Brace War, people wrote code using all sorts of styles: tabs/spaces, short or long variable/function names, Hungarian notation or not, comments or not, long blocks or short, and so on. You could often tell which of your teammates wrote a block of code simply by newline usage. This made it hard to read and contribute code in a less familiar part of the codebase.

Google mitigated this problem at scale by introducing the concept of "readability" for each language (C++, Java, and Python, simpler times). If you had readability, you could approve another person's change for that language. If you didn't, you could still review the code, but the reviewee also had to go find someone else with readability.

After you'd accumulated a certain number of commits in a language, you could try to get your own readability by bundling up your CLs (changelists) and submitting them to the readability queue, where a certain group of senior people with readability would evaluate whether your code was sufficiently idiomatic. This process could take months, or even years if you didn't write much in a particular language. In any event, it felt like a real achievement to get readability.

The upshot of all this was that most code at Google felt authorless in a good way: if you knew and expected Google style, the whole codebase felt the same, and you could concentrate on the logic flow rather than the regional dialect. And, needless to say, all the energy wasted arguing about whose style was best was squelched in the readability queue.

This is all past tense because I last wrote code for Google more than 10 years ago, and I'm sure the process has changed since then. Code formatters, shared editor defaults, and presubmit checks have surely automated away a lot of the toil, and there's much less of a monorepo culture these days, so there are probably more style dialects (in addition to many, many more permitted languages).


I left in 2023. From memory at that time:

1) Go kinda eliminates a lot of the reasons for style guides to exist with `go fmt`, and separately, I think the Go readability team made a conscious decision to approach readability differently anyway,

2) ... once you had a couple people on your local team w/ readability, if your team wanted to do things a certain way, well, style guides and formal readability weren't really an issue anymore.


I was wondering if someone was going to ask. It's the most bizarre aspect of code reviews at Google.

And "Readability" doesn't mean you are good at a language, it means you are good at it in the way Google uses it. C++ readability is the poster child of this. Borgcron, not so much.


What a comprehensive, well-written article. Well done!

The author traces the evolution of web technology from Notepad-edited HTML to today.

My biggest difference with the author is that he is optimistic about web development, while all I see is a shaky tower of workarounds upon workarounds.

My take is that the web technology tower is built on the quicksand of an out-of-control web standardization process that has been captured by a small cabal of browser vendors. Every single step of history that this article mentions is built to paper over some serious problems instead of solving them, creating an even bigger ball of wax. The latest step is generative AI tools that work around the crap by automatically generating code.

This tower is the very opposite of simple and it's bound to collapse. I cannot predict when or how.


I was also impressed; I read the whole thing and it filled a lot of gaps in my history-of-the-web knowledge. And I also agree that the uncritical optimism is the weak point; the article reads like a just-so story about how things are bound to keep getting more and more wonderful.

But I don't agree that the system is bound to collapse. Rather, as I read the article, I got this mental image of the web of networked software+hardware as some kind of giant, evolving, self-modifying organism, and the creepy thing isn't the possibility of collapse, but that, as humans play with their individual lego bricks and exercise their limited abilities to coordinate, through this evolutionary process a very big "something" is taking shape that isn't a product of conscious human intention. It's not just about the potential for individual superhuman AIs, but about what emerges from the whole ball of mud as people work to make it more structured and interconnected.


I really like this author's summary of the 1983 Bainbridge paper about industrial automation. I have often wondered how to apply those insights to AI agents, but I was never able to summarize it as well as OP.

Bainbridge by itself is a tough paper to read because it's so dense. It's just four pages long and worth following along:

https://ckrybus.com/static/papers/Bainbridge_1983_Automatica...

For example, see this statement in the paper: "the present generation of automated systems, which are monitored by former manual operators, are riding on their skills, which later generations of operators cannot be expected to have."

This summarizes the first irony of automation, which is now familiar to everyone on HN: using AI agents effectively requires an expert programmer, but to build the skills of an expert programmer, you have to do the programming yourself.

It's full of insights like that. Highly recommended!


I think it's even more pernicious than the paper describes, since cultural outputs, art, and writing aren't done to solve a problem; they're expressions that don't have a pure utility purpose. There's no "final form" for these things, and they change constantly, like language.

All of these AI outputs are both polluting the commons where they pulled all their training data AND are alienating the creators of these cultural outputs via displacement of labor and payment, which means that general purpose models are starting to run out of contemporary, low-cost training data.

So either training data is going to get more expensive because you're going to have to pay creators, or these models will slowly drift away from the contemporary cultural reality.

We'll see where it all lands, but it seems clear that this is a circular problem with a time delay, and we're just waiting to see what the downstream effect will be.


> All of these AI outputs are both polluting the commons where they pulled all their training data AND are alienating the creators of these cultural outputs via displacement of labor and payment

No dispute on the first part, but I really wish there were numbers available somehow to address the second. Maybe it's my cultural bubble, but it sure feels like the "AI Artpocalypse" isn't coming, in part because of AI backlash in general, but more specifically because people who are willing to pay money for art seem to strongly prefer that their money goes to an artist, not a GPU cluster operator.

I think a similar idea might be persisting in AI programming as well, even though it seems like such a perfect use case. Anthropic released an internal survey a few weeks ago saying that the vast majority, something like 90%, of their own workers' AI usage was spent explaining and learning about things that already exist, or doing little one-off side projects that otherwise wouldn't have happened at all because of the overhead: building little dashboards for a single dataset or something, stuff where the outcome isn't worth the effort of doing it yourself. For everything that actually matters and would be paid for, the premier AI coding company is using people to do it.


I guess I'm in a bubble, because it doesn't feel that way to me.

When AI tops the charts (in country music) and digital visual artists have to basically film themselves working to prove that they're actually creating their art, it's already gone pretty far. It feels like even when people care (and the great mass do not), it creates problems for real artists. Maybe they will shift to some other forms of art that aren't so easily generated, or maybe they'll all just do "clean up" on generated pieces and fake brush sequences. I'd hate for art to become just tracing the outlines of something made by something else.

Of course, one could say the same about photography where the art is entirely in choosing the place, time, and exposure. Even that has taken a hit with believable photorealistic generators. Even if you can detect a generator, it spoils the field and creates suspicion rather than wonder.


Is AI really topping the charts in country music?

https://youtu.be/rGremoYVMPc?si=EXrmyGltrvo2Ps8E


> more specifically because people who are willing to pay money for art seem to strongly prefer that their money goes to an artist, not a GPU cluster operator.

Look at furniture. People will pay a premium for handcrafted furniture because it becomes part of the story of the result, even when Ikea offers a basically identical piece (with their various solid-wood items) at a fraction of the price and with a much easier delivery process.

Of course, AI art also has the issue that it's effectively impossible to actually dictate details exactly like you want. I've used it for no-profit hobby things (wargames and tabletop games, for example), and getting exact details for anything (think "fantasy character profile using X extensive list of gear in Y specific visual style") takes extensive experimentation (most of which can't be generalized well since it depends on quirks of individual models and sub-models) and photoshopping different results together. If I were doing it for a paid product, just commissioning art would probably be cheaper overall compared to the person-hours involved.


> people who are willing to pay money for art seem to strongly prefer that their money goes to an artist, not a GPU cluster operator

Businesses which don't want to pay money strongly prefer AI.


Yeah, but if they use AI to do, for example, their design or marketing materials, then the public seems to dislike that. But again, no numbers; that's just how it feels to me.


After enough time, exposure and improvement of the technology I don’t think the public will know or care. There will be generations born into a world full of AI art who know no better and don’t share the same nostalgia as you or I.


Then they get a product that legally isn't theirs and anyone can do anything with it. AI output isn't anyone's IP, it can't be copyrighted.


What's hilarious is that, for years, the enterprise shied away from open source due to legal concerns. But now... With AI, even though everyone knows that copyrighted material was stolen by every frontier provider, the enterprise is like: stolen copyright that can potentially allow me to get rid of some pesky employees? Sign us up!


Yup, there's this angle that's been a 180, but I'm referring to the fact that the US Copyright Office determined that AI output isn't anyone's IP.

Which in itself is an absurdity, where the accumulated body of the world's copyrighted content is compiled and used to spit out content that somehow belongs to no one.


No difference from e.g. Shutterstock, then?

I think most businesses using AI illustrations are not expecting to copyright the images themselves. The logos and words that are put on top of the AI image are the important bits to have trademarked/copyrighted.


I guess I'm looking at it from a software perspective, where code itself is the magic IP/capital/whatever that's crucial to the business, and replacing it with non-IP anyone can copy/use/sell would be a liability and weird choice.


Art is political more than it is technical. People like Banksy’s art because it’s Banksy, not because he creates accurate images of policemen and girls with balloons.


I think "cultural" is a better word there than "political."

But Banksy wasn't originally Banksy.

I would imagine that you'll see some new heavily-AI-using artists pop up and become name brands in the next decade. (One wildcard here could be if the super-wealthy art-speculation bubble ever pops.)

Flickr, etc, didn't stop new photographers from having exhibitions and being part of the regular "art world" so I expect the easy availability of slop-level generated images similarly won't change that some people will do it in a way that makes them in-demand and popular at the high end.

At the low-to-medium end there are already very few "working artists" because of a steady decline after the spread of recorded media.

Advertising is an area where working artists will be hit hard but is also a field where the "serious" art world generally doesn't consider it art in the first place.


Not often discussed is the digital nature of this all as well. An LLM isn't going to scale a building to illegally paint a wall: one, because it can't, and two, because the people interested in performance art like that are not bound by corporate interests. Most of this push for AI art is going to come from commercial entities doing low-effort digital stuff for money, not craft.

Musicians will keep playing live, artists will keep selling real paintings, sculptors will keep doing real sculptures etc.

The internet is going to suffer significantly for the reasons you point out. But the human aspect of art is such a huge component of creative endeavours, the final output is sometimes only a small part of it.


Mentioning people like Banksy at all is missing the point though. It makes it sound like art is about going to museums and seeing pieces (or going to non-museums where people like Banksy made a thing). I feel like, particularly in tech circles, people don’t recognize that the music, movies and TV shows they consume are also art, and that the millions of people who make those things are very legitimately threatened by this stuff.

If it were just about “the next Banksy” it would be less of a big deal. Many actors, visual artists, technical artists, etc make their living doing stock image/video and commercials so they can afford rent while keeping their skills sharp enough to do the work they really believe in (which is often unpaid or underpaid). Stock media companies and ad agencies are going to start pumping out AI content as soon as it looks passable for their uses (Coca Cola just did this with their yearly Christmas ad). Suddenly the cinematographers who can only afford a camera if it helps pay the bills shooting commercials can’t anymore.

Entire pathways to getting into arts and entertainment are drying up, and by the time the mainstream understands that it may be too late, and movie studios will be going “we can’t find any new actors or crew people. Huh. I guess it’s time to replace our people with AI too, we have no choice!”


> I think "cultural" is a better word there than "political."

Oh. What is the difference?


I’d say in this context that politics concerns stated preferences, while culture labels the revealed preferences. Also makes the statement «culture eats policy for breakfast» make more sense now that I’ve thought about it this way.


I'd distinguish between physical art and digital art tbh. Physical art has already grappled with being automated away with the advent of photography, but people still buy physical art because they like the physical medium and want to support the creator. Digital art (for one off needs), however, is a trickier place since I think that's where AI is displacing. It's not making masterpieces, but if someone wanted a picture of a dwarf for a D&D campaign, they'd probably generate it instead of contracting it out.


Right, but the question then is, would it actually have been contracted out?

I've played RPGs, I know how this works: you either Google image search for a character you like and copy/paste and illegally print it, or you just leave that part of the sheet blank.

So it's analogous to the "make a one-off dashboard" type uses from that programming survey: the work that's being done with AI is work that otherwise wouldn't have been done at all.


> AND are alienating the creators of these cultural outputs via displacement of labor and payment

YES. Thank you for these words. It's a form of ecological collapse. Though to be fair, the creative ecology has always operated at the margins.

But it's a form of library for challenges in the world, like how a rainforest is an archive of genetic diversity, with countless applications like antibiotics. If we destroy it, we lose access to the library, to the archive, just as the world is getting even more treacherous and unstable and is in need of creativity.


> So either training data is going to get more expensive because you're going to have to pay creators, or these models will slowly drift away from the contemporary cultural reality.

Nah, more likely is that contemporary cultural reality will just shift to accept the output of the models and we'll all be worse off. (Except for the people selling the models, they'll be better off.)

You'll be eating nothing but the cultural equivalent of junk food, because that's all you'll be able to afford. (Not because you don't have the money, but because artists can't afford to eat.)


> I think it's even more pernicious than the paper describes as cultural outputs, art, and writing aren't done to solve a problem, they're expressions that don't have a pure utility purpose. There's no "final form" for these things, and they change constantly, like language.

Being utilitarian and having a "final form" are orthogonal concepts. Individual works of art do usually have a final form: it's what you see in museums and cinemas, or buy in book stores. It may not be the ideal the artist had in mind, but the artist needs to say "it's done" for the work to be put in front of an audience.

Contrast that with the most basic form of purely utilitarian automation: a thermostat. A thermostat's job is never done; it doesn't even have a definition of "done". A thermostat is meant to control a dynamic system, toiling forever to keep the inputs (temperature readings) within a given envelope by altering the outputs (heater/cooler power levels).
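
A minimal sketch of that never-done loop, just to make the contrast concrete (read_temperature and set_heater are hypothetical stand-ins for a real sensor and actuator):

    # Minimal sketch of the thermostat-style control loop described above.
    # read_temperature() and set_heater() are hypothetical stand-ins.
    SETPOINT = 21.0    # target temperature (degrees C)
    DEADBAND = 0.5     # hysteresis to avoid rapid on/off cycling

    def thermostat_loop(read_temperature, set_heater):
        heater_on = False
        while True:                          # no terminal state: the job is never "done"
            temp = read_temperature()
            if temp < SETPOINT - DEADBAND:
                heater_on = True
            elif temp > SETPOINT + DEADBAND:
                heater_on = False
            set_heater(heater_on)            # the output feeds back into the next reading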

I'd go as far as saying that of the two kinds, the utilities that are like thermostats are the more important ones in our lives. People don't appreciate, or even recognize, the dynamic systems driving their everyday lives.


Yes! One could argue that we might end up with programmers (experts) going through training in creating software manually first, before becoming operators of AI, and then regularly spending some of their working time (10-20%?) on keeping those skills sharp, by working on purely educational projects in the old-school way; but it begs the question:

Does it then really speed us up and generally make things better?


This is a pedantic point no longer worth fighting for but "begs the question" means something is a circular argument, and not "this raises the question"

https://en.wikipedia.org/wiki/Begging_the_question


No it doesn’t. The meaning of that phrase has changed. Almost nobody uses the original meaning anymore. Update your dictionary.


>skills, which later generations of operators cannot be expected to have.

You can't ring more true than this. For decades now.

For a couple years there I was able to get some ML together and it helped me get my job done, never came close to AI, I only had kilobytes of memory anyway.

By the time 1983 rolled around I could see the writing on the wall, AI was going to take over a good share of automation tasks in a more intelligent way by bumping the expert systems up a notch. Sometimes this is going to be a quantum notch and it could end up like "expertise squared" or "productivity squared" [0]. At the rarefied upper bound. Using programmable electronics to multiply the abilities of the true expert whilst simultaneously the expert utilized their abilities to multiply the effectiveness of the electronics. Maybe only reaching the apex when the most experienced domain expert does the programming, or at least runs the show.

Never did see that paper, but it was obvious to many.

I probably mentioned this before, but that's when I really buckled down for a lifetime of experimental natural science across a very broad range of areas which would be more & more suitable for automation. While operating professionally within a very narrow niche where personal participation would remain the source of truth long enough for compounding to occur. I had already been a strong automation pioneer in my own environment.

So I was always fine regardless of the overall automation landscape, and spent the necessary decades across thousands of surprising edge cases getting an idea how I would make it possible for someone else to even accomplish some of these difficult objectives, or perhaps one day fully automate. If the machine intelligence ever got good enough. Along with the other electronics, which is one of the areas I was concentrating on.

One of the key strategies did turn out to be outliving those who had extensive troves of their own findings, but I really have not automated that much. As my experience level becomes less common, people seem to want me to perform in person with greater desire every decade :\

There's related concepts for that too, some more intelligent than others ;)

[0] With a timely nod to a college room mate who coined the term "bullshit squared"


> By the time 1983 rolled around

That early? There were people claiming that back then, but it didn't really work.


>people claiming that back then, but it didn't really work.

Roger. You could also say that's true today.

Seems like there was always some consensus about miracles just around the corner, but a whole lot wider faith has built up by now.

I thoroughly felt like AI was coming fast because I knew what I would do if I had all that computer power. But to all appearances I ran the other way since that was absurdly out-of-reach, while at the same time I could count on those enthusiasts to carry the ball forward. There was only a very short time when I had more "desktop" (benchtop) computing power to dedicate than almost any of my peers. I could see that beginning to reverse as the IBM PC began to take hold.

Then it became plain to see the "brain drain" from natural science as the majority of students who were most capable logically & mathematically gravitated to computer science of some kind instead. That was one of the only growth opportunities during the Reagan Recession so I didn't blame them. For better or worse I wasn't a student any more and it was interesting to see the growth money rain down on them, but I wasn't worried and stuck with what I had a head start in. Mathematically, there was going to be a growing number of professionals spending all their time on computers who would have otherwise been doing it with natural science, with no end in sight. Those kinds of odds were in my favor if I could ante up long enough to stay in the game.

I had incredible good fortune coming into far more tonnes of scientific electronics than usual, so my hands were full simply concentrating on natural science efforts. By that time I figured that if that was going to come together with AI some day, I would want to be ready.

In the '90s the neural-net people had some major breakthroughs. After I had my own company they tried to get a fit, but not near the level of perfection needed. I knew how cool it would be though. I even tried a little sophomore effort myself after I had hundreds of megabytes, but there was an unfortunate crash that had nothing to do with it.

One of the most prevalent feelings the whole time is I hope I live long enough to see the kind of progress I would want :\

While far more people than me have always felt that it already arrived.

In the meantime, whether employed or as an entrepreneur, doing the math says it would have been more expensive to automate rather than do so much manual effort over the decades.

But thousands of the things I worked on, the whole world could automate to tremendous advantage, so I thought it would be worth it to figure out how, even if it took decades :)


I kinda fear that this is an economic plane stall: we're tilting upward so much, and the underlying conditions are about to dissolve.

And I'd add that the recent LLM magic (I admit they've reached a maturity level that is hard to deny) is also a two-edged sword. They don't create abstractions often; they create a very well made set of byproducts (code, conf, docs, else) to realize your demand, but people right now don't need to create new improved methods, frameworks, or paradigms because the LLM doesn't have our mental constraints. (Maybe later reasoning LLMs will tackle that, plausibly.)


The author's conclusion feels even more relevant today: AI automation doesn't really remove human difficulty; it just moves it around, often making it harder to notice and more risky. And even after a human steps in, there's usually a lot of follow-up and adjustment work left to do. Thanks for surfacing these uncomfortable but relevant insights.


Sanchez's Law of Abstraction comes to mind: https://news.ycombinator.com/item?id=22601623


>the present generation of automated systems, which are monitored by former manual operators, are riding on their skills, which later generations of operators cannot be expected to have.

But we are in the later generation now. All the 1983 operators are now retired, and today's factory operators have never had the experience of 'doing it by hand'.

Operators still have skills, but it's 'what to do when the machine fails' rather than 'how to operate fully manually'. Many systems cannot be operated fully manually under any conditions.

And yet they're still doing great. Factory automation has been wildly successful and is responsible for why manufactured goods are so plentiful and inexpensive today.


It's not so simple. The knowledge hasn't been transferred to future operators, but to process engineers who are now in charge of making the processes work reliably through even more advanced automation that requires more complex skills and technology to develop and produce.


No doubt, there are people that still have knowledge of how the system works.

But operator inexperience didn't turn out to be a substantial barrier to automation, and they were still able to achieve the end goal of producing more things at lower cost.


I mean, how did you get an expert programmer before? Surely it can't be harder to learn to program with AI than without AI. It's written in the book of resnet.

You could swap out AI with Google or Stack Overflow or documentation or Unix…


The same argument was made about needing to be an expert programmer in assembly language to use C, and then the same for C and Python, and then Python and CUDA, and then Theano/Tensorflow/Pytorch.

And yet here we are, able to talk to a computer, that writes Pytorch code that orchestrates the complexity below it. And even talks back coherently sometimes.


Those are completely deterministic systems, of bounded scope. They can be ~completely solved, in the sense that all possible inputs fall within the understood and always correctly handled bounds of the system's specifications.

There's no need for ongoing, consistent human verification at runtime. Any problems with the implementation can wait for a skilled human to do whatever research is necessary to develop the specific system understanding needed to fix it. This is really not a valid comparison.


There are enormous microcode, firmware, and driver blobs everywhere on any pathway. Even with the very privileged access of someone at Intel or NVIDIA, the ability to have a reasonable level of deterministic control over systems that involve CPU/GPU/LAN has been gone for almost a decade now.


I think we're using very different senses of "deterministic," and I'm not sure the one you're using is relevant to the discussion.

Those proprietary blobs are either correct or not. If there are bugs, they fail in the same way for the same input every time. There's still no sense in which ongoing human verification of routine usage is a requirement for operating the thing.


No, that is a terrible analogy. High level languages are deterministic, fully specified, non-leaky abstractions. You can write C and know for a fact what you are instructing the computer to do. This is not true for LLMs.


I was going to start this with "C's fine, but consider more broadly: one reason I dislike reactive programming is that the magic doesn't work reliably and the plumbing is harder to read than doing it all manually", but then I realised:

While one can in principle learn C as well as you say, in practice there's loads of cases of people getting surprised by undefined behaviour and all the famous classes of bug that C has.


There is still the important difference that you can reason with precision about a C implementation’s behavior, based on the C standard and the compiler and library documentation, or its source or machine code when needed. You can’t do that type of reasoning for LLMs, or only to a very limited extent.


Maybe, but buffer overflows would occur in assembler written by experts as well. C is a fine portable assembler (it could probably be better with the knowledge we have now), but programming is hard. My point: you can roughly expect an expert C programmer to produce as many bugs per unit of functionality as an expert assembly programmer.

I believe it to be likely that the C programmer would even write the code faster and better because of the useful abstractions. An LLM will certainly write the code faster, but it will contain more bugs (IME).


>And yet here we are, able to talk to a computer, that writes Pytorch code that orchestrates the complexity below it.

It writes something that's almost, but not quite, entirely unlike Pytorch. You're putting a little too much value on a simulacrum of a programmer.


For others who might be as confused as me:

GNU Unifont is a bitmap font. It provides a fixed glyph for every code point in the BMP. It also covers additional code points in other planes.

I am guessing this is useful for writing editors that can edit Unicode text without knowing anything about various languages and their conventions. Authors who try to use this font to compose documents in (say) Devanagari will have to learn the Unicode characters "in the raw": since I don't see a shaper for Devanagari, they won't get feedback that looks like real text.

If anyone can explain this better, please do!


And BMP in this context is not BitMap but the Unicode Basic Multilingual Plane (BMP), the first 65,536 code points of Unicode.


Amusingly, here it is also BitMap [1]. Why they use an obsolete uncompressed proprietary format instead of PNG, I don't know.

Edit: looks like it's because BMP supports 1-bit packed pixels and ~~PNG doesn't~~ (Edit to edit: this is wrong). The file sizes are almost identical; the 8x difference in the number of bits is exactly balanced by PNG compression! On the other hand, PBM [2] would've been a properly Unixy format, and trivial to decode, but I guess "the browser knows how to render it" is a pretty good argument for BMP. macOS Preview, BTW, supports all the NetPBM formats, which I did not expect.

[1] eg. https://unifoundry.com/pub/unifont/unifont-17.0.03/unifont-1...

[2] https://en.wikipedia.org/wiki/Netpbm
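
For what it's worth, a binary PBM (P4) really is about as simple as an image format gets: a two-line ASCII header followed by packed rows of bits. A rough Python sketch of a writer, purely to illustrate the point (not anything Unifont actually ships):

    # Illustrative only: write a 1-bit image as binary PBM (P4).
    # 'pixels' is a list of rows of 0/1 values (1 = black), a hypothetical input.
    def write_pbm(path, pixels):
        height, width = len(pixels), len(pixels[0])
        with open(path, "wb") as f:
            f.write(b"P4\n%d %d\n" % (width, height))     # magic number, then dimensions
            for row in pixels:
                byte, packed = 0, bytearray()
                for i, bit in enumerate(row):
                    byte = (byte << 1) | (bit & 1)        # pack bits MSB-first
                    if i % 8 == 7:
                        packed.append(byte)
                        byte = 0
                if width % 8:                             # pad the final partial byte
                    packed.append(byte << (8 - width % 8))
                f.write(bytes(packed))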


Maybe they set everything up before PNG was popular and never changed the workflow since then (or didn't care enough about the website to adjust anything)? After all, PNG is only about 2 years younger than the font.


That's plausible. Or maybe they just liked the BMP vs. BMP coincidence.


> Edit: looks like it's because BMP supports 1-bit packed pixels and PNG doesn't. The file sizes are almost identical

That's nonsense, PNG supports 1-bit pixels just fine, and the resulting file is a lot smaller (when using ImageMagick):

    $ file unifont-17.0.03.bmp 
    unifont-17.0.03.bmp: PC bitmap, Windows 3.x format, 4128 x 4160 x 1, image size 2146560, resolution 4724 x 4724 px/m, 2 important colors, cbSize 2146622, bits offset 62
    $ magick unifont-17.0.03.bmp unifont-17.0.03.png
    $ file unifont-17.0.03.png 
    unifont-17.0.03.png: PNG image data, 4128 x 4160, 1-bit grayscale, non-interlaced
    $ wc -c unifont-17.0.03.*
    2146622 unifont-17.0.03.bmp
     878350 unifont-17.0.03.png
    3024972 total


Thanks! I definitely should've double-checked. Apparently it was just the image viewer that didn't bother converting the 1-bit BMP to 1-bit PNG.


Does that mean there is a separate file for each point size?

I'm realising I know very little about fonts.


Nah, there’s just one size in this case (16x16).


Your points and examples are valid. However, when you say:

> I wouldn’t agree at all that wealthy people are inherently more resilient to stress

I beg to differ.

I think the OP is talking about growing up in poverty. Repeated stress with no relief, which is the condition of poor people in society, has been shown to affect their resilience. (Sorry, I don't have references handy, but these should be easy to find).


I'd say they are not more resilient but just less exposed and thus less likely to spiral out of control.

The more you are exposed to stress the more you have to actively deal with it to be successful.

Dealing with it is hard thus not many people do it correctly.


And the pendulum swings back toward representation. It is becoming clear that the LLM approach is not adequate to reach what John McCarthy called human-level intelligence:

Between us and human-level intelligence lie many problems. They can be summarized as that of succeeding in the "common-sense informatic situation". [1]

And the search continues...

[1] https://www-formal.stanford.edu/jmc/human.pdf


> It is becoming clear that the LLM approach is not adequate to reach what John McCarthy called human-level intelligence

Perhaps paradoxically, if/as this becomes a consensus view, I can be more excited about AI. I am an "AI skeptic" not in principle, but with respect to the current intertwined investment and hype cycles surrounding "AI".

Absent the overblown hype, I can become more interested in the real possibilities (both the immediate ones, using existing ML methods, and the remote, theoretical capabilities that follow from what I think about minds and computers in general) again.

I think when this blows over I can also feel freer to appreciate some of the genuinely cool tricks LLMs can perform.


> That’s not justice. That’s legal extortion.

If you made it your business to publish a newsletter containing copied NYT articles, then wouldn't they have the right to go after you and discover your sent emails?


Exactly, they wouldn't even need all of the emails in gmail for that example, just the ones from a specific account.

The real equivalent here would be if gmail itself was injecting NYT articles into your emails. I'm assuming in that scenario most people would see it as straightforward that gmail was infringing NYT content.


> Frontier models are all profitable.

This is an extraordinary claim and needs extraordinary proof.

LLMs are raising lots of investor money, but that's a completely different thing from being profitable.


You don't even need insider info - it lines up with external estimates.

We have estimates that range from 30% to 70% gross margin on API LLM inference prices at major labs, 50% middle road. 10% to 80% gross margin on user-facing subscription services, error bars inflated massively. We also have many reports that inference compute has come to outmatch training run compute for frontier models by a factor of x10 or more over the lifetime of a model.

The only source of uncertainty is: how much inference do the free tier users consume? Which is something that the AI companies themselves control: they are in charge of which models they make available to the free users, and what the exact usage caps for free users are.

Adding that up? Frontier models are profitable.
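
To make "adding that up" concrete, here's the back-of-envelope I mean, using the mid-range figures above. Every number is an external estimate quoted in this thread, not reported financials:

    # Back-of-envelope only; all numbers are the rough external estimates above.
    revenue        = 1.00                    # normalize a model's lifetime revenue to 1
    inference_cost = revenue * (1 - 0.50)    # ~50% gross margin on serving
    training_cost  = inference_cost / 10     # training ~1/10 of lifetime inference compute
    profit         = revenue - inference_cost - training_cost
    print(profit)                            # 0.45 -> positive under these assumptions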

This goes against the popular opinion, which is where the disbelief is coming from.

Note that I'm talking LLMs rather than things like image or video generation models, which may have vastly different economics.


what about training?


I literally mentioned that:

> We also have many reports that inference compute has come to outmatch training run compute for frontier models by a factor of x10 or more over the lifetime of a model.


thank you, I literally can't read


Dario Amodei from Anthropic has made the claim that if you looked at each model as a separate business, it would be profitable [1], i.e. each model brings in more revenue over its lifetime than the total of training + inference costs. It's only because you're simultaneously training the next generation of models, which are larger and more expensive to train, but aren't generating revenue yet, that the company as a whole loses money in a given year.

Now, it's not like he opened up Anthropic's books for an audit, so you don't necessarily have to trust him. But you do need to believe that either (a) what he is saying is roughly true or (b) he is making the sort of fraudulent statements that could get you sent to prison.

[1] https://www.youtube.com/watch?v=GcqQ1ebBqkc&t=1014s


He's speaking in a purely hypothetical sense. The title of the video even makes sure to note "in this example". If it turned out this wasn't true of Anthropic, it certainly wouldn't be fraud.


Excellent compendium! Thanks for writing this up.

Suggestion: separate out the fiction writing advice from the nonfiction. Today they're mixed together, and their audiences are often non-overlapping.


Smalltalk-80 was also good for graphics programming.

Around 1990, I was a graduate student in Prof. Red Whittaker's field robotics group at Carnegie Mellon. In Porter Hall, I was fortunate to have a Sun 3/60 workstation on my desk. It had a Smalltalk-80. I learned to program it using Goldberg & Robson and other books from ParcPlace Systems.

The programming environment was fantastic, better than anything I have seen before or since. You always ran it full screen, and it loaded up the Smalltalk image from disk. As the article says, you were in the actual live image. Editing, running, inspecting the run-time objects, or debugging: all these tasks were done in the exact same environment. When you came into the office in the morning, the entire environment booted up immediately to where you had left it the previous day.

The image had objects representing everything, including your screen, keyboard, and mouse. Your code could respond to inputs and control every pixel on the screen. I did all my Computer Graphics assignments in Smalltalk. And of course, I wrote fast video games.

I used the system to develop programs for my Ph.D thesis, which involved geometric task planning for robots. One of the programs ran and displayed a simulation of a robot moving in a workspace with obstacles and other things. I had to capture many successive screenshots for my papers and my thesis.

Everybody at CMU then wrote their papers and theses in Scribe, the document generation system written by Brian Reid a decade earlier. Scribe was a program that took your markup in a plain text file (sort of at a LaTeX level: @document, @section, etc.) and generated Postscript for the printer.

I never had enough disk space to store so many full screen-size raster images. So, of course, instead of taking screenshots, I modified my program to emit Postscript code, and inserted it into my thesis. I had to hack the pictures into the Postscript generation process somehow. The resulting pictures were vector graphics using Postscript commands. They looked nice because they were much higher resolution than a screenshot could have been.
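
Translated very loosely into Python (the original was Smalltalk, and the shape names here are hypothetical), the approach was basically: walk the scene geometry and print PostScript drawing operators instead of pixels:

    # Loose illustration of emitting vector PostScript instead of a screenshot.
    # 'obstacles' is a hypothetical list of polygons; 'path' a list of (x, y) points.
    def emit_eps(obstacles, path, width=288, height=288):
        out = ["%!PS-Adobe-3.0 EPSF-3.0",
               "%%BoundingBox: 0 0 %d %d" % (width, height)]
        for poly in obstacles:                             # each obstacle as a closed outline
            out.append("newpath %.2f %.2f moveto" % poly[0])
            out += ["%.2f %.2f lineto" % p for p in poly[1:]]
            out.append("closepath stroke")
        out.append("newpath %.2f %.2f moveto" % path[0])   # robot trajectory as a polyline
        out += ["%.2f %.2f lineto" % p for p in path[1:]]
        out.append("stroke")
        out.append("showpage")
        return "\n".join(out)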

