I dunno about this. The problem mainly affects low-income families and residents of food deserts, and now the government is trying to put everyone on a keto diet. It just seems like they're not fixing the problems where they happen.
I never really considered this too deeply, because I've never studied "Agentic AI" before (except for natural language processing). Stallman is making a really good point. ChatGPT doesn't solve the intelligence problem. If ChatGPT were actually able to do that, it would be able to make ChatGPT 2.0 on request.
I guess that proves that there are zero intelligent beings on the planet since if humans were intelligent, they would be able to make ChatGPT 2.0 on request.
What you're talking about is "The Singularity", where a computer is so powerful it can self-advance unassisted until the entire planet is paperclips. There is no one claiming that ChatGPT has reached or surpassed that point.
Human-like intelligence is a much lower bar. It's easy to find arguments that ChatGPT doesn't show it (mainly that it's incapable of learning actively, and that there are many ways to show it doesn't really understand what it's saying either), but a human cannot create ChatGPT 2.0 on request, so it stands to reason that a human-like intelligence doesn't necessarily have to be able to do so either.
You're attributing a different argument to him. Here's the relevant passage from the linked page:
> There are systems which use machine learning to recognize specific important patterns in data. Their output can reflect real knowledge (even if not with perfect accuracy)—for instance, whether an image of tissue from an organism shows a certain medical condition, whether an insect is a bee-eating Asian hornet, whether a toddler may be at risk of becoming autistic, or how well a certain art work matches some artist's style and habits. Scientists validate the system by comparing its judgment against experimental tests. That justifies referring to these systems as “artificial intelligence.”
This is nowhere near arguing that it should be able to make new versions of itself.
In a limited way. Where it has impacted our clients is in making it much easier for them to get reference letters when those are required. But our basic everyday work is largely unaffected by AI. So far.
I think the term transpiler is ok. It's not pedagogical or anything, but most engineering jargon is like that, and this definitely isn't the worst one I've seen.
So he wants a good parallel language? What's the issue? I haven't had problems with concurrency, multiplexing, and promises. They've solved all the parallelism tasks I've needed to do.
We know in hindsight that Lisp became most useful for representing computation, but whatever happened to AI? McCarthy says it's characteristic of LISP. SICP also mentions AI as being fundamental to Lisp at the beginning of the book. Norvig & Russell used Common Lisp for the first edition of their book. But then what happened? Why did it just disappear?
Lisp was ideal for reasoning systems: its homoiconic, meta-programmable nature is perfect for manipulating symbolic structures and logic.
But when AI shifted toward numerical learning with neural networks, tensors, and GPU computation, Lisp’s strengths mattered less, and Python became the new glue for C/CUDA libraries like NumPy, PyTorch and TensorFlow.
Still, nothing prevents Lisp from coming back. It would actually fit modern deep learning well if a "LispTorch" with a CUDA FFI existed. We would have macros for dynamic graph generation, functional composition of layers, symbolic inspection, interactive REPL exploration, automatic model rewriting etc.
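To make that concrete, here is a minimal sketch of what declarative layer composition could look like in such a hypothetical library; `defmodel`, `dense`, and `relu` are invented names, and the tensor math is stubbed out with plain lists where a real version would call CUDA kernels through the FFI.

```lisp
;;; Hypothetical sketch only: DEFMODEL, DENSE and RELU are invented here.
;;; The math is stubbed out with plain lists; a real "LispTorch" would
;;; dispatch these operations to CUDA kernels through an FFI.

(defun dense (weights bias)
  "Return a closure computing WEIGHTS * x + BIAS, with WEIGHTS as a list of rows."
  (lambda (x)
    (mapcar (lambda (row b)
              (+ b (reduce #'+ (mapcar #'* row x))))
            weights bias)))

(defun relu ()
  "Elementwise max(0, x)."
  (lambda (x) (mapcar (lambda (v) (max 0 v)) x)))

(defmacro defmodel (name &body layers)
  "Expand a declarative stack of layers into one composed function.
A real version could also emit graph metadata for symbolic inspection."
  `(defun ,name (input)
     (let ((out input))
       ,@(mapcar (lambda (layer) `(setf out (funcall ,layer out))) layers)
       out)))

;; A 2-input, 2-unit toy network, redefinable interactively at the REPL:
(defmodel tiny-net
  (dense '((0.5 -0.2) (0.1 0.3)) '(0.0 0.1))
  (relu))

;; (tiny-net '(1.0 2.0)) => (0.1 0.8)
```

Because `defmodel` is just a macro over ordinary data, the same model description could be walked, rewritten, or differentiated symbolically instead of only being executed.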
We almost had it once: Yann LeCun's SN (the simulator used to build the first CNNs) was built on a C core with a Lisp interpreter on top to define, develop, and inspect the network. It eventually evolved into Lush, essentially "Lisp for neural networks", which in turn inspired Torch and later PyTorch.
So Lisp didn't die in AI; it's just waiting for the right people to realize its potential for modern neural networks and bring it back. Jank in particular will probably be a good contender for a LispTorch.
Norvig actually did comment on this publicly once, on the Lex Fridman podcast. Basically what he said was that lisp ended up not working well for larger software projects with 5 or more people on them, and the reason they never used Lisp in any of their books again was that students didn't like it. Norvig doesn't seem to understand why students didn't like Lisp, and neither do I, but somehow that's the real reason it was abandoned.
> Basically what he said was that lisp ended up not working well for larger software projects with 5 or more people on them
I don’t think "doesn’t work for teams of 5+" is a fair generalization. There are production Clojure and Emacs (Lisp) codebases with far more contributors than that.
Language adoption is driven less by inherent team-size limits and more by social and practical factors. Some students probably don't like Lisp because most people naturally think in imperative/procedural terms. SICP did a great job teaching functional and symbolic approaches; I wish they hadn't shifted their courses to Python, since that increases the gravitational pull toward cognitive standardization.
The AI winter happened. And the AI they talk about is classical, symbolic AI where you try to explicitly represent knowledge inside the computer. The new LLM stuff is all neural networks, and those benefit more from fast low-level vector implementations than high-level ease of symbolic manipulation.
So modern AI is mostly C, or even Fortran, often driven from something more pedestrian like Python.
I'm not sure what exactly you're referring to, but one avenue to implement AI is genetic programming, where programs are manipulated to reach a goal.
Lisp languages are great for these manipulations, since the AST being manipulated is the same data structure (a list) as everything else. In other words, genetic programming can lean into Lisp's "code is data" paradigm.
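As a toy illustration (the function names and example expression here are made up for the sketch), a "point mutation" can be an ordinary list rewrite over the candidate program's s-expression, which is then turned back into a runnable function:

```lisp
;;; Toy illustration of "code is data": a candidate program is just a list,
;;; so a mutation step can rewrite it with ordinary list operations.
;;; MUTATE-OP and the example expression are made up for this sketch.

(defun mutate-op (expr)
  "Return a copy of EXPR with every + swapped for *, a crude point mutation."
  (cond ((eq expr '+) '*)
        ((consp expr) (mapcar #'mutate-op expr))
        (t expr)))

(defparameter *candidate* '(+ (* x x) (+ x 1)))           ; program as data
(defparameter *mutant*    (mutate-op *candidate*))        ; => (* (* X X) (* X 1))
(defparameter *mutant-fn* (eval `(lambda (x) ,*mutant*))) ; data back into a program

;; (funcall *mutant-fn* 3) => 27, versus 13 for the original candidate.
```

The same list operations you'd use on any other data (mapcar, substitution, splicing for crossover) work directly on the program itself, which is the whole appeal for genetic programming.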
As others mentioned, today everything is based on neural networks, so people aren't learning these other techniques.
I'm referring to the fundamental idea in AI of knowledge representation. Lisp is ideal for chapters 1 through 4 of AIMA, and TensorFlow has shown that neural networks can be handled well with a domain-specific language, which Lisp is known to be great at building.