The Xbox worked as a proof of concept, showing that you could build a console with commodity hardware. The Xbox 360 doubled down on this while the PS3 tried to do clever things with an innovative architecture. Between the two, it was clear commodity hardware was the path forward.
> The Xbox 360 doubled down on this while the PS3 tried to do clever things with an innovative architecture.
I don't think this is really an accurate description of the 360 hardware. The CPU was much more conventional than the PS3's, but still custom (derived from the PPE in the Cell, but with an extended version of the VMX extensions). The GPU was the first to use a unified shader architecture. Unified memory was also fairly novel in the context of a high-performance 3D game machine. The use of eDRAM for the framebuffer wasn't novel (the GameCube's Flipper GPU had it previously), but it also wasn't something you generally saw in off-the-shelf designs. Meanwhile, the PS3 had an actual off-the-shelf GPU.
These days all the consoles have unified shaders and memory, but I think that just speaks to the success of what the 360 pioneered.
Since then, consoles have of course gotten a lot closer to commodity hardware. They're custom parts (well, except the original Switch, I guess), but the changes from the off-the-shelf stuff are a lot smaller.
In the beginning, general-purpose computers weren't capable of running graphics like the consoles could. That took dedicated hardware that only the early Atari/NES/Genesis had. That's not to say that the Apple or IBM clones didn't have games (they did), but it just wasn't the same. The differentiation was their hardware, which enabled games that couldn't be run on early PCs. Otherwise, why buy a console?
So the thinking was that a unique architecture was a console's raison d’être. Of course we know better now, as the latest generation of consoles shows, but that's where the thinking behind the PS3's Cell architecture came from.
This leaves out an important step. When 3D graphics acceleration entered the broader consumer/desktop computing market, it was also a successor to the kind of 2D graphics acceleration that consoles had and previous generations of desktop computers generally didn't have. So I believe it's fair to say that specialized console hardware was replaced by general-purpose computing hardware because the general-purpose hardware had morphed to include a superset of console hardware capabilities.
GPUs evolved from graphics workstations (the Pixar Image Computer, various SGI products...) rather than game consoles. SGI in particular pioneered a lot of the HW-accelerated rendering pipeline that trickled down into consumer graphics chips.
Agreeing with all my siblings' comments: computers took cues from a lot of places and evolved to be general-purpose. Something similar happened on the GPU side, and at some point the best parts of bespoke graphics hardware got generalized, plus 3D upended the whole space. By the PS3 era, there were multiple GPU vendors and multiple generations of APIs, so everything had settled down and standardized. The era of gaining a competitive advantage through clever hardware was over, and Sony, a hardware company, was still fighting the last war.
That has been, and I'd say still is, a huge problem for them. They are not a software company at all, and it hurts many things.
PlayStation is the one exception. But they learned the wrong lesson from the PS2 (exotic hardware is great! No one minds!) and had to take a beating during the PS3 era before the division got back on track.
This is the thing people don't realize about middle-era consoles. That was the shift where commodity PC hardware started competing well with console hardware.
Today, in 2025, the only possible advantage is maybe in a specific price category where the volume discount is enough to justify it. In general, consoles just don't make technological sense.