There was some work done to add an S3 storage backend to ZFS[1], precisely with the goal of running PostgreSQL on effectively external storage.
A key point was to effectively treat S3 as a huge, reliable disk with 10MB "sectors". So the bucket would contain tons of 10MB chunks and ZFS would let S3 handle the redundancy. For performance it was coupled with a large, local SSD-based write-back cache.
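To make the idea concrete, here's a toy sketch (not the actual ZFS code, which isn't public) of what mapping a block device onto fixed 10MB S3 objects with a local write-back cache might look like. The bucket name, key layout, and the in-memory dict standing in for the SSD cache are all my own assumptions:

    import boto3

    CHUNK_SIZE = 10 * 1024 * 1024  # 10MB "sectors"

    class S3BlockDevice:
        """Toy block device: each fixed-size chunk is one S3 object,
        with a local write-back cache (a dict standing in for an SSD)."""

        def __init__(self, bucket, prefix="chunks/"):
            self.s3 = boto3.client("s3")
            self.bucket = bucket
            self.prefix = prefix
            self.cache = {}  # chunk_index -> bytes, dirty until flushed

        def _key(self, index):
            return f"{self.prefix}{index:012d}"

        def read_chunk(self, index):
            # Serve from the local cache first, then fall back to S3.
            if index in self.cache:
                return self.cache[index]
            resp = self.s3.get_object(Bucket=self.bucket, Key=self._key(index))
            return resp["Body"].read()

        def write_chunk(self, index, data):
            assert len(data) == CHUNK_SIZE
            # Writes land in the local cache; S3 only sees them on flush.
            self.cache[index] = data

        def flush(self):
            # Push dirty chunks to S3; S3's own replication provides the redundancy.
            for index, data in self.cache.items():
                self.s3.put_object(Bucket=self.bucket, Key=self._key(index), Body=data)
            self.cache.clear()

The interesting part is that durability is entirely delegated to the object store, while the cache absorbs the latency of individual writes.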
Sadly, it seems the company behind it decided it had to keep the work closed-source in order to get a return on the investment[2].
But it also sounds like a dream if it could actually work: with enough fast local disk (shared with the database cluster) acting as the cache, you should be able to get good performance while relying on the object store for resilience and extra space.
In practice you can't get high availability this way without additional logic and circuit breakers. Running multiple Postgres instances with Postgres-aware replication and failover is safer and more performant (though harder to set up).
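For comparison, the Postgres-aware setup is mostly just built-in streaming replication plus something external (e.g. Patroni or repmgr) to drive failover. A rough sketch of the config involved, with hostnames and the replication user as placeholders:

    # postgresql.conf on the primary (assumed host: pg-primary)
    wal_level = replica
    max_wal_senders = 5

    # pg_hba.conf on the primary: let the standby stream WAL
    host  replication  replicator  10.0.0.0/24  scram-sha-256

    # postgresql.conf on the standby (plus an empty standby.signal file)
    primary_conninfo = 'host=pg-primary port=5432 user=replicator'

That's the "harder to set up" part: the replication itself is simple, but deciding when and how to promote the standby is where the real work is.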