It isn't necessary to purely interpret every instruction; you can cache translated code segments, the way a JIT compiler does. This has been done before for fast x86 emulation on a VLIW chipset: http://en.wikipedia.org/wiki/Code_Morphing_Software
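To make the idea concrete, here is a minimal sketch of a translation cache, assuming a made-up two-instruction guest ISA (`ADD reg, imm` and `JMP addr`) and translating guest basic blocks into host (Python) closures. This is a toy illustration of the caching scheme, not how a real binary translator is built:

```python
def translate_block(code, pc):
    """Translate one straight-line block starting at pc into a host
    (Python) function, stopping at the first branch."""
    ops = []
    next_pc = pc
    while pc < len(code):
        op, a, b = code[pc]
        pc += 1
        if op == "ADD":
            # Bind the operands now so the closure is self-contained.
            ops.append(lambda regs, r=a, imm=b: regs.__setitem__(r, regs[r] + imm))
            next_pc = pc
        elif op == "JMP":
            next_pc = a
            break

    def block(regs):
        for fn in ops:
            fn(regs)
        return next_pc          # where execution continues

    return block

def run(code, steps=100):
    cache = {}                  # pc -> translated block: the "JIT" part
    regs = {"r0": 0}
    pc = 0
    while pc < len(code) and steps:
        if pc not in cache:     # translate once, then reuse the host code
            cache[pc] = translate_block(code, pc)
        pc = cache[pc](regs)
        steps -= 1
    return regs

# A tiny guest program: r0 += 1, r0 += 2, loop back to the start.
# The block is translated on the first pass and replayed from the
# cache on every later iteration.
program = [("ADD", "r0", 1), ("ADD", "r0", 2), ("JMP", 0, 0)]
print(run(program, steps=10))   # ten trips through the cached block
```

The point of the cache is that the per-instruction decode cost is paid once per block instead of once per execution, which is where dynamic translators recover most of their speed over pure interpreters.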
It's not just about translating instructions. Console games are extremely optimized and fine-tuned for the processor they run on. A block of 40 PowerPC instructions may take 40 clock cycles on a PowerPC processor but 120 clock cycles on an x86 processor once it's blindly translated. Programmers (and compilers) choose the instructions that best fit the situation and the instruction set they're targeting.
Then there's the problem that PowerPC is big-endian and x86 is little-endian, so you potentially add extra processing for network and file-system code (models, textures, sounds, etc.), on top of any magic numbers[0] used in the codebase.
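A quick illustration of why this matters for file formats and magic numbers: the same four bytes decode to different 32-bit values depending on byte order, so an emulator (or a port) has to byte-swap anywhere data crosses the boundary. A minimal sketch using Python's `struct` module:

```python
import struct

# The same four bytes read as a 32-bit integer give different values
# depending on byte order.
data = b"\x12\x34\x56\x78"

(big,)    = struct.unpack(">I", data)   # big-endian read (PowerPC-style)
(little,) = struct.unpack("<I", data)   # little-endian read (x86-style)

print(hex(big))     # 0x12345678
print(hex(little))  # 0x78563412

# A loader running on the "wrong" host byte-swaps fields as it reads them:
def swap32(x):
    """Reverse the byte order of a 32-bit value."""
    return struct.unpack("<I", struct.pack(">I", x))[0]

assert swap32(big) == little
```

Swapping a handful of header fields is cheap; the pain is finding every place the code assumes a byte order, including magic-number comparisons buried in the codebase.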
While it is possible to emulate, the performance would be abysmal. Just look at Game Boy emulators on the PC: they use large amounts of CPU because of the overhead of emulating the processor, graphics, sound, etc. Playing a game like Call of Duty or Grand Theft Auto emulated from PPC on x86 would be sluggish at best.
I'm surprised the 360 used big-endian mode (almost all PPC implementations are bi-endian, after all!), given that the earlier Windows NT PPC release was one of the few to run the OS little-endian. Note that endianness could be changed on a per-thread basis; does anyone know if anyone did this on the 360? Using big-endian also makes it inconsistent with everything else Windows.
Don't know how it is with modern console games, but at least in the old days developers were relying on all kinds of tricks that made really accurate emulators difficult to write. See, for example, the story about the "perfect SNES emulator" [1]
More details: http://en.wikipedia.org/wiki/Binary_translation#Dynamic_bina...