
If you target consoles, you're already on the lowest-end hardware you'll run on. It's extremely rare to have much (if any) headroom for graphically competitive games. 20% is a stretch; 3x is unthinkable.


Then I would do most of the development work that doesn't directly touch console APIs in a PC version of the game, developing and debugging on a high-end PC. Some compilers also have debug options that apply optimizations while the code still remains debuggable (e.g. on MSVC: https://learn.microsoft.com/en-us/visualstudio/debugger/how-...) - I don't know about Sony's toolchain though. Another option might be to optimize only the most performance-sensitive parts while keeping 90% of the code in debug mode. In any case, I would try everything before giving up debuggable builds.


It doesn’t matter. You will hit (for example) rendering bugs that occur only on a specific revision of a console, and they happen with sufficient frequency that you’ll rarely use debug builds.

Debug builds are used sometimes, just not most of the time.


That only kind of works when you're not using console-specific hardware.

Naturally you can abstract it away, as middleware does; however, that is usually where the work of tracking down performance bugs lies.



