Like all things, test and create metrics against your use case... If your application works well enough for your users, worry more about features and less about bare-bones performance. If you're working in Node, maybe consider both. If you're writing a game library, or a game, then the constraints are tighter. Most applications are not games and perform well enough for most users (beyond initial download size, where caching techniques would have a bigger impact for most). Beyond that, on the server (Node), you're better off concentrating on eliminating bottlenecks and designing for horizontal scaling.

It really depends on your use case... I recently rewrote part of an application that needed to run in an AWS Lambda, processing 1M CSV records into SQS requests within the 5-minute Lambda window... the prior method could only hit about half the desired throughput. The rewrite can now do about 1.5-2M records in the window (110-180 seconds for 1M). Good enough, though I wouldn't mind if I could hit more. Some of the checks for legacy processing add overhead, but in general I wasn't able to get any more requests into SQS; it tops out around 150-200 simultaneous connections to SQS in flight.

It comes down to knowing where your bottlenecks are, then designing and refactoring appropriately. Sometimes the next-largest blocker isn't what you might think it is.
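To give a rough idea of the pattern I mean, here's a minimal sketch of capping in-flight sends. This is not my actual Lambda code: `sendBatch` is a stand-in for the real SQS `sendMessageBatch` call, and `batchSize`/`maxInFlight` values are illustrative, not tuned numbers.

```javascript
// Sketch: stream batches out while capping the number of in-flight
// promises, the way a Lambda pushing CSV rows into SQS might.
// `sendBatch(batch)` is a hypothetical stub standing in for the AWS SDK
// call; it should resolve to the number of records it accepted.
async function pumpWithLimit(items, sendBatch, { batchSize = 10, maxInFlight = 150 } = {}) {
  const inFlight = new Set();
  let sent = 0;

  for (let i = 0; i < items.length; i += batchSize) {
    const batch = items.slice(i, i + batchSize);

    // Back-pressure: don't fire the next request until a slot frees up.
    while (inFlight.size >= maxInFlight) {
      await Promise.race(inFlight);
    }

    const p = sendBatch(batch).then((n) => {
      sent += n;
      inFlight.delete(p); // free the slot once this request settles
    });
    inFlight.add(p);
  }

  // Drain whatever is still in flight before returning.
  await Promise.all(inFlight);
  return sent;
}
```

The point of the `Set` plus `Promise.race` is that you saturate the connection limit without ever exceeding it; once you're at the ceiling I mentioned (~150-200 connections), raising `maxInFlight` further buys you nothing.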