tracker1 2521 days ago. 1 point
As with all things, test and create metrics against your use case... If your application works well enough for your users, worry more about features and less about raw performance. If you're working in node, maybe consider both. If you're writing a game, or a game library, then the constraints are tighter.

Most applications are not games and perform well enough for most users (beyond initial download size). Caching techniques would have a bigger impact for most. Beyond that, on the server (node) you're better off concentrating on eliminating bottlenecks and designing for horizontal scaling.

It really depends on your use case... I recently rewrote part of an application that needed to run in an AWS Lambda, processing 1M CSV records into SQS requests within the 5-minute Lambda window... the prior method could only hit about half the desired volume. The rewrite can now do about 1.5-2M records in the window (110-180 seconds for 1M). Good enough, though I wouldn't mind if I could hit more. Some of the checks for legacy processing add overhead, but in general I wasn't able to get any more requests into SQS; it tops out around 150-200 simultaneous in-flight connections to SQS.
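
The shape of the rewrite is roughly the following (a simplified sketch, not the actual code: the batch size of 10 matches the SQS batch limit, but the concurrency number and names are illustrative, and it skips retrying entries that come back in the response's Failed list):

    const AWS = require("aws-sdk");
    const sqs = new AWS.SQS();

    // Send records in batches of 10 (the SQS batch limit), keeping a
    // fixed number of batch requests in flight at once.
    async function pumpToSqs(records, queueUrl, concurrency = 150) {
      const batches = [];
      for (let i = 0; i < records.length; i += 10) {
        batches.push(records.slice(i, i + 10));
      }
      let next = 0;
      async function worker() {
        // next++ runs synchronously before each await, so the
        // workers never grab the same batch.
        while (next < batches.length) {
          const batch = batches[next++];
          await sqs.sendMessageBatch({
            QueueUrl: queueUrl,
            Entries: batch.map((record, j) => ({
              Id: String(j), // unique within this request
              MessageBody: JSON.stringify(record)
            }))
          }).promise();
        }
      }
      await Promise.all(Array.from({ length: concurrency }, worker));
    }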

It comes down to knowing where your bottlenecks are, then designing and refactoring appropriately.  Sometimes the next biggest blocker isn't what you think it is.
anywhichway 2520 days ago. 1 point
Agree, although perhaps I was not sufficiently clear in the article when I spoke to libraries. The challenge for libraries is that they have to address a whole slew of use cases, e.g. intersecting just a few arrays of 20 items each (which will show almost no performance difference across implementations) or dealing with 10 arrays of 10,000 items each.
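
To make that concrete, a generic intersect has to hold up at both extremes. Something like this sketch (illustrative only, not any particular library's implementation): the Set lookup is wasted ceremony on a few 20-item arrays, but it is what saves you on 10 arrays of 10,000 items.

    // Intersect any number of arrays. Building a Set per array keeps
    // membership tests O(1) instead of O(n) indexOf scans.
    function intersect(...arrays) {
      if (arrays.length === 0) return [];
      // Start from the shortest array to keep the candidate list small.
      arrays.sort((a, b) => a.length - b.length);
      let result = arrays[0];
      for (let i = 1; i < arrays.length; i++) {
        const lookup = new Set(arrays[i]);
        const next = [];
        for (let j = 0; j < result.length; j++) {
          if (lookup.has(result[j])) next.push(result[j]);
        }
        result = next;
      }
      return result;
    }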

Like others, I have found that micro-optimization of applications is rarely useful ... it's the architecture that counts. However, frameworks and libraries are different.
anywhichway 2521 days ago. 1 point
Very true on function calls; I suppose I could have been more explicit. Personally, I was surprised at the cost of some of the core functions, particularly since they can sometimes be re-written in JavaScript itself to run faster. I made a couple of article edits to highlight this.
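
For example (a sketch; whether it actually wins depends heavily on the engine and version, so benchmark in your target environment), a stripped-down map can beat the built-in in some engines, because the native version has to follow all the spec steps (hole checks, thisArg, species lookup):

    // A bare-bones map: no hole checks, no thisArg, pre-sized output.
    function fastMap(array, fn) {
      const length = array.length;
      const result = new Array(length);
      for (let i = 0; i < length; i++) {
        result[i] = fn(array[i], i, array);
      }
      return result;
    }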

Also, the variability across browsers is at times quite surprising. 

Finally, take a look at a whole slew of commonly used libraries that use map and forEach all over the place for things that need to be fast, like set operations, cross products, large matrix traversal, etc.
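
E.g., the nested-map cross product you'll commonly see in library code, next to the plain-loop version you'd want on a hot path (illustrative):

    // Typical library style: readable, but two levels of callback calls
    // plus an intermediate array per row.
    const crossMap = (a, b) =>
      a.map(x => b.map(y => [x, y]))
       .reduce((acc, row) => acc.concat(row), []);

    // Hot-path style: same result, plain loops, one pre-sized output.
    function crossLoop(a, b) {
      const result = new Array(a.length * b.length);
      let k = 0;
      for (let i = 0; i < a.length; i++) {
        for (let j = 0; j < b.length; j++) {
          result[k++] = [a[i], b[j]];
        }
      }
      return result;
    }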

A down compiler really would be nice: write the code using map or forEach and have it automatically converted ...
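
Something in the spirit of a Babel plugin could do the mechanical part. A rough, untested sketch (it assumes the array expression is a simple identifier, and a real down compiler would also inline the callback body, since the call itself is the cost being removed):

    // Rewrite `arr.forEach(cb)` statements into a plain for loop.
    module.exports = function ({ types: t }) {
      return {
        visitor: {
          CallExpression(path) {
            const { callee, arguments: args } = path.node;
            if (!t.isMemberExpression(callee)) return;
            if (!t.isIdentifier(callee.property, { name: "forEach" })) return;
            if (!path.parentPath.isExpressionStatement()) return;
            const arr = callee.object;
            const cb = args[0];
            const i = path.scope.generateUidIdentifier("i");
            // for (let _i = 0; _i < arr.length; _i++) cb(arr[_i], _i, arr);
            path.parentPath.replaceWith(
              t.forStatement(
                t.variableDeclaration("let", [
                  t.variableDeclarator(i, t.numericLiteral(0))
                ]),
                t.binaryExpression("<", i,
                  t.memberExpression(arr, t.identifier("length"))),
                t.updateExpression("++", i),
                t.expressionStatement(
                  t.callExpression(cb, [
                    t.memberExpression(arr, i, true), i, arr
                  ])
                )
              )
            );
          }
        }
      };
    };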

Think I'll wait for WASM rather than go back to C++. Personally, I wish LISP had been chosen as the programming language of the web ... which should make it obvious why I like JavaScript.
MaxArt 2521 days ago. 1 point
I hardly understand the point of this article. Methods like forEach and map have great value when it comes to readability and maintainability. But if you need performance, you don't use a callback function, period: you use a good ol' for loop, because function calls are expensive.
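
The difference is easy to measure for yourself (the numbers vary wildly across engines and versions; this is just the shape of the test):

    // Quick-and-dirty comparison; run it a few times and in several
    // engines, since JIT warm-up changes the picture.
    const data = Array.from({ length: 1e6 }, (_, i) => i);

    console.time("forEach");
    let a = 0;
    data.forEach(x => { a += x; });
    console.timeEnd("forEach");

    console.time("for loop");
    let b = 0;
    for (let i = 0; i < data.length; i++) {
      b += data[i];
    }
    console.timeEnd("for loop");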

But then again, if you need performance you might not want to do that in JavaScript to begin with. At most, you write it in C++ and compile it with Emscripten.