

anywhichway comments

anywhichway 61 days ago. link 1 point
What are the specs of your test platform? I just ran it locally on Windows 10 64-bit, 8GB RAM, i7 2.6GHz, node v8.1.3, and got very different results. It also seems I can't create an Issue in your GitHub repository. Do you have Issues turned off?

Single param top 4:

moize         │ 30,522,105
fast-memoize  │ 23,692,264
micro-memoize │ 15,164,454
iMemoized     │  9,512,034 (by anywhichway, not in your test suite)

Multiple param:

moize         │ 17,628,577
micro-memoize │  9,016,574
iMemoized     │  8,114,019 (by anywhichway, not in your test suite)
lru-memoize   │  6,269,999

Multiple param (objects - did not add iMemoized to suite):

moize         │ 18,108,941
micro-memoize │ 10,661,604
lru-memoize   │  6,255,101
memoizee      │  5,797,888

My guess is that if you take the time to build in-browser tests, the results will be different yet again.
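
For reference, here is a rough sketch of how a single-param run like the one above can be reproduced with benchmark.js; the libraries match the table, but the fixture function and options are my assumptions, not your actual suite:

    const Benchmark = require('benchmark');
    const moize = require('moize');
    const fastMemoize = require('fast-memoize');
    const microMemoize = require('micro-memoize');

    // Classic memoization fixture: expensive on the first call, a cache hit afterwards.
    const fibonacci = (n) => (n < 2 ? n : fibonacci(n - 1) + fibonacci(n - 2));

    const moized = moize(fibonacci);
    const fast = fastMemoize(fibonacci);
    const micro = microMemoize(fibonacci);

    new Benchmark.Suite('single param')
      .add('moize', () => moized(35))
      .add('fast-memoize', () => fast(35))
      .add('micro-memoize', () => micro(35))
      .on('cycle', (event) => console.log(String(event.target)))
      .on('complete', function () {
        console.log('Fastest is ' + this.filter('fastest').map('name'));
      })
      .run({ async: true });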
anywhichway 145 days ago. link 1 point
The core tlx parser has now been replaced with hyperx. Very thankful to you for pointing it out; it saved MANY hours of development. Also added "inverted JSX": HTML can now be treated like a template directly, more like Vue or Ractive.
anywhichway 146 days ago. link 2 points
We were unaware of hyperx. Hyperx has a better architecture and will probably perform better. It uses a classic parser-based approach internally, while tlx relies on the DOM for its parsing (fine for small components, but it will bog down for larger ones). Tlx is currently a little smaller and has its own vdom and h functions (i.e. no dependencies). Tlx can also bind the template literal interpolator so that components can be built. Hyperx is BSD licensed, so we will probably use portions of hyperx and modify them to add the binding capability and remove dependencies. Thanks for pointing out hyperx!
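
For anyone unfamiliar, the binding pattern being referred to looks roughly like this with hyperx and virtual-dom (a sketch following the hyperx README; tlx's own h and render signatures differ):

    const vdom = require('virtual-dom');
    const hyperx = require('hyperx');

    // Bind the template literal tag to an h() function so HTML strings become vdom nodes.
    const hx = hyperx(vdom.h);

    const title = 'world';
    const tree = hx`<div class="greeting">
      <h1>hello ${title}</h1>
    </div>`;

    // tree is a plain VirtualNode; vdom.create(tree) turns it into a real DOM element.
    console.log(tree.tagName); // "DIV"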
anywhichway 146 days ago. link 1 point
It will. It has its own render function. See the file in the example directory.
anywhichway 301 days ago. link 1 point
Agreed, although perhaps I was not sufficiently clear in the article when I spoke about libraries. The challenge for libraries is that they have to address a whole slew of use cases, e.g. intersecting just a few arrays of 20 items each (which will show almost no performance difference across implementations) or dealing with 10 arrays of 10,000 items each.

Like others, I have found that micro-optimization of applications is rarely useful ... it's the architecture that counts. However, frameworks and libraries are different.
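
To make the size point concrete, here is an illustrative Set-based intersection (not taken from any of the libraries discussed); with a few 20-item arrays almost anything is fast enough, while with 10 arrays of 10,000 items the lookup strategy and allocations start to dominate:

    // Illustrative only: intersect any number of arrays using Set lookups.
    function intersect(...arrays) {
      if (arrays.length === 0) return [];
      // Start from the smallest array so the working set stays small.
      arrays.sort((a, b) => a.length - b.length);
      let result = new Set(arrays[0]);
      for (let i = 1; i < arrays.length && result.size > 0; i++) {
        const next = new Set(arrays[i]);
        result = new Set([...result].filter((x) => next.has(x)));
      }
      return [...result];
    }

    intersect([1, 2, 3, 4], [2, 3, 5], [9, 3, 2]); // => [2, 3]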
anywhichway 302 days ago. link 1 point
Very true on function calls; I suppose I could have been more explicit. Personally, I was surprised at the cost of some of the core functions, particularly when they can sometimes be re-written in JavaScript itself to run faster. I made a couple of article edits to highlight this.
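
As a simple illustration of the kind of rewrite I mean (not from the article itself), a hand-rolled map with a preallocated result array can beat Array.prototype.map in hot paths on some engines:

    // Illustrative only: map() rewritten as a plain for loop with a preallocated array.
    function fastMap(array, fn) {
      const result = new Array(array.length);
      for (let i = 0; i < array.length; i++) {
        result[i] = fn(array[i], i, array);
      }
      return result;
    }

    fastMap([1, 2, 3], (x) => x * 2); // => [2, 4, 6]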

Also, the variability across browsers is at times quite surprising. 

Finally, take a look at a whole slew of commonly used libraries that contain map and forEach all over the place for things that need to be fast, like set operations, cross products, large matrix traversal, etc.

A down compiler really would be nice: write the code using map or forEach and have it automatically converted ...

Think I'll wait for WASM rather than go back to C++. Personally, I wish LISP had been chosen as the programming language of the web ... which should make it obvious why I like JavaScript.
anywhichway 306 days ago. link 1 point
Yeah, we liked this. It is about the simplest thing we found. The downside is that there is still no way to trap assignment, e.g.

subArray[1] = [9] can't be prevented when it is necessary to ensure all elements are numbers.
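
A small sketch of that limitation, assuming a plain Array subclass (NumberArray is an illustrative name, not from the code in question):

    class NumberArray extends Array {
      push(...values) {
        for (const v of values) {
          if (typeof v !== 'number') throw new TypeError('numbers only');
        }
        return super.push(...values);
      }
    }

    const subArray = NumberArray.of(1, 2, 3);
    // subArray.push('x');  // would throw, because push can be overridden
    subArray[1] = [9];      // silently succeeds; index assignment is not intercepted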
anywhichway 413 days ago. link 1 point
True, it is in BETA and just days away. Added a clarification to the home page.