Echo JS 0.11.0


tracker1 comments

tracker1 81 days ago. link 1 point
Cool to see... might be nice to see a sample of how this can work with, say, hono on the server side...
tracker1 92 days ago. link 1 point
Cool to see stuff like this... that said, personally I've really gotten into basing most of my validation around zod... this works with hono/zod-openapi as well as client validation integration, so I'm able to get a lot of usage out of the same tools on both the client and the server.

For server-side, with JS/TS I've gone in heavy with hono, and recently used the zod + openapi integration, which has been very nice to use. I've also used C# with the FastEndpoints library, which has also been very nice.

Client side, I'm currently working with the Mantine component library, though I have used MUI a lot as well... for forms, I've used React + React-Hook-Form with zod integration, creating modest wrappers around form field elements in order to present error states and helper messages cleanly.

Not to detract from this Vue3 approach, just pointing out what I've used and that zod in particular has made things pretty nice all around, not just on the client.
tracker1 96 days ago. link 1 point
It would be nice to include references to some common utilities (Git Credential Manager, AWS CLI, etc) that use this technique for authenticating users with CLI tooling.
tracker1 96 days ago. link 2 points
Not sure how much of this article is or isn't AI in nature... the premise is interesting.

That said, using recursion in JS is asking for pain... you're better off rethinking the algorithm in a way that replaces recursion with iteration. While some algorithms are intended to work best/easiest with recursion, it's usually a bad idea in JS.

Another issue tends to come down to trying to force immutable data usage, where mutation within a function/iterable is often a better use of the language in terms of overhead and performance. While not everything you do needs to be optimized, recursion is one of those things in JS where it's too easy to hit walls; most runtimes do not have proper tail call optimization, for example.
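To sketch the kind of rewrite meant here (names and the tree shape are illustrative, not from any particular library): a recursive tree sum converted to an iterative one with an explicit stack, which sidesteps the call-stack depth limit entirely.

```javascript
// Recursive version: deep trees can blow the call stack, since most
// JS engines do not perform tail-call optimization.
function sumTreeRecursive(node) {
  if (!node) return 0;
  return node.value +
    node.children.map(sumTreeRecursive).reduce((a, b) => a + b, 0);
}

// Iterative version: an explicit array replaces the call stack,
// so depth is limited only by heap memory.
function sumTreeIterative(root) {
  let total = 0;
  const stack = [root];
  while (stack.length > 0) {
    const node = stack.pop();
    if (!node) continue;
    total += node.value;
    for (const child of node.children) stack.push(child);
  }
  return total;
}
```

Both walk the same nodes; only the iterative one survives a tree a few hundred thousand levels deep.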

This applies both to JS limits and to DOM limits, complexities that are less common today but still pop up now and then. Nothing worse than a blank page that barely moves and soaks up your CPU cycles.

If your timing structure is particularly tight, you might even want to adopt an async iterable... I really like that about the more modern stream designs in JS.
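As a minimal sketch of that pattern (the names `throttle` and `collect` are hypothetical): an async generator that paces values out, consumed with a plain for-await-of loop.

```javascript
// An async generator that yields each value no faster than `intervalMs`.
// Useful when downstream work (DOM updates, network calls) needs pacing.
async function* throttle(source, intervalMs) {
  for (const item of source) {
    yield item;
    // Wait before producing the next value.
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
}

// Consumption with for-await-of keeps the loop body simple.
async function collect(iter) {
  const out = [];
  for await (const value of iter) out.push(value);
  return out;
}
```

Because the consumer just awaits the iterator, the timing logic stays in one place instead of leaking into every loop body.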
tracker1 96 days ago. link 3 points
I think the only thing I'd probably change would be to just use emoji instead of the png.  They're well supported enough, even if the "font-family" list gets a little weird.
tracker1 115 days ago. link 1 point
Interesting. I'm not quite sure how I feel about this, TBH. While I appreciate Cloudflare D1 (and Turso, and others), I'm not sure how I feel about the lock-in here. There's also PostgreSQL, CockroachDB and other PG-compatible variants.

Personally, I've worked more with Microsoft's Entra (formerly Azure AD) than most other providers, as well as self-managed options: IdentityServer, Okta, Auth0, etc. The schema should support other adapters, but my skimming of the docs left me a bit wanting, with only links to the core libraries used.

YMMV.
tracker1 130 days ago. link 1 point
Not a great comparison; likely AI-generated fluff. No mention of advancements like Rspack or other options (Parcel). While there's lip service to "developer performance" and similar, there are no hard numbers or references in the article, and no sources or citations either.
tracker1 131 days ago. link 1 point
Total aside... I've been thinking it would be nice to build a CSV parsing library that offers iterable and async iterable options... effectively bubbling a character at a time, wrapped in an iterable bubbling a field at a time... then another for a row at a time... then a final iterable for optionally transforming the string[] for each row into an object of key/value pairs.
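A rough sketch of that layering with sync generators (simplified delimiter handling, no escaped-quote support, and all names are illustrative):

```javascript
// Layer 1: bubble one character at a time from a string
// (a real version would wrap a read stream and handle grapheme sets).
function* chars(text) {
  yield* text;
}

// Layer 2: group characters into fields, honoring quoted values.
function* fields(charIter) {
  let buf = '';
  let quoted = false;
  for (const ch of charIter) {
    if (ch === '"') {
      quoted = !quoted;
    } else if ((ch === ',' || ch === '\n') && !quoted) {
      yield { value: buf, endOfRow: ch === '\n' };
      buf = '';
    } else {
      buf += ch;
    }
  }
  if (buf.length > 0) yield { value: buf, endOfRow: true };
}

// Layer 3: group fields into rows (one string[] per row).
function* rows(fieldIter) {
  let row = [];
  for (const field of fieldIter) {
    row.push(field.value);
    if (field.endOfRow) {
      yield row;
      row = [];
    }
  }
  if (row.length > 0) yield row;
}

// Layer 4: optionally map each string[] row to an object keyed
// by the header row.
function* records(rowIter) {
  let header = null;
  for (const row of rowIter) {
    if (!header) {
      header = row;
    } else {
      yield Object.fromEntries(header.map((key, i) => [key, row[i]]));
    }
  }
}
```

Each layer only pulls from the one below it, so the whole pipeline stays lazy; an async variant would swap `function*` for `async function*` and `for` for `for await`.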

My only issue is that I'd like to bubble each "character" as a set for joined UTF-8 characters read from the stream. And I can't help but think that logic itself would be the most complex... effectively understanding Unicode/UTF-8 well enough to peek/pop each "character" as a set. Then the wrapper would take field delimiters (quotes, commas) for bubbling.
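For the "character as a set" part, modern runtimes already ship Intl.Segmenter, which does grapheme-cluster segmentation so you don't have to hand-roll the Unicode tables — a sketch (the `graphemes` wrapper name is mine):

```javascript
// Iterate grapheme clusters ("user-perceived characters") rather than
// UTF-16 code units, so combined sequences stay together as one unit.
function* graphemes(text) {
  const segmenter = new Intl.Segmenter(undefined, { granularity: 'grapheme' });
  for (const { segment } of segmenter.segment(text)) {
    yield segment;
  }
}
```

This only covers in-memory strings, though; doing the same thing incrementally over a byte stream, where a cluster may straddle a chunk boundary, is exactly the hard part described above.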

So you could use the sync interface for "very fast" ETL to line separated json, for example... and async for data ingestion ETL to a queue or FaaS intake.

Been thinking of similar for a few different languages... but I think JS/TS generators would be particularly useful for this.