So Elsa's own comparison[1] says the binary size difference is 20MB for Elsa (QJS) vs 44MB for Deno (V8). Looking at the QuickJS benchmark[2], V8 is 28MB vs 1MB for QJS, so Elsa adds 19MB on top of its engine vs 16MB for Deno. Doesn't feel that lightweight anymore.
It is amusing how each of us defines our own _lightweight_. For me it is the size and readability of the code; for others it is the size of the executable. To someone else it might mean the startup time, the number of dependencies, or the build/test/debug iteration time.
The only reasonable application of "minimal" is as a cross-cutting description. Describing something as "minimal" without qualification, while focusing on only a single quality, is not a good use of the term.
Software that achieves my goal in a focused manner is something I typically enjoy. I don't want it to do everything under the sun; I want it to do one or two things well.
These days, to me, a "lightweight" alternative to Dropbox would be one that focuses on filesharing with none of that other junk.
There is something foundational about knowing a component is done and that it will be the same everywhere. What do you think of Nix [1]?
The other quality of flexible, composable systems is that they can be extended with something akin to an AOP (aspect-oriented programming) system. Non-essential qualities can be woven in and applied during composition so that each component doesn't have to implement those features itself.
A counterexample is the find command: it has to bake a predicate language into the tool itself. If it emitted typed objects, a separate filter engine could select the subset. If find implemented polymorphic values, the predicate would dictate which facets were populated in the structures themselves, but our shells don't work that way. A metashell that could define its environment and the streams of structures flowing between components could get us closer to a pervasively composable mode of system construction.
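To make the typed-objects idea concrete, here's a rough TypeScript sketch (purely illustrative; the FileEntry type and the walk/filter helpers are mine, not anything find or an existing shell provides): the producer emits structured records and a generic filter engine does the selection that find currently hard-codes as flags.

// Hypothetical sketch: a producer emits typed records, a generic filter selects.
import { readdirSync, statSync } from "node:fs";
import { join } from "node:path";

interface FileEntry {
  path: string;
  size: number;
  mtime: Date;
}

// Producer: walk a directory tree and emit structured objects.
function* walk(dir: string): Generator<FileEntry> {
  for (const name of readdirSync(dir)) {
    const path = join(dir, name);
    const st = statSync(path);
    if (st.isDirectory()) {
      yield* walk(path);
    } else {
      yield { path, size: st.size, mtime: st.mtime };
    }
  }
}

// Generic filter engine: any predicate over the typed stream.
function* filter<T>(items: Iterable<T>, pred: (item: T) => boolean): Generator<T> {
  for (const item of items) if (pred(item)) yield item;
}

// Roughly "find . -size +1M -mtime -7", expressed as composition instead of flags.
const weekAgo = Date.now() - 7 * 24 * 3600 * 1000;
for (const f of filter(walk("."), (e) => e.size > 1_000_000 && e.mtime.getTime() > weekAgo)) {
  console.log(f.path);
}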
Unix pipelines [2] are awesome, but they aren't the pinnacle. I feel like all of us fall short in creating systems that compose. Most (scalable) systems should be fractal, while most composition is limited to a narrow first-order mode of operation.
That was surprising to me, too. For me the big benefit of QuickJS is the tiny runtime size. If that isn't a factor, you might as well go for V8; the performance is an order of magnitude better.
Mention the QuickJS engine in the README. I only learned that this project uses QuickJS (written by Bellard, I believe) after reading the "comparison with deno and node" section.
Of course; Deno is V8-based, which is a lot more full-featured than QuickJS. Perhaps this is a more informative benchmark: https://bellard.org/quickjs/bench.html
True statement, but what does that metaphor/analogy have to do with server-side JavaScript runtimes? They are also not the same thing. Dragonflies are fast, but they won't run my JS either.
Our systems have absolute speeds, and client requests don't care about relative ones. They don't care if something is fast for an _x_ when that _x_ has nothing to do with the client. Minimal is a nice quality, but I don't think Elsa is fast in absolute terms, or as secure as Deno.
I did an analysis in a different comment; it looks like Elsa comes in at 3500 lines of code, of which Go is less than a thousand. What this does is give Go programmers an outlet to contribute to a JS runtime, unlike Deno, which is mostly developed in Rust and TS. From a sociological perspective, it will be interesting to see how Elsa evolves and whether Go or TS dominates future development.
What I think is neat is that QuickJS comes in as more computationally efficient per byte (benchmark score / executable size). But that also makes sense; JITs have to work pretty hard to extract that extra performance. One would have to do some sort of coverage analysis to compare QuickJS to V8 --jitless, interpreter to interpreter. It would be interesting to track the commits in a codebase over time w.r.t. speed for interpreters and JITs across a set of languages.
# from https://bellard.org/quickjs/bench.html
# total score w/o regexp, divided by executable size (higher = more score per byte)
In [2]: 41576 / 28000   # V8
Out[2]: 1.4848571428571429
In [3]: 1138 / 620      # QuickJS
Out[3]: 1.835483870967742
Given Fabrice Bellard's other projects, I wouldn't be surprised if a JIT landed in QuickJS.
> True statement, but what does that metaphor/analogy have to do with server-side JavaScript runtimes
Seems pretty self-evident that there are more considerations than "literally the fastest" to be made when the decision is "how do I get from point A to point B", or in this case "how do I execute my JavaScript". Vehicles and runtimes can be considered fast [enough] regardless of the existence of faster options.
Also, QuickJS is not a "server-side JavaScript runtime". Not all programming is web development.
> Given Fabrice Bellard's other projects, I wouldn't be surprised if a JIT landed in QuickJS.
Personally I would be pretty surprised considering that the aim of the project seems in part to be as small as possible.
I would like to de-escalate this convo; I used overly harsh language and I apologize.
Your use of fast is in the sense of fast enough for the task at hand; it is not fast in the absolute sense. I assert that Elsa being that much slower than Deno, because Deno has a JIT whereas Elsa does not, automatically puts them in different categories. But the stack that Elsa uses still needs Go and C, so long term I could see Elsa using a pure Go JS runtime in place of QuickJS.
I'm very dubious about whether these actually match real world performance.
I've done a bunch of work with QuickJS. It's awesome, and the performance you get in return for its tiny footprint is amazing. But compared to JIT-enabled V8 it's incredibly slow.
The one area where it shines compared to V8 is startup time, because the engine is so much lighter. So if you're profiling "time taken to start up and execute one operation" then yes, QuickJS might look better. But that isn't how people use Node/Deno; even when you're creating a CLI tool, you're loading a lot of JS and performing a lot of operations.
That is a horrible way to present benchmarks, but it's quite a big difference in favor of Elsa. I was under the impression Deno was built on Rust and V8, which should bode well for performance; Elsa must be doing something very right?
Each of the benchmarks seems to take a single-digit number of milliseconds in Elsa/QuickJS.
My guess is that these benchmarks were chosen because the startup time for QuickJS is probably less than the startup time for V8, but I struggle to imagine that V8 is slower once the code has warmed up.
Any meaningful benchmark with JIT runtimes should take several seconds, at a minimum, in my opinion. 8 milliseconds is not enough.
From what I can tell, the benchmark is including the startup time, and it isn't keeping the interpreter warm -- it starts a fresh instance for each iteration of the benchmark.
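To illustrate the difference, here is a rough sketch of the two methodologies (the runtime and script names are placeholders, and this assumes Node-style child_process and perf_hooks APIs, not whatever harness the Elsa benchmarks actually use):

// "Cold": spawn a fresh runtime per iteration, so every run pays process
// startup + parse (+ JIT warm-up for V8-based runtimes).
import { execFileSync } from "node:child_process";
import { performance } from "node:perf_hooks";

function coldRun(runtime: string, script: string): number {
  const t0 = performance.now();
  execFileSync(runtime, [script]);   // e.g. coldRun("deno", "bench.js")
  return performance.now() - t0;
}

// "Warm": keep one process and repeat the workload in-process, so startup,
// parsing, and JIT warm-up are amortized across iterations.
function warmRun(workload: () => void, iterations: number): number {
  workload();                        // warm-up pass, discarded
  const t0 = performance.now();
  for (let i = 0; i < iterations; i++) workload();
  return (performance.now() - t0) / iterations;
}

A one-liner measured the cold way mostly tells you about startup, which is exactly where QuickJS is expected to win.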
V8, with or without its JIT, wins hands down. QuickJS may be small and fast to start, but fast actual computations seem not to be its strength (or intended target).
> Any meaningful benchmark with JIT runtimes should take several seconds, at a minimum, in my opinion. 8 milliseconds is not enough.
Maybe a benchmark for JIT runtimes, but you should always keep in mind what the expected use case is.
I recently ported a program from lang X to lang Y. It's a command-line program that you're expected to invoke in your terminal, and it processes some input and then spits something out (not unlike, say, a compiler or a static site generator), but it's expected to be used "interactively", i.e. no significant pauses for processing typical workloads, just like you expect from invoking mkdir or ls. Zero percent of the effort during porting was spent on optimization. In fact, the lang X version uses specialized, algorithmically sophisticated collection classes, and the lang Y version does the dumbest possible thing. Despite this, for typical workloads, the Y version finishes in less time than the X version takes to do the same work. No amount of benchmarking in the world is going to change the fact that the dumb Y version is faster than the clever X version.
For our program to have taken "several seconds, at a minimum" to run would have been a failure case, so there would have been no value in benchmarking that, let alone optimizing for it. I expect the use cases are the same for this project, which is supposed to be processing untrusted scriptlets that need to finish in the smallest amount of wall clock time possible.
> For our program to have taken "several seconds, at a minimum" to run would have been a failure case, so there would have been no value in benchmarking that, let alone optimizing for it. I expect the use cases are the same for this project, which is supposed to be processing untrusted scriptlets that need to finish in the smallest amount of wall clock time possible.
If the goal is to compare to NodeJS and Deno, which are largely used to write long-lived servers, then I disagree entirely.
Even if the main use case for Elsa were to build Lambda-esque services, I would again disagree. Services like that gain significant benefit from reusing a single instance of the program, rather than creating it from scratch over and over again. Even without a JIT, parsing the program over and over is a waste of time, and reusing an instance allows you to amortize that cost (and the cost of JIT) across a number of uses, rather than paying that price every time.
The benchmarks provided are extremely short, such as the one-liner benchmark I mentioned before, so for all we know, QuickJS is slower at parsing JavaScript than V8 is, and it would be slower at running "real" programs than V8, even with V8's JIT overhead to deal with. This is without mentioning that V8 has a JIT-less mode, which the benchmarks aren't comparing against, even though it might be more comparable.
The only use case for an 8 millisecond benchmark with startup penalty every time is interactive CLI tools, as you alluded to, and this is far from the expected use case I see described in the project.
But even then... the benchmarks showed that the startup time and total performance for V8 were comparable on the human scale. The difference between 8 milliseconds and 36 milliseconds is completely irrelevant to humans. It isn't like V8 is taking 10 seconds to initialize or something. V8 is just running slower for a short period of time as it learns about what it is executing. If the CLI tool were doing more than a trivial amount of work, the V8 JIT would begin to accelerate past QuickJS.
So, the benchmarks are irrelevant for the CLI use case, and irrelevant for seemingly every use case that has been presented. If the benchmarks were showing that V8 could not complete these tasks in a reasonable amount of time, then they might be showing something useful.
These benchmarks are measuring the use case of interactive CLI tools that are written in a single line of JavaScript... which is a very specific use case that I don't believe is intended to be the target market of the project.
I appreciate your anecdote, but this isn't my first time thinking about benchmarks, and these benchmarks seem pretty seriously flawed to me. This is why I made my first comment, and your comment has done nothing to negate any of the reasoning that went into my first comment.
It's amazing watching V8's JIT warm up, and eye-opening for how to measure performance when profiling. A heavy call can take 200ms, then 100, then 80, then 70, and suddenly what I thought was the low-hanging fruit has vanished.
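Here's a tiny illustration of what that looks like if you time the same call repeatedly (heavyCall is just a stand-in for whatever hot path you're profiling):

import { performance } from "node:perf_hooks";

// Stand-in for whatever hot function you're profiling.
function heavyCall(): number {
  let acc = 0;
  for (let i = 0; i < 5_000_000; i++) acc += Math.sqrt(i);
  return acc;
}

// Time the same call repeatedly; under V8 the later iterations get faster
// as the JIT tiers the function up, so a single early sample is misleading.
for (let i = 0; i < 5; i++) {
  const t0 = performance.now();
  heavyCall();
  console.log(`iteration ${i}: ${(performance.now() - t0).toFixed(1)}ms`);
}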
If you're interested in using QuickJS yourself on the web or in Node to implement untrusted JavaScript evaluation in your own (JavaScript) software, check out my library: https://github.com/justjake/quickjs-emscripten
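Roughly, usage looks like this (adapted from the README; the exact method names have changed across versions, e.g. createVm vs newContext, so defer to the current docs rather than treating this as exact):

// Minimal sketch: run untrusted code inside a WASM build of QuickJS.
import { getQuickJS } from "quickjs-emscripten";

async function runUntrusted(code: string) {
  const QuickJS = await getQuickJS();   // load the WASM QuickJS engine
  const vm = QuickJS.newContext();      // isolated context, no host access by default

  const result = vm.evalCode(code);
  if (result.error) {
    console.log("failed:", vm.dump(result.error));
    result.error.dispose();
  } else {
    console.log("ok:", vm.dump(result.value));
    result.value.dispose();
  }
  vm.dispose();
}

runUntrusted(`[1, 2, 3].map((x) => x * 2).join(",")`);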
The HN thread "Fast Vue SSR with Rust and QuickJS" (2 months old) makes it clear that QuickJS is an alternative to V8. QuickJS is small and easy to embed, but it lacks V8's JIT for fast JavaScript execution.
It seems that the comparative advantage of QuickJS-based solutions is short-lived invocations that benefit from fast startup times. CLI apps/tools and Cloud Functions (e.g. AWS Lambda) seem to fit the bill.
Elsa adds Deno-like HTTP imports and TypeScript support, which makes it a strong alternative to Deno for CLI apps/tools.
Deno (V8 on Rust) or the rusty_v8 bindings seem like the better choice for long-lived servers where server-side JavaScript performance is important.
Also, it's weird how time flies. It's been almost 6 years already since they made the version of the song in the video linked here, but it feels like just yesterday that I saw it.
The headline, I believe, was chosen by OP. What leaves a bad taste in my mouth is that the team thought it unnecessary to credit QuickJS and Bellard anywhere in the README file, making it seem like original work.
Between this, what the "runtime" is currently capable of, and the benchmarks shown, I feel like this is a really poor example of a Show HN project and at best unintentionally misled me about several things...
I don't think so; it is clearly a Deno competitor in name and spirit. My first thought was about how languages let us choose our tribe, and that Go folks couldn't contribute to Deno since it is in Rust, hence Elsa. Maybe they were getting Frozen out? Or Deno is the dinosaur and Elsa is the ice age that kills them? Who knows ;)
I had forgotten that Fabrice Bellard also made one of his signature hacks around JavaScript [1]. This thing is amazing, and he created a nicely written and formatted PDF. Absolutely lovely.
Then I started doing a simple comparison of the two code bases
> Go folks couldn't contribute to Deno since it is in Rust
Deno was originally written in Go, until V8's GC started getting in the way of Go's GC, so Go folks could always work on the early versions of Deno, if they are willing to accept the problems with V8 that pushed the project to Rust.
The README still needs some work, and we will surely credit the awesome libraries that allowed us to build Elsa.
We plan on taking the Deno route and requiring flags when running a JS file to allow specific ops to be called. You can see a demo of this in our filesystem ops.
OK, I understand it's secure because of the Deno-style permission system, but not necessarily in the sense that it can be used to run untrusted code safely (a sandboxed engine, as in Chromium).
Not all Unicode characters are equal for a screen reader. The text in question is read to me as if every letter were separated from the others by an unpronounceable symbol, and the whole line becomes a mess. Something like the Chicken language, if you are acquainted with the concept.
Besides leaving effects to the font, I'm not quite sure. The screen reader must be told to treat one symbol as another, and that is not easy to do in a plain text format. In HTML that you own, you can try to do it with images and the alt text attribute, or some shenanigans with aria-hidden and aria-label. Front end is not my specialty, unfortunately.
I have noticed that many cloud-based TTS engines actually work fine with Unicode, which was surprising. I know Google's does, and it sounds way better than whatever comes as the default on Ubuntu (Orca).
NVDA on Windows. Orca does not have a high-quality voice for my language as far as I know, and that renders it unusable for day-to-day use.
Keep in mind that the screen reader and the voices it uses are different components that can be mixed and matched. Both could be instructed to treat Unicode characters in whatever manner necessary, but it is labor-intensive, and heuristics usually produce dubious results.
[1] https://github.com/elsaland/elsa/blob/master/COMPARISON.md
[2] https://bellard.org/quickjs/bench.html