Accelerating JavaScript (in the browser)
Aug 05, 2020 · 2416 words

I once promised a friend I would write this years ago (I think)… and now I have finally gotten ~~drunk~~ caffeinated enough to do so.

```mermaid
graph TD
    A[Is my page taking too long to load/run?] -->|Yes| G[How big is my data?]
    A -->|No| D{{"What are you doing in this flowchart?"}}
    G -->|Small| C[Am I running expensive computations?]
    G -->|Big| L[Sir Mix-A-Lot big?]
    C -->|Yes| P[Am I running out of memory?]
    C -->|No| R{{"🤷 Runaway recursion?"}}
    P -->|Yes| Q[Do I have a memory leak?]
    P -->|No| F["Can the operations be parallelized?"]
    Q -->|No| E{{"🌊 <a href='#-streaming-the-web'>Streams!</a> 🌊"}}
    Q -->|Yes| S{{"Plug it..."}}
    L -->|I cannot deny| F
    L -->|I mean... gross| K
    F -->|Yes| I["Can they be written as a shader?"]
    F -->|No| K{{"<a href='#webassembly'>Assemble the Web!</a> ☝️"}}
    I -->|Yes| J{{"🔮 <a href='#-gp-unit'>GPUs!</a> 🔮"}}
    I -->|No| N{{"<a href='#-web-workers'>Work it!</a> 💁"}}
    class D,R,S danger;
    class E,K,N,J exit;
    classDef danger fill:#fde6ea,stroke:#ec2147;
    classDef exit fill:#d8f6eb,stroke:#26ab79;
```

JavaScript used to be made fun of all the time because it was the slowest kid in gym class. And it could only have one thread. But this only motivated JS even more! This is that story…

📜 A brief(er) history of JavaScript

It is perhaps what used to be the biggest impediment to JavaScript bettering itself that has uniquely situated it to excel. And in those fires of incompatibility the seeds of its performance were forged.

While other languages have competing implementations, JavaScript is one of the few whose performance is driven by consumers (rather than producers). Sure, as a consumer of R, a scientist wants it to be fast. But improvements to the language often benefit everyone, and commercialization instead comes in the form of professional services and products.

For JavaScript (being so essential to the web browsing experience), its performance is the product in a sense: slight differences can make a browser feel more responsive, and that feeling can be sold.

⚔️🛡 The Great Browser Wars

It was in the Second Browser War of the early aughts that JavaScript's fate would be irrevocably changed…

While likely unbeknownst to all at the start (except for maybe a few industry insiders 1), by the end of the decade the writing for a high(er) performance JavaScript was on the proverbial wall.

For an unabbreviated partial history of JavaScript, I highly recommend hearing it firsthand from one of the greats.


CUT TO:


And by the end of the decade following the end of that decade (which brings us to now), JavaScript would become one of the most optimized and performant dynamic/scripting languages out there.

WHERE IS YOUR 🐍 GOD NOW PYTHONISTAS?!?

🏎💨 Vroom Vroom

So now that we are all caught up on the historical context of how JavaScript has gotten here, let's get into our options for squeezing as much performance out of it as we can. What follows are the various options for leveraging browser-native APIs/technologies to accelerate JavaScript execution.

For a TL;DR, just follow the opening flowchart (the green leaf nodes hyperlink to the relevant locations in this post).

This post does NOT cover optimizations for JavaScript runtimes/VMs/implementations (i.e. how you could make a faster V8).

The techniques at your disposal can be (roughly) grouped into the following types of optimizations:

  1. Streaming: process data incrementally as it arrives
  2. Web Workers: parallelize work across threads
  3. WebGL: offload parallel computation to the GPU
  4. WebAssembly: compile hot code paths to near-native speed

And each of these is suited particularly well to overcoming a specific performance bottleneck (or use case).

You can also combine these techniques if you have a particularly hairy performance problem (i.e. WebGL in a Web Worker)

💉 Picking your Performance Poison

The first step in improving performance is understanding what type of bottleneck you have. Again, the flowchart is your quickest path to choosing the right technique for your particular problem, but that begs the question: how do I know which node to stop at?

Sometimes it might be somewhat intuitive/obvious what the bottleneck is (like if you are trying to do a word count across a 1TB file in the browser for some reason). But often applications are complex enough that it is hard to know a priori where the slowdowns are happening (or there may be multiple compounding bottlenecks).

Enter profiling (the good kind). Profiling in a web browser is slightly different than profiling a standalone program, since there are many externalities that affect how your code actually executes in a browser.

To help untangle this web 2 of performance, browsers often have fairly advanced developer tools built in. They are your friends.

Thankfully the general principles and process of profiling are similar to debugging any (non-browser) language. When I am trying to pinpoint a performance bottleneck, I like to imagine I am playing twenty questions with my browser.

  1. Is it you or is it me (and my code)?
  2. Are my calculations taking too long?
  3. Do I have enough memory?
  4. etc.
  5. etc.

For this post we are not going to dive into the nuances of performance profiling, since it is a fairly complex process that can vary a lot depending on environment, OS, browser, the network, other applications, etc. And often it is a lot of trial and error. But the resources below have lots of good advice from people much wiser than me. So here, we will proceed in the hypothetical.

Which brings us to our first potential solution.

🏄 Streaming the Web

If you are running into latency or memory problems, streaming the data can often help. On the JavaScript side, this looks like an incremental algorithm that updates a shared data structure representing the result of some calculation. And if the calculation is aggregating something, the output result is often much smaller than the raw data input.

If the amount of memory needed to process a single datum is less than the total available memory, a properly architected streaming solution may be all you need.
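As a sketch of what this looks like, here is an incremental line counter: the aggregation logic only ever holds one chunk (plus a partial line) in memory, and the browser wiring streams a response through it with `fetch` (the URL is a placeholder):

```javascript
// Incremental aggregator: processes one chunk at a time, keeping only
// a running count plus whatever partial line spans a chunk boundary.
function makeLineCounter() {
  let carry = "";
  let lines = 0;
  return {
    push(chunk) {
      const parts = (carry + chunk).split("\n");
      carry = parts.pop(); // the last piece may be an incomplete line
      lines += parts.length;
    },
    result() {
      // A trailing partial line still counts as a line.
      return lines + (carry.length > 0 ? 1 : 0);
    },
  };
}

// Browser wiring (sketch): stream the body instead of buffering it all.
// "/big-file.txt" is a placeholder URL.
async function countLines(url) {
  const counter = makeLineCounter();
  const response = await fetch(url);
  const reader = response.body.getReader();
  const decoder = new TextDecoder();
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    counter.push(decoder.decode(value, { stream: true }));
  }
  return counter.result();
}
```

The key property is that peak memory is bounded by the chunk size, not the file size, which is what makes the 1TB-file-in-a-browser scenario even thinkable.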

Most of the techniques applicable to processing data streams in general can help us too. The main difference however is that we have to work within the constraints of the browser environment.

In the browser, the primary concern is data/network I/O, as this usually has the biggest impact.

For streaming I/O in the browser, there are a variety of options/protocols. And depending on the application/processing constraints, one approach may be more appropriate than the others.

| Protocol | I/O Semantics | Example Use Case | Notes |
| --- | --- | --- | --- |
| `fetch()` | Async HTTP | Interact with an API | XMLHttpRequest++ |
| Streams | Chunked HTTP | Progressive rendering | Limited browser support |
| WebSockets | Bidirectional TCP | Slack clone | Increased latency |
| WebRTC | P2P over UDP | Multiplayer games | Needs signaling server |

🧑‍🏭 Web Workers

As you may or may not know, JavaScript is a single-threaded language with an asynchronous, event-loop-based concurrency model. This is actually not a limitation of the language design, but inherent to the run-time environment (i.e. the browser, Node.js, etc.). And while certain run-times (often JVM-based) do in fact support real multi-threading, implementations marketed as "multi-threaded" are often just syntactic sugar hiding workers.

Web workers at their core are simply the browser's answer to threads 3. And more importantly, they allow web applications to execute code in parallel, rather than just concurrently.

A big difference between programming with threads and Web Workers however is that a Web Worker executes in its own isolated context (without access to shared memory or the DOM). To communicate with the main program/web page (or any other workers), you must pass messages back and forth.
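A minimal sketch of that message-passing style (the Blob URL trick just keeps the worker script inline for this example; normally you would pass a separate file path to `new Worker(...)`, and `fib` is a deliberately expensive stand-in):

```javascript
// A deliberately slow function we want off the main thread.
function fib(n) {
  return n < 2 ? n : fib(n - 1) + fib(n - 2);
}

// Worker source as a string, turned into a Blob URL so the example
// stays in one file. The worker has no access to our scope or the DOM.
const workerSource = `
  ${fib.toString()}
  onmessage = (event) => {
    // No shared memory: the input arrives as a message...
    const result = fib(event.data);
    // ...and the output goes back as one.
    postMessage(result);
  };
`;

function runInWorker(n) {
  const url = URL.createObjectURL(new Blob([workerSource]));
  const worker = new Worker(url);
  return new Promise((resolve) => {
    worker.onmessage = (event) => {
      worker.terminate();
      resolve(event.data);
    };
    worker.postMessage(n);
  });
}

// Usage (browser): runInWorker(40).then(console.log) — the main thread
// (and therefore the UI) stays responsive while the worker grinds away.
```

Note that everything crossing the boundary is copied (structured clone), not shared, which is precisely why the race conditions discussed below are so much harder to create.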

An advantage of the worker model however is that race conditions are impossible if you have no shared resources (like files or a DB). And much less likely even with shared resources, since access to these is usually more obvious than simply accessing shared memory (like with threads). Additionally, in contrast to multithreading, the single thread in the main program coordinating the communication between the workers cannot be preempted 4. A downside to this (as you have likely been the victim of) is that a runaway JavaScript function in the main program can block the entire event loop and freeze the browser window.

🧮 GP-Unit

The GPU is often the darling of performance junkies… I mean how can you not get starry eyed at a 5120 CUDA core V100 🤩 And these 5120 cores mean that you can run parallel operations on large amounts of data. But what they (the junkies) won't tell you is that those 5120 cores can only run a very specific type of instruction.

When considering if GPU based processing can help, you first must have a computational bottleneck. A GPU will not make sending data over the network any faster, nor will it give you any more memory. And second, you must be able to formulate your computation as a shader 5.

But. but but but. IF you can write your computation as a shader… the WebGL gods will smile upon you with unreasonable performance.

Another limitation is that to run computations on the GPU, you actually need to transfer the data to the GPU's memory (which is often much smaller than RAM). So if you need to shuttle data between RAM and the GPU often, transfer overhead/latency might outweigh the actual speedup.

Because of these aforementioned constraints, often statistical/mathematical computing and machine learning applications are the most amenable to speedup with the GPU (high gain-to-pain ratio).

In the browser, you can access the GPU using the WebGL JavaScript API. You write shaders in GLSL (the OpenGL Shading Language) and use the JavaScript API to execute them on your data. But shaders are a pretty low-level interface (especially for non-graphics GPGPU work), so if you do want to go down this route I recommend using a higher-level library.
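To give a feel for the level of abstraction, here is the shape of the raw approach: a fragment shader that squares each value of an input texture (an illustrative kernel, not anything canonical), plus the standard compile boilerplate. This is only a fragment of a real GPGPU pipeline, which also needs textures, a full-screen quad, and a readback:

```javascript
// GLSL fragment shader: runs once per output pixel, massively in parallel.
// Here each "pixel" squares the corresponding value of an input texture.
const fragmentSource = `
  precision highp float;
  uniform sampler2D inputData;
  uniform vec2 resolution;
  void main() {
    vec4 value = texture2D(inputData, gl_FragCoord.xy / resolution);
    gl_FragColor = value * value;
  }
`;

// Standard WebGL compile boilerplate. `gl` is a WebGLRenderingContext
// obtained from canvas.getContext("webgl") in a browser.
function compileShader(gl, type, source) {
  const shader = gl.createShader(type);
  gl.shaderSource(shader, source);
  gl.compileShader(shader);
  if (!gl.getShaderParameter(shader, gl.COMPILE_STATUS)) {
    throw new Error(gl.getShaderInfoLog(shader));
  }
  return shader;
}
// To actually run this you would also upload your data as a texture,
// draw a quad covering the output, and read results back with
// gl.readPixels — exactly the boilerplate GPGPU libraries hide for you.
```

Notice that the "program" has no loops over the data at all: the GPU implicitly runs it once per output element, which is both the source of the speedup and the reason only certain computations fit.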

🛠🧰 WebAssembly

And so, we are now in the final act… The latest golden boy of the web performance world is WebAssembly. And for good reason.

WebAssembly is exciting for what it brings to the browser environment:

  1. Safe code execution in a sandbox
  2. Near native speeds
  3. Ability to run code written in non-JavaScript languages in a browser
  4. Uses open web standards (and no proprietary plugins)

I will not get into the specific mechanics of how the WebAssembly VM works nor the semantics of the WebAssembly text format. To learn more about those I highly recommend starting with Lin Clark's Cartoon intro to WebAssembly and then checking out some of the other resources.

For this post, I mainly want to provide enough context on WebAssembly so you hopefully can evaluate where and when it might be beneficial.

WebAssembly ironically does not perform the same role as native assembly, nor can you actually do any web programming in it (it doesn't have access to the DOM). I like to think of WebAssembly as something of an intermediate representation for programs that execute in a web browser. Because it is an intermediate representation, it is not meant to be programmed in directly (much as you should not write a program in LLVM IR by hand), but being an IR it is somewhat more readable/understandable than machine instructions.

While WebAssembly might look like the spiritual successor to asm.js on the surface, it represents so much more. Risking hyperbole, its development can be analogized to the first compilers of Admiral Grace Hopper.

WebAssembly can execute faster than the equivalent JavaScript for many programs, but it doesn't always run faster. And to situate WebAssembly in our modern toolkit, I would say it is most accurate to think of it as an alternative to C/C++/Rust (which coincidentally are the most well-supported languages targeting wasm). JavaScript still does best what it was optimized for (and will continue to): dynamic manipulation of the DOM and orchestration of interactions with the browser.

A few quick notes to clear some things up:

  1. JavaScript DOES NOT compile to wasm 6
  2. WebAssembly is ahead-of-time (AOT) compiled 7
  3. WebAssembly DOES NOT have access to the DOM (or the Web APIs)
  4. WebAssembly uses a sequential linear memory

Because of these points, WebAssembly exists as something of a dual to WebGL in the realm of performance. WebAssembly really excels at speeding up computationally intensive processes that often have random memory access and complex control flow.

Which is the Achilles' 👠 of WebGL.

On the practical side of things, one of the most common ways of writing applications targeting WebAssembly is to use emscripten, a C/C++ to wasm compiler toolchain. And as WebAssembly is becoming more widespread, some languages (like Rust, AssemblyScript and Go) have wasm as a direct compilation target.
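To make the JavaScript side of this concrete, here is the classic hand-assembled "add two integers" module instantiated through the WebAssembly JavaScript API. The byte array below is a complete, valid wasm binary; in practice a toolchain like emscripten emits these bytes for you, and you would load them with `WebAssembly.instantiateStreaming(fetch(...))` rather than inlining them:

```javascript
// A complete wasm module, written out byte-by-byte: it exports a single
// function, add(a, b) -> a + b, operating on two i32s.
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, // magic: "\0asm"
  0x01, 0x00, 0x00, 0x00, // binary format version 1
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00, // function section: one function of type 0
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export it as "add"
  0x0a, 0x09, 0x01, 0x07, 0x00, // code section: one body, no locals
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b, // local.get 0, local.get 1, i32.add, end
]);

// Synchronous instantiation is fine for a tiny module like this;
// real applications should prefer WebAssembly.instantiateStreaming.
const module = new WebAssembly.Module(bytes);
const instance = new WebAssembly.Instance(module);
console.log(instance.exports.add(2, 3)); // 5
```

Exported wasm functions end up as ordinary JavaScript functions, which is how JavaScript keeps its orchestration role while the hot path runs inside the wasm VM.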

Thanks for all the (vegan) fish (sticks)

Hopefully this post has laid out a sensible roadmap for determining how you might speed up your JavaScript application. If you do feel like these methods might be applicable, again I highly recommend the linked resources below 👇 to learn more about any of the specific technologies.

And if you have a recommendation/suggestion/correction/comment on anything in here, don't hesitate to leave a comment below or send a message on the bird telegraph @hyphaebeast. Thank you for choosing to spend your attention on this post (instead of some TikTok video) 🙏

We are all learning and I am far from an expert on anything in here….

Resources

Profiling

Benchmarks

WebAssembly

WebGL

CC0
To the extent possible under law, Jonathan Dinu has waived all copyright and related or neighboring rights to Accelerating JavaScript (in the browser). This work is published from: United States.


  1. citation needed ↩︎

  2. pun very much intended ↩︎

  3. and depending on the implementation, are actually just run as a thread ↩︎

  4. no schedulers, no masters ↩︎

  5. Even when using a library for "general purpose" GPU (GPGPU) programming, you still are constrained by the need to formulate your program as a shader. ↩︎

  6. AssemblyScript is a subset of TypeScript that compiles down to wasm however ↩︎

  7. though you can run the compiler in a web page ↩︎
