
So You Want to Smuggle a Subnet Into an Airport

Like the bad sushi I once ate at Denver International, airport Wi-Fi is a compromise between having what you need and getting what you want. Public networks are not just sluggish, though — sometimes they are unusable. What happens, for example, when subnets conflict? How can you access private networks whose ranges collide with the networks you encounter while roaming, like in airports or coffee shops?

In this blog post, we’ll explain how Bowtie solves these types of problems using IPv6 and a little creativity.

Return to Baggage Claim: Understanding Subnet Conflicts in Public Networks

Let us first set the stage for the problem.

One of the networks in my homelab lives under the subnet 10.50.0.0/16. Per the routing rules that we know and love, any outbound packet destined for an address with the first two octets matching 10.50 needs to exit my network stack on an interface that can reach that subnet. While inside my home network, this is not hard: my operating system’s network machinery sets up a routing table and all privately-addressed 10.50.0.0/16 packets leave my laptop and find their destination inside my LAN.
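For the curious, here is a rough sketch of that routing decision in Python. The routes and interface names are made up for illustration; the point is only that the most specific matching prefix wins.

    # A minimal sketch of longest-prefix-match route selection.
    # The interface names and routes below are illustrative, not real.
    import ipaddress

    routes = {
        ipaddress.ip_network("0.0.0.0/0"): "wlan0 via the default gateway",
        ipaddress.ip_network("10.50.0.0/16"): "eth0 toward the homelab LAN",
    }

    def pick_route(destination: str) -> str:
        dst = ipaddress.ip_address(destination)
        # Among routes containing the destination, the longest prefix wins.
        matches = [net for net in routes if dst in net]
        return routes[max(matches, key=lambda net: net.prefixlen)]

    print(pick_route("10.50.100.1"))    # eth0 toward the homelab LAN
    print(pick_route("93.184.216.34"))  # wlan0 via the default gateway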

Compare this to the next time I fly out of BOI. Imagine that the airport Wi-Fi places me in a guest network under the 10.50.0.0/16 subnet — the same subnet that I operate for my homelab. I can stand up a traditional VPN connection that tries to facilitate communication into my private 10.50.0.0/16 homelab network, but there’s a problem.

Normally, a packet headed for a destination address that falls through to my default route — that is, an address outside of any routes I can hit directly on the local network — gets bundled up and handed off to the gateway to forward along. That is great and will let me leave a blistering review online for this so-called sushi restaurant, but the situation is different for packets headed for an address under my VPN.

If I need to reach 10.50.100.1 to clone a private git repo, I drop the outbound packet into my operating system’s network stack and suddenly it does not know where to go. My VPN ultimately needs to wrap up that packet and hand it off to the default gateway, but the gateway device also lives under 10.50.0.0/16. We have a chicken-and-egg problem: I have one network interface that can reach the gateway inside 10.50.0.0/16, which I need in order to reach the open Internet, and another network interface for my homelab VPN connection that serves 10.50.0.0/16. VPN packets still need to ultimately leave my laptop by traversing the gateway, but this imaginary routing table cannot pick between the competing 10.50.0.0/16 entries.
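Extending the earlier sketch makes the dead end concrete: both interfaces now claim the same prefix, so the longest-prefix rule has no way to break the tie. Again, the interface names are invented.

    # Both the airport guest network and the homelab VPN claim 10.50.0.0/16,
    # so longest-prefix matching alone cannot decide between them.
    import ipaddress

    routes = [
        (ipaddress.ip_network("10.50.0.0/16"), "wlan0 (airport guest network)"),
        (ipaddress.ip_network("10.50.0.0/16"), "vpn0 (homelab tunnel)"),
    ]

    dst = ipaddress.ip_address("10.50.100.1")
    matches = [iface for net, iface in routes if dst in net]
    print(matches)  # both interfaces match at /16: ambiguous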

Note that there is no one to blame here necessarily. My private network ranges are mine to choose, and there are a huge number of 10.0.0.0/8 and 192.168.0.0/16 networks out there that do not need to be concerned with each other’s actual ranges due to Network Address Translation (NAT). This only becomes problematic when duplicate ranges collide on adjacent networks, and then you are cast into networking purgatory with nothing but a questionable-looking seaweed wrap to comfort you.

I mention NAT as a way to sidestep this problem between IPv4 networks that have hops available in-between to perform translation, but what if we could push the idea of address translation even further?

A Shocking Transformation: Solving Network Overlap with a Creative Solution

Recall that we have determined that I want to reach an address in my homelab like 10.50.100.1, but my airport default gateway lives under 10.50.0.0/16 as well, leaving my outbound packet flummoxed. Let us dream up some weird science that we can perform on an everyday packet.

One trait that every VPN has is an external endpoint. From where I sit, gagging on airport sushi, I still need to hit a generally-available public endpoint before tunneling into private addresses. This is key: logically, I can say that the VPN endpoint address is what my private addresses “sit” behind. This means that the combination of VPN address + private resource address can uniquely identify a desired private destination: in human terms, “please reach this private address by tunneling through this other address”. We are sort of talking in circles because that’s what a VPN does, but stay with me.

Your typical VPN tries to make all this routing transparent by initiating a connection to the VPN address and then populating your local route table so that everything appears seamless, but our network collision scenario precludes that strategy: we simply cannot make conflicting routes work without something more. What we can do is use the VPN endpoint again to construct a “fingerprint” for outbound packets that asks our operating system to send our conflicting packet through a specific endpoint, sidestepping the conflicting routes problem.

Dr. Jekyll and Mr. Packet: Leveraging IPv6 for Network Collision Issues

If we cannot rely on the IPv4 routing table but still want to add some hints to the outbound traffic, there’s one sneaky approach we can take: smuggle additional addressing information inside of an IPv6 packet.

The space inside of an IPv6 address is immense — like, you can easily fit more than a couple of entire Internets in there. So why not hide an entire private range inside of an IPv6 address?

It turns out that this networking sleight-of-hand already exists, and it’s called NAT64. In principle, the concept is not too alien: you dedicate a portion of an IPv6 address to store an “inner” IPv4 address. For example, my homelab destination address 10.50.100.1 looks like 0a32:6401 when represented as hexadecimal-formatted IPv6 segments. If I can form a complete and routable IPv6 address with this IPv4 address tacked onto the end, then my destination endpoint can unwrap it and carry the packet the rest of the way.

This can be better understood with an example. Suppose our imaginary NAT64 address is 64:ff9b::a32:6401 and forms a complete IPv6 address (in a real-world scenario, we would want to ensure that the address was routable in our IPv6 routing table).

NAT64 in action

If we can get this packet to an endpoint that speaks NAT64, we can unwrap the last 32 bits to recover 10.50.100.1 from a32:6401, and then process it as normal.
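A small Python sketch makes the round trip concrete. It uses 64:ff9b::/96, the well-known NAT64 prefix from RFC 6052 that appears in the example above, and nothing here is specific to any particular implementation.

    # Embedding an IPv4 address into an IPv6 prefix and recovering it later.
    # 64:ff9b::/96 is the well-known NAT64 prefix; any /96 would work the same.
    import ipaddress

    NAT64_PREFIX = ipaddress.ip_network("64:ff9b::/96")

    def embed(ipv4: str) -> ipaddress.IPv6Address:
        # Place the 32-bit IPv4 address in the low 32 bits of the IPv6 address.
        return ipaddress.IPv6Address(
            int(NAT64_PREFIX.network_address) | int(ipaddress.IPv4Address(ipv4))
        )

    def extract(ipv6: ipaddress.IPv6Address) -> ipaddress.IPv4Address:
        # Mask off everything except the low 32 bits to get the IPv4 back.
        return ipaddress.IPv4Address(int(ipv6) & 0xFFFFFFFF)

    nat64 = embed("10.50.100.1")
    print(nat64)           # 64:ff9b::a32:6401
    print(extract(nat64))  # 10.50.100.1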

NAT64 with a Bowtie: How Bowtie's NAT64 Solves Networking Challenges

Publicly-reachable Bowtie network Controllers are the generally-available VPN endpoints in this re-imagined scenario. The connection from my laptop to a Controller is simple: it’s a WireGuard interface, which means that it’s robust, fast, and secure.

Like tried-and-true NAT, NAT64 can do things that seem like magic, but it’s the implementation that makes the magic useful. To make it happen, we push IPv6 routes from the Controller to the Bowtie client. This is the first step to sidestepping IPv4 routes that potentially collide: by claiming our own unique IPv6 route, we can fit whole networks within our own IPv6 ranges, and the vastness of the space means that we can find a place in there comfortably and without fear of overlap.

NAT64 is less a rigid specification than a general way of routing between IPv4 and IPv6 spaces, so we have some wiggle room when it comes to constructing our NAT64 addresses. In our case, Controllers are inherently designed to operate in a distributed fashion wherever their associated networks are. This means that at any given time, a Bowtie client may be aware of one or more Controllers that facilitate connectivity into the private subnets that they serve.

This is important for a few reasons:

  • Each Controller also knows which subnets it is best able to reach. Imagine Controllers deployed in AWS us-east-1 and us-west-1: if a Controller in us-east-1 is adjacent to a VPC using 10.200.1.0/24, then it can signal this to connected clients.
  • If that Controller ceases to be publicly reachable but another Controller operating in the region is, then the Bowtie client picks it as the next best option.
  • In another example, if the us-east-1 region is experiencing availability problems — which would be an unthinkable situation for this AWS region — then the client can speak to a Controller in us-west-1 and let the Controllers attempt to route between themselves for a better chance at connecting across the AWS network backhaul plane.

Bowtie ZTNA overlay

In this way we get minimal latency and failover resiliency. Just smush the Controller’s IPv6 prefix together with your IPv4 destination and send it down the operating system’s network stack! In more meaty (fishy?) terms (a rough code sketch follows this list):

  • Decide where to go — in our case, 10.50.100.1
  • Choose a destination Controller to be the entrypoint for our packet. In a traditional VPN, this is just the VPN’s server endpoint, but in our case, we can select from a pool of endpoints for the best match with low latency between sites, and any number of Controllers per site for high availability.
  • Combine an IPv6 destination prefix that identifies the desired Controller with our IPv4 destination address and any extra bits that flag it as a NAT64 address. This ends up looking like <controller prefix>:64:a32:6401 in our example above.
  • The Bowtie client software is aware of its Controller endpoints and uses that list to maintain entries in the operating system’s IPv6 routing table.
  • Send the packet into the operating system’s networking stack and let it hit the right IPv6 route, which will toss it into the associated WireGuard interface and through the encrypted tunnel to the destination Controller.
  • The Controller at the other end of the pipe receives the NAT64 packet, unwraps it, recovers 10.50.100.1, and forwards it along to the right subnet.
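Here is that flow sketched in Python. The Controller prefixes, the subnet-to-Controller assignments, and the omission of the extra NAT64 flag bits mentioned above are all simplifications of mine; the real client learns this information from its Controllers.

    # A simplified sketch of client-side NAT64 address construction.
    # Prefixes and subnet-to-Controller assignments are invented for
    # illustration, and the extra NAT64 flag bits are omitted.
    import ipaddress

    # Hypothetical /96 prefixes announced by two Controllers.
    CONTROLLER_PREFIXES = {
        "us-east-1": ipaddress.ip_network("fd00:1111:1::/96"),
        "us-west-1": ipaddress.ip_network("fd00:1111:2::/96"),
    }

    # Which Controller is the preferred entry point for which private subnet.
    SUBNET_TO_CONTROLLER = {
        ipaddress.ip_network("10.200.1.0/24"): "us-east-1",
        ipaddress.ip_network("10.50.0.0/16"): "us-west-1",
    }

    def nat64_destination(ipv4: str) -> ipaddress.IPv6Address:
        dst = ipaddress.ip_address(ipv4)
        controller = next(
            name for net, name in SUBNET_TO_CONTROLLER.items() if dst in net
        )
        prefix = CONTROLLER_PREFIXES[controller]
        # Tuck the IPv4 address into the low 32 bits of the chosen prefix.
        return ipaddress.IPv6Address(int(prefix.network_address) | int(dst))

    print(nat64_destination("10.50.100.1"))  # fd00:1111:2::a32:6401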

At this point, the NAT64 machinery kicks in and maintains a stateful mapping between the client at one end and the private resource at the other, so that responses arriving on the Controller’s network interface are re-wrapped into IPv6 packets. Voila! Any private subnet can work because we smuggle an entire IPv4 subnet into a unique IPv6 prefix. A picture is worth a thousand bits to illustrate the flow:

In this diagram, the parts in red represent the normal IPv4 packets that we’d like to send between edge devices and the private network. These red packets get translated into blue NAT64 packets, wrapped inside WireGuard interfaces, and the entire process is reversed on the Controller to recover our plain IPv4 address.

There’s one more piece to turn this strategy into a hands-off solution, though: most people do not type IPv4 addresses by hand, so we would prefer to reach sushi-review.internal.example.com through the NAT64 tunnel without constructing the address manually.

It’s Always DNS: The Role of DNS64 in Network Connectivity Solutions

It turns out we do not need to reinvent too many wheels here either; we can instead repurpose mechanisms that already work well for this use case. Turning a hostname into an IP address is what DNS is for, after all.

DNS64 is the logical DNS counterpart to NAT64: it lets us synthesize AAAA (IPv6) records from A (IPv4) records. Let’s start from the moment I enter sushi-review.internal.example.com into my browser. My poorly-prepared California roll has impaired my cognitive reasoning ability and I do not want to type the internal private IPv4 address, but rather the easier-to-remember friendly DNS name.

The first thing our NAT64+DNS64 system needs to do is sit in the middle of DNS resolution and helpfully suggest a different record response that points at our NAT64 address. This is not too exotic: Bowtie stands up a DNS server and we preferentially send system DNS queries to it.

At our Controller, an administrator has already defined the shape and configuration of our private network: we have resources like DNS servers there, and the local agent collects all of this information in order to help our private DNS query along. For example, the Controller indicates that queries matching *.internal.example.com should be sent to our internal DNS, so when the local resolver receives sushi-review.internal.example.com, it takes a brief detour to find addresses for this hostname from our private network.

The next step is just NAT64 again! The local DNS resolver looks up the upstream resolvers that the Controller has told it about, constructs a NAT64 address for the desired DNS server, and sends the query down the networking stack toward that conflict-proof address. Take note that this means any existing DNS servers in my private network do not need any changes: they receive normal-looking IPv4 packets and send DNS responses without any need to wrap them in NAT64-looking addresses. We are about to handle that.

When our local DNS resolver lookup returns with an unwrapped 10.50.100.1 response for sushi-review.internal.example.com, we take one more step before handing it off to the browser: we transmogrify it into a NAT64 address and then remove the A IPv4 response. This effectively persuades the browser to preferentially start communicating across the NAT64 tunnel and is a sneaky application of something called the Happy Eyeballs protocol, which says, “send out A and AAAA DNS queries, hand them back to me, and I’ll take the AAAA if you have it”. “Happy Eyeballs” is also the name of the sushi dish that is causing my distress, which is a disturbing coincidence.
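Sketched in the same style as before, that synthesis step might look something like this. The prefix is a stand-in, and a real resolver returns proper DNS messages rather than dictionaries.

    # A toy DNS64 step: given upstream A records, synthesize AAAA records
    # pointing into the tunnel prefix and withhold the A records so that
    # Happy Eyeballs prefers the IPv6 path.
    import ipaddress

    TUNNEL_PREFIX = ipaddress.ip_network("64:ff9b::/96")  # stand-in prefix

    def dns64_answer(hostname: str, a_records: list) -> dict:
        synthesized = [
            str(ipaddress.IPv6Address(
                int(TUNNEL_PREFIX.network_address)
                | int(ipaddress.IPv4Address(record))
            ))
            for record in a_records
        ]
        # Hand back only AAAA answers; the original A answers are suppressed.
        return {"name": hostname, "AAAA": synthesized, "A": []}

    print(dns64_answer("sushi-review.internal.example.com", ["10.50.100.1"]))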

With the IPv6 AAAA response in hand, my browser is none the wiser to the fact that it’s actually sending a NAT64-ed packet down the routing table when it disappears into the WireGuard interface and emerges at the Controller in my private network. NAT64 kicks in again, and our connection to sushi-review.internal.example.com works transparently.

Here is an even better trick: the local Bowtie resolver will intelligently parse out names that look like private addresses, such as ip-10-50-100-1.internal.example.com, in order to assemble 10.50.100.1 from the hostname and perform translation for you. This achieves the same result for the example IP address we have been using so far, but it also provides a path to easily reach plain addresses like 10.50.100.123 that may not have DNS records.
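A rough guess at that parsing step, in the same vein as the earlier sketches; the exact name format the Bowtie resolver accepts may differ.

    # Recover an IPv4 address from a hostname shaped like
    # ip-10-50-100-1.internal.example.com. The pattern is illustrative only.
    import re

    def ipv4_from_hostname(hostname: str):
        match = re.match(r"^ip-(\d{1,3})-(\d{1,3})-(\d{1,3})-(\d{1,3})\.", hostname)
        return ".".join(match.groups()) if match else None

    print(ipv4_from_hostname("ip-10-50-100-1.internal.example.com"))  # 10.50.100.1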

The examples that I have described so far fall into the “strange address in a strange LAN” scenario, but the principles apply in a variety of interesting ways, not just when roaming in other networks.

Even Further Beyond

Consider how you might access private networks in a public cloud with identical, overlapping subnets in different networks. 

Running one Controller (or more!) per network unlocks this capability by associating each network with the Controller it sits “behind”. With a little extra configuration we can make an association from arbitrary hostnames to the right Controller and then let the unique NAT64 address take care of pointing in the right direction. For example, we might configure *.us-east-1.sushi-lawsuit.org names to tunnel through the Controller running in us-east-1 but instruct the agent to construct NAT64 addresses destined for the us-west-2 Controller for names matching *.us-west-2.sushi-lawsuit.org. When I attempt to navigate to settlement.us-west-2.sushi-lawsuit.org, I receive a NAT64 Happy Eyeballs AAAA response that may wrap an IPv4 address in the same subnet as the us-east-1 network, but because the combined Controller prefix + IPv4 address NAT64 “fingerprint” is unique, we can route packets to it without any issue.
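A tiny sketch of that hostname-to-Controller association, with the suffix map invented for illustration; combined with the address construction shown earlier, identical IPv4 subnets behind different Controllers stay unambiguous.

    # Map a hostname suffix to the Controller whose prefix should carry it.
    # The suffixes and Controller names are illustrative.
    SUFFIX_TO_CONTROLLER = {
        ".us-east-1.sushi-lawsuit.org": "us-east-1",
        ".us-west-2.sushi-lawsuit.org": "us-west-2",
    }

    def controller_for(hostname: str):
        for suffix, controller in SUFFIX_TO_CONTROLLER.items():
            if hostname.endswith(suffix):
                return controller
        return None

    print(controller_for("settlement.us-west-2.sushi-lawsuit.org"))  # us-west-2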

Bowtie Controllers peer with each other as well, so as long as we can reach one Controller, we can eventually reach any Controller. If a BGP route breaks between your ISP and AWS but a Controller in GCP is peered with another Controller in EC2, then you can route through the reachable Controller to reach the others. In this way, we provide not only access but also reliability.

As you can imagine, there are many applications for this networking scheme thanks to how fundamentally powerful the primitives are.

A Farewell to Sushi

We hope this has been an informative look into our networking philosophy. If you recognize some of these problems and are looking to put them behind you, try Bowtie today.

Original header image attribution: https://www.flickr.com/photos/26226522@N08/2923815673

Original “Dr. Jekyll and Mr. Hyde” image attribution: https://commons.wikimedia.org/wiki/File:Dr_Jekyll_and_Mr_Hyde_poster_edit2.jpg
