How to set up your own Nebula mesh VPN, step by step


Itching to get your own Nebula mesh VPN up and running? We've got you covered.

Jim Salter



Last week, we covered the launch of Slack Engineering's open source mesh VPN system, Nebula. Today, we're going to dive a little deeper into how you can set up your own Nebula private mesh network—along with a little more detail about why you might (or might not) want to.

VPN mesh versus traditional VPNs

The biggest selling point of Nebula is that it's not "just" a VPN—it's a distributed VPN mesh. A conventional VPN uses a simple star topology: all clients connect to a server, and any additional routing is done manually on top of that. All VPN traffic has to flow through that central server, whether that makes sense in the grander scheme of things or not.

In sharp contrast, a mesh network understands the layout of all its member nodes and routes packets between them intelligently. If node A is right next to node Z, the mesh won't arbitrarily route all of its traffic through node M in the middle—it'll send packets from A to Z directly, without middlemen or unnecessary overhead. We can examine the differences with a network flow diagram demonstrating traffic patterns in a small virtual private network.
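If the routing contrast is easier to see in code than in prose, here's a toy Python model of the two topologies. The node names and helper functions are purely illustrative, not anything from Nebula itself:

```python
# Toy model of the two topologies described above. "M" stands in for
# the central VPN server; A and Z are ordinary peers.

def star_path(src, dst, hub="M"):
    """In a star topology, every packet relays through the hub."""
    return [src, hub, dst]

def mesh_path(src, dst):
    """In a mesh, peers that can reach each other talk directly."""
    return [src, dst]

print(star_path("A", "Z"))  # ['A', 'M', 'Z'] -- one extra hop, always
print(mesh_path("A", "Z"))  # ['A', 'Z']
```

The extra hop in the star path is exactly the bandwidth and latency cost the mesh avoids.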

[Network flow diagram: star topology versus mesh topology in a small VPN]

All VPNs work in part by exploiting the bi-directional nature of network tunnels. Once a tunnel has been established—even through Network Address Translation (NAT)—it's bidirectional, regardless of which side initially reached out. This is true for both mesh and conventional VPNs—if two machines on different networks punch tunnels outbound to a cloud server, the cloud server can then tie those two tunnels together, providing a link with two hops. As long as you've got that one public IP answering to VPN connection requests, you can get files from one network to another—even if both endpoints are behind NAT with no port forwarding configured.

Where Nebula becomes more efficient is when two Nebula-connected machines are closer to each other than they are to the central cloud server. When a Nebula node wants to connect to another Nebula node, it'll query a central server—what Nebula calls a lighthouse—to ask where that node can be found. Once it has the location from the lighthouse, the two nodes can work out between themselves what the best route to one another might be. Typically, they'll be able to communicate with one another directly rather than relaying through the lighthouse—even if they're behind NAT on two different networks, neither of which has port forwarding enabled.

By contrast, connections between any two PCs on a traditional VPN must pass through its central server—adding to that server's monthly bandwidth allotment and potentially degrading both throughput and latency from peer to peer.

Direct connection through UDP skullduggery

Nebula can—in most cases—establish a tunnel directly between two different NATted networks, without the need to configure port forwarding on either side. This is a little brain-breaking—normally, you wouldn't expect two machines behind NAT to be able to contact each other without an intermediary. But Nebula is a UDP-only protocol, and it's willing to cheat to achieve its goals.

If both machines reach the lighthouse, the lighthouse knows the source UDP port for each side's outbound connection. The lighthouse can then inform one node of the other's source UDP port, and vice versa. By itself, this isn't enough to make it back through the NAT pinhole—but if each side targets the other's NAT pinhole and spoofs the lighthouse's public IP address as the source, their packets will make it through.
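To make the dance concrete, here's a toy Python sketch of the bookkeeping a lighthouse-style rendezvous server performs. This models the idea described above, not Nebula's actual wire protocol; the class and method names are invented for illustration:

```python
# Sketch of rendezvous-server bookkeeping for UDP hole punching.
# Addresses below use RFC 5737 documentation ranges.

class Lighthouse:
    def __init__(self):
        # node name -> (public IP, source UDP port) as observed by us
        self.endpoints = {}

    def register(self, name, addr):
        # When a node's outbound UDP packet arrives, the lighthouse sees
        # the public address and port its NAT assigned to the pinhole.
        self.endpoints[name] = addr

    def introduce(self, a, b):
        # Tell each node the other's NAT pinhole, so each can aim
        # punched packets directly at it.
        return self.endpoints[b], self.endpoints[a]

lh = Lighthouse()
lh.register("banshee", ("203.0.113.5", 41414))
lh.register("locutus", ("198.51.100.7", 52525))
for_banshee, for_locutus = lh.introduce("banshee", "locutus")
print(for_banshee)  # ('198.51.100.7', 52525) -- where banshee should aim
```

Once both sides are aiming at each other's pinholes, the source-address spoofing trick described above gets the first packets through.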

UDP is a stateless protocol, and very few networks bother to check for and enforce boundary validation on UDP packets—so this source-address spoofing works more often than not. However, some more advanced firewalls may check the headers on outbound packets and drop them if they have impossible source addresses.

If only one side has a boundary-validating firewall that drops spoofed outbound packets, you're fine. But if both ends have boundary validation available, configured, and enabled, Nebula will either fail or be forced to fall back to routing through the lighthouse.

We specifically tested this and can confirm that a direct tunnel from one LAN to another across the Internet worked, with no port forwarding and no traffic routed through the lighthouse. We tested with one node behind an Ubuntu homebrew router, another behind a Netgear Nighthawk on the other side of town, and a lighthouse running on a Linode instance. Running iftop on the lighthouse showed no perceptible traffic, even though a 20Mbps iperf3 stream was cheerfully running between the two networks. So right now, in most cases, direct point-to-point connections using forged source IP addresses should work.

Setting Nebula up

To set up a Nebula mesh, you'll need at least two nodes, one of which should be a lighthouse. Lighthouse nodes must have a public IP address—preferably, a static one. If you use a lighthouse behind a dynamic IP address, you'll likely end up with some unavoidable frustration if and when that dynamic address updates.

The best lighthouse option is a cheap VM at the cloud provider of your choice. The $5/mo offerings at Linode or Digital Ocean are more than enough to handle the traffic and CPU levels you should expect, and it's quick and easy to open an account and get one set up. We recommend the latest Ubuntu LTS release for your new lighthouse's operating system; at press time that's 18.04.

Installation

Nebula doesn't actually have an installer; it's just two bare command line tools in a tarball, regardless of your operating system. For that reason, we're not going to give operating-system-specific instructions here: the commands and arguments are the same on Linux, MacOS, or Windows. Just download the appropriate tarball from the Nebula release page, open it up (Windows users will need 7zip for this), and dump the commands inside wherever you'd like them to be.

On Linux or MacOS systems, we recommend creating an /opt/nebula folder for your Nebula commands, keys, and configs—if you don't have an /opt yet, that's okay, just create it, too. On Windows, C:\Program Files\Nebula is probably a more sensible location.

Certificate Authority configuration and key generation

The first thing you'll need to do is create a Certificate Authority using the nebula-cert program. Nebula, thankfully, makes this a mind-bogglingly simple process:

root@lighthouse:/opt/nebula# ./nebula-cert ca -name "My Shiny Nebula Mesh Network"

What you've actually done is create a certificate and key for the entire network. Using that key, you can sign certificates for each individual node. Unlike the CA certificate, node certificates need to have the node's Nebula IP address baked into them when they're created. So stop for a minute and think about what subnet you'd like to use for your Nebula mesh. It should be a private subnet—so it doesn't conflict with any Internet resources you might need to use—and it should be an oddball one so that it won't conflict with any LANs you happen to be on.

Nice, round numbers like 192.168.0.x, 192.168.1.x, 192.168.254.x, and 10.0.0.x should be right out, as the odds are extremely good you'll stay at a hotel, friend's house, etc. that uses one of those subnets. We went with 192.168.98.x—but feel free to get more random than that. Your lighthouse will occupy .1 on whatever subnet you choose, and you will allocate new addresses for nodes as you create their keys. Let's go ahead and set up keys for our lighthouse and nodes now:
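If you'd like to sanity-check a candidate subnet before committing, Python's standard ipaddress module makes the overlap test trivial. The list of "common" LANs below is illustrative, not exhaustive—add any subnets you expect to run into:

```python
# Quick overlap check for a candidate Nebula subnet, using Python's
# standard ipaddress module.
import ipaddress

COMMON_LANS = [ipaddress.ip_network(n) for n in (
    "192.168.0.0/24", "192.168.1.0/24", "192.168.254.0/24", "10.0.0.0/24",
)]

def is_oddball(candidate: str) -> bool:
    """True if the candidate subnet avoids all the common home LANs."""
    net = ipaddress.ip_network(candidate)
    return not any(net.overlaps(lan) for lan in COMMON_LANS)

print(is_oddball("192.168.98.0/24"))  # True -- a reasonably safe choice
print(is_oddball("192.168.1.0/24"))   # False -- right out
```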

root@lighthouse:/opt/nebula# ./nebula-cert sign -name "lighthouse" -ip "192.168.98.1/24"
root@lighthouse:/opt/nebula# ./nebula-cert sign -name "banshee" -ip "192.168.98.2/24"
root@lighthouse:/opt/nebula# ./nebula-cert sign -name "locutus" -ip "192.168.98.3/24"

Now that you've generated all your keys, consider getting them the heck out of your lighthouse, for security. You need the ca.key file only when actually signing new keys, not to run Nebula itself. Ideally, you should move ca.key out of your working directory entirely to a safe place—maybe even a safe place that isn't connected to Nebula at all—and only restore it temporarily if and as you need it. Also note that the lighthouse itself doesn't need to be the machine that runs nebula-cert—if you're feeling paranoid, it's even better practice to do CA stuff from a completely separate box and just copy the keys and certs out as you create them.

Each Nebula node does need a copy of ca.crt, the CA certificate. It also needs its own .key and .crt, matching the name you gave it above. You don't need any other node's key or certificate, though—the nodes can exchange them dynamically as needed—and for security best practice, you really shouldn't keep all the .key and .crt files in one place. (If you lose one, you can always just generate another that uses the same name and Nebula IP address from your CA later.)

Configuring Nebula with config.yml

Nebula's GitHub repo offers a sample config.yml with pretty much every option under the sun and lots of comments wrapped around them, and we absolutely recommend that anyone interested poke through it to see all the things that can be done. However, if you just want to get things moving, it may be easier to start with a drastically simplified config that has nothing but what you need.

Lines that begin with a hashtag are commented out and not interpreted.

#
# This is Ars Technica's sample Nebula config file.
#
pki:
  # every node needs a copy of the CA certificate,
  # and its own certificate and key, ONLY.
  #
  ca: /opt/nebula/ca.crt
  cert: /opt/nebula/lighthouse.crt
  key: /opt/nebula/lighthouse.key

static_host_map:
  # how to find one or more lighthouse nodes
  # you do NOT need every node to be listed here!
  #
  # format "Nebula IP": ["public IP or hostname:port"]
  #
  "192.168.98.1": ["nebula.arstechnica.com:4242"]

lighthouse:
  interval: 60

  # if you're a lighthouse, say you're a lighthouse
  #
  am_lighthouse: true

  hosts:
    # If you're a lighthouse, this section should be EMPTY
    # or commented out. If you're NOT a lighthouse, list
    # lighthouse nodes here, one per line, in the following
    # format:
    #
    # - "192.168.98.1"

listen:
  # 0.0.0.0 means "all interfaces," which is probably what you want
  #
  host: 0.0.0.0
  port: 4242

# "punchy" basically means "send frequent keepalive packets"
# so that your router won't expire and close your NAT tunnels.
#
punchy: true

# "punch_back" allows the other node to try punching out to you,
# if you're having trouble punching out to it. Useful for stubborn
# networks with symmetric NAT, etc.
#
punch_back: true

tun:
  # sensible defaults. don't monkey with these unless
  # you're CERTAIN you know what you're doing.
  #
  dev: nebula1
  drop_local_broadcast: false
  drop_multicast: false
  tx_queue: 500
  mtu: 1300
  routes:

logging:
  level: info
  format: text

# you NEED this firewall section.
#
# Nebula has its own firewall in addition to anything
# your system has in place, and it's all default deny.
#
# So if you don't specify some rules here, you'll drop
# all traffic, and curse and wonder why you can't ping
# one node from another.
#
firewall:
  conntrack:
    tcp_timeout: 120h
    udp_timeout: 3m
    default_timeout: 10m
    max_connections: 100000

  # since everything is default deny, all rules you
  # actually SPECIFY here are allow rules.
  #
  outbound:
    - port: any
      proto: any
      host: any

  inbound:
    - port: any
      proto: any
      host: any

Warning: our CMS is mangling some of the whitespace in this code, so don't try to copy and paste it directly. Instead, get working, guaranteed-whitespace-proper copies from GitHub: config.lighthouse.yaml and config.node.yaml.

There isn't much difference between lighthouse and normal node configs. If the node isn't to be a lighthouse, just set am_lighthouse to false, and uncomment (remove the leading hashtag from) the line # - "192.168.98.1", which points the node at the lighthouse it should report to.

Note that the lighthouse: hosts list uses the Nebula IP of the lighthouse node, not its real-world public IP! The only place real-world IP addresses should show up is in the static_host_map section.
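Putting that together, here's roughly what changes in a node's config relative to the lighthouse's, using the banshee certificate from earlier. This is a fragment, not a complete file; grab the full samples from GitHub for everything else:

```yaml
# Node ("banshee") settings that differ from the lighthouse config.
pki:
  ca: /opt/nebula/ca.crt
  cert: /opt/nebula/banshee.crt
  key: /opt/nebula/banshee.key

lighthouse:
  am_lighthouse: false
  hosts:
    # Nebula IP of the lighthouse -- NOT its public address
    - "192.168.98.1"

static_host_map:
  # the only place a real-world address appears
  "192.168.98.1": ["nebula.arstechnica.com:4242"]
```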

Starting nebula on each node

I hope you Windows and Mac types weren't expecting some sort of GUI—or an applet in the dock or system tray, or a preconfigured service or daemon—because you're not getting one. Grab a terminal—a command prompt run as Administrator, for you Windows folks—and run nebula against its config file. Minimize the terminal/command prompt window after you run it.

root@lighthouse:/opt/nebula# ./nebula -config ./config.yml

That's all you get. If you left the logging set at info the way we have it in our sample config files, you'll see a bit of informational stuff scroll up as your nodes come online and begin figuring out how to contact one another.

If you're a Linux or Mac user, you might also consider using the screen utility to hide nebula away from your normal console or terminal (and keep it from closing when that session terminates).

Figuring out how to get Nebula to start automatically is, unfortunately, an exercise we'll need to leave for the user—it's different from distro to distro on Linux (mostly depending on whether you're using systemd or init). Advanced Windows users should look into running Nebula as a custom service, and Mac folks should call Senior Technology Editor Lee Hutchinson on his home phone and ask him for help directly.
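That said, for the systemd crowd, a minimal unit file along these lines should do the job. The paths assume the /opt/nebula layout we used above, and the unit is a sketch rather than anything official:

```ini
# /etc/systemd/system/nebula.service -- minimal example unit
[Unit]
Description=Nebula mesh VPN
Wants=network-online.target
After=network-online.target

[Service]
ExecStart=/opt/nebula/nebula -config /opt/nebula/config.yml
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
```

Save it to that path, then run systemctl enable --now nebula to start it immediately and at every boot.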

Conclusion

Nebula is a pretty cool project. We love that it's open source, that it uses the Noise platform for crypto, that it's available on all three major desktop platforms, and that it's easy...ish to set up and use.

With that said, Nebula in its current form is really not for people afraid to get their hands dirty on the command line—not just once, but always. We have a feeling that some real UI and service scaffolding will show up eventually—but until it does, as compelling as it is, it's not ready for "normal users."

Right now, Nebula's probably best used by sysadmins and hobbyists who are determined to take advantage of its dynamic routing and don't mind the extremely visible nuts and bolts and lack of anything even faintly like a friendly interface. We definitely don't recommend it in its current form to "normal users"—whether that means yourself or somebody you need to support.


Unless you really, really need that dynamic point-to-point routing, a more conventional VPN like WireGuard is almost certainly a better bet for the moment.

The Good

  • Free and open source software, released under the MIT license
  • Cross platform—looks and operates exactly the same on Windows, Mac, and Linux
  • Reasonably fast—our Ryzen 7 3700X managed 1.7Gbps from itself to one of its own VMs across Nebula
  • Point-to-point tunneling means near-zero bandwidth needed at lighthouses
  • Dynamic routing opens interesting possibilities for portable systems
  • Simple, accessible logging makes Nebula troubleshooting a bit easier than WireGuard troubleshooting

The Bad

  • No Android or iOS support yet
  • No service/daemon wrapper included
  • No UI, launcher, applet, etc

The Ugly

  • Did we mention the complete lack of scaffolding? Please don't ask non-technical people to use this yet
  • The Windows port requires the OpenVPN project's tap-windows6 driver—which is, unfortunately, notoriously buggy and cantankerous
  • "Reasonably fast" is relative—most PCs should saturate gigabit links easily enough, but WireGuard is at least twice as fast as Nebula on Linux

Promoted Comments

  • rhuber Smack-Fu Master, in training

    pokrface wrote:


    It seems like an init script would be dead-simple. There's just not much to this.

    Actually, Jim, that raises a question—can the nebula process run under a non-root context, like a proper service should?

    We have a reasonably good solution coming in the 1.1 milestone (and already merged into master).

    https://github.com/slackhq/nebula/pull/3

    This integrates the service mode bits from github.com/kardianos/service to make nebula a self installing service on every platform. By default we still prefer init scripts for Linux, but service mode makes deploying this on Windows and MacOS much easier. For instance, by running `./nebula -service install` on Windows, nebula will be added to the real system service manager and started automatically on boot. On MacOS, this just automatically creates the launchd bits.

    If you want to try it today, just git clone the latest master from

    https://github.com/slackhq/nebula

    and use the command `make service [platform]` to build it. For Windows, this would be `make service bin-windows`.

    In 1.1 we will likely make "service mode" the default in release binaries for Windows/MacOS, but not Linux.

  • UserIDAlreadyInUse Ars Tribunus Militum et Subscriptor

    For those wanting to try it out, Google's Always Free VM option is a good choice; no cost, *just* enough horsepower to install and run Nebula, public, static IP address.

  • rhuber Smack-Fu Master, in training et Subscriptor

    Jim Salter wrote:


    Surrrrvey says:


    𝘽𝙕𝙕𝙕𝙕𝙕𝙏.

    Just a quick note, nebula doesn't inherently require being run as root, as long as you allow it to do network-things. This is easily done with setcap with the CAP_NET_ADMIN capability, i.e. you can do:

    sudo setcap cap_net_admin+ep /usr/local/bin/nebula

    ...and then you can run it as any user you like. Just make sure the nebula configuration and creds are readable by that user.


