
NixOS Minecraft Server on AWS

2024, March 4


Preface

Part of the series:

Making a MineCraft server with NixOS on EC2

  • Done from an M1 MacBook (macOS); it should work equally well on Linux. On Linux you might need to adjust the docker command to use the SSH agent inside the container, but other than that it should be very much the same.
  • We will make the server turn off automatically when there are no players online in order to save some pennies.
  • For this exercise we will be using a t4g.medium EC2 instance in AWS. EC2 instances are billed for each hour that they are ON. The t4g.medium is a server instance with 4GiB RAM; the t4g.small, with 2GiB RAM, is also capable of running the Minecraft server. (There is a free trial for t4g.small until Dec 31st 2024.) The t4g "family name" indicates that these instances run on AWS's own "Graviton" chips. This matters to us because Graviton has an arm64 architecture, which is important to take into consideration at build time: when we use the nix-build command to create the nix store.

Prerequisites

  • You installed Docker.
  • You installed Nix.
  • You installed direnv.
  • You have an AWS account with which to pay for the server instance.
  • You have a Microsoft account and purchased Minecraft Java Edition (needed to play).
  • Friends to play with who also own a copy of Minecraft Java Edition.
  • Some knowledge of Docker, Nix and the ability to SSH into a server.

Step 0: Clone the repo

All the code is found in the repo. I will be explaining some of the code throughout the article, but do refer to the repo for the complete solution; I won't cover every line of code here.

Feel free to fork the repo to your own account and change the code as you see fit as you follow along with the article. Keep your repo private if you add secrets to the source files.

Throughout the article I mention that some commands will be explained later; those explanations come at the end of the article, after step #7. Feel free to scroll down to the explanation whenever you deem it relevant.

Step 1: Build a Docker image with the base image for our NixOS EC2

We start from ec2-base.nix, a simplified configuration that lets us build the base EC2 nix store and keep it inside a Docker image.

We will also use the Dockerfile.

We can also create a makefile for convenience.

One important detail is that we are using --build-arg arch="arm64" in the docker build command (found in the makefile) to determine which architecture we are building for. We use this argument in the Dockerfile here:

ARG arch
FROM nixos/nix:latest-${arch}
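
For reference, the docker build part of the makefile might look roughly like this (a sketch; the build-image target name is an assumption, while the nixos/builder:arm64 tag matches the image we run later):

build-image:
	docker build --build-arg arch="arm64" -t nixos/builder:arm64 .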

We are using arm64 because that is the architecture of our EC2 instance. Basically, your Docker container must have the same architecture as the target system: when we build nix packages, the host system doing the building must have the same architecture as the target system. In this case our host system is the Docker container and the target system is the EC2 instance. Of course, we must also build on the same OS. This is why I include RUN bash $(nix-build '<nixpkgs>' -A gnu-config)/config.guess > gnu-config in the Dockerfile as a small debug helper: it detects the current system identifier and saves it to a file, so if I ever doubt which system I am building on, I can just check the file. You can also run this on your home system; try it in your shell with one of these: $(nix-build '<nixpkgs>' -A gnu-config)/config.guess or echo "$(uname -m)-$(uname -s)".
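
For example, inside the arm64 builder container those checks should print something like this (the outputs are illustrative):

$ echo "$(uname -m)-$(uname -s)"
aarch64-Linux
$ bash $(nix-build '<nixpkgs>' -A gnu-config)/config.guess
aarch64-unknown-linux-gnu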

Building NixOS on the same kind of host platform as the target is an important requirement, and it is the reason Docker is so useful here. If your local machine already has the same platform as your server you could skip Docker altogether, but many of us either enjoy other platforms and still want the benefits of NixOS on a server, or run NixOS on both machines but on chips with different architectures.

Side-note: I found the config.guess cmd here: https://nix.dev/tutorials/cross-compilation.html#determining-the-host-platform-config

Side-note: Nixpkgs includes support for cross-compilation (different host and target platform) of certain packages, but this is a very specialized operation that only works well for packages that have been given that capability.

Building the simplified Nix configuration (ec2-base.nix) is not strictly necessary; it is just a small convenience that lets us reuse the base NixOS across different Docker containers, saving a little time if we were to create more than one server. For this particular tutorial it won't be that useful since we are not going to reuse the image for different servers, but let's do it now in case it is useful in the future.

The Docker image will contain the result of building ec2-base in the /nix store.

Step 2: Create an EC2 instance

2) A: Create a keypair

Begin by creating an AWS keypair at https://us-east-1.console.aws.amazon.com/ec2/home?region=us-east-1#KeyPairs: (replace us-east-1 if you prefer another region). I will call mine MCServer. This will download the private key as a .pem file. Store the private key securely, as you will need it to SSH into your server. It is common to place the key in ~/.ssh/<name> and then update the config in ~/.ssh/config. Other secure places to store private keys include password managers; some of them, such as KeePassXC, integrate with the SSH agent, which makes it awfully convenient. Do a little research if you feel you need to know more about SSH keypair authentication.
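
As a small sketch of that setup (the MCServer.pem filename is just my naming; adjust it to yours):

$ chmod 400 ~/.ssh/MCServer.pem
$ ssh-add ~/.ssh/MCServer.pem

Adding the key to the SSH agent also matters later, because the Docker container will reach your key through the forwarded agent socket.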

If you added your keypair to the ssh agent correctly then you should be able to look at the public key with this command:

ssh-add -L | grep "keypair_name"

2) B: Pick an AMI

EC2 instances are created from a base image called an "AMI".

You may (or may not) find the Amazon AMI image on the official page: https://nixos.org/download#nixos-amazon

Currently the AMI image is missing for NixOS 23.11, so instead we will use the CLI as suggested on the official page and pick an older version's AMI. We will use the previously created keypair; you will also need to get an AWS_ACCESS_KEY_ID (and its secret) from the AWS console (top right submenu, find "Security credentials").

For the available AMI IDs, check out: https://github.com/NixOS/nixpkgs/blob/master/nixos/modules/virtualisation/amazon-ec2-amis.nix

Pick the latest NixOS version in your region. We will go with aarch64 because it lets us use t4g instances built on the AWS Graviton chip; their pricing is better than the t2 instances that would be required for the x86 architecture. To compare pricing, go to the AWS Calculator.

In this case I am picking ami-0a061ca437b63df33

2) C: Create the EC2 instance

We will be using a generous 12 GiB of EBS disk space for wiggle room, and a t4g.medium instance type. The t4g.small instance type can run the minecraft-server just fine; if you choose the small, adjust the nix config of the minecraft-server (shown further below) to use jvmOpts = "-Xms512M -Xmx1536M";.

with nix-shell

You can create the EC2 instance with nix-shell like this, but you are better off installing AWS CLI because we will use it later to find out the IP of the server.

NIXPKGS_ALLOW_UNFREE=1 nix-shell -p ec2_api_tools

[nix-shell]$ ec2-run-instances  --region us-east-1 -k MCServer -O <your_access_key> -W <your_secret> --instance-type t4g.medium ami-0a061ca437b63df33 --block-device-mappings 'DeviceName=/dev/xvda,Ebs={VolumeSize=12}'

with aws cli

Download the AWS CLI, then add your access key and secret to the file ~/.aws/credentials:

[default]
region = us-east-1
aws_access_key_id = <your_access_key>
aws_secret_access_key = <your_access_secret>

and then use the CLI:

aws ec2 run-instances --region us-east-1 --key-name MCServer --instance-type t4g.medium --image-id ami-0a061ca437b63df33 --block-device-mappings 'DeviceName=/dev/xvda,Ebs={VolumeSize=12}'

Okay cool! So if everything worked you should be able to see your instance in your EC2 dashboard, somewhere like https://us-east-1.console.aws.amazon.com/ec2

At first the instance will be in the "initializing" state and we won't be able to SSH into it until it is ready, so wait for a bit; maybe give the instance a name in your EC2 dashboard for future clarity. Take note of the public IP that was auto-assigned to the instance.
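
If you prefer the terminal, you can also poll the instance state with the AWS CLI (the instance ID below is a placeholder; use the one from the run-instances output):

aws ec2 describe-instance-status --instance-ids i-0123456789abcdef0 --include-all-instances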

2) D: SSH into the server

Once it is ready... let's SSH! For that we need to find the instance's IP: simply click on the instance in the dashboard and take note of it. Remember, the IP of an EC2 instance is deallocated whenever it is turned off; once you turn it on again a new IP will be allocated, different from the previous one. For now we will accept that the IP changes whenever we turn off the instance. One alternative is to use an Elastic IP from AWS to keep the same IP across shutdowns, but this incurs a cost for retaining the IP while the instance is shut down.

Oh, but first, let's update the AWS Security Group of our instance to allow SSH connections. Click on the instance, go to the Security tab, and then click the link below "Security groups" (probably named "default"). Then click "Edit inbound rules" and add a new allowed inbound rule: select the SSH type, which corresponds to port 22, and add 0.0.0.0/0 as the CIDR block. If you have a static IP you can be safer and allow only your own IP to connect.

Okay, now we are ready, let's do it!

$ ssh root@<your.server.IP>

The authenticity of host '...' can't be established.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes

Your system will show you a fingerprint of your server; accept with yes and voilà, we are in!

$ [root@ip-...:~]#

Now that your SSH connection has been successful, you can close it.

Making the SSH command more convenient

We don't want to type or remember the IP address of our server every time, so we will use direnv. Look over at .envrc at the root of the project: it is used for environment variables, and in this case we want it to store our IP address.

Typically we would add the following to the .envrc file:

export MC_IP=ip.address.here.0

But the .envrc looks a little different, instead it has these contents:

export MC_INSTANCE_ID=<YOUR_INSTANCE_ID>
export MC_IP=`aws ec2 describe-instances --instance-ids ${MC_INSTANCE_ID} | jq -r '.Reservations[].Instances[].PublicIpAddress'`

In this case <YOUR_INSTANCE_ID> is the ID that AWS assigned to your instance; you can find it in your AWS Console. This is a permanent identifier. The difference between this and the IP address is that AWS assigns a different IPv4 address to your instance every time you turn it on after shutting it down. Because we are trying to save pennies by shutting it down whenever it is unused, we will always get a different IP address, and adding it manually to our .envrc would require us to constantly change the value. Instead we use the instance ID, and then through the AWS CLI and some jq we grab the currently assigned IPv4 address while the instance is turned on.

After adding your instance_id, or whenever you change it, just run direnv allow to update the variables in your current shell.
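
A quick sanity check after that (this assumes the AWS CLI and jq are installed locally, since .envrc calls both):

$ direnv allow
$ echo $MC_IP
# prints the currently assigned public IPv4 while the instance is running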

This is a good way of keeping secret ENV vars outside of your git repo, as well as a convenient way of storing them; you can add .envrc to your .gitignore. You can now SSH without typing the whole IP: ssh root@$MC_IP. Even better, I added a little convenience to the makefile, so you just run make user or make root and it lets you SSH. Note: you can't use make user yet because we have not created the user on our server; right now only root is available.

Side-note: the make root and make user commands include -o StrictHostKeyChecking=accept-new to avoid the fingerprint prompt, as we would get it whenever the IP changes, which happens whenever we turn on the EC2 instance, and we hope to do that quite often because we want it to turn off automatically.
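
A sketch of what those two makefile targets might contain (jose is the example user we create in step 3; adjust it to your own):

root:
	ssh -o StrictHostKeyChecking=accept-new root@$(MC_IP)

user:
	ssh -o StrictHostKeyChecking=accept-new jose@$(MC_IP)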

Step 3: Configure your NixOS

Time for the declarative configuration.

First we want to disable SSH password authentication for better protection. We already deployed our machine without disabling PasswordAuthentication, so we have to be careful. Because we only have the root user so far, and the default settings already disable password auth for root via PermitRootLogin prohibit-password, we are OK for now, but we must disable password auth before creating new users. We can look at the default settings if we SSH as root:

[root@ip...:~]# cat /etc/ssh/sshd_config
...
PasswordAuthentication yes
PermitRootLogin prohibit-password

So let's remove password auth for SSH, and let's add fail2ban:

  services.openssh.settings.PasswordAuthentication = false;
  services.fail2ban.enable = true;

We want to add a user so we don't log in as root whenever we SSH. We add the public key from our keypair: in the Docker container we save this public key to the file /public.key before performing the Nix build, so that we can tell the configuration where to find it. The /public.key file is written by one of our scripts in ./files/scripts when we are inside the Docker container. You will adjust the docker run command to include the name of your keypair. For now the mc-server/configuration.nix will look something like this with the new user:

  security.sudo.wheelNeedsPassword = false;
  
  # I am calling my user "jose", replace it with the name that you 
  # prefer. Also replace the user in the makefile for `make user`.
  
  users.users.jose = {
    isNormalUser = true;
    extraGroups = [ "wheel" ];
    openssh.authorizedKeys.keyFiles = [ /public.key ];
  };

and some other convenient packages:

  environment.systemPackages = [ pkgs.vim pkgs.htop pkgs.netcat pkgs.cloud-utils ];

# cloud-utils to have `growpart` in our toolset, 
# in case we need to increase EBS storage.
# netcat can be useful for testing TCP connections.

  services.journald.extraConfig = ''
    SystemMaxUse=300M
  '';

Okay cool, we are not done yet but we can check that this configuration builds without errors.

Let's run the docker container. This is the whole command; we'll explain it later, but for now just know that you have to replace --env KEYNAME="MCServer" with the actual name of your keypair:

	docker run \
		--rm \
		--name mc-builder \
		--mount source=mcvol,target=/nix \
		-v ./scripts/.:/files/scripts/. \
		-v ./mc-server/.:/files/nix-files/. \
		--workdir /files \
		--env SSH_AUTH_SOCK="/run/host-services/ssh-auth.sock" \
		--env SERVER="${MC_IP}" \
		--env KEYNAME="MCServer" \
		-v /run/host-services/ssh-auth.sock:/run/host-services/ssh-auth.sock \
		-it \
		nixos/builder:arm64 bash

For convenience we put it in a makefile at the root of the project, so we just do this:

$ make mc-builder

This should give us a shell inside the docker container at the /files directory which you can confirm by running pwd once you are in the container.

To test the build just run this script from the /files directory:

$ ./scripts/build.sh

Your nix store will build the new derivations defined in the nix configuration. The nix commands can be found inside the ./scripts/build.sh file; we will explain it later, but do take a look at the file to get an idea of what we are doing. If it finishes without error then the nix build was successful. Additionally, for convenience, make mc-build runs the docker container and executes ./scripts/build.sh automatically.

We could deploy this build. If you are making your own configuration.nix from scratch, at this point you would have a nix store built with only the source code inside configuration.nix. Doing small builds to check that your nix files are correct is a good way of solving any errors in small steps. We can build without deploying it to the server, and once we are satisfied with our configuration and we confirmed that it built correctly, then we can move on to deployment.

So let's say we confirmed the simplified nix build was correct. Now we can exit the docker container and go back to our configuration.nix, where we will add the minecraft-server nixpkg. We can confirm that this is the name of the package at https://search.nixos.org/packages, and we can look at the available options at https://search.nixos.org/options.

  services.minecraft-server = {
    enable = true;
    eula = true;
    openFirewall = true;
    # Commented out because we are not using these
    # but they could come in handy eventually:
    # declarative = true;
    # white-list = {};
    serverProperties = {
      server-port = 25565;
      difficulty = 3;
      gamemode = 1;
      max-players = 5;
      motd = "NixOS Minecraft server!";
      # white-list = true;
    };
    jvmOpts = "-Xms1024M -Xmx2048M";
  };

Then open the port in the AWS console security group, the same way you did for SSH: add an inbound rule for TCP on port 25565, which we are using for the server.
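
If you prefer the terminal, the same rule can be added with the AWS CLI (the security group ID is a placeholder; look it up in the instance's Security tab):

aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 25565 --cidr 0.0.0.0/0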

Step 4: Build and Deploy your NixOS

$ make mc-builder
# inside docker:
$ ./scripts/build.sh
$ ./scripts/copy-closure.sh
$ ./scripts/switch-closure.sh

# or, use `scripts/push.sh` to do all of the previous three in a single step

After doing these steps, SSH into your server and check that the service is active: sudo systemctl status minecraft-server

For system logs: journalctl -u minecraft-server --no-pager

Side-note: Up to this point we have not talked about the watcher program that will shut down your server when there are zero active connections. If you are following this guide with the entirety of the source code, your server will deploy with the automatic shut-down service; I tell you this so you are not shocked when your server shuts down on its own. On the other hand, if you are writing your files from scratch, you have not included the watcher program yet, and your server will remain online indefinitely, which adds to your AWS bill.

Step 5: Connect to your minecraft server from your local minecraft game

Now let's try to connect from our Minecraft client :)

Open Minecraft and join by using the server's public IPv4. It should work fine; I had to attempt joining twice before it connected. You can share the IPv4 with other people and they will be able to connect at the same time if you wish.
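
If it refuses to connect, a quick way to check from your machine that the port is reachable is netcat (we added it to the server's packages earlier, but it also ships with most desktop systems):

$ nc -vz $MC_IP 25565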

Step 6: Turn off EC2 automatically

Sweet! You could take a break and start playing Minecraft if you wish. However, we want to be mindful of our wallets and turn off the server when it is not in use. We also don't want to rely on the players taking action to turn off the server, as this would be a burden and easy to forget.

The basic idea is the following:

  • We build a watcher program that monitors active connections; if we haven't seen an active connection for 15 minutes, it shuts down the server. Those 15 minutes are defined in the watcher app; feel free to adjust the Rust code if you want a different setting.
  • /mc-server/watcher includes the nix files to build the watcher program
  • /mc-server/watcher/configuration.nix installs our custom package and sets the systemd service that runs our watcher on system startup.
  • /mc-server/watcher/cargo/ contains our watcher app code, a small Rust app.
  • /mc-server/watcher/default.nix is the nix derivation that builds our watcher app, it uses buildRustPackage
  • The secret-sauce command used by the Rust app to monitor active connections is this one: netstat -atuen | grep "25565.*ESTABLISHED"; grep succeeds only if it matches ESTABLISHED connections to the minecraft-server port. The source code can be found in cargo/src/main.rs, and a rough shell equivalent is sketched right after this list.
  • That's it!
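
The core loop of the watcher is roughly equivalent to this shell sketch (the real implementation is the Rust app in cargo/src/main.rs; the 60-second poll interval here is an assumption, only the 15-minute timeout comes from the app):

#!/usr/bin/env bash
# rough shell equivalent of the mc-watcher loop
idle=0
while true; do
  if netstat -atuen | grep -q "25565.*ESTABLISHED"; then
    idle=0                  # someone is connected, reset the timer
  else
    idle=$((idle + 60))     # no connections seen on this round
  fi
  if [ "$idle" -ge 900 ]; then
    sudo shutdown now -h    # 15 minutes without players, power off
  fi
  sleep 60
done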

To deploy, just do the same as we did earlier. You can look at the status of the watcher app on the server by SSHing in and checking sudo systemctl status mc-watcher. The logs can be found in journalctl.

Now you can turn on your server and not worry about shutting it down.

watcher/configuration.nix looks like this:

{ lib, pkgs, ... }:
let 
 watcherPkg = pkgs.callPackage ./default.nix {};
in
{
  # adding the derivation to systemPackages makes it available to us
  environment.systemPackages = [ watcherPkg ];
  users.users.mc-watcher = {
    isSystemUser = true;
    extraGroups = [ "wheel" ];
    group = "mc-watcher";
  };
  users.groups.mc-watcher = {};

  systemd.services."mc-watcher" = {
    wantedBy = ["multi-user.target"];
    description = "watcher for Minecraft-server activity";
    serviceConfig = {
      Type = "simple";
      User = "mc-watcher";
      ExecStart = "${lib.getExe watcherPkg}";
    };
    path = [ "/run/wrappers" "/run/current-system/sw"];
  };
}

A few comments:

  • To run the shutdown command and turn off the server, the service needs to run sudo shutdown now -h; for this reason we give sudo privileges to the service user with extraGroups = [ "wheel" ]; (and security.sudo.wheelNeedsPassword = false; makes that sudo passwordless).

  • To start the service on system startup, we add: wantedBy = ["multi-user.target"];.

  • To tell the systemd service the startup command, we use ExecStart = "${lib.getExe watcherPkg}"; to give the path to the binary built by the watcherPkg derivation.

  • To configure the $PATH variable of the service such that it finds the commands sudo, netstat and shutdown, we pass in the paths where those binaries can be found like this: path = [ "/run/wrappers" "/run/current-system/sw"];.

  • For a detailed study of Nix derivations, going through the Nix Pills is a very thorough explanation although a bit low level at times. Chapter 7 was one of the most useful to me.

Step 7: Find some friends that like Minecraft

Everything is ready for you to turn on the server whenever you or your friends wish to play. You will have to communicate the new IP to your friends each time you turn on the server. Remember to find a way to voice-call with the other players.

Explaining some of the commands

# build.sh
# we add our server to known_hosts to avoid the fingerprint message when we send our
# build to the server through SSH
echo `ssh-keyscan -t rsa $SERVER` > ~/.ssh/known_hosts
# and the public key is attached to the nix build, so that we can login into the user
# account with the same keypair as we do to the root account
echo `ssh-add -L  | grep $KEYNAME` > /public.key
# finally we build the config and store the path to the derivation in a file,
# this path is used later to know where to copy the nix build
nix-build --show-trace \
  ./nix-files/server.nix >> /store.path

build.sh builds the nix store inside the docker container. We use ssh-keyscan to avoid the fingerprint prompt, and we use ssh-add -L so we can later include the public key on our NixOS user with openssh.authorizedKeys.keyFiles = [ /public.key ];. Finally, the build command gives us a path in the nix store where the build can be found; we save this path to /store.path to use with the later commands.

# copy-closure.sh
STORE_PATH=$(cat /store.path) \
  && nix-copy-closure --to --use-substitutes ${SERVER} \
  $(ls $STORE_PATH | xargs -i -n1 echo ${STORE_PATH}/{})

In copy-closure.sh we use /store.path to tell Nix which build to copy to the server. nix-copy-closure then uses SSH to copy the files over.

# switch-closure.sh
# activate the nixos
STORE_PATH=$(cat /store.path) \
  SERVER_PROFILE="/nix/var/nix/profiles/system" \
  && ssh root@$SERVER \
    "nix-env --profile $SERVER_PROFILE --set $STORE_PATH;" \
    "$SERVER_PROFILE/bin/switch-to-configuration switch"

Finally, switch-closure.sh uses SSH to point the server's system profile at the nix build we copied over and activates it with switch-to-configuration switch.

mc-builder:
	docker run \
		--rm \
		--name mc-builder \
		--mount source=mcvol,target=/nix \
		-v ./scripts/.:/files/scripts/. \
		-v ./mc-server/.:/files/nix-files/. \
		--workdir /files \
		--env SSH_AUTH_SOCK="/run/host-services/ssh-auth.sock" \
		--env SERVER="${MC_IP}" \
		--env KEYNAME="MCServer" \
		-v /run/host-services/ssh-auth.sock:/run/host-services/ssh-auth.sock \
		-it \
		nixos/builder:arm64 bash

The docker run command has my favorite tricks.

  • --rm deletes the container after we exit; we don't want to keep the container and its contents around, we want a fresh one every time.
  • --mount source=mcvol,target=/nix is the best part about this process because it lets us persist the nix store across docker containers. This means that Nix will re-use previous builds if nothing changes and we perform a new build. Nix is smart and it re-builds only the newest changes to our configuration while keeping anything that stayed the same. This is extremely beneficial as time goes on and our nix configuration grows with more and more changes.
  • -v ./scripts/.:/files/scripts/. -v ./mc-server/.:/files/nix-files/. mounts our files into the container; this is convenient because we can update our source files and the changes will be reflected whenever we run a new container.
  • --env SSH_AUTH_SOCK="/run/host-services/ssh-auth.sock" -v /run/host-services/ssh-auth.sock:/run/host-services/ssh-auth.sock lets us use the host's SSH agent inside the docker container (this socket path is the one Docker Desktop for Mac exposes). If you are not on a Mac, make sure that your SSH agent is working inside the Docker container (check with ssh-add -L; you should see your public keys).
  • -it nixos/builder:arm64 bash tells docker to use the image we created at the start and gives us an interactive bash shell inside the container.

Future ideas

  • I might make a second post showing how to build a Discord bot that other people can use to turn on the server, fully automating the process of playing with others.
  • Find other programs that you would like to self-host and manage on your own Nix server! Some sweet open source apps you could host include Syncthing, Miniflux, Tailscale, and many more; just search for ideas. If you host your own apps it is also a great opportunity to learn how to configure NGINX with NixOS, just make sure to include encryption with Let's Encrypt and certbot.
  • Figuring out how to use IPv6 for the server instead of IPv4, as AWS started charging for IPv4 addresses as of February 1, 2024.

Useful References

Useful commands

  • To check the disk usage on the server: du -h --max-depth=1 /
  • Research garbage collection of the Nix store to keep disk usage down if you install more programs; see the command below.
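
A reasonable starting point for that is nix-collect-garbage (run it on the server; --delete-older-than 30d removes profile generations older than 30 days and then garbage-collects store paths that are no longer referenced):

$ nix-collect-garbage --delete-older-than 30d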
