tldr: I’d like to set up a reverse proxy with a domain and an SSL cert so my partner and I can access a few self-hosted services on the internet, but I’m not sure what the best/safest way to do it is. Asking my partner to use Tailscale or WireGuard is asking too much, unfortunately. I was curious to know what you all recommend.

I have some services running on my LAN that I currently access via Tailscale. Some of these services would benefit from being accessible on the internet (e.g. Immich sharing via a link, switching from Plex to Jellyfin without requiring my family to learn how to use a VPN, Home Assistant voice stuff, etc.), but I’m unsure what the best approach is. Hosting services on the internet carries risk, and I’d like to reduce that risk as much as possible.

  1. I know a reverse proxy would be beneficial here so I can put all the services on one box and access them via subdomains, but where should I host that proxy? On my LAN using a dynamic DNS service? In the cloud? If in the cloud, should I avoid a plan where you share CPU resources with other users and get a dedicated box instead?

  2. Should I purchase a memorable domain or a domain with a random string of characters so no one could reasonably guess it? Does it matter?

  3. What’s the best way to geo-restrict access? Fail2ban? Realistically, the only people that I might give access to live within a couple hundred miles of me.

  4. Any other tips or info you care to share would be greatly appreciated.

  5. Feel free to talk me out of it as well.

EDIT:

If anyone comes across this and is interested, this is what I ended up going with. It took an evening to set all this up and was surprisingly easy.

  • Domain from Namecheap
  • Cloudflare to handle DNS
  • Nginx Proxy Manager for the reverse proxy (seemed easier than Traefik, and I didn’t get around to looking at Caddy)
  • cloudflare-ddns Docker container to update my A records in Cloudflare
  • Authentik for two-factor authentication on my Immich server
    • a_fancy_kiwi@lemmy.world (OP) · 4 months ago

      I currently have an nginx Docker container and a certbot Docker container that I have working but don’t have in production. No extra features, just a barebones reverse proxy with an SSL cert. Knowing that, I read through Caddy’s homepage, but since I’ve never put an internet-facing service into production, it’s not obvious to me what features I need or what I’m missing out on. Do you mind sharing which quality-of-life improvements you get from Caddy?

  • rimjob_rainer@discuss.tchncs.de · 3 months ago

    Why is it too much to ask your partner to use WireGuard? I installed WireGuard on my wife’s iPhone; she can access everything on our home network as if she were at home, and she doesn’t even know she’s using a VPN.

    • a_fancy_kiwi@lemmy.world (OP) · 3 months ago

      A few reasons:

      1. My partner has plenty of hobbies, but sysadmin isn’t one of them. I know I’ll show them how to turn off WireGuard to troubleshoot why “the internet isn’t working”, but eventually they would forget. Shit happens; sometimes servers go down, and sometimes turning off WireGuard is what gets the internet working again lol
      2. I’m a worrier. If there were an emergency and my partner needed to access the internet but couldn’t because my DNS server went down, my WireGuard server went down, my ISP shit the bed, our home power went out, etc., and they forgot about the VPN, I’d feel terrible.
      3. I was a little too ambitious when I first got into self-hosting. I set up services and shared them before I was ready and ended up resetting them constantly for various reasons. For example, my Plex server is on its 12th iteration. My partner is understandably wary of trying stuff I’ve set up. I’m at a point where I don’t introduce them to a service unless accessing it is no different from using an app (like the Home Assistant app) or visiting a website. That intermediary step of ensuring the VPN is on and functional before accessing the service is more than I’d prefer to ask of them.

      Telling my partner to visit a website seems easy; they visit websites every day. But they don’t use a VPN every day, and they don’t care to.

  • lorentz@feddit.it · 4 months ago

    If security is one of your concerns, search for “HTTP client-side certificates”. TL;DR: you can create certificates to authenticate the client and configure the server to allow connections only from trusted devices. It adds extra security because attackers can’t leverage known vulnerabilities in the services you host; requests without a valid client certificate are blocked at the HTTP level before reaching them.

    It is a little difficult to find good, up-to-date documentation, but I managed to make it work with nginx. The downside is that Firefox mobile doesn’t support them, but Firefox on desktop and Chrome have no issues.

    Of course you also want a server-side certificate; the easiest way is to get one from Let’s Encrypt.
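
    A rough nginx sketch of what that looks like, in case it helps. Everything here is an assumption rather than something from the comment above: the CA used to sign client certificates lives at /etc/nginx/client-ca.crt, the server certificate comes from certbot for example.com, and the backend is Immich on 192.168.1.10:2283.

        server {
            listen 443 ssl;
            server_name immich.example.com;   # hypothetical hostname

            # server-side certificate (e.g. from Let's Encrypt / certbot)
            ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
            ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

            # client-side certificates: only devices holding a cert signed by this CA are allowed
            ssl_client_certificate /etc/nginx/client-ca.crt;
            ssl_verify_client on;

            location / {
                proxy_pass http://192.168.1.10:2283;   # hypothetical backend address
                proxy_set_header Host $host;
                proxy_set_header X-Forwarded-For $remote_addr;
            }
        }

    Requests that don’t present a valid client certificate are rejected by nginx before they ever reach the proxied application.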

  • 𝘋𝘪𝘳𝘬@lemmy.ml · 4 months ago

    How do you handle SSL certs and internet access in your setup?

    I have NPM running as a “gateway” between my LAN and the internet and let it handle all of my certificates using the built-in Let’s Encrypt features. None of my hosted applications know anything about certificates inside their Docker containers.

    As for your questions:

    1. You can and should – it makes managing the applications much easier. You should use some containerization. Subdomains and correct routing are handled by the reverse proxy: you basically tell the proxy “when a request for foo.example.com comes in, forward it to myserver.local, port 12345”, where 12345 is the port the container listens on (see the nginx sketch after this list for what that boils down to).
    2. 100% depends on your use case. I purchased a domain because I host stuff for external access, too. My setup just reports its external IP address to my domain provider, so it’s basically a dynamic DNS service but with a “real domain”. If you plan to host just for yourself and your friends, a generic subdomain from a dynamic DNS service would do the trick. (NPM’s Let’s Encrypt configuration works with that, too.)
    3. You can’t. Any geo-restriction can be circumvented. If you want to restrict access, use HTTP basic auth. You can set that up in NPM, too: users authenticate against NPM, and only on success is the request routed to the actual content.
    4. You might want to look into Cloudflare Tunnel to hide your real IP address and protect against DDoS attacks.
    5. No 🙂
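
    To make point 1 (and the basic-auth idea from point 3) concrete, this is roughly the kind of nginx config NPM writes for you behind the scenes; hostnames, ports, and file paths below are made up:

        # foo.example.com -> myserver.local:12345, protected by HTTP basic auth
        server {
            listen 443 ssl;
            server_name foo.example.com;

            ssl_certificate     /etc/letsencrypt/live/foo.example.com/fullchain.pem;
            ssl_certificate_key /etc/letsencrypt/live/foo.example.com/privkey.pem;

            # ask for a username/password before anything is forwarded to the backend
            auth_basic           "restricted";
            auth_basic_user_file /etc/nginx/.htpasswd;   # e.g. created with: htpasswd -c /etc/nginx/.htpasswd someuser

            location / {
                proxy_pass http://myserver.local:12345;
                proxy_set_header Host $host;
                proxy_set_header X-Forwarded-For $remote_addr;
            }
        }

    In NPM this roughly corresponds to a Proxy Host entry plus an Access List; you never have to write the config by hand.
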
    • a_fancy_kiwi@lemmy.world (OP) · 4 months ago

      “NPM” node package manager?

      1. Yeah, I’ve been playing around with Docker and a domain to see how all that works. Got the subdomains working and everything, just don’t have them pointing to services yet.
      3. I’m definitely interested in the authentication part here. Do you have any tutorials you could share?
      4. Will do, thanks
      5. ❤️

  • jimmy90@lemmy.world · 4 months ago

    nixos with nginx services does all proxying and ssl stuff, fail2ban is there as well

    • a_fancy_kiwi@lemmy.world (OP) · 3 months ago

      I know I should learn NixOS, I even tried for a few hours one evening but god damn, the barrier to entry is just a little too high for me at the moment 🫤

      • jimmy90@lemmy.world · 3 months ago

        i guess you were able to install the os ok? are you using proxmox or regular servers?

        i can post an example configuration.nix for the proxy and container servers that might help. i have to admit debugging issues with configurations can be very tricky.

        in terms of security i was always worried about getting hacked. the only protections for that were to make regular backups of data and config so i can restore services, and to create a dmz behind my isp router (a vlan switch and a small router just for my services) to protect the rest of my home network.

        • a_fancy_kiwi@lemmy.world (OP) · 3 months ago

          i guess you were able to install the os ok? are you using proxmox or regular servers?

          I was. It was learning the Nix way of doing things that was taking more time than I had anticipated. I’ll get around to it eventually, though.

          I tried out proxmox years ago but besides the web interface, I didn’t understand why I should use it over Debian or Ubuntu. At the moment, I’m just using Ubuntu and docker containers. In previous setups, I was using KVMs too.

          Correct me if I’m wrong, but don’t you have to reboot every time you change your Nix config? That was what was painful. Once it’s set up the way you want, it seemed great but getting to that point for a beginner was what put me off.

          I would be interested to see the config though

          • jimmy90@lemmy.world · 3 months ago

            you only need to reboot Nix when something low-level has changed. i honestly don’t know where that line is drawn, so i reboot quite a lot when i’m setting up a Nix server and then hardly reboot it at all from then on, even with auto-updates running. oh, and if i make small changes to the services i just run sudo nixos-rebuild switch and don’t reboot.

          • jimmy90@lemmy.world · 3 months ago

            this is my nginx config for my element/matrix services

            as you can see i am running NixOS in a proxmox LXC with an old 23.11 nix channel, but i’m sure the config can be used in other NixOS environments

            
            { pkgs, modulesPath, ... }:
            
            {
              imports = [
                (modulesPath + "/virtualisation/proxmox-lxc.nix")
              ];
            
              security.pki.certificateFiles = [ "/etc/ssl/certs/ca-certificates.crt" ];
            
              system.stateVersion = "23.11";
              system.autoUpgrade.enable = true;
              system.autoUpgrade.allowReboot = true;
            
              nix.gc = {
                automatic = true;
                dates = "weekly";
                options = "--delete-older-than 14d";
              };
            
              networking.firewall.allowedTCPPorts = [ 80 443 ];
            
              services.openssh = {
                enable = true;
                settings.PasswordAuthentication = true;
              };
            
              users.users.XXXXXX = {
                isNormalUser = true;
                home = "/home/XXXXXX";
                extraGroups = [ "wheel" ];
                shell = pkgs.zsh;
              };
            
              programs.zsh.enable = true;
            
              security.acme = {
                acceptTerms = true;
                defaults.email = "[email protected]";
              };
            
              services.nginx = {
                enable = true;
            
                virtualHosts._ = {
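                  # catch-all default vhost: unknown hostnames get a bare 500 instead of reaching any real service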
                  default = true;
                  extraConfig = "return 500; server_tokens off;";
                };
            
                virtualHosts."XXXXXX.dynu.net" = {
                  enableACME = true;
                  addSSL = true;
            
                  locations."/_matrix/federation/v1" = {
                    proxyPass = "http://192.168.10.131:8008/";
                    extraConfig = "client_max_body_size 300M;" +
                      "proxy_set_header X-Forwarded-For $remote_addr;" +
                      "proxy_set_header Host $host;" +
                      "proxy_set_header X-Forwarded-Proto $scheme;";
                  };
            
                  locations."/" = {
                    extraConfig = "return 302 https://element.xxxxxx.dynu.net/;";
                  };
            
                  extraConfig = "proxy_http_version 1.1;";
                };
            
                virtualHosts."matrix.XXXXXX.dynu.net" = {
                  enableACME = true;
                  addSSL = true;
            
                  extraConfig = "proxy_http_version 1.1;";
            
                  locations."/" = {
                    proxyPass = "http://192.168.10.131:8008/";
                    extraConfig = "client_max_body_size 300M;" +
                      "proxy_set_header X-Forwarded-For $remote_addr;" +
                      "proxy_set_header Host $host;" +
                      "proxy_set_header X-Forwarded-Proto $scheme;";
                  };
                };
            
                virtualHosts."element.XXXXXX.dynu.net" = {
                  enableACME = true;
                  addSSL = true;
                  locations."/" = {
                    proxyPass = "http://192.168.10.131:8009/";
                    extraConfig = "proxy_set_header X-Forwarded-For $remote_addr;";
                  };
                };
            
                virtualHosts."call.XXXXXX.dynu.net" = {
                  enableACME = true;
                  addSSL = true;
                  locations."/" = {
                    proxyPass = "http://192.168.10.131:8080/";
                    extraConfig = "proxy_set_header X-Forwarded-For $remote_addr;";
                  };
                };
            
                virtualHosts."livekit.XXXXXX.dynu.net" = {
                  enableACME = true;
                  addSSL = true;
            
                  locations."/wss" = {
                    proxyPass = "http://192.168.10.131:7881/";
            #        proxyWebsockets = true;
                    extraConfig = "proxy_http_version 1.1;" +
                      "proxy_set_header X-Forwarded-For $remote_addr;" +
                      "proxy_set_header Host $host;" +
                      "proxy_set_header Connection \"upgrade\";" +
                      "proxy_set_header Upgrade $http_upgrade;";
                  };
            
                  locations."/" = {
                    proxyPass = "http://192.168.10.131:7880/";
            #        proxyWebsockets = true;
                    extraConfig = "proxy_http_version 1.1;" +
                      "proxy_set_header X-Forwarded-For $remote_addr;" +
                      "proxy_set_header Host $host;" +
                      "proxy_set_header Connection \"upgrade\";" +
                      "proxy_set_header Upgrade $http_upgrade;";
                  };
                };
            
                virtualHosts."livekit-jwt.XXXXXX.dynu.net" = {
                  enableACME = true;
                  addSSL = true;
                  locations."/" = {
                    proxyPass = "http://192.168.10.131:7980/";
                    extraConfig = "proxy_set_header X-Forwarded-For $remote_addr;";
                  };
                };
            
                virtualHosts."turn.XXXXXX.dynu.net" = {
                  enableACME = true;
                  http2 = true;
                  addSSL = true;
                  locations."/" = {
                    proxyPass = "http://192.168.10.131:5349/";
                  };
                };
            
              };
            }
            
          • jimmy90@lemmy.world · 3 months ago

            this is my container config for element/matrix. podman containers do not run as root, so you have to get the file privileges right on the volumes mapped into the containers. i used top to find out which user the services were running as. you can see there are some settings there where you can change the user if you are having permission problems.

            { pkgs, modulesPath, ... }:
            
            {
            
              imports = [
                (modulesPath + "/virtualisation/proxmox-lxc.nix")
              ];
            
              security.pki.certificateFiles = [ "/etc/ssl/certs/ca-certificates.crt" ];
            
              system.stateVersion = "23.11";
              system.autoUpgrade.enable = true;
              system.autoUpgrade.allowReboot = false;
            
              nix.gc = {
                automatic = true;
                dates = "weekly";
                options = "--delete-older-than 14d";
              };
            
              services.openssh = {
                enable = true;
                settings.PasswordAuthentication = true;
              };
            
              users.users.XXXXXX = {
                isNormalUser = true;
                home = "/home/XXXXXX";
                extraGroups = [ "wheel" ];
                shell = pkgs.zsh;
              };
            
              programs.zsh.enable = true;
            
              environment.etc = {
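                # custom fail2ban filter matching failed Synapse login attempts in the JSON log used by the jail below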
                "fail2ban/filter.d/matrix-synapse.local".text = pkgs.lib.mkDefault (pkgs.lib.mkAfter ''
                  [Definition]
                  failregex = .*POST.* - <HOST> - 8008.*\n.*\n.*Got login request.*\n.*Failed password login.*
                              .*POST.* - <HOST> - 8008.*\n.*\n.*Got login request.*\n.*Attempted to login as.*\n.*Invalid username or password.*
                '');
              };
            
              services.fail2ban = {
                enable = true;
                maxretry = 3;
                bantime = "10m";
                bantime-increment = {
                  enable = true;
                  multipliers = "1 2 4 8 16 32 64";
                  maxtime = "168h";
                  overalljails = true;
                };
                jails = {
                  matrix-synapse.settings = {
                    filter = "matrix-synapse";
                    action = "%(known/action)s";
                    logpath = "/srv/logs/synapse.json.log";
                    backend = "auto";
                    findtime = 600;
                    bantime  = 600;
                    maxretry = 2;
                  };
                };
              };
            
              virtualisation.oci-containers = {
                containers = {
            
                  postgres = {
                    autoStart = false;
                    environment = {
                      POSTGRES_USER = "XXXXXX";
                      POSTGRES_PASSWORD = "XXXXXX";
                      LANG = "en_US.utf8";
                    };
                    image = "docker.io/postgres:14";
                    ports = [ "5432:5432" ];
                    volumes = [
                      "/srv/postgres:/var/lib/postgresql/data"
                    ];
                    extraOptions = [
                      "--label" "io.containers.autoupdate=registry"
                      "--pull=newer"
                    ];
                  };
            
                  synapse = {
                    autoStart = false;
                    environment = {
                      LANG = "C.UTF-8";
            #          UID="0";
            #          GID="0";
                    };
             #       user = "1001:1000";
                    image = "ghcr.io/element-hq/synapse:latest";
                    ports = [ "8008:8008" ];
                    volumes = [
                      "/srv/synapse:/data"
                    ];
                    log-driver = "json-file";
                    extraOptions = [
                      "--label" "io.containers.autoupdate=registry"
                      "--log-opt" "max-size=10m" "--log-opt" "max-file=1" "--log-opt" "path=/srv/logs/synapse.json.log"
                      "--pull=newer"
                    ];
                    dependsOn = [ "postgres" ];
                  };
            
                  element = {
                    autoStart = true;
                    image = "docker.io/vectorim/element-web:latest";
                    ports = [ "8009:80" ];
                    volumes = [
                      "/srv/element/config.json:/app/config.json"
                    ];
                    extraOptions = [
                      "--label" "io.containers.autoupdate=registry"
                      "--pull=newer"
                    ];
            #        dependsOn = [ "synapse" ];
                  };
            
                  call = {
                    autoStart = true;
                    image = "ghcr.io/element-hq/element-call:latest-ci";
                    ports = [ "8080:8080" ];
                    volumes = [
                      "/srv/call/config.json:/app/config.json"
                    ];
                    extraOptions = [
                      "--label" "io.containers.autoupdate=registry"
                      "--pull=newer"
                    ];
                  };
            
                  livekit = {
                    autoStart = true;
                    image = "docker.io/livekit/livekit-server:latest";
                    ports = [ "7880:7880" "7881:7881" "50000-60000:50000-60000/udp" "5349:5349" "3478:3478/udp" ];
                    cmd = [ "--config" "/etc/config.yaml" ];
                    entrypoint = "/livekit-server";
                    volumes = [
                      "/srv/livekit:/etc"
                    ];
                    extraOptions = [
                      "--label" "io.containers.autoupdate=registry"
                      "--pull=newer"
                    ];
                  };
            
                  livekitjwt = {
                    autoStart = true;
                    image = "ghcr.io/element-hq/lk-jwt-service:latest-ci";
                    ports = [ "7980:8080" ];
                    environment = {
                      LK_JWT_PORT = "8080";
                      LIVEKIT_URL = "wss://livekit.xxxxxx.dynu.net/";
                      LIVEKIT_KEY = "XXXXXX";
                      LIVEKIT_SECRET = "XXXXXX";
                    };
                    entrypoint = "/lk-jwt-service";
                    extraOptions = [
                      "--label" "io.containers.autoupdate=registry"
                      "--pull=newer"
                    ];
                  };
            
                };
              };
            
            }
            
          • jimmy90@lemmy.world · 3 months ago

            yeah proxmox is not necessary unless you need lots of separate instances to play around with

  • atzanteol@sh.itjust.works · 4 months ago

    A fairly common setup is something like this:

    Internet -> nginx -> backend services.

    nginx is the HTTPS endpoint and has all the certs. You can manage the certs with Let’s Encrypt on that system. This box now handles all HTTPS traffic to and within your network.

    The more paranoid will have parts of this setup all over the world, connected through VPNs so that “your IP is safe”. But it’s not necessary and costs more. Limit your exposure, ensure your services are up-to-date, and monitor logs.

    fail2ban can give some peace of mind for SSH scanning and the like. If you’re using keys or certs to authenticate rather than passwords, though, you’ll be okay either way.
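
    If you do want fail2ban watching SSH, the stock sshd jail is usually all you need. A minimal /etc/fail2ban/jail.local sketch (retry counts and ban times are just example values):

        [sshd]
        enabled  = true
        maxretry = 5
        findtime = 10m
        bantime  = 1h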

    Update your servers daily. Automate it so you don’t need to remember. Even a simple “doupdates” script that just runs “apt-get update && apt-get upgrade && reboot” will be fine (though you can make it smarter about when it actually needs to reboot). Have its output mailed to you so you see if there are failures.

    You can register a cheap domain pretty easily, and then you can sub-domain the different services. nginx can point “x.example.com” to backend service X and “y.example.com” to backend service Y based on the hostname requested.

    • markstos@lemmy.world · 4 months ago

      I would recommend automating only daily security updates, not all updates.

      Ubuntu and Debian have “unattended-upgrades” for this. RPM-based distros have an equivalent.
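
      On Debian/Ubuntu that usually amounts to installing the unattended-upgrades package and enabling the periodic run, e.g. something like this in /etc/apt/apt.conf.d/20auto-upgrades (dpkg-reconfigure unattended-upgrades can write it for you):

          APT::Periodic::Update-Package-Lists "1";
          APT::Periodic::Unattended-Upgrade "1";

      Which repositories get auto-upgraded is controlled by the Allowed-Origins list in /etc/apt/apt.conf.d/50unattended-upgrades; the shipped default is essentially security-only, which matches the recommendation above.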

  • 486@lemmy.world · 4 months ago

    or a domain with a random string of characters so no one could reasonably guess it? Does it matter?

    That does not work. As soon as you get SSL certificates, expect the domain name to be public knowledge: Let’s Encrypt and every other certificate authority publish issued certificates to certificate transparency logs. As a general rule, don’t rely on something staying hidden as a security measure.

  • tritonium@midwest.social · 3 months ago

    Why do so many people do this incorrectly? Unless you are actually serving the public, you don’t need to open anything other than a WireGuard tunnel. My phone automatically connects to WireGuard as soon as I disconnect from my home WiFi, so I have access to every single one of my services while only exposing a single port and service.
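
    For reference, the client side of that setup is a single WireGuard tunnel config (all values below are placeholders) plus the app’s on-demand option; the iOS app can auto-activate the tunnel whenever you’re off your home SSID:

        [Interface]
        PrivateKey = <client private key>
        Address = 10.8.0.2/32
        DNS = 10.8.0.1                                # optional: point at your internal DNS

        [Peer]
        PublicKey = <server public key>
        Endpoint = your-ddns-hostname.example:51820   # the single UDP port you forward
        AllowedIPs = 10.8.0.0/24, 192.168.1.0/24      # VPN subnet plus your LAN
        PersistentKeepalive = 25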

    If you are going through setting up Caddy or Nginx Proxy Manager or anything else and you’re not serving the public… you’re dumb.