• 1 Post
  • 22 Comments
Joined 4 months ago
Cake day: August 23rd, 2024



  • There are plenty of valid reasons to want to use a reverse proxy for SSH:

    • Maybe there is a Forgejo instance and a Gitea instance running on the same server.
    • Maybe there is a prod Forgejo instance and a dev Forgejo instance running on the same server.
    • Maybe both Forgejo and an SFTP server are running on the same server.
    • Maybe Forgejo is running in a cluster like Docker Swarm or Kubernetes.
    • Maybe there is a desire to have Caddy act as a bastion host, either because running a true bastion host for SSH is not possible or to avoid maintaining yet another service/server in addition to Caddy.

    Regardless of the reason, your last point is valid and is the real issue here. I do not think it is possible for Caddy to reverse proxy SSH traffic - at least not without additional software (on the client, the server, or both) or some overly complicated (and likely less secure) setup. This would be possible if SSH traffic carried something like TLS's SNI information, but unfortunately it does not.
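
    As a rough illustration (not Caddy itself, and the banner text is just an example): here is roughly what a TCP-level proxy sees when it peeks at the first bytes of a connection. A TLS ClientHello can carry the requested hostname (SNI), while an SSH client only sends a version banner with no hostname at all, so there is nothing to route on when several SSH backends share one port.

    ```python
    import socket

    def classify_first_bytes(data: bytes) -> str:
        """Guess the protocol from the first bytes a client sends."""
        if data.startswith(b"SSH-"):
            # eg: b"SSH-2.0-OpenSSH_9.6\r\n" - the protocol is obvious, but the
            # intended destination hostname is nowhere in the stream.
            return "ssh (no hostname, cannot pick a backend)"
        if data[:1] == b"\x16":
            # 0x16 marks a TLS handshake record; the ClientHello inside it can
            # carry the SNI extension with the hostname to route on.
            return "tls (ClientHello may carry SNI)"
        return "unknown"

    def peek_connection(conn: socket.socket) -> str:
        # MSG_PEEK leaves the bytes in the kernel buffer, so the real backend
        # still receives the full, untouched stream after routing.
        return classify_first_bytes(conn.recv(16, socket.MSG_PEEK))
    ```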


  • people often seem to have a misinformed idea that the first item on your dns server list would be preferred and that is very much not the case

    I did not know that. TIL that I am people!

    Do you know if it’s always this way? For example, you mentioned this is how it works for DNS on a laptop, but would it behave differently if DNS is configured at the network firewall/router? I tried searching for more information, but could not find anything confirming how accurate this is.


  • Depending on the network’s setup, having Pihole fail or become unavailable could leave the network completely broken until it comes back up. Configuring the network to have at least one backup DNS server is therefore extremely important.

    I also recommend having redundant and/or highly available Pihole instances running on different hardware if possible. It may also be a good idea to have an additional external DNS server (eg: 1.1.1.1, 8.8.8.8, 9.9.9.9, etc.) configured as a last resort backup in the event that all the Pihole instances are unavailable (or misconfigured).
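
    If you want to confirm that a backup would actually answer when the Pihole instances are down, a quick check like the sketch below works. It is only a sketch: it assumes the dnspython package is installed, and the resolver IPs are illustrative.

    ```python
    import dns.resolver  # pip install dnspython

    # Illustrative resolver IPs - replace with the network's actual servers.
    RESOLVERS = {
        "pihole-primary": "192.168.1.53",
        "pihole-secondary": "192.168.1.54",
        "external-fallback": "9.9.9.9",
    }

    for name, ip in RESOLVERS.items():
        resolver = dns.resolver.Resolver(configure=False)  # ignore the OS resolver config
        resolver.nameservers = [ip]
        resolver.lifetime = 2  # seconds before giving up on this server
        try:
            answer = resolver.resolve("example.com", "A")
            print(f"{name} ({ip}): OK -> {answer[0]}")
        except Exception as exc:  # timeout, refusal, etc.
            print(f"{name} ({ip}): FAILED ({exc})")
    ```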


  • The steps below are high level, but should provide an outline of how to accomplish what you’re asking for without having to associate your IP address with any domains or publicly expose your reverse proxy and the services behind it. I assume since you’re running Proxmox that you already have all the necessary hardware and are capable of completing each of the steps. There are more thorough guides available online for most of the steps if you get stuck on any of them.

    1. Purchase a domain name from a domain name registrar
    2. Configure the domain to use a DNS provider (eg: Cloudflare, Duck DNS, GoDaddy, Hetzner, DigitalOcean, etc.) that supports DNS challenges, which are required for wildcard certificates
    3. Use NginxProxyManager, Traefik, or some other reverse proxy that supports automatic certificate renewals and wildcard certificates
    4. Configure both the DNS provider and the reverse proxy to use the DNS challenge for the wildcard certificate
    5. Set up a local DNS server (eg: PiHole, AdGuardHome, Blocky, etc.) and configure your firewall/router to use it as the network’s DNS resolver
    6. Configure your reverse proxy to serve your services via domains with a subdomain (eg: service1.domain.com, service2.domain.com, etc.) and turn on http (port 80) to https (port 443) redirects as necessary
    7. Configure your DNS server to point your services’ subdomains to the IP address of your reverse proxy
    8. Access your services from anywhere on your network using the domain name and https when applicable (see the verification sketch after this list)
    9. (Optional) Set up a VPN (eg: OpenVPN, WireGuard, Tailscale, Netbird, etc.) within your network and connect your devices to it whenever you are away from your network so you can still securely access your services remotely without directly exposing any of them to the internet
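
    To sanity check steps 6-8, a small script like the one below can confirm that the local DNS server answers a service subdomain with the reverse proxy’s IP and that the proxy presents a certificate covering the name. The hostname and IP here are placeholders, not values from any particular setup.

    ```python
    import socket
    import ssl

    HOSTNAME = "service1.domain.com"    # hypothetical service subdomain
    EXPECTED_PROXY_IP = "192.168.1.10"  # hypothetical reverse proxy IP

    # Step 7 check: the local DNS server should answer with the proxy's IP.
    resolved_ip = socket.gethostbyname(HOSTNAME)
    status = "OK" if resolved_ip == EXPECTED_PROXY_IP else "unexpected"
    print(f"{HOSTNAME} resolves to {resolved_ip} ({status})")

    # Steps 3-6 check: the proxy should present a valid certificate covering the name.
    context = ssl.create_default_context()
    with socket.create_connection((HOSTNAME, 443), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=HOSTNAME) as tls:
            cert = tls.getpeercert()
            sans = [v for k, v in cert.get("subjectAltName", ()) if k == "DNS"]
            print(f"certificate covers: {sans}")  # expect something like ['*.domain.com']
    ```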

  • This would only work if there is no other traffic on the port being used (eg: port 22). If both the host SSH service and the Forgejo SSH service expect traffic on port 22, then this would not work, since server name indication (SNI) is not provided with SSH traffic and Caddy would not be able to identify the appropriate destination for each SSH service’s traffic.


  • Are you able to provide some details on how you are doing this? I don’t think you can do much with reverse proxies and SSH beyond routing all traffic on port 22 (or the configured SSH port) to whichever port SSH is listening on. In other words, the reverse proxy cannot route SSH traffic for the host on port 22 to the host, SSH traffic for Forgejo on port 22 to Forgejo’s SSH process, and SFTP traffic on port 22 to the SFTP process - at least not via domain name the way an HTTP/HTTPS reverse proxy does.

    Instead, this would need to be done via IP address, where the host SSH process listens on 192.168.1.2, the Forgejo SSH process listens on 192.168.1.3, and the SFTP process listens on 192.168.1.4. Otherwise, each of those services would need to use different ports.


  • I believe the reverse proxy settings in your post are just configured to handle the http/https connection, not the SSH connection. Instead, SSH connections are likely being routed to the machine running Forgejo via DNS, and your reverse proxy is not involved with anything related to SSH.

    I assume you either have SSH disabled on your host or SSH on your host uses a port other than 22?


  • The thing that makes casting so appealing for me is how ubiquitous it is. It eliminates situations where a guest recommends a show/movie only for us to find out that I can’t easily play the content because it’s only available on a streaming service that the guest pays for but I do not. As long as the guest brought a device and connected it to my WiFi, the content more than likely could be cast without having to install another app and/or sign up for a new service (or have the guest log in with their account).

    I am becoming less optimistic about it though. I just do not think that the level of ubiquity Chromecast reached even 10 years ago will be matched by a FOSS alternative. Developers would need to incorporate it into their apps, websites, etc., or it would need to be compatible with existing solutions. I doubt Google will open Chromecast up enough for other options to be fully compatible with it. Additionally, without the backing of a major corporation, I do not see developers taking the time to make their content compatible with another casting option.


  • Agreed! I am concerned though that even if a viable casting alternative started gaining momentum that Google would essentially prevent it from being widely adopted or incorporated into apps/websites the way that Chromecast is. I think it would have to be created by a large tech or media company and/or be compatible with Chromecast.

    Apps are still really frustrating though. When an app exists (big if), I found that it either misses key features compared to the corresponding apps on other platforms or has terrible UI/UX for a TV app.

    I could get by if just one of the two - casting or the apps - were comparable to the more popular alternatives. Having neither makes it very difficult to move away from those alternatives.


  • I do not think what I would want as a replacement exists (yet). My main requirements are:

    • Only FOSS software and firmware
    • Similar level of “casting” compatibility/ubiquity as the discontinued Chromecast
    • Easy navigation and/or great UI/UX
    • Can be controlled with a standalone remote control, a phone/tablet/laptop, and remote services like Home Assistant
    • As portable and low powered as the discontinued Chromecast (or no less portable than a small mini-pc)
    • Ability to turn on/off the TV, switch inputs, and control the volume
    • Ability to install apps/plugins directly on the device (maybe even things like Lutris, Moonlight, or something similar for gaming)
      • Ideally, the apps would be as well maintained and provide similar levels of quality as something like an Android TV or Apple TV
    • (bonus) Ability to store media locally for offline playback

    I think the closest I have seen is LibreELEC + Kodi on a RaspberryPi or mini-pc. It’s still not quite there for my tastes though. Hopefully the recent Chromecast announcement will lead to more/better alternatives in the coming months!


  • I am comfortable routing traffic via domain name through a reverse proxy. I am doing that via Traefik and can set up rules so that different sub-domains, domains, and/or paths are routed to the appropriate endpoint (IP address and port). The issue is that k3s does not receive that information for SSH traffic, since SNI (ie: the sub-domain, domain, etc.) is not included in SSH traffic. If SSH traffic provided SNI information, this issue would be much less complicated, as I would only need to make sure that the port 22 traffic intended for k3s did not get processed by SSH or any other service on the node.

    If I were to set up a DMZ, I think I would need one unique public IP per SSH service. So I would need to create public DNS records pointing sftp.my.domain to VPS#1’s public IP and ssh.forgejo.my.domain to VPS#2’s public IP. Assuming both VPS#1 and VPS#2 have some means of accessing the internal network (eg: a VPN), I could then port forward traffic received by VPS#1 on port 22 to 192.168.1.40:22 and traffic received by VPS#2 on port 22 to 192.168.1.50:22. I had not considered this, and based on what I have seen so far, I think this would be the only solution that allows external traffic to reach these services on the normal port 22 when there is more than one service in k3s expecting traffic on port 22 (or any other port that receives traffic without SNI details).
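
    For what it’s worth, the per-VPS forwarding would most likely be done with a firewall rule (eg: an iptables DNAT rule) or the VPN’s own routing rather than a separate program, but a minimal sketch of what VPS#1 would be doing looks like this (the IPs are the illustrative ones above):

    ```python
    import socket
    import threading

    LISTEN = ("0.0.0.0", 22)        # public side of VPS#1 (binding to 22 needs root)
    TARGET = ("192.168.1.40", 22)   # internal SFTP endpoint reachable over the VPN

    def pipe(src: socket.socket, dst: socket.socket) -> None:
        """Copy bytes in one direction until the source closes."""
        try:
            while data := src.recv(4096):
                dst.sendall(data)
        finally:
            dst.close()

    with socket.create_server(LISTEN) as server:
        while True:
            client, _ = server.accept()
            upstream = socket.create_connection(TARGET)
            # One thread per direction keeps the relay simple for a sketch.
            threading.Thread(target=pipe, args=(client, upstream), daemon=True).start()
            threading.Thread(target=pipe, args=(upstream, client), daemon=True).start()
    ```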


  • Maybe I was not clear, but I do not think that you understand what I was trying to say with the second part of my last message.

    Assume that multiple VIPs are set up and there is a load balancer IP for the SFTP entry point (eg: 192.168.1.40:22) and a different load balancer IP for the Forgejo SSH entry point (eg: 192.168.1.50:22). My local DNS can be set up so that sftp.my.domain points to 192.168.1.40 and ssh.forgejo.my.domain points to 192.168.1.50. When I make a request within my network, the DNS lookup will appropriately route sftp.my.domain:22 to 192.168.1.40:22 and ssh.forgejo.my.domain:22 to 192.168.1.50:22. I believe this is what you are recommending and exactly what I want. I will need to get the multiple VIP part of this setup worked out so I can do this.

    However, this will not work when the traffic is received from outside of my network, even if the above configuration is set up correctly. If you were to try to connect to either sftp.my.domain:22 or ssh.forgejo.my.domain:22, your traffic would be routed to my public IP address. My firewall/router would receive the traffic on port 22 and port forward it to the single IP address assigned to that port forwarding rule. When k3s receives the traffic from my firewall/router, k3s will not have any SNI information (ie: it will not know whether you were using sftp.my.domain or ssh.forgejo.my.domain - or any other domain for that matter). Even if I were able to set up multiple port forwarding rules for port 22 on the firewall/router, I would still be unable to appropriately route the traffic, because the firewall/router would also not know whether the traffic was intended for sftp.my.domain or ssh.forgejo.my.domain. As a result, at most one of the services would be usable externally, because external traffic for both sftp.my.domain and ssh.forgejo.my.domain will be routed to the same IP address and k3s would have no idea what domain (if any) is being used.

    There are a few solutions (eg: use different ports for each SSH or non-TLS trafficked service, wrap the SSH traffic in TLS to give k3s SNI information to route traffic to the appropriate endpoint, configure SSH on the node to route traffic to the appropriate IP address based on SSH user, require each client to use the local network or VPN, etc.), but none of them are as seamless and easy as routing TLS traffic which can use SNI information.
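
    As a rough sketch of the TLS-wrapping option (not something I am running - the hostname and port are hypothetical, and existing tools like stunnel or sslh can do the same job), the client side could be a ProxyCommand that opens a TLS connection with SNI set to the target hostname, so the reverse proxy can pick the right backend, and then shuttles the SSH stream through it. The proxy would need to terminate TLS for ssh.forgejo.my.domain and forward the decrypted stream to Forgejo’s SSH entrypoint.

    ```python
    # Usage in ~/.ssh/config (hypothetical):
    #   Host ssh.forgejo.my.domain
    #       ProxyCommand python3 ssh_over_tls.py %h 443
    import socket
    import ssl
    import sys
    import threading

    host, port = sys.argv[1], int(sys.argv[2])

    context = ssl.create_default_context()
    # server_hostname sets the SNI field, which is what lets the reverse proxy
    # pick the right backend for this connection.
    tls = context.wrap_socket(socket.create_connection((host, port)), server_hostname=host)

    def stdin_to_tls() -> None:
        # Forward everything the SSH client writes into the TLS tunnel.
        while data := sys.stdin.buffer.read1(4096):
            tls.sendall(data)

    threading.Thread(target=stdin_to_tls, daemon=True).start()

    # Forward everything the proxy sends back to the SSH client.
    while data := tls.recv(4096):
        sys.stdout.buffer.write(data)
        sys.stdout.buffer.flush()
    ```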


  • I had not thought about using IPv6 for this. It’s definitely something that I would need to research more, as I know that it would increase my attack surface and may require an overhaul of the network (or at least a very thorough review).

    I’m not sure I understand the concern about Traefik. I am using it as a reverse proxy and forcing HTTPS for all applicable services (which unfortunately does not apply to this particular situation). I am honestly a little confused about how the control plane, tls-san, gateway, load balancer, ingress, etc. all work together. I may not be using Traefik as the load balancer and may instead have Kube-VIP as the load balancer. I did not configure Kube-VIP any particular way for load balancing, but I did configure Traefik with a few load balancer specific options. When I tried to set up Kube-VIP with the additional IP addresses for load balancing, I was unable to get k3s to work correctly, so I assumed that Traefik was my load balancer instead of Kube-VIP.


  • That all makes sense, and I tried setting it up that way but could not get it to work. I am not sure if it was an issue with my network, k3s, Kube-VIP, or Traefik (or some combination of them). I will try getting it to work again.

    Even if I do though, I would run into an issue if I publicly exposed these services (I understand there are security implications of doing so). How would I route traffic received externally/publicly on port 22 to more than one IP address? I think I would only be able to do this for local/internal traffic by managing the local DNS.


  • I’m already doing that, but just for one VIP. I think I just need to get the additional VIPs working.

    I know that I will need to update my local network’s DNS so that, for example, service#1 = git.ssh.local.domain with git.ssh.local.domain = 192.168.50.10, and service#2 = sftp.local.domain with sftp.local.domain = 192.168.50.20. I would set up 192.168.50.10 as the load balancer IP address for Forgejo’s SSH entrypoint and 192.168.50.20 as the load balancer IP address for the SFTP entrypoint. However, how would I handle requests/traffic received externally? The router/firewall would receive everything and can only port forward port 22 to a single IP address, which would prevent one (or more) of the services from being used externally, correct?



  • I am unsure if I understood everything correctly, but I believe I am already doing everything that you mentioned. I followed Kube-VIP’s ARP daemonset documentation. The leader election works. I am not using Kube-VIP for load balancing though. Instead, I am using Traefik, which is using the same IP address that was assigned to the control plane during both k3s’s and Kube-VIP’s setup. However, I am unable to get any additional VIP addresses to properly route to Traefik.

    Even if I did get the additional VIP addresses working, I think I still have one last issue to overcome. I can control the local network’s DNS so that service#1 is assigned VIP#1 and service#2 is assigned VIP#2. However, how would this be handled for traffic received externally? If the external/public DNS has service#1 and service#2 assigned to the network’s public IP address, both services’ traffic would be received by the router/firewall on port 22. The router/firewall could forward traffic on port 22 to (presumably) a single IP address, which would only allow service#1 or service#2 (but not both) to receive traffic publicly, correct?