
Inlets Operator — Exposing Services on Private Kubernetes Clusters with a Public IP

Gokul Chandra
Published in ITNEXT · 8 min read · Jan 24, 2020

Kubernetes cloud providers embed the knowledge and context of each public and private cloud into most Kubernetes components. With these providers, it is easy to expose Kubernetes services running on a given platform using its native load-balancing constructs. If a user deploys to an EC2 instance or a DigitalOcean Droplet, they get a public IPv4 address; but behind a corporate firewall, NAT, or inside a VM or container, this just doesn’t work.

There are multiple other scenarios where this might be an issue:

  • Localhost applications cannot be exposed directly to the internet without a DMZ and other network configuration.
  • Resources are limited in the private data centers where development is carried out.
  • Websites cannot be shared for testing purposes.
  • Services behind an enterprise firewall that consume webhooks (HTTP callbacks) are hard to develop: a developer’s code in a private network has no routable IP address, so GitHub simply has no way to send a message.
  • A website running only on a developer machine cannot be shared, even temporarily.

Tools like ngrok are well known for tunnelling services that expose localhost to the web. These tools implement a multi-platform tunnelling reverse proxy that establishes secure tunnels from a public endpoint on the internet to a locally running network service using a WebSocket. A WebSocket is a naturally full-duplex, bidirectional, single-socket connection: the HTTP request becomes a single request to open the WebSocket, and the same connection is then reused from client to server and from server to client.

A client runs in the internal network (alongside the applications) and connects to a remote server over HTTP WebSockets. The server then forwards requests to the client over one of the established WebSockets.

Websocket Tunnelling
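The single-socket, full-duplex behaviour described above can be sketched without any WebSocket machinery at all. In this minimal Python illustration, a plain socket pair stands in for the established tunnel, showing both directions reusing one connection:

```python
import socket

# A connected socket pair stands in for the long-lived tunnel connection
# between the inlets client and server.
client, server = socket.socketpair()

# Client -> server direction.
client.sendall(b"request from client")
assert server.recv(1024) == b"request from client"

# Server -> client direction, over the very same socket: this is the
# full-duplex property that lets the public server push requests inward.
server.sendall(b"push from server")
assert client.recv(1024) == b"push from server"

client.close()
server.close()
```

This is why no inbound firewall rule is needed: the private side opens the connection outward, and the public side reuses it to send traffic back in.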

Alex Ellis’s inlets combines a reverse proxy and WebSocket tunnels to expose internal and development endpoints to the public internet via an exit-node. An exit-node is a publicly reachable server on any public-cloud platform running the inlets server process.

Inlets — Exit Node on Public Cloud Platforms
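Outside of Kubernetes, the same tunnel can be set up by hand with the inlets CLI. A minimal sketch, assuming the inlets binary is installed on both machines; the port, token, and upstream address are placeholders, and the flag spellings should be checked against `inlets --help` for your version:

```sh
# On the exit-node (public VM): start the inlets server on a control port,
# guarded by a shared token.
export TOKEN="some-long-random-token"
inlets server --port=8090 --token="$TOKEN"

# On a machine inside the private network: dial out to the exit-node and
# point the tunnel at a locally running service.
inlets client \
  --remote="ws://EXIT_NODE_IP:8090" \
  --upstream="http://127.0.0.1:3000" \
  --token="$TOKEN"
```

Because the client dials out, no inbound firewall rule or NAT configuration is needed on the private side.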

Similar tools such as ngrok or Argo Tunnel from Cloudflare are closed-source, have limited built-in functionality, and can work out expensive. ngrok is also often banned by corporate firewall policies, which can make it unusable. Inlets aims to dynamically bind your local services to DNS entries with automated TLS certificates on a public IP address over a WebSocket tunnel.

Without inlets, a user might have to configure firewall rules for webhooks to reach applications on the internal network. This can be a daunting task, as keeping track of all the dynamic connections is practically impossible.

Scenario — Incoming Webhooks without Inlets

With inlets, all requests are sent from a publicly hosted exit-node (the inlets server) to the inlets client residing on-premises. Inlets acts as a traffic sink, and is thus not prone to the usual kinds of abuse: it won’t relay any requests out to the public internet. Instead, inlets has the opposite problem.

Scenario — Incoming Webhooks with Inlets

As mentioned, a LoadBalancer Service is by default only fulfilled on cloud providers, not on privately hosted Kubernetes clusters. The cleanest way to get traffic into a cluster is a load balancer; however, that requires an external service, usually provided by GCP or AWS, that doesn’t come with Kubernetes. With the inlets-operator, users can seamlessly enable a public LoadBalancer for private Kubernetes Services without having to manage a full-fledged cluster on a public cloud platform.

inlets-operator

In a Kubernetes setting, inlets is implemented as an operator that uses a CRD (Custom Resource Definition) to manage tunnels and their components. With the inlets-operator, users can dynamically create an exit server on any of the supported platforms, such as DigitalOcean, Packet, or GCP. For each Service of type LoadBalancer, a dedicated server running the inlets server process is created on the chosen public cloud platform.

Inlets-Operator on Kubernetes

The inlets CRD shown below lets users operate on tunnels (create/delete/get/describe) as an extension of the Kubernetes API.

Inlets-Operator Custom Resource Definition
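Because the CRD registers a `tunnels` resource with the API server, the usual kubectl verbs apply to it. A small sketch; the tunnel object name is an assumption:

```sh
# List all tunnel objects managed by the operator.
kubectl get tunnels

# Inspect or remove a specific tunnel (name assumed for illustration).
kubectl describe tunnel kuard-tunnel
kubectl delete tunnel kuard-tunnel
```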

In this walkthrough, the operator and sample applications are deployed on a Kubernetes cluster running on an Intel NUC connected to a private home network. The inlets-operator runs as a Kubernetes Deployment and continuously polls the Kubernetes API for Services of type LoadBalancer.

Operator Deployment
Operator Controller and Kube-apiserver
Operator Deployment

The inlets-operator supports multiple platforms (DigitalOcean, Packet, Scaleway, GCP); on these platforms, it auto-provisions the required components to seamlessly bring up an exit-node. Trying out the inlets-operator with DigitalOcean:

Creating an API key for authentication:

Creating API Keys: Digitalocean

A Kubernetes Secret is created from the access token generated above:

Kubernetes Secret: Access Key
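The Secret can be created with a single kubectl command. The Secret and key name `inlets-access-key` follow the operator’s conventions as I recall them, so verify against the inlets-operator README for your version:

```sh
# Store the DigitalOcean API token where the operator can read it.
kubectl create secret generic inlets-access-key \
  --from-literal inlets-access-key="<digitalocean-api-token>"
```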

A sample deployment (kuard) is created along with a Service of type LoadBalancer.

Sample Deployment
Sample Deployment
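The sample workload is an ordinary Deployment. A minimal sketch, where the image tag, labels, and names are assumptions:

```yaml
# Minimal kuard Deployment (names and image tag are illustrative).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kuard
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kuard
  template:
    metadata:
      labels:
        app: kuard
    spec:
      containers:
      - name: kuard
        image: gcr.io/kuar-demo/kuard-amd64:blue
        ports:
        - containerPort: 8080
```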

As seen below, once a kuard-service with type:LoadBalancer is created, the process kicks in.

Sample Service — type:Loadbalancer
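Nothing about the Service itself is inlets-specific; it is a plain type: LoadBalancer manifest. A sketch, with names and ports assumed:

```yaml
# Ordinary LoadBalancer Service; the operator reacts to its creation.
apiVersion: v1
kind: Service
metadata:
  name: kuard-service
spec:
  type: LoadBalancer
  selector:
    app: kuard
  ports:
  - port: 80
    targetPort: 8080
```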

The inlets-operator detects the above Service and starts creating a Droplet in the user’s DigitalOcean account, using the credentials supplied in the Kubernetes Secret above.

Operator provisioning Droplet on Digitalocean

Since the provider selected here is DigitalOcean, a Droplet is automatically created in the user’s account, with the inlets server process running on it and a public IP assigned.

Droplet — Exit Node on Digitalocean
Droplet with public _IP— Exit Node on Digitalocean

As seen below, the inlets server process runs on the Droplet created above:

Inlets Server on Droplet

An inlets-client Deployment is created on the cluster by the operator and auto-configured with the remote exit-node’s details.

Inlets-Client

As shown below, the client is configured with the public IP of the Droplet created above as the remote server destination, and with the local service (kuard-service) as an upstream (much like how a reverse proxy such as Envoy works).

Inlets-Client configured with Server information

The kuard-service created above with type:LoadBalancer is allocated the public IP of the Droplet as its external IP:

LoadBalancer with Public_IP
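Once the tunnel is up, this can be confirmed the usual way:

```sh
# EXTERNAL-IP should now show the Droplet's public IP instead of <pending>.
kubectl get service kuard-service
```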

Accessing the service running in a private environment via the public IP of the Droplet created above:

Accessing application using Droplet’s public_ip

The inlets-server process running on the Droplet proxies all requests to the local kuard application running on the internal network.

Inlets-server on Digitalocean proxying requests

Users can also map a domain to the tunnel using their cloud provider’s DNS:

DNS Configuration

Users can delete the Service at any time, which triggers the delete action, and all resources created on the public cloud are cleaned up.

Deleting a Service
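Cleanup is a single command; deleting the Service prompts the operator to tear down the tunnel and the Droplet behind it:

```sh
kubectl delete service kuard-service
```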

The operator spec lets users select other supported providers:

Configuration for other supported platforms
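Switching provider is a matter of changing the operator’s startup arguments. An illustrative fragment of the operator Deployment; the flag names here are assumptions from memory and may differ between inlets-operator releases, so check the project’s README for the exact spelling:

```yaml
# Container args selecting GCE instead of DigitalOcean (illustrative only).
args:
- "-provider=gce"
- "-gce-project-id=my-project"   # placeholder project ID
- "-zone=us-central1-a"
```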

TLS

By default, for development, inlets uses an unencrypted tunnel, which is vulnerable to man-in-the-middle attacks. Users can implement TLS by running Caddy or Nginx on the exit-node to proxy the inlets server. With HTTPS enabled, users connect to an encrypted endpoint, and inlets clients also connect to the server over an encrypted tunnel. certmagic (a Go-based certificate-minting library) is also mentioned on the roadmap.
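One way to wire this up, sketched with Caddy v2 on the exit-node; the domain is a placeholder and the inlets control port of 8090 is an assumption:

```
# Caddyfile: Caddy obtains a certificate for the domain automatically and
# proxies decrypted traffic to the inlets server on localhost.
tunnel.example.com {
    reverse_proxy 127.0.0.1:8090
}
```

Clients would then connect with wss:// instead of ws://.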

As seen, with inlets-operator users can easily procure a publicly accessible endpoint for applications running on private clusters. There are many scenarios where inlets can be handy:

  • Develop webhook integrations on localhost, with no need to deploy the service to the cloud or configure firewall/NAT rules. The same endpoint works wherever you are working today, whether at home, the office, or a coffee shop, without managing DNS or a public IP.
  • Use WebSocket tunnels to expose home-automation services without a public IP address or NAT configuration. Binaries are also available for the arm architecture and can run on a Raspberry Pi or similar hardware.
  • Connect IoT devices and integrate directly through WebSockets or the provided Node-RED node; there is no need to run a web server on the device to receive webhooks.
  • Avoid constantly redeploying services to the cloud for testing in a dev environment: users can create a public endpoint from a local private dev environment. Basic and token authentication can be used to secure the tunnels.

Inlets can be deployed on any ARM-based device, enabling it to run on hardware like the Raspberry Pi and act as a gateway for incoming traffic to internally running services. It can also reduce cloud costs (averaging around 5 USD/month) by running only a tiny VM for the inlets server, rather than an entire Kubernetes cluster in the cloud for dev purposes.
