Exposing services to external traffic

When exposing your GKE cluster to external traffic, you can use either a node port or a load balancer. For the vast majority of your deployments within GKE, you will want to create a load balancer. Exposing your workloads via node ports allows you to set up your own load balancers or to expose nodes directly via their IP addresses.

Exposing your applications and/or services with a load balancer in GKE is quite easy and can be accomplished with the Cloud Console or the kubectl CLI:

kubectl expose deployment nginx-1 --type "LoadBalancer"

When you expose your workloads via a load balancer, you specify the network protocol, the external port on which you want to expose traffic, and the internal cluster port that will be the target of that traffic. If you don't specify a port, it will default to port 80. If you don't specify a target port, it will default to the same port that is exposed externally.
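For example, those defaults can be overridden explicitly on the command line. This is a sketch that assumes a deployment named nginx-1 whose containers listen on port 8080 (the port numbers here are illustrative):

kubectl expose deployment nginx-1 --type "LoadBalancer" --protocol TCP --port 80 --target-port 8080

Here external traffic arrives on port 80 of the load balancer and is forwarded to port 8080 inside the cluster.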

It's not a bad idea to expose your deployments internally on non-standard ports and to only use standard TCP ports such as port 80 when exposing your applications and services to external traffic. The port mapping in GKE makes managing these types of configurations very easy.
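As a sketch of this pattern, the ports section of a Service spec might map standard external port 80 to a hypothetical non-standard internal port 8443:

ports:
- port: 80
  protocol: TCP
  targetPort: 8443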

When you expose your GKE cluster externally via a load balancer, the following components are created:

  • Kubernetes Ingress
  • GCP network load balancer
  • GKE Service of type LoadBalancer

These three high-level components, supported by many more beneath them, weave together seamlessly to route and balance external traffic to your container cluster. These components can be seen and drilled down into via the GKE Services dashboard.
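If you prefer the command line, the same components can be inspected there as well; for example, assuming the nginx-1 service from earlier and the gcloud CLI configured for your project:

kubectl get service nginx-1
gcloud compute forwarding-rules list

The first command shows the Service along with its external IP; the second lists the forwarding rules that front the GCP network load balancer.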

Performing all of these steps individually via either the Cloud Console or the CLI is great if you are experimenting with GKE or performing some R&D on how best to configure your container cluster. Once you have your plan of attack nailed down, you will want to create a YAML file, or files, to define your deployment. Here's a YAML file that defines a service that will support load balancing external traffic to your container cluster:

apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx-1
  name: nginx-1-wh5pb
  namespace: default
spec:
  clusterIP: 10.11.255.101
  externalTrafficPolicy: Cluster
  ports:
  - nodePort: 31136
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx-1
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - ip: 35.184.222.150
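With the YAML saved to a file (nginx-1-service.yaml is a hypothetical name), you can create the service and then watch for its external IP to be assigned:

kubectl apply -f nginx-1-service.yaml
kubectl get service nginx-1 --watch

Note that the status block (and fields such as clusterIP and nodePort) are populated by Kubernetes; you can omit them from a file you author yourself.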