routing traffic to a public Kubernetes service on AWS EC2

I have a Kubernetes (0.15) cluster running on CoreOS instances on Amazon EC2.
When I create a service that I want to be publicly accessible, I currently add some private IP addresses of the EC2 instances to the service description, like so:
{
  "kind": "Service",
  "apiVersion": "v1beta3",
  "metadata": {
    "name": "api"
  },
  "spec": {
    "ports": [
      {
        "name": "default",
        "port": 80,
        "targetPort": 80
      }
    ],
    "publicIPs": ["", ""],
    "selector": {
      "app": "api"
    }
  }
}
Then I can add these IPs to an ELB load balancer and route traffic to those machines.
But for this to work I need to maintain the list of all the machines in my cluster in every service that I am running, which feels wrong.
What’s the currently recommended way to solve this?

If I know the PortalIP of a service is there a way to make it routable in the AWS VPC infrastructure?
Is it possible to assign external static (Elastic) IPs to Services and have those routed?

(I know of createExternalLoadBalancer, but that does not seem to support AWS yet)


Solution 1:

If anyone reaches this question: external load balancer support is available in the latest Kubernetes version.

Link to the documentation
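For reference, in the later v1 API the same service can request a cloud load balancer by setting spec.type. A minimal sketch, carrying over the name and port from the question (the exact fields beyond type are unchanged from the original spec):

```json
{
  "kind": "Service",
  "apiVersion": "v1",
  "metadata": {
    "name": "api"
  },
  "spec": {
    "type": "LoadBalancer",
    "ports": [
      {
        "name": "default",
        "port": 80,
        "targetPort": 80
      }
    ],
    "selector": {
      "app": "api"
    }
  }
}
```

On AWS this provisions an ELB automatically and keeps its node list in sync as instances join and leave the cluster, removing the need to hand-maintain publicIPs.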

Solution 2:

You seem to have a pretty good understanding of the space – unfortunately I don’t have any great workarounds for you.

CreateExternalLoadBalancer is indeed not ready yet – getting it working for AWS is taking a bit of an overhaul of the services infrastructure, because of how different AWS’s load balancer is from GCE’s and OpenStack’s load balancers.

Unfortunately, there’s no easy way to have the PortalIP or an external static IP routable directly to the pods backing the service, because doing so would require the routing infrastructure to update whenever any of the pods gets moved or recreated. You’d have to have the PortalIP or external IP route to the nodes inside the cluster, which is what you’re already effectively doing with the PublicIPs field and ELB.

What you’re doing with the load balancer right now is probably the best option – it’s basically what CreateExternalLoadBalancer will do once it’s available. You could instead put the external IPs of the instances into the PublicIPs field and then reach the service through one of them, but that’s pretty tightly coupling external connectivity to the lifetime of the node IP you use.
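That alternative would look like the spec from the question with the nodes’ external IPs substituted in. A fragment sketch — the two addresses are hypothetical placeholders, not real instance IPs:

```json
{
  "spec": {
    "publicIPs": ["203.0.113.10", "203.0.113.11"]
  }
}
```

The trade-off is as described above: if either node is replaced and its external IP changes, the service definition has to be updated by hand.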