How do I get one pod to network to another pod in Kubernetes? (SIMPLE)


I've been banging my head against this wall on and off for a while. There is a ton of information on Kubernetes on the web, but it all assumes so much knowledge that n00bs like me don't really have much to go on.

So, can anyone share a simple example of the following (as a yaml file)? All I want is

  • two pods
  • let's say one pod has a backend (I don't know - node.js), and one has a frontend (say React).
  • A way to network between them.

And then an example of making an API call from one to the other.

I start looking into this sort of thing, and all of a sudden I hit this page - https://kubernetes.io/docs/concepts/cluster-administration/networking/#how-to-achieve-this. This is super unhelpful. I don't want or need advanced network policies, nor do I have the time to go through several different service layers that are mapped on top of kubernetes. I just want to figure out a trivial example of a network request.

Hopefully if this example exists on stackoverflow it will serve other people as well.

Any help would be appreciated. Thanks.

EDIT: It looks like the easiest example may be using an Ingress controller.

EDIT EDIT:

I'm working on getting a minimal example deployed - I'll walk through some steps here and point out my issues.

So below is my yaml file:

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: frontend
  labels:
    app: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: nginx
        image: patientplatypus/frontend_example
        ports:
        - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  type: LoadBalancer
  selector:
    app: frontend
  ports:
    - protocol: TCP
      port: 80
      targetPort: 3000
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: backend
  labels:
    app: backend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
      - name: nginx
        image: patientplatypus/backend_example
        ports:
        - containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  type: LoadBalancer
  selector:
    app: backend
  ports:
    - protocol: TCP
      port: 80
      targetPort: 5000
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: frontend
spec:
  rules:
  - host: www.kubeplaytime.example
    http:
      paths:
      - path: /
        backend:
          serviceName: frontend
          servicePort: 80
      - path: /api
        backend:
          serviceName: backend
          servicePort: 80

What I believe this is doing is

  • Deploying a frontend and backend app - I deployed patientplatypus/frontend_example and patientplatypus/backend_example to Docker Hub and then pull the images down. One open question I have is: what if I don't want to pull the images from Docker Hub and would rather just load them from my localhost - is that possible? In that case I would push my code to the production server, build the docker images on the server and then upload them to kubernetes. The benefit is that I don't have to rely on Docker Hub if I want my images to be private. (A possible approach is sketched just after this list.)

  • It is creating two service endpoints that route outside traffic from a web browser to each of the deployments. These services are of type LoadBalancer because they balance the traffic among the (in this case 3) replicas that I have in each deployment.

  • Finally, I have an Ingress resource which is supposed to allow my services to route to each other through www.kubeplaytime.example and www.kubeplaytime.example/api. However, this is not working.
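
Regarding the open question about not relying on Docker Hub: a rough sketch, assuming the image is built directly against the cluster node's Docker daemon (for example after running eval $(minikube docker-env) on minikube). The frontend_example:local tag is hypothetical; the relevant knob is imagePullPolicy in the container spec of the Deployment above.

      containers:
      - name: nginx
        # hypothetical tag for an image built on the node itself,
        # e.g. after running: eval $(minikube docker-env)
        image: frontend_example:local
        # never contact a registry; use the image already present on the node
        imagePullPolicy: Never
        ports:
        - containerPort: 3000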

What happens when I run this?

patientplatypus:~/Documents/kubePlay:09:17:50$kubectl create -f kube-deploy.yaml
deployment.apps "frontend" created
service "frontend" created
deployment.apps "backend" created
service "backend" created
ingress.extensions "frontend" created
  • So first, it appears to create all the parts that I need fine with no errors.

    patientplatypus:~/Documents/kubePlay:09:22:30$kubectl get --watch services
    NAME         TYPE           CLUSTER-IP    EXTERNAL-IP      PORT(S)        AGE
    backend      LoadBalancer   10.0.18.174   <pending>        80:31649/TCP   1m
    frontend     LoadBalancer   10.0.100.65   <pending>        80:32635/TCP   1m
    kubernetes   ClusterIP      10.0.0.1      <none>           443/TCP        10d
    frontend     LoadBalancer   10.0.100.65   138.91.126.178   80:32635/TCP   2m
    backend      LoadBalancer   10.0.18.174   138.91.121.182   80:31649/TCP   2m

  • Second, if I watch the services, I eventually get IP addresses that I can use to navigate in my browser to these sites. Each of the above IP addresses works in routing me to the frontend and backend respectively.

HOWEVER

I run into an issue when I try to use the Ingress - it seemingly deployed, but I don't know how to reach it.

patientplatypus:~/Documents/kubePlay:09:24:44$kubectl get ingresses
NAME       HOSTS                      ADDRESS   PORTS     AGE
frontend   www.kubeplaytime.example             80        16m
  • So I have no address I can use, and www.kubeplaytime.example does not appear to work.

It appears that, in order to route traffic to the Ingress I just created, I have to deploy an ingress controller with its own Service and Deployment so that it gets an IP address, but this starts to look incredibly complicated very quickly.

For example, take a look at this medium article: https://medium.com/@cashisclay/kubernetes-ingress-82aa960f658e.

The code needed just to get traffic routed into the Ingress (i.e. what he calls the Ingress Controller) appears to be this:

---
kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx
spec:
  type: LoadBalancer
  selector:
    app: ingress-nginx
  ports:
  - name: http
    port: 80
    targetPort: http
  - name: https
    port: 443
    targetPort: https
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: ingress-nginx
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: ingress-nginx
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - image: gcr.io/google_containers/nginx-ingress-controller:0.8.3
        name: ingress-nginx
        imagePullPolicy: Always
        ports:
          - name: http
            containerPort: 80
            protocol: TCP
          - name: https
            containerPort: 443
            protocol: TCP
        livenessProbe:
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
        env:
          - name: POD_NAME
            valueFrom:
              fieldRef:
                fieldPath: metadata.name
          - name: POD_NAMESPACE
            valueFrom:
              fieldRef:
                fieldPath: metadata.namespace
        args:
        - /nginx-ingress-controller
        - --default-backend-service=$(POD_NAMESPACE)/nginx-default-backend
---
kind: Service
apiVersion: v1
metadata:
  name: nginx-default-backend
spec:
  ports:
  - port: 80
    targetPort: http
  selector:
    app: nginx-default-backend
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: nginx-default-backend
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx-default-backend
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - name: default-http-backend
        image: gcr.io/google_containers/defaultbackend:1.0
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
        resources:
          limits:
            cpu: 10m
            memory: 20Mi
          requests:
            cpu: 10m
            memory: 20Mi
        ports:
        - name: http
          containerPort: 8080
          protocol: TCP

This would seemingly need to be appended to my other yaml code above in order to get a service entry point for my ingress routing, and it does appear to give an IP:

patientplatypus:~/Documents/kubePlay:09:54:12$kubectl get --watch services
NAME                    TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)                      AGE
backend                 LoadBalancer   10.0.31.209   <pending>     80:32428/TCP                 4m
frontend                LoadBalancer   10.0.222.47   <pending>     80:32482/TCP                 4m
ingress-nginx           LoadBalancer   10.0.28.157   <pending>     80:30573/TCP,443:30802/TCP   4m
kubernetes              ClusterIP      10.0.0.1      <none>        443/TCP                      10d
nginx-default-backend   ClusterIP      10.0.71.121   <none>        80/TCP                       4m
frontend                LoadBalancer   10.0.222.47   40.121.7.66     80:32482/TCP                 5m
ingress-nginx           LoadBalancer   10.0.28.157   40.121.6.179    80:30573/TCP,443:30802/TCP   6m
backend                 LoadBalancer   10.0.31.209   40.117.248.73   80:32428/TCP                 7m

So ingress-nginx appears to be the site I want to get to. Navigating to 40.121.6.179 returns a default 404 message (default backend - 404) - it does not go to the frontend, as / ought to route there. /api returns the same. Navigating to my host name www.kubeplaytime.example returns a 404 from the browser - no error handling.
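
One likely reason for the default-backend 404 (a debugging sketch, not something from the original post): the Ingress rule above is bound to the host www.kubeplaytime.example, so the nginx controller only matches requests that carry that Host header; hitting the bare IP falls through to the default backend. Something along these lines should exercise the rule without touching DNS:

# hit the ingress-nginx LoadBalancer IP while presenting the host the rule expects
curl -H "Host: www.kubeplaytime.example" http://40.121.6.179/
curl -H "Host: www.kubeplaytime.example" http://40.121.6.179/api

# or map the host locally and browse to it as usual
echo "40.121.6.179 www.kubeplaytime.example" | sudo tee -a /etc/hosts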

QUESTIONS

  • Is the Ingress Controller strictly necessary, and if so is there a less complicated version of this?

  • I feel I am close, what am I doing wrong?

FULL YAML

Available here: https://gist.github.com/patientplatypus/fa07648339ee6538616cb69282a84938

Thanks for the help!

EDIT EDIT EDIT

I've attempted to use Helm. On the surface it appears to be a simple interface, so I tried spinning it up:

patientplatypus:~/Documents/kubePlay:12:13:00$helm install stable/nginx-ingress
NAME:   erstwhile-beetle
LAST DEPLOYED: Sun May  6 12:13:30 2018
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/ConfigMap
NAME                                       DATA  AGE
erstwhile-beetle-nginx-ingress-controller  1     1s

==> v1/Service
NAME                                            TYPE          CLUSTER-IP   EXTERNAL-IP  PORT(S)                     AGE
erstwhile-beetle-nginx-ingress-controller       LoadBalancer  10.0.216.38  <pending>    80:31494/TCP,443:32118/TCP  1s
erstwhile-beetle-nginx-ingress-default-backend  ClusterIP     10.0.55.224  <none>       80/TCP                      1s

==> v1beta1/Deployment
NAME                                            DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
erstwhile-beetle-nginx-ingress-controller       1        1        1           0          1s
erstwhile-beetle-nginx-ingress-default-backend  1        1        1           0          1s

==> v1beta1/PodDisruptionBudget
NAME                                            MIN AVAILABLE  MAX UNAVAILABLE  ALLOWED DISRUPTIONS  AGE
erstwhile-beetle-nginx-ingress-controller       1              N/A              0                    1s
erstwhile-beetle-nginx-ingress-default-backend  1              N/A              0                    1s

==> v1/Pod(related)
NAME                                                             READY  STATUS             RESTARTS  AGE
erstwhile-beetle-nginx-ingress-controller-7df9b78b64-24hwz       0/1    ContainerCreating  0         1s
erstwhile-beetle-nginx-ingress-default-backend-849b8df477-gzv8w  0/1    ContainerCreating  0         1s


NOTES:
The nginx-ingress controller has been installed.
It may take a few minutes for the LoadBalancer IP to be available.
You can watch the status by running 'kubectl --namespace default get services -o wide -w erstwhile-beetle-nginx-ingress-controller'

An example Ingress that makes use of the controller:

  apiVersion: extensions/v1beta1
  kind: Ingress
  metadata:
    annotations:
      kubernetes.io/ingress.class: nginx
    name: example
    namespace: foo
  spec:
    rules:
      - host: www.example.com
        http:
          paths:
            - backend:
                serviceName: exampleService
                servicePort: 80
              path: /
    # This section is only required if TLS is to be enabled for the Ingress
    tls:
        - hosts:
            - www.example.com
          secretName: example-tls

If TLS is enabled for the Ingress, a Secret containing the certificate and key must also be provided:

  apiVersion: v1
  kind: Secret
  metadata:
    name: example-tls
    namespace: foo
  data:
    tls.crt: <base64 encoded cert>
    tls.key: <base64 encoded key>
  type: kubernetes.io/tls

Seemingly this is really nice - it spins everything up and gives an example of how to add an Ingress. Since I spun up Helm against a blank cluster, I used the following yaml file to add in what I thought would be required.

The file:

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: frontend
  labels:
    app: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: nginx
        image: patientplatypus/frontend_example
        ports:
        - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  type: LoadBalancer
  selector:
    app: frontend
  ports:
    - protocol: TCP
      port: 80
      targetPort: 3000
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: backend
  labels:
    app: backend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
      - name: nginx
        image: patientplatypus/backend_example
        ports:
        - containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  type: LoadBalancer
  selector:
    app: backend
  ports:
    - protocol: TCP
      port: 80
      targetPort: 5000
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    - host: www.example.com
      http:
        paths:
          - path: /api
            backend:
              serviceName: backend
              servicePort: 80
          - path: /
            frontend:
              serviceName: frontend
              servicePort: 80

Deploying this to the cluster, however, runs into this error:

patientplatypus:~/Documents/kubePlay:11:44:20$kubectl create -f kube-deploy.yaml
deployment.apps "frontend" created
service "frontend" created
deployment.apps "backend" created
service "backend" created
error: error validating "kube-deploy.yaml": error validating data: [ValidationError(Ingress.spec.rules[0].http.paths[1]): unknown field "frontend" in io.k8s.api.extensions.v1beta1.HTTPIngressPath, ValidationError(Ingress.spec.rules[0].http.paths[1]): missing required field "backend" in io.k8s.api.extensions.v1beta1.HTTPIngressPath]; if you choose to ignore these errors, turn validation off with --validate=false
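
Reading that validation error closely (the message itself points at the problem, even though the original post doesn't resolve it here): paths[1] of the Ingress uses a frontend: key, and HTTPIngressPath only accepts backend:. The second path would presumably need to look like this:

          - path: /
            backend:
              serviceName: frontend
              servicePort: 80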

So the question then becomes: well, crap, how do I debug this? If you spit out the code that Helm produces, it's basically unreadable by a person - there's no way to go in there and figure out what's going on.

Check it out: https://gist.github.com/patientplatypus/0e281bf61307f02e16e0091397a1d863 - over 1,000 lines!

If anyone has a better way to debug a Helm deploy, add it to the list of open questions.
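
For what it's worth, a couple of standard ways to see what a chart is going to create, or did create (a sketch using stock Helm 2 commands; the release name is the one from the output above):

# render the chart's templates locally without installing anything
helm install stable/nginx-ingress --dry-run --debug

# show the manifests Helm actually submitted for an existing release
helm get manifest erstwhile-beetle

# list what the release put into the cluster (assumes the chart's usual release label)
kubectl get all -l release=erstwhile-beetle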

EDIT EDIT EDIT EDIT

To simplify in the extreme, I'm attempting to make a call from one pod to another using only the service name (i.e. in-cluster DNS within the namespace).

So here is my React code where I make the http request:

axios.get('http://backend/test')
.then(response=>{
  console.log('return from backend and response: ', response);
})
.catch(error=>{
  console.log('return from backend and error: ', error);
})

I've also attempted to use http://backend.exampledeploy.svc.cluster.local/test without luck.

Here is my Node code handling the GET:

router.get('/test', function(req, res, next) {
  res.json({"test":"test"})
});

Here is the yaml file that I'm uploading to the cluster:

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: frontend
  namespace: exampledeploy
  labels:
    app: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: nginx
        image: patientplatypus/frontend_example
        ports:
        - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: frontend
  namespace: exampledeploy
spec:
  type: LoadBalancer
  selector:
    app: frontend
  ports:
    - protocol: TCP
      port: 80
      targetPort: 3000
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: backend
  namespace: exampledeploy
  labels:
    app: backend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
      - name: nginx
        image: patientplatypus/backend_example
        ports:
        - containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
  name: backend
  namespace: exampledeploy
spec:
  type: LoadBalancer
  selector:
    app: backend
  ports:
    - protocol: TCP
      port: 80
      targetPort: 5000

Uploading to the cluster appears to work, as I can see in my terminal:

patientplatypus:~/Documents/kubePlay:14:33:20$kubectl get all --namespace=exampledeploy
NAME                            READY     STATUS    RESTARTS   AGE
pod/backend-584c5c59bc-5wkb4    1/1       Running   0          15m
pod/backend-584c5c59bc-jsr4m    1/1       Running   0          15m
pod/backend-584c5c59bc-txgw5    1/1       Running   0          15m
pod/frontend-647c99cdcf-2mmvn   1/1       Running   0          15m
pod/frontend-647c99cdcf-79sq5   1/1       Running   0          15m
pod/frontend-647c99cdcf-r5bvg   1/1       Running   0          15m

NAME               TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)        AGE
service/backend    LoadBalancer   10.0.112.160   168.62.175.155   80:31498/TCP   15m
service/frontend   LoadBalancer   10.0.246.212   168.62.37.100    80:31139/TCP   15m

NAME                             DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.extensions/backend    3         3         3            3           15m
deployment.extensions/frontend   3         3         3            3           15m

NAME                                        DESIRED   CURRENT   READY     AGE
replicaset.extensions/backend-584c5c59bc    3         3         3         15m
replicaset.extensions/frontend-647c99cdcf   3         3         3         15m

NAME                       DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/backend    3         3         3            3           15m
deployment.apps/frontend   3         3         3            3           15m

NAME                                  DESIRED   CURRENT   READY     AGE
replicaset.apps/backend-584c5c59bc    3         3         3         15m
replicaset.apps/frontend-647c99cdcf   3         3         3         15m

However, when I attempt to make the request I get the following error:

return from backend and error:
Error: Network Error
Stack trace:
createError@http://168.62.37.100/static/js/bundle.js:1555:15
handleError@http://168.62.37.100/static/js/bundle.js:1091:14
App.js:14

Since the axios call is being made from the browser, I'm wondering if it is simply not possible to use this method to call the backend, even though the backend and the frontend are in different pods. I'm a little lost, as I thought this was the simplest possible way to network pods together.

EDIT X5

I've determined that it is possible to curl the backend from the command line by exec'ing into the pod like this:

patientplatypus:~/Documents/kubePlay:15:25:25$kubectl exec -ti frontend-647c99cdcf-5mfz4 --namespace=exampledeploy -- curl -v http://backend/test
* Hostname was NOT found in DNS cache
*   Trying 10.0.249.147...
* Connected to backend (10.0.249.147) port 80 (#0)
> GET /test HTTP/1.1
> User-Agent: curl/7.38.0
> Host: backend
> Accept: */*
>
< HTTP/1.1 200 OK
< X-Powered-By: Express
< Content-Type: application/json; charset=utf-8
< Content-Length: 15
< ETag: W/"f-SzkCEKs7NV6rxiz4/VbpzPnLKEM"
< Date: Sun, 06 May 2018 20:25:49 GMT
< Connection: keep-alive
<
* Connection #0 to host backend left intact
{"test":"test"}

What this means, without a doubt, is that because the front-end code is being executed in the browser, it needs Ingress to gain entry into the pod - the HTTP requests from the front end are what's breaking with simple pod networking. I was unsure of this, but it means Ingress is necessary.


First of all, let's clarify some apparent misconceptions. You mentioned your front-end being a React application, which will presumably run in the user's browser. For this to work, your actual problem is not your back-end and front-end pods communicating with each other; rather, the browser needs to be able to connect to both of these pods (to the front-end pod in order to load the React application, and to the back-end pod for the React app to make API calls).

To visualize:

                                                 +---------+
                                             +---| Browser |---+
                                             |   +---------+   |
                                             V                 V
+-----------+     +----------+         +-----------+     +----------+
| Front-end |---->| Back-end |         | Front-end |     | Back-end |
+-----------+     +----------+         +-----------+     +----------+
      (what you asked for)                     (what you need)

As already stated, the easiest solution for this would be to use an Ingress controller. I won't go into detail on how to set up an Ingress controller here; in some cloud environments (like GKE) you will be able to use an Ingress controller provided to you by the cloud provider. Otherwise, you can set up the NGINX Ingress controller; have a look at the NGINX Ingress controller's deployment guide for more information.
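
For instance, on a cluster without a provider-supplied controller, one common route (the same one the question attempted) is the stable Helm chart; roughly:

# install the NGINX ingress controller via the stable chart (Helm 2 syntax, as in the question)
helm install stable/nginx-ingress

# then watch for the controller's LoadBalancer Service to receive an external IP
kubectl get services -w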

Define services

Start by defining Service resources for both your front-end and back-end application (these would also allow your Pods to communicate with each other). A service definition might look like this:

apiVersion: v1 kind: Service metadata:   name: backend spec:   selector:     app: backend   ports:     - protocol: TCP       port: 80       targetPort: 8080 

Make sure that your Pods have labels that can be selected by the Service resource (in this example, I'm using app=backend and app=frontend as labels).
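
For reference, a minimal Deployment sketch whose pod template carries that app=backend label (values are illustrative; the containerPort just has to match the Service's targetPort, which is 8080 in the example above, whereas the question's own backend listens on 5000):

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: backend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: backend          # must match ...
  template:
    metadata:
      labels:
        app: backend        # ... the label the Service selects on
    spec:
      containers:
      - name: backend
        image: your-backend-image   # e.g. patientplatypus/backend_example from the question
        ports:
        - containerPort: 8080       # must equal the Service's targetPort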

If you want to establish Pod-to-Pod communication, you're done now. In each Pod, you can now use backend.<namespace>.svc.cluster.local (or backend as shorthand) and frontend as host names to connect to that Pod.
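
To make that concrete, a quick check along the lines of the question's own curl test (pod and namespace names are placeholders):

# from a pod in the same namespace, the short service name is enough
kubectl exec -ti <some-frontend-pod> -- curl http://backend/test

# from any namespace, the fully qualified service name works
kubectl exec -ti <some-pod> -- curl http://backend.<namespace>.svc.cluster.local/test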

Define Ingresses

Next up, you can define the Ingress resources; since both services will need connectivity from outside the cluster (the user's browser), you will need Ingress definitions for both services.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: frontend
spec:
  rules:
  - host: www.your-application.example
    http:
      paths:
      - path: /
        backend:
          serviceName: frontend
          servicePort: 80
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: backend
spec:
  rules:
  - host: api.your-application.example
    http:
      paths:
      - path: /
        backend:
          serviceName: backend
          servicePort: 80

Alternatively, you could also aggregate frontend and backend with a single Ingress resource (no "right" answer here, just a matter of preference):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: frontend
spec:
  rules:
  - host: www.your-application.example
    http:
      paths:
      - path: /
        backend:
          serviceName: frontend
          servicePort: 80
      - path: /api
        backend:
          serviceName: backend
          servicePort: 80

After that, make sure that both www.your-application.example and api.your-application.example point to your Ingress controller's external IP address, and you should be done.
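
With that in place, the browser-side call from the question targets the public API host instead of the in-cluster service name - or a relative /api path if you go with the single-Ingress variant. A rough sketch (hostnames as above; the /api variant assumes the back end serves its routes under /api, or that the Ingress rewrites the path):

// separate-hosts variant: call the back end through its own Ingress host
axios.get('http://api.your-application.example/test')
  .then(response => console.log('return from backend and response: ', response))
  .catch(error => console.log('return from backend and error: ', error));

// single-Ingress variant: /api is routed to the backend Service,
// so a relative URL against the page's own origin is enough
axios.get('/api/test')
  .then(response => console.log('return from backend and response: ', response))
  .catch(error => console.log('return from backend and error: ', error));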
