Networks, Services, Routes and Scaling
Networking
- A Kubernetes cluster is composed of a master node and worker nodes.
- These are virtual or physical machines, so they each have their own IP address.
- An application is deployed on this cluster in the form of Docker containers running in pods. Each pod gets an IP address assigned to it. These pods could be running different types of applications that depend on each other.
- Pods must be able to communicate with each other, so they must be on a network configured so that they can reach one another, each with a unique IP address.
- OpenShift uses OpenShift Software-defined Networking (OpenShift SDN) to solve this problem.
OpenShift SDN
- OpenShift SDN creates a virtual overlay network spanning all nodes using Open vSwitch.
- Open vSwitch is a distributed virtual switch used to interconnect virtual machines in a hypervisor.
- The default CIDR for the overlay network is 10.128.0.0/14.
- Each node is assigned a unique subnet from this range, such as 10.128.0.0/23, 10.128.2.0/23, or 10.128.4.0/23.
- All pods on a node get a unique IP address within that node's subnet.
- You can see the IP addresses assigned to each pod with:
oc get pods -o wide
- However, communicating through IP addresses directly is not a good idea, as a pod's IP address is not guaranteed to stay the same when the pod restarts.
- OpenShift has a built-in DNS server that maps pod and service names to IP addresses.
- This enables pods to connect to each other using pod names or service names instead of IP addresses.
- OpenShift leverages SkyDNS to implement DNS functionality on top of etcd.
- Establishing connections between pods directly is not recommended.
- The recommended way is to use services.
- OpenShift SDN provides different kinds of plugins.
- The default plugin is ovs-subnet, which provides network connectivity between all pods.
- OpenShift also provides the ovs-multitenant plugin to isolate projects from each other.
- OpenShift also supports third-party plugins such as Nuage, Contiv, and Flannel.
- Each of these plugins has its own approach to networking.
- Users can access our application by using Services and Routes.
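As a sketch of the DNS behaviour described above (the service name simple-webapp-docker and project name myproject are assumptions for illustration), a service is reachable from other pods by its short name within the same project, or by its fully qualified name following the `<service>.<project>.svc.cluster.local` pattern:

```shell
# From inside another pod in the same project:
curl http://simple-webapp-docker:8080

# From a pod in a different project, use the fully qualified name:
curl http://simple-webapp-docker.myproject.svc.cluster.local:8080
```

Either form resolves through the cluster DNS to the service's cluster IP, so the caller never has to track individual pod IPs.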
Services
- The concept of services in Kubernetes is the same as in OpenShift.
- Services help connect different applications or groups of pods with one another instead of using IP addresses.
- A service acts as a load balancer for each tier of our microservices architecture.
- Services provide us with the flexibility to modify and re-deploy the underlying microservices without having to worry about modifying the configuration of other dependent applications.
- Each service gets an internal IP address assigned.
- This is called cluster IP which is for internal communication within the cluster.
- We link a service to its pods using selectors.
- To link the service to all the pods created from the Simple Webapp Docker deployment, we use the selector
deploymentconfig=simple-webapp-docker
- We also specify the port on the service to listen to and the target port on the pods to forward requests to.
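As a sketch (using the deployment config name from the example above), such a service can also be created directly from the command line rather than from YAML:

```shell
# Create a service for the deployment config, listening on 8080
# and forwarding to port 8080 on the pods:
oc expose dc simple-webapp-docker --port=8080 --target-port=8080

# Verify the service and its cluster IP:
oc get svc
```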
Route
- The user would like to access our application using a hostname like www.somewebapp.com
- A route helps us expose the service to external users through a hostname.
- A route is backed by a router, which is a proxy such as HAProxy (the default) or F5.
- You can configure load balancing, security as well as split traffic between services with routes.
- The route is responsible for balancing load across the different pods within the deployment.
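As a hedged sketch (the hostname is an assumption for illustration), a route exposing the service above could be created and inspected from the command line:

```shell
# Expose the service externally through a route with a custom hostname:
oc expose service simple-webapp-docker --hostname=www.somewebapp.com

# Inspect the generated route:
oc get routes
oc describe route simple-webapp-docker
```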
Load Balancing
- Source is the default strategy.
- Source strategy looks at the IP address of the user accessing the application and makes sure that user is always directed to the same backend server for the duration of that session i.e. sticky session functionality.
- Round Robin
- Each request is directed to the next backend in turn, even when requests originate from the same user IP address.
- Leastconn
- Routes traffic to the endpoint with the lowest number of connections.
Security
- You can configure SSL so the web application can be accessed using HTTPS.
- Under Insecure Traffic, you can configure whether to allow users to access your site over HTTP or redirect them to HTTPS.
- You can also configure the certificates and private keys in the same section.
- Routes also allow us to split traffic between two services for A/B testing purposes.
- The Alternate Services section splits traffic between the two services in this case.
- It is as easy as moving the slider in the direction we wish to go based on the test results.
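Putting the security and traffic-splitting options together, a minimal route manifest might look like the sketch below. The service names, hostname, and the 80/20 weights are assumptions for illustration; `tls.termination: edge` and `insecureEdgeTerminationPolicy: Redirect` enable HTTPS and redirect plain HTTP, while `alternateBackends` carries the A/B split:

```yaml
apiVersion: v1
kind: Route
metadata:
  name: simple-webapp-docker
spec:
  host: www.somewebapp.com
  to:
    kind: Service
    name: simple-webapp-docker      # primary service, receives 80% of traffic
    weight: 80
  alternateBackends:
    - kind: Service
      name: simple-webapp-docker-v2 # hypothetical B version, receives 20%
      weight: 20
  tls:
    termination: edge               # TLS is terminated at the router
    insecureEdgeTerminationPolicy: Redirect  # HTTP requests are redirected to HTTPS
```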
Workshop
Create Service
- Copy the Service YAML template below.
apiVersion: v1
kind: Service
metadata:
  name: simple-webapp-docker
spec:
  selector:
    deploymentconfig: simple-webapp-docker
  ports:
    - name: 8080-tcp
      port: 8080
      protocol: TCP
      targetPort: 8080
- This is an internal cluster IP address which cannot be accessed externally.
Create Route
- Use the UI and the default settings
- Try to update the code and see end-to-end process from build to deploy.
Scale Deployment
- Scaling our application is as easy as modifying the number of replicas in the replication controller.
- You may use simple-webapp-color to test scaling and routing.
- Check out documentation for Route.
- To change the load balancing strategy to round robin, add these 2 variables under annotations:
apiVersion: v1
kind: Route
metadata:
  annotations:
    haproxy.router.openshift.io/balance: 'roundrobin'
    haproxy.router.openshift.io/disable_cookies: 'true'
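As a sketch of the scaling and annotation steps above (the names simple-webapp-color and its route are taken from the workshop and are otherwise assumptions), both can also be done from the command line:

```shell
# Scale the deployment config to 3 replicas:
oc scale dc/simple-webapp-color --replicas=3

# Apply the round-robin annotation without editing YAML by hand:
oc annotate route/simple-webapp-color \
  haproxy.router.openshift.io/balance=roundrobin --overwrite
```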