Tips for Using Kubernetes in Large Enterprises

Kubernetes (K8s), designed by Google engineers drawing on the company's experience of starting more than two billion containers each week, is a current favorite of the DevOps and larger tech world for good reason. It provides the orchestration and automation that companies, particularly enterprise organizations, need to make the most of cloud ecosystems and maximize productivity, both of which are near requirements in the modern business world. Read on to find out how to ensure it does the same for you.

Kubernetes Overview

Kubernetes is an open-source container orchestration platform that is used to deploy, manage, and monitor containerized workloads and applications across on-premises and cloud environments. With it, you can run applications in a wide variety of self-contained environments with scalable performance. Since the K8s framework is focused on the automation of tasks, it is ideal for the development and deployment of applications and for DevOps workflows.
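
Much of that management surface is exposed through the Kubernetes API. As a rough sketch of what talking to that API looks like, the Go snippet below uses the official client-go library to load a local kubeconfig and list the pods in a namespace; the kubeconfig path and the "default" namespace are illustrative assumptions rather than requirements.

    // list_pods.go - minimal sketch: connect with a local kubeconfig and list pods.
    package main

    import (
        "context"
        "fmt"
        "log"
        "path/filepath"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
        "k8s.io/client-go/util/homedir"
    )

    func main() {
        // Load credentials from ~/.kube/config (an assumption for this sketch).
        kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
        config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            log.Fatal(err)
        }
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            log.Fatal(err)
        }

        // List the pods currently running in the "default" namespace.
        pods, err := clientset.CoreV1().Pods("default").List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            log.Fatal(err)
        }
        for _, p := range pods.Items {
            fmt.Printf("%s\t%s\n", p.Name, p.Status.Phase)
        }
    }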

Kubernetes features include:

  • Service discovery and load balancing—can expose containers by DNS name or IP address and distribute network traffic across them to keep load stable
  • Storage orchestration—allows automatic mounting of any storage type, including local and public cloud
  • Automated rollouts and rollbacks—allows you to define the desired state of deployed containers, systematically roll out changes, and automatically roll back on failure
  • Automatic bin packing—lets you specify how much CPU and RAM each container in a pod needs and uses those specifications to place workloads efficiently (see the sketch after this list)
  • Self-healing—can restart or replace failed containers, terminate unresponsive containers, and restrict traffic until containers are ready
  • Secret and configuration management—can store, deploy, and update secrets and app configurations without rebuilding container images or exposing sensitive information
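
To make the bin-packing item above concrete: the scheduler places pods onto nodes based on the CPU and memory they request, while limits cap what each container may actually consume. A minimal sketch in Go follows; the pod name, image, namespace, and resource values are illustrative assumptions, and the kubeconfig-loading boilerplate is the same as in the earlier snippet.

    // resource_limits.go - sketch: create a pod that declares CPU/memory
    // requests (used for scheduling/bin packing) and limits (hard caps).
    package main

    import (
        "context"
        "log"
        "path/filepath"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
        "k8s.io/client-go/util/homedir"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("",
            filepath.Join(homedir.HomeDir(), ".kube", "config"))
        if err != nil {
            log.Fatal(err)
        }
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            log.Fatal(err)
        }

        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "web-demo"},
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{
                    Name:  "web",
                    Image: "nginx:1.25",
                    Resources: corev1.ResourceRequirements{
                        // Requests tell the scheduler how much to reserve;
                        // limits cap what the container can consume.
                        Requests: corev1.ResourceList{
                            corev1.ResourceCPU:    resource.MustParse("250m"),
                            corev1.ResourceMemory: resource.MustParse("128Mi"),
                        },
                        Limits: corev1.ResourceList{
                            corev1.ResourceCPU:    resource.MustParse("500m"),
                            corev1.ResourceMemory: resource.MustParse("256Mi"),
                        },
                    },
                }},
            },
        }

        if _, err := clientset.CoreV1().Pods("default").Create(
            context.TODO(), pod, metav1.CreateOptions{}); err != nil {
            log.Fatal(err)
        }
    }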

Tips for Using Kubernetes on an Enterprise Scale

If you’re just starting with K8s or transitioning from a small business to enterprise-scale deployment, the following tips can help you get more out of your configuration.

Consider a Managed Service

Although it may not seem like much of a tip, using a managed service for enterprise deployments can be well worth the additional cost. The high complexity of Kubernetes makes it challenging to deploy and maintain if you do not already have the proper in-house expertise.

Managed services can help fill this gap through varying levels of support, including self-service deployments of templated configurations, management of self-hosted operations, and fully managed Platform-as-a-Service (PaaS) solutions. On the other hand, if you are interested in developing K8s itself or building your own platform on top of it, managed services likely won’t benefit you.

If you are interested in managed services, however, these are just a few of the options: 

  • Google Kubernetes Engine (GKE)—provides a production-ready environment in Google Cloud for the installation, management, and operation of K8s clusters. It includes financially backed SLAs, vertical auto-scaling, a Sandbox environment for added security, and usage metering.
  • Platform9 Managed Kubernetes—provides a fully managed Software-as-a-Service solution that is infrastructure-agnostic. It includes zero-touch upgrades, multi-cluster operations, built-in monitoring, and a 24/7/365 SLA.
  • Amazon Elastic Kubernetes Service (EKS)—provides a managed service for running K8s control plane instances in AWS. It is integrated with many AWS services, automatically detects and replaces problem instances, and provides automated version upgrades.
  • Red Hat OpenShift Container Platform—provides a self-managed container platform that is infrastructure-agnostic. It is Linux-based, can be integrated directly into Integrated Development Environments (IDEs), and includes built-in monitoring.

Pay Attention to Security

The complexity of K8s deployments has a measurable effect on security management, with more moving parts to secure than you might expect. The most important goals when securing your configuration are listed below, followed by a short RBAC sketch:

  • Control API access—use Transport Layer Security (TLS) for all traffic and make sure to authenticate and check the authorization of all API clients
  • Control Kubelet access—enable Kubelet authentication and authorization 
  • Manage workloads and users—set resource limits, control pod node access, and restrict workload/user privileges, network access, and cloud metadata API access
  • Protect cluster components—restrict etcd access, frequently rotate infrastructure credentials, limit the use of alpha or beta features and third-party integrations, and use encryption at rest
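
In practice, much of "checking the authorization of all API clients" and "restricting user privileges" comes down to Role-Based Access Control (RBAC). The sketch below, with illustrative names and namespace, creates a namespaced Role that can only read pods and binds it to a single service account; the cluster connection boilerplate is the same as in the earlier snippets.

    // rbac_sketch.go - sketch: a read-only Role for pods plus a RoleBinding
    // that grants it to one service account. Names/namespace are illustrative.
    package main

    import (
        "context"
        "log"
        "path/filepath"

        rbacv1 "k8s.io/api/rbac/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
        "k8s.io/client-go/util/homedir"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("",
            filepath.Join(homedir.HomeDir(), ".kube", "config"))
        if err != nil {
            log.Fatal(err)
        }
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            log.Fatal(err)
        }
        ns := "team-a"

        // Role: may only get/list/watch pods within its own namespace.
        role := &rbacv1.Role{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-reader", Namespace: ns},
            Rules: []rbacv1.PolicyRule{{
                APIGroups: []string{""},
                Resources: []string{"pods"},
                Verbs:     []string{"get", "list", "watch"},
            }},
        }
        if _, err := clientset.RbacV1().Roles(ns).Create(
            context.TODO(), role, metav1.CreateOptions{}); err != nil {
            log.Fatal(err)
        }

        // RoleBinding: grant the Role to a single service account only.
        binding := &rbacv1.RoleBinding{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-reader-binding", Namespace: ns},
            Subjects: []rbacv1.Subject{{
                Kind:      "ServiceAccount",
                Name:      "reporting-bot",
                Namespace: ns,
            }},
            RoleRef: rbacv1.RoleRef{
                APIGroup: "rbac.authorization.k8s.io",
                Kind:     "Role",
                Name:     "pod-reader",
            },
        }
        if _, err := clientset.RbacV1().RoleBindings(ns).Create(
            context.TODO(), binding, metav1.CreateOptions{}); err != nil {
            log.Fatal(err)
        }
    }

Keeping Roles namespaced and narrowly scoped like this, rather than handing out broad cluster-wide bindings, limits the blast radius of any single compromised credential.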

Using cluster segmentation, container-native firewall rules (such as network policies), and separation of roles by duty will help further protect your deployment. Perhaps the most important security measure, however, is to monitor and log your systems so that, if an incident occurs, you can act quickly and effectively.
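
In Kubernetes, container-native firewalling and segmentation are typically expressed as NetworkPolicy objects, which are enforced by the cluster's network plugin. A minimal sketch, with illustrative labels and namespace, that only admits traffic to "backend" pods from "frontend" pods in the same namespace:

    // netpol_sketch.go - sketch: allow ingress to backend pods only from
    // frontend pods; everything else is denied. Labels/namespace illustrative.
    package main

    import (
        "context"
        "log"
        "path/filepath"

        netv1 "k8s.io/api/networking/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
        "k8s.io/client-go/util/homedir"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("",
            filepath.Join(homedir.HomeDir(), ".kube", "config"))
        if err != nil {
            log.Fatal(err)
        }
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            log.Fatal(err)
        }

        policy := &netv1.NetworkPolicy{
            ObjectMeta: metav1.ObjectMeta{Name: "backend-allow-frontend", Namespace: "shop"},
            Spec: netv1.NetworkPolicySpec{
                // Applies to pods labeled app=backend; once selected, any
                // ingress not matched below is denied.
                PodSelector: metav1.LabelSelector{
                    MatchLabels: map[string]string{"app": "backend"},
                },
                PolicyTypes: []netv1.PolicyType{netv1.PolicyTypeIngress},
                Ingress: []netv1.NetworkPolicyIngressRule{{
                    From: []netv1.NetworkPolicyPeer{{
                        PodSelector: &metav1.LabelSelector{
                            MatchLabels: map[string]string{"app": "frontend"},
                        },
                    }},
                }},
            },
        }

        if _, err := clientset.NetworkingV1().NetworkPolicies("shop").Create(
            context.TODO(), policy, metav1.CreateOptions{}); err != nil {
            log.Fatal(err)
        }
    }

Note that NetworkPolicy objects only take effect if the cluster's network plugin enforces them, so check your CNI's capabilities before relying on them.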

Monitor and Log System Events

Even small drops in availability or brief downtimes can have a significant impact on revenue and productivity. To avoid these incidents and uphold security standards, you should employ robust and consistent monitoring and logging measures.

Monitoring will alert you to security or performance issues, allowing you to respond quickly and prevent or minimize damage. You can accomplish this using either the resource metrics pipeline or a full metrics pipeline, such as Prometheus, Google Cloud Monitoring, or Sysdig. The resource metrics pipeline provides a limited set of CPU and memory metrics, exposed through the Metrics API and used by tools such as kubectl top, while a full metrics pipeline provides a more comprehensive set that is better suited to driving automated responses to performance drops.
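
If you want to pull resource metrics programmatically rather than through kubectl top, the official metrics client reads the same Metrics API. A minimal sketch follows; it assumes metrics-server (or another Metrics API provider) is installed in the cluster, and the namespace is illustrative.

    // pod_metrics.go - sketch: read per-container CPU/memory usage from the
    // Metrics API (the data behind `kubectl top pods`).
    package main

    import (
        "context"
        "fmt"
        "log"
        "path/filepath"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/tools/clientcmd"
        "k8s.io/client-go/util/homedir"
        metricsclient "k8s.io/metrics/pkg/client/clientset/versioned"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("",
            filepath.Join(homedir.HomeDir(), ".kube", "config"))
        if err != nil {
            log.Fatal(err)
        }
        mc, err := metricsclient.NewForConfig(config)
        if err != nil {
            log.Fatal(err)
        }

        podMetrics, err := mc.MetricsV1beta1().PodMetricses("default").List(
            context.TODO(), metav1.ListOptions{})
        if err != nil {
            log.Fatal(err)
        }
        for _, pm := range podMetrics.Items {
            for _, c := range pm.Containers {
                cpu := c.Usage[corev1.ResourceCPU]
                mem := c.Usage[corev1.ResourceMemory]
                fmt.Printf("%s/%s cpu=%s memory=%s\n", pm.Name, c.Name, cpu.String(), mem.String())
            }
        }
    }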

Logging can help you track down and analyze any issues that occur, is required for auditing and regulatory compliance, and can provide insight for performance optimization. In K8s, logs are accessed via kubectl logs or through integrated third-party tools, like Elastic or Fluentd, which have the additional benefit of log aggregation and search functions. To simplify incident analysis, as well as compliance and auditing, you should have your applications write logs to stdout/stderr in a consistent format, regardless of the tooling you choose.
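
The stdout/stderr recommendation is mostly an application-side concern: the kubelet captures those streams for each container, and kubectl logs or aggregators such as Fluentd read them from there. A minimal sketch of emitting structured JSON logs to stdout with only the Go standard library (the field names are illustrative):

    // stdout_logging.go - sketch: write structured JSON logs to stdout so the
    // container runtime and any log aggregator can pick them up unchanged.
    package main

    import (
        "log/slog"
        "os"
    )

    func main() {
        // JSON handler on stdout; the kubelet captures this stream per container.
        logger := slog.New(slog.NewJSONHandler(os.Stdout, nil))

        logger.Info("order processed",
            "order_id", "A-1042",
            "duration_ms", 87,
        )
        logger.Error("payment gateway unreachable",
            "order_id", "A-1043",
            "retry", true,
        )
    }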

Use Custom Controllers

In K8s, controllers are used to ensure the desired state of a cluster matches its observed state, with each controller responsible for a particular resource. There are numerous built-in controllers you can use, such as the ReplicaSet controller, which makes sure that the correct number of pods is running in a cluster, or the node controller, which monitors the health of nodes and responds if they go down.

Built-in controllers are great for standard tasks, but you’ll get more flexibility and control from custom controllers. For example, they can facilitate the dynamic reloading of application configurations when the cluster changes, the creation of namespaces, deployment monitoring, node issue correction, and more.

Custom controllers can help you simplify deployment management, especially in comparison with external toolchains: they let you use a small amount of code to drive the Kubernetes APIs, and, combined with custom resources, they can even provide a declarative API.
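
Production-grade custom controllers are usually built with custom resources and a framework such as Kubebuilder or the Operator SDK, but the core pattern of watching the cluster and reacting to changes can be sketched with a shared informer. The handlers below only log events and are purely illustrative; a real controller would compare desired and observed state and reconcile at those points.

    // controller_sketch.go - sketch: the watch/react core of a controller,
    // using a shared informer on Deployments. Handlers only log here.
    package main

    import (
        "log"
        "path/filepath"
        "time"

        appsv1 "k8s.io/api/apps/v1"
        "k8s.io/client-go/informers"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/cache"
        "k8s.io/client-go/tools/clientcmd"
        "k8s.io/client-go/util/homedir"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("",
            filepath.Join(homedir.HomeDir(), ".kube", "config"))
        if err != nil {
            log.Fatal(err)
        }
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            log.Fatal(err)
        }

        // Shared informer: maintains a local cache of Deployments and emits
        // add/update/delete events as the cluster changes.
        factory := informers.NewSharedInformerFactory(clientset, 30*time.Second)
        informer := factory.Apps().V1().Deployments().Informer()

        informer.AddEventHandler(cache.ResourceEventHandlerFuncs{
            AddFunc: func(obj interface{}) {
                d := obj.(*appsv1.Deployment)
                log.Printf("deployment added: %s/%s", d.Namespace, d.Name)
            },
            UpdateFunc: func(oldObj, newObj interface{}) {
                d := newObj.(*appsv1.Deployment)
                // A real controller would reconcile here (reload config,
                // correct drift, etc.) instead of just logging.
                log.Printf("deployment updated: %s/%s", d.Namespace, d.Name)
            },
            DeleteFunc: func(obj interface{}) {
                log.Printf("deployment deleted")
            },
        })

        stop := make(chan struct{})
        defer close(stop)
        factory.Start(stop)
        cache.WaitForCacheSync(stop, informer.HasSynced)

        // Block forever; a real controller would tie this to signal handling.
        select {}
    }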

Conclusion

Enterprise-scale Kubernetes deployments can be extremely complex and a significant challenge to manage and maintain, but this complexity doesn’t negate the benefits that a successful deployment can provide. Hopefully, the tips covered here have given you the confidence needed to make the most of K8s, or at least have given you some ideas of where to start.

If you still feel lost, make sure to reach out to the extensive community that supports K8s. There is a wide range of experience represented there and people are often glad to help you out if they can. 

About the Author

Gilad David Maayan is a technology writer who has worked with over 150 technology companies including SAP, Oracle, Zend, CheckPoint and Ixia, producing technical and thought leadership content that elucidates technical solutions for developers and IT leadership. Gilad holds a B.Sc. in Economics from Tel Aviv University, and has a keen interest in psychology, Jewish spirituality, practical philosophy and their connection to business, innovation and technology.
