When Don’t You Need Kubernetes?

Erbis
11 min read · Oct 11, 2023


According to a CNCF survey, 96% of companies use or are considering using Kubernetes in their IT environment. Big names like Google, Shopify, Udemy, Slack, Robinhood, Nubank, and The New York Times successfully use Kubernetes to enhance their software operations and streamline development pipelines. Statista ranks Kubernetes third among leading technology platforms, with a 16.4% share of the global market.

It may seem that no development project can do without Kubernetes and that every company should have it in its stack. But is Kubernetes the right fit for you, and is its use always economically justified?

The article below shares our insights.

Understand your use case

Imagine a huge e-commerce store with many products and an extensive customer base. Hundreds of customers visit the store daily and make hundreds of purchases.

Your task is to establish fast and uninterrupted dispatch of goods from warehouses. You can do this either automatically or manually. In the first case, you will install a system that automatically fills orders and adjusts stock whenever a purchase is made. In the second case, you will hire several managers to process orders and adjust stock levels by hand.

Now imagine a small store with a limited assortment of highly specialized goods. Purchases in the store are made regularly, but not in great numbers, and you can predict customer traffic at any given time.

Your task remains the same: establishing smooth dispatch of goods from the warehouse. You again have the choice to automate sales or process them manually. In this case, the second option is more cost-effective. Hiring one manager is cheaper and less hassle than developing or purchasing a warehouse management system (WMS).

Now, let’s map the e-commerce store example onto a software development system. In such a system, companies often rely on containerization, an approach that packages applications and their dependencies into isolated environments. In large enterprise systems, there are many such containers, and managing them manually is problematic.

This is where Kubernetes comes into play.

Kubernetes is used for automatic container orchestration. It provides tools for automated deployment and management of isolated environments within the software system. This allows organizations to manage large-scale projects more easily and efficiently.
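
To make that concrete, here is a minimal sketch of declaring such an isolated environment through the Kubernetes API, using the official Python client (the kubernetes package). The image, replica count, and namespace are hypothetical, and a working kubeconfig is assumed.

```python
# Minimal sketch: declare a Deployment of 3 identical containers and let
# Kubernetes keep them running. Assumes `pip install kubernetes` and a
# kubeconfig pointing at an existing cluster; all names are hypothetical.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside the cluster

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="web-app"),
    spec=client.V1DeploymentSpec(
        replicas=3,  # desired number of identical pods
        selector=client.V1LabelSelector(match_labels={"app": "web-app"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "web-app"}),
            spec=client.V1PodSpec(
                containers=[
                    client.V1Container(
                        name="web-app",
                        image="registry.example.com/web-app:1.0",  # hypothetical image
                        ports=[client.V1ContainerPort(container_port=8080)],
                    )
                ]
            ),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```

Once the Deployment is created, Kubernetes schedules the pods onto available nodes and keeps the declared number of replicas running.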

However, if your software is not resource-intensive and has a predictable load, automatic orchestration may not be needed. In this case, you can entrust an expert DevOps engineer with container management and save on IT environment setup.

For which projects is it best to use Kubernetes?

Kubernetes is best suited to high-load, scalable, big-data projects. Such projects typically involve working with multiple container clusters and cloud PaaS/IaaS solutions.

Thanks to containers, you can isolate the resources dedicated to specific parts of the project. For example, you can isolate the CPU, memory, and storage required for payment transaction processing so that these operations do not slow down order processing or shipment tracking.
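
As a rough sketch of that isolation, a container spec can declare the CPU and memory it is guaranteed (requests) and allowed to use (limits). The figures and names below are hypothetical.

```python
# Sketch: reserving and capping resources for a single container so that a
# heavy workload (e.g. payment processing) cannot starve its neighbours.
# Values are hypothetical; the kubernetes Python client is assumed.
from kubernetes import client

payments_container = client.V1Container(
    name="payments",
    image="registry.example.com/payments:1.0",  # hypothetical image
    resources=client.V1ResourceRequirements(
        requests={"cpu": "500m", "memory": "256Mi"},  # guaranteed minimum
        limits={"cpu": "1", "memory": "512Mi"},       # hard ceiling
    ),
)
```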

Containers also enable parallel processing of different operations. This lets you perform multiple tasks simultaneously without degrading overall app performance.

Last but not least, containers simplify handling high workloads because you can add or remove them as needed.

But let’s get back to Kubernetes and its use in high-load projects.

To set up CI/CD in such projects, you need to make sure that each container performs its task reliably and on time and that interdependent processes do not slow each other down.

Using Kubernetes is among the best DevOps practices today. It allows you to build a highly efficient pipeline from many containers. With features like kubelet health checks and the Horizontal Pod Autoscaler, you can quickly respond to changing requirements. For example, Kubernetes can automatically restart application containers when problems are detected, and it can adjust the number of running Pods (replicas) in a Deployment, ReplicaSet, or other controller based on the observed workload or resource utilization.
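
A hedged sketch of that last point: a HorizontalPodAutoscaler that keeps a hypothetical web-app Deployment between 2 and 10 replicas, targeting roughly 70% average CPU utilization. Again, the names and thresholds are made up, and the Python client is assumed.

```python
# Sketch: autoscale the hypothetical "web-app" Deployment between 2 and 10
# replicas, targeting ~70% average CPU utilization. Assumes the kubernetes
# Python client and an existing Deployment with resource requests set.
from kubernetes import client, config

config.load_kube_config()

hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="web-app-hpa"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="web-app"
        ),
        min_replicas=2,
        max_replicas=10,
        target_cpu_utilization_percentage=70,
    ),
)

client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="default", body=hpa
)
```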

Thus, Kubernetes acts as a conductor in the orchestra of containers. It ensures that each container plays its part effectively, comes in on time, and stays in tempo with the rest.

(Image: container orchestration)

Projects that do not need to use Kubernetes

For small and medium-sized companies, containerization in general and Kubernetes in particular may not be needed. Such companies may find the Kubernetes setup too complex and resource-intensive.

Also, if you have a monolithic architecture, deploying and managing it manually is often simpler than adopting automatic deployment tools.

When the codebase becomes too difficult to maintain, you may split the project into microservices with their own interfaces. Even then, you can manage the microservices without an orchestrator, or with a simpler one.

Additionally, if you have a limited budget and are not going to expand your development team, consider using virtual machines. You can manage them with a configuration management system such as Ansible. Such machines do not require highly specialized staff to maintain them.

All in all, the types of projects where you do not need Kubernetes are as follows:

  • Monolithic applications
  • Applications with predictable user traffic
  • Low- or medium-load applications
  • Static websites
  • Single-instance applications
  • Resource-constrained environments

(Image: projects that don’t need Kubernetes)

Pros and cons of Kubernetes

Kubernetes is a highly efficient but not a one-size-fits-all technology. It can greatly enhance containerized infrastructure but may be overkill for small projects with predictable traffic.

If you ask yourself, “Do I need Kubernetes?” consider the pros and cons of this technology before making the final decision.

Kubernetes pros

Orchestration. Kubernetes helps configure automatic container management. This is essential for cloud-native applications, which can run thousands of microservices, each in its own container.

Elastic scaling. Kubernetes has lots of features for effortless and fast scalability. The great thing is that you can specify custom metrics for auto-scaling and enable your app to use more or less computing resources based on your specific demands.

Self-healing. Kubernetes compares the actual state of the application with its desired state specified in the configuration file. If any deviations are detected, it makes adjustments to return the application to the needed state. For example, it can reschedule failed pods, replace unresponsive pods, and adjust replica counts.
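
One common way to tell Kubernetes what “healthy” means is a liveness probe, which lets the kubelet restart a container whose health endpoint stops responding. Here is a minimal sketch with a hypothetical endpoint and timings.

```python
# Sketch: a liveness probe that lets Kubernetes detect and restart an
# unresponsive container. Endpoint, port, and timings are hypothetical.
from kubernetes import client

container = client.V1Container(
    name="web-app",
    image="registry.example.com/web-app:1.0",  # hypothetical image
    liveness_probe=client.V1Probe(
        http_get=client.V1HTTPGetAction(path="/healthz", port=8080),
        initial_delay_seconds=10,  # give the app time to start
        period_seconds=15,         # probe every 15 seconds
        failure_threshold=3,       # restart after 3 consecutive failures
    ),
)
```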

Cloud-agnosticism. Kubernetes can run in any cloud environment or on-premises. You can use it equally effectively in AWS, Microsoft Azure, Google Cloud, or on bare-metal servers.

Rolling updates. This feature allows you to avoid downtime during software upgrades. By replacing a resource’s Pods incrementally, you keep the app running stably and avoid disrupting users.
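
As an illustration, a Deployment’s update strategy can cap how many Pods are replaced at once. The values below are a hypothetical, fairly conservative choice.

```python
# Sketch: a RollingUpdate strategy that replaces Pods incrementally.
# Values are hypothetical; assign this to deployment.spec.strategy in a
# Deployment like the one sketched earlier.
from kubernetes import client

strategy = client.V1DeploymentStrategy(
    type="RollingUpdate",
    rolling_update=client.V1RollingUpdateDeployment(
        max_surge=1,        # at most one extra Pod during the rollout
        max_unavailable=0,  # never drop below the desired replica count
    ),
)
```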

Extensibility. Kubernetes has a variety of plugins to extend its functionality. For example, you can use network plugins, storage plugins, ingress controllers, monitoring and logging plugins, service meshes, cluster autoscalers, etc. You can also integrate Kubernetes into an infrastructure-as-code (IaC) workflow.

Active community. Kubernetes has an extensive community that gives you access to documentation, forums, and other knowledge resources. This makes it easier to use Kubernetes across different projects.

Kubernetes cons

Complexity. Kubernetes is a sophisticated technology that cannot be figured out in one day. You should hire experienced DevOps engineers to use Kubernetes in your project or invest in the education and training of less experienced staff.

Networking setup. Kubernetes has a higher level of networking abstraction than traditional networking technologies. It treats containers and pods as first-class citizens, and each pod gets its own IP address. Such an abstraction provides greater flexibility but, at the same time, might be confusing for those used to a more traditional setup.
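
For example, you can see this pod-level addressing by listing pods through the API and printing their IPs. This sketch assumes the Python client and a default namespace.

```python
# Sketch: every pod gets its own cluster IP, visible through the API.
# Assumes the kubernetes Python client and a working kubeconfig.
from kubernetes import client, config

config.load_kube_config()

for pod in client.CoreV1Api().list_namespaced_pod(namespace="default").items:
    print(pod.metadata.name, pod.status.pod_ip)
```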

Compatibility. If you decide to upgrade or downgrade Kubernetes to a specific version, you may face compatibility problems with existing application components. It is crucial to check the Kubernetes release documentation before the upgrade and make sure that version migration does not ruin the current infrastructure setup.

Debugging challenges. Kubernetes is used for setting up infrastructure in complex distributed systems. Such systems have numerous interconnected components. When an issue appears, it is challenging to detect the source of the problem and troubleshoot it immediately.

Security concerns. Misconfigurations in Kubernetes can lead to application vulnerabilities and potential data breaches. Some common mistakes during the Kubernetes configuration are inadequate role-based access control, excessive pod permissions, insecure network policies, unsecured API servers, lack of network isolation, and unchecked admission controllers.
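
As one hedged example of tightening role-based access control, you can grant a namespaced, read-only Role instead of a broad cluster-wide one. The names below are hypothetical.

```python
# Sketch: a namespaced, read-only Role instead of a broad cluster-wide one,
# as one way to tighten role-based access control. Names are hypothetical.
from kubernetes import client, config

config.load_kube_config()

read_only_role = client.V1Role(
    metadata=client.V1ObjectMeta(name="pod-reader", namespace="default"),
    rules=[
        client.V1PolicyRule(
            api_groups=[""],                 # core API group
            resources=["pods"],
            verbs=["get", "list", "watch"],  # no create/update/delete
        )
    ],
)

client.RbacAuthorizationV1Api().create_namespaced_role(
    namespace="default", body=read_only_role
)
```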

Vendor lock-in. You may face many challenges if you decide to switch from Kubernetes to another orchestration system. These may include making changes to configuration files and deployment manifests, implementing app refactoring, and revising the integration ecosystem.

Kubernetes costs

Even though Kubernetes is an open-source technology, this does not mean you can use it entirely free of charge. The final price will depend on the project specifics, IT infrastructure, and configuration. Here are the main components that will form the Kubernetes cost:

Infrastructure. This includes the cost of physical servers, storage, virtual machines (VMs), and network resources that you need to run your Kubernetes clusters. Infrastructure costs are also affected by the number of nodes and their configurations and the cloud provider you choose.

Storage. The storage class, capacity, and provider have a direct impact on Kubernetes costs. In Kubernetes, you use Persistent Volumes to keep data that must outlive the containers that produced it. The more data you have, the more storage capacity you consume. Additionally, if your app generates a lot of outbound traffic, this may increase costs further because some providers charge extra fees for data transfer.
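
For illustration, the two main storage cost drivers, capacity and storage class, are pinned down when you create a PersistentVolumeClaim. This is a sketch with a hypothetical size and class name.

```python
# Sketch: a PersistentVolumeClaim that pins down the two main storage cost
# drivers, capacity and storage class. Size and class name are hypothetical.
from kubernetes import client, config

config.load_kube_config()

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="orders-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="standard",  # provider-specific class, hypothetical
        resources=client.V1ResourceRequirements(requests={"storage": "20Gi"}),
    ),
)

client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="default", body=pvc
)
```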

Network. The network cost will depend on the amount of your app traffic and the cloud provider you work with. The expenses will specifically cover data transfer fees, load balancer charges, and network egress costs.

Cluster size. A cluster consists of the machines (nodes) needed to run a containerized application orchestrated by Kubernetes. The more nodes you have, the higher the cluster costs. So, it is crucial to thoroughly evaluate your project requirements and set up a cluster that provisions the needed number of worker nodes.

Human resources. As you may have gathered, Kubernetes is a complex technology that requires highly skilled DevOps engineers to set up and manage. So, if you decide to use it in your project, consider adding $4,000 to $6,000 to your monthly budget, which will cover the salary of a full-time senior DevOps engineer in Eastern Europe.

Kubernetes alternatives

If you are using containerization in your application and do not want to use Kubernetes, there are two alternative options:

  1. use a different, simpler orchestrator
  2. orchestrate containers manually

Use a different orchestrator

Kubernetes is the most popular orchestration platform, but there are many other orchestrators that you can consider for your project. The most popular are:

  • Docker Swarm, a native clustering and orchestration solution for Docker containers
  • Apache Mesos, a general-purpose cluster manager that can also orchestrate containers
  • Nomad, an open-source scheduler and orchestrator from HashiCorp
  • Amazon ECS (Elastic Container Service), a managed container orchestration service provided by AWS
  • Google Cloud Run, a fully managed container platform that focuses on serverless computing and container execution

Orchestrate containers manually

Manual orchestration of containers means that you write scripts with specified parameters to manage containers yourself. This is opposed to using orchestrators, where you simply adjust the necessary orchestration parameters.

In the manual orchestration scenario, we recommend using Jenkins. This will allow you to quickly integrate container-based tasks into the CI/CD pipeline and automate container management with your own scripts, as sketched after the list below.

Here’s how you can use it:

  • Integrate Jenkins with Git and trigger builds when changes are pushed to the code
  • Define and execute shell scripts
  • Customize your scripts the way you need
  • Use Jenkins’ mechanisms to handle errors and notifications
  • Use Jenkins’ Docker plugin to simplify interaction with Docker

(Image: deploying an app with Jenkins)
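
To give a feel for what such self-written scripting can look like, here is a rough sketch of a script a Jenkins job might call to replace a running container with a new image version via the Docker CLI. The image and container names are hypothetical.

```python
# Sketch of "manual orchestration": a small script (e.g. called from a
# Jenkins job) that replaces a running container with a new image version
# via the Docker CLI. Image and container names are hypothetical.
import subprocess

IMAGE = "registry.example.com/web-app:1.1"  # hypothetical image
NAME = "web-app"

def sh(*args):
    """Run a command and fail the build if it returns a non-zero exit code."""
    subprocess.run(args, check=True)

# Stop and remove the old container if it exists (ignore errors if it doesn't).
subprocess.run(["docker", "rm", "-f", NAME], check=False)

# Pull the new image and start the replacement container.
sh("docker", "pull", IMAGE)
sh("docker", "run", "-d", "--name", NAME, "--restart", "unless-stopped",
   "-p", "8080:8080", IMAGE)
```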

Managed Kubernetes services

Managed Kubernetes services are another alternative if you want to use Kubernetes but don’t want to configure it yourself: you rely on the predefined configurations offered by third-party cloud providers. In this case, you focus on container management, while the provider deploys and maintains the Kubernetes cluster itself.

Here’s what you can get by choosing managed services:

  • Cluster provisioning
  • Cluster scaling
  • Cluster upgrades
  • Security tools
  • Monitoring and logging
  • Backup and disaster recovery
  • Load balancing
  • Elastic resource scaling
  • Integration with other cloud services
  • Cost management tools

The most popular managed Kubernetes services are:

  1. Amazon Elastic Kubernetes Service (EKS)
  2. Google Kubernetes Engine (GKE)
  3. Azure Kubernetes Service (AKS)
  4. DigitalOcean Kubernetes
  5. Rancher
  6. Red Hat OpenShift Dedicated

Looking to adopt the best DevOps practices?

In today’s dynamic world, it is impossible to do without the latest technologies. However, the use of such technologies must align with your business needs.

Selecting a tech stack is one of the most crucial stages in project development. In many ways, it determines the development time, budget, and release deadlines.

As organizations massively adopt CI/CD practices, Kubernetes is becoming a must-have tool for deploying high-load applications. But if your application has a predictable load, Kubernetes can be unreasonably resource-intensive.

If you are unsure whether Kubernetes is needed for your project and which alternatives can be used in your particular case, consult DevOps experts. After analyzing your product, they will offer the solution that best fits your business needs.

FAQ

When do I need Kubernetes?

You may need Kubernetes when you have complex containerized applications, microservices, or a need for efficient orchestration, scaling, and management of container workloads across diverse environments. Kubernetes helps streamline deployment, scaling, and maintenance of applications, making it valuable in scenarios with dynamic workloads and scalability requirements.

What is a typical project example where Kubernetes should be used?

A typical project example where Kubernetes should be used is a modern e-commerce platform with multiple microservices, requiring automatic scaling, high availability, and efficient resource utilization to handle fluctuating customer traffic and ensure seamless service delivery. Kubernetes simplifies the management and orchestration of containerized microservices in such a dynamic environment.

Can I start with a simpler solution and migrate to Kubernetes later if needed?

Yes, you can start with a simpler deployment solution and migrate to Kubernetes later as your application’s complexity and scaling requirements evolve. Kubernetes provides flexibility to transition gradually when the benefits of container orchestration become more significant for your project.

Is Kubernetes suitable for small businesses and startups?

Kubernetes can be suitable for small businesses and startups that anticipate rapid growth and have complex application requirements, but it may initially introduce complexity. Smaller organizations might consider starting with simpler hosting solutions and gradually adopting Kubernetes as their needs evolve.

How can I monitor and manage Kubernetes to ensure optimal performance?

You can monitor and manage Kubernetes for optimal performance by using tools like Prometheus for monitoring and Grafana for visualization, together with Kubernetes-native tooling such as the Metrics Server and the kubelet’s built-in metrics to keep track of cluster health. Additionally, Kubernetes provides features for resource allocation, scaling, and pod autoscaling to optimize performance as your workloads change.
