Kubernetes is a portable, extensible, open source platform for managing containerized workloads and services. It facilitates both declarative configuration and automation, and it has a large, rapidly growing ecosystem. Properly integrated into your engineering workflows, it can deliver significant productivity gains.
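To make "declarative configuration" concrete, here is a minimal sketch in Python: instead of scripting imperative steps, you describe the desired state (for instance, a replica count) in a manifest, and Kubernetes continuously reconciles the cluster toward that state. The names `web` and `nginx:1.25` are illustrative, not taken from the article; the dict mirrors the fields of a standard `apps/v1` Deployment.

```python
def deployment_manifest(name, image, replicas=3):
    """Build a minimal Kubernetes Deployment manifest as a Python dict.

    Declarative model: this describes *what* should exist (desired state),
    not *how* to create it; the cluster's controllers do the reconciling.
    """
    labels = {"app": name}
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "labels": labels},
        "spec": {
            "replicas": replicas,                     # desired state, not a command
            "selector": {"matchLabels": labels},
            "template": {
                "metadata": {"labels": labels},
                "spec": {
                    "containers": [
                        {"name": name, "image": image}
                    ]
                },
            },
        },
    }


# Example: the same manifest you would serialize to YAML and apply
# with `kubectl apply -f`.
manifest = deployment_manifest("web", "nginx:1.25", replicas=3)
print(manifest["spec"]["replicas"])  # → 3
```

If the actual cluster drifts from this description (a pod crashes, a node dies), Kubernetes restores the declared state automatically; that reconciliation loop is what the "automation" in the opening paragraph refers to.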
The name Kubernetes originates from Greek, meaning helmsman or pilot. Google open-sourced the Kubernetes project in 2014. Kubernetes combines over 15 years of Google’s experience running production workloads at scale with best-of-breed ideas and practices from the community.
One of the main benefits of Kubernetes is its impressive heritage: it rests on a mature, proven architecture, built on over 10 years of operational experience by Google engineers who helped build and maintain the largest container platform in the world. Secondly, Kubernetes enjoys outstanding community and industry support. Its broad adoption, growth and popularity stand out among container orchestration solutions: the project has attracted a very large and active open source community of users and developers, as well as the backing of global enterprises, IT market leaders and major cloud providers. Thirdly, Kubernetes offers a richer feature set and broader application support than other container management systems. It handles a wide spectrum of workloads, programming languages and frameworks, enabling stateless, stateful and data-processing applications, which makes it flexible enough to meet the needs of a wide range of users and use cases. A final benefit is its ongoing development. Soon after its first release, Kubernetes gained a very large and active community; with roughly 2,000 GitHub contributors at the moment, ranging from engineers at Fortune 500 companies to individual developers, new features are released constantly. This large and diverse community also steps in to answer questions and foster collaboration.
Thanks to the Cloud Native Computing Foundation (CNCF Landscape, 2021), which stewards the open source project, and to its vibrant ecosystem, Kubernetes has been standardized around well-defined APIs. Extensive documentation and resources have made the technology accessible to many developers and operators. This ecosystem helps teams use Kubernetes more easily and efficiently, reducing the impact of its inherent complexity. By relying on existing tools built specifically for cloud-native software, you gain solutions you could hardly build yourself.
But which APIs are interesting for your Kubernetes clusters, and why? In almost all cases the use of Kubernetes clusters grows rapidly, even when starting small, so there will eventually be a lot to manage. Look beyond managing the platform itself, too: to create a Kubernetes platform that is production-ready for software development, consider the infrastructure capabilities of the complete surrounding platform. For example:
– Common API and Packaging.
– Health Checks.
– High Availability.
– Load Balancing.
– Overlay Networking.
– Network Security Policies.
– Backup and Recovery.
– Service Discovery.
– RBAC and Access Control.
– Support and Maintenance.
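To make one of these capabilities concrete, take health checks. Kubernetes restarts a container whose liveness probe fails and only routes traffic to pods whose readiness probe succeeds. The sketch below builds a container spec with both probes as a Python dict; the probe paths `/healthz` and `/ready` and the port are illustrative assumptions, while the field names (`livenessProbe`, `readinessProbe`, `httpGet`) are the standard Kubernetes API fields.

```python
def container_with_probes(name, image, port=8080):
    """Container spec with HTTP liveness and readiness probes.

    livenessProbe:  failing checks cause the kubelet to restart the container.
    readinessProbe: failing checks remove the pod from Service endpoints
                    without restarting it.
    """
    return {
        "name": name,
        "image": image,
        "ports": [{"containerPort": port}],
        "livenessProbe": {
            "httpGet": {"path": "/healthz", "port": port},  # assumed endpoint
            "initialDelaySeconds": 10,  # give the app time to boot
            "periodSeconds": 15,
        },
        "readinessProbe": {
            "httpGet": {"path": "/ready", "port": port},    # assumed endpoint
            "periodSeconds": 5,
        },
    }
```

This spec would slot into the `containers` list of a pod template; the other capabilities above (load balancing, network policies, RBAC) are configured through analogous declarative API objects.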
Every enterprise adds different APIs to its Kubernetes clusters, depending on the organization, its needs and its usage. Several complete software stacks and support solutions exist for teams adopting containers; they address the operational and security challenges of managing multiple Kubernetes clusters across any infrastructure, while providing DevOps teams with integrated tools for running containerized workloads.
What APIs are you using within your Kubernetes infrastructures?
Source: Cloud Native Computing Foundation. (2021). CNCF Landscape. Retrieved from https://landscape.cncf.io