Kubernetes is complex and requires extensive configuration to run properly. To deploy containerized applications to Kubernetes, developers have to create configuration files, manage logging and tracing, and write their own CI/CD scripts using tools like Jenkins or Drone. Knative takes much of this hassle away, and combined with the Otomi Container Platform it becomes even easier.
Knative helps developers by hiding many of these tasks, simplifying container-based management and letting you concentrate on writing application code. Otomi Container Platform supports Knative out of the box, together with all Istio service mesh features. When Otomi Container Platform is installed on a Kubernetes 1.15.x cluster, you can start using Knative right away, making it straightforward to deploy and manage cloud-native applications. Otomi adds an extra abstraction layer on top of Knative, so you don't even need to know how to use Knative itself: just tell Otomi which container image to run and whether or not you would like to scale to zero.
In this post, we'll explain two currently supported ways to deploy containers with Otomi Container Platform and Knative. We'll also take a look at a new feature coming in the upcoming release of Otomi Container Platform: deploying containers with Knative and automated CD.
Option 1: Deploy with Knative using Container Platform values
All Otomi Container Platform features are configured using a simple set of values. In a previous post, we explained how to configure ingress for services (including Knative services).
To deploy your container using Knative, go to the cloud/cluster/team values, and add the following service:
- name: hello
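A fuller service entry might look like the sketch below. Note that the field names under the service (`ksvc`, `image`, `repository`, `tag`, `scaleToZero`) are illustrative assumptions, not the definitive Otomi schema; check your Otomi values documentation for the exact keys. The sample image is Knative's public hello-world container:

```yaml
services:
  - name: hello
    # Hypothetical keys below -- verify against your Otomi values schema
    ksvc:
      image:
        repository: gcr.io/knative-samples/helloworld-go
        tag: latest
      # Let Knative scale the service down to zero replicas when idle
      scaleToZero: true
```

With a definition like this in place, Otomi would take care of creating the underlying Knative Service and wiring it into the Istio mesh, so no raw Kubernetes or Knative manifests need to be written by hand.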
Continue reading this article on our blog.