Artifact Registry is not just a repository for storing and managing container images and Kubernetes artifacts like GCR. It also supports language packages such as Maven and npm, and OS packages such as Apt, so developers can easily store them in a private or public repository. Creating and publishing artifacts is quick and simple.
In this article, we will discuss how to manage Helm charts with Google Artifact Registry. Helm is a tool that helps you manage Kubernetes applications. Helm charts help you define, install, and upgrade even the most complex Kubernetes application. Charts are easy to create, version, share, and publish. Helm…
With Docker, we build Docker images and store them in a remote repository such as Docker Hub, GCR, or ECR. Helm works the same way: it generates Kubernetes packages called charts. These charts can be packaged into archive (.tgz) files and stored in a remote or local repository.
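As a hedged sketch of that workflow (the region, project ID, repository name, and chart name below are placeholders, and it assumes Helm 3.8+ with OCI support plus an authenticated gcloud CLI):

```sh
# Log Helm in to Artifact Registry using a short-lived gcloud access token
gcloud auth print-access-token | \
  helm registry login -u oauth2accesstoken --password-stdin https://us-central1-docker.pkg.dev

# Package the chart directory into a versioned .tgz archive
helm package my-chart/

# Push the archive to an Artifact Registry repository
helm push my-chart-0.1.0.tgz oci://us-central1-docker.pkg.dev/my-project/my-helm-repo
```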
There are three main components in Helm: a chart, combined with a…
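To make the idea of a chart concrete, here is a small, hedged sketch (the chart name is just a placeholder): Helm can scaffold a new chart locally, and the generated files show what a chart contains.

```sh
# Scaffold a new chart with Helm's default layout (chart name is a placeholder)
helm create demo-chart

# Inspect the generated structure: Chart.yaml, values.yaml, charts/, and templates/
ls demo-chart
```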
Most of the time, developers store their Docker images on Docker Hub. If you are working on a project with private images that need to be kept in a private place, developers often configure a Nexus server and store them there.
But Google Cloud’s GCR is a perfect solution to this problem. With GCR, you can store, manage, and secure your Docker container images easily. All you need is a service account with the proper access permissions. GCR is not just a Docker repository. …
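As a hedged sketch (the key file, project ID, and image name are placeholders, and it assumes the gcloud CLI and Docker are installed):

```sh
# Authenticate with a service account key and let gcloud wire up Docker credentials
gcloud auth activate-service-account --key-file=sa-key.json
gcloud auth configure-docker

# Tag a local image for GCR and push it
docker tag my-app:1.0 gcr.io/my-project/my-app:1.0
docker push gcr.io/my-project/my-app:1.0
```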
We are all used to working in the terminal when interacting with Kubernetes. Kubevious is a tool that helps visualize all the resources in a Kubernetes cluster. As the picture above shows, we can quickly check the number of namespaces, applications, pods, nodes, and many other details of the Kubernetes cluster. This makes developers’ work easier and more interactive.
The most important thing about this tool is that it can be installed in the cluster easily in three simple steps. Kubevious can be installed using Helm.
First, you need to have Helm installed on your local machine. …
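As a rough, hedged sketch of those steps (the chart repository URL, release name, and UI service name are assumptions based on the Kubevious docs and may have changed):

```sh
# Create a dedicated namespace for Kubevious
kubectl create namespace kubevious

# Add the Kubevious chart repository and install the chart
helm repo add kubevious https://helm.kubevious.io
helm repo update
helm install kubevious kubevious/kubevious -n kubevious

# Access the UI locally; the service name is an assumption, check `kubectl get svc -n kubevious`
kubectl port-forward svc/kubevious-ui-clusterip 8080:80 -n kubevious
```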
When we develop applications, we need to keep passwords and other sensitive data such as usernames and keys secure. Security is one of the main factors to consider when developing enterprise applications. Most of the time, developers simply encode these values and save them. In the last article, we discussed how to move variables into a ConfigMap and store them there. The issue is that those values are stored in plain text, which is definitely not the best way to store a password.
This is where Kubernetes Secrets come in. Secrets are used to store sensitive data. …
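As a minimal, hedged sketch (the names and values are placeholders, not taken from the article), a Secret can be created from literal values or declared in YAML; note that the data in a manifest is only base64-encoded, not encrypted:

```sh
# Create a Secret imperatively from literal values (placeholders)
kubectl create secret generic app-credentials \
  --from-literal=DB_USER=admin \
  --from-literal=DB_PASSWORD=s3cr3t

# The same Secret declared in YAML; values must be base64-encoded
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: app-credentials
type: Opaque
data:
  DB_USER: YWRtaW4=       # base64 of "admin"
  DB_PASSWORD: czNjcjN0   # base64 of "s3cr3t"
EOF
```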
In the last article, we discussed how to use environment variables in Kubernetes definition files. Now we will see how to use a separate configuration file to manage environment data. ConfigMaps are used to store this environment configuration data in Kubernetes. A ConfigMap is a file in the form of key-value pairs. After configuring the ConfigMap, you need to inject it into the pod definition in Kubernetes. The configuration data in the ConfigMap will then be available as environment variables to the application hosted inside the container in the pod.
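As a hedged sketch of that flow (the ConfigMap name, keys, and container image are placeholders, not the files from the previous article), a ConfigMap can be created and then injected into a pod with envFrom:

```sh
# ConfigMap holding plain configuration values, plus a pod that loads
# every key of that ConfigMap as an environment variable
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  APP_ENV: production
  APP_PORT: "8080"
---
apiVersion: v1
kind: Pod
metadata:
  name: demo-app
spec:
  containers:
    - name: demo-app
      image: nginx:1.25   # placeholder image
      envFrom:
        - configMapRef:
            name: app-config
EOF

# Verify the variables inside the container
kubectl exec demo-app -- env | grep APP_
```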
Below is a code snippet copied from the previous…
When running applications in your own data centers, you need proper disaster recovery techniques. Keeping a duplicate second data center is one method of disaster recovery, but at the enterprise level, maintaining a second data center costs a lot.
AWS provides a cost-effective solution to this issue. AWS groups its data centers into large units called “Regions”. An AWS Region is built to be close to business traffic demand, and each Region is connected by high-speed fiber networks controlled by AWS.
Every Region is isolated from the others. In other words, no data goes in or out of the data center unless permission is granted…
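As a small, hedged illustration (it assumes the AWS CLI is installed and configured with credentials), you can list the Regions available to your account:

```sh
# List the AWS Regions enabled for this account
aws ec2 describe-regions --output table
```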
Let’s start with an example. You have a system with two services, a worker and a master. The worker sends messages to the master, and the system works fine. Suddenly, the master becomes unavailable to receive messages from the worker. When the worker sends messages, it waits for the master to show up, but since the master is not available, the worker starts dropping the messages. This happens because the master and the worker have a tightly coupled architecture. In a tightly coupled architecture, whenever one of the services fails, the entire system breaks down.
First, go to https://aws.amazon.com/console/ and create an AWS account. If you already have an account, you can log in to the console.