
How to run a Kubernetes Cluster locally

Running microservices in Kubernetes usually requires a cluster in the cloud or on-premises. During development or debugging, however, developers often need to run their application in Kubernetes quickly.

The solution to this problem is to run Kubernetes locally on your development machine using Docker Desktop.

Installing Kubernetes locally

You first have to install Docker Desktop to run your microservice in Kubernetes on your Windows development computer. After the installation, open Docker Desktop, go to Settings, and then to the Kubernetes tab. There, click Enable Kubernetes as shown below:

Enabling Kubernetes on Docker Desktop

Enabling Kubernetes restarts Docker. It may take a couple of minutes, but once Docker is back up, you have Docker and Kubernetes running with one node.

Notice: in case Kubernetes is not displayed under Settings, do the following steps:

  1. Uninstall Docker.
  2. Inside your Windows user folder, delete the .kube and .docker folders.
  3. Delete the docker and DockerDesktop folders in ProgramData (you may need to start a terminal as admin; in PowerShell, delete the docker folder with rmdir -Force -Recurse docker).
  4. Install the latest version of Docker Desktop (at the time of writing, Docker Desktop 4.11.1).
  5. Then enable only Kubernetes, not the other options.
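After enabling (or re-enabling) Kubernetes, a quick sanity check from the command line confirms that the single-node cluster is up. This is a minimal sketch; it assumes kubectl is on your PATH and prints a hint otherwise:

```shell
# Sanity check: is the local single-node cluster reachable?
if command -v kubectl >/dev/null 2>&1; then
  kubectl cluster-info     # prints the API server address
  kubectl get nodes        # a working setup lists one node: docker-desktop
else
  echo "kubectl not found - enable Kubernetes in Docker Desktop first"
fi
```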

Configure the Kubernetes Context

First, make sure that the right context for your local Kubernetes is selected. To check the context, right-click the Docker tray icon and hover over the Kubernetes tab. By default, the local Kubernetes context is called docker-desktop. If it is not selected, select it; otherwise, you won’t deploy to your local Kubernetes cluster. See the following image:
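You can also check and switch the context from the command line instead of the tray menu. A minimal sketch, assuming kubectl is installed (it prints a hint otherwise):

```shell
# Show all kubeconfig contexts, then make docker-desktop the active one.
if command -v kubectl >/dev/null 2>&1; then
  kubectl config get-contexts
  kubectl config use-context docker-desktop
  kubectl config current-context
else
  echo "kubectl not found"
fi
```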

 

Note: sometimes a Kubernetes context from Azure is active, as shown in the following figure:

When a Kubernetes context from Azure is shown in Docker Desktop, it should be unchecked

In this case, only docker-desktop should be checked; otherwise the deployment will not work.

Deploying your Microservice to your local Kubernetes

Deploying a microservice (application) to a local Kubernetes cluster works the same way as if Kubernetes were running in the cloud or on your local network. Therefore, we will use Helm to deploy my microservice (ProductMicroservice).

Deploying your Microservice with Helm

If you don’t know what Helm is or haven’t installed it yet, see my previous post Deploy microservice to Kubernetes using Helm Charts. The code of the demo is on GitHub.

To deploy the microservice, start a command line in MicroServices_DotNET_Core-Master and then navigate to the Helm chart of the ProductMicroservice. You can find it under ProductMicroservice\ProductMicroservice\charts. The chart is in the folder called productmicroservice. Deploy this chart with Helm by running the following command in the command line:

Notice: Kubernetes should be running in Docker Desktop as shown above; otherwise you will get an error.

Run the following command in the command line:

helm install product productmicroservice

The result is shown in the following image:
Installing the service productmicroservice via Helm

Notice: to undeploy product (delete the installation of product), use the following command:

helm delete product

As we see from the result, we can check the status of the service with this command:

kubectl get svc -w productmicroservice

If we run this command:

Status pending

As we see, the status is pending, but it should be running.
 
The package gets deployed within seconds. After it is finished, connect to the dashboard of your cluster (Octant). If you don’t know how to do that, see my post Azure Kubernetes Service (AKS) where I have described how to use Octant and how to access your Kubernetes cluster with Octant.
 
 
You can check the status of a Helm chart with the following command:

helm status release_name

In my case that is:

helm status product
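Together with helm list, this gives a quick overview of what is deployed. A minimal sketch, assuming Helm is installed (it prints a hint otherwise):

```shell
# List all Helm releases in the current namespace, then show details
# for the "product" release.
if command -v helm >/dev/null 2>&1; then
  helm list
  helm status product
else
  echo "helm not found"
fi
```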

You can install Octant directly by running the following command in Windows PowerShell as admin:

choco install octant --confirm

If it succeeds, you see the following:

Now start Octant by typing octant in a Windows command line, and go to the Services view:

Now we want to access the application (productmicroservice) from outside; for that, the service must have localhost as its external IP.

The external IP being none means that something in our deployment went wrong.

I redeployed product with the following steps:

  1. Reset Kubernetes in Docker Desktop and enabled it again.
  2. Deleted the image mehzan07/productmicroservice.
  3. Built a new image productmicroservice (via Visual Studio).
  4. Gave it a new tag, mehzan07/productmicroservice, with the command:

docker tag productmicroservice:latest mehzan07/productmicroservice:latest

  5. Redeployed productmicroservice from the charts folder with the following command:

helm install product productmicroservice

  6. Changed the port in values.yaml from 80 to 21334 under the service section (localhost:80 is reserved by IIS, see Notice 1 below):

service:
  type: LoadBalancer
  port: 21334

  7. Ran the helm upgrade command:

helm upgrade product productmicroservice

Testing the Microservice on the local Kubernetes Cluster

Open the Services tab and you will see the productmicroservice service with its external IP “localhost” and port 21334.

Open your browser and enter localhost:21334; the Swagger UI loads the productmicroservice as follows:

Swagger UI on localhost

Now you can access the database via GET, POST, etc. in the Swagger UI above, to get and add products, as in the following image:

Get all products from database via Swagger UI

Changing the Port of the Microservice

If you want to change the port your microservice is running on, open the values.yaml file in the productmicroservice folder under the charts folder. Change the port in the service section from 21334 to your desired port, for example 23456.

service:
  type: LoadBalancer
  port: 23456
If you have already deployed the microservice, use helm upgrade to re-deploy it with the changes; otherwise use helm install. Upgrading looks like this:

helm upgrade product productmicroservice

After the package is updated, open your browser and navigate to localhost:23456; the Swagger UI will be displayed:

Swagger UI is loaded after change of port to 23456

Notice 1: don’t change the port to 80, because localhost:80 is reserved for IIS (Internet Information Services) and the Swagger UI can’t load.

Notice 2: if your service has localhost as external IP but the Swagger UI is not loaded (website can’t be accessed, or similar), one reason can be that the Swagger UI configuration in your Startup.cs file is enabled only for the Development environment. Check the Configure() method in Startup.cs and change it as in the following code:

public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    if (env.IsDevelopment())
    {
        app.UseDeveloperExceptionPage();
    }
    // Swagger should be outside the Development check, so it loads in both
    // Production and Development environments
    app.UseSwagger();
    app.UseSwaggerUI(c =>
    {
        c.SwaggerEndpoint("/swagger/v1/swagger.json", "Product API V1");
        c.RoutePrefix = string.Empty;
    });
}

For more information and troubleshooting, see the collapsible sections below.

The helm install command syntax includes a release name, the path to the chart, and optional flags:
helm install [release-name] [chart] [flags]
release-name: you can give any name you want. I have given product.
chart: the name of the chart you have created. In my case it is productmicroservice.
flags: optional flags.

Some useful flags are:

--atomic  Deletes the installation if the process fails. Also automatically sets the --wait flag.
--create-namespace [string]  Creates the namespace for the release if it does not already exist.
--dependency-update  Runs a Helm dependency update before the installation.
--dry-run  Simulates the installation process for testing purposes.
-g, --generate-name  Generates a release name. The [release-name] parameter is omitted.
-h, --help  Shows the installation help.
-o, --output [format]  Prints the output in one of the allowed formats: YAML, JSON, or table (default).
--set [stringArray]  Sets values directly on the command line. Multiple values are allowed.
-f, --values [strings]  Takes values from the file or URL the user specifies. Multiple value sources are allowed.
--verify  Verifies the package before use.
--version [string]  Lets the user specify an exact chart version (for example, 1.2.1) or a chart version range (for example, ^2.0.0).
--wait  Waits for the system to be in a ready state before marking the release as successful. The waiting time is specified with the --timeout flag (the default is 5 minutes).
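As an illustration of combining these flags, the sketch below first simulates the install with --dry-run and then installs atomically with a timeout, using the release name product and chart productmicroservice from above (it assumes Helm is installed and the chart directory exists; it prints a hint otherwise):

```shell
# Simulate the install first, then install atomically so a failed
# deployment is rolled back automatically.
if command -v helm >/dev/null 2>&1; then
  helm install product productmicroservice --dry-run
  helm install product productmicroservice --atomic --timeout 2m
else
  echo "helm not found - install Helm first"
fi
```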
For more detail, see the helm install command and its parameters.
If the installation succeeds, you see the following status of the chart:
NAME: product
LAST DEPLOYED: Fri Aug 12 11:12:23 2022
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
1. Get the application URL by running these commands:
NOTE: It may take a few minutes for the LoadBalancer IP to be available.
You can watch the status by running 'kubectl get svc -w productmicroservice'
export SERVICE_IP=$(kubectl get svc --namespace default productmicroservice -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo http://$SERVICE_IP:80
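To see what the jsonpath query in the NOTES above actually extracts, here is an equivalent extraction done with standard shell tools on a made-up sample of the Service's JSON (the IP value is illustrative; on a real cluster you would keep using kubectl -o jsonpath):

```shell
# Sample fragment of `kubectl get svc productmicroservice -o json`
# once the load balancer has an address (the value is made up):
svc_json='{"status":{"loadBalancer":{"ingress":[{"ip":"192.168.1.50"}]}}}'

# Equivalent of jsonpath '{.status.loadBalancer.ingress[0].ip}':
SERVICE_IP=$(printf '%s' "$svc_json" | sed -n 's/.*"ip":"\([^"]*\)".*/\1/p')
echo "http://$SERVICE_IP:80"   # prints: http://192.168.1.50:80
```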
 

 

To uninstall (delete) a Helm chart created by the helm install command:

Deleting a Helm chart

If you need to remove a Helm chart from the deployment, you can delete it.

Following this procedure will completely delete the identified Helm chart from the deployment.

 
  1. Get a list of Helm charts using the following command:
    helm list
  2. From the list, identify the release name of the chart you want to delete.
  3. Run the following command, replacing <release-name> with the release name of the chart you want to delete:
    helm delete <release-name>

 

Docker-compose project host in Visual Studio

When we reinstalled Docker and began to run this project to build an image, we saw the following error:

Error Your Docker server host is configured for 'Linux', however the docker-compose project targets 'Windows'. 

Don’t switch Docker to Windows; change the docker-compose project target to Linux as follows.

1- Remove docker-compose.dcproj and create a new one. In this case the option to add Container Orchestrator Support becomes visible in Visual Studio; set it to Docker Compose and then Linux, and set Add Docker Support to Linux. Now we have a new docker-compose.dcproj with a new Dockerfile.

2- Edit docker-compose.dcproj and add the following to it:

<PropertyGroup>
  <ActiveDebugProfile>Docker Compose</ActiveDebugProfile>
  <TargetFramework>net6.0</TargetFramework>
  <DockerTargetOS>Linux</DockerTargetOS>
</PropertyGroup>

Now we have Linux as the OS in both the Docker service and the docker-compose project target.

If you run docker-compose in Visual Studio in both Debug and Release mode, it works fine in both: Swagger opens and the database is accessed correctly. You can build the image and container via Visual Studio.

In this case, the Docker Engine JSON configuration (Docker Desktop > Settings > Docker Engine) is:

{
  "builder": {
    "gc": {
      "defaultKeepStorage": "20GB",
      "enabled": true
    }
  },
  "experimental": false,
  "features": {
    "buildkit": true
  }
}


 

Troubleshooting Kubernetes (locally)

If you see problems in the Octant dashboard like in the following image:

Deployment of productmicroservice
Error: In Namespace > Overview > Deployment: No replicas exist for this deployment

Service for productmicroservice
Error: In Discovery and Load Balancing > Service: Service has no endpoint addresses

What are Kubernetes Endpoints?

Endpoints in Kubernetes are a resource that tracks the IP addresses of the pods dynamically assigned to a service. The service's selector matches pod labels, and the IP addresses of matching pods are added to the endpoints. These endpoints can be viewed with kubectl get endpoints.

First we check the service error (Error: Service has no endpoint addresses). What does this mean? If we check the external IP for productmicroservice, it is <none> instead of a real IP address or localhost, and that is the problem. Why is it none?

We start by inspecting the endpoints with kubectl get ep and kubectl describe ep. If you see pod IPs next to NotReadyAddresses in the endpoints description, it indicates there’s a problem with the pod that’s causing it not to be ready, in which case it fails to be registered against the endpoints.

Running kubectl get ep in the command line gives:

C:\Users\default1>kubectl get ep
NAME                  ENDPOINTS           AGE
kubernetes            192.168.65.4:6443   19h
productmicroservice                       17h

We see that productmicroservice has no endpoint in the output above.

Running kubectl describe ep in the command line:

kubectl describe ep

In the figure above, the pod for productmicroservice has no Addresses entry and appears under NotReadyAddresses in the endpoints description, which confirms the pod is not ready and therefore not registered against the endpoints.

If the pod isn’t ready it can be because of a failing health/liveness probe.

If we run: kubectl get services in the command line:

C:\Users\default1>kubectl get services
NAME                  TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes            ClusterIP      10.96.0.1        <none>        443/TCP        19h
productmicroservice   LoadBalancer   10.102.218.163   <pending>     80:30803/TCP   18h

In the output above, we see that productmicroservice has the cluster IP 10.102.218.163, but its external IP is still <pending>, and this is the problem.

The selector on your service (kubectl describe svc myServiceName) should match a label on the pods (kubectl describe po myPodName). E.g. selector app=myAppName should match pod label app=myAppName. That’s how the service determines which endpoints it should be trying to connect to.
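As a minimal sketch of that selector-to-label match (the names below are illustrative, not taken verbatim from the actual chart):

```yaml
# Service: routes traffic to pods whose labels match its selector
apiVersion: v1
kind: Service
metadata:
  name: productmicroservice
spec:
  type: LoadBalancer
  selector:
    app: productmicroservice    # must match the pod label below
  ports:
    - port: 21334
      targetPort: 80
---
# Deployment: its pod template carries the label the Service selects on
apiVersion: apps/v1
kind: Deployment
metadata:
  name: productmicroservice
spec:
  selector:
    matchLabels:
      app: productmicroservice
  template:
    metadata:
      labels:
        app: productmicroservice   # matched by the Service selector
    spec:
      containers:
        - name: productmicroservice
          image: mehzan07/productmicroservice:latest
```

If the selector and the pod label diverge, the service simply gets no endpoints, which is exactly the symptom described above.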

If we run the kubectl get pods in the command line:

C:\Users\default1>kubectl get pods
NAME                                   READY   STATUS             RESTARTS   AGE
productmicroservice-55c6b5d687-cb5fn   0/1     ImagePullBackOff   0          18h

We see that the pod productmicroservice-55c6b5d687-cb5fn is not ready (0/1) and its status is ImagePullBackOff.

We take this pod name and run the command kubectl describe po <pod-name>, replacing <pod-name> with productmicroservice-55c6b5d687-cb5fn, to get more information about this pod:

kubectl describe po productmicroservice-55c6b5d687-cb5fn
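The failure reason shows up in the Events table at the end of the describe output. The sketch below filters a canned sample of such output (the sample lines are illustrative); on a real cluster you would pipe the actual describe output through the same grep:

```shell
# Canned sample of the Events section from `kubectl describe pod`
# (illustrative text matching the failure discussed here):
events='Normal   BackOff   kubelet  Back-off pulling image "mehzan07/productmicroservice:latest"
Warning  Failed    kubelet  Error: ImagePullBackOff'

# Filter for the failure reason:
printf '%s\n' "$events" | grep -iE 'backoff|failed'

# On a real cluster:
# kubectl describe po productmicroservice-55c6b5d687-cb5fn | grep -iE 'backoff|failed'
```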

From the figure above we see:

Reason: BackOff

Message: Back-off pulling image "mehzan07/productmicroservice:latest"

OK, so Kubernetes is trying to pull the image mehzan07/productmicroservice:latest.

Checking the images locally:

C:\Users\default1>docker images
REPOSITORY            TAG      IMAGE ID       CREATED        SIZE
productmicroservice   latest   060e7e78db23   18 hours ago   295MB

As we see, there is no mehzan07/productmicroservice image locally (in Docker Desktop), so we should create a new image with that tag (-t mehzan07/productmicroservice) using the following command. The prefix is needed because my account name on Docker Hub is mehzan07.

This repository name, tag, and port come from the values.yaml file in the productmicroservice chart (created along with the chart).

Now we run the following command:

docker build -t mehzan07/productmicroservice . -f ./ProductMicroservice/ProductMicroservice/Dockerfile

Or add a new tag for the existence image:

docker tag SOURCE_IMAGE[:TAG] TARGET_IMAGE[:TAG]

For me:

docker tag productmicroservice:latest mehzan07/productmicroservice:latest

 

Then restart Docker Desktop and confirm that an image mehzan07/productmicroservice has been created.

Refresh the Kubernetes dashboard; now we see the following image:

productmicroservice is OK (green)

The service is OK, but the external IP is still none?

Now we can investigate again with the commands:

kubectl get ep, kubectl describe ep, kubectl get services, kubectl describe svc <service-name>, kubectl get pods, kubectl describe po <pod-name>

If we run  kubectl get services command:

C:\Utvecklingprogram\Microservices\MicroServices_DotNET_Core-Master>kubectl get services
NAME                  TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes            ClusterIP      10.96.0.1        <none>        443/TCP        22h
productmicroservice   LoadBalancer   10.102.218.163   <pending>     80:30803/TCP   20h

If we update the port in values.yaml under productmicroservice in the chart folder and then run the command:

helm upgrade product productmicroservice

After that, run the command:

kubectl get services

Now we see the following info:

productmicroservice External IP: localhost

As we see, the service productmicroservice now has localhost as its external IP.


 

Kubernetes provides a command line tool for communicating with a Kubernetes cluster’s control plane, using the Kubernetes API.

This tool is named kubectl.

For configuration, kubectl looks for a file named config in the $HOME/.kube directory. You can specify other kubeconfig files by setting the KUBECONFIG environment variable or by setting the --kubeconfig flag.

This overview covers kubectl syntax, describes the command operations, and provides common examples. For details about each command, including all the supported flags and subcommands, see the kubectl reference documentation.

For installation instructions, see Installing kubectl; for a quick guide, see the cheat sheet. If you’re used to using the docker command-line tool, kubectl for Docker Users explains some equivalent commands for Kubernetes.

Syntax:

Use the following syntax to run kubectl commands from your terminal window:

kubectl [command] [TYPE] [NAME] [flags]

where command, TYPE, NAME, and flags are:

  • command: Specifies the operation that you want to perform on one or more resources, for example create, get, describe, delete.

  • TYPE: Specifies the resource type. Resource types are case-insensitive and you can specify the singular, plural, or abbreviated forms. For example, the following commands produce the same output:

    kubectl get pod pod1
    kubectl get pods pod1
    kubectl get po pod1
    

    NAME: Specifies the name of the resource. Names are case-sensitive. If the name is omitted, details for all resources are displayed, for example kubectl get pods.

    When performing an operation on multiple resources, you can specify each resource by type and name or specify one or more files:

    • To specify resources by type and name:

      • To group resources if they are all the same type: TYPE1 name1 name2 name<#>.
        Example: kubectl get pod example-pod1 example-pod2

      • To specify multiple resource types individually: TYPE1/name1 TYPE1/name2 TYPE2/name3 TYPE<#>/name<#>.
        Example: kubectl get pod/example-pod1 replicationcontroller/example-rc1

    • To specify resources with one or more files: -f file1 -f file2 -f file<#>

      • Use YAML rather than JSON since YAML tends to be more user-friendly, especially for configuration files.
        Example: kubectl get -f ./pod.yaml
    • flags: Specifies optional flags. For example, you can use the -s or --server flags to specify the address and port of the Kubernetes API server.

  • If you need help, run kubectl help from the terminal window.

    In-cluster authentication and namespace overrides

    By default kubectl will first determine if it is running within a pod, and thus in a cluster. It starts by checking for the KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT environment variables and the existence of a service account token file at /var/run/secrets/kubernetes.io/serviceaccount/token. If all three are found in-cluster authentication is assumed.

    To maintain backwards compatibility, if the POD_NAMESPACE environment variable is set during in-cluster authentication it will override the default namespace from the service account token. Any manifests or tools relying on namespace defaulting will be affected by this.

    POD_NAMESPACE environment variable

    If the POD_NAMESPACE environment variable is set, CLI operations on namespaced resources will default to the variable value. For example, if the variable is set to seattle, kubectl get pods would return pods in the seattle namespace. This is because pods are a namespaced resource, and no namespace was provided in the command. Review the output of kubectl api-resources to determine if a resource is namespaced.

    Explicit use of --namespace <value> overrides this behavior.

    How kubectl handles ServiceAccount tokens

    If:

    • there is a Kubernetes service account token file mounted at /var/run/secrets/kubernetes.io/serviceaccount/token, and
    • the KUBERNETES_SERVICE_HOST environment variable is set, and
    • the KUBERNETES_SERVICE_PORT environment variable is set, and
    • you don’t explicitly specify a namespace on the kubectl command line

    then kubectl assumes it is running in your cluster. The kubectl tool looks up the namespace of that ServiceAccount (this is the same as the namespace of the Pod) and acts against that namespace. This is different from what happens outside of a cluster; when kubectl runs outside a cluster and you don’t specify a namespace, the kubectl command acts against the namespace set for the current context in your client configuration. To change the default namespace for your kubectl you can use the following command:

    kubectl config set-context --current --namespace=<namespace-name>

Conclusion

Helm is a package manager for Kubernetes that can be used to easily deploy and update applications. We have enabled Kubernetes in Docker Desktop and run it locally in a development environment. We have also seen how to install Helm and used it to install our chart productmicroservice, installed Octant as a dashboard for Kubernetes, and tested productmicroservice on Kubernetes locally. We have even seen how to delete a Helm chart and how to troubleshoot problems in Docker Desktop and Kubernetes.

The code can be found on my GitHub.

In my next post I will describe how to manage Kubernetes resources.

This post is part of “Kubernetes step by step”.

Back to home page
