Wednesday, July 17, 2019

Azure Functions Dependency Injection

Azure Functions supports dependency injection, which builds on top of the ASP.NET Core dependency injection features. Support for dependency injection begins with Azure Functions 2.x. This is a hands-on lab that shows how to implement dependency injection in an Azure Function. Before you can use dependency injection, you must install the following NuGet packages:

  • Microsoft.Azure.Functions.Extensions
  • Microsoft.NET.Sdk.Functions, version 1.0.28 or later
  • Optional: Microsoft.Extensions.Http (only required for registering HttpClient at startup)

Sunday, February 11, 2018

Build a serverless chatbot (voice and text) with Amazon Lex for Facebook

I am so excited to share another hands-on lab, and this time we are building a serverless Amazon Lex chatbot for Facebook. By the end of this video, you will be able to build a chatbot on your own.

Amazon Lex is an AWS service for building conversational interfaces into applications using voice and text. With Amazon Lex, the same deep learning engine that powers Amazon Alexa is now available to any developer, enabling you to build sophisticated, natural language chatbots into your new and existing applications. Amazon Lex provides the deep functionality and flexibility of natural language understanding (NLU) and automatic speech recognition (ASR) to enable you to build highly engaging user experiences with lifelike, conversational interactions and create new categories of products.

Hope you will enjoy this. If you like the video, please don't hesitate to subscribe and share it with the world.

Thursday, February 8, 2018

Build a Serverless Web Application on AWS Cloud- Part 2

This is the second part of the video "Build a Serverless Web Application on AWS Cloud". If you haven't watched the first part, I would really recommend watching it here before continuing with this one.

Friday, November 10, 2017

Design Thinking

Design thinking is a methodology that is extremely useful when solving complex problems. It is another iterative approach with multiple stages. It starts with understanding the customer's needs and reframing the problem in human-centric terms; then you create as many ideas as possible, prototype the ideas, and test them. You can use this approach to solve any real-life complex problem, regardless of the domain.

There are many variants of the Design Thinking process in use today, but we are focusing on the five-stage model: Empathise, Define, Ideate, Prototype, and Test.


Empathise

This is the first stage, and the main idea is to gain an empathic understanding of the problem you are trying to solve. Get involved in what your customers are doing. Shadow them for a few days and understand their pain. Put yourself in the customer's shoes and think like them, act like them. Empathy is crucial to a human-centered design process such as Design Thinking; it allows people to step outside their assumptions and understand the customer's needs by acting as a customer for a while.


Define

During the Define stage, you put together the information you have learned and collected during the Empathise phase. Do a deep analysis of the problem; in other words, redefine the problem as a problem statement that is humanly understandable. A problem statement is a clear, concise description of the issues that need to be addressed by a problem-solving team. Basically, it starts with the five 'W's: Who, What, Where, When, and Why.

  • Who - Who does the problem affect?
  • What - What is the issue? What are the boundaries of the problem (e.g. organizational, workflow, geographic, customer segments)? What impact is the issue causing? What will happen when it is fixed? What would happen if we didn't solve the problem?
  • When - When does the issue occur? When does it need to be fixed?
  • Where - Where is the issue occurring? Only in certain locations, processes, products, etc.?
  • Why - Why is it important that we fix the problem? What impact does it have on the business or customer?


Ideate

During this stage, the team is ready to start generating ideas. Throw out all the possible ideas; always remember there is no such thing as a 'stupid idea'. Share and discuss all possible ideas based on the analysis made during the Define stage. Run continuous brainstorming sessions and other ideation techniques such as storyboarding and SCAMPER. It is really important to gather as many ideas as possible and choose the best ones to solve the problem.


Prototype

At this stage, create a solution out of the ideas derived during the Ideate stage, which in turn came out of the Define and Empathise stages. Creating a prototype is not a long process; it should be as quick as building a working model. For example, if you are working on a UI, the prototype could be a working model based on static data. The main goal of prototyping is to get feedback from the customer. At this stage, the team will have a better idea of the constraints of the product and a better perspective on how the end user thinks.


Test

At this stage, the stakeholders do a thorough review and test of the prototype built from the best solutions. Stakeholders will share their complete feedback and thoughts.

The idea selected as the best, according to the feedback of the customers and end users in the Prototype phase, will be executed. After the testing phase, the entire design thinking process can be repeated depending on the feedback from customers; if customers approve the solution, the process stops.

Design thinking minimizes uncertainty and risk by engaging customers or users through a series of defined stages. Instead of assumptions, design thinkers rely on customer insights gained from real-world experiments.

Sunday, November 5, 2017

Cloud computing without auto scaling is almost the same as traditional computing

I had no plan to write this article until I saw the question below on Stack Overflow today:

"My 2-CPU usage reached 100% very frequently and I need to restart my server again. For a while, it works fine but after few minutes it reaches 100%. Because of 100% usage of CPU, my website goes slow and not able to open it easily"

The main idea behind cloud computing is not to end up in a situation like this. If you have a well-architected cloud platform, you won't end up in such a situation. If you are running out of resources in the cloud, then there is no difference between traditional application hosting and the cloud, apart from cost. One thing is sure: this happened because of the way the servers were architected.

Elasticity is the real beauty of cloud computing. Elasticity means the system expands and contracts on its own when needed. We should architect the cloud infrastructure in such a way that when load or resource utilization goes above a certain limit, another server spins up, and when there is not enough load or resource utilization is low, the newly created instance is removed. In traditional computing, this won't happen: if we need to cope with the load, we have to manually provision servers for the peak load, which means it is not elastic.

In cloud computing, elasticity comes from 'Auto Scaling'. As the name indicates, it scales out and scales in automatically when needed. Once we set up the infrastructure, we don't need to worry about application load or resource utilization, as everything is handled automatically based on continuous monitoring and health checks of the systems.

Let me explain with an example. Say I have only one web server hosted in the cloud and auto scaling is not enabled. Suppose the server's capacity is to handle 10K requests per second. What happens if it gets 100K requests per second? No doubt it will crash because of high utilization of system resources.

Now let me redesign the architecture like below design.

With the new design, we put our EC2 (AWS virtual machine) instance behind an Elastic Load Balancer (ELB), and the instance's resource utilization (CPU, memory, etc.) is continuously monitored. I have architected it in such a way that whenever CPU utilization reaches 60%, another EC2 instance is created immediately and attached automatically to the ELB. Now we have a flexible design that distributes the load to another server, which spins up automatically and attaches to the ELB. In this way, you can automatically spin up more and more servers based on load and other requirements. Happy days! Now we don't need to bother about application load or resource utilization, as auto scaling will scale out on its own.
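In today's AWS, the threshold behaviour described above is usually expressed as a scaling policy rather than hand-rolled monitoring. As a sketch (the group and policy names here are my own examples, not anything from the original setup), a target-tracking policy that keeps the group's average CPU around 60% could be created with boto3:

```python
def create_cpu_scaling_policy(group_name, target_cpu=60.0, client=None):
    """Attach a target-tracking scaling policy to an Auto Scaling group.

    `client` defaults to the real Auto Scaling client; tests can inject a stub.
    """
    if client is None:
        import boto3  # requires AWS credentials when run for real
        client = boto3.client("autoscaling")
    # Target tracking keeps the group's average CPU near target_cpu,
    # adding instances when CPU rises above it and removing them below it.
    return client.put_scaling_policy(
        AutoScalingGroupName=group_name,
        PolicyName="cpu-target-%d" % int(target_cpu),
        PolicyType="TargetTrackingScaling",
        TargetTrackingConfiguration={
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "ASGAverageCPUUtilization",
            },
            "TargetValue": target_cpu,
        },
    )
```

With target tracking, AWS handles both the scale-out and the scale-in side of the rule, so you don't write the reverse logic yourself.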

What if we don't need the servers that were spun up automatically, because there isn't enough load or resources are under-utilized? Then do the reverse process.

Say CPU utilization drops to 59%, which means we don't need another server to distribute the load, as the primary server can handle it. Now the extra EC2 instance can be detached from the ELB and deleted, which happens automatically. Whenever load reaches 60% again, a new instance is created automatically and attached to the ELB, and so on.
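The scale-out/scale-in rule above can be summed up in a toy sketch. Note that `desired_capacity` is a hypothetical helper for illustration, not an AWS API; real auto scaling reacts to CloudWatch metrics:

```python
def desired_capacity(cpu_percent, current_instances, threshold=60, max_instances=10):
    """Toy model of the scaling rule: add a server when average CPU
    reaches the threshold, remove one when it falls below."""
    if cpu_percent >= threshold and current_instances < max_instances:
        return current_instances + 1   # scale out: attach a new instance to the ELB
    if cpu_percent < threshold and current_instances > 1:
        return current_instances - 1   # scale in: detach and delete the extra instance
    return current_instances           # steady state

print(desired_capacity(60, 1))  # 2 -- CPU hit 60%, a second instance is added
print(desired_capacity(59, 2))  # 1 -- CPU fell to 59%, the extra instance is removed
```

The `max_instances` cap and the floor of one instance mirror the minimum/maximum sizes you would set on a real Auto Scaling group.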

If you don't have elasticity, you will not get the real benefits of cloud computing.

Friday, November 3, 2017

AWS API Gateway Now Supports Regional Endpoints

Good news! You can now choose from two types of API endpoints when creating REST APIs and custom domains with Amazon API Gateway. A regional API endpoint is a new type of endpoint that is accessed from the same AWS region in which your REST API is deployed. This helps you reduce request latency when API requests originate from the same region as your REST API. Additionally, you can now choose to associate your own Amazon CloudFront distribution with the regional API endpoint. The second type of API endpoint is the edge-optimized API. Edge-optimized APIs are endpoints that are accessed through a CloudFront distribution that is created and managed by API Gateway. Previously, edge-optimized APIs were the default option for creating APIs with API Gateway.

So what does that mean for you? 

One use case: when API requests predominantly originate from an EC2 instance or from services within the same region as the API is deployed, a regional API endpoint will typically lower connection latency and is recommended for such scenarios.

For example, say an AWS Lambda function invokes an API; if both are hosted in the same region, performance will be higher.
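As a sketch of how this looks in code (the helper function is my own, not an official API), the endpoint type is chosen at API creation time via the `endpointConfiguration` parameter of boto3's API Gateway client:

```python
def create_regional_api(name, client=None):
    """Create a REST API with a regional endpoint instead of the
    previously-default edge-optimized one.

    `client` defaults to the real API Gateway client; tests can inject a stub.
    """
    if client is None:
        import boto3  # requires AWS credentials when run for real
        client = boto3.client("apigateway")
    return client.create_rest_api(
        name=name,
        # "REGIONAL" serves the API from the deployment region;
        # "EDGE" routes through an API Gateway-managed CloudFront distribution.
        endpointConfiguration={"types": ["REGIONAL"]},
    )
```

To layer your own CloudFront distribution on top, you would point it at the regional endpoint's hostname after creation.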

This feature is now available in US East (N. Virginia), US East (Ohio), US West (Oregon), US West (N. California), Canada (Central), South America (São Paulo), EU (Ireland), EU (Frankfurt), EU (London), Asia Pacific (Singapore), Asia Pacific (Tokyo), Asia Pacific (Sydney), Asia Pacific (Seoul), and Asia Pacific (Mumbai) AWS regions.

Thursday, November 2, 2017

The container execution model of AWS Lambda

When AWS Lambda executes your Lambda function on your behalf, it takes care of provisioning and managing the resources needed to run it. When you create a Lambda function, you specify configuration information, such as the amount of memory and maximum execution time that you want to allow for your Lambda function. When a Lambda function is invoked, AWS Lambda launches a container based on the configuration settings you provided. Container creation and deletion are handled entirely by AWS, and there is no way for a user to manage them.

You can expect a delay when Lambda is invoked for the first time, or when there is a long gap between subsequent requests, because it takes time to set up the container and perform other setup work. AWS Lambda tries to reuse the container for subsequent invocations of the Lambda function.

After a Lambda function is executed, AWS Lambda maintains the container for some time. That is, the service freezes the container for a while after a function execution completes, and if an invocation happens within that period, Lambda will reuse the previous container.

Container reuse has a few implications:

  • Any declarations in your Lambda function code outside the handler code remain initialized. For example, if your Lambda function establishes a DynamoDB connection on the first run, then instead of re-establishing the connection, the original connection is used in subsequent invocations.

  • Background processes or callbacks initiated by your Lambda function that did not complete when the function ended will resume if AWS Lambda chooses to reuse the container. You should make sure any background processes or callbacks in your code are complete before the code exits.

When you write your Lambda function code, do not assume that AWS Lambda always reuses the container, because it may choose not to. Depending on various other factors, AWS Lambda may simply create a new container instead of reusing an existing one.
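The first implication can be made concrete with a minimal handler sketch. Everything at module scope runs once per container, so warm invocations skip the expensive setup. The `INIT_COUNT` counter below is just instrumentation to make that visible, and `_connect` is a stand-in for something expensive like opening a DynamoDB connection:

```python
INIT_COUNT = 0

def _connect():
    """Stand-in for an expensive setup step, e.g. a DynamoDB connection."""
    global INIT_COUNT
    INIT_COUNT += 1
    return {"connected": True}

# Module scope: executed once when the container is created,
# not on every invocation.
db_connection = _connect()

def handler(event, context):
    # Warm invocations reuse the module-level connection established above.
    return {"initializations": INIT_COUNT, "connected": db_connection["connected"]}
```

Calling `handler` repeatedly in the same "container" (process) still reports a single initialization; a cold start in a fresh container would run the module-scope setup again.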