In this blog post, we will set up a Kafka and ZooKeeper cluster with the following requirements:
Things that we will not consider in this blog:
Events are the lifeline of an Event Driven Architecture. All microservices rely on the data in these events being accurate and complete. One of the many ways to implement an Event Driven Architecture is with Amazon Simple Notification Service (SNS) and Amazon Simple Queue Service (SQS).
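Before wiring up the real AWS services, the core idea — one SNS topic fanning events out to several SQS queues, each consumed independently — can be sketched in memory. This is an illustrative model only (the `Topic` class and queue arrays are hypothetical stand-ins, not the AWS SDK):

```javascript
// In-memory sketch of SNS -> SQS fan-out: a topic copies every published
// event to each subscribed queue, so each consumer processes independently.
class Topic {
  constructor() {
    this.queues = [];
  }

  // In real SNS this is a queue subscription with a delivery policy.
  subscribe(queue) {
    this.queues.push(queue);
  }

  // Publishing delivers a copy of the event to every subscribed queue.
  publish(event) {
    this.queues.forEach((q) => q.push(event));
  }
}

// Hypothetical consumers: an orders queue and a billing queue.
const topic = new Topic();
const ordersQueue = [];
const billingQueue = [];
topic.subscribe(ordersQueue);
topic.subscribe(billingQueue);
topic.publish({ type: "OrderPlaced", orderId: 1 });
```

With the real services, each SQS queue would be subscribed to the SNS topic, and each microservice would poll only its own queue.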
Setting up the right logging infrastructure helps us find out what happened, debug, and monitor the application. At a very basic level, we should expect the following from our infrastructure:
statusCode of all the APIs
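One way to capture the statusCode of every API call is to emit one structured JSON log line per request, so downstream tooling can filter and aggregate by field. A minimal sketch (the `accessLogEntry` helper and its field names are assumptions, not from the original post):

```javascript
// Build a structured, JSON-per-line access log entry that always carries
// the statusCode, so logs can be queried by status downstream.
function accessLogEntry({ method, path, statusCode, durationMs }) {
  return JSON.stringify({
    timestamp: new Date().toISOString(),
    // Treat 5xx responses as errors so they surface in alerting.
    level: statusCode >= 500 ? "error" : "info",
    method,
    path,
    statusCode,
    durationMs,
  });
}

// Example: log a failed upstream call.
console.log(
  accessLogEntry({ method: "GET", path: "/users", statusCode: 502, durationMs: 12 })
);
```

In a real service this function would sit in a request middleware, invoked once per response.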
This article will cover the following:
Our workflow will look like this
For every complex problem, there is an answer that is clear, simple, and wrong.
Let’s look at what a typical cache-miss workflow looks like
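The cache-miss path above follows the cache-aside pattern: check the cache first, and on a miss fetch from the source of truth and populate the cache before returning. A minimal sketch with an in-memory `Map` standing in for the cache and a fake `fetchFromDb` standing in for the database (both are hypothetical names, not from the post):

```javascript
// Cache-aside read path: cache hit returns immediately; a miss loads from
// the source of truth and populates the cache for subsequent reads.
const cache = new Map();

// Stand-in for a real database or downstream-service call.
async function fetchFromDb(key) {
  return `value-for-${key}`;
}

async function getWithCache(key) {
  if (cache.has(key)) {
    return { value: cache.get(key), hit: true };
  }
  const value = await fetchFromDb(key); // cache miss: go to the source
  cache.set(key, value);                // populate so the next read hits
  return { value, hit: false };
}
```

In production the `Map` would typically be Redis or Memcached, and entries would carry a TTL so stale data eventually expires.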
There can be scenarios where we have some code that is used by many microservices. For example, we can have code for making inter-service calls with a circuit breaker, logging, timeouts, and much more. We want every service in our infrastructure to adhere to the same guidelines. It would be great if we could install these as npm packages and version-manage the changes. This would give us more confidence in rolling out new and breaking changes, enabling higher velocity in product development.
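The circuit breaker mentioned above is a good candidate for such a shared package. A minimal sketch of the idea, assuming a simple policy (open after N consecutive failures, fail fast until a cooldown elapses, then allow one trial call); the `CircuitBreaker` class and its options are illustrative, not the post's actual package:

```javascript
// Minimal circuit breaker: after `maxFailures` consecutive failures the
// breaker opens and calls fail fast until `cooldownMs` has elapsed.
class CircuitBreaker {
  constructor(fn, { maxFailures = 3, cooldownMs = 10000 } = {}) {
    this.fn = fn;
    this.maxFailures = maxFailures;
    this.cooldownMs = cooldownMs;
    this.failures = 0;
    this.openedAt = null; // null means the circuit is closed
  }

  async call(...args) {
    if (this.openedAt !== null) {
      if (Date.now() - this.openedAt < this.cooldownMs) {
        throw new Error("circuit open: failing fast");
      }
      this.openedAt = null; // half-open: let one trial call through
    }
    try {
      const result = await this.fn(...args);
      this.failures = 0; // success resets the failure count
      return result;
    } catch (err) {
      this.failures += 1;
      if (this.failures >= this.maxFailures) this.openedAt = Date.now();
      throw err;
    }
  }
}
```

Published once as an npm package, every service gets the same failure-handling behavior, and policy changes ship as a version bump.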
We will be using Verdaccio as an npm registry. From its
Time-series data, in its most basic form, is a sequence of data points measuring a particular thing over a period of time. For more information, you can read the first part of this post, What the heck is time-series data?
Our use case is pretty simple. We have users who would like to store their medical observables, either manually or through some device. At the most basic level, think about saving heart rate/blood pressure/blood sugar several times a day, with a few hundred thousand active users a day (and the possibility of much larger traffic). …
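The core data shape here is small: per user and per metric, an append-only list of (timestamp, value) points, queried by time range. A minimal in-memory sketch of that model (the `TimeSeries` class and metric-key convention are hypothetical; a real deployment would use a time-series database):

```javascript
// Minimal time-series store: append (timestamp, value) points per metric
// key and query a time range, e.g. one user's heart-rate readings for a day.
class TimeSeries {
  constructor() {
    this.points = new Map(); // metricKey -> [{ t, v }]
  }

  append(metricKey, t, v) {
    if (!this.points.has(metricKey)) this.points.set(metricKey, []);
    this.points.get(metricKey).push({ t, v });
  }

  // Half-open range [from, to): matches how most TSDBs slice time buckets.
  range(metricKey, from, to) {
    return (this.points.get(metricKey) || []).filter(
      (p) => p.t >= from && p.t < to
    );
  }
}

// Hypothetical usage: one user's heart-rate samples.
const store = new TimeSeries();
store.append("user:1:heart_rate", 1000, 72);
store.append("user:1:heart_rate", 2000, 75);
store.append("user:1:heart_rate", 3000, 71);
```

At a few hundred thousand active users, the interesting problems are retention, downsampling, and write throughput — which is why a purpose-built time-series database is worth considering.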
We were working for a telecom client where we were rolling out offers for our customers. They had tons of business logic determining which offer a user would be given. In addition, they kept creating more categories of offers as time passed. We saw code that looked like this
Every time a new offer type was introduced, or existing rules needed modification, we needed to change
Calling this class looked something like this
We have the following issues with the code:
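A common remedy for this kind of ever-growing conditional is the strategy pattern: each offer type maps to its own rule function, so adding a new offer type means registering a strategy rather than editing a shared switch. A minimal sketch (the registry helpers and the two offer rules shown are hypothetical illustrations, not the client's actual business logic):

```javascript
// Strategy-style registry: one rule function per offer type.
const offerStrategies = new Map();

function registerOffer(type, rule) {
  offerStrategies.set(type, rule);
}

function computeOffer(type, user) {
  const rule = offerStrategies.get(type);
  if (!rule) throw new Error(`unknown offer type: ${type}`);
  return rule(user);
}

// Hypothetical offer rules, for illustration only.
registerOffer("newUser", (user) => (user.ageDays < 30 ? "20% off" : null));
registerOffer("loyalty", (user) => (user.ageDays >= 365 ? "free month" : null));
```

New offer categories now arrive as new `registerOffer` calls — each rule lives, and is tested, in isolation.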
Choose between HTTP/1.1 and gRPC. TL;DR: use gRPC
How microservices communicate with each other can affect the performance and scalability of the application. Communication between services can be synchronous or asynchronous. For this blog post, we will focus on synchronous communication. We have two common protocols at our disposal
Let’s look at some sample code and run some performance tests to make our choice.
Let’s start small and try to get one microservice talking to another.
Design pattern for “when shit happens”
We have a serviceA which has two APIs:
/data, which depends on an external service
/data2, which does not depend on any external service
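Since only /data has an external dependency, it is the endpoint that needs a plan for "when shit happens": catch the dependency failure and degrade gracefully instead of surfacing a 5xx. A minimal sketch of that fallback shape (the `getData` helper and fallback payload are hypothetical, not serviceA's real code):

```javascript
// Serve /data with a fallback when its external dependency fails;
// /data2 has no dependency, so it needs no such handling.
async function getData(fetchExternal) {
  try {
    return { source: "live", payload: await fetchExternal() };
  } catch (err) {
    // Degrade gracefully: return a cached/default payload
    // instead of propagating the failure to the caller.
    return { source: "fallback", payload: { cached: true } };
  }
}
```

In practice the fallback might be a stale cache entry or a sensible default, and the error would still be logged and counted so the failure is visible to operators.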
Senior Staff Engineer @freshworks. Ex-McKinsey/Microsoft/Slideshare/SAP, Tech Enthusiast, Passionate about India. Opinions are mine