Setting Up a Logging Infrastructure in Node.js

How to set up a logging infrastructure using Elasticsearch, Fluentd, and Kibana.

Abhinav Dhasmana
Bits and Pieces


Setting up the right logging infrastructure helps us find out what happened, debug issues, and monitor the application. At a very basic level, we should expect the following from our infrastructure:

  • Ability to free-text search our logs
  • Ability to search the logs of a specific API
  • Ability to filter all API logs by statusCode
  • The system should scale as we add more log data

Architecture

Architecture diagram using Elasticsearch, Fluentd, and Kibana

Local Setup

We will use Docker to manage our services.

Elasticsearch

Let’s get Elasticsearch up and running with the following command:

docker run -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" --name myES docker.elastic.co/elasticsearch/elasticsearch:7.4.1

We can check whether our container is up and running with the following command:

curl -X GET "localhost:9200/_cat/nodes?v&pretty"

Kibana

We can get Kibana up and running with another docker run command:

docker run --link myES:elasticsearch -p 5601:5601 kibana:7.4.1

Note that we are linking our Kibana and Elasticsearch containers using the --link flag.

If we go to http://localhost:5601/app/kibana, we will see our Kibana dashboard.

We can now run all our queries against the Elasticsearch cluster using Kibana. We can navigate to

http://localhost:5601/app/kibana#/dev_tools/console?_g=()

and run the query we ran before (just a little less verbose):

query for elastic cluster nodes using kibana
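
For reference, the Dev Tools equivalent of the earlier curl call is a one-liner:

GET _cat/nodes?v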

Fluentd

Fluentd is where all the data formatting will happen.

Let’s first build our Dockerfile. It does two things:

  • Install the necessary packages
  • Copy our config file into the Docker image
Dockerfile for fluentd
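
A minimal sketch of what that Dockerfile can look like, assuming the official fluent/fluentd base image and the fluent-plugin-elasticsearch output plugin (the exact image tag and plugin version are assumptions, not the demo's pinned versions):

# Dockerfile for our custom Fluentd image
# The base image tag is an assumption; pick one compatible with your setup
FROM fluent/fluentd:v1.7-debian-1

# Switch to root to install the Elasticsearch output plugin
USER root
RUN gem install fluent-plugin-elasticsearch --no-document

# Copy our config file into the image
COPY fluent.conf /fluentd/etc/fluent.conf

# Drop back to the unprivileged fluent user
USER fluent
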
Config file for Fluentd
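
And a sketch of what fluent.conf can look like. The HTTP input on port 9880 matches the port we publish below, and logstash_prefix fluentd is the value we will reuse later when defining the Kibana index pattern; the host value assumes the container runs with host networking so that localhost reaches Elasticsearch:

# fluent.conf: accept events over HTTP and forward them to Elasticsearch
<source>
  # The URL path of each POST becomes the Fluentd tag
  @type http
  port 9880
  bind 0.0.0.0
</source>

# Send everything to Elasticsearch as daily indices named fluentd-YYYY.MM.DD
<match **>
  @type elasticsearch
  # localhost works here because we run the container with --network host
  host localhost
  port 9200
  logstash_format true
  logstash_prefix fluentd
  <buffer>
    flush_interval 5s
  </buffer>
</match>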

Let’s spin this container up:

docker build -t abhinavdhasmana/fluentd .
docker run -p 9880:9880 --network host abhinavdhasmana/fluentd

Node.js App

I have created a small Node.js app for demo purposes, which you can find here. It’s a small Express app created using express-generator, and it uses morgan to generate logs in the Apache format. You can use your own app in your preferred language; as long as the output stays the same, our infrastructure does not care. Below is a sketch of one way such an app could forward its logs to Fluentd; after that, let’s build our Docker image and run it.
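
This is only a sketch, not necessarily how the linked demo app is wired: morgan gets a custom stream that POSTs one JSON record per request to Fluentd's HTTP input, so fields such as path and code arrive in Elasticsearch already structured. The tag logging.app and the field names are assumptions:

// app.js: minimal Express app that ships one JSON log record per request to Fluentd
const express = require('express');
const morgan = require('morgan');
const http = require('http');

const app = express();

// A morgan "stream" that POSTs each log line to Fluentd's HTTP input on localhost:9880.
// The URL path ("/logging.app") becomes the Fluentd tag.
const fluentdStream = {
  write: (line) => {
    const body = line.trim();
    const req = http.request({
      host: 'localhost',
      port: 9880,
      path: '/logging.app',
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        'Content-Length': Buffer.byteLength(body),
      },
    });
    req.on('error', (err) => console.error('Could not reach Fluentd:', err.message));
    req.end(body);
  },
};

// Custom morgan format: a JSON record with the fields we later filter on in Kibana
app.use(morgan((tokens, req, res) => JSON.stringify({
  method: tokens.method(req, res),
  path: tokens.url(req, res),
  code: Number(tokens.status(req, res)),
  responseTimeMs: Number(tokens['response-time'](req, res)),
}), { stream: fluentdStream }));

app.get('/', (req, res) => res.send('Hello from the logging demo'));

app.listen(3000, () => console.log('Demo app listening on port 3000'));

Because Fluentd's HTTP input accepts a JSON body directly, no extra parsing is needed in fluent.conf for these fields.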

docker build -t abhinavdhasmana/logging .
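
The matching run command is not shown above. Since the app needs to reach Fluentd on localhost:9880, one option (an assumption, mirroring the Fluentd container) is to run it with host networking, which also exposes the app on port 3000:

docker run --network host abhinavdhasmana/logging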

Of course, we can bring all the Docker containers up with a single docker-compose file, given below.

docker compose file for the EFK setup
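
A sketch of what that compose file can look like. The service names, build contexts, and environment variables are assumptions rather than the author's exact file. Note that on the Compose network the containers reach each other by service name, so fluent.conf would point at elasticsearch instead of localhost, and the app would point at fluentd instead of localhost:

# docker-compose.yml: sketch of the EFK stack plus the demo app
version: "3"
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.4.1
    environment:
      - discovery.type=single-node
    ports:
      - "9200:9200"
      - "9300:9300"

  kibana:
    image: kibana:7.4.1
    environment:
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
    ports:
      - "5601:5601"
    depends_on:
      - elasticsearch

  fluentd:
    # folder containing the Dockerfile and fluent.conf shown earlier
    build: ./fluentd
    ports:
      - "9880:9880"
    depends_on:
      - elasticsearch

  app:
    # the Node.js demo app
    build: .
    ports:
      - "3000:3000"
    depends_on:
      - fluentd

With this in place, docker-compose up --build brings the whole stack up in one go, and the --link flag from earlier is no longer needed because Compose puts all the services on a shared network.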

That’s it. Our infrastructure is ready. Now we can generate some logs by going to http://localhost:3000.

We now go to the Kibana dashboard again and define the index pattern to use:

setting up index for use in kibana

Note that in our fluent.conf we set logstash_prefix fluentd, and hence we use the same string here; the index pattern to define is fluentd-*. Next come some basic Kibana settings:

kibana configure settings

Elasticsearch uses dynamic mapping to guess the types of the fields it indexes. The snapshot below shows these mappings.

Mapping example of Elastic Search
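
We can also inspect what dynamic mapping came up with by asking Elasticsearch for the mapping of the daily indices (run this in Kibana's Dev Tools; fluentd-* matches the indices created above):

GET /fluentd-*/_mapping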

Let’s check how we are doing against the requirements we mentioned at the start:

  • Ability to free-text search our logs: With the help of Elasticsearch and Kibana, we can search on any field and get results.
  • Ability to search the logs of a specific API: In the “Available fields” section on the left in Kibana, we can see a path field. We can apply a filter on it to look at the APIs we are interested in.
  • Ability to filter all API logs by statusCode: Same as above; use the code field and apply a filter.
  • The system should scale as we add more log data: We started Elasticsearch in single-node mode with the environment variable discovery.type=single-node. We can start it in cluster mode, add more nodes, or use a hosted solution from any cloud provider of our choice. I have tried AWS, and it’s easy to set up. AWS also provides a managed Kibana instance for Elasticsearch at no extra cost.
