Logs

Logging is the first thing to implement when it comes to monitoring. When debugging problems in applications, developers spend most of their time combing through the logs to put together a chronology of what happened and where things went wrong.

To maximize the benefit of logging, it is important that the logs follow a specific structure and contain important information, including the following (a sketch of a log entry built from these fields follows the list):

  • A short, crisp description of the event that is being logged.
  • Relevant data such as the request ID and user ID. Sensitive details such as usernames (emails), social security numbers, and customer names should not be logged (or should be masked) to avoid leaking private information about customers.
  • A timestamp describing the time of occurrence of the event.
  • An identifier for the thread and host for the instance of the service.
  • File name, function name, line number.
  • Important request and response details for all APIs.
  • Tracing identifiers to identify service requests across microservices.
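
As a concrete illustration, here is a minimal sketch of a log entry carrying these fields, encoded as JSON with the standard library. The LogEntry type, its field names, and the sample values are assumptions for the example, not a prescribed schema:

package main

import (
	"encoding/json"
	"log"
	"os"
	"time"
)

// LogEntry groups the fields described above; the names are illustrative.
type LogEntry struct {
	Message   string    `json:"message"`    // short, crisp description of the event
	RequestID string    `json:"request_id"` // relevant, non-sensitive identifiers
	UserID    string    `json:"user_id"`
	Timestamp time.Time `json:"timestamp"`  // time of occurrence
	Host      string    `json:"host"`       // instance that emitted the event
	Caller    string    `json:"caller"`     // file name and line number of the call site
	TraceID   string    `json:"trace_id"`   // correlates the request across microservices
}

func main() {
	host, _ := os.Hostname()
	entry := LogEntry{
		Message:   "order created",
		RequestID: "req-42",
		UserID:    "user-7",
		Timestamp: time.Now().UTC(),
		Host:      host,
		Caller:    "orders.go:87",
		TraceID:   "trace-abc123",
	}
	b, err := json.Marshal(entry)
	if err != nil {
		log.Fatal(err)
	}
	os.Stdout.Write(append(b, '\n'))
}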

It is important to assign a well-thought-out level to each log message. Some information might be important in a debug environment, but in production, debug logs become verbose and, in certain situations, create so much noise that debuggability is hampered. Ideally, there should be three levels:

  • Debug: Verbose information, useful when analyzing programs in a non-production environment
  • Info: Events that are useful for debugging in production
  • Fatal: A critical failure that requires the program to exit

Go's standard library has a log package: https://golang.org/pkg/log/. It does not support leveled logging on its own, but it can be used to create a separate logger for each level, like so:

Debug = log.New(os.Stdout, "DEBUG ", log.LstdFlags)
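
Extending that idea, a minimal sketch of three leveled loggers built on the standard library could look like the following. The variable names, prefixes, and output streams are illustrative choices, not something the log package prescribes:

package main

import (
	"log"
	"os"
)

// One logger per level; each writes unbuffered to a standard stream.
var (
	Debug = log.New(os.Stdout, "DEBUG ", log.LstdFlags)
	Info  = log.New(os.Stdout, "INFO ", log.LstdFlags)
	Fatal = log.New(os.Stderr, "FATAL ", log.LstdFlags)
)

func main() {
	Debug.Println("opening database connection")
	Info.Println("request served")
	// For fatal events, the logger's Fatalln prints the message and exits:
	// Fatal.Fatalln("cannot bind to port 8080")
}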

A common mistake is for the application to concern itself with managing log files. Instead, each process should write its logs unbuffered, as an event stream, to stdout. During development, the developer can view this stream in the terminal. In production, it is easy to redirect the same stream to a file: ./my_service 2>> logfile (the default logger in Go writes to stderr, which is file descriptor 2).
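
If you prefer the stream on stdout, the standard logger's destination can be pointed there explicitly; this is just a small sketch of that idea:

package main

import (
	"log"
	"os"
)

func main() {
	// The standard logger writes to stderr by default;
	// send it to stdout so the whole event stream goes to one place.
	log.SetOutput(os.Stdout)
	log.Println("service started")
}

With the output on stdout, the same redirection becomes ./my_service >> logfile.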

If the standard log library is unsatisfactory, there is a wide variety of logging libraries that add functionality such as leveled logging on top of the basics.

A quick note on logging inside packages that you intend to ship to multiple developers: creating a logger instance inside the package is not a good idea, since the client application then becomes coupled to the logging library that the package uses. Even worse, if the final application has many packages, each with a different logger and a different format, browsing through the different logs will not be easy, not to mention the unnecessary bloat of multiple logging libraries. In this case, it is much more elegant to take a logger as input from the application (client) code, and have the package simply use that logger to emit events. The client could supply the logger through an initialization function that the library provides, as sketched below.
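
A minimal sketch of this pattern, assuming a hypothetical package named mypkg and an initialization function named SetLogger, could look like this:

package mypkg

import (
	"io"
	"log"
)

// logger is what the package uses to emit events.
// By default nothing is written; the client opts in via SetLogger.
var logger = log.New(io.Discard, "", 0)

// SetLogger lets the client application supply the logger this package writes to.
func SetLogger(l *log.Logger) {
	logger = l
}

// DoWork stands in for the package's real functionality.
func DoWork() {
	logger.Println("mypkg: doing work")
}

The client then wires in its own logger during initialization, for example mypkg.SetLogger(log.New(os.Stdout, "mypkg ", log.LstdFlags)), so every package ends up writing to the same stream in the same format.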

Logs from each instance need to be aggregated and made available in a central place. A common architecture for this is the Elasticsearch, Logstash, Kibana (ELK) stack. Elasticsearch is an inverted-index database service based on the Apache Lucene search engine. Logstash is an ingestion tool that accepts input from various sources, transforms it, and exports the result to multiple sinks, Elasticsearch in this case. Kibana is a visualization layer on top of Elasticsearch. Typically, Logstash takes logs from a file and ships them to Elasticsearch, where they are indexed into different indices with the format logstash-YYYY.MM.DD. Index patterns support wildcards for search, so to explore all of the log data from, say, June 2018, one could specify the index pattern logstash-2018.06*.

It is important to note that log files can grow very quickly. If they eat up the disk space, the application itself might be affected. Thus, it is important to rotate the logs: keep the most recent ones and discard the old ones. Ingestors like Logstash have tunables to handle log rotation as part of the ingestion process.
