Install Kibana

Kibana is also available for all platforms; you can download it from https://www.elastic.co/downloads/kibana. If you're using a Windows machine, download the ZIP archive for Windows and unzip it.

If you want to customize the Kibana configuration, open config/kibana.yml in an editor and adjust the settings to match your application infrastructure; a minimal sketch of this file is shown after the run command below. Finally, you can run Kibana by using the following command:

bin/kibana
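
For reference, here is a minimal sketch of what a customized config/kibana.yml might contain, assuming Elasticsearch is running locally on port 9200. The exact key names vary between Kibana versions (for example, elasticsearch.url in older releases versus elasticsearch.hosts in newer ones), so check the defaults shipped with your download:

# config/kibana.yml -- illustrative values only
server.port: 5601
server.host: "localhost"
# Points Kibana at the local Elasticsearch instance (the key is elasticsearch.hosts in newer versions)
elasticsearch.url: "http://localhost:9200"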

Let's see the following screenshot:

As you can see in the preceding screenshot, Kibana by default is running on port 5601. Let's access http://localhost:5601 in the browser:

Step 2: Modify our microservices (the Eureka, Account, and Customer services) by adding some log statements. We are using SLF4J to generate log messages; let's add the following logs to the AccountController and CustomerController controller classes:

...
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
...
@RestController
public class AccountController {
    private static final Logger logger = LoggerFactory.getLogger(AccountController.class);
    ...
    @GetMapping(value = "/account")
    public Iterable<Account> all() {
        logger.info("Find all accounts information");
        return accountRepository.findAll();
    }
    ...
}

As you can see in the preceding code snippet, I have added a log statement to all the request methods of AccountController. Similarly, I have added log statements to the CustomerController:

...
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
...
@RestController
public class CustomerController {
    private static final Logger logger = LoggerFactory.getLogger(CustomerController.class);
    ...
    @GetMapping(value = "/customer/{customerId}")
    public Customer findByAccountId(@PathVariable Integer customerId) {
        Customer customer = customerRepository.findByCustomerId(customerId);
        customer.setAccount(accountService.findByCutomer(customerId));
        logger.info("Find Customer information by id: {}", customerId);
        return customer;
    }
    ...
}

As you can see, we have added an info-level log statement in each request method of this CustomerController. Let's add the Maven dependency for Logstash.

Step 3: Add the Logstash Maven dependency to the Maven configuration file (pom.xml) of each microservice:

<dependency>
    <groupId>net.logstash.logback</groupId>
    <artifactId>logstash-logback-encoder</artifactId>
    <version>5.0</version>
</dependency>

As you can see in the preceding code snippet, we have added the logstash-logback-encoder dependency to the pom.xml file of each microservice to integrate Logback with Logstash.

Step 4: We have to override the default Logback configuration because we need to add an appender for Logstash. Add a new logback.xml under src/main/resources. The following logback.xml file should be added to each microservice:

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <include resource="org/springframework/boot/logging/logback/defaults.xml"/>
    <include resource="org/springframework/boot/logging/logback/console-appender.xml"/>
    <appender name="stash" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
        <destination>localhost:4567</destination>
        <!-- encoder is required -->
        <encoder class="net.logstash.logback.encoder.LogstashEncoder"/>
    </appender>
    <root level="INFO">
        <appender-ref ref="CONSOLE"/>
        <appender-ref ref="stash"/>
    </root>
</configuration>

As you can see in the preceding logback configuration file (logback.xml), this file overrides the default Logback configuration. The custom configuration adds a new TCP socket appender that streams all log messages to the Logstash service, which will run on port 4567. We have to configure the same port in the Logstash configuration file, as we will see in step 5. It is important to add an encoder, as shown in the preceding configuration.
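
To give you an idea of what the stash appender sends over the TCP socket, LogstashEncoder serializes every log event as a single line of JSON, roughly like the following (the field names are the encoder's defaults, the values are purely illustrative, and both can vary with the encoder version and configuration):

{
  "@timestamp": "2018-06-10T10:15:30.123+05:30",
  "@version": "1",
  "message": "Find all accounts information",
  "logger_name": "com.packt.example.AccountController",
  "thread_name": "http-nio-exec-1",
  "level": "INFO",
  "level_value": 20000
}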

Step 5: Create a Logstash configuration file (logstash.conf):

input {
  tcp {
    port => 4567
    host => "localhost"
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
  stdout {
    codec => rubydebug
  }
}

As you can see in the preceding logstash.conf file, we have configured both the input and the output. Logstash will listen on TCP port 4567 for incoming log events, and the output is sent to Elasticsearch on port 9200; the stdout output is optional and is only there for debugging. We can place this file anywhere and run the Logstash service with it.
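
One thing to keep in mind: LogstashEncoder writes JSON, so depending on your Logstash version you may need to tell the tcp input to parse it; otherwise each event can end up in Elasticsearch as a single unparsed message string. If you run into this, a common approach is to add a JSON codec to the tcp input, for example:

input {
  tcp {
    port => 4567
    host => "localhost"
    codec => json_lines
  }
}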

Step 6: Run Elasticsearch, Kibana, and Logstash from their respective installation folders:

./bin/elasticsearch
./bin/kibana
./bin/logstash -f logstash.conf

Step 7: Run all the microservices of the example, such as the Account and Customer microservices. Accessing the Customer microservice will send log messages to Logstash.
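
To generate some log traffic, hit the endpoints we instrumented earlier, for example with curl. The port numbers and the customer ID below are placeholders; use whatever ports your Account and Customer services actually run on, and an ID that exists in your data:

curl http://localhost:6060/account
curl http://localhost:6061/customer/1001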

Step 8: Open the Kibana dashboard in the browser at http://localhost:5601 and go to the settings to create an index pattern:

As you can see in the preceding screenshot, we have set up the index pattern logstash-*.

Step 9: Click on the Discover option in the menu. It will render the log dashboard:

As you can see in the preceding screenshot of the Kibana UI, the log messages are displayed on the dashboard. Kibana provides out-of-the-box features to build summary charts and graphs from log messages.
