Best Elasticsearch practices in production

This section is dedicated to guiding you through the best practices and considerations to keep in mind when going into production.

Memory

  • Always set ES_HEAP_SIZE to 50% of the total available memory. Sorting and aggregations can both be memory hungry, so enough heap space is required to accommodate them. This property is set inside the /etc/init.d/elasticsearch file (see the example after this list).
  • A machine with 64 GB of RAM is ideal; however, 32 GB and 16 GB machines are also common. Less than 8 GB tends to be counterproductive (you end up needing many small machines), and more than 64 GB is problematic because, with the 50% rule, the heap crosses the roughly 32 GB limit beyond which the JVM can no longer use compressed object pointers.
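
For example, on a machine with 32 GB of RAM, the heap can be set through the ES_HEAP_SIZE environment variable (depending on the installation type, this lives in /etc/init.d/elasticsearch, /etc/default/elasticsearch, or /etc/sysconfig/elasticsearch):

    ES_HEAP_SIZE=16g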

CPU

Choose a modern processor with multiple cores. If you need to choose between faster CPUs or more cores, choose more cores. The extra concurrency that multiple cores offer will far outweigh a slightly faster clock speed. The number of threads is dependent on the number of cores. The more cores you have, the more threads you get for indexing, searching, merging, bulk, or other operations.
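
To see how the thread pools behave on your hardware, the cat API can be queried on any node (assuming it is reachable on localhost:9200); it prints per-node statistics for the bulk, index, and search thread pools:

    curl 'http://localhost:9200/_cat/thread_pool?v'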

Disks

  • If you can afford SSDs, they are far superior to any spinning media. SSD-backed nodes see boosts in both querying and indexing performance.
  • Avoid using network-attached storage (NAS) to store your data.

Network

  • The faster the network you have, the more performance you will get in a distributed system. Low latency helps to ensure that nodes communicate easily, while a high bandwidth helps in shard movement and recovery.
  • Avoid clusters that span multiple data centers even if the data centers are collocated in close proximity. Definitely avoid clusters that span large geographic distances.

General considerations

  • Prefer medium-to-large boxes. Avoid small machines, because you don't want to manage a cluster with a thousand nodes, and the overhead of simply running Elasticsearch is more apparent on such small boxes.
  • Always use an Oracle Java version later than JDK 1.7 Update 55 and avoid using OpenJDK.
  • A master node does not require many resources. In a cluster with 2 terabytes of data spread across hundreds of indices, 2 GB of RAM, 1 CPU core, and 10 GB of disk space are good enough for the master nodes. In the same scenario, client nodes with 8 GB of RAM and 2 CPU cores each are a very good configuration to handle millions of requests. The configuration of data nodes depends entirely on the speed of indexing and the type of queries and aggregations; they usually need very high configurations, such as 64 GB of RAM and 8 CPU cores. The role settings that correspond to these three node types are sketched after this list.
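
A minimal sketch of these role settings in elasticsearch.yml (Elasticsearch 1.x/2.x style); each node gets only the pair that matches its role:

    # Dedicated master node: manages the cluster, holds no data
    node.master: true
    node.data: false

    # Client node: coordinates requests, holds no data
    node.master: false
    node.data: false

    # Data node: holds shards and does the heavy indexing and search work
    node.master: false
    node.data: true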

Some other important configuration changes

  • Assign Names: Assign the cluster name and node name.
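
    For example, in elasticsearch.yml (the names here are only placeholders):

    cluster.name: logging-prod
    node.name: node-1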
  • Assign Paths: Assign the log path and data path.
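
    For example (the paths are placeholders; path.data also accepts a comma-separated list of directories):

    path.data: /data/elasticsearch
    path.logs: /var/log/elasticsearch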
  • Recovery Settings: Avoid shard shuffles during recovery. The recovery throttling settings should be tweaked only in large clusters; otherwise, they come with very good defaults.
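
    For example, in a hypothetical 10-node cluster, the following gateway settings hold recovery back until enough nodes have joined, which avoids needless shard shuffling after a full cluster restart:

    gateway.recover_after_nodes: 8
    gateway.expected_nodes: 10
    gateway.recover_after_time: 5m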

  • Disable the deletion of all the indices by a single command:

    action.disable_delete_all_indices: true
  • Ensure that you do not run more than one Elasticsearch instance from a single installation by setting the following property:
    node.max_local_storage_nodes: 1

  • Disable HTTP requests on all the data and master nodes in the following way:

    http.enabled: false
  • Plugin installations: Always install the plugin version that is compatible with the Elasticsearch version you are using, and do not forget to restart the node after installing a plugin.
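
    For example, the plugin script shipped with Elasticsearch 2.x installs the well-known head plugin as follows (the 1.x script uses bin/plugin --install instead):

    bin/plugin install mobz/elasticsearch-head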
  • Avoid storing Marvel indexes in the production cluster.
  • Clear the cache if the heap fills up on node start-up and shards refuse to initialize after going into the red state. This can be done by executing one of the following commands:
    • To clear the cache of the complete cluster:
      curl -XPOST 'http://localhost:9200/_cache/clear'
    • To clear the cache of a single index:
      curl -XPOST 'http://localhost:9200/index_name/_cache/clear'
  • Use routing wherever beneficial for faster indexing and querying.
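
    For example, with a hypothetical orders index that is routed by user, the same routing value is supplied at both index and search time:

    curl -XPUT 'http://localhost:9200/orders/order/1?routing=user_1' -d '{"user": "user_1", "total": 42}'
    curl -XGET 'http://localhost:9200/orders/order/_search?routing=user_1' -d '{"query": {"term": {"user": "user_1"}}}'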