Elasticsearch Docker Configuration


A sample Docker Compose file can bring up a three-node Elasticsearch cluster: node es01 listens on localhost:9200, while es02 and es03 talk to es01 over a Docker network. Note that this configuration exposes port 9200 on all network interfaces; given how Docker manipulates iptables on Linux, your Elasticsearch cluster may be publicly accessible, potentially bypassing any local firewall rules.

Easiest Way to Setup Elasticsearch Development Environment With Docker on Windows

In this article we will cover the easiest way to set up a standalone Elasticsearch development environment with Docker on Windows.

Before we start with the actual setup, let's review the tools we are going to use and get to know them a little better. We are going to set up Elasticsearch, Kibana and Logstash on your local machine using Docker. Here is a brief introduction to each of the tools we will be using.

What is Elasticsearch?

Elasticsearch is a free, open source tool in the Search-as-a-Service category of a tech stack that solves the problem of storing and searching data in near real time. It is used by many reputable companies and is popular with developers thanks to its simplicity of implementation, ease of scaling, and good documentation and support.

What is Kibana?

Kibana is a free, open source monitoring tool. It is a browser-based analytics and search dashboard for Elasticsearch. It is used alongside Elasticsearch because it is free and offers easy setup, a good user interface, the ability to build dashboards, and a user-friendly way to do many things with your Elasticsearch data.

What is Logstash?


Logstash is an open source tool for event and log management. You can use Logstash to collect and transform logs, ship them into Elasticsearch, and later view or analyse them with Kibana.

What is Docker?

Docker is another open source tool and the most popular container platform, enabling organizations to seamlessly build, share, and securely run any application.

Okay! So, What’s next?

Okay, so now you know a little bit about Elasticsearch, Kibana, Logstash and Docker. We will set up local instances of all three as Docker containers so that you can use them for development.

What will you require?

You will need Docker set up on your Windows machine. If you are on Windows 10, install Docker Desktop for Windows. For any earlier version of Windows, use VirtualBox and Docker Toolbox instead. In this article we assume you are using Docker Desktop.

Are you ready?

Once you have Docker working locally, you can use the docker-compose.yml shown below.
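The original download link isn't available here, so the following is a minimal sketch of such a docker-compose.yml; the image versions, service names, and port mappings are assumptions matching the localhost URLs listed later.

```yaml
version: '3'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.4.2
    environment:
      - discovery.type=single-node   # standalone development node
    ports:
      - "9200:9200"
  kibana:
    image: docker.elastic.co/kibana/kibana:6.4.2
    ports:
      - "5601:5601"
    depends_on:
      - elasticsearch
  logstash:
    image: docker.elastic.co/logstash/logstash:6.4.2
    ports:
      - "9600:9600"                  # Logstash monitoring API
    depends_on:
      - elasticsearch
```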

Open PowerShell and switch to the folder containing docker-compose.yml.

Enter this command to start: docker-compose up

This will download the latest Docker images of Elasticsearch, Kibana and Logstash and start containers that will be available at localhost:
Elasticsearch: http://localhost:9200
Kibana: http://localhost:5601
Logstash: http://localhost:9600


The docker-compose command, along with a docker-compose.yml file, is the easiest way to set up an Elasticsearch development environment with Docker on Windows. Now that all three are ready, start using them in your software development. Let us know how you go with it!

This is the third part of a series looking at how easy Docker makes it to explore and experiment with open source software. The previous two parts are available here.

Today we're going to look at Elasticsearch, and this will give us the chance to see some of the capabilities of Docker Compose.

To follow along with the commands in this tutorial, I recommend using Play with Docker, which allows you to run all these commands in the browser.

Start a new container running Elasticsearch

If you just want to try out Elasticsearch running in a single node, then we can do that with the docker run command shown below.

We're exposing port 9200 (for the REST API) and setting up a single-node cluster (using an environment variable), from the official Elasticsearch 6.4.2 image. I'm also showing how to set up a volume to store the index data in.
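The command itself isn't reproduced here, so this is a sketch matching that description; the container and volume names are assumptions.

```shell
docker run -d --name elasticsearch \
  -p 9200:9200 \
  -e "discovery.type=single-node" \
  -v esdata:/usr/share/elasticsearch/data \
  docker.elastic.co/elasticsearch/elasticsearch:6.4.2
```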

And all the Elasticsearch commands we run with curl will work just fine on this single container. But for this tutorial, I'm going to use a cluster created with docker-compose instead.

Use Docker Compose to create an Elasticsearch cluster

With docker-compose we can declare all the containers that make up an application in a YAML format. For each container we can also configure the environment variables that should be set, any volumes that are required, and define a network to allow the services to communicate with each other.

Here's the first version of our docker-compose.yml file. It defines a simple two-node cluster, and each node in the cluster has a volume so that our indexes can live independently of our containers, and survive upgrades (which we'll be doing later). Notice that we're using the version of elasticsearch tagged 6.4.1.
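The file itself isn't included here; a sketch consistent with that description might look like the following. The service and volume names are assumptions (the second node joins the cluster via zen discovery pointing at the first).

```yaml
version: '2.2'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.4.1
    environment:
      - cluster.name=docker-cluster
    volumes:
      - esdata1:/usr/share/elasticsearch/data
    ports:
      - "9200:9200"
    networks:
      - esnet
  elasticsearch2:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.4.1
    environment:
      - cluster.name=docker-cluster
      - "discovery.zen.ping.unicast.hosts=elasticsearch"
    volumes:
      - esdata2:/usr/share/elasticsearch/data
    networks:
      - esnet
volumes:
  esdata1:
  esdata2:
networks:
  esnet:
```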

To download this file locally as docker-compose-v1.yml you can use the following command:

And now we can use the docker-compose up command to start up the containers, and create all necessary resources like networks and volumes. We're using -d to run in the background just like we can with docker run
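Assuming the file has been saved as docker-compose.yml in the current directory, that start command is simply:

```shell
# start the cluster, detached, creating networks and volumes as needed
docker-compose up -d
```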

Check cluster health

We've exposed port 9200 on one of those containers, allowing us to query the cluster health with the following request:
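The request isn't shown here; assuming the default port mapping, it would be the standard cluster health endpoint:

```shell
curl "http://localhost:9200/_cluster/health?pretty"
```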

Create an index

Now let's create an index called customer
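The command isn't reproduced here; creating an index in Elasticsearch 6.x is a single PUT:

```shell
curl -X PUT "localhost:9200/customer?pretty"
```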

Add a new document

And let's add a document to that index:
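The original request isn't shown; a sketch using the 6.x `_doc` type (the document id and field values are illustrative assumptions):

```shell
curl -X PUT "localhost:9200/customer/_doc/1?pretty" \
  -H 'Content-Type: application/json' \
  -d '{ "name": "John Doe" }'
```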

By the way, if you're following along with PowerShell instead of bash you can use Invoke-RestMethod to accomplish the same thing.

View documents in the index

There are lots of ways to query elasticsearch indexes and I recommend you check out the Elasticsearch 6.4 Getting Started Guide for more details. However, we can easily retrieve the documents in our existing customer index with:
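A match-all search over the index is enough for this check:

```shell
curl "localhost:9200/customer/_search?pretty"
```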

Upgrade the cluster to 6.4.2

Suppose we now want to upgrade the nodes in our cluster to Elasticsearch 6.4.2 (we were previously running 6.4.1). What we can do is update our YAML file with the new container version numbers.

I have an updated YAML file available here, which you can download to use locally with

Before we upgrade our cluster, take a look at the container ids that are currently running with docker ps. These containers are not going to be 'upgraded' - they're going to be disposed of, and new containers running 6.4.2 will be created. However, the data is safe, because it's stored in the volumes. The volumes won't be deleted, and will be attached to the new containers.

To perform the upgrade we can use the following command.
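The command isn't reproduced here; it is the same docker-compose up as before, run against the updated file (the v2 file name is an assumption):

```shell
docker-compose -f docker-compose-v2.yml up -d
```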

We should see it saying 'recreating elasticsearch' and 'recreating elasticsearch2' as it discards the old containers and creates new ones.

Now if we run docker ps again we'll see new container ids and new image versions.

Check our index is still present

To ensure that our index is still present we can search again and check our document is still present.

Let's add another document into the index with:
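As before, the request isn't shown; a sketch with an assumed id and field value:

```shell
curl -X PUT "localhost:9200/customer/_doc/2?pretty" \
  -H 'Content-Type: application/json' \
  -d '{ "name": "Jane Doe" }'
```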

Upgrade to a three node cluster

OK, let's take this to the next level. I've created a third version of my docker-compose YAML file that defines a third container, with its own volume. The YAML file is available here.

Something important to note is that I needed to set the discovery.zen.minimum_master_nodes=2 environment variable to avoid split brain problems.
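The third file isn't included here; a sketch of the additional service, following the naming of the earlier two-node file (the minimum_master_nodes setting would also be added to the first two services):

```yaml
  elasticsearch3:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.4.2
    environment:
      - cluster.name=docker-cluster
      - "discovery.zen.ping.unicast.hosts=elasticsearch"
      - discovery.zen.minimum_master_nodes=2
    volumes:
      - esdata3:/usr/share/elasticsearch/data
    networks:
      - esnet
```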

You can download my example file with:

And then we can upgrade our cluster from two to three nodes with

The change of environment variable means that we will recreate both elasticsearch and elasticsearch2, and of course the new elasticsearch3 container and its volume will get created.

We should check the cluster status and if all went well, we'll see a cluster size of three:

Let's check our data is still intact by retrieving a document by id from our index
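Fetching by id (assuming the first document was stored with id 1) looks like:

```shell
curl "localhost:9200/customer/_doc/1?pretty"
```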

Add Kibana and head plugin

While I was preparing this tutorial I came across a really nice article by Ruan Bekker who takes this one step further by adding a couple more containers to the docker-compose file for an instance of Kibana and the Elasticsearch Head plugin.

So here's the final docker-compose.yaml file we'll be working with:
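The full file isn't reproduced here; a sketch of the two extra services that would be added alongside the three Elasticsearch nodes (image tags and service names are assumptions, with the head plugin on 9100 and Kibana on 5601 as described below):

```yaml
  kibana:
    image: docker.elastic.co/kibana/kibana:6.4.2
    environment:
      - ELASTICSEARCH_URL=http://elasticsearch:9200
    ports:
      - "5601:5601"
    networks:
      - esnet
  headPlugin:
    image: mobz/elasticsearch-head:5
    ports:
      - "9100:9100"
    networks:
      - esnet
```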

You can download my YAML file with

And now we can update our cluster again with

Try out Kibana

Once we've done this, we can visit the Kibana site by browsing to localhost:5601. If you were following along in 'Play with Docker' then you'll see special links appear for each port that is exposed (9200, 9100 and 5601).

If you click on the 5601 link, you'll be taken to an instance of Kibana. The first step will be to define an index pattern (e.g. customer*)

And then if you visit the discover tab, you'll see we can use Kibana to search the documents in our index:


Try out Elasticsearch head plugin

You can also visit localhost:9100 (or in Play with Docker, click the 9100 link) to use the Elasticsearch head plugin. This gives you a visualization of the cluster health.


Note that if you are using Play with Docker, you'll need to copy the port 9200 link and paste it into the Connect textbox to connect the head plugin to your Elasticsearch cluster.

Clean up


To stop and delete all the containers:
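```shell
docker-compose down
```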


And if you want to delete the volumes as well (so all index data will be lost), add the -v flag:
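```shell
docker-compose down -v
```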


In this tutorial we saw that not only is it really easy to get an instance of Elasticsearch running with Docker to experiment with the API, but also that with Docker Compose we can define collections of containers that communicate with one another and start them all easily with docker-compose up.

When we update our YAML file, Docker Compose can intelligently decide which containers need to be replaced and which can be left as they are.
