Installing the Elastic Stack

When installing the Elastic Stack, you must use the same version across the entire stack. For example, if you are using Elasticsearch 9.0.0-beta1, you install Beats 9.0.0-beta1, APM Server 9.0.0-beta1, Elasticsearch Hadoop 9.0.0-beta1, Kibana 9.0.0-beta1, and Logstash 9.0.0-beta1.
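If you script your installs, you can make this version check explicit. The following is a small sketch; the same_version helper is illustrative, not part of the Elastic Stack:

```shell
# Sketch: succeed only if every version string passed in is identical.
same_version() {
  [ "$(printf '%s\n' "$@" | sort -u | wc -l)" -eq 1 ]
}

# Example with the versions from this section:
same_version 9.0.0-beta1 9.0.0-beta1 9.0.0-beta1 && echo "versions match"
```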

If you’re upgrading an existing installation, see Upgrading the Elastic Stack for information about how to ensure compatibility with 9.0.0-beta1.

For an example of installing and configuring the Elastic Stack, you can try out our Tutorial 1: Installing a self-managed Elastic Stack. After that, you can also learn how to secure your installation for production following the steps in Tutorial 2: Securing a self-managed Elastic Stack.

Network requirements

To install the Elastic Stack on-premises, the following ports need to be open for each component.

  • 3002: Enterprise Search
  • 5044: Elastic Agent → Logstash, Beats → Logstash
  • 5601: Kibana, Elastic Agent → Fleet, Fleet Server → Fleet
  • 8220: Elastic Agent → Fleet Server, APM Server
  • 9200-9300: Elasticsearch REST API
  • 9300-9400: Elasticsearch node transport and communication
  • 9600-9700: Logstash REST API

Each Elastic integration has its own ports and dependencies. Verify these ports and dependencies before installation. Refer to Integrations.
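On hosts running RHEL 8 (used for the examples in this guide), open ports are typically managed by firewalld. As a sketch, you can generate the firewall-cmd invocations for the default ports above, review them, and then run them with sudo; this assumes firewalld, so adapt the commands to your own firewall:

```shell
# Print firewall-cmd invocations for the default Elastic Stack ports listed above.
# Review the output, then run the commands with sudo (assumes firewalld on RHEL 8).
fw_commands() {
  for port in 3002 5044 5601 8220 9200-9300 9300-9400 9600-9700; do
    echo "firewall-cmd --permanent --add-port=${port}/tcp"
  done
  echo "firewall-cmd --reload"
}

fw_commands
```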

For more information on supported network configurations, refer to Elasticsearch Ingest Architectures.

Installation order

Install the Elastic Stack products you want to use in the following order:

  1. Elasticsearch (install instructions)
  2. Kibana (install)
  3. Logstash (install)
  4. Elastic Agent (install instructions) or Beats (install instructions)
  5. APM (install instructions)
  6. Elasticsearch Hadoop (install instructions)

Installing in this order ensures that the components each product depends on are in place.

Installing on Elastic Cloud

Our hosted Elasticsearch Service is available on AWS, GCP, and Azure, and you can try it for free.

Installing on Elastic Cloud is easy: a single click creates an Elasticsearch cluster configured to the size you want, with or without high availability. The subscription features are always installed, so you automatically have the ability to secure and monitor your cluster. Kibana is enabled automatically, and a number of popular plugins are readily available.

Some Elastic Cloud features can be used only with a specific subscription. For more information, see https://www.elastic.co/pricing/.

Tutorial 1: Installing a self-managed Elastic Stack

This tutorial demonstrates how to install and configure the Elastic Stack in a self-managed environment. Following these steps, you’ll set up a three-node Elasticsearch cluster, with Kibana, Fleet Server, and Elastic Agent, each on separate hosts. The Elastic Agent will be configured with the System integration, enabling it to gather local system logs and metrics and deliver them to the Elasticsearch cluster. Finally, you’ll learn how to view the system data in Kibana.

It should take between one and two hours to complete these steps.

If you’re using these steps to configure a production cluster that uses trusted CA-signed certificates for secure communications, after completing Step 6 to install Kibana we recommend jumping directly to Tutorial 2: Securing a self-managed Elastic Stack.

The second tutorial includes steps to configure security across the Elastic Stack, and then to set up Fleet Server and Elastic Agent with SSL certificates enabled.

Prerequisites and assumptions

To get started, you’ll need the following:

  • A set of virtual or physical hosts on which to install each stack component.
  • On each host, a super user account with sudo privileges.

The examples in this guide use RPM packages to install the Elastic Stack components on hosts running Red Hat Enterprise Linux 8. The steps for other install methods and operating systems are similar, and can be found in the documentation linked from each section. The packages that you’ll install are:

For Elastic Agent and Fleet Server (both of which use the elastic-agent-9.0.0-beta1-linux-x86_64.tar.gz package), we recommend using TAR/ZIP packages over RPM/DEB system packages, since only the former support upgrading using Fleet.

Special considerations such as firewalls and proxy servers are not covered here.

For the basic ports and protocols required for the installation to work, refer to the following overview section.

Elastic Stack overview

Before starting, take a moment to familiarize yourself with the Elastic Stack components.

Image showing the relationships between stack components

To learn more about the Elastic Stack and how each of these components are related, refer to An overview of the Elastic Stack.

Step 1: Set up the first Elasticsearch node

To begin, use RPM to install Elasticsearch on the first host. This initial Elasticsearch instance will serve as the master node.

  1. Log in to the host where you’d like to set up your first Elasticsearch node.
  2. Create a working directory for the installation package:

    mkdir elastic-install-files
  3. Change into the new directory:

    cd elastic-install-files
  4. Download the Elasticsearch RPM and checksum file from the Elastic Artifact Registry. You can find details about these steps in the section Download and install the RPM manually.

    wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-9.0.0-beta1-x86_64.rpm
    wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-9.0.0-beta1-x86_64.rpm.sha512
  5. Confirm the validity of the downloaded package by checking the SHA of the downloaded RPM against the published checksum:

    shasum -a 512 -c elasticsearch-9.0.0-beta1-x86_64.rpm.sha512

    The command should return: elasticsearch-9.0.0-beta1-x86_64.rpm: OK.

  6. Run the Elasticsearch install command:

    sudo rpm --install elasticsearch-9.0.0-beta1-x86_64.rpm

    The Elasticsearch install process enables certain security features by default, including the following:

    • Authentication and authorization are enabled, including a built-in elastic superuser account.
    • Certificates and keys for TLS are generated for the transport and HTTP layer, and TLS is enabled and configured with these keys and certificates.
  7. Copy the terminal output from the install command to a local file. In particular, you’ll need the password for the built-in elastic superuser account. The output also contains the commands to enable Elasticsearch to run as a service, which you’ll use in the next step.
  8. Run the following two commands to enable Elasticsearch to run as a service using systemd. This enables Elasticsearch to start automatically when the host system reboots. You can find details about this and the following steps in Running Elasticsearch with systemd.

    sudo systemctl daemon-reload
    sudo systemctl enable elasticsearch.service

Step 2: Configure the first Elasticsearch node for connectivity

Before moving ahead to configure additional Elasticsearch nodes, you’ll need to update the Elasticsearch configuration on this first node so that other hosts are able to connect to it. This is done by updating the settings in the elasticsearch.yml file. For details about all available settings refer to Configuring Elasticsearch.

  1. In a terminal, run the ifconfig command and copy the value for the host inet IP address (for example, 10.128.0.84). You’ll need this value later.
  2. Open the Elasticsearch configuration file in a text editor, such as vim:

    sudo vim /etc/elasticsearch/elasticsearch.yml
  3. In a multi-node Elasticsearch cluster, all of the Elasticsearch instances need to share the same cluster name.

    In the configuration file, uncomment the line #cluster.name: my-application and give the cluster any name that you’d like:

    cluster.name: elasticsearch-demo
  4. By default, Elasticsearch runs on localhost. In order for Elasticsearch instances on other nodes to be able to join the cluster, you’ll need to set up Elasticsearch to run on a routable, external IP address.

    Uncomment the line #network.host: 192.168.0.1 and replace the default address with the value that you copied from the ifconfig command output. For example:

    network.host: 10.128.0.84
  5. Elasticsearch must also be configured to listen for connections from other, external hosts.

    Uncomment the line #transport.host: 0.0.0.0. The 0.0.0.0 setting enables Elasticsearch to listen for connections on all available network interfaces. Note that in a production environment you might want to restrict this by setting this value to match the value set for network.host.

    transport.host: 0.0.0.0

    You can find details about the network.host and transport.host settings in the Elasticsearch Networking documentation.

  6. Save your changes and close the editor.
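As a quick sanity check, you can confirm that all three settings from this step are now uncommented. The check_es_settings helper below is a sketch defined here for illustration; pass it the path to your config file:

```shell
# Sketch: print the settings changed in this step from an elasticsearch.yml file.
# Usage: check_es_settings /etc/elasticsearch/elasticsearch.yml
check_es_settings() {
  # Match only uncommented lines for the three settings edited above.
  grep -E '^(cluster\.name|network\.host|transport\.host):' "$1"
}
```

All three lines should appear in the output, with your own cluster name and host address.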

Step 3: Start Elasticsearch

  1. Now, it’s time to start the Elasticsearch service:

    sudo systemctl start elasticsearch.service

    If you need to, you can stop the service by running sudo systemctl stop elasticsearch.service.

  2. Make sure that Elasticsearch is running properly.

    sudo curl --cacert /etc/elasticsearch/certs/http_ca.crt -u elastic:$ELASTIC_PASSWORD https://localhost:9200

    In the command, replace $ELASTIC_PASSWORD with the elastic superuser password that you copied from the install command output.

    If all is well, the command returns a response like this:

    {
      "name" : "Cp9oae6",
      "cluster_name" : "elasticsearch",
      "cluster_uuid" : "AT69_C_DTp-1qgIJlatQqA",
      "version" : {
        "number" : "{version_qualified}",
        "build_type" : "{build_type}",
        "build_hash" : "f27399d",
        "build_flavor" : "default",
        "build_date" : "2016-03-30T09:51:41.449Z",
        "build_snapshot" : false,
        "lucene_version" : "{lucene_version}",
        "minimum_wire_compatibility_version" : "1.2.3",
        "minimum_index_compatibility_version" : "1.2.3"
      },
      "tagline" : "You Know, for Search"
    }
  3. Finally, check the status of Elasticsearch:

    sudo systemctl status elasticsearch

    As with the previous curl command, the output should confirm that Elasticsearch started successfully. Type q to exit from the status command results.
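If you want to script this check, you can pull individual fields out of the JSON response without installing extra tooling. The json_field function below is a small helper defined here for illustration (not part of the stack):

```shell
# Sketch: extract a string field from a JSON response like the one above.
# Reads JSON on stdin; prints the first value found for the given field name.
json_field() {
  grep -o "\"$1\" *: *\"[^\"]*\"" | head -n 1 | sed 's/.*: *"\(.*\)"/\1/'
}

# Example:
#   sudo curl --cacert /etc/elasticsearch/certs/http_ca.crt \
#     -u elastic:$ELASTIC_PASSWORD https://localhost:9200 | json_field cluster_name
```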

Step 4: Set up a second Elasticsearch node

To set up a second Elasticsearch node, the initial steps are similar to those that you followed for Step 1: Set up the first Elasticsearch node.

  1. Log in to the host where you’d like to set up your second Elasticsearch instance.
  2. Create a working directory for the installation package:

    mkdir elastic-install-files
  3. Change into the new directory:

    cd elastic-install-files
  4. Download the Elasticsearch RPM and checksum file:

    wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-9.0.0-beta1-x86_64.rpm
    wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-9.0.0-beta1-x86_64.rpm.sha512
  5. Check the SHA of the downloaded RPM:

    shasum -a 512 -c elasticsearch-9.0.0-beta1-x86_64.rpm.sha512
  6. Run the Elasticsearch install command:

    sudo rpm --install elasticsearch-9.0.0-beta1-x86_64.rpm

    Unlike the setup for the first Elasticsearch node, in this case you don’t need to copy the output of the install command, since these settings will be updated in a later step.

  7. Enable Elasticsearch to run as a service:

    sudo systemctl daemon-reload
    sudo systemctl enable elasticsearch.service

    Don’t start the Elasticsearch service yet! There are a few more configuration steps to do before starting the service.

  8. To enable this second Elasticsearch node to connect to the first, you need to configure an enrollment token.

    Be sure to run all of these configuration steps before starting the Elasticsearch service.

    You can find additional details about these steps in Reconfigure a node to join an existing cluster and also in Enroll nodes in an existing cluster.

    Return to your terminal shell on the first Elasticsearch node and generate a node enrollment token:

    sudo /usr/share/elasticsearch/bin/elasticsearch-create-enrollment-token -s node
  9. Copy the generated enrollment token from the command output.

    Note the following tips about enrollment tokens:

    1. An enrollment token has a lifespan of 30 minutes. In case the elasticsearch-reconfigure-node command returns an Invalid enrollment token error, try generating a new token.
    2. Be sure not to confuse an Elasticsearch enrollment token (for enrolling Elasticsearch nodes in an existing cluster) with a Kibana enrollment token (to enroll your Kibana instance with Elasticsearch, as described in the next section). These two tokens are not interchangeable.
  10. In the terminal shell for your second Elasticsearch node, pass the enrollment token as a parameter to the elasticsearch-reconfigure-node tool:

    sudo /usr/share/elasticsearch/bin/elasticsearch-reconfigure-node --enrollment-token <enrollment-token>

    In the command, replace <enrollment-token> with the generated token that you copied.

  11. Answer the Do you want to continue prompt with yes (y). The new Elasticsearch node will be reconfigured.
  12. In a terminal, run ifconfig and copy the value for the host inet IP address. You’ll need this value later.
  13. Open the second Elasticsearch instance configuration file in a text editor:

    sudo vim /etc/elasticsearch/elasticsearch.yml

    Notice that, as a result of having run the elasticsearch-reconfigure-node tool, certain settings have been updated. For example:

    • The transport.host: 0.0.0.0 setting is already uncommented.
    • The discovery.seed_hosts setting has the value that you added for network.host on the first Elasticsearch node. As you add each new Elasticsearch node to the cluster, the discovery.seed_hosts setting will contain an array of the IP addresses and port numbers used to connect to each Elasticsearch node that was previously added to the cluster.
  14. In the configuration file, uncomment the line #cluster.name: my-application and set it to match the name you specified for the first Elasticsearch node:

    cluster.name: elasticsearch-demo
  15. As with the first Elasticsearch node, you’ll need to set up Elasticsearch to run on a routable, external IP address. Uncomment the line #network.host: 192.168.0.1 and replace the default address with the value that you copied. For example:

    network.host: 10.128.0.132
  16. Save your changes and close the editor.
  17. Start Elasticsearch on the second node:

    sudo systemctl start elasticsearch.service
  18. Optionally, to view the progress as the second Elasticsearch node starts up and connects to the first Elasticsearch node, open a new terminal into the second node and tail the Elasticsearch log file:

    sudo tail -f /var/log/elasticsearch/elasticsearch-demo.log

    Notice in the log file some helpful diagnostics, such as:

    • Security is enabled
    • Profiling is enabled
    • using discovery type [multi-node]
    • initialized
    • starting...

      After a minute or so, the log should show a message like:

      [<hostname2>] master node changed {previous [], current [<hostname1>...]}

      Here, hostname1 is your first Elasticsearch instance node, and hostname2 is your second Elasticsearch instance node.

      The message indicates that the second Elasticsearch node has successfully contacted the initial Elasticsearch node and joined the cluster.

  19. As a final check, run the following curl request on the new node to confirm that Elasticsearch is running properly and reachable at the new node’s localhost address. Note that you need to replace $ELASTIC_PASSWORD with the same elastic superuser password that you used on the first Elasticsearch node.

    sudo curl --cacert /etc/elasticsearch/certs/http_ca.crt -u elastic:$ELASTIC_PASSWORD https://localhost:9200
    {
      "name" : "Cp9oae6",
      "cluster_name" : "elasticsearch",
      "cluster_uuid" : "AT69_C_DTp-1qgIJlatQqA",
      "version" : {
        "number" : "{version_qualified}",
        "build_type" : "{build_type}",
        "build_hash" : "f27399d",
        "build_flavor" : "default",
        "build_date" : "2016-03-30T09:51:41.449Z",
        "build_snapshot" : false,
        "lucene_version" : "{lucene_version}",
        "minimum_wire_compatibility_version" : "1.2.3",
        "minimum_index_compatibility_version" : "1.2.3"
      },
      "tagline" : "You Know, for Search"
    }
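You can also confirm the cluster size directly: the _cat/nodes API lists one row per node, so after this step it should show two rows. As a sketch, count_nodes is a helper defined here for illustration:

```shell
# Sketch: count the nodes reported by the _cat/nodes API.
# The "?v" flag adds a header row, which this helper drops before counting.
count_nodes() {
  tail -n +2 | wc -l
}

# Example (should report 2 once the second node has joined):
#   sudo curl --cacert /etc/elasticsearch/certs/http_ca.crt \
#     -u elastic:$ELASTIC_PASSWORD "https://localhost:9200/_cat/nodes?v" | count_nodes
```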

Step 5: Set up additional Elasticsearch nodes

To set up your next Elasticsearch node, follow exactly the same steps as you did previously in Step 4: Set up a second Elasticsearch node. The process is identical for each additional Elasticsearch node that you would like to add to the cluster. As a recommended best practice, create a new enrollment token for each new node that you add.

Step 6: Install Kibana

As with Elasticsearch, you can use RPM to install Kibana on another host. You can find details about all of the following steps in the section Install Kibana with RPM.

  1. Log in to the host where you’d like to install Kibana and create a working directory for the installation package:

    mkdir kibana-install-files
  2. Change into the new directory:

    cd kibana-install-files
  3. Download the Kibana RPM and checksum file from the Elastic website.

    wget https://artifacts.elastic.co/downloads/kibana/kibana-9.0.0-beta1-x86_64.rpm
    wget https://artifacts.elastic.co/downloads/kibana/kibana-9.0.0-beta1-x86_64.rpm.sha512
  4. Confirm the validity of the downloaded package by checking the SHA of the downloaded RPM against the published checksum:

    shasum -a 512 -c kibana-9.0.0-beta1-x86_64.rpm.sha512

    The command should return: kibana-9.0.0-beta1-x86_64.rpm: OK.

  5. Run the Kibana install command:

    sudo rpm --install kibana-9.0.0-beta1-x86_64.rpm
  6. As with each additional Elasticsearch node that you added, to enable this Kibana instance to connect to the first Elasticsearch node, you need to configure an enrollment token.

    Return to your terminal shell on the first Elasticsearch node.

  7. Run the elasticsearch-create-enrollment-token command with the -s kibana option to generate a Kibana enrollment token:

    sudo /usr/share/elasticsearch/bin/elasticsearch-create-enrollment-token -s kibana
  8. Copy the generated enrollment token from the command output.
  9. Back on the Kibana host, run the following two commands to enable Kibana to run as a service using systemd, so that Kibana starts automatically when the host system reboots.

    sudo systemctl daemon-reload
    sudo systemctl enable kibana.service
  10. Before starting the Kibana service there’s one configuration change to make, to set Kibana to run on the Elasticsearch host IP address. This is done by updating the settings in the kibana.yml file. For details about all available settings refer to Configure Kibana.
  11. In a terminal, run the ifconfig command and copy the value for the host inet IP address.
  12. Open the Kibana configuration file for editing:

    sudo vim /etc/kibana/kibana.yml
  13. Uncomment the line #server.host: localhost and replace the default address with the inet value that you copied from the ifconfig command. For example:

    server.host: 10.128.0.28
  14. Save your changes and close the editor.
  15. Start the Kibana service:

    sudo systemctl start kibana.service

    If you need to, you can stop the service by running sudo systemctl stop kibana.service.

  16. Run the status command to get details about the Kibana service.

    sudo systemctl status kibana
  17. In the status command output, a URL is shown with:

    • A host address to access Kibana
    • A six-digit verification code

      For example:

      Kibana has not been configured.
      Go to http://10.128.0.28:5601/?code=<code> to get started.

      Make a note of the verification code.

  18. Open a web browser to the external IP address of the Kibana host machine, for example: http://<kibana-host-address>:5601.

    It can take a minute or two for Kibana to start up, so refresh the page if you don’t see a prompt right away.

  19. When Kibana starts you’re prompted to provide an enrollment token. Paste in the Kibana enrollment token that you generated earlier.
  20. Click Configure Elastic.
  21. If you’re prompted to provide a verification code, copy and paste in the six-digit code that was returned by the status command. Then, wait for the setup to complete.
  22. When you see the Welcome to Elastic page, enter elastic as the username and provide the password that you copied in Step 1, from the install command output when you set up your first Elasticsearch node.
  23. Click Log in.

Kibana is now fully set up and communicating with your Elasticsearch cluster!

IMPORTANT: Stop here if you intend to configure SSL certificates.

For simplicity, in this tutorial we’re setting up all of the Elastic Stack components without configuring security certificates. You can proceed to configure Fleet, Elastic Agent, and then confirm that your system data appears in Kibana.

However, in a production environment, before going further to install Fleet Server and Elastic Agent it’s recommended to update your security settings to use trusted CA-signed certificates as described in Tutorial 2: Securing a self-managed Elastic Stack.

After new security certificates are configured any Elastic Agents would need to be reinstalled. If you’re currently setting up a production environment, we recommend that you jump directly to Tutorial 2, which includes steps to secure the Elastic Stack using certificates and then to set up Fleet and Elastic Agent with those certificates already in place.

Step 7: Install Fleet Server

Now that Kibana is up and running, you can install Fleet Server, which will manage the Elastic Agent that you’ll set up in a later step. If you need more detail about these steps, refer to Deploy on-premises and self-managed in the Fleet and Elastic Agent Guide.

  1. Log in to the host where you’d like to set up Fleet Server.
  2. Create a working directory for the installation package:

    mkdir fleet-install-files
  3. Change into the new directory:

    cd fleet-install-files
  4. In the terminal, run ifconfig and copy the value for the host inet IP address (for example, 10.128.0.84). You’ll need this value later.
  5. Back in your web browser, open the Kibana menu and go to Management → Fleet. Fleet opens with a message that you need to add a Fleet Server.
  6. Click Add Fleet Server. The Add a Fleet Server flyout opens.
  7. In the flyout, select the Quick Start tab.
  8. Specify a name for your Fleet Server host, for example Fleet Server.
  9. Specify the host URL where Elastic Agents will reach Fleet Server, for example: http://10.128.0.203:8220. This is the inet value that you copied from the ifconfig output.

    Be sure to include the port number. Port 8220 is the default used by Fleet Server in an on-premises environment. Refer to Default port assignments in the on-premises Fleet Server install documentation for a list of port assignments.

  10. Click Generate Fleet Server policy. A policy is created that contains all of the configuration settings for the Fleet Server instance.
  11. On the Install Fleet Server to a centralized host step, for this example we select the Linux Tar tab, but you can instead select the tab appropriate to the host operating system where you’re setting up Fleet Server.

    Note that TAR/ZIP packages are recommended over RPM/DEB system packages, since only the former support upgrading Fleet Server using Fleet.

  12. Copy the generated commands and then run them one-by-one in the terminal on your Fleet Server host.

    These commands will, respectively:

    1. Download the Fleet Server package from the Elastic Artifact Registry.
    2. Unpack the package archive.
    3. Change into the directory containing the install binaries.
    4. Install Fleet Server.

      If you’d like to learn about the install command options, refer to elastic-agent install in the Elastic Agent command reference.

  13. At the prompt, enter Y to install Elastic Agent and run it as a service. Wait for the installation to complete.
  14. In the Kibana Add a Fleet Server flyout, wait for confirmation that Fleet Server has connected.
  15. For now, ignore the Continue enrolling Elastic Agent option and close the flyout.

Fleet Server is now fully set up!
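For reference, the Kibana-generated Linux Tar commands follow the general pattern sketched below. The exact download URL, Elasticsearch address, service token, and policy ID come from Kibana and will differ in your environment, so always prefer the commands that Kibana generates; the token and policy values here are placeholders:

```shell
# Sketch only: the Kibana-generated Fleet Server commands follow this pattern.
# <service-token> and <policy-id> are placeholders; use the values from Kibana.
curl -L -O https://artifacts.elastic.co/downloads/beats/elastic-agent/elastic-agent-9.0.0-beta1-linux-x86_64.tar.gz
tar xzvf elastic-agent-9.0.0-beta1-linux-x86_64.tar.gz
cd elastic-agent-9.0.0-beta1-linux-x86_64
sudo ./elastic-agent install \
  --fleet-server-es=https://10.128.0.84:9200 \
  --fleet-server-service-token=<service-token> \
  --fleet-server-policy=<policy-id>
```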

Step 8: Install Elastic Agent

Next, you’ll install Elastic Agent on another host and use the System integration to monitor system logs and metrics.

  1. Log in to the host where you’d like to set up Elastic Agent.
  2. Create a working directory for the installation package:

    mkdir agent-install-files
  3. Change into the new directory:

    cd agent-install-files
  4. Open Kibana and go to Management → Fleet.
  5. On the Agents tab, you should see your new Fleet Server policy running with a healthy status.
  6. Open the Settings tab.
  7. Reopen the Agents tab and select Add agent. The Add agent flyout opens.
  8. In the flyout, choose a policy name, for example Demo Agent Policy.
  9. Leave Collect system logs and metrics enabled. This will add the System integration to the Elastic Agent policy.
  10. Click Create policy.
  11. For the Enroll in Fleet? step, leave Enroll in Fleet selected.
  12. On the Install Elastic Agent on your host step, for this example we select the Linux Tar tab, but you can instead select the tab appropriate to the host operating system where you’re setting up Elastic Agent.

    As with Fleet Server, note that TAR/ZIP packages are recommended over RPM/DEB system packages, since only the former support upgrading Elastic Agent using Fleet.

  13. Copy the generated commands.
  14. In the sudo ./elastic-agent install command, make two changes:

    1. For the --url parameter, check that the port number is set to 8220 (used for on-premises Fleet Server).
    2. Append an --insecure flag at the end.

      If you want to set up secure communications using SSL certificates, refer to Tutorial 2: Securing a self-managed Elastic Stack.

      The result should be like the following:

      sudo ./elastic-agent install --url=https://10.128.0.203:8220 --enrollment-token=VWCobFhKd0JuUnppVYQxX0VKV5E6UmU3BGk0ck9RM2HzbWEmcS4Bc1YUUM== --insecure
  15. Run the commands one-by-one in the terminal on your Elastic Agent host. The commands will, respectively:

    1. Download the Elastic Agent package from the Elastic Artifact Registry.
    2. Unpack the package archive.
    3. Change into the directory containing the install binaries.
    4. Install Elastic Agent.
  16. At the prompt, enter Y to install Elastic Agent and run it as a service. Wait for the installation to complete.

    If everything goes well, the install will complete successfully:

    Elastic Agent has been successfully installed.
  17. In the Kibana Add agent flyout, wait for confirmation that Elastic Agent has connected.
  18. Close the flyout.

Your new Elastic Agent is now installed and enrolled with Fleet Server.

Step 9: View your system data

Now that all of the components have been installed, it’s time to view your system data.

View your system log data:

  1. Open the Kibana menu and go to Analytics → Dashboard.
  2. In the query field, search for Logs System.
  3. Select the [Logs System] Syslog dashboard link. The Kibana Dashboard opens with visualizations of Syslog events, hostnames and processes, and more.

View your system metrics data:

  1. Open the Kibana menu and return to Analytics → Dashboard.
  2. In the query field, search for Metrics System.
  3. Select the [Metrics System] Host overview link. The Kibana Dashboard opens with visualizations of host metrics including CPU usage, memory usage, running processes, and others.

    The System metrics host overview showing CPU usage, memory usage, and other visualizations

Congratulations! You’ve successfully set up a three-node Elasticsearch cluster, with Kibana, Fleet Server, and Elastic Agent.

Next steps

Now that you’ve successfully configured an on-premises Elastic Stack, you can learn how to configure the Elastic Stack in a production environment using trusted CA-signed certificates. Refer to Tutorial 2: Securing a self-managed Elastic Stack to learn more.

You can also start using your newly set up Elastic Stack right away:

  • Do you have data ready to ingest? Learn how to add data to Elasticsearch.
  • Use Elastic Observability to unify your logs, infrastructure metrics, uptime, and application performance data.
  • Want to protect your endpoints from security threats? Try Elastic Security. Adding endpoint protection is just another integration that you add to the agent policy!

Tutorial 2: Securing a self-managed Elastic Stack

This tutorial is a follow-on to Tutorial 1: Installing a self-managed Elastic Stack. The first tutorial describes how to configure a multi-node Elasticsearch cluster and then set up Kibana, followed by Fleet Server and Elastic Agent. In a production environment, it’s recommended after completing the Kibana setup to proceed directly to this tutorial to configure your SSL certificates. These steps guide you through that process, and then describe how to configure Fleet Server and Elastic Agent with the certificates in place.

Securing the Elastic Stack

Beginning with Elastic 8.0, security is enabled in the Elastic Stack by default, meaning that traffic between Elasticsearch nodes and between Kibana and Elasticsearch is SSL-encrypted. While this is suitable for testing and other non-production use, most production networks require the use of trusted CA-signed certificates. These steps demonstrate how to replace the out-of-the-box self-signed certificates with your own trusted CA-signed certificates.

For traffic to be encrypted between Elasticsearch cluster nodes and between Kibana and Elasticsearch, SSL certificates must be created for the transport (Elasticsearch inter-node communication) and HTTP (Elasticsearch REST API) layers. Similarly, when setting up Fleet Server you’ll generate and configure a new certificate bundle, and Elastic Agent then uses the generated certificates to communicate with both Fleet Server and Elasticsearch. The steps below walk you through this process.

It should take between one and two hours to complete these steps.

Prerequisites and assumptions

Before starting, you’ll need to have set up an on-premises Elasticsearch cluster with Kibana, following the steps in Tutorial 1: Installing a self-managed Elastic Stack.

The examples in this guide use RPM packages to install the Elastic Stack components on hosts running Red Hat Enterprise Linux 8. The steps for other install methods and operating systems are similar, and can be found in the documentation linked from each section.

Special considerations such as firewalls and proxy servers are not covered here.

Step 1: Generate a new self-signed CA certificate

In a production environment you would typically use the CA certificate from your own organization, along with the certificate files generated for the hosts where the Elastic Stack is being installed. For demonstration purposes, we’ll use the Elastic certificate utility to configure a self-signed CA certificate.

  1. On the first node in your Elasticsearch cluster, stop the Elasticsearch service:

    sudo systemctl stop elasticsearch.service
  2. Generate a CA certificate using the provided certificate utility, elasticsearch-certutil. Note that the location of the utility depends on the installation method you used to install Elasticsearch. Refer to elasticsearch-certutil for the command details and to Update security certificates with a different CA for details about the procedure as a whole.

    Run the following command. When prompted, specify a unique name for the output file, such as elastic-stack-ca.zip:

    sudo /usr/share/elasticsearch/bin/elasticsearch-certutil ca -pem
  3. Move the output file to the /etc/elasticsearch/certs directory. This directory is created automatically when you install Elasticsearch.

    sudo mv /usr/share/elasticsearch/elastic-stack-ca.zip /etc/elasticsearch/certs/
  4. Unzip the file:

    sudo unzip -d /etc/elasticsearch/certs /etc/elasticsearch/certs/elastic-stack-ca.zip
  5. View the files that were unpacked into a new ca directory:

    sudo ls /etc/elasticsearch/certs/ca/
    ca.crt
    The generated certificate (or you can substitute this with your own certificate, signed by your organization's certificate authority)
    ca.key
    The certificate authority’s private key

    These steps to generate new self-signed CA certificates need to be done only on the first Elasticsearch node. The other Elasticsearch nodes will use the same ca.crt and ca.key files.

  6. From the /etc/elasticsearch/certs/ca/ directory, import the newly created CA certificate into the Elasticsearch truststore. This step ensures that your cluster trusts the new CA certificate.

    On a new installation a new keystore and truststore are created automatically. If you’re running these steps on an existing Elasticsearch installation and you know the password to the keystore and the truststore, follow the instructions in Update security certificates with a different CA to import the CA certificate.

    Run the keytool command as shown, replacing <password> with a unique password for the truststore, and store the password securely:

    sudo /usr/share/elasticsearch/jdk/bin/keytool -importcert -trustcacerts -noprompt -keystore /etc/elasticsearch/certs/elastic-stack-ca.p12 -storepass <password> -alias new-ca -file /etc/elasticsearch/certs/ca/ca.crt
  7. Ensure that the new key was added to the keystore:

    sudo /usr/share/elasticsearch/jdk/bin/keytool -keystore /etc/elasticsearch/certs/elastic-stack-ca.p12 -list

    The keytool utility is provided as part of the Elasticsearch installation and is located at: /usr/share/elasticsearch/jdk/bin/keytool on RPM installations.

    Enter your password when prompted. The result should show the details for your newly added key:

    Keystore type: jks
    Keystore provider: SUN
    Your keystore contains 1 entry
    new-ca, Jul 12, 2023, trustedCertEntry,
    Certificate fingerprint (SHA-256): F0:86:6B:57:FC...
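If you want to inspect the CA you just generated (or one supplied by your organization) before distributing it, openssl can print its subject and expiry. This is a sketch using a throwaway self-signed CA, assuming openssl is installed and the demo-ca name is purely illustrative; against the real file you would point it at /etc/elasticsearch/certs/ca/ca.crt instead:

```shell
# Hypothetical demo: create a throwaway self-signed CA and inspect it.
# The same x509 inspection works on /etc/elasticsearch/certs/ca/ca.crt.
tmpdir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout "$tmpdir/ca.key" -out "$tmpdir/ca.crt" \
  -subj "/CN=demo-ca" -days 365 2>/dev/null

# Confirm the subject and expiry before distributing the CA to other nodes.
openssl x509 -in "$tmpdir/ca.crt" -noout -subject -enddate
rm -rf "$tmpdir"
```

Checking the expiry up front avoids distributing a CA that is about to lapse across every node in the cluster.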

Step 2: Generate a new certificate for the transport layer

This guide assumes the use of self-signed certificates, but the process to import CA-signed certificates is the same. If you’re using a CA provided by your organization, you need to generate Certificate Signing Requests (CSRs) and then use the signed certificates in this step. Once the certificates are generated, whether self-signed or CA-signed, the steps are the same.

  1. From the Elasticsearch installation directory, using the newly created CA certificate and private key, create a new certificate for your Elasticsearch node:

    sudo /usr/share/elasticsearch/bin/elasticsearch-certutil cert --ca-cert /etc/elasticsearch/certs/ca/ca.crt --ca-key /etc/elasticsearch/certs/ca/ca.key

    When prompted, choose an output file name (you can use the default elastic-certificates.p12) and a password for the certificate.

  2. Move the generated file to the /etc/elasticsearch/certs directory:

    sudo mv /usr/share/elasticsearch/elastic-certificates.p12 /etc/elasticsearch/certs/

    If you’re running these steps on a production cluster that already contains data:

    • In a cluster with multiple Elasticsearch nodes, before proceeding you first need to perform a Rolling restart beginning with the node where you’re updating the keystore. Stop at the Perform any needed changes step, and then proceed to the next step in this guide.
    • In a single node cluster, always stop Elasticsearch before proceeding.
  3. Because you’ve created a new truststore and keystore, you need to update the /etc/elasticsearch/elasticsearch.yml settings file with the new truststore and keystore filenames.

    Open the Elasticsearch configuration file in a text editor and adjust the following values to reflect the newly created keystore and truststore filenames and paths:

    xpack.security.transport.ssl:
       ...
       keystore.path: /etc/elasticsearch/certs/elastic-certificates.p12
       truststore.path: /etc/elasticsearch/certs/elastic-stack-ca.p12
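A mistyped path in elasticsearch.yml is a common reason a node fails to restart after a certificate change. As a sanity check before restarting, you can verify that each configured keystore and truststore file exists. This is a sketch of our own (the check_ssl_paths helper is not part of the Elastic tooling), demonstrated against a throwaway config file:

```shell
# Our own pre-restart check: confirm that every keystore/truststore path
# referenced in an elasticsearch.yml-style file exists on disk.
check_ssl_paths() {
  config="$1"
  # Pull the values of any keystore.path / truststore.path settings.
  grep -Eo '(keystore|truststore)\.path: *[^ ]+' "$config" |
  awk '{print $2}' |
  while read -r path; do
    if [ -f "$path" ]; then
      echo "OK: $path"
    else
      echo "MISSING: $path"
    fi
  done
}

# Demo with temporary files standing in for the real config and stores:
tmp=$(mktemp -d)
touch "$tmp/elastic-certificates.p12"
cat > "$tmp/elasticsearch.yml" <<EOF
xpack.security.transport.ssl:
   keystore.path: $tmp/elastic-certificates.p12
   truststore.path: $tmp/elastic-stack-ca.p12
EOF
check_ssl_paths "$tmp/elasticsearch.yml"   # one OK line, one MISSING line
rm -rf "$tmp"
```

On a real node you would run the helper against /etc/elasticsearch/elasticsearch.yml and expect only OK lines.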

Update the Elasticsearch keystore

Elasticsearch uses a separate keystore to hold the passwords of the keystores and truststores holding the CA and node certificates created in the previous steps. Access to this keystore is through the use of a utility called elasticsearch-keystore.

  1. From the Elasticsearch installation directory, list the contents of the existing keystore:

    sudo /usr/share/elasticsearch/bin/elasticsearch-keystore list

    The results should be like the following:

    keystore.seed
    xpack.security.http.ssl.keystore.secure_password
    xpack.security.transport.ssl.keystore.secure_password
    xpack.security.transport.ssl.truststore.secure_password

    Notice that there are entries for:

    • The transport.ssl.truststore that holds the CA certificate
    • The transport.ssl.keystore that holds the CA-signed certificates
    • The http.ssl.keystore for the HTTP layer

      These entries were created at installation and need to be replaced with the passwords to the newly-created truststore and keystores.

  2. Remove the existing keystore values for the default transport keystore and truststore:

    sudo /usr/share/elasticsearch/bin/elasticsearch-keystore remove xpack.security.transport.ssl.keystore.secure_password
    
    sudo /usr/share/elasticsearch/bin/elasticsearch-keystore remove xpack.security.transport.ssl.truststore.secure_password
  3. Update the elasticsearch-keystore with the passwords for the new keystore and truststore created in the previous steps. This ensures that Elasticsearch can read the new stores:

    sudo /usr/share/elasticsearch/bin/elasticsearch-keystore add xpack.security.transport.ssl.keystore.secure_password
    
    sudo /usr/share/elasticsearch/bin/elasticsearch-keystore add xpack.security.transport.ssl.truststore.secure_password

Step 3: Generate new certificate(s) for the HTTP layer

Now that communication between Elasticsearch nodes (the transport layer) has been secured with SSL certificates, the same must be done for the communications that use the REST API, including Kibana, clients, and any other components on the HTTP layer.

  1. Similar to the process for the transport layer, on the first node in your Elasticsearch cluster use the certificate utility to generate certificates for HTTP communications:

    sudo /usr/share/elasticsearch/bin/elasticsearch-certutil http

    Respond to the command prompts as follows:

    • When asked if you want to generate a CSR, enter n.
    • When asked if you want to use an existing CA, enter y.

      If you’re using your organization’s CA certificate, specify that certificate and key in the following two steps.

    • Provide the absolute path to your newly created CA certificate: /etc/elasticsearch/certs/ca/ca.crt.
    • Provide the absolute path to your newly created CA key: /etc/elasticsearch/certs/ca/ca.key.
    • Enter an expiration value for your certificate. You can enter the validity period in years, months, or days. For example, enter 1y for one year.
    • When asked if you want to generate one certificate per node, enter y.

      You’ll be guided through the creation of certificates for each node. Each certificate will have its own private key, and will be issued for a specific hostname or IP address.

      1. Enter the hostname for your first Elasticsearch node, for example mynode-es1.

        mynode-es1
      2. When prompted, confirm that the settings are correct.
      3. Add the network IP address that clients can use to connect to the first Elasticsearch node. This is the same value that’s described in Step 2 of Tutorial 1: Installing a self-managed Elastic Stack, for example 10.128.0.84:

        10.128.0.84
      4. When prompted, confirm that the settings are correct.
      5. When prompted, choose to generate additional certificates, and then repeat the previous steps to add hostname and IP settings for each node in your Elasticsearch cluster.
      6. Provide a password for the generated http.p12 keystore file.
      7. The generated files will be included in a zip archive. At the prompt, provide a path and filename for where the archive should be created.

        For this example we use: /etc/elasticsearch/certs/elasticsearch-ssl-http.zip:

        What filename should be used for the output zip file? [/usr/share/elasticsearch/elasticsearch-ssl-http.zip] /etc/elasticsearch/certs/elasticsearch-ssl-http.zip
  2. Earlier, when you generated the certificate for the transport layer, the default filename was elastic-certificates.p12. Now, for the HTTP layer, the default filename is http.p12, which matches the name of the HTTP layer certificate file created when the initial Elasticsearch cluster was first installed.

    To avoid a name collision, rename the existing http.p12 file to distinguish it from the newly created keystore:

    sudo mv /etc/elasticsearch/certs/http.p12 /etc/elasticsearch/certs/http-old.p12
  3. Unzip the generated elasticsearch-ssl-http.zip archive:

    sudo unzip -d /usr/share/elasticsearch/ /etc/elasticsearch/certs/elasticsearch-ssl-http.zip
  4. When the archive is unpacked, the certificate files are located in separate directories for each Elasticsearch node and for the Kibana node.

    You can run a recursive ls command to view the file structure:

    ls -lR /usr/share/elasticsearch/{elasticsearch,kibana}
    elasticsearch:
    total 0
    drwxr-xr-x. 2 root root 56 Dec 12 19:13 mynode-es1
    drwxr-xr-x. 2 root root 72 Dec 12 19:04 mynode-es2
    drwxr-xr-x. 2 root root 72 Dec 12 19:04 mynode-es3
    
    elasticsearch/mynode-es1:
    total 8
    -rw-r--r--. 1 root root 1365 Dec 12 19:04 README.txt
    -rw-r--r--. 1 root root  845 Dec 12 19:04 sample-elasticsearch.yml
    
    elasticsearch/mynode-es2:
    total 12
    -rw-r--r--. 1 root root 3652 Dec 12 19:04 http.p12
    -rw-r--r--. 1 root root 1365 Dec 12 19:04 README.txt
    -rw-r--r--. 1 root root  845 Dec 12 19:04 sample-elasticsearch.yml
    
    elasticsearch/mynode-es3:
    total 12
    -rw-r--r--. 1 root root 3652 Dec 12 19:04 http.p12
    -rw-r--r--. 1 root root 1365 Dec 12 19:04 README.txt
    -rw-r--r--. 1 root root  845 Dec 12 19:04 sample-elasticsearch.yml
    
    kibana:
    total 12
    -rw-r--r--. 1 root root 1200 Dec 12 19:04 elasticsearch-ca.pem
    -rw-r--r--. 1 root root 1306 Dec 12 19:04 README.txt
    -rw-r--r--. 1 root root 1052 Dec 12 19:04 sample-kibana.yml
  5. Replace your existing keystore with the new keystore. The location of your certificate directory may be different than what is shown here, depending on the installation method you chose.

    Run the mv command, replacing <es1-hostname> with the hostname of your initial Elasticsearch node:

    sudo mv /usr/share/elasticsearch/elasticsearch/<es1-hostname>/http.p12 /etc/elasticsearch/certs/
  6. Because this is a new keystore, the Elasticsearch configuration file needs to be updated with the path to its location. Open /etc/elasticsearch/elasticsearch.yml and update the HTTP SSL settings with the new path:

    xpack.security.http.ssl:
      enabled: true
      #keystore.path: certs/http.p12
      keystore.path: /etc/elasticsearch/certs/http.p12
  7. Since you also generated a new keystore password, the Elasticsearch keystore needs to be updated as well. From the Elasticsearch installation directory, first remove the existing HTTP keystore entry:

    sudo /usr/share/elasticsearch/bin/elasticsearch-keystore remove xpack.security.http.ssl.keystore.secure_password
  8. Add the updated HTTP keystore password, using the password you generated for this keystore:

    sudo /usr/share/elasticsearch/bin/elasticsearch-keystore add xpack.security.http.ssl.keystore.secure_password
  9. Because we’ve added files to the Elasticsearch configuration directory during this tutorial, we need to ensure that the permissions and ownership are correct before restarting Elasticsearch.

    1. Change the files to be owned by root:elasticsearch:

      sudo chown -R root:elasticsearch /etc/elasticsearch/certs/
    2. Set the files in /etc/elasticsearch/certs to have read and write permissions for the owner (root) and read permission for the elasticsearch group:

      sudo chmod 640 /etc/elasticsearch/certs/elastic-certificates.p12
      sudo chmod 640 /etc/elasticsearch/certs/elastic-stack-ca.p12
      sudo chmod 640 /etc/elasticsearch/certs/http_ca.crt
      sudo chmod 640 /etc/elasticsearch/certs/http.p12
    3. Change the /etc/elasticsearch/certs and /etc/elasticsearch/certs/ca directories to be executable by the owner:

      sudo chmod 750 /etc/elasticsearch/certs
      sudo chmod 750 /etc/elasticsearch/certs/ca
  10. Restart the Elasticsearch service:

    sudo systemctl start elasticsearch.service
  11. Run the status command to confirm that Elasticsearch is running:

    sudo systemctl status elasticsearch.service

    In the event of any problems, you can also monitor the Elasticsearch logs for any issues by tailing the Elasticsearch log file:

    sudo tail -f /var/log/elasticsearch/elasticsearch-demo.log

    A line in the log file like the following indicates that SSL has been properly configured:

    [2023-07-12T13:11:29,154][INFO ][o.e.x.s.Security         ] [es-ssl-test] Security is enabled
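The ownership and permission scheme applied above (directories 750, keystores 640) can be verified with stat before restarting. Here is a sketch on a scratch directory; in the real deployment the files live in /etc/elasticsearch/certs, owned by root with group elasticsearch, which requires the chown shown earlier:

```shell
# Demo of the target permission layout on a scratch directory.
# In the real deployment the owner is root and the group is elasticsearch.
certs=$(mktemp -d)
mkdir "$certs/ca"
touch "$certs/elastic-certificates.p12" "$certs/http.p12"

chmod 750 "$certs" "$certs/ca"     # directories: owner rwx, group rx
chmod 640 "$certs"/*.p12           # keystores: owner rw, group r

# Verify: directories report 750, keystore files report 640.
stat -c '%a %n' "$certs" "$certs/ca" "$certs"/*.p12
rm -rf "$certs"
```

If Elasticsearch fails to start with an "access denied" error on a certificate file, this is the first thing to re-check.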

Step 4: Configure security on additional Elasticsearch nodes

Now that security is configured for the first Elasticsearch node, some steps need to be repeated on each additional Elasticsearch node.

  1. To avoid filename collisions, on each additional Elasticsearch node rename the existing http.p12 file in the /etc/elasticsearch/certs/ directory:

    sudo mv /etc/elasticsearch/certs/http.p12 /etc/elasticsearch/certs/http-old.p12
  2. Copy the CA and truststore files that you generated on the first Elasticsearch node so that they can be reused on all other nodes:

    • Copy the /ca directory (that contains ca.crt and ca.key) from /etc/elasticsearch/certs/ on the first Elasticsearch node to the same path on all other Elasticsearch nodes.
    • Copy the elastic-stack-ca.p12 file from /etc/elasticsearch/certs/ to the /etc/elasticsearch/certs/ directory on all other Elasticsearch nodes.
    • Copy the http.p12 file from each node directory in /usr/share/elasticsearch/elasticsearch (that is, elasticsearch/mynode-es1, elasticsearch/mynode-es2 and elasticsearch/mynode-es3) to the /etc/elasticsearch/certs/ directory on each corresponding cluster node.
  3. On each Elasticsearch node, repeat the steps to generate a new certificate for the transport layer:

    1. Stop the Elasticsearch service:

      sudo systemctl stop elasticsearch.service
    2. From the /etc/elasticsearch/certs directory, create a new certificate for the Elasticsearch node:

      sudo /usr/share/elasticsearch/bin/elasticsearch-certutil cert --ca-cert /etc/elasticsearch/certs/ca/ca.crt --ca-key /etc/elasticsearch/certs/ca/ca.key

      When prompted, choose an output file name and specify a password for the certificate. For this example, we’ll use /etc/elasticsearch/certs/elastic-certificates.p12.

    3. Update the /etc/elasticsearch/elasticsearch.yml settings file with the new truststore and keystore filename and path:

      xpack.security.transport.ssl:
         ...
         keystore.path: /etc/elasticsearch/certs/elastic-certificates.p12
         truststore.path: /etc/elasticsearch/certs/elastic-stack-ca.p12
    4. List the contents of the Elasticsearch keystore:

      sudo /usr/share/elasticsearch/bin/elasticsearch-keystore list

      The results should be like the following:

      keystore.seed
      xpack.security.http.ssl.keystore.secure_password
      xpack.security.transport.ssl.keystore.secure_password
      xpack.security.transport.ssl.truststore.secure_password
    5. Remove the existing keystore values for the default transport keystore and truststore:

      sudo /usr/share/elasticsearch/bin/elasticsearch-keystore remove xpack.security.transport.ssl.keystore.secure_password
      
      sudo /usr/share/elasticsearch/bin/elasticsearch-keystore remove xpack.security.transport.ssl.truststore.secure_password
    6. Update the elasticsearch-keystore with the passwords for the new keystore and truststore:

      sudo /usr/share/elasticsearch/bin/elasticsearch-keystore add xpack.security.transport.ssl.keystore.secure_password
      
      sudo /usr/share/elasticsearch/bin/elasticsearch-keystore add xpack.security.transport.ssl.truststore.secure_password
  4. For the HTTP layer, the certificates have been generated already on the first Elasticsearch node. Each additional Elasticsearch node just needs to be configured to use the new certificates.

    1. Update the /etc/elasticsearch/elasticsearch.yml settings file with the new truststore and keystore filenames:

      xpack.security.http.ssl:
        enabled: true
        #keystore.path: certs/http.p12
        keystore.path: /etc/elasticsearch/certs/http.p12
    2. Remove the existing HTTP keystore entry:

      sudo /usr/share/elasticsearch/bin/elasticsearch-keystore remove xpack.security.http.ssl.keystore.secure_password
    3. Add the updated HTTP keystore password:

      sudo /usr/share/elasticsearch/bin/elasticsearch-keystore add xpack.security.http.ssl.keystore.secure_password
    4. Change the certificate files to be owned by the root:elasticsearch group:

      sudo chown -R root:elasticsearch /etc/elasticsearch/certs/
    5. Set the files in /etc/elasticsearch/certs to have read and write permissions for the owner (root) and read permission for the elasticsearch group:

      sudo chmod 640 *
    6. Change the /etc/elasticsearch/certs and /etc/elasticsearch/certs/ca directories to be executable by the owner:

      sudo chmod 750 /etc/elasticsearch/certs
      sudo chmod 750 /etc/elasticsearch/certs/ca
  5. Restart the Elasticsearch service.

    sudo systemctl start elasticsearch.service
  6. Run the status command to confirm that Elasticsearch is running.

    sudo systemctl status elasticsearch.service
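Copying the shared CA directory, the truststore, and each node's http.p12 (step 2 above) can be scripted. This is a hedged dry-run sketch assuming the example hostnames mynode-es2 and mynode-es3 from this tutorial and root SSH access between nodes; the echo makes it print the commands instead of running them, so remove echo to actually copy:

```shell
# Dry-run sketch of distributing certificate files from the first node
# to the rest of the cluster. Hostnames are the tutorial's example names.
# Remove "echo" to execute the copies for real.
nodes="mynode-es2 mynode-es3"
for node in $nodes; do
  echo scp -r /etc/elasticsearch/certs/ca "root@$node:/etc/elasticsearch/certs/"
  echo scp /etc/elasticsearch/certs/elastic-stack-ca.p12 "root@$node:/etc/elasticsearch/certs/"
  echo scp "/usr/share/elasticsearch/elasticsearch/$node/http.p12" "root@$node:/etc/elasticsearch/certs/"
done
```

Each node receives the same ca directory and truststore, but only its own http.p12, matching the per-node certificates generated in Step 3.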

Step 5: Generate server-side and client-side certificates for Kibana

Now that the transport and HTTP layers are configured with encryption using the new certificates, there are two more tasks that must be accomplished for end-to-end connectivity to Elasticsearch: Set up certificates for encryption between Kibana and Elasticsearch, and between the client browser and Kibana. For additional details about any of these steps, refer to Mutual TLS authentication between Kibana and Elasticsearch and Encrypt traffic between your browser and Kibana.

  1. In Step 3, when you generated a new certificate for the HTTP layer, the process created an archive elasticsearch-ssl-http.zip.

    From the kibana directory in the expanded archive, copy the elasticsearch-ca.pem CA certificate file to the Kibana host machine.

  2. On the Kibana host machine, copy elasticsearch-ca.pem to the Kibana configuration directory (depending on the installation method that you used, the location of the configuration directory may be different from what’s shown):

    sudo mv elasticsearch-ca.pem /etc/kibana/
  3. Stop the Kibana service:

    sudo systemctl stop kibana.service
  4. Update the /etc/kibana/kibana.yml settings file to reflect the location of the elasticsearch-ca.pem:

    elasticsearch.ssl.certificateAuthorities: [/etc/kibana/elasticsearch-ca.pem]
  5. Log in to the first Elasticsearch node and use the certificate utility to generate a certificate bundle for the Kibana server. This certificate will be used to encrypt the traffic between Kibana and the client’s browser. In the command, replace <DNS name> and <IP address> with the name and IP address of your Kibana server host:

    sudo /usr/share/elasticsearch/bin/elasticsearch-certutil cert --name kibana-server --ca-cert /etc/elasticsearch/certs/ca/ca.crt --ca-key /etc/elasticsearch/certs/ca/ca.key  --dns <DNS name> --ip <IP address> --pem

    When prompted, specify a unique name for the output file, such as kibana-cert-bundle.zip.

  6. Copy the generated archive over to your Kibana host and unpack it:

    sudo unzip kibana-cert-bundle.zip

    The unpacked archive will create a directory, kibana-server, containing the new Kibana key and certificate:

    ls -l kibana-server/
    total 8
    -rw-r--r--. 1 root root 1208 May  3 16:08 kibana-server.crt
    -rw-r--r--. 1 root root 1675 May  3 16:08 kibana-server.key
  7. Copy the certificate and key into /etc/kibana:

    sudo cp kibana-server.crt /etc/kibana/
    sudo cp kibana-server.key /etc/kibana/
  8. Update the permissions on the certificate files to ensure that they’re readable. From inside the /etc/kibana directory, run:

    sudo chmod 640 *.crt
    sudo chmod 640 *.key
  9. Open /etc/kibana/kibana.yml and make the following changes:

    server.ssl.certificate: /etc/kibana/kibana-server.crt
    server.ssl.key: /etc/kibana/kibana-server.key
    server.ssl.enabled: true

    Keep the file open for the next step.

  10. To ensure that Kibana sessions are not invalidated, set up an encryption key by assigning any string of 32 characters or longer to the xpack.security.encryptionKey setting (this string will be configured in kibana.yml and does not need to be remembered). To generate a random string, you can use the following bash commands:

    cat /dev/urandom | tr -dc 'a-z0-9' | fold -w 32 | head -n 1

    Using your own string or the output of the above command sequence, add the encryption key setting to /etc/kibana/kibana.yml:

    xpack.security.encryptionKey: previously_created_string

    Save and close the file.

  11. Restart the Kibana service:

    sudo systemctl start kibana.service
  12. Confirm that Kibana is running:

    sudo systemctl status kibana.service

    If everything is configured correctly, connection to Elasticsearch will be established and Kibana will start normally.

  13. You can also view the Kibana log file to gather more detail:

    tail -f /var/log/kibana/kibana.log

    In the log file you should find a Kibana is now available message.

  14. You should now have an end-to-end encrypted deployment with Elasticsearch and Kibana, with traffic encrypted between the cluster nodes, between Kibana and Elasticsearch, and over HTTPS between the browser and Kibana.

    Open a web browser to the external IP address of the Kibana host machine: https://<kibana-host-address>:5601. Note that the URL should use the https and not the http protocol.

  15. Log in using the elastic user and password that you configured in Step 1 of Tutorial 1: Installing a self-managed Elastic Stack.

Congratulations! You’ve successfully updated the SSL certificates between Elasticsearch and Kibana.
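The key-generation pipeline from step 10 draws random bytes, keeps only lowercase letters and digits, and cuts the first 32-character line. Here is a small wrapper of our own (the gen_key helper is not an Elastic tool) that also enforces the 32-character minimum before printing the setting:

```shell
# Generate a random 32-character key for xpack.security.encryptionKey
# and verify it meets Kibana's minimum length before using it.
gen_key() {
  tr -dc 'a-z0-9' < /dev/urandom | fold -w 32 | head -n 1
}

key=$(gen_key)
if [ "${#key}" -ge 32 ]; then
  echo "xpack.security.encryptionKey: $key"
else
  echo "generated key is too short" >&2
  exit 1
fi
```

The printed line can be pasted directly into /etc/kibana/kibana.yml; the key itself never needs to be remembered, only kept stable across Kibana restarts.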

Step 6: Install Fleet with SSL certificates configured

Now that Kibana is up and running, you can proceed to install Fleet Server, which will manage the Elastic Agent that we’ll set up in a later step.

If you’d like to learn more about these steps, refer to Deploy on-premises and self-managed in the Fleet and Elastic Agent Guide. You can find detailed steps to generate and configure certificates in Configure SSL/TLS for self-managed Fleet Servers.

  1. Log in to the first Elasticsearch node and use the certificate utility to generate a certificate bundle for Fleet Server. In the command, replace <DNS name> and <IP address> with the name and IP address of your Fleet Server host:

    sudo /usr/share/elasticsearch/bin/elasticsearch-certutil cert --name fleet-server --ca-cert /etc/elasticsearch/certs/ca/ca.crt --ca-key /etc/elasticsearch/certs/ca/ca.key  --dns <DNS name> --ip <IP address> --pem

    When prompted, specify a unique name for the output file, such as fleet-cert-bundle.zip.

  2. On your Fleet Server host, create a directory for the certificate files:

    sudo mkdir /etc/fleet
  3. Copy the generated archive over to your Fleet Server host and unpack it into /etc/fleet/:

    • /etc/fleet/fleet-server.crt
    • /etc/fleet/fleet-server.key
  4. From the first Elasticsearch node, copy the ca.crt file, and paste it into the /etc/fleet/ directory on the Fleet Server host. To help identify the file, we’ll also rename it to es-ca.crt:

    • /etc/fleet/es-ca.crt
  5. Update the permissions on the certificate files to ensure that they’re readable. From inside the /etc/fleet directory, run:

    sudo chmod 640 *.crt
    sudo chmod 640 *.key
  6. Now that the certificate files are in place, on the Fleet Server host create a working directory for the installation package:

    mkdir fleet-install-files
  7. Change into the new directory:

    cd fleet-install-files
  8. In the terminal, run the ifconfig command and copy the value for the host inet IP address (for example, 10.128.0.84). You’ll need this value later.
  9. Back in your web browser, open the Kibana menu and go to Management → Fleet. Fleet opens with a message that you need to add a Fleet Server.
  10. Click Add Fleet Server. The Add a Fleet Server flyout opens.
  11. In the flyout, select the Advanced tab.
  12. On the Create a policy for Fleet Server step, keep the default Fleet Server policy name and all advanced options at their defaults.

    Leave the option to collect system logs and metrics selected. Click Create policy. The policy takes a minute or so to create.

  13. On the Choose a deployment mode for security step, select the Production option. This enables you to provide your own certificates.
  14. On the Add your Fleet Server host step:

    1. Specify a name for your Fleet Server host, for example Fleet Server.
    2. Specify the host URL where Elastic Agents will reach Fleet Server, including the default port 8220. For example, https://10.128.0.203:8220.

      The URL is the inet value that you copied from the ifconfig output.

      For details about default port assignments, refer to Default port assignments in the on-premises Fleet Server install documentation.

    3. Click Add host.
  15. On the Generate a service token step, generate the token and save the output. The token will also be propagated automatically to the command to install Fleet Server.
  16. On the Install Fleet Server to a centralized host step, for this example we select the Linux Tar tab, but you can instead select the tab appropriate to the host operating system where you’re setting up Fleet Server.

    Note that TAR/ZIP packages are recommended over RPM/DEB system packages, since only the former support upgrading Fleet Server using Fleet.

  17. Run the first three commands one-by-one in the terminal on your Fleet Server host.

    These commands will, respectively:

    1. Download the Fleet Server package from the Elastic Artifact Registry.
    2. Unpack the package archive.
    3. Change into the directory containing the install binaries.
  18. Before running the provided elastic-agent install command, you’ll need to make a few changes:

    1. Update the paths to the correct file locations:

      • The Elasticsearch CA file (es-ca.crt)
      • The Fleet Server certificate (fleet-server.crt)
      • The Fleet Server key (fleet-server.key)
    2. The fleet-server-es-ca-trusted-fingerprint also needs to be updated. On any of your Elasticsearch hosts, run the following command to get the correct fingerprint to use:

      grep -v ^- /etc/elasticsearch/certs/ca/ca.crt | base64 -d | sha256sum

      Save the fingerprint value. You’ll need it in a later step.

      Replace the fleet-server-es-ca-trusted-fingerprint setting with the returned value. Your updated command should be similar to the following:

      sudo ./elastic-agent install -url=https://10.128.0.208:8220 \
        --fleet-server-es=https://10.128.0.84:9200 \
        --fleet-server-service-token=AAEAAWVsYXN0aWMvZmxlZXQtc2VydmPyL6Rva2VuLTE5OTg4NzAxOTM4NDU6X1I0Q1RrRHZTSWlyNHhkSXQwNEJoQQ \
        --fleet-server-policy=fleet-server-policy \
        --fleet-server-es-ca-trusted-fingerprint=92b51cf91e7fa311f8c84849224d448ca44824eb \
        --certificate-authorities=/etc/fleet/es-ca.crt \
        --fleet-server-cert=/etc/fleet/fleet-server.crt \
        --fleet-server-cert-key=/etc/fleet/fleet-server.key \
        --fleet-server-port=8220

      For details about all of the install command options, refer to elastic-agent install in the Elastic Agent command reference.

  19. After you’ve made the required updates, run the elastic-agent install command to install Fleet Server.

    When prompted, confirm that Elastic Agent should run as a service. If everything goes well, the install will complete successfully:

    Elastic Agent has been successfully installed.

    Wondering why the command refers to Elastic Agent rather than Fleet Server? Fleet Server is actually a subprocess that runs inside Elastic Agent with a special Fleet Server policy. Refer to What is Fleet Server to learn more.

  20. Return to the Kibana Add a Fleet Server flyout and wait for confirmation that Fleet Server has connected.
  21. Once the connection is confirmed, ignore the Continue enrolling Elastic Agent option and close the flyout.

Fleet Server is now fully set up!
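The fingerprint command used in step 18 works by stripping the PEM header and footer lines, base64-decoding the body back into raw DER bytes, and hashing those bytes with SHA-256. You can convince yourself of what the pipeline does with a synthetic PEM-style file; the payload below is arbitrary bytes, not a real certificate:

```shell
# Demonstrate what the fingerprint pipeline does, using a synthetic PEM
# wrapper around arbitrary bytes (not a real certificate).
tmp=$(mktemp -d)
printf 'not-a-real-der-blob' > "$tmp/payload"

{ echo '-----BEGIN CERTIFICATE-----'
  base64 "$tmp/payload"
  echo '-----END CERTIFICATE-----'
} > "$tmp/fake.pem"

# Strip header/footer lines, decode the base64 body, hash the raw bytes.
pem_fp=$(grep -v '^-' "$tmp/fake.pem" | base64 -d | sha256sum | awk '{print $1}')
raw_fp=$(sha256sum "$tmp/payload" | awk '{print $1}')

[ "$pem_fp" = "$raw_fp" ] && echo "pipeline matches raw SHA-256: $pem_fp"
rm -rf "$tmp"
```

Run against the real /etc/elasticsearch/certs/ca/ca.crt, the same pipeline yields the CA fingerprint that Fleet Server and kibana.yml expect.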

Before proceeding to install Elastic Agent, there are a few steps needed to update the kibana.yml settings file with the Elasticsearch CA fingerprint:

  1. On your Kibana host, stop the Kibana service:

    sudo systemctl stop kibana.service
  2. Open /etc/kibana/kibana.yml for editing.
  3. Find the xpack.fleet.outputs setting.
  4. Update ca_trusted_fingerprint to the value you captured earlier, when you ran the grep command on the Elasticsearch ca.crt file.

    The updated entry in kibana.yml should be like the following:

    xpack.fleet.outputs: [{id: fleet-default-output, name: default, is_default: true, is_default_monitoring: true, type: elasticsearch, hosts: ['https://10.128.0.84:9200'], ca_trusted_fingerprint: 92b51cf91e7fa311f8c84849224d448ca44824eb}]
  5. Save your changes.
  6. Restart Kibana:

    sudo systemctl start kibana.service

    Kibana is now configured with the correct fingerprint for Elastic Agent to access Elasticsearch. You’re now ready to set up Elastic Agent!

Step 7: Install Elastic Agent

Next, we’ll install Elastic Agent on another host and use the System integration to monitor system logs and metrics. You can find additional details about these steps in Configure SSL/TLS for self-managed Fleet Servers.

  1. Log in to the host where you’d like to set up Elastic Agent.
  2. Create a directory for the Elasticsearch certificate file:

    sudo mkdir /etc/agent
  3. From the first Elasticsearch node, copy the ca.crt file, and paste it into the /etc/agent/ directory on the Elastic Agent host. To help identify the file, we’ll also rename it to es-ca.crt:

    • /etc/agent/es-ca.crt
  4. Create a working directory for the installation package:

    mkdir agent-install-files
  5. Change into the new directory:

    cd agent-install-files
  6. Open Kibana and go to Management → Fleet.
  7. On the Agents tab, you should see your new Fleet Server policy running with a healthy status.
  8. Click Add agent. The Add agent flyout opens.
  9. In the flyout, choose an agent policy name, for example Demo Agent Policy.
  10. Leave Collect system logs and metrics enabled. This will add the System integration to the Elastic Agent policy.
  11. Click Create policy.
  12. For the Enroll in Fleet? step, leave Enroll in Fleet selected.
  13. On the Install Elastic Agent on your host step, for this example we select the Linux Tar tab, but you can instead select the tab appropriate to the host operating system where you’re setting up Fleet Server.

    As with Fleet Server, note that TAR/ZIP packages are recommended over RPM/DEB system packages, since only the former support upgrading Elastic Agent using Fleet.

  14. Run the first three commands one-by-one in the terminal on your Elastic Agent host.

    These commands will, respectively:

    1. Download the Elastic Agent package from the Elastic Artifact Registry.
    2. Unpack the package archive.
    3. Change into the directory containing the install binaries.
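
    Taken together, the three commands typically look like the following sketch. The version string is an example; use the exact commands shown in the Kibana flyout, which match your stack version.

```shell
# Download the Elastic Agent tarball from the Elastic Artifact Registry
# (version shown is an example; match it to your stack version).
curl -L -O https://artifacts.elastic.co/downloads/beats/elastic-agent/elastic-agent-9.0.0-beta1-linux-x86_64.tar.gz

# Unpack the package archive.
tar xzvf elastic-agent-9.0.0-beta1-linux-x86_64.tar.gz

# Change into the directory containing the install binary.
cd elastic-agent-9.0.0-beta1-linux-x86_64
```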
  15. Before running the provided elastic-agent install command, you’ll need to make a few changes:

    1. For the --url parameter, confirm that the port number is 8220 (this is the default port for on-premises Fleet Server).
    2. Add a --certificate-authorities parameter with the full path of your CA certificate file. For example, --certificate-authorities=/etc/agent/es-ca.crt.

      The result should be like the following:

      sudo ./elastic-agent install \
      --url=https://10.128.0.203:8220 \
      --enrollment-token=VWCobFhKd0JuUnppVYQxX0VKV5E6UmU3BGk0ck9RM2HzbWEmcS4Bc1YUUM== \
      --certificate-authorities=/etc/agent/es-ca.crt
  16. Run the elastic-agent install command.

    At the prompt, enter Y to install Elastic Agent and run it as a service. Wait for the installation to complete.

    If everything goes well, the install will complete successfully:

    Elastic Agent has been successfully installed.
  17. In the Kibana Add agent flyout, wait for confirmation that Elastic Agent has connected.
  18. Wait for the Confirm incoming data step to complete. This may take a couple of minutes.
  19. Once data is confirmed to be flowing, close the flyout.
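
You can also verify the result from the agent host itself. These are standard checks rather than part of the tutorial's steps; exact output varies by version:

```shell
# Ask the locally installed Elastic Agent to report its own health.
sudo elastic-agent status

# Confirm the systemd service is active.
sudo systemctl status elastic-agent
```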

Your new Elastic Agent is now installed and enrolled with Fleet Server.

Step 8: View your system data

edit

Now that all of the components have been installed, it’s time to view your system data.

View your system log data:

  1. Open the Kibana menu and go to Analytics → Dashboard.
  2. In the query field, search for Logs System.
  3. Select the [Logs System] Syslog dashboard link. The Kibana Dashboard opens with visualizations of Syslog events, hostnames and processes, and more.

View your system metrics data:

  1. Open the Kibana menu and return to Analytics → Dashboard.
  2. In the query field, search for Metrics System.
  3. Select the [Metrics System] Host overview link. The Kibana Dashboard opens with visualizations of host metrics including CPU usage, memory usage, running processes, and more.

    The System metrics host overview showing CPU usage, memory usage, and other visualizations

Congratulations! You’ve successfully configured security for Elasticsearch, Kibana, Fleet, and Elastic Agent using your own trusted CA-signed certificates.

What’s next?

edit
  • Do you have data ready to ingest into your newly set up Elastic Stack? Learn how to add data to Elasticsearch.
  • Use Elastic Observability to unify your logs, infrastructure metrics, uptime, and application performance data.
  • Want to protect your endpoints from security threats? Try Elastic Security. Adding endpoint protection is just another integration that you add to the agent policy!