
How to Setup ELK Stack to Centralize Logs on Ubuntu 16.04


The ELK stack consists of Elasticsearch, Logstash, and Kibana, and is used to centralize data. ELK is mainly used for log analysis in IT environments. The ELK stack makes it easier and faster to search and analyze large volumes of data so you can make decisions in real time.

In this tutorial we will use the following versions of the ELK stack components:

Elasticsearch 2.3.4
Logstash 2.3.4
Kibana 4.5.3
Oracle Java version 1.8.0_91
Filebeat version 1.2.3 (amd64)

Before you start installing the ELK stack, check the LSB release of the Ubuntu server.

# lsb_release -a


1. Install Java

Elasticsearch and Logstash require Java, so we will install it first. We will install Oracle Java since Elasticsearch recommends it; however, it also works with OpenJDK.

Add the Oracle Java PPA to apt:

# sudo add-apt-repository -y ppa:webupd8team/java


Update the apt database.

# sudo apt-get update

Now install the latest stable version of Oracle Java 8 using the following command.

# sudo apt-get -y install oracle-java8-installer


Java 8 is now installed. Check the version of Java using the command java -version.
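The output should look similar to the following (a sketch assuming the 1.8.0_91 build listed above; your exact build strings may differ):

# java -version
java version "1.8.0_91"
Java(TM) SE Runtime Environment (build 1.8.0_91-b14)
Java HotSpot(TM) 64-Bit Server VM (build 25.91-b14, mixed mode)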


2. Install Elasticsearch

To install Elasticsearch, first import its public GPG key into the apt database. Run the following command to do so.

# wget -qO - https://packages.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -

Now create the Elasticsearch source list

# echo "deb http://packages.elastic.co/elasticsearch/2.x/debian stable main" | sudo tee -a /etc/apt/sources.list.d/elasticsearch-2.x.list


Update the apt database.

# sudo apt-get update

Now install Elasticsearch using the following command.

# sudo apt-get -y install elasticsearch


Next, edit the Elasticsearch configuration file.

# sudo vi /etc/elasticsearch/elasticsearch.yml

To restrict outside access to the Elasticsearch instance (port 9200), uncomment the line that says network.host and replace its value with localhost.

network.host: localhost


Now start Elasticsearch

# sudo service elasticsearch restart

To start Elasticsearch on boot up, execute the following command.

# sudo update-rc.d elasticsearch defaults 95 10

Test Elasticsearch using the following command.

# curl localhost:9200
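If Elasticsearch is running, it replies with a JSON document along these lines (abridged; the node name is randomly generated, so yours will differ):

{
  "name" : "Node-1",
  "cluster_name" : "elasticsearch",
  "version" : {
    "number" : "2.3.4",
    "lucene_version" : "5.5.0"
  },
  "tagline" : "You Know, for Search"
}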


3. Install logstash

We have already imported the Elastic public key, since Logstash and Elasticsearch come from the same repository. Download the Logstash Debian package and install it with dpkg.

# wget https://download.elastic.co/logstash/logstash/packages/debian/logstash_2.3.4-1_all.deb


# dpkg -i logstash_2.3.4-1_all.deb


# sudo update-rc.d logstash defaults 97 8

# sudo service logstash start

To check the status of logstash, execute the following command in the terminal.

# sudo service logstash status


You may find that logstash is active but that you cannot stop or restart it properly using the service or systemctl commands. In that case you have to configure the systemd logstash daemon script yourself. First, back up the logstash startup scripts inside /etc/init.d/ and /etc/systemd/system and remove them from there. Then install the "pleaserun" script from https://github.com/elastic/logstash/issues/3606. The prerequisite for installing this script is Ruby.

Install Ruby

# sudo apt install ruby


Now install the pleaserun gem.

# gem install pleaserun


You are now ready to create the systemd daemon file for logstash. Use the following command to do this.

# pleaserun -p systemd -v default --install /opt/logstash/bin/logstash agent -f /etc/logstash/logstash.conf

Now that systemd daemon for logstash has been created, start it and check the status of logstash.

# sudo systemctl start logstash

# sudo systemctl status logstash


4. Configure logstash

Let us now configure Logstash. The Logstash configuration files reside inside /etc/logstash/conf.d and are in JSON format. The configuration consists of three sections: inputs, filters, and outputs. First, create a directory for storing the certificate and key for logstash.

# mkdir -p /var/lib/logstash/private

# sudo chown logstash:logstash /var/lib/logstash/private

# sudo chmod go-rwx /var/lib/logstash/private


Now create a certificate and key for logstash.

# openssl req -config /etc/ssl/openssl.cnf -x509  -batch -nodes -newkey rsa:2048 -keyout /var/lib/logstash/private/logstash-forwarder.key -out /var/lib/logstash/private/logstash-forwarder.crt -subj /CN=172.31.13.29

Change /CN=172.31.13.29 to your server's private IP address. To avoid a “TLS handshake error”, add a subjectAltName entry to the [v3_ca] section of /etc/ssl/openssl.cnf before generating the certificate:

[ v3_ca ]
subjectAltName = IP:172.31.13.29
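To confirm the SAN made it into the generated certificate, you can inspect it with openssl (a quick check, assuming the paths used above):

# openssl x509 -in /var/lib/logstash/private/logstash-forwarder.crt -noout -text | grep -A1 "Subject Alternative Name"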


Keep in mind that we have to copy this certificate to every client whose logs you want to send to the ELK server through Filebeat.
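For example, you could push it to a client with scp (user and CLIENT_IP are placeholders; the destination matches the path our Filebeat configuration will reference later):

# scp /var/lib/logstash/private/logstash-forwarder.crt user@CLIENT_IP:/tmp/

Then, on the client:

# sudo mkdir -p /var/lib/logstash/private
# sudo mv /tmp/logstash-forwarder.crt /var/lib/logstash/private/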

Next, we will create the “filebeat” input configuration in a file named 02-beats-input.conf.

# sudo vi /etc/logstash/conf.d/02-beats-input.conf

input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/var/lib/logstash/private/logstash-forwarder.crt"
    ssl_key => "/var/lib/logstash/private/logstash-forwarder.key"
  }
}


Now we will create the “filebeat” filter in a file named 10-syslog-filter.conf to add a filter for syslog messages.

# sudo vi /etc/logstash/conf.d/10-syslog-filter.conf

filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}


Finally, we will create the “filebeat” output in a file named 30-elasticsearch-output.conf.

# sudo vi /etc/logstash/conf.d/30-elasticsearch-output.conf

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    sniffing => true
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}


Test your Logstash configuration with the following command.

# sudo service logstash configtest

It will display Configuration OK if there are no syntax errors; otherwise, check the Logstash log files in /var/log/logstash.


To test Logstash, execute the following command from the terminal.

# cd /opt/logstash/bin && ./logstash -f /etc/logstash/conf.d/02-beats-input.conf
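If the pipeline comes up cleanly, Logstash 2.x prints startup messages along these lines (the worker count depends on your CPUs):

Settings: Default pipeline workers: 4
Pipeline main started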

You will find that Logstash has started a pipeline and is processing the syslogs. Once you are sure that Logstash is processing the syslogs, combine 02-beats-input.conf, 10-syslog-filter.conf, and 30-elasticsearch-output.conf into a single file logstash.conf in the directory /etc/logstash/, as sketched below.
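One minimal way to combine them, assuming the three files created above (Logstash simply reads the input, filter, and output sections in order):

# cat /etc/logstash/conf.d/02-beats-input.conf \
    /etc/logstash/conf.d/10-syslog-filter.conf \
    /etc/logstash/conf.d/30-elasticsearch-output.conf \
    > /etc/logstash/logstash.conf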

Restart logstash to reload new configuration.

# sudo systemctl restart logstash

5. Install sample dashboards

Download the sample Kibana dashboards and Beats index patterns. We are not going to use these dashboards, but we will load them so that we can use the filebeat index pattern. Download the sample dashboards and unzip them.

# curl -L -O https://download.elastic.co/beats/dashboards/beats-dashboards-1.1.0.zip

# unzip beats-dashboards-1.1.0.zip


Load the sample dashboards, visualizations, and Beats index patterns into Elasticsearch using the following commands.

# cd beats-dashboards-1.1.0

# ./load.sh

You will find the following index patterns in the Kibana dashboard's left sidebar. We will use only the filebeat index pattern.

packetbeat-*
topbeat-*
filebeat-*
winlogbeat-*

Since we will use Filebeat to forward logs to Elasticsearch, we will load a Filebeat index template into Elasticsearch.

First, download the filebeat index template

# curl -O https://gist.githubusercontent.com/thisismitch/3429023e8438cc25b86c/raw/d8c479e2a1adcea8b1fe86570e42abab0f10f364/filebeat-index-template.json

Now load the template with the following curl command.

# curl -XPUT 'http://localhost:9200/_template/filebeat?pretty' -d@filebeat-index-template.json

If the template loaded properly, you should see a message like this:

Output:

{
  "acknowledged" : true
}


The ELK server is now ready to receive Filebeat data, so let's configure Filebeat on the client servers. For more information about loading Beats dashboards, see https://www.elastic.co/guide/en/beats/libbeat/current/load-kibana-dashboards.html

6. Install Filebeat on clients

Create the Beats source list on the clients whose logs you want to send to the ELK server, update the apt database, and install Filebeat using apt-get.

# echo "deb https://packages.elastic.co/beats/apt stable main" | sudo tee -a /etc/apt/sources.list.d/beats.list

# sudo apt-get update && sudo apt-get install filebeat


Start filebeat

# /etc/init.d/filebeat start


Now edit the file /etc/filebeat/filebeat.yml. Modify the existing prospector to send syslog to Logstash: in the paths section, comment out the - /var/log/*.log entry and add a new entry for syslog, - /var/log/syslog.


Next, specify that the logs in the prospector are of type syslog by setting document_type: syslog.


Uncomment the logstash: output section and the hosts: line, changing its value to ["SERVER_PRIVATE_IP:5044"], where SERVER_PRIVATE_IP is the private IP address or hostname of your ELK server. Then uncomment the line that says certificate_authorities and set its value to /var/lib/logstash/private/logstash-forwarder.crt, the certificate we created on the ELK server in step 4; remember, you must copy this certificate to every client machine.
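Taken together, the edited sections of /etc/filebeat/filebeat.yml should look roughly like this (the IP address is our ELK server's private address; yours will differ):

filebeat:
  prospectors:
    -
      paths:
        # - /var/log/*.log
        - /var/log/syslog
      input_type: log
      document_type: syslog

output:
  logstash:
    hosts: ["172.31.13.29:5044"]
    tls:
      certificate_authorities: ["/var/lib/logstash/private/logstash-forwarder.crt"]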

Restart filebeat and check its status.

# sudo /etc/init.d/filebeat restart

# sudo service filebeat status


To test Filebeat, execute the following command from the terminal.

# filebeat -c /etc/filebeat/filebeat.yml -e -v


Filebeat will now send the logs to Logstash for indexing. Enable Filebeat to start on every boot.

# sudo update-rc.d filebeat defaults 95 10

Now open your favorite browser and point it to http://ELK-SERVER-IP:5601 or http://ELK-SERVER-DOMAIN-NAME:5601; you will find the syslogs when you click filebeat-* in the left sidebar.

This is our final Filebeat configuration:

filebeat:
  prospectors:
    -
      paths:
        - /var/log/auth.log
        - /var/log/syslog
      input_type: log
      document_type: syslog
  registry_file: /var/lib/filebeat/registry

output:
  logstash:
    hosts: ["172.31.13.29:5044"]
    bulk_max_size: 1024
    tls:
      certificate_authorities: ["/var/lib/logstash/private/logstash-forwarder.crt"]

shipper:

logging:
  files:
    rotateeverybytes: 10485760


7. Configure firewall

Add firewall rules to allow traffic to the following ports.

For iptables users:

# sudo iptables -A INPUT -m state --state NEW -m tcp -p tcp --dport 5601 -j ACCEPT ( Kibana )

# sudo iptables -A INPUT -m state --state NEW -m tcp -p tcp --dport 9200 -j ACCEPT ( Elasticsearch )

# sudo iptables -A INPUT -m state --state NEW -m tcp -p tcp --dport 80 -j ACCEPT ( NGINX )

# sudo iptables -A INPUT -m state --state NEW -m tcp -p tcp --dport 5044 -j ACCEPT ( Filebeat )

Save the rules so they persist across reboots. Ubuntu does not ship an "iptables" service by default; the iptables-persistent package provides one.

# sudo apt-get install iptables-persistent

# sudo netfilter-persistent save

For UFW users:

# sudo ufw allow 5601/tcp

# sudo ufw allow 9200/tcp

# sudo ufw allow 80/tcp

# sudo ufw allow 5044/tcp

# sudo ufw reload
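You can verify that the rules are in place with:

# sudo ufw status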

8. Install/Configure Kibana

Download the latest Kibana from https://download.elastic.co/

# cd /opt

# wget https://download.elastic.co/kibana/kibana/kibana-4.5.3-linux-x64.tar.gz

# tar -xzf kibana-4.5.3-linux-x64.tar.gz

# mv kibana-4.5.3-linux-x64 kibana

# cd /opt/kibana/config

# vi kibana.yml

Now change these parameters in /opt/kibana/config/kibana.yml:

server.port: 5601
server.host: "0.0.0.0"
elasticsearch.url: "http://localhost:9200"


For testing purposes you can run Kibana using the following commands.

# cd /opt/kibana/bin

# ./kibana &

# netstat -pltn


Now we will create a systemd daemon for Kibana using “pleaserun”, in the same way we did for logstash.

# pleaserun -p systemd -v default --install /opt/kibana/bin/kibana -p 5601 -H 0.0.0.0 -e http://localhost:9200

where:
-p specifies the port that Kibana will bind to.
-H specifies the host IP address on which Kibana will listen.
-e specifies the Elasticsearch URL.

Start Kibana.

# systemctl start kibana

Check the status of kibana

# systemctl status kibana

Check whether port 5601 is now occupied by Kibana.

# netstat -pltn| grep '5601'
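If Kibana is listening, you should see a line similar to this (Kibana runs on Node.js, and the PID will differ on your machine):

tcp        0      0 0.0.0.0:5601            0.0.0.0:*               LISTEN      1234/node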


9. Install/Configure NGINX

To put Kibana behind basic authentication and HTTPS, we will set up NGINX as a reverse proxy in front of it. Install NGINX, Apache utils, and php-fpm using the following command.

# sudo apt-get install nginx apache2-utils php-fpm


Edit the php-fpm configuration file www.conf inside /etc/php/7.0/fpm/pool.d and set:

listen.allowed_clients = 127.0.0.1,172.31.13.29


Restart php-fpm

# sudo service php7.0-fpm restart

Using htpasswd, create an admin user named "kibana" to access the Kibana web interface.

# sudo htpasswd -c /etc/nginx/htpasswd.users kibana


Enter a password at the prompt. Remember this password; we will use it to access the Kibana web interface.

Now create a certificate for NGINX.

# sudo openssl req -x509 -batch -nodes -days 365 -newkey rsa:2048  -out /etc/ssl/certs/nginx.crt -keyout /etc/ssl/private/nginx.key -subj /CN=demohost.com


Edit the NGINX default server block.

# sudo vi /etc/nginx/sites-available/default

Delete the file's contents, and paste the following configuration into the file.

server_tokens off;
add_header X-Frame-Options SAMEORIGIN;
add_header X-Content-Type-Options nosniff;
add_header X-XSS-Protection "1; mode=block";

server {
    listen 443 ssl;

    auth_basic "Restricted Access";
    auth_basic_user_file /etc/nginx/htpasswd.users;
    ssl_certificate /etc/ssl/certs/nginx.crt;
    ssl_certificate_key /etc/ssl/private/nginx.key;

    location / {
        proxy_pass http://localhost:5601;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }

    add_header Strict-Transport-Security "max-age=31536000;";
}

server {
    listen 80;
    listen [::]:80 default_server ipv6only=on;
    return 301 https://$host$request_uri;
}


We are not using the server_name directive because we have configured our domain name as demohost.com in /etc/hosts and /etc/hostname. Also, since we have edited the NGINX default host (/etc/nginx/sites-available/default), demohost.com will be available in the browser once NGINX starts.
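Before restarting, you can verify the configuration syntax with NGINX's built-in check:

# sudo nginx -t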

Save and exit. From now on, NGINX will direct the server's HTTP traffic to the Kibana application on port 5601.

Now restart NGINX to put our changes into effect:

# sudo service nginx restart

Now you can access Kibana by visiting the FQDN or the public IP address of your ELK server, i.e. http://elk_server_public_ip/. Enter the "kibana" credentials that you created earlier; you will be redirected to the Kibana welcome page, which will ask you to configure an index pattern.


Click filebeat-* in the top left sidebar, and you will see the logs from the clients flowing into the dashboard.


You can also check the status of the ELK server from Kibana's status page.


Conclusion:

That's all for the ELK server. Install Filebeat on any number of client systems and ship their logs to the ELK server for analysis. To make unstructured log data more useful, parse it properly and structure it using grok. There are also a few awesome plugins available for use with Kibana to visualize the logs in a systematic way.


