How to do a basic install of an ELK stack on Ubuntu for log stashing?

Install and configure Elasticsearch

Use a machine with at least 4GiB of RAM. This guide will install Elasticsearch, Logstash and Kibana all on the same machine, so this is suited only for a small-scale setup.

# apt-get install apt-transport-https software-properties-common wget
# add-apt-repository ppa:webupd8team/java
# apt-get update
# apt-get install oracle-java8-installer
# wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | apt-key add -
# echo "deb https://artifacts.elastic.co/packages/6.x/apt stable main" | tee -a /etc/apt/sources.list.d/elastic-6.x.list
# apt-get update
# apt-get install elasticsearch
# systemctl enable elasticsearch
# systemctl start elasticsearch

Then edit `/etc/elasticsearch/elasticsearch.yml` and do the following:

  • Set `node.name` to something descriptive
  • Set `cluster.name` to something unique to avoid issues with auto-discovery
  • Set `network.host` to “localhost”
  • Change `path.data` and `path.logs` if necessary
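
For example, the relevant part of the file could look like this (the cluster and node names here are placeholders, pick your own; the paths shown are the Debian package defaults):

cluster.name: my-elk-cluster
node.name: elk-node-1
network.host: localhost
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch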

Edit `/etc/elasticsearch/jvm.options` and set the heap size according to your machine RAM, for example (to set it to 2GiB):

-Xms2g
-Xmx2g

A setting of 1GiB is probably the minimum you should use.

Make sure you disable swapping.
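
Swapping can make the JVM heap unusably slow. A minimal way to disable it, assuming nothing else on the machine needs swap:

# swapoff -a

Also comment out any swap entries in /etc/fstab so the change survives a reboot (alternatively, set `bootstrap.memory_lock: true` in elasticsearch.yml to lock the heap in RAM). Once the configuration is done, restart Elasticsearch with `# systemctl restart elasticsearch` and check it responds on http://localhost:9200.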

Install and configure Kibana

Run this: `$ sudo apt-get install kibana`. Then edit `/etc/kibana/kibana.yml` and set `server.host` to “localhost”.

Then: `# systemctl enable kibana && systemctl start kibana`

Create a DNS entry for your Kibana host, and obtain an SSL certificate for it (the nginx configuration below assumes a Let's Encrypt certificate).
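
If you use Let's Encrypt with the dns-route53 plugin (covered in a later section), obtaining the certificate could look like this (YOURDOMAIN is a placeholder):

$ sudo certbot certonly --dns-route53 -d YOURDOMAIN

This creates the files under /etc/letsencrypt/live/YOURDOMAIN/ that the nginx configuration below refers to.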

Install nginx as reverse proxy with authentication:

# apt-get install nginx
# rm /etc/nginx/sites-enabled/default
# echo "admin:$(openssl passwd -apr1 PASSWORD)" | tee -a /etc/nginx/kibana.htpasswd

Then create an nginx site file, eg: `/etc/nginx/sites-available/kibana`, and make it look like this:

server {
    listen 80 default_server;
    server_name _;
    return 301 https://$host$request_uri;
}

server {
    listen 443 default_server ssl http2;
    server_name _;

    ssl_certificate /etc/letsencrypt/live/YOURDOMAIN/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/YOURDOMAIN/privkey.pem;

    auth_basic "My Kibana";
    auth_basic_user_file /etc/nginx/kibana.htpasswd;

    location / {
        proxy_pass http://localhost:5601;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}

Activate the site like so:

# ln -s /etc/nginx/sites-available/kibana /etc/nginx/sites-enabled/kibana
# nginx -t
# systemctl reload nginx
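
You can then sanity-check the proxy and the basic auth with curl (the domain and password are placeholders):

$ curl -I https://YOURDOMAIN/                      # expect 401 without credentials
$ curl -I -u admin:PASSWORD https://YOURDOMAIN/    # expect a response from Kibana (200 or a redirect)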

Install and configure Logstash

Just run: `$ sudo apt-get install logstash`

Edit `/etc/logstash/jvm.options` and set the heap size according to your machine RAM, for example (to set it to 2GiB):

-Xms2g
-Xmx2g

A setting of 1GiB is probably the minimum you should use.

Add a file `/etc/logstash/conf.d/GIVE_ME_A_NAME.conf` and edit it so it looks like this:

input {
    beats {
        port => "5044"
    }
}

filter {
    if [fields][log_type] == "apache-access" {
        grok {
            match => { "message" => "%{IPORHOST:vhost}:%{NUMBER:vhost_port} %{COMBINEDAPACHELOG}" }
        }
        geoip {
            source => "clientip"
        }
    } else if [fields][log_type] == "apache-error" {
       grok {
           match => { "message" => "%{IPORHOST:vhost} \[%{TIMESTAMP_ISO8601:timestamp}\] \[%{DATA:module}:%{LOGLEVEL}\] \[pid: %{POSINT:pid}:tid %{DATA:tid}\] \[OS error: %{DATA:oserror}\] \[client %{DATA:clientip}\] %{GREEDYDATA:error_message}" }
       }
       geoip {
           source => "clientip"
       }
    }
}

output {
    elasticsearch {
        hosts => [ "localhost:9200" ]
        index => "%{[fields][log_type]}-%{+YYYYMMdd}"
    }
}
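
Before restarting Logstash, you can check that the configuration parses; the -t flag makes Logstash test the config files and exit (the binary path is where the Debian package installs it):

# /usr/share/logstash/bin/logstash --path.settings /etc/logstash -t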

Then run: `# systemctl restart logstash`

NB: The grok patterns assume the following log formats for apache:

ErrorLogFormat "%-v [%{cu}t] [%-m:%l] [pid: %-P:tid %-T] [OS error: %-E] [client %-a] %M"
LogFormat "%v:%p %h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" vhost_combined

NB: Use a grok debugger to test your grok patterns (eg: the Grok Debugger in Kibana's Dev Tools, or one of the online ones).

Install and configure Filebeat

Filebeat goes on the machine where your service is running.

To install filebeat:

# wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | apt-key add -
# apt-get install apt-transport-https
# echo "deb https://artifacts.elastic.co/packages/6.x/apt stable main" | 
tee -a /etc/apt/sources.list.d/elastic-6.x.list
# apt-get update
# apt-get install filebeat

Then edit `/etc/filebeat/filebeat.yml` so it looks like this:

filebeat.inputs:

- type: log
  paths:
   - /var/log/apache2/access.log
  fields:
   log_type: apache-access

- type: log
  paths:
   - /var/log/apache2/error.log
  fields:
   log_type: apache-error

output.logstash:
 hosts: ["ELK_HOSTNAME_OR_IP:5044"]

#output.console:
# pretty: true
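
Before restarting, Filebeat can validate its own configuration and test the connection to Logstash:

# filebeat test config
# filebeat test output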

Then run `# systemctl restart filebeat`

How to install the dns-route53 plugin for certbot on Ubuntu?

There are no official instructions on how to install the dns-route53 plugin for certbot. Here is how to do it on Ubuntu.

To install certbot:

$ sudo apt-get update
$ sudo apt-get install software-properties-common
$ sudo apt-add-repository ppa:certbot/certbot
$ sudo apt-get update
$ sudo apt-get install certbot

To install the dns-route53 plugin:

$ sudo apt-get install python3-pip
$ sudo pip3 install certbot-dns-route53

You can then create a new certificate with something like this:

$ sudo certbot certonly --dns-route53 -d YOUR-DOMAIN.com --deploy-hook 'systemctl reload apache2'
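
The certbot package sets up automatic renewal (via cron or a systemd timer); you can rehearse a renewal without touching the real certificate like so:

$ sudo certbot renew --dry-run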

Are you puzzled about why ssh does not use your ssh-agent?

It could be that you are using the IdentityFile option combined with the IdentitiesOnly option. When IdentitiesOnly is set to yes, ssh will not try the keys held by the ssh-agent, but only the key that you specified with IdentityFile. Consequently, you have to enter the passphrase every time.

Instead, use the CertificateFile option (and specify the corresponding public key). This way, ssh will use the ssh agent properly.

Please note that in that case, you will need to load the private key into the ssh-agent first, eg: $ ssh-add /PATH/TO/PRIVATE/KEY. If you don’t, ssh will not find the private key in the ssh-agent, and it has no other way to know which private key to use.
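
As a sketch, a ~/.ssh/config entry following this approach could look like this (the host name and key path are examples):

Host example.com
    IdentitiesOnly yes
    CertificateFile ~/.ssh/id_rsa.pub

with the private key loaded once beforehand via $ ssh-add ~/.ssh/id_rsa.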

How to install a MySQL NDB Cluster on Ubuntu

Official documentation: see the MySQL Cluster chapter of the MySQL reference manual on dev.mysql.com.

Installing and configuring MySQL Cluster on Ubuntu

There are 3 different types of nodes in a MySQL Cluster: the NDB management node, the NDB data nodes, and the MySQL servers. Please note the standard MySQL server does not support the NDB storage engine, so the MySQL server must be one compiled with NDB support.

This page assumes the following configuration:

  • Only one NDB manager node is running, which is enough for a small setup like this. Please note the NDB manager does not have to be present for the cluster to work. It is used essentially once at startup, and thereafter when a node dies for some reason.
  • Only one MySQL server is running, and on the same droplet as the NDB manager. Having a single MySQL server is a single point of failure, but that’s good enough for now. It is possible to add more MySQL servers for redundancy and load balancing. Ideally, the MySQL servers should run on their own droplets.
  • Two NDB data nodes; that’s the minimum required to get an NDB cluster up and running. This will provide fault tolerance (in the sense that if one node dies, the cluster itself will continue running), but will not provide performance improvements.

APT repository

MySQL conveniently provides an apt repository for Ubuntu 14.04 and 16.04. Follow these steps on all 3 machines:

  • Go to http://dev.mysql.com/downloads/repo/apt/
  • Download the relevant mysql-apt-config .deb file
  • Install it like so: $ sudo dpkg -i mysql-apt-config*.deb
  • Select ‘MySQL Server & Cluster’
  • Select ‘mysql-cluster-7.5’ (currently the latest GA release)
  • Select ‘OK’ in the scroll down list
  • Run $ sudo apt-get update

Installing and configuring the NDB manager node

Provided you installed mysql-apt-config with the right options, just run:

$ sudo apt-get install mysql-cluster-community-management-server

Then create a directory /var/lib/mysql-cluster, and create a text file in that directory named config.ini. Edit this file so the content looks like this (replace the IP addresses with the real ones):

[ndb_mgmd]
HostName=10.132.156.195
DataDir=/var/lib/mysql-cluster

[ndbd]
HostName=10.132.156.202
NodeId=2
DataDir=/usr/local/mysql/data

[ndbd]
HostName=10.132.156.217
NodeId=3
DataDir=/usr/local/mysql/data

[mysqld]
HostName=10.132.156.195

You should be able to run the NDB manager like so (replace the IP address with the local private IP address):

$ sudo ndb_mgmd -f /var/lib/mysql-cluster/config.ini --bind-address=10.132.156.195

Important security note: The --bind-address option forces the NDB manager to bind to the specified address. In our case, we ensure it binds to the private IP only, which prevents anyone from the outside world from connecting to the NDB manager.

In order to get the NDB manager to start automatically, the simplest way is to add it to rc.local. Edit the /etc/rc.local file and add the line ndb_mgmd -f /var/lib/mysql-cluster/config.ini --bind-address=10.132.156.195 (before the exit 0 line if there is one). Reboot the machine and run ps -ef; you should see the ndb_mgmd process running.
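
As a sketch, the resulting /etc/rc.local could look like this:

#!/bin/sh -e
ndb_mgmd -f /var/lib/mysql-cluster/config.ini --bind-address=10.132.156.195
exit 0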

There is also a command-line utility to access the NDB manager (replace the IP address with the local private IP address):

$ sudo apt-get install mysql-cluster-community-client
$ sudo ndb_mgm 10.132.156.195
ndb_mgm> show
ndb_mgm> exit

NDB data nodes

This should be repeated for each data node.

Provided you installed mysql-apt-config with the right options, just run:

$ sudo apt-get install mysql-cluster-community-data-node

Edit the file /etc/my.cnf (create it if necessary), and make it look like:

[mysql_cluster]
ndb-connectstring=10.132.156.195

Replace the IP address with the private IP address of the droplet on which the NDB manager is installed.

You will also need to create the data directory: $ sudo mkdir -p /usr/local/mysql/data

Start the NDB data node daemon like so (replace the IP address with the local private IP address):

$ sudo ndbd --bind-address=10.132.156.202

Important security note: The --bind-address option forces the NDB data node to bind to the specified address. In our case, we ensure it binds to the private IP only, which prevents anyone from the outside world from connecting to the NDB data node.

In order to get the NDB data node to start automatically, the simplest way is to add it to rc.local. Edit the /etc/rc.local file and add the line ndbd --bind-address=10.132.156.202 (before the exit 0 line if there is one). Reboot the machine and run ps -ef; you should see the ndbd process running.

NB: If the machine the NDB data node is running on has 2 or more CPUs, use ndbmtd for better performance. This is the multi-threaded version of ndbd. Please note that you still need to enable multi-threading in the config file. Read the documentation for ndbmtd for more information.

MySQL server

This should be repeated for each MySQL server, if applicable.

Provided you installed mysql-apt-config with the right options, just run:

$ sudo apt-get install mysql-cluster-community-server

Edit the file /etc/mysql/mysql.conf.d/mysqld.cnf and do the following under the [mysqld] section:

  • Set the bind-address parameter to the private IP address of this local machine
  • Add the keyword ndbcluster
  • Add a section [mysql_cluster] if it does not exist
  • Under the section [mysql_cluster], set the ndb-connectstring parameter to the private IP address of the NDB manager node (in our example, the MySQL server and the NDB manager both run on the same machine)

The file should look similar to this:

[mysqld]
...
bind-address = 10.132.156.195
ndbcluster

[mysql_cluster]
ndb-connectstring = 10.132.156.195

Important security note: The bind-address option forces the MySQL server to bind to the specified address. In our case, we ensure it binds to the private IP only, which prevents anyone from the outside world from connecting to the MySQL server.

Restart the MySQL server so it can pick up the new configuration: $ sudo systemctl restart mysql

You can check everything looks good by starting ndb_mgm and issuing the command show. You should see that both data nodes and the MySQL server are connected.
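
To confirm that tables can actually be stored in the cluster, connect to the MySQL server and create a table with the NDB engine (the database and table names here are just examples):

mysql> CREATE DATABASE IF NOT EXISTS clustertest;
mysql> CREATE TABLE clustertest.t1 (id INT PRIMARY KEY) ENGINE=NDBCLUSTER;

If this succeeds, the MySQL server can reach the data nodes.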

DEPRECATED: Alternatively, MySQL Cluster can be downloaded manually from the MySQL downloads site (select the bundle .tar file).

How to loop over a role in Ansible?

Don’t list the role under `roles:`; call it from `tasks:` with the `include_role` module instead.

For example:

---
# roles/myrole/tasks/main.yml
- debug: var=bla

---
# playbook
- hosts: localhost
  connection: local
  gather_facts: no
  tasks:
   - name: Loop over role
     include_role:
      name: myrole
     vars:
      bla: "{{ item }}"
     with_items:
      - hello world
      - hi
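
Assuming the playbook is saved as playbook.yml, run it as usual; the role is included once per item, so the debug task prints "hello world" and then "hi":

$ ansible-playbook playbook.yml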