Elasticsearch in garbage collection hell

After several weeks of intense testing, fixing configuration problems, re-indexing data, and working through problems with upgrading our Kibana indices, we managed to upgrade our 36 Kibana instances and our production Elasticsearch cluster from version 5.6.16 to 6.7.1 a couple of weeks ago.

We could hardly believe it, but we finally had over 1,300 indices, 100 TB of data, 102,000,000,000 documents and 18 Elasticsearch nodes running the latest version of the Elastic 6.x series at the time.
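
After a rolling upgrade like this, it is worth confirming that every node actually reports the target version before declaring victory. A minimal sketch against the REST API; the endpoint and credentials are hypothetical placeholders:

```python
import requests

# Hypothetical endpoint and credentials; adjust for your cluster.
ES = "https://elasticsearch.example.org:9200"
AUTH = ("admin", "secret")

# Every node reports the Elasticsearch version it runs; after a
# rolling upgrade all of them should show the target version.
nodes = requests.get(f"{ES}/_cat/nodes?h=name,version&format=json",
                     auth=AUTH).json()
for node in nodes:
    print(f"{node['name']}: {node['version']}")

# Sanity-check overall cluster state and total document count.
health = requests.get(f"{ES}/_cluster/health", auth=AUTH).json()
count = requests.get(f"{ES}/_cat/count?format=json", auth=AUTH).json()[0]
print(f"status={health['status']} docs={count['count']}")
```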

Elasticsearch - Common maintenance tasks

If you administer an Elasticsearch cluster, there are some common maintenance tasks you will have to run to keep data growth under control, back up your indexes and keep the cluster updated.

At the University of Oslo we have a 14-node Elasticsearch 5.x cluster (3 master + 2 client + 4 SSD data + 5 SAS data nodes). We use it to manage, search, analyze and explore our logs. It has around 100 TB of total storage and around 1,300 indexes, and we keep from 3 to 12 months of data per index type, depending on the kind of data it holds.
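
Keeping data growth under control mostly means deleting indexes that have aged out of their retention window. In practice a tool like Elasticsearch Curator automates this; the sketch below only shows the underlying idea, assuming time-based index names such as logstash-YYYY.MM.DD. The endpoint, credentials and 90-day retention are hypothetical:

```python
from datetime import datetime, timedelta

import requests

# Hypothetical endpoint, credentials and retention; adjust to your setup.
ES = "https://elasticsearch.example.org:9200"
AUTH = ("admin", "secret")
RETENTION_DAYS = 90  # e.g. an index type with 3 months of retention

cutoff = datetime.utcnow() - timedelta(days=RETENTION_DAYS)

# _cat/indices lists every index; the date is parsed out of the name.
for row in requests.get(f"{ES}/_cat/indices?h=index&format=json",
                        auth=AUTH).json():
    name = row["index"]
    if not name.startswith("logstash-"):
        continue
    try:
        index_date = datetime.strptime(name.split("-")[-1], "%Y.%m.%d")
    except ValueError:
        continue  # not a date-suffixed index
    if index_date < cutoff:
        # DELETE /<index> drops the index and frees its storage.
        requests.delete(f"{ES}/{name}", auth=AUTH).raise_for_status()
        print(f"deleted {name}")
```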

Access to Elasticsearch with Cerebro via SSL+LDAP

One of the main plugins we used with our Elasticsearch 2.x cluster was KOPF. This plugin was a web interface to the Elasticsearch API and an easy way of performing common tasks on our cluster.

When we upgraded our Elasticsearch cluster to version 5.x, we could not continue using this plugin because it was no longer supported. The good news was that the author of KOPF, Leonardo Menezes, had a new project called Cerebro that offered an alternative to KOPF when running Elasticsearch 5.x.
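
Cerebro talks to the same REST API that KOPF did, so when the web UI is not an option, the same day-to-day tasks can be scripted directly. A minimal sketch of one such task, toggling shard allocation around a node restart; the endpoint and credentials are hypothetical:

```python
import requests

ES = "https://elasticsearch.example.org:9200"  # hypothetical endpoint
AUTH = ("ldap-user", "secret")                 # hypothetical credentials

def set_allocation(enabled: bool) -> None:
    """Enable or disable shard allocation, e.g. around a rolling restart."""
    body = {"transient": {"cluster.routing.allocation.enable":
                          "all" if enabled else "none"}}
    requests.put(f"{ES}/_cluster/settings",
                 json=body, auth=AUTH).raise_for_status()

set_allocation(False)  # stop reallocation before restarting a node
# ... restart the node and wait for it to rejoin the cluster ...
set_allocation(True)   # re-enable allocation and let the cluster recover
```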

Centralized logging at UiO (Sentralisert logging ved UiO)

A presentation about why and how we analyze log data at the University of Oslo with Elasticsearch, Logstash and Kibana.
