FREE course
This FREE tutorial is part of the video course 18+ Secrets From Elasticsearch Golden Contributor, where I share the best tips and tricks about the ELK stack that I have collected during my professional career. Check it out in the Course tab.
1. Introduction
You can smoothly upgrade a node to a higher Elasticsearch version, but you cannot easily come back to an older one. This is due to the way Apache Lucene writes data on disk. The only way back is to clean up your /data directory and restore the indices from a snapshot. Therefore, making a backup before an upgrade is a must (a minimal snapshot sketch follows the error examples below).
Without cleaning the data directory, Elasticsearch will not start after a downgrade:
Caused by: org.apache.lucene.index.IndexFormatTooNewException: Format version is not supported (resource BufferedChecksumIndexInput(NIOFSIndexInput(path="/var/lib/elasticsearch/_state/_811.cfs") [slice=_811.fnm])): 1 (needs to be between 0 and 0)
Suppressed: org.apache.lucene.index.CorruptIndexException: checksum passed (2d4fdddb). possibly transient resource issue, or a Lucene or JVM bug (resource= BufferedChecksumIndexInput(NIOFSIndexInput(path="/var/lib/elasticsearch/_state/_811.cfs") [slice=_811.fnm]))
"explanation": "cannot allocate replica shard to a node with version [8.12.0] since this is older than the primary version [8.13.3]"
2. Start ELK cluster
Download the zip package with the project and unpack it. You should see the following structure:
elk3nodes-ipv4
- env_variables.sh
- compose.yml
- elasticsearch-certutil-instances
- elasticsearch-certutil-instances.yml
- elkConfig01
- elasticsearch.yml
- stop_docker_compose_cluster.sh
- start_docker_compose_cluster.sh
Go inside the elk3nodes-ipv4 directory and execute:
./start_docker_compose_cluster.sh
Wait until you see the entries below in the logs:
installation_container-1 | Elasticsearch cluster started with status green
installation_container-1 exited with code 0
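You can quickly confirm which version the cluster is running by calling the root endpoint (credentials as used throughout this tutorial):
curl -k -u elastic:123456 -XGET "https://localhost:9200/"
The response contains a version.number field that should match STACK_VERSION from env_variables.sh.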
3. Load example data
Run the following commands to create the mapping and load the data:
curl -k -u elastic:123456 -XPUT "https://localhost:9200/products" -H 'Content-Type: application/json' -d'
{
  "settings": {
    "index": {
      "number_of_replicas": 0
    }
  },
  "mappings": {
    "properties": {
      "description": {
        "type": "text"
      },
      "product_id": {
        "type": "text"
      }
    }
  }
}
'
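Before loading the documents, you can optionally verify that the mapping was created as expected:
curl -k -u elastic:123456 -XGET "https://localhost:9200/products/_mapping?pretty"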
curl -k -u elastic:123456 -XPOST "https://localhost:9200/_bulk" -H 'Content-Type: application/json' -d'
{"index":{"_index":"products","_id":1}}
{"description":"Running Shoes, Red","product_id":"RS-001"}
{"index":{"_index":"products","_id":2}}
{"description":"Red Running Shoes","product_id":"RS-002"}
{"index":{"_index":"products","_id":3}}
{"description":"Blue Hiking Boots","product_id":"HB-001"}
{"index":{"_index":"products","_id":4}}
{"description":"Hiking Boots, Leather, Brown","product_id":"HB-002"}
{"index":{"_index":"products","_id":5}}
{"description":"Running-Shoes Special Edition","product_id":"RS-003"}
'
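A quick way to confirm the bulk load worked is to count the documents in the index; it should report the five products indexed above (give the index a second to refresh):
curl -k -u elastic:123456 -XGET "https://localhost:9200/products/_count?pretty"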
3.1. Check cluster health
Please confirm that the cluster is up and running with GREEN status:
curl -k -u elastic:123456 -XGET "https://localhost:9200/_cluster/health?pretty"
curl -k -u elastic:123456 -XGET "https://localhost:9200/_cat/indices?v"
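If you script this check, the health call can also block until the cluster reaches GREEN status (or the timeout expires), for example:
curl -k -u elastic:123456 -XGET "https://localhost:9200/_cluster/health?wait_for_status=green&timeout=60s&pretty"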
4. Upgrade cluster
Time to upgrade the cluster.
4.1. Shut down the container
docker stop elastic-three-nodes-cluster-dockercompose-ip4-es01-1
4.2. Change version to higher
Modify the env_variables.sh file to change the ELK version to the higher one:
STACK_VERSION=8.8.1
#STACK_VERSION=8.4.1
4.3. Deploy cluster again
The cluster will be deployed on top of the previous one, reusing the same data directory.
./start_docker_compose_cluster.sh
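Once the script finishes, you can confirm that the containers were recreated with the new image tag, for example:
docker ps --format "table {{.Names}}\t{{.Image}}"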
4.4. Check data
curl -k -u elastic:123456 -XGET "https://localhost:9200/products/_doc/1"
Data is present.
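You can also double-check that every node reports the new version:
curl -k -u elastic:123456 -XGET "https://localhost:9200/_cat/nodes?v&h=name,version"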
5. Downgrade cluster
In the next steps you will go back to the previous version of Elasticsearch and try to start the cluster with the data directory preserved in the volume.
5.1. Shut down the container
docker stop elastic-three-nodes-cluster-dockercompose-ip4-es01-1
5.2. Change version back to lower version
Modify the env_variables.sh file to change the ELK version back to the lower one:
#STACK_VERSION=8.8.1
STACK_VERSION=8.4.1
5.3. Deploy cluster again
The cluster will be deployed on top of the previous one, reusing the same data directory.
./start_docker_compose_cluster.sh
5.4. Error
During startup you should see an error, which proves that you cannot come back to the previous version after an upgrade: Elasticsearch does not support downgrades. The data has become incompatible, so the only way to run the lower version with the same data is to restore it from a backup. Whenever you plan an upgrade and want to keep a rollback option, make sure a backup is taken beforehand.
{"@timestamp":"2024-12-20T13:27:51.696Z", "log.level":"ERROR", "message":"fatal exception while booting Elasticsearch", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"main","log.logger":"org.elasticsearch.bootstrap.Elasticsearch","elasticsearch.node.name":"es01","elasticsearch.cluster.name":"docker-compose-elk","error.type":"java.lang.IllegalArgumentException","error.message":"Could not load codec 'Lucene95'. Did you forget to add lucene-backward-codecs.jar?","error.stack_trace":"java.lang.IllegalArgumentException: Could not load codec 'Lucene95'. Did you forget to add lucene-backward-codecs.jar?\n\tat org.apache.lucene.core@9.3.0/org.apache.lucene.index.SegmentInfos.readCodec(SegmentInfos.java:515)\n\tat org.apache.lucene.core@9.3.0/org.apache.lucene.index.SegmentInfos.parseSegmentInfos(SegmentInfos.java:404)\n\tat org.apache.lucene.core@9.3.0/org.apache.lucene.index.SegmentInfos.readCommit(SegmentInfos.java:363)\n\tat org.apache.lucene.core@9.3.0/org.apache.lucene.index.SegmentInfos.readCommit(SegmentInfos.java:299)\n\tat org.apache.lucene.core@9.3.0/org.apache.lucene.index.StandardDirectoryReader$1.doBody(StandardDirectoryReader.java:88)\n\tat org.apache.lucene.core@9.3.0/org.apache.lucene.index.StandardDirectoryReader$1.doBody(StandardDirectoryReader.java:77)\n\tat org.apache.lucene.core@9.3.0/org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:809)\n\tat org.apache.lucene.core@9.3.0/org.apache.lucene.index.StandardDirectoryReader.open(StandardDirectoryReader.java:109)\n\tat org.apache.lucene.core@9.3.0/org.apache.lucene.index.StandardDirectoryReader.open(StandardDirectoryReader.java:67)\n\tat org.apache.lucene.core@9.3.0/org.apache.lucene.index.DirectoryReader.open(DirectoryReader.java:60)\n\tat org.elasticsearch.server@8.4.1/org.elasticsearch.gateway.PersistedClusterStateService.nodeMetadata(PersistedClusterStateService.java:322)\n\tat org.elasticsearch.server@8.4.1/org.elasticsearch.env.NodeEnvironment.loadNodeMetadata(NodeEnvironment.java:599)\n\tat org.elasticsearch.server@8.4.1/org.elasticsearch.env.NodeEnvironment.(NodeEnvironment.java:326)\n\tat org.elasticsearch.server@8.4.1/org.elasticsearch.node.Node.(Node.java:456)\n\tat org.elasticsearch.server@8.4.1/org.elasticsearch.node.Node.(Node.java:311)\n\tat org.elasticsearch.server@8.4.1/org.elasticsearch.bootstrap.Elasticsearch$2.(Elasticsearch.java:214)\n\tat org.elasticsearch.server@8.4.1/org.elasticsearch.bootstrap.Elasticsearch.initPhase3(Elasticsearch.java:214)\n\tat org.elasticsearch.server@8.4.1/org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:67)\n\tSuppressed: org.apache.lucene.index.CorruptIndexException: checksum passed (61cb6719). possibly transient resource issue, or a Lucene or JVM bug (resource=BufferedChecksumIndexInput(NIOFSIndexInput(path=\"/usr/share/elasticsearch/data/_state/segments_1h\")))\n\t\tat org.apache.lucene.core@9.3.0/org.apache.lucene.codecs.CodecUtil.checkFooter(CodecUtil.java:500)\n\t\tat org.apache.lucene.core@9.3.0/org.apache.lucene.index.SegmentInfos.readCommit(SegmentInfos.java:370)\n\t\t... 15 more\nCaused by: java.lang.IllegalArgumentException: An SPI class of type org.apache.lucene.codecs.Codec with name 'Lucene95' does not exist. You need to add the corresponding JAR file supporting this SPI to your classpath. 
The current classpath supports the following names: [Lucene92, Lucene70, Lucene80, Lucene84, Lucene86, Lucene87, Lucene90, Lucene91, BWCLucene70Codec, Lucene62, Lucene60, SimpleText]\n\tat org.apache.lucene.core@9.3.0/org.apache.lucene.util.NamedSPILoader.lookup(NamedSPILoader.java:113)\n\tat org.apache.lucene.core@9.3.0/org.apache.lucene.codecs.Codec.forName(Codec.java:118)\n\tat org.apache.lucene.core@9.3.0/org.apache.lucene.index.SegmentInfos.readCodec(SegmentInfos.java:511)\n\t... 17 more\n"}
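If the start script does not surface the full message, you can read it straight from the container logs (using the same container name as in the docker stop command above):
docker logs elastic-three-nodes-cluster-dockercompose-ip4-es01-1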
5.5. What to do?
You have to restore the data from a backup. That is why it is important to take a backup before upgrading to a higher version. Going back up to the higher version will still work, in case you are happy to stay on it.
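As a rough sketch of that recovery path: after cleaning the data directory and starting a fresh cluster on the lower version, you would register the snapshot repository again and restore the index from it. The repository and snapshot names below match the hypothetical example from the introduction.
curl -k -u elastic:123456 -XPOST "https://localhost:9200/_snapshot/my_backup/pre-upgrade-1/_restore" -H 'Content-Type: application/json' -d'
{
  "indices": "products"
}
'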
6. Clean up
After the exercises, run the stop script to remove the containers and their associated volumes:
./stop_docker_compose_cluster.sh
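You can verify that nothing is left behind by listing containers and volumes; the cluster containers and their volumes should no longer appear:
docker ps -a
docker volume ls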
7. Summary
In this tutorial you ran several commands to create an Elasticsearch cluster with a particular version, then performed an upgrade, and finally tried to downgrade the cluster back to the lower version, which failed. During this exercise you learned that it is very important to back up your data before an upgrade if you want to keep the rollback option on the table.
With that knowledge you can plan your upgrades better now.
Happy coding!