If you haven't set any data persistence policies for the Redis cache deployed in your Kubernetes cluster, and you want to move your data from one Redis pod to another (or you're simply curious how to back up Redis cache data and import it into a new Kubernetes Redis pod), this guide is for you.
How did we even get here?
If we don’t have any persistence policies for the Redis instances, data is lost if the Kubernetes Redis pod gets deleted or the Redis server is rebooted.
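To confirm whether a running instance actually has persistence enabled, you can inspect the relevant settings with redis-cli. These are illustrative commands: they assume a reachable instance and that `REDIS_PASSWORD` holds your actual password.

```shell
# Inspect persistence settings on a live instance (assumes redis-cli can reach it)
$ redis-cli -a REDIS_PASSWORD CONFIG GET save        # RDB snapshot schedule; an empty value means RDB snapshots are disabled
$ redis-cli -a REDIS_PASSWORD CONFIG GET appendonly  # "no" means the append-only file (AOF) is disabled
```

If `save` is empty and `appendonly` is `no`, everything lives only in memory and a pod deletion wipes it.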
Steps to backup and restore Redis cache
```shell
# Set the Kubernetes namespace where Redis is deployed
$ kubens redis-service

# To access redis-cli, port-forward the Redis port (default is 6379)
$ kubectl port-forward pods/redis-master-0 6379:6379

# Connect to the Redis DB from another window with redis-cli
$ redis-cli -a REDIS_PASSWORD

# Check the working directory where Redis persists its data
127.0.0.1:6379> CONFIG GET dir
# this command outputs something like:
1) "dir"
2) "/data"    # this is where the Redis DB is being persisted

# Take a snapshot with SAVE; if your dataset is large (GBs), run BGSAVE instead.
# Either command writes a .rdb file called dump.rdb into the directory above.
127.0.0.1:6379> SAVE

# Fire up a new terminal and open a shell in the Redis pod to retrieve the file
$ kubectl exec -it redis-master-0 -c redis -- /bin/bash

# Copy the file with kubectl cp from the pod to a local folder, say stg-data.
# Syntax: kubectl cp <namespace>/<pod>:<file> <destination_path>
$ kubectl cp redis-service/redis-master-0:data/dump.rdb /Users/NAME/repos/stg-data

# An additional precautionary step you can take at this point:
# verify that the dump file is valid for reimporting. Fire up a local
# Redis instance, place the file at the local Redis's "dir" location,
# restart Redis, then run KEYS * to check that the data in dump.rdb is valid.

# With the backup taken, you're ready to move the data to your new Redis cluster
$ cd stg-data

# Set up or update your kubectx to point to the new cluster (the new environment).
# Since you've switched clusters, refresh the port-forward and
# reconnect with redis-cli as we did earlier.

# Push the backed-up Redis DB to the new cluster; this command replaces the
# new cluster's dump.rdb, which is stored at the "dir" location
$ kubectl cp dump.rdb redis-service/redis-master-0:data/dump.rdb

# Redis needs to be restarted: go to the redis-cli window and kick off a shutdown.
# Use SHUTDOWN NOSAVE so Redis doesn't overwrite the copied dump.rdb
# with its current in-memory data on the way down.
127.0.0.1:6379> SHUTDOWN NOSAVE

# Last step: once the pod restarts, verify that the data was successfully
# imported from the RDB file we replaced
127.0.0.1:6379> KEYS *
# this should list all keys; verify your keys against the list
```
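Before the reimport, a quick first-pass sanity check on the dump is possible without spinning up a Redis instance at all: every valid RDB file starts with the 5-byte magic string `REDIS` followed by a version number. A minimal sketch (`check_rdb` is a hypothetical helper; the path is an example):

```shell
# Check that a file looks like a Redis RDB dump by inspecting its magic header.
check_rdb() {
  local file="$1"
  if [ "$(head -c 5 "$file")" = "REDIS" ]; then
    echo "looks like a valid RDB file"
  else
    echo "not an RDB file"
    return 1
  fi
}

# Example usage:
# check_rdb /Users/NAME/repos/stg-data/dump.rdb
```

This only validates the header, so it catches truncated-to-zero or wrong-file mistakes; loading the dump into a throwaway local Redis remains the thorough check.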
If you've reached this point, congratulations! You've successfully exported your cached data and imported it into your new Redis cluster or newly deployed pod.
More importantly, though, a Redis cluster running in a production environment should have data persistence policies in place.
This can be addressed in a number of ways, including:
- Using persistence volume API from Kubernetes: https://kubernetes.io/docs/concepts/storage/persistent-volumes/
- Exploring data export options to S3 for disaster recovery
- Storing backup data in a database
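For the first option, the prerequisite is that Redis persistence itself is turned on, with its data directory backed by a PersistentVolume. A minimal redis.conf sketch, assuming Redis's default snapshot thresholds and the /data mount used earlier:

```
# redis.conf fragment (illustrative values)
# Write an RDB snapshot if >=1 key changed in 900 s, >=10 in 300 s, >=10000 in 60 s
save 900 1
save 300 10
save 60 10000
# Additionally enable the append-only file for more durable persistence
appendonly yes
# Persist under /data, which should be backed by a PersistentVolume
dir /data
```

With a PersistentVolumeClaim mounted at `/data`, the dump.rdb and AOF survive pod deletion and rescheduling, making the manual export/import above an exceptional recovery path rather than routine maintenance.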