In this blog we will explore how Nutanix Objects helps you manage your data, and look at some of the options you have for controlling it.
A Nutanix Objects cluster can scale to billions of objects and petabytes of data. But as storage demand grows over time, managing such a high volume of data becomes increasingly difficult.
Keeping watch on every piece of incoming data, and manually deleting what is no longer needed, is an almost impossible task at that scale.
Let's take a typical example from a DevOps/build workload: you have a Jenkins build system that produces a build for every code commit coming from your engineering team, and Jenkins then dumps those builds (anywhere from a few MBs to GBs each) into your Objects cluster. These builds matter, but after a day or two they can be safely deleted; you really only want to keep each build for a short (or specific) amount of time, and then delete it to free up storage resources. Deleting those builds, however, becomes a manual task for the admin. Now consider that you are also using the same Objects cluster to host company-wide data, application-critical data, and the DevOps workload. With hundreds or thousands of users and applications consuming the cluster, the amount of data pushed to Objects becomes a real problem, and deleting that much data manually is a real chore.
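To make the pain concrete, here is a minimal sketch of the manual cleanup an admin would otherwise have to script by hand: list every object, compare its age against a cutoff, and collect the stale keys for deletion. The `builds/` key layout and the 2-day cutoff are hypothetical values for illustration, not from any real deployment.

```python
import datetime

def find_stale_keys(objects, max_age_days=2, now=None):
    """Return the keys of objects older than `max_age_days`.

    `objects` is an iterable of (key, last_modified) pairs, such as you
    would assemble from paginated S3 list-objects responses. In a real
    script you would then issue delete requests for each returned key.
    """
    now = now or datetime.datetime.now(datetime.timezone.utc)
    cutoff = now - datetime.timedelta(days=max_age_days)
    return [key for key, last_modified in objects if last_modified < cutoff]
```

This has to be scheduled, monitored, and paginated across potentially billions of objects, which is exactly the burden lifecycle policies remove.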
And that's where you need native policies that can automatically clean up unwanted data and free up valuable storage resources. This is exactly what Objects Lifecycle policies do for you.
With this feature, you configure object expiry policies on a bucket, and Objects handles the cleanup of all applicable data. You don't have to worry about manually deleting millions or billions of objects to free up storage space. Let's dig deeper.
We will explore:
- Objects Lifecycle policies.
- Configuring expiration:
- From Objects UI.
- From S3 client.
  - From the S3 API.
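The S3-client route relies on the standard S3 lifecycle API, which any S3-compatible client such as boto3 can drive. Here is a hedged sketch: the bucket name `jenkins-builds`, the `builds/` prefix, the 2-day expiry, and the endpoint/credential environment variables are all assumptions for illustration, not values from this post.

```python
import json
import os

# A lifecycle rule that expires (auto-deletes) objects under the
# hypothetical "builds/" prefix two days after they are created.
LIFECYCLE_CONFIG = {
    "Rules": [
        {
            "ID": "expire-old-builds",
            "Filter": {"Prefix": "builds/"},
            "Status": "Enabled",
            "Expiration": {"Days": 2},
        }
    ]
}

def apply_lifecycle(bucket: str, endpoint_url: str) -> None:
    """Apply LIFECYCLE_CONFIG to `bucket` on an S3-compatible endpoint."""
    # boto3 is imported lazily so the config above can be inspected
    # without the library installed.
    import boto3

    s3 = boto3.client(
        "s3",
        endpoint_url=endpoint_url,  # e.g. your Objects cluster endpoint
        aws_access_key_id=os.environ["AWS_ACCESS_KEY_ID"],
        aws_secret_access_key=os.environ["AWS_SECRET_ACCESS_KEY"],
    )
    s3.put_bucket_lifecycle_configuration(
        Bucket=bucket, LifecycleConfiguration=LIFECYCLE_CONFIG
    )

if __name__ == "__main__":
    # e.g. apply_lifecycle("jenkins-builds", "https://objects.example.local")
    print(json.dumps(LIFECYCLE_CONFIG, indent=2))
```

Once the rule is in place, the cluster evaluates and deletes expired objects on its own; no client-side cleanup job is needed.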
What you need:
- An Objects cluster.
- Valid IAM credentials.
- Access to the Objects UI.
I have 7 Objects clusters deployed on my Prism Central.