Storage optimization

After compute, storage is typically the largest contributor to cloud bills. These costs include the virtual disks and volumes attached to each VM, the object storage services hosting application artifacts, images, videos, and documents, archival storage usage, and the backups of all of this data.

For all forms of data storage, it is important to establish a life cycle that is as automated as possible. Data and backups should naturally flow down this life cycle until they are archived for compliance purposes or deleted. Tags should be used to designate which data stores or objects must be retained long-term for compliance. Scripts can be written to manage these transitions, but cloud-native services exist to automate the effort.

The AWS Simple Storage Service (S3) is a highly durable service for storing objects. S3 supports life cycle policies, which allow users to define transitions from S3 Standard to cheaper, lower-performance storage classes: S3 Standard-IA (Infrequent Access), S3 One Zone-IA, and S3 Glacier (long-term archival cold storage). All three of these target classes cost less than S3 Standard. Objects can also be set to expire, that is, be permanently deleted, a fixed number of days after creation. Policies can be set at the bucket level, where they cascade down to all folders and objects.
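As a sketch, a bucket life cycle configuration combining tiering transitions with expiration might look like the following. The rule ID, the `logs/` prefix, and the day counts are illustrative assumptions, not recommendations; the JSON shape matches what the S3 lifecycle configuration API accepts.

```json
{
  "Rules": [
    {
      "ID": "tier-then-expire-logs",
      "Status": "Enabled",
      "Filter": { "Prefix": "logs/" },
      "Transitions": [
        { "Days": 30, "StorageClass": "STANDARD_IA" },
        { "Days": 90, "StorageClass": "GLACIER" }
      ],
      "Expiration": { "Days": 365 }
    }
  ]
}
```

A configuration like this can be applied with `aws s3api put-bucket-lifecycle-configuration --bucket <bucket-name> --lifecycle-configuration file://lifecycle.json`, where the bucket name is whatever bucket the policy should govern.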

Amazon Data Lifecycle Manager (DLM) for AWS EBS provides a native solution for managing hundreds or thousands of volume-level backups. Without it, large enterprises would find this task cumbersome, as automated tools would have to be built to query snapshots, collect their metadata, and delete those that are old or expired. DLM allows users to define these life cycles natively.
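As an illustration, a DLM policy document of this kind targets volumes by tag, snapshots them on a schedule, and prunes old snapshots automatically. The tag key/value, schedule name, interval, and retention count below are illustrative assumptions; the structure follows the policy-details format the DLM API expects.

```json
{
  "PolicyType": "EBS_SNAPSHOT_MANAGEMENT",
  "ResourceTypes": ["VOLUME"],
  "TargetTags": [{ "Key": "Backup", "Value": "Daily" }],
  "Schedules": [
    {
      "Name": "DailySnapshots",
      "CreateRule": { "Interval": 24, "IntervalUnit": "HOURS", "Times": ["03:00"] },
      "RetainRule": { "Count": 14 }
    }
  ]
}
```

A policy like this would be registered with `aws dlm create-lifecycle-policy`, supplying a description, an execution role ARN, and the policy details file; the retain rule is what replaces the hand-built cleanup scripts described above.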
