How To Reduce Costs In AWS

Author

Ankur Mandal

4 min read
March 11, 2024

While there is no doubt that AWS leads the way as a cloud service provider, your AWS bill can quickly escalate if the factors contributing to it are not kept in check. 

Several factors affect your AWS bill, such as storage and compute resource allocation, with storage a particularly prominent contributor.

Hence, it is essential to have strategies for identifying and eliminating unwanted storage and compute allocations. This blog will walk you through some practical ways to reduce AWS costs.

Introduction To AWS

Renowned for its reliability, scalability, and cost-effectiveness, AWS boasts a suite of over 200 comprehensive services. Statistics from the second quarter of 2022 affirm AWS's dominance, with a formidable 34% share of the ever-evolving cloud market.

Understanding the intricate cost structure of AWS becomes pivotal in this dynamic landscape. Such comprehension enables informed decision-making, efficient resource management, and the implementation of robust cost-saving strategies.

Deciphering the cost implications of various AWS resources is key to strategic resource provisioning and utilization. This understanding lays the groundwork for optimization strategies like right-sizing instances, judiciously selecting storage options, and leveraging computing resources optimally.

Exploring the diverse factors influencing AWS costs is vital for effective cost control. These factors encompass computing resource utilization, storage resource usage, and data transfer. 

Let us talk about these factors in detail.

Computing resource utilization: The following compute options affect AWS costs.

  • On-Demand Instances: Deliver scalable compute capacity with no upfront expense or long-term commitment. Usage is billed per hour or per second, depending on the instance type.
  • Reserved Instances (RIs): Offer significant cost savings compared to On-Demand Instances in exchange for a one- or three-year commitment. Users can choose between Standard, Convertible, and Scheduled RIs based on how much flexibility they need.
  • Spot Instances: Let users purchase unused EC2 capacity at a steep discount (see the pricing sketch after this list). However, AWS may interrupt these instances when it needs the capacity back.
  • Dedicated Hosts: Give users exclusive use of a physical server. Pricing depends on the number of Dedicated Hosts and the instance type.
  • AWS Lambda: Lets users execute code without provisioning or managing servers. Pricing is based on the number of requests and the compute time consumed.
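
Pricing differences between these models can be checked programmatically. As a hedged illustration, the boto3 sketch below pulls recent Spot price history for one instance type; the region and the "m5.large" instance type are assumptions for illustration, not recommendations.

```python
# Sketch: inspect recent Spot prices for an instance type via boto3.
# Region and instance type are illustrative assumptions.
import boto3
from datetime import datetime, timedelta, timezone

ec2 = boto3.client("ec2", region_name="us-east-1")  # hypothetical region

resp = ec2.describe_spot_price_history(
    InstanceTypes=["m5.large"],                     # hypothetical instance type
    ProductDescriptions=["Linux/UNIX"],
    StartTime=datetime.now(timezone.utc) - timedelta(hours=6),
)

# Print a handful of the most recent price points per Availability Zone.
for point in resp["SpotPriceHistory"][:5]:
    print(point["AvailabilityZone"], point["SpotPrice"], point["Timestamp"])
```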

Storage resource usage: The following are the storage resources that impact the overall AWS bill.

  • Amazon S3 (Simple Storage Service): S3 is a scalable service designed for storing objects, offering a range of storage classes. The pricing structure incorporates the storage used, data transfer, and additional features like versioning and data retrieval.
  • Amazon EBS (Elastic Block Store): EBS provides block-level storage volumes that EC2 instances can utilize. The cost is determined by the provisioned storage capacity, with various volume types available for selection.
  • Amazon Glacier: Glacier is a cost-effective storage service mainly used for long-term data backup and archiving purposes. The pricing model is based on the amount of data stored and retrieval time.
  • Amazon RDS (Relational Database Service): Amazon RDS offers managed relational databases, with costs encompassing database instance hours, storage capacity, and data transfer.

Data Transfer: Data transfer charges arise primarily when data moves out of AWS to the internet or between AWS regions; inbound transfer is generally free. The pricing depends on the regions involved and the volume of data transferred.

Among these, storage resource usage emerges as a particularly impactful element in the overall AWS billing structure.

Virtana's study, The State of Hybrid Cloud Storage, revealed that 95% of cloud decision-makers in the US and UK acknowledged the surge in storage costs. Alarmingly, over 54% reported that their storage expenses were escalating faster than their overall cloud bills.

At Lucidity, we delved deeper into understanding the correlation between storage expenses and overall cloud expenditures through an extensive storage audit conducted across leading companies. Our analysis revealed that a significant 40% of these organizations' total cloud spending is directed toward storage services.

In our research involving over 100 enterprise clients, we observed that 15% of their total cloud expenses derived from EBS (Elastic Block Store) usage within AWS. Additionally, we noted an average disk utilization rate of 25%. Despite significant overprovisioning, these enterprises faced quarterly downtimes due to insufficient disk space.

Further analysis revealed the primary reasons behind the escalating EBS costs within AWS:

  • Idle volumes: Idle volumes represent provisioned but unused storage, whether unattached to a VM or attached to a stopped VM. Although idle, they still consume resources and incur costs. In our client audits, we found idle resources to be a significant cause of cost wastage; identifying and decommissioning them frees up resources and cuts the expenses linked to unused storage.
  • Over-utilized volumes: Over-utilized volumes emerge when existing storage consistently operates near maximum capacity. This indicates the workload may need additional capacity or a more efficient storage type to meet demand. Optimizing cost and performance means monitoring and adjusting volume capacity to match actual workload requirements.
  • Overprovisioned volumes: An overprovisioned volume has been provisioned with more capacity than the workload requires, often due to inaccurate estimates of storage needs or evolving usage patterns. Our storage audit revealed that 95% of the wastage was due to overprovisioned resources.
    Overprovisioned volumes contribute to unnecessary costs because you pay for provisioned capacity rather than actual usage. For cost optimization, size volumes based on accurate assessments of storage needs; with tooling for dynamic scaling, provisioned capacity can be adjusted to real-time demand (see the utilization sketch after this list).

While organizations focus heavily on optimizing compute resources, they often overlook storage, which contributes significantly to the overall AWS cost, especially EBS. An EBS volume that is provisioned but not actively used still incurs cost.

Organizations often consider storage optimization a hassle because cloud service providers (CSPs) offer limited native functionality. Optimizing storage therefore often necessitates building a custom tool, an approach that significantly increases DevOps effort and time investment.

On the other hand, using CSP tools exclusively may result in a less-than-optimal, manual, and resource-intensive process that cannot be sustained daily.

Thus, organizations may tolerate over-provisioning to protect critical applications while acknowledging the tangible impact on their day-to-day operations. The compromise results from the challenges of balancing the need for efficient storage management and the practical constraints imposed by the available tools and resources.

This seemingly straightforward approach to "storage optimization" comes with several cost-related consequences, including:

  • Overprovisioning EBS volumes means allotting more storage space than necessary, which leads to higher costs.
  • Bills for Amazon Web Services are based upon provisioned storage capacity. Overprovisioning reduces cost efficiency since you pay for storage you do not use.
  • Overprovisioned EBS volumes sit underutilized, contributing to inefficient budgeting and infrastructure utilization. That capacity could be allocated more effectively to meet other business requirements.
  • When overprovisioned volumes are backed up, additional costs can arise. Storing unnecessary data increases the size of backups, leading to higher backup and disaster recovery costs.
  • The cost of overprovisioned volumes can increase when data must be transferred between volumes or instances, especially when data must be shared across regions or availability zones.
  • Overprovisioning cloud storage also affects the total cost of running cloud-based applications and services, which drives up overall cloud bills.

Assessing and adjusting EBS volumes regularly is essential. By implementing monitoring, automation, and scaling strategies, you can optimize storage resources dynamically, avoiding unnecessary expenses associated with overprovisioning and paying only for the resources you use. Regularly reviewing and optimizing your AWS storage strategy ensures your infrastructure is aligned with your applications' evolving needs, resulting in improved operational efficiency and cost savings. This strategic implementation ensures continued performance and reliability.

Tips to Reduce Costs in AWS

Now that we have covered the basics of AWS, its cost structure, and how overprovisioning is the major culprit behind growing AWS costs, let us talk about the various tips you can implement to reduce costs in AWS.

Assessing And Monitoring AWS Usage

The first step to reducing costs in AWS is to assess and monitor AWS usage. While multiple ways and tools can help assess and monitor compute resources, you will unfortunately not find many ways to assess and monitor AWS storage resources. You can follow the tips mentioned below to assess storage in AWS.

  • Choose the AWS storage services that best meet your specific requirements: Amazon S3 for object storage, Amazon EBS for block storage attached to EC2 instances, Amazon RDS or Amazon Aurora for relational databases, and Amazon Glacier for long-term archival.
  • Evaluate the frequency of data access to make informed decisions.
  • Consider whether low-latency access is necessary or occasional higher-latency retrieval is acceptable.
  • Implement lifecycle policies for managing data stored in Amazon S3. This allows transitioning between storage classes based on access patterns (e.g., Standard, Intelligent-Tiering, Glacier).
  • Leverage AWS tools to automate data archiving or deletion when it is appropriate.
  • Ensure the implementation of robust security measures, including encryption in transit and at rest.
  • Maintain compliance with relevant data governance and regulatory requirements.
  • Develop a clear understanding of the cost structure associated with the selected storage services.

Another crucial aspect of reducing costs in AWS is monitoring EBS utilization. Monitoring EBS metrics helps detect potential performance bottlenecks in your storage infrastructure, such as elevated disk I/O, latency, or throughput issues. By scrutinizing EBS metrics, you can tune your storage settings to match your applications' performance requirements. Moreover, tracking EBS usage helps detect cost-driving activity, such as excessive I/O or heavily overprovisioned storage resources.
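
As a hedged illustration of this kind of monitoring, the boto3 sketch below sums a week of read and write operations for one volume from CloudWatch's AWS/EBS namespace to flag near-idle disks; the volume ID and the idle threshold are assumptions for illustration.

```python
# Sketch: flag an EBS volume as near-idle from its CloudWatch I/O metrics.
# The volume ID and the idle threshold are illustrative assumptions.
import boto3
from datetime import datetime, timedelta, timezone

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")  # hypothetical region

def weekly_ops(volume_id: str, metric: str) -> float:
    """Sum of the given EBS metric over the past 7 days, one datapoint per day."""
    resp = cloudwatch.get_metric_statistics(
        Namespace="AWS/EBS",
        MetricName=metric,
        Dimensions=[{"Name": "VolumeId", "Value": volume_id}],
        StartTime=datetime.now(timezone.utc) - timedelta(days=7),
        EndTime=datetime.now(timezone.utc),
        Period=86400,
        Statistics=["Sum"],
    )
    return sum(dp["Sum"] for dp in resp["Datapoints"])

vol = "vol-0123456789abcdef0"   # hypothetical volume ID
ops = weekly_ops(vol, "VolumeReadOps") + weekly_ops(vol, "VolumeWriteOps")
if ops < 1000:                  # illustrative idle threshold
    print(f"{vol} looks near-idle over the last week ({ops:.0f} total ops)")
```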

When assessing and monitoring AWS usage to reduce overall cost, we advise against relying on manual checks or stitching together multiple monitoring tools. Either path means too much work for DevOps teams or more money spent on tooling. Manual intervention across 3-4 tools makes the whole process cumbersome, hurting productivity and inviting downtime. On top of that, the growing complexity of storage environments makes monitoring costs even more challenging.

So what should you do?

We suggest going the automation way!

With Lucidity's Storage Audit, an executable automated auditing tool, you can gain comprehensive insights into your disk health. Once deployed with just a click of a button, Storage Audit offers details about:

  • Total Disk Expenditure: Our analysis helps you understand your current disk-related expenditures. We give you insight into your present disk spend, your optimized bill, and how to reduce your storage costs by as much as 70%.
  • Total Disk Inefficiency: We aim to eliminate unnecessary waste and enhance overall resource efficiency by pinpointing the root causes of overprovisioning within your disk utilization.
  • Disk Downtime Mitigation: To mitigate the reputational and financial risks of downtime, we collaborate with you to identify the instances and locations where disruptions could occur, so any downtime events are addressed proactively and minimized.

Resizing The Disk

Once you have the audit report on disk health, it's time to resize the disks as requirements change. Resizing a disk plays a crucial role in AWS cost optimization.

While resizing disks offers significant AWS cost-optimization benefits, AWS only allows expansion of storage resources, not shrinkage. Follow the steps below to expand your EBS storage (a scripted sketch follows the list).

  • Identify the EC2 instance type in use and the EBS volumes attached to it, then analyze current storage utilization through the AWS Management Console or command-line tools.
  • Open the EC2 dashboard in the AWS Management Console and navigate to the volumes view.
  • Identify the EBS volume associated with your particular EC2 instance.
  • Make modifications to the volume to increase its size as required.
  • For Linux instances, it might be necessary to utilize commands such as resize2fs to extend the file system appropriately.
  • Keep a close eye on the status of the volume modification to ensure progress.
  • Once the modification is completed, perform a verification process to confirm that the EC2 instance now reflects the updated storage capacity accurately.
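
For teams who prefer to script the expansion, here is a minimal boto3 sketch of the modification and polling steps; the volume ID and target size are illustrative assumptions, and the file-system grow step still happens on the instance itself.

```python
# Sketch: expand an EBS volume and wait for the modification to complete.
# Volume ID and the 200 GiB target are illustrative assumptions.
import time
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # hypothetical region
volume_id = "vol-0123456789abcdef0"                  # hypothetical volume ID

ec2.modify_volume(VolumeId=volume_id, Size=200)      # grow to 200 GiB

# Poll until the modification leaves the "modifying" state.
while True:
    mods = ec2.describe_volumes_modifications(VolumeIds=[volume_id])
    state = mods["VolumesModifications"][0]["ModificationState"]
    if state in ("optimizing", "completed"):
        break
    time.sleep(15)

# On a Linux instance you would then extend the partition and file system,
# e.g.:  sudo growpart /dev/xvda 1 && sudo resize2fs /dev/xvda1
print(f"Volume {volume_id} modification state: {state}")
```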

As for AWS EBS volume shrinkage, there is no direct way to do it. Shrinking a live volume requires rearranging data while the system is running, which is hard to do without affecting availability and reliability. Moreover, this data rearrangement can degrade the I/O performance of the storage volume, causing disruption.

Manually shrinking a storage volume would therefore force your DevOps team to navigate multiple tools, a time-consuming process that hurts productivity.

Hence, instead of manually working your way through resizing storage resources, we suggest opting for an automated way. We at Lucidity bring an EBS Auto-scaler, which automates the resizing process and seamlessly expands and shrinks the disk as the requirements change. 

In contrast to conventional scaling methods that often lead to overprovisioning, resulting in resource waste, or underprovisioning, resulting in performance issues, Lucidity offers a cutting-edge cloud storage solution. We ensure: 

  • Effortless increase or decrease of resource capacity without experiencing any interruptions or performance setbacks. 
  • A utilization level of up to 80% leads to substantial cost savings.

What Are The Benefits Of Lucidity's Auto Scaler?

Lucidity Auto Scaler offers the following benefits.

Minimized storage cost: Our storage solutions redefine the benchmarks for return on investment through cost efficiency. By using our EBS Auto-Scaler, you can save up to 70% on your storage costs. Unlike conventional on-premise block storage optimization solutions that require a minimum of 100 TB to deliver ROI visibility, Lucidity's EBS Auto-Scaler delivers tangible ROI with as little as 50GB of storage.

Automated expansion and shrinkage: We bring automation to the forefront of capacity planning, distancing it from the conventional approach of manual runbooks, multiple tools, and DevOps team intervention.

Our EBS Auto-Scaler automates the expansion and shrinkage of EBS, eliminating any potential downtime, buffer time, or performance lag. By automating the process, we ensure efficiency without requiring manual involvement.

The EBS Auto-Scaler also offers a customizable policy feature, allowing you to define specific parameters for optimized EBS management. Set disk utilization, maximum disk thresholds, and buffer sizes as you see fit.

No downtime: In less than a minute after an unexpected spike in traffic or workload, Lucidity expands the disk to meet the heightened demand. Both shrinking and expansion occur without downtime or performance impact.

We helped Bobble AI bring down its EBS cost!

Bobble AI, a prominent tech company, has been using AWS Auto-scaling groups with more than 600 instances running on average per ASG. While optimizing their AWS Auto Scaling Groups (ASGs), they encountered challenges caused by limitations in Elastic Block Storage (EBS). Their inefficient provisioning of EBS volumes led to significant operational complexities and cost overruns.

This is when they reached out to Lucidity. Bobble's Amazon Machine Image (AMI) seamlessly integrates with Lucidity's Autoscaler agent, facilitating effortless deployment within their Auto Scaling Group (ASG). Lucidity's seamless integration ensures it maintains a healthy utilization range of 70-80% by dynamically scaling each volume in response to workload demands.

With Lucidity, Bobble no longer has to code, create new AMIs, or refresh its entire cycle. In just a few clicks, Lucidity can provision Elastic Block Storage (EBS) volumes and scale over 600 instances per month.

With us, Bobble was able to:

  • Reduce their storage cost by 48%
  • Save 3 to 4 hours per week on DevOps efforts

Lucidity revolutionized their Auto Scaling Group (ASG) management by automating and optimizing storage resources, enabling them to minimize operational overheads and costs while maintaining high-performance standards.

Resource Tagging

Resource tagging is a fundamental practice for optimizing costs in AWS. It provides transparency, enables targeted cost management strategies, and allows organizations to tailor cloud spending to their business strategy.

Here are some ways you can use tags to reduce AWS costs (a hedged tagging sketch follows the list):

  • Employ tags to identify and cease or halt inactive resources.
  • Evaluate resource usage by considering tags and appropriately adjust instance sizes to optimize costs.
  • Specify scaling guidelines for Auto-Scaling Groups using tags.
  • Allocate resources dynamically using tags, promptly adapting capacity to cater to evolving demands.
  • Employ automation with tags to identify and handle redundant resources.
  • Establish budgetary limits based on tags, enabling monitoring of expenses for specific projects or departments.
  • Configure notifications to alert when tagged resources exceed predefined spending thresholds.
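
As a sketch of tag-driven cost control, the boto3 example below applies cost-allocation tags and then stops running instances that carry an explicit "safe to halt" tag; all tag keys, values, and IDs are illustrative assumptions.

```python
# Sketch: tag resources for cost allocation, then stop instances explicitly
# tagged as safe to halt. Tag keys/values and IDs are illustrative assumptions.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # hypothetical region

# Apply cost-allocation tags to a resource.
ec2.create_tags(
    Resources=["i-0123456789abcdef0"],               # hypothetical instance ID
    Tags=[
        {"Key": "project", "Value": "data-pipeline"},
        {"Key": "lifecycle", "Value": "idle-ok"},
    ],
)

# Find running instances tagged as safe to halt, and stop them.
reservations = ec2.describe_instances(
    Filters=[
        {"Name": "tag:lifecycle", "Values": ["idle-ok"]},
        {"Name": "instance-state-name", "Values": ["running"]},
    ],
)["Reservations"]

idle_ids = [i["InstanceId"] for r in reservations for i in r["Instances"]]
if idle_ids:
    ec2.stop_instances(InstanceIds=idle_ids)
    print("Stopped:", idle_ids)
```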

Delete Unattached EBS Volume

Unattached volumes continue to incur storage costs without providing any value. Identifying any EBS volumes no longer attached to any instances is essential. Delete unneeded volumes to ensure you only pay for storage actively contributing to your infrastructure. 

You can delete an unattached EBS volume using the following method (or script the cleanup, as sketched after the steps).

Using Console

  • To access the Amazon EC2 console, go to https://console.aws.amazon.com/ec2/.
  • On the left-hand navigation pane, click on "Volumes."
  • Find the particular volume you want to remove and click on it to select it.
  • Choose "Actions" from the menu and then select "Delete volume."
  • Confirm your decision in the confirmation dialog box by selecting "Delete."
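
If you would rather script the cleanup, here is a hedged boto3 sketch that lists volumes in the "available" (unattached) state and deletes them. Deletion is irreversible, so the sketch keeps a dry-run guard on by default.

```python
# Sketch: find unattached ("available") EBS volumes and delete them.
# Deleting volumes is irreversible; keep DRY_RUN=True until you trust the list.
import boto3

DRY_RUN = True                                       # illustrative safety guard
ec2 = boto3.client("ec2", region_name="us-east-1")   # hypothetical region

volumes = ec2.describe_volumes(
    Filters=[{"Name": "status", "Values": ["available"]}]
)["Volumes"]

for vol in volumes:
    print(f"Unattached volume: {vol['VolumeId']} ({vol['Size']} GiB)")
    if not DRY_RUN:
        ec2.delete_volume(VolumeId=vol["VolumeId"])
```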

Use Storage Tiers

Amazon S3 offers a variety of storage classes or tiers, such as Standard, Intelligent-Tiering, and Glacier. Select the appropriate storage class based on your data's access patterns and performance requirements; this lets you pay only for the level of durability and retrieval time you need. You can use storage tiers in the following ways to reduce AWS costs (a lifecycle-policy sketch follows the list).

  • To effectively manage your data, it is crucial to analyze the access patterns. This entails identifying the data that is accessed frequently and distinguishing it from rarely accessed data. For frequently accessed data, opting for a storage class that offers low latency and high throughput would be advantageous. Conversely, considering a lower-cost storage class would be more appropriate for infrequently accessed data.
  • AWS offers the capability to define lifecycle policies, which enable the automatic transfer of objects from one storage class to another based on predefined time intervals. For instance, you can set up a policy that transfers objects to a more cost-effective storage class once a specific period of inactivity has lapsed.
  • You might want to consider utilizing the Amazon S3 Intelligent-Tiering storage class. It allows automatically shifting objects between two access tiers (frequent and infrequent access) according to evolving access patterns. This feature can effectively trim down costs for you, eliminating the necessity for any manual handling.
  • The Amazon S3 Storage Class Analysis tool assists in analyzing access patterns of stored data and provides recommendations for cost optimization by selecting appropriate storage classes. This valuable information enables users to make informed decisions regarding data transition to different storage classes.
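
To make the lifecycle-policy idea concrete, here is a hedged boto3 sketch that moves aging objects to Standard-IA and then Glacier, and expires them after a year; the bucket name, prefix, and day counts are illustrative assumptions.

```python
# Sketch: apply an S3 lifecycle rule that tiers and eventually expires old objects.
# Bucket name, prefix, and transition windows are illustrative assumptions.
import boto3

s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="example-logs-bucket",                    # hypothetical bucket
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-old-logs",
                "Status": "Enabled",
                "Filter": {"Prefix": "logs/"},       # hypothetical prefix
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"},
                    {"Days": 90, "StorageClass": "GLACIER"},
                ],
                "Expiration": {"Days": 365},         # delete after a year
            }
        ]
    },
)
```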

Use Reserved Instances

With Reserved Instances (RIs), you can save significant money over on-demand instances. You'll receive lower hourly rates by committing to one- or three-year terms. To maximize savings while maintaining flexibility, analyze your long-term resource needs and purchase RIs strategically. You can significantly save on AWS costs by utilizing Reserved Instances in the following way.

  • Reserved Instances (RIs) are most suitable for workloads that exhibit steady and predictable usage patterns. In scenarios where your application consistently maintains a certain level of usage, RIs can provide substantial cost savings. 
  • It is advisable to assess your historical usage patterns to pinpoint instances that consistently remain active, and subsequently align them with appropriate Reserved Instances.
  • Leverage AWS tools such as AWS Cost Explorer to determine suitable instances for Reserved Instances (see the sketch after this list).
  • Make a selection between Standard RIs, which grant a discount in return for a commitment, or Convertible RIs, which offer greater flexibility to switch instance types within the same instance family.
  • When it comes to reserving instances, it is not necessary to reserve all of them. It is advisable to employ a combination of Reserved and On-Demand instances to ensure flexibility in accommodating fluctuating workloads. By reserving instances for steady-state workloads and utilizing On-Demand instances for dynamic or variable workloads, you can achieve an optimal balance in managing your workload requirements.
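
As a hedged example of using AWS tooling for this analysis, the boto3 Cost Explorer sketch below requests RI purchase recommendations for EC2; the term, payment option, and lookback window are illustrative assumptions.

```python
# Sketch: fetch Reserved Instance purchase recommendations from Cost Explorer.
# Term, payment option, and lookback period are illustrative assumptions.
import boto3

ce = boto3.client("ce", region_name="us-east-1")  # Cost Explorer endpoint

resp = ce.get_reservation_purchase_recommendation(
    Service="Amazon Elastic Compute Cloud - Compute",
    TermInYears="ONE_YEAR",
    PaymentOption="NO_UPFRONT",
    LookbackPeriodInDays="THIRTY_DAYS",
)

for rec in resp.get("Recommendations", []):
    for detail in rec.get("RecommendationDetails", []):
        instance = detail.get("InstanceDetails", {}).get("EC2InstanceDetails", {})
        print(instance.get("InstanceType"),
              detail.get("EstimatedMonthlySavingsAmount"))
```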

Use Spot Instances

With Spot Instances, you can utilize spare AWS capacity at a lower cost. These instances are well suited for fault-tolerant workloads that require flexibility. AWS may interrupt Spot Instances at short notice if the capacity is needed back. However, using them can save you substantial money, particularly in batch processing and testing environments.

Spot Instances are well-matched for fault-tolerant workloads that can recover gracefully from interruptions, such as stateless web servers, batch processing, or parallel processing jobs.

Utilize a combination of spot instances and auto-scaling groups to dynamically adapt your capacity according to demand. This approach guarantees the utilization of cost-effective Spot prices when surplus capacity is available.
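
One common pattern is to run a small On-Demand base and fill the remaining capacity with Spot through an Auto Scaling group's mixed instances policy. The hedged boto3 sketch below shows the idea; the group name, launch template, subnet, and percentages are illustrative assumptions.

```python
# Sketch: an Auto Scaling group with a small On-Demand base and Spot on top.
# Group name, launch template, subnet, and numbers are illustrative assumptions.
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")  # hypothetical region

autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="batch-workers",            # hypothetical group name
    MinSize=2,
    MaxSize=20,
    VPCZoneIdentifier="subnet-0123456789abcdef0",    # hypothetical subnet
    MixedInstancesPolicy={
        "LaunchTemplate": {
            "LaunchTemplateSpecification": {
                "LaunchTemplateName": "batch-template",  # hypothetical template
                "Version": "$Latest",
            },
            # Offer several interchangeable types to deepen the Spot pool.
            "Overrides": [
                {"InstanceType": "m5.large"},
                {"InstanceType": "m5a.large"},
            ],
        },
        "InstancesDistribution": {
            "OnDemandBaseCapacity": 2,                 # always-on baseline
            "OnDemandPercentageAboveBaseCapacity": 0,  # everything else on Spot
            "SpotAllocationStrategy": "capacity-optimized",
        },
    },
)
```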

Identify And Delete Orphaned Snapshots

An orphaned snapshot is one that is no longer associated with an existing Amazon Elastic Block Store (EBS) volume. AWS retains snapshots until they are explicitly deleted, so to optimize storage costs, regularly audit and delete snapshots whose source volumes no longer exist.

Follow the steps below to identify orphaned snapshots:

  • To access the Amazon EC2 console, visit https://console.aws.amazon.com/ec2/.
  • In the navigation pane, choose the "Snapshots" option located within the "Elastic Block Store" section.
  • Examine the list of snapshots and identify any that lack associated volumes.

Follow the steps below to delete orphaned snapshots:

  • Pick the orphaned snapshot(s).
  • Click on "Actions" and navigate to "Delete snapshot."
  • Proceed to confirm the deletion in the dialog box.

You can also automate the process (a hedged Lambda sketch follows this list):

  • With AWS Lambda, it is possible to develop a function that automates detecting and removing orphaned snapshots. This function can be scheduled to execute periodically. 
  • CloudWatch Events can initiate the Lambda function according to a predefined schedule or when specific events occur.
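
Here is a minimal sketch of such a Lambda handler, treating a snapshot as orphaned when its source volume no longer exists; the dry-run guard and the assumption that all snapshots fit in one API page are simplifications for illustration.

```python
# Sketch of a Lambda handler that deletes snapshots whose source volume is gone.
# DRY_RUN guards real deletion; real usage should paginate both API calls.
import boto3

DRY_RUN = True
ec2 = boto3.client("ec2")

def handler(event, context):
    snapshots = ec2.describe_snapshots(OwnerIds=["self"])["Snapshots"]
    existing = {v["VolumeId"] for v in ec2.describe_volumes()["Volumes"]}

    for snap in snapshots:
        vol = snap.get("VolumeId")
        if vol and vol not in existing:
            print(f"Orphaned snapshot {snap['SnapshotId']} (volume {vol} gone)")
            if not DRY_RUN:
                ec2.delete_snapshot(SnapshotId=snap["SnapshotId"])
```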

Before deleting any snapshot, it is imperative to ensure that it is not essential for backup or data recovery purposes. Consider adopting a tagging strategy to categorize your snapshots, indicating their intended use and the responsible individual or team.

Maximize AWS Potential And Gain A Competitive Edge

Effective cloud cost management goes beyond financial need; it's pivotal in unlocking AWS's full potential while retaining a competitive edge in a swiftly evolving tech landscape. Optimizing costs becomes a strategic necessity as organizations leverage AWS for innovation and efficiency. Achieving cost efficiency involves maximizing resource use, selecting services thoughtfully, and implementing robust monitoring, tagging, and governance practices.

AWS cost optimization practices ensure prudent financial management and sensible resource allocation, nurturing a resilient and sustainable cloud infrastructure. Regular assessments, fine-tuning, and adherence to AWS best practices are integral to this approach.

Experiencing low disk utilization or unexpected EBS cost hikes? Reach out to Lucidity for a demo and discover how our automation-driven solutions can uncover cost-saving opportunities for you.
