It is an almost impossible task to try to write about Amazon Web Services (AWS) without being either unbelievably shallow or unbelievably geeky. The breadth and depth of AWS’s functionality is so extensive that even its experts specialise in certain elements. So, what’s the point anyway?
Well, I am going to focus on the technicalities only to the extent required to explain the business model innovation behind AWS, similar to how we looked at the business models of the Kindle platform, the Alexa platform and Prime Video. Unlike those, however, AWS is not fuelled by the platform business model (which I have covered so extensively) but by the Infrastructure-as-a-Service (IaaS) model, which is similar to the better-known Software-as-a-Service (SaaS) business model.
(1) Users: Everybody
Netflix: 100% on AWS
Netflix is itself one of the most envied innovators. It speaks volumes about AWS’s capabilities that Netflix is fully hosted on AWS. (To be entirely correct, Netflix has recently started using Google Cloud for some new features at small scale.)
In early 2016, Netflix reported having completed its move to the cloud. We can learn a lot about AWS from Netflix’s Vice President of Cloud and Platform Engineering, who wrote about the migration:
- “Our journey to the cloud at Netflix began in August of 2008, when we experienced a major database corruption and for three days could not ship DVDs to our members. That is when we realized that we had to move […] towards highly reliable, horizontally scalable, distributed systems in the cloud […]
- We chose Amazon Web Services (AWS) as our cloud provider because it provided us with the greatest scale and the broadest set of services and features […]
- The Netflix product itself has continued to evolve rapidly, incorporating many new resource-hungry features and relying on ever-growing volumes of data. Supporting such rapid growth would have been extremely difficult out of our own data centers; we simply could not have racked the servers fast enough. Elasticity of the cloud allows us to add thousands of virtual servers and petabytes of storage within minutes, making such an expansion possible […]
- We rely on the cloud for all of our scalable computing and storage needs — our business logic, distributed databases and big data processing/analytics, recommendations, transcoding, and hundreds of other functions that make up the Netflix application […]
- The cloud also allowed us to significantly increase our service availability […] it is possible to survive failures in the cloud infrastructure and within our own systems without impacting the member experience […]
- Cost reduction was not the main reason we decided to move to the cloud. However, our cloud costs per streaming start ended up being a fraction of those in the data center — a welcome side benefit. This is possible due to the elasticity of the cloud, enabling us to continuously optimize instance type mix and to grow and shrink our footprint near-instantaneously without the need to maintain large capacity buffers. We can also benefit from the economies of scale that are only possible in a large cloud ecosystem […]
- Arguably, the easiest way to move to the cloud is to forklift all of the systems, unchanged, out of the data center and drop them in AWS. But in doing so, you end up moving all the problems and limitations of the data center along with it. Instead, we chose the cloud-native approach, rebuilding virtually all of our technology and fundamentally changing the way we operate the company.”
Everybody uses AWS
The list of AWS customers is long and full of innovative companies. According to one market intelligence firm, these are the (likely) largest customers of AWS and their annual spend:
- Netflix – $19 million
- Twitch – $15 million
- LinkedIn – $13 million
- Facebook – $11 million
- Turner Broadcasting – $10 million
- BBC – $9 million
- Baidu – $9 million
- ESPN – $8 million
- Adobe – $8 million
- Twitter – $7 million
A who-is-who of other large companies using AWS can be found here (basically any large company you can name).
An estimated 2.5+ million companies use AWS in total.
(2) The platform – an overview
The building blocks (=services)
One of the strongest selling points for AWS is its extensive set of services (I call them building blocks), such as computing, job queuing, database services, storage, email and notification services and a lot more. The main service categories are:
- Compute Services
- Storage
- Database
- Migration
- Networking and Content Delivery
- Developer Tools
- Management Tools
- Security, Identity, and Compliance
- Analytics
- Artificial Intelligence
- Mobile Services
- Application Services
- Messaging
- Business Productivity
- Desktop & App Streaming
- Internet of Things (IoT)
- Game Development
Another way to represent the services is in layers, distinguishing between the physical layer, a layer of foundational services, application services and management tools.

Architectures for common problems
How the individual blocks can be put together to solve common problems is a matter of architectural design. The diagram below shows Amazon’s recommended architecture for media sharing functionality that could, for example, be part of a social media platform.
With this massive set of services comes one of AWS’s challenges: helping customers build the best solution for a given problem. Any complex as-a-service provider needs to support its customers here. These are just a few of the approaches Amazon uses:
- Extensive online documentation of each service and API
- Videos, webinars, tutorials, blogs, helpful evangelists and more
- AWS recommends architectures for various types of common problems
- A community of practices where others can showcase their solutions to practical problems (“This is my architecture”)
- GitHub (a large developer community) listing tried-and-tested AWS solutions and ongoing projects
- The AWS partner network of service providers, consultants, etc
Learn from the best companies & develop innovation ideas you can be proud of!
As-a-Service: IaaS, PaaS, SaaS
There are three commonly differentiated types of as-a-service architecture.

Most of AWS’s services fall into the Infrastructure-as-a-Service category.
- On-premises: this is the traditional architecture where the in-house team manages everything (though there are again differences depending on whether servers are run in-house or in rented data centres). Most AWS clients will have a hybrid solution of some on-premises infrastructure and AWS
- IaaS (Infrastructure as a Service): This type of service is the one closest to the hardware level without giving access to the hardware itself. It gives access to the operating system layer (on AWS you can choose between Windows and Linux), and you take care of the layers between the OS and your applications
- PaaS (Platform as a Service): With PaaS, users can focus on the application and data layers only. AWS Elastic Beanstalk is a service that falls into this category. It allows users to easily deploy and manage their applications on AWS without worrying about the layers below
- SaaS (Software as a Service): SaaS has become increasingly popular, e.g. Microsoft Office 365 is the SaaS version of Microsoft’s traditional Office applications. Other well-known examples are Google Apps, Slack, Zendesk, Dropbox, Salesforce, etc. AWS itself is not a SaaS offering, but it provides solutions for SaaS providers to build their services on AWS

I will cover Software-as-a-Service in more depth in a number of articles in the near future, as it has become an important part of the IT landscape.
Use cases
What do you do once your services cover a solid set of foundational elements and a range of typical application-layer elements? You start going into verticals. That is what AWS has done in recent years.
Example: Vehicle-to-vehicle communication
An innovative way to increase car safety is vehicle-to-vehicle communication. It could be used to “look around the corner” or, as in the AWS example below, to warn other cars of rainy conditions.

The desired functionality can be built by combining a number of AWS services:
- Required computing capacity can be scaled up and down quickly: e.g. more cars transmitting and receiving data in peak hours does not have to translate into owning servers sized for peak demand. AWS has a few solutions for these situations, as we will discuss below
- The same holds true for increasing the number of cars that are part of the service, which does not have to translate into acquiring new physical servers
- Storage requirements scale up as more cars join and as more data gets collected over time (AWS S3)
- Analytical computing power through scalable machine learning modules to steadily improve accuracy over time (SageMaker)
- Routing of data securely between data sources and consumers as well as defining rules to control the connected devices (AWS IoT Core)
- And much more
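To make this a little more concrete, here is a minimal sketch (in Python with the boto3 SDK) of how a backend could combine two of the building blocks above: publishing a weather warning through AWS IoT Core and archiving raw telemetry to S3 for later analysis (e.g. as training data for a SageMaker model). This is not Amazon’s reference implementation; the topic layout, bucket name and payload fields are hypothetical.
```python
# Minimal sketch, assuming boto3 is installed and configured with valid AWS credentials.
# Topic name, bucket name and payload fields are hypothetical.
import json
import boto3

iot = boto3.client("iot-data")   # publishes messages routed by AWS IoT Core
s3 = boto3.client("s3")          # durable storage for collected telemetry

def broadcast_rain_warning(region_id: str, intensity: float) -> None:
    """Publish a warning that subscribed vehicles in a region receive."""
    iot.publish(
        topic=f"vehicles/{region_id}/alerts",   # hypothetical topic layout
        qos=1,
        payload=json.dumps({"type": "rain", "intensity": intensity}),
    )

def archive_telemetry(vehicle_id: str, telemetry: dict) -> None:
    """Persist raw telemetry to S3 for later analytics/model training."""
    s3.put_object(
        Bucket="example-vehicle-telemetry",     # hypothetical bucket
        Key=f"raw/{vehicle_id}.json",
        Body=json.dumps(telemetry).encode("utf-8"),
    )
```
The point is not the specific calls but the pattern: the heavy lifting (message routing, durable storage, model training) is rented as services rather than built and operated in-house.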
With over 2.5 million customers, the list of use cases is almost endless (you can scroll for a very long time through this list of AWS use cases).
Personally, I find it inspiring to see the opportunities that have been created for developers within a mere 10 years. Thinking back to my own days of developing, it is exciting how far things have come in such a short time frame.
It ultimately helps firms focus more on the customer’s needs than on the IT system’s needs!
Benefits for customers
AWS could not have become the success that it is if it weren’t for its tangible and immediate customer benefits. Let’s look at the advertised benefits of using AWS (we will turn to the challenges further below):
- Trading capex for variable opex: Instead of building one’s own data centres and incurring large upfront cash outflows, AWS promises that customers only pay for the computing resources they actually use
- Economies of scale: AWS has achieved a scale that no individual company could reach by itself. This allows AWS to offer its services at lower costs than firms could achieve with an in-house solution
- Flexible capacity: It can be difficult for firms to predict how much computing or storage capacity they will need for new services. The risks are over-provisioning (sitting on idle capacity) or under-provisioning (e.g. providing a poor customer experience). AWS provides high flexibility to scale computing capacity up or down within minutes (see the sketch after this list)
- Agility: Cloud computing services simplify the development of new services/offerings by removing the need to worry about lower-level infrastructure considerations
- Focus on differentiating projects: By reducing the IT infrastructure workload, project teams can focus on the differentiating parts of a new service/offering
- Global reach: AWS, with its servers across the globe, allows customers to offer their services worldwide without incurring the latency losses of connecting to servers in a single (on-premises) location
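As a small illustration of the “flexible capacity” point above, here is a hedged sketch of resizing an EC2 Auto Scaling group with boto3. The group name and instance count are made up; in practice most teams would let scaling policies react to demand metrics automatically rather than call this by hand.
```python
# Minimal sketch: adjust fleet size programmatically instead of owning peak-sized hardware.
import boto3

autoscaling = boto3.client("autoscaling")

def scale_fleet(group_name: str, desired: int) -> None:
    """Change the number of running instances in an Auto Scaling group."""
    autoscaling.set_desired_capacity(
        AutoScalingGroupName=group_name,
        DesiredCapacity=desired,
        HonorCooldown=False,   # apply the change immediately
    )

# e.g. ramp up for an evening peak, then back down overnight
scale_fleet("example-streaming-workers", desired=40)   # hypothetical group name
```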

Cost control
Costs will be one of the most important factors in evaluating cloud services. Any as-a-service platform needs to build a strong case. Here is what AWS does:
- AWS offers a total cost of ownership (TCO) tool so that customers can build an investment case. (To get a good comparison, prospective clients need to estimate the capacity they will be using; not having to do so was one of the advantages stated by Amazon)
- In an example that I have run, AWS generated a comprehensive report (since it’s 2.2 MB I won’t link to it here). It is a useful approach for any as-a-service provider to arm prospective customers with numbers so they can convince decision makers within their organisation
- SaaS, IaaS and PaaS providers need to be open about costs. Opaqueness about costs will put off a lot of prospects
- AWS has a lot of tools and even APIs for cost tracking/control as well as budgeting, see some of the links below and the sketch that follows
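As an example of those cost APIs, the sketch below queries the AWS Cost Explorer API via boto3 for monthly spend broken down by service. The date range is a placeholder; this is just one possible way to feed AWS costs into internal reporting.
```python
# Minimal sketch: pull month-by-month AWS spend grouped by service via Cost Explorer.
import boto3

ce = boto3.client("ce")  # AWS Cost Explorer

response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2018-01-01", "End": "2018-04-01"},  # placeholder dates
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
)

for period in response["ResultsByTime"]:
    print(period["TimePeriod"]["Start"])
    for group in period["Groups"]:
        service = group["Keys"][0]
        cost = float(group["Metrics"]["UnblendedCost"]["Amount"])
        print(f"  {service}: ${cost:,.2f}")
```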
Challenges
Here are some common challenges of AWS:
- Lock-in effects: bringing things back in-house (if that decision is ever made) or even migrating to a different cloud service provider comes with cost and time efforts. Therefore, there is some exposure to price increases over time
- Cost control: Many companies seem to apply more stringent cost control requirements to outsourced services than to in-house work (possibly due to the sunk cost fallacy). AWS offers a suite of cost control tools and even a Budgets API to help its customers. Despite all this, understanding AWS bills is frequently mentioned as a challenging task for customers
- Keeping up: The consultancy Gartner points out that keeping up with the changes, enhancements and best practices in AWS requires constant effort and that it “may challenge even highly agile, expert IT organizations, including AWS partners”
- Other: some sources mention security, privacy and availability/downtime as disadvantages. Others oppose this view and state that AWS will beat most on-premises infrastructures on these dimensions. The differing opinions may go back to what Netflix pointed out: “forklifting” existing applications into the cloud may lead to inefficient implementations, whereas developing a cloud-native approach is the best way to take advantage of the potential benefits
Pricing models
AWS uses a number of pricing models:
- Pay-as-you-go (On-demand): This is the most flexible option but also the most expensive. It allows ramping capacity up or down as required without any forward planning. It avoids the risks of buying too much capacity that remains unused or of under-provisioning
- Reserved Instances (RIs): Think of RIs as reserved computing or storage capacity. By reserving capacity upfront, the user can save somewhere in the vicinity of 35-75%. The savings depend on a number of factors, including how much the user is willing to pay upfront. This pricing model is available for a few core offerings only
- Volume discounts: Amazon offers tiered pricing that reduces the price per unit depending on the purchase volume. This, too, applies only to certain services
- Spot pricing: Discounts of up to 90% off the on-demand price for purchasing on the spot (similar to spot prices on commodity markets). This is how Amazon sells unused capacity. However, AWS reserves the right to take this type of computing capacity away at short notice (2 minutes)
While these principles sound straightforward, a look into the details reveals an amazing complexity behind these few pricing principles. The exact price depends on many factors, including the respective service, the region and more.
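A back-of-the-envelope comparison illustrates why the choice of pricing model matters. The hourly rate below is hypothetical and the discounts are simply taken from the ranges mentioned above; actual prices vary by service, instance type and region.
```python
# Rough comparison of the pricing models using hypothetical numbers.
HOURS_PER_YEAR = 24 * 365

on_demand_rate = 0.10                        # $/hour, hypothetical
reserved_rate = on_demand_rate * (1 - 0.55)  # assuming a mid-range RI discount
spot_rate = on_demand_rate * (1 - 0.90)      # up to 90% off, but interruptible

for label, rate in [("On-demand", on_demand_rate),
                    ("Reserved", reserved_rate),
                    ("Spot", spot_rate)]:
    print(f"{label:>10}: ${rate * HOURS_PER_YEAR:,.0f} per instance-year")
```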
We are not just scratching the surface. But we are also not going down rabbit holes. We offer valuable innovation knowledge to fuel your ideas!
Business model
One of the most important customer value propositions is that AWS allows its customers high flexibility in capacity usage. However, if that flexibility were unconstrained, it would simply transfer the risk of getting capacity wrong from the customer to Amazon. Think of the largest chunk of internet traffic: streaming. Like many other use cases, streaming is not spread evenly throughout the day.
Computing capacity is a perishable commodity like hotel rooms, aircraft seats or food. When unused, it expires. Idle processor or memory capacity makes no money to offset the initial capital costs or the ongoing operating and overhead costs. In a pure on-demand model, AWS would risk demand spikes, bullwhip effects and underutilised capacity in the long term.
Amazon’s pricing models aim to incentivise customer behaviour in a way that utilises Amazon’s infrastructure evenly while maximising revenue. With this in mind, here are a few more details on the above-mentioned pricing models:
Utilising capacity
- Pay-as-you-go gives customers the most flexibility, but they pay for it with the highest rate compared to the other pricing models
- Reserved Instances (RIs) (= reserved capacity) sound straightforward at first, but as you look into the details there is considerable complexity behind them in order to cater for different user needs
- RIs can be reserved for 1 or 3 years. They are most useful for predictable, steady-state usage. Some users may have fully predictable capacity needs; others may at least have a predictable base capacity for which they can use RIs
- Scheduled RIs can be useful in this context: customers can commit to using a certain amount of capacity at a given time of day and/or day of week. E.g. Netflix could reserve a base capacity via RIs and then add more capacity via scheduled RIs for predictable peak hours (e.g. weekday evenings, and more again for weekend evenings, etc.). It is a matter of capturing good usage data
- Convertible RIs can be converted into other RIs (i.e. other computing or storage capacity) of equal or higher value. The discounts on these types of RIs still go up to 54%
- RIs can be paid fully upfront to get the maximum discount or in monthly instalments with no further discounts
- RIs can be acquired by third parties (AWS partners) and then sold on to end customers, mimicking the wholesaler approach known from many other industries
- Spot Instances are an option to get even bigger discounts of up to 90% off on-demand prices. Amazon reserves the right to take the computing capacity away from the customer with as little as 2 minutes’ notice. The customer can define the interruption behaviour
- Spot Instances do not use a spot price as you know it from commodity spot markets, where the price is determined in real time based on bids and asks. Rather, AWS Spot Instance prices are still set by Amazon based on longer-term supply and demand patterns, differentiated by region, computing power, etc.
With all of the above, AWS can incentivise customers to use AWS capacity in a way that prevents Amazon from sitting on excessive unused capacity itself, which would unravel the whole business model.
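To illustrate the blended-capacity idea with rough numbers: assume a fleet that needs a base load, a predictable evening peak and an occasional spike. All rates and instance counts below are hypothetical; the point is simply that covering the predictable portions with (scheduled) RIs brings the peak-hour cost well below an all-on-demand setup.
```python
# Purely illustrative sketch of splitting capacity across pricing models.
RI_RATE = 0.045         # $/hour, effective rate of a standard RI (hypothetical)
SCHEDULED_RATE = 0.060  # $/hour during the reserved schedule (hypothetical)
ON_DEMAND_RATE = 0.100  # $/hour (hypothetical)

base, peak_extra, spike_extra = 100, 40, 10   # instances needed in a peak hour

blended = base * RI_RATE + peak_extra * SCHEDULED_RATE + spike_extra * ON_DEMAND_RATE
all_on_demand = (base + peak_extra + spike_extra) * ON_DEMAND_RATE

print(f"peak-hour cost, blended:   ${blended:.2f}")
print(f"peak-hour cost, on-demand: ${all_on_demand:.2f}")
```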
The strategic view
Amazon is often described as an economies-of-scale company that is so successful and cost-effective because it scales everything up ad infinitum (or at least more than others). In reality, however, unit costs don’t fall asymptotically as you scale up. They reach an optimal point and then start rising again, like a bathtub curve. If you scale beyond good utilisation, you have spent capex at low ROIC and incur unnecessarily high ongoing maintenance costs.
Thus, economies of scale require thoughtful management: healthy growth in new customers (to cater for churn and then some), useful functionality for existing customers so they expand their solutions, cost management tools, incentives to achieve optimal utilisation, and more.

AWS is one of the leading providers (if not the thought leader per se). Among its strategic success factors:
- Expanding into new markets (industry verticals as well as new functionality) via new services and acquisitions
- Rapidly increasing the portfolio of services
- Low risk, with high security, privacy and reliability standards and a high likelihood of still being around in another 10 years
- A strong ecosystem of AWS partners for various services, such as consulting, development, integration and support
AWS started as a way to manage Amazon’s in-house IT. Amazon then opened it up to external customers. This is a very similar pattern to Fulfilment by Amazon and Shipping with Amazon. Within 10 years, AWS has grown into one of Amazon’s biggest revenue streams and its most profitable one. Not many (if anyone) would have foreseen this level of success. Whatever you think of Amazon, this is inspiring for innovators.
Stay tuned for our future articles!
This article by Murat Uenlue is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.