The Multicore Era Over the past ~15 years, server processors from Intel and AMD have evolved from the early quad-core parts to the current monsters with over 50 cores per socket. The example below is for a 2005-era processor with 60 ns memory latency and 6.4 GB/s of memory bandwidth: to sustain full bandwidth, we need 6.4 GB/s × 60 ns = 384 bytes in flight, i.e. 384 / 64 = 6 concurrent cache-line transfers.
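As a quick check of that arithmetic, here is the Little's law calculation in Python (the 64-byte cache-line size is the usual x86 value; a back-of-the-envelope sketch, not a benchmark):

```python
# Little's law for memory: bytes in flight = bandwidth x latency.
LATENCY_S = 60e-9        # 60 ns memory latency, from the example above
BANDWIDTH_BPS = 6.4e9    # 6.4 GB/s memory bandwidth
CACHE_LINE_BYTES = 64    # typical x86 cache-line size

bytes_in_flight = BANDWIDTH_BPS * LATENCY_S            # 384 bytes
lines_in_flight = bytes_in_flight / CACHE_LINE_BYTES   # 6 cache lines

print(f"{bytes_in_flight:.0f} bytes in flight = "
      f"{lines_in_flight:.0f} concurrent cache-line transfers")
```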
A brief history of IPC at Netflix Netflix was early to the cloud, particularly for large-scale companies: we began the migration in 2008, and by 2010 Netflix streaming ran fully on AWS. In 2010, however, nearly none of today's cloud-native tooling existed: the CNCF wasn't formed until 2015!
I summarized these topics and more as a plenary conference talk, including my own predictions (as a senior performance engineer) for the future of computing performance, with a focus on back-end servers.
By Werner Vogels on 05 December 2010 02:00 PM. It would not be the first time a customer thought their EC2 instance was down when in reality some name server somewhere was not functioning correctly. There are two main types of DNS servers: authoritative servers and caching resolvers.
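A minimal stdlib-only sketch of that diagnosis, separating "resolution failed" from "instance unreachable" (the hostname and port are placeholders):

```python
import socket

HOST = "ec2-203-0-113-10.compute-1.amazonaws.com"  # hypothetical instance name
PORT = 22

try:
    # This exercises the caching-resolver chain on the client side.
    addr = socket.gethostbyname(HOST)
except socket.gaierror as exc:
    print(f"DNS resolution failed ({exc}); the instance itself may be fine")
else:
    try:
        socket.create_connection((addr, PORT), timeout=5).close()
        print(f"{HOST} resolved to {addr} and accepts connections")
    except OSError as exc:
        print(f"{HOST} resolved to {addr} but the connection failed: {exc}")
```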
By Werner Vogels on 14 November 2010 04:00 PM. For example, the most fundamental abstraction trade-off has always been latency versus throughput: in a batch pipeline, the throughput of the pipeline is more important than the latency of the individual operations.
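A toy illustration of that trade-off (the costs are made up; the point is that batching amortizes fixed overhead, raising throughput while each item waits on its whole batch):

```python
import time

def process(batch):
    # Hypothetical operation: fixed overhead plus a small per-item cost.
    time.sleep(0.001 + 0.0001 * len(batch))

def throughput(items, batch_size):
    start = time.perf_counter()
    for i in range(0, len(items), batch_size):
        process(items[i:i + batch_size])
    return len(items) / (time.perf_counter() - start)  # items per second

items = list(range(1_000))
for size in (1, 10, 100):
    print(f"batch={size:>3}: {throughput(items, size):,.0f} items/s")
```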
This new Region has been highly requested by companies worldwide: the new Sao Paulo Region provides low-latency access to AWS services for customers who target South America, enabling them to deliver higher-performance services to their South American end users.
My personal opinion is that I don't see a widespread need for more capacity, given horizontal scaling and servers that can already exceed 1 TB of DRAM; more bandwidth is also helpful, but I'd be concerned about the latency added by an extra hop to more memory.
By Werner Vogels on 12 July 2010 05:00 PM. In particular, this has been true for applications based on algorithms, often MPI-based, that depend on frequent low-latency communication and/or require significant cross-sectional bandwidth.
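The canonical way to measure that communication latency is an MPI ping-pong; a minimal sketch with mpi4py (assumed installed; run under mpirun with two ranks):

```python
# Run with: mpirun -n 2 python pingpong.py
import time
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
PAYLOAD = b"x" * 8
ROUNDS = 10_000

comm.Barrier()
start = time.perf_counter()
for _ in range(ROUNDS):
    if rank == 0:
        comm.send(PAYLOAD, dest=1)
        comm.recv(source=1)
    else:
        comm.recv(source=0)
        comm.send(PAYLOAD, dest=0)
elapsed = time.perf_counter() - start

if rank == 0:
    # Each round is one full round trip; half of that is the one-way latency.
    print(f"one-way latency ~ {elapsed / ROUNDS / 2 * 1e6:.1f} us")
```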
Japanese companies and consumers are accustomed to the low-latency, high-speed networking available between their businesses, residences, and mobile devices. The advanced Asia-Pacific network infrastructure also makes the AWS Tokyo Region a viable low-latency option for customers in South Korea.
This new Region consists of multiple Availability Zones and provides low-latency access to AWS services from, for example, the Bay Area. As we announced earlier this month, a Region with multiple Availability Zones will come online in Singapore in the first half of 2010, with other Regions in Asia to follow later in 2010.
By Werner Vogels on 19 November 2010 07:51 AM. The TOP500 list for November 2010 was released, and a cluster built on Amazon EC2 Cluster Compute Instances came in at #231. Understanding Throughput-Oriented Architectures: a background article in CACM on massively parallel, throughput- versus latency-oriented architectures.
By Werner Vogels on 28 April 2010 11:00 AM. There are four main reasons to do so. Performance: for many applications and services, data-access latency to end users is important, and the new Singapore Region offers customers in APAC lower-latency access to AWS services.
As part of that process, we also realized that there were a number of latency-sensitive or location-specific use cases, such as Hadoop, HPC, and testing, that would be ideal for Spot.
By Werner Vogels on 24 February 2010 07:00 AM. Achieving strict consistency can come at a cost in update or read latency, and may result in lower throughput: eventually consistent reads offer the lowest read latency, while strongly consistent reads come with higher read latency.
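The post concerns SimpleDB, but the same per-read choice survives in DynamoDB's ConsistentRead flag; a hedged sketch with boto3 (table and key names are hypothetical):

```python
import boto3

table = boto3.resource("dynamodb").Table("orders")  # hypothetical table

# Eventually consistent read (the default): lowest latency, may be stale.
fast = table.get_item(Key={"order_id": "123"})

# Strongly consistent read: reflects all prior successful writes, at
# higher read latency and double the read-capacity cost.
fresh = table.get_item(Key={"order_id": "123"}, ConsistentRead=True)
```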
There are different considerations when deciding where to allocate resources; latency and cost are the two obvious ones, but compliance sometimes plays an important role as well.
With Helsinki being the capital and most populous municipality of Finland, it makes for a great edge-server location. Although the two countries are relatively close to one another, they are separated by a distance of approximately 500 km, which adds up in terms of latency, as the arithmetic below shows.
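A back-of-the-envelope check of why 500 km matters (assuming light in fiber travels at roughly two-thirds of c, about 200,000 km/s):

```python
DISTANCE_KM = 500
FIBER_KM_PER_MS = 200  # light in fiber: ~200,000 km/s

one_way_ms = DISTANCE_KM / FIBER_KM_PER_MS  # 2.5 ms
rtt_ms = 2 * one_way_ms                     # 5 ms floor, before routing overhead

print(f"one-way ~{one_way_ms:.1f} ms, round trip ~{rtt_ms:.1f} ms minimum")
```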
Sadly, data on latency is harder to come by, even from Google's perch, so progress there is somewhat more difficult to judge. We only need to hold the line on script bloat for a few years for devices and networks to overtake the extreme, unconscionable excesses of the 2010s. Time (and chips) can heal these wounds if we only let it.