Zero Configuration Service Mesh with On-Demand Cluster Discovery

The Netflix TechBlog

A brief history of IPC at Netflix. Netflix was early to the cloud, particularly for large-scale companies: we began the migration in 2008, and by 2010, Netflix streaming was fully run on AWS. Today we have a wealth of tools, both OSS and commercial, all designed for cloud-native environments.

Amazon EC2 Cluster GPU Instances - All Things Distributed

All Things Distributed

By Werner Vogels on 14 November 2010 04:00 PM. For example, the most fundamental abstraction trade-off has always been latency versus throughput. These trade-offs have even impacted the way the lowest level building blocks in our computer architectures have been designed.
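
The trade-off is easy to see in a toy batching model (the numbers below are hypothetical, not taken from the post): amortizing a fixed per-request cost over larger batches raises throughput, but each item then waits longer before its batch is dispatched.

```python
# Toy model of the latency-versus-throughput trade-off via request batching.
# Numbers are hypothetical: each request costs a fixed 1 ms (e.g. one network
# round trip) plus 0.01 ms of work per item it carries.
REQUEST_OVERHEAD_MS = 1.0   # fixed cost paid once per request
PER_ITEM_MS = 0.01          # marginal cost per item inside a request

def throughput_and_latency(batch_size: int) -> tuple[float, float]:
    """Sustainable throughput (items/s) and average added latency (ms)."""
    request_ms = REQUEST_OVERHEAD_MS + batch_size * PER_ITEM_MS
    throughput = batch_size / request_ms * 1000.0
    # Batches are dispatched back to back, so each one collects items for
    # roughly one request duration: an item waits on average half a window
    # for its batch to close, then the full request time of its own batch.
    avg_latency_ms = request_ms / 2 + request_ms
    return throughput, avg_latency_ms

for b in (1, 10, 100, 1000):
    tput, lat = throughput_and_latency(b)
    print(f"batch={b:5d}  throughput={tput:9.0f} items/s  avg latency={lat:6.2f} ms")
```

With these assumed costs, going from a batch of 1 to a batch of 1000 buys roughly 90x the throughput at the price of roughly 10x the added latency per item.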

USENIX LISA2021 Computing Performance: On the Horizon

Brendan Gregg

## References

I've reproduced the talk references below, so you can click on links:

- [Gregg 08] Brendan Gregg, “ZFS L2ARC,” [link] Jul 2008
- [Gregg 10] Brendan Gregg, “Visualizations for Performance Analysis (and More),” [link] 2010
- [Greenberg 11] Marc Greenberg, “DDR4: Double the speed, double the latency? Ford, et al., “TCP

The evolution of single-core bandwidth in multicore processors

John McCalpin

Looking at sustained single-core bandwidth for a kernel composed of 100% reads, the trends for a large set of high-end AMD and Intel processors are shown in the figure below. From 2010 to 2023, the sustainable single-core bandwidth increased by about 2x on Intel processors and about 5x on AMD processors.
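
A rough way to get a feel for such a measurement (this is my own sketch, not McCalpin's methodology; the 1 GiB array size, repeat count, and use of NumPy are assumptions): stream once over an array far larger than the caches on a single core and divide the bytes read by the elapsed time.

```python
# Rough single-core read-bandwidth estimate: one pass of 100% reads over a
# large array, bytes read divided by elapsed time. Array size and repeat
# count are arbitrary choices, not taken from the article.
import time
import numpy as np

N = 128 * 1024 * 1024              # 128M doubles = 1 GiB, far larger than any cache
a = np.ones(N, dtype=np.float64)

best_gbps = 0.0
for _ in range(5):                 # take the best of a few runs to reduce noise
    t0 = time.perf_counter()
    total = a.sum()                # a single read-only sweep over the array
    dt = time.perf_counter() - t0
    best_gbps = max(best_gbps, a.nbytes / dt / 1e9)

print(f"checksum {total:.1f}, ~{best_gbps:.1f} GB/s sustained single-core read bandwidth")
```

NumPy's reduction is single-threaded, so the result reflects one core's sustainable read bandwidth rather than the whole socket's, which is the quantity the article tracks over time.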

Expanding the Cloud with DNS - Introducing Amazon Route 53 - All Things Distributed

All Things Distributed

By Werner Vogels on 05 December 2010 02:00 PM. We have designed Route 53 to propagate updates very quickly and give the customer the tools to find out when all changes have been propagated. This achieves very low latency for queries, which is crucial for the overall performance of internet applications.
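
A sketch of what using those tools can look like through boto3 (the hosted-zone ID and record values below are placeholders, and this is just one way to do it): submit a record change, then wait on its change ID until Route 53 reports it INSYNC on all authoritative name servers.

```python
# Sketch: submit a DNS record change to Route 53 and wait until it has
# propagated. The zone ID and record values are placeholders; the waiter
# polls GetChange until the change status flips from PENDING to INSYNC.
import boto3

route53 = boto3.client("route53")

resp = route53.change_resource_record_sets(
    HostedZoneId="Z0000000EXAMPLE",          # hypothetical hosted zone
    ChangeBatch={
        "Comment": "point www at the new load balancer",
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "www.example.com.",
                "Type": "A",
                "TTL": 60,
                "ResourceRecords": [{"Value": "203.0.113.10"}],
            },
        }],
    },
)

change_id = resp["ChangeInfo"]["Id"]          # status starts out as PENDING
route53.get_waiter("resource_record_sets_changed").wait(Id=change_id)
print(f"change {change_id} is INSYNC on all Route 53 name servers")
```

The INSYNC status is the "all changes have been propagated" signal the excerpt describes: once GetChange reports it, every Route 53 name server is answering with the updated record.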

Introducing the AWS South America (Sao Paulo) Region - All Things Distributed

All Things Distributed

This new Region has been highly requested by companies worldwide, and it provides low-latency access to AWS services for those who target customers in South America. The new Sao Paulo Region provides lower latency to South America, which enables AWS customers to deliver higher-performance services to their South American end-users.

USENIX SREcon APAC 2022: Computing Performance: What's on the Horizon

Brendan Gregg

My personal opinion is that I don't see a widespread need for more capacity given horizontal scaling and servers that can already exceed 1 Tbyte of DRAM; bandwidth is also helpful, but I'd be concerned about the increased latency for adding a hop to more memory.