
Single-core memory bandwidth: Latency, Bandwidth, and Concurrency

John McCalpin

In my previous post, I reviewed historical data on single-core/single-thread memory bandwidth in multicore processors from Intel and AMD from 2010 to the present. The increase in single-core memory bandwidth has been rather slow overall, with Intel processors showing only about a 2x increase over the 13 years from 2010 to 2023.
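The three quantities in the title are tied together by Little's Law: sustained bandwidth equals the number of bytes in flight divided by the memory latency. As a back-of-the-envelope example with illustrative numbers (not taken from the post): at roughly 80 ns of memory latency, sustaining 20 GB/s from a single core requires about 20 GB/s × 80 ns ≈ 1600 bytes, i.e. roughly 25 outstanding 64-byte cache lines at all times, which is why the limited number of outstanding misses a core supports, rather than DRAM speed alone, tends to bound single-core bandwidth.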


USENIX LISA2021 Computing Performance: On the Horizon

Brendan Gregg

This was a chance to talk about other things I've been working on, such as the present and future of hardware performance. The video is on [youtube] and the slides are on [slideshare] or as a [PDF]. I work on many areas of performance, but recently I've had a lot of demand to talk about BPF. Ford, et al., “TCP


Amazon EC2 Cluster GPU Instances - All Things Distributed

All Things Distributed

By Werner Vogels on 14 November 2010 04:00 PM. For example, the most fundamental abstraction trade-off has always been latency versus throughput. The throughput of this pipeline is more important than the latency of the individual operations.
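A rough worked example of that trade-off (illustrative numbers, not from the post): in a ten-stage pipeline where each stage takes 1 ms, a single item still takes 10 ms end to end (latency), but once the pipeline is full a result completes every 1 ms, so throughput is about 1,000 items per second even though no individual operation became any faster.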


Expanding the Cloud - Cluster Compute Instances for Amazon EC2.

All Things Distributed

By Werner Vogels on 12 July 2010 05:00 PM. In particular, this has been true for applications based on algorithms - often MPI-based - that depend on frequent low-latency communication and/or require significant cross-sectional bandwidth.
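To make "frequent low-latency communication" concrete, here is a minimal MPI ping-pong sketch of the kind commonly used to measure point-to-point latency between two ranks; it is illustrative only, not code from the post, and assumes it is launched with at least two ranks (e.g. mpirun -np 2).

/* Minimal MPI ping-pong latency sketch (illustrative; not from the post).
 * Rank 0 sends a small message to rank 1 and waits for the reply;
 * the round-trip time divided by 2 approximates one-way latency. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int iters = 1000;
    char buf[8] = {0};

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < iters; i++) {
        if (rank == 0) {
            MPI_Send(buf, 8, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, 8, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, 8, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(buf, 8, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double t1 = MPI_Wtime();

    if (rank == 0)
        printf("one-way latency: %.2f us\n", 1e6 * (t1 - t0) / (2.0 * iters));

    MPI_Finalize();
    return 0;
}

The relevant comparison for this class of applications is how that per-message latency stacks up against the compute time between messages: when messages are small and frequent, interconnect latency rather than raw bandwidth dominates.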


The evolution of single-core bandwidth in multicore processors

John McCalpin

Looking at sustained single-core bandwidth for a kernel composed of 100% reads, the trends for a large set of high-end AMD and Intel processors are shown in a figure in the post. From 2010 to 2023, the sustainable single-core bandwidth increased by about 2x on Intel processors and about 5x on AMD processors.
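For reference, a minimal sketch of what a 100%-read kernel can look like is shown below; this is illustrative only and is not the kernel or methodology behind the measurements in the post.

/* Minimal sketch of a 100%-read bandwidth kernel (illustrative;
 * not the kernel used for the measurements in the post).
 * Sums a large array and reports single-thread read bandwidth in GB/s. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (1UL << 27)        /* 128 Mi doubles = 1 GiB, well beyond cache */

int main(void)
{
    double *a = malloc(N * sizeof(double));
    if (!a) return 1;
    for (size_t i = 0; i < N; i++) a[i] = 1.0;   /* touch pages before timing */

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    double sum = 0.0;
    for (size_t i = 0; i < N; i++) sum += a[i];  /* read-only memory traffic */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) * 1e-9;
    double gbytes = (double)N * sizeof(double) / 1e9;
    /* print sum so the read loop is not optimized away */
    printf("sum=%f  read bandwidth: %.2f GB/s\n", sum, gbytes / secs);
    free(a);
    return 0;
}

Note that this naive summation carries a serial dependence, so the compiler generally has to vectorize and unroll it into multiple partial sums before a single core can keep enough cache-line requests outstanding to approach its sustainable read bandwidth.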


USENIX SREcon APAC 2022: Computing Performance: What's on the Horizon

Brendan Gregg

My personal opinion is that I don't see a widespread need for more capacity given horizontal scaling and servers that can already exceed 1 Tbyte of DRAM; bandwidth is also helpful, but I'd be concerned about the increased latency for adding a hop to more memory. Ford, et al., “TCP