
Single-core memory bandwidth: Latency, Bandwidth, and Concurrency

John McCalpin

Understanding sustained memory bandwidth in these systems starts with assuming 100% utilization and then reviewing the factors that get in the way. This requires a completely different approach to modeling the memory system: one based on Little's Law from queueing theory.

Latency 71
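
The Little's Law framing the excerpt refers to ties the three quantities in the title together directly. As a rough sketch (the 64-byte line size and ~80 ns latency below are assumed example figures, not numbers from the article):

\[
\text{Concurrency} = \text{Latency} \times \text{Bandwidth}
\quad\Longrightarrow\quad
\text{Bandwidth} = \frac{\text{Concurrency}}{\text{Latency}}
\]

For example, a single core that can keep 10 cache-line misses of 64 bytes in flight against an ~80 ns memory latency can sustain at most \(640\,\text{B} / 80\,\text{ns} = 8\,\text{GB/s}\), no matter how much aggregate bandwidth the DRAM channels could deliver.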

No Server Required - Jekyll & Amazon S3 - All Things Distributed

All Things Distributed

Werner Vogels' weblog on building scalable and robust distributed systems. Amazon S3 is much more than just storage; the network and distributed systems infrastructure that ensures content can be served fast and at high rates, without customers impacting each other, is amazing.

Servers 111


Back-to-Basics Weekend Reading - A Decomposition Storage Model

All Things Distributed

Not everybody agreed that the "N-ary Storage Model" (NSM) was the best approach for all workloads, but it stayed dominant until hardware constraints, especially on caches, forced the community to revisit some of the alternatives. The first practical modern implementation is probably C-Store by Stonebraker et al.

Storage 72
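
A minimal sketch of the layout contrast behind the Decomposition Storage Model (DSM) versus NSM discussion; the table, field names, and sizes here are illustrative, not from the paper. Storing each attribute in its own array means a scan over one column streams far less memory through the cache than dragging whole records along:

#include <stdio.h>
#include <stddef.h>

/* NSM-style ("row store"): all attributes of a record are stored together. */
struct RowRecord {
    int    id;
    double price;
    char   name[32];
};

/* DSM-style ("column store"): each attribute lives in its own contiguous array. */
struct ColumnTable {
    int    id[1000];
    double price[1000];
    char   name[1000][32];
};

/* Summing one attribute only touches that column's memory in the DSM layout;
 * with RowRecord rows[1000], the same loop would pull id and name into cache too. */
double total_price(const struct ColumnTable *t, size_t n) {
    double sum = 0.0;
    for (size_t i = 0; i < n; i++)
        sum += t->price[i];
    return sum;
}

int main(void) {
    static struct ColumnTable t;                        /* zero-initialized demo table */
    for (size_t i = 0; i < 1000; i++)
        t.price[i] = 1.0;
    printf("total = %.1f\n", total_price(&t, 1000));    /* prints: total = 1000.0 */
    return 0;
}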

The Return of the Frame Pointers

Brendan Gregg

The problem is that this system has a default libc that has been compiled without frame pointers, so any stack walking stops at the libc layer, producing a partial stack that's missing the application frames. It shouldn't be 10%, unless it's cache effects. Don't blame the straw; in this case, don't blame the frame pointers.

Java 137
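
For context on why a libc built without frame pointers truncates stacks: a frame-pointer-based profiler walks the chain of saved frame pointers, and the walk ends at the first frame that didn't save one. The sketch below is a toy, assuming x86-64 conventions and a build with frame pointers enabled; it is not Brendan Gregg's code.

#include <stdio.h>
#include <stdint.h>

static void *frames[8];
static int   nframes;

/* Follow the saved-frame-pointer chain the way a frame-pointer-based stack
 * walker does: each frame's saved RBP points at the caller's frame, and the
 * return address sits just above it. A frame compiled without frame pointers
 * breaks this chain, which is why stacks stop at the libc layer. */
__attribute__((noinline)) static void capture(void) {
    uintptr_t *fp = (uintptr_t *)__builtin_frame_address(0);
    nframes = 0;
    while (fp && nframes < 4) {               /* walk only the frames this demo created */
        frames[nframes++] = (void *)fp[1];    /* return address saved above the old RBP */
        uintptr_t *next = (uintptr_t *)fp[0]; /* saved RBP = caller's frame pointer */
        if (next <= fp)                       /* caller frames must sit at higher addresses */
            break;
        fp = next;
    }
}

__attribute__((noinline)) static void leaf(void)   { capture(); }
__attribute__((noinline)) static void middle(void) { leaf(); }

int main(void) {                              /* build: gcc -O0 -fno-omit-frame-pointer walk.c */
    middle();
    for (int i = 0; i < nframes; i++)
        printf("frame %d: return address %p\n", i, frames[i]);
    return 0;
}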

How Parallel Plans Start Up – Part 1

SQL Performance

The fundamentals of row mode parallel execution haven't changed since SQL Server 2005, so the following discussion is broadly applicable. A parallel query might start out requesting DOP 8, but be progressively downgraded to DOP 4, DOP 2, and finally DOP 1 due to a lack of system resources at that moment.

Cache 98
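
A schematic illustration of the progressive downgrade the excerpt describes: the query asks for a degree of parallelism (DOP), and if enough worker threads can't be reserved the request is retried at successively lower DOPs until it succeeds or falls back to a serial plan. This is a toy model of that behaviour, not SQL Server's reservation code; the thread counts and function names are made up.

#include <stdio.h>

static int threads_available = 5;               /* pretend only 5 extra workers are free right now */

/* Each parallel branch needs one worker per unit of DOP in this toy model. */
static int try_reserve_workers(int dop, int branches) {
    return dop * branches <= threads_available;
}

/* Halve the requested DOP until the reservation fits, mirroring the
 * 8 -> 4 -> 2 -> 1 progression described in the excerpt. */
static int negotiate_dop(int requested_dop, int branches) {
    for (int dop = requested_dop; dop > 1; dop /= 2)
        if (try_reserve_workers(dop, branches))
            return dop;
    return 1;                                   /* DOP 1: fall back to a serial plan */
}

int main(void) {
    /* Requests DOP 8 with two parallel branches; with 5 free workers this lands on DOP 2. */
    printf("runtime DOP = %d\n", negotiate_dop(8, 2));
    return 0;
}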

The Amazing Evolution of In-Memory Computing

ScaleOut Software

From Distributed Caches to Real-Time Digital Twins. Going back to the mid-1990s, online systems have seen relentless, explosive growth in usage, driven by ecommerce, mobile applications, and more recently, IoT.
