
Optimising for High Latency Environments

CSS Wizardry

What is RTT? Round-trip time (RTT) is basically a measure of latency: how long did it take to get from one endpoint to another and back again? RTT isn’t a you-thing, it’s a them-thing. This gives fascinating insights into the network topography of our visitors, and how much we might be impacted by high-latency regions.
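
As a rough sketch of how a site might sample this in the field, the snippet below reads the browser’s own RTT estimate via the Network Information API (navigator.connection.rtt). This is a Chromium-only estimate rounded to the nearest 25 ms, and the 300 ms threshold is illustrative rather than anything prescribed by the article.

    // Sketch: read the browser's RTT estimate where the Network Information API exists.
    type ConnectionInfo = { rtt?: number; effectiveType?: string };

    function sampleRtt(): number | undefined {
      const connection = (navigator as Navigator & { connection?: ConnectionInfo }).connection;
      return connection?.rtt; // estimated round-trip time in milliseconds, or undefined
    }

    const rtt = sampleRtt();
    if (rtt !== undefined && rtt > 300) {
      // Treat this visitor as high-latency and adapt: lighter assets, fewer round trips.
      console.log(`High-latency visitor: ~${rtt} ms RTT`);
    }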

Comparing Approaches to Durability in Low Latency Messaging Queues

DZone

Little’s Law and why latency matters: in many cases, the assumption is that as long as throughput is high enough, latency won’t be a problem. However, latency is often a key factor in why the throughput isn’t high enough. This is the first time I have benchmarked it with a realistic example.
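
Little’s Law says concurrency = throughput × latency (L = λW), so for a fixed in-flight limit, latency puts a hard ceiling on throughput. A minimal sketch with illustrative numbers (none taken from the benchmark):

    // Little's Law: in-flight items (L) = throughput (lambda) x latency (W).
    // Rearranged: achievable throughput = in-flight capacity / per-message latency.
    const inflightCapacity = 128;   // messages the queue allows in flight at once (illustrative)
    const latencySeconds = 0.002;   // 2 ms end-to-end latency per message (illustrative)

    const maxThroughput = inflightCapacity / latencySeconds;
    console.log(`Max sustainable throughput ~ ${maxThroughput} msg/s`);   // 64,000 msg/s

    // Halving latency doubles the throughput ceiling for the same in-flight capacity.
    console.log(`At 1 ms latency ~ ${inflightCapacity / 0.001} msg/s`);   // 128,000 msg/s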

Single-core memory bandwidth: Latency, Bandwidth, and Concurrency

John McCalpin

“Latency” is the duration from the execution of a load instruction (to an address that misses in all the caches) to the completion of that load instruction, when the data is returned from memory. The example below is for a 2005-era processor with 60 ns memory latency…
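
The bandwidth side of the same relationship is Little’s Law again: sustained bandwidth = bytes in flight ÷ latency. A back-of-the-envelope sketch; only the 60 ns latency comes from the excerpt, and the concurrency figure is illustrative.

    // Bandwidth = concurrency / latency, with concurrency measured in bytes in flight.
    const cacheLineBytes = 64;
    const memoryLatencySeconds = 60e-9;   // 60 ns load-to-use latency, as in the excerpt
    const outstandingMisses = 10;         // cache-line misses a single core keeps in flight (illustrative)

    const bytesInFlight = outstandingMisses * cacheLineBytes;            // 640 bytes
    const bandwidthGBs = bytesInFlight / memoryLatencySeconds / 1e9;
    console.log(`Sustained single-core bandwidth ~ ${bandwidthGBs.toFixed(1)} GB/s`);   // ~10.7 GB/s
    // More outstanding misses (or lower latency) is the only way to push this higher.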

Netflix’s Distributed Counter Abstraction

The Netflix TechBlog

By: Rajiv Shringi, Oleksii Tkachuk, Kartik Sathyanarayanan. Introduction: In our previous blog post, we introduced Netflix’s TimeSeries Abstraction, a distributed service designed to store and query large volumes of temporal event data with low millisecond latencies. Today, we’re excited to present the Distributed Counter Abstraction.

Next-level interaction and customization of data visualizations in Dynatrace Dashboards and Notebooks

Dynatrace

For example, it supports string and numerical values, enabling a multitude of different use cases. Min and max limits: set the value range for CPU consumption from 0% to 100%, for example. These visualizations have become a quasi-standard in the industry, especially for infrastructure monitoring.

Timestone: Netflix’s High-Throughput, Low-Latency Priority Queueing System with Built-in Support for Non-Parallelizable Workloads

The Netflix TechBlog

By Kostas Christidis. Introduction: Timestone is a high-throughput, low-latency priority queueing system we built in-house to support the needs of Cosmos, our media encoding platform. Over the past 2.5…
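
As a toy sketch of the general idea (a priority queue that hands out at most one item per group at a time, so non-parallelizable work within a group is never processed concurrently); this is purely illustrative and is not Timestone’s actual design or API.

    // Toy sketch: priority ordering plus an exclusive lease per group. Not Timestone's API.
    type Item = { priority: number; group: string; payload: string };

    class ExclusivePriorityQueue {
      private items: Item[] = [];
      private leasedGroups = new Set<string>();

      enqueue(item: Item): void {
        this.items.push(item);
      }

      // Hand out the highest-priority item whose group is not currently being worked on.
      dequeue(): Item | undefined {
        const eligible = this.items
          .filter(i => !this.leasedGroups.has(i.group))
          .sort((a, b) => b.priority - a.priority);
        const next = eligible[0];
        if (next) {
          this.items.splice(this.items.indexOf(next), 1);
          this.leasedGroups.add(next.group);
        }
        return next;
      }

      // Release the lease when the worker finishes, making the group's next item eligible.
      complete(group: string): void {
        this.leasedGroups.delete(group);
      }
    }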

Service level objective examples: 5 SLO examples for faster, more reliable apps

Dynatrace

Certain service-level objective examples can help organizations get started on measuring and delivering metrics that matter. Teams can build on these SLO examples to improve application performance and reliability. In this post, I’ll lay out five SLO examples that every DevOps and SRE team should consider.
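
As a hedged illustration of what one such SLO could look like (the target, window, and traffic numbers below are made up, not taken from the post), a latency SLO like “99.9% of requests complete within 500 ms over 30 days” translates directly into an error budget:

    // Illustrative latency SLO: 99.9% of requests under 500 ms over a 30-day window.
    const sloTarget = 0.999;
    const windowRequests = 10_000_000;   // requests served in the window (illustrative)

    const errorBudget = Math.floor(windowRequests * (1 - sloTarget));   // 10,000 requests
    console.log(`Error budget: ${errorBudget} slow or failed requests per window`);

    // Burn past the budget and the SLO for this window is blown.
    const sloMet = (slowOrFailed: number): boolean => slowOrFailed <= errorBudget;
    console.log(sloMet(4_200));    // true: within budget
    console.log(sloMet(25_000));   // false: budget exhausted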
