
Predictive CPU isolation of containers at Netflix

The Netflix TechBlog

Because microprocessors are so fast, computer architecture design has evolved towards adding various levels of caching between compute units and the main memory, in order to hide the latency of bringing the bits to the brains. This avoids thrashing caches too much for B and evens out the pressure on the L3 caches of the machine.
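
The excerpt's point, that layers of caching exist to hide main-memory latency, can be seen with a small timing experiment. The sketch below is an illustration written for this listing, not code from the Netflix article; the working-set sizes are rough guesses for a typical L3 cache.

    import time
    import numpy as np

    def time_random_reads(n_elements, n_reads=1_000_000):
        """Time n_reads random reads over a working set of n_elements int64 values."""
        data = np.arange(n_elements, dtype=np.int64)
        idx = np.random.randint(0, n_elements, size=n_reads)
        start = time.perf_counter()
        total = int(data[idx].sum())            # the gather forces the reads
        return time.perf_counter() - start, total

    # ~256 KiB working set: typically fits in the CPU caches.
    small, _ = time_random_reads(32_000)
    # ~512 MiB working set: most reads fall through to main memory.
    large, _ = time_random_reads(64_000_000)
    print(f"cache-resident: {small:.4f}s   RAM-resident: {large:.4f}s")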

Cache 258

AI-driven analysis of Spring Micrometer metrics in context, with typology at scale

Dynatrace

Every company has its own strategy as to which technologies to use. To remain flexible in observing all the technologies used in their organization, some companies choose open-source solutions such as Micrometer, which allow them to stay vendor-neutral. The metrics these tools emit add up to a large amount of data to handle.

Metrics 246

Dynatrace supports Azure Managed Instance for Apache Cassandra

Dynatrace

Apache Cassandra is an open-source, distributed, NoSQL database. With the Dynatrace Data Explorer, you can easily analyze metrics, such as client read/write latency by Cassandra nodes and disk space usage by keyspaces. You can also analyze table metrics, such as cache hits and misses.
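
As a rough illustration of the table-level analysis the excerpt mentions, a hit ratio can be derived from the raw cache hit and miss counters; this is a generic sketch, not Dynatrace's query language.

    def cache_hit_ratio(hits: int, misses: int) -> float:
        """Fraction of lookups served from cache; 0.0 when there is no traffic."""
        total = hits + misses
        return hits / total if total else 0.0

    # Example: counters scraped from one Cassandra table's metrics.
    print(f"{cache_hit_ratio(hits=9_450, misses=550):.1%}")   # -> 94.5%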

Azure 246

Pushy to the Limit: Evolving Netflix’s WebSocket proxy for the future

The Netflix TechBlog

Dynomite is a Netflix open source wrapper around Redis that provides a few additional features like auto-sharding and cross-region replication, and it provided Pushy with low latency and easy record expiry, both of which are critical for Pushy’s workload. As Pushy’s portfolio grew, we experienced some pain points with Dynomite.
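
The "easy record expiry" the excerpt mentions comes down to setting a time-to-live on each record so stale entries clean themselves up. A minimal sketch with the plain redis-py client follows; it is not Dynomite or Pushy's actual code, and the key layout is invented for illustration.

    import redis

    r = redis.Redis(host="localhost", port=6379)

    # Register a device's connection record with a 5-minute TTL so it
    # disappears on its own if the connection is never refreshed.
    r.set("connection:device-123", "ws-proxy-host-17", ex=300)

    # A periodic heartbeat can reset the TTL to keep live connections registered.
    r.expire("connection:device-123", 300)

    print(r.ttl("connection:device-123"))   # seconds remaining, or -2 once expired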

Latency 234

Supporting Diverse ML Systems at Netflix

The Netflix TechBlog

The Machine Learning Platform (MLP) team at Netflix provides an entire ecosystem of tools around Metaflow, an open-source machine learning infrastructure framework we started, to empower data scientists and machine learning practitioners to build and manage a variety of ML systems.
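
For readers unfamiliar with Metaflow, a flow is a Python class whose @step methods are chained with self.next. The toy flow below follows Metaflow's public API but is written for this listing, not taken from the article.

    from metaflow import FlowSpec, step

    class TrainFlow(FlowSpec):
        """A toy flow: load data, 'train', then report the result."""

        @step
        def start(self):
            self.examples = list(range(10))    # stand-in for real training data
            self.next(self.train)

        @step
        def train(self):
            self.model = sum(self.examples)    # stand-in for a real training step
            self.next(self.end)

        @step
        def end(self):
            print(f"trained model: {self.model}")

    if __name__ == "__main__":
        TrainFlow()

Saved as train_flow.py, it runs with "python train_flow.py run", and Metaflow records each step's artifacts.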

Systems 235

How RevenueCat Manages Caching for Handling over 1.2 Billion Daily API Requests

InfoQ

RevenueCat extensively uses caching to improve the availability and performance of its product API while ensuring consistency. The team at RevenueCat created an open-source memcache client that provides several advanced features. The company shared the techniques it uses to deliver the platform, which can handle over 1.2 billion daily API requests.
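
RevenueCat's own client adds more advanced features, but the basic cache-aside pattern behind such an API looks roughly like the sketch below, which uses the generic pymemcache library; the key names and the origin lookup are placeholders.

    from pymemcache.client.base import Client

    cache = Client(("localhost", 11211))

    def fetch_from_origin(key: str) -> str:
        # Placeholder for the real lookup (database, upstream service, ...).
        return f"value-for-{key}"

    def get_with_cache(key: str, ttl_seconds: int = 300) -> str:
        """Cache-aside read: try memcached first, else hit the origin and populate."""
        cached = cache.get(key)
        if cached is not None:
            return cached.decode()
        value = fetch_from_origin(key)
        cache.set(key, value, expire=ttl_seconds)
        return value

    print(get_with_cache("subscriber:42"))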

Cache 111

Re-Architecting the Video Gatekeeper

The Netflix TechBlog

The Tech Hollow, an OSS technology we released a few years ago, has been best described as a total high-density near cache: Total: the entire dataset is cached on each node; there is no eviction policy, and there are no cache misses. Near: the cache exists in RAM on any instance which requires access to the dataset.
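
A "total near cache" is the opposite of an evicting cache: every node loads the full dataset into local memory and serves reads from it, so there is no miss path at all. The sketch below captures the idea only; it is not Hollow's actual API, which also provides a compact in-memory encoding and incremental dataset updates.

    import threading

    class TotalNearCache:
        """Holds the entire dataset in local RAM; reads never miss and nothing is evicted."""

        def __init__(self, load_snapshot):
            self._load_snapshot = load_snapshot    # callable returning the full dataset
            self._data = load_snapshot()
            self._lock = threading.Lock()

        def get(self, key):
            # Every key is expected to be present locally; no network hop, no miss path.
            return self._data.get(key)

        def refresh(self):
            """Atomically swap in a fresh full snapshot (e.g., on a timer)."""
            new_data = self._load_snapshot()
            with self._lock:
                self._data = new_data

    # Usage: load everything up front, then serve reads from local memory.
    cache = TotalNearCache(lambda: {"title:1": "Stranger Things", "title:2": "Dark"})
    print(cache.get("title:1"))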

Cache 184