
Rebuilding Netflix Video Processing Pipeline with Microservices

The Netflix TechBlog

The Netflix video processing pipeline went live with the launch of our streaming service in 2007. The shift to a microservices architecture greatly reduced processing latency and increased system resiliency. The service also provides options that allow fine-tuning latency, throughput, and other trade-offs; for example, the input video can be divided into small chunks for parallel processing.
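
As a rough illustration of the chunked approach mentioned in the excerpt, the sketch below splits a title into fixed-length chunks and encodes them in parallel. The 30-second chunk length, the encode_chunk stub, and the file name are assumptions for illustration only, not details of Netflix's actual pipeline.

    from concurrent.futures import ProcessPoolExecutor

    CHUNK_SECONDS = 30  # assumed chunk length; the real pipeline chooses its own boundaries

    def encode_chunk(source, start, duration):
        # Stand-in for a real encoder invocation (e.g. an ffmpeg subprocess).
        return {"source": source, "start": start, "duration": duration, "status": "encoded"}

    def encode_in_chunks(source, total_seconds):
        # Fan chunks out to worker processes and collect the results in order.
        starts = range(0, total_seconds, CHUNK_SECONDS)
        with ProcessPoolExecutor() as pool:
            futures = [pool.submit(encode_chunk, source, s,
                                   min(CHUNK_SECONDS, total_seconds - s))
                       for s in starts]
            return [f.result() for f in futures]

    if __name__ == "__main__":
        print(encode_in_chunks("title_1234.mov", 125))

Processing chunks independently is what lets a long title finish in roughly the time of its slowest chunk rather than its full duration, which is the latency reduction the excerpt alludes to.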


The Netflix Cosmos Platform

The Netflix TechBlog

It supports both high-throughput services that consume hundreds of thousands of CPUs at a time and latency-sensitive workloads where humans are waiting for the results of a computation. The first generation of this system went live with the streaming launch in 2007. Warm capacity: end-users can request compute resources (e.g. …)


So many bad takes – What is there to learn from the Prime Video microservices to monolith story

Adrian Cockcroft

I don’t advocate “Serverless Only”, and I recommended that if you need sustained high traffic, low latency, and higher efficiency, then you should re-implement your rapid prototype as a continuously running autoscaled container, as part of a larger serverless event-driven architecture, which is what they did.
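
To make that recommendation concrete, here is a minimal sketch of the "continuously running container" half of the design: a long-lived worker loop that drains a queue fed by event producers, so the flow stays event-driven while avoiding per-request function start-up overhead. The in-process queue.Queue, the handle_event stub, and the message shape are illustrative assumptions, not the Prime Video implementation.

    import queue
    import time

    events = queue.Queue()  # stand-in for a real broker such as SQS or Kafka

    def handle_event(event):
        # Placeholder for the work a per-request function would otherwise do.
        print(f"processed {event}")

    def run_worker():
        # A single warm process handles many events; an autoscaler would add or
        # remove replicas of this container based on queue depth or CPU.
        while True:
            try:
                event = events.get(timeout=1.0)
            except queue.Empty:
                time.sleep(0.1)  # brief idle back-off instead of exiting
                continue
            handle_event(event)

    # In a real deployment, run_worker() would be the container's entry point.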


DevOps automation: From event-driven automation to answer-driven automation [with causal AI]

Dynatrace

The evolution of DevOps automation: Since the concept of DevOps emerged around 2007 and 2008 in response to pain points with Agile development, DevOps automation has been continuously evolving. Teams can also see how a change can affect critical objectives like SLOs and golden signals, such as traffic, latency, saturation, and error rate.
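
For readers who have not met the golden signals before, the sketch below derives all four from a window of request records. The record shape, the one-minute window, and the assumed capacity are illustrative choices, not Dynatrace's data model.

    from statistics import quantiles

    # Assumed record shape for illustration: (latency_ms, http_status).
    records = [(120, 200), (95, 200), (480, 500), (210, 200), (60, 200)]

    def golden_signals(records, window_seconds=60, capacity_rps=100):
        latencies = [lat for lat, _ in records]
        errors = sum(1 for _, status in records if status >= 500)
        traffic_rps = len(records) / window_seconds
        return {
            "traffic_rps": traffic_rps,
            "latency_p99_ms": quantiles(latencies, n=100)[98],
            "error_rate": errors / len(records),
            "saturation": traffic_rps / capacity_rps,  # fraction of assumed capacity
        }

    print(golden_signals(records))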


The Surprising Effectiveness of Non-Overlapping, Sensitivity-Based Performance Models

John McCalpin

This data is from the 2007 presentation. It is not surprising that there is a lot of scatter, but the factor-of-four range in Peak MFLOPS at fixed SPECfp_rate2000/core and the factor-of-four range in SPECfp_rate2000/core at fixed Peak MFLOPS were higher than I expected… (Also from the 2007 presentation.)
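
For context, a non-overlapping model of the kind discussed here treats execution time as the sum of independent compute and memory terms, so a system's sensitivity to peak FLOPS is bounded by the fraction of time spent in the compute term. The sketch below uses made-up workload numbers purely to illustrate that behavior; it is not the fitted model from the presentation.

    # Two-component, non-overlapping model: total time = compute time + memory time.
    # The workload numbers (MFLOP of arithmetic, GB of memory traffic) are made up.
    def predicted_time(peak_mflops, bandwidth_gbs, mflop=2000.0, gb_moved=1.0):
        compute_time = mflop / peak_mflops       # seconds spent in the compute term
        memory_time = gb_moved / bandwidth_gbs   # seconds spent in the memory term
        return compute_time + memory_time

    baseline = predicted_time(peak_mflops=4000, bandwidth_gbs=10)
    faster_cpu = predicted_time(peak_mflops=8000, bandwidth_gbs=10)
    print(f"speedup from doubling peak MFLOPS alone: {baseline / faster_cpu:.2f}x")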


A Decade of Dynamo: Powering the next wave of high-performance, internet-scale applications

All Things Distributed

The success of our early results with the Dynamo database encouraged us to write Amazon's Dynamo whitepaper and share it at the 2007 ACM Symposium on Operating Systems Principles (SOSP conference), so that others in the industry could benefit. This was the genesis of the Amazon Dynamo database.


InnoDB Performance Optimization Basics

Percona

This blog is in reference to our previous ‘InnoDB Performance Optimization Basics’ posts from 2007 and 2013. Although there have been many blogs about adjusting MySQL variables for better performance since then, I think this topic deserves a blog update since the last update was a decade ago, and MySQL 5.7…
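
As a companion to the post's topic, the snippet below lists a few of the InnoDB variables such tuning guides usually revisit. The values are generic illustrative starting points, not Percona's recommendations; in particular, innodb_buffer_pool_size must be sized to your own RAM and workload.

    [mysqld]
    # Illustrative starting points only; benchmark against your own workload.
    innodb_buffer_pool_size        = 8G        # often ~60-70% of RAM on a dedicated database server
    innodb_log_file_size           = 1G        # larger redo logs smooth write-heavy workloads
    innodb_flush_log_at_trx_commit = 1         # full durability; 2 trades durability for throughput
    innodb_flush_method            = O_DIRECT  # avoid double-buffering through the OS page cache
    innodb_file_per_table          = 1         # one tablespace file per table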