
What is serverless computing? Driving efficiency without sacrificing observability

Dynatrace

This allows teams to sidestep much of the cost and time associated with managing hardware, platforms, and operating systems on-premises, while also gaining the flexibility to scale rapidly and efficiently. Performing updates, installing software, and resolving hardware issues can consume up to 17 hours of developer time every week.


10 tips for migrating from monolith to microservices

Dynatrace

Limits of a lift-and-shift approach

A traditional lift-and-shift approach, where teams migrate a monolithic application directly onto hardware hosted in the cloud, may seem like the logical first step toward application transformation. However, the move to microservices comes with its own challenges and complexities.



Why log monitoring and log analytics matter in a hyperscale world

Dynatrace

“Logging” is the practice of generating and storing logs for later analysis. Logs can include data about user inputs, system processes, and hardware states. Log analysis can reveal potential bottlenecks and inefficient configurations so teams can fine-tune system performance.


AI’s Future: Not Always Bigger

O'Reilly

These smaller distilled models can run on off-the-shelf hardware without expensive GPUs. And they can do useful work, particularly if fine-tuned for a specific application domain. Spending a little money on high-end hardware will bring response times down to the point where building and hosting custom models becomes a realistic option.


Bringing the Magic of Amazon AI and Alexa to Apps on AWS.

All Things Distributed

Effectively applying AI involves extensive manual effort to develop and tune many different types of machine learning and deep learning algorithms (e.g. automatic speech recognition, natural language understanding, image classification), collect and clean the training data, and train and tune the machine learning models.


Generative AI in the Enterprise

O'Reilly

Even with cloud-based foundation models like GPT-4, which eliminate the need to develop your own model or provide your own infrastructure, fine-tuning a model for any particular use case is still a major undertaking. That pricing won’t be sustainable, particularly as hardware shortages drive up the cost of building infrastructure.


Structural Evolutions in Data

O'Reilly

Doubly so as hardware improved, eating away at the lower end of Hadoop-worthy work. And then there was the other problem: for all the fanfare, Hadoop was really large-scale business intelligence (BI). Google goes a step further in offering compute instances with its specialized TPU hardware.
