The demand for IT resource-intensive applications has increased significantly, whether to process transactions faster, gain real-time insight, crunch big data sets, or meet customer expectations. Many businesses select non-volatile memory express (NVMe) storage when their data-intensive applications demand fast access to data. That’s because NVMe provides a 6x bandwidth and IOPS advantage over SAS/SATA SSDs.
Big data is like the pollution of the information age. There is an overwhelming amount of it: who manages it, where is it stored, who interprets it, and what does it all mean? The Big Data Struggle and Performance Reporting. Many larger organizations leverage teams of data scientists to manage, configure, and analyze the data pulled into BI and dashboarding solutions like Datadog and Grafana, in an effort to get a single view of their performance.
Over the past few years I have focused much of my learning and work choices around the design of sociotechnical systems: how to design software architectures and organise teams around them. Looking through the history of my talks and my posts you can see evolutions in my thinking. One of my current working models is that there are five main categories of criteria for designing boundaries: Business Value (design decisions aligned to the business strategy); Domain (design decisions a…
Dynatrace news. Mainframe environments power 30 billion transactions a day and are used by 72 of the Fortune 500 companies in 34 countries across the globe. Whilst moving applications from mainframe to a modern cloud stack provides better agility and competitive advantage, the mainframe frequently remains the back-end workhorse for cloud-native applications because of the costs and risks of migration.
We all know that testing your application is important: it ensures security and customer satisfaction, and it saves money in the long run. Sometimes it can save lives as well: a China Airlines plane crashed due to a software bug on April 26, 1994, killing 264 people. In software testing, unit testing is the first level of testing, where most issues can be caught, which saves time.
Lerner: using RL agents for test case scheduling. By: Stanislav Kirdey, Kevin Cureton, Scott Rick, Sankar Ramanathan. Introduction: Netflix brings delightful customer experiences to homes on a variety of devices that grows each day. The device ecosystem is rich, with partners ranging from System-on-Chip (SoC) manufacturers to Original Design Manufacturer (ODM) and Original Equipment Manufacturer (OEM) vendors.
Customers often ask me how AWS maintains security at scale as we continue to grow so rapidly. They want to make sure that their data is secure in the AWS Cloud, and they want to understand how to better secure themselves as they grow.
Dynatrace news. We’re back in Barcelona for our second European Perform Summit, where the theme of autonomous cloud operations is traversing every aspect of the event. The foundational step on the journey to operations automation starts with adopting a new approach to monitoring; to paraphrase the adage: you can’t automate what you can’t measure. Traditional monitoring tools (we call them 2nd Generation) quickly hit a wall when up against the scale, dynamic nature, and new technologies that power modern cloud environments.
Technology Performance Pulse brings together the best content for technology performance professionals from the widest variety of industry thought leaders.
What fascinates me most about the volatile keyword is that it is still necessary, for me, because my software still runs on a silicon chip. Even if my application runs in the cloud on the JVM, despite all of those software layers abstracting away the underlying hardware, the volatile keyword is still needed due to the cache of the processor that my software runs on.
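To make the visibility problem concrete, here is a minimal, self-contained Java sketch (the class and field names are my own, not from the excerpt). Without the volatile modifier, the worker thread may never observe the main thread's write, for exactly the caching reason the author describes:

```java
// Minimal sketch of the visibility problem volatile solves.
public class VolatileDemo {

    // Remove `volatile` and this loop may spin forever on some JVMs/CPUs,
    // because nothing forces the reader to re-read main memory.
    private static volatile boolean running = true;

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            while (running) {
                // busy-wait; volatile guarantees we eventually see the update
            }
            System.out.println("Worker observed running = false, exiting.");
        });
        worker.start();

        Thread.sleep(1000);
        running = false;   // volatile write: becomes visible to the worker
        worker.join();
    }
}
```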
RPCValet: NI-driven tail-aware balancing of µs-scale RPCs, Daglis et al., ASPLOS’19. Last week we learned about the increased tail-latency sensitivity of microservices-based applications with high RPC fan-outs. Seer uses estimates of queue depths to mitigate latency spikes on the order of 10-100ms, in conjunction with a cluster manager. Today’s paper choice, RPCValet, operates at latencies three orders of magnitude lower, targeting tail-latency reduction for services whose own service times are measured in microseconds.
I’ve been playing around with an Arduino Uno recently, something new to me since I’ve only ever used Raspberry Pi hardware. Many Arduino devices, or at least the Uno that I have, are inexpensive and a lot of fun to play around with. However, the development experience out of the box wasn’t exactly what I was familiar with or happy about.
Dynatrace news. One critical success factor for any business is how effectively it bases its business decisions on data. Beyond just performance and availability statistics, end-user experience and business metrics are also required. And it’s not enough to simply capture all such critical business data—data consumers expect to have access to data in its most granular form, whenever and wherever they need it.
Compatibility is the capacity to exist together. As a real-life example, water is not compatible with oil, but milk is. The same thing happens with the software or apps that we build. Compatibility Testing. Compatibility testing is a crucial QA task which guarantees that the software or product being tested is compatible, as desired, across a broad set of client environments and configurations.
Software-defined far memory in warehouse-scale computers, Lagar-Cavilla et al., ASPLOS’19. Memory (DRAM) remains comparatively expensive, while in-memory computing demands are growing rapidly. This makes memory a critical factor in the total cost of ownership (TCO) of large compute clusters, or as Google like to call them, “warehouse-scale computers (WSCs).” This paper describes a “far memory” system that has been in production deployment at Google since 2016.
SQL Server will ship Azure SQL Database Edge: [link]. With the announcement I can tell you more about one of the things we have been working on: SQL Server running on IoT Edge and developer machines in under 500MB of memory. The effort goes beyond IoT Edge devices and extends to the common developer experience, focusing attention on the memory usage and disk space requirements of SQL Server.
We have a long drive ahead, 12 hours in fact, and it will only take us a short way along the eastern coastline of Australia. Having just arrived back from Tasktop HQ in Vancouver, BC where we have been discussing the Flow Framework, I’m excited to tell my passengers (my wife and family) about the framework (a captive audience). I started with the notion of the four Flow items, then I stopped.
Enterprise Resource Planning (ERP) is a very important aspect of any modern business. ERP systems allow businesses to achieve a certain level of automation so that they can maintain business operations, finances, and human resources. It is an outstanding platform for synchronizing your backend workflow for maximum efficiency and cost-effectiveness. ERP systems are important, and implementing them right is critical.
Compress objects, not cache lines: an object-based compressed memory hierarchy, Tsai & Sanchez, ASPLOS’19. Last time out we saw how Google have been able to save millions of dollars through memory compression enabled via zswap. One of the important attributes of their design was easy and rapid deployment across an existing fleet. Today’s paper introduces Zippads, which, compared to a state-of-the-art compressed memory hierarchy, achieves a 1.63x higher compression ratio and improved performance.
Over the past few months, I’ve been invited to be a guest on several podcasts that focus on DevOps and software engineering. I’m posting links to those below, for those who are interested. You can find these podcasts on iTunes or anywhere else you listen to podcasts. Real World DevOps. The Real World DevOps podcast is hosted by Mike Julian, with a theme of introducing listeners to the most interesting people in the DevOps community.
Last year, Mary Thengvall and I embarked on a journey to produce the software development and operations industry’s first conference on how to sustainably build and promote resilience in code & technology, teams, and individual people. And thus was born REdeploy. Resilience Engineering requires we look at our technology, our teams, and ourselves. Today, I’m happy to announce: REdeploy is REturning for 2019!
Android now occupies the number one place in the global smartphone market, with a market share of 87% at the end of 2016. That means nine out of ten smartphones in the world run on Android. With such dominance in the space, the creation of mobile apps has reached never-before-seen heights. But the constant innovation that fuels this market creates major problems for development and testing timelines.
I have had a lot of conversations recently about types of workloads, specifically understanding whether a workload is parameterized, ad hoc, or a mixture. It’s one of the things we look at during a health audit, and Kimberly has a great query from her “Plan cache and optimizing for adhoc workloads” post that’s part of our toolkit. I’ve copied the query below, and if you’ve never run it against any of your production environments before, definitely find some time to do so.
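Kimberly's actual query is not reproduced in this excerpt. As a rough, hypothetical illustration of the kind of plan-cache inspection involved, here is a JDBC sketch that breaks the cache down by objtype; the connection string and credentials are placeholders, while the DMV sys.dm_exec_cached_plans and its usecounts/size_in_bytes columns are standard SQL Server:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

// Hypothetical sketch (not Kimberly's query): summarize the plan cache by
// objtype to see how much of it is single-use ad hoc plans versus
// parameterized/prepared ones.
public class PlanCacheBreakdown {
    public static void main(String[] args) throws Exception {
        // Connection details are assumptions; adjust server and credentials.
        String url = "jdbc:sqlserver://localhost;databaseName=master;"
                   + "user=sa;password=<password>;encrypt=false";
        String sql =
            "SELECT objtype, " +
            "       COUNT(*) AS plan_count, " +
            "       SUM(CAST(size_in_bytes AS bigint)) / 1048576 AS size_mb, " +
            "       SUM(CASE WHEN usecounts = 1 THEN 1 ELSE 0 END) AS single_use " +
            "FROM sys.dm_exec_cached_plans " +
            "GROUP BY objtype ORDER BY size_mb DESC";

        try (Connection conn = DriverManager.getConnection(url);
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(sql)) {
            while (rs.next()) {
                System.out.printf("%-10s plans=%d size=%dMB single-use=%d%n",
                        rs.getString("objtype"), rs.getLong("plan_count"),
                        rs.getLong("size_mb"), rs.getLong("single_use"));
            }
        }
    }
}
```

A cache dominated by single-use Adhoc plans is the classic signal for considering SQL Server's "optimize for ad hoc workloads" setting, which is exactly the trade-off the referenced post examines.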
The TPC Council recently announced that the TPC is now hosting the HammerDB open source project’s GitHub repository (announced by @tpcbenchmarks on May 22, 2019). HammerDB has increased dramatically in popularity and use, and has been identified as the industry default for database benchmarking, illustrating the popularity of both open source and TPC-based benchmarks.
The beginning of my experience as a Junior Software Engineer on one of Tasktop’s ‘Integrations Teams’ marked a definitive transition in the way I learned and practiced computer science and software development. With just a year at UBC’s Computer Science program and a couple of personal projects under my belt, I was initially uncertain of how my skills and experience would translate into a professional work environment.
How do you tune the Snowflake data warehouse when there are no indexes, and few options available to tune the database itself? Snowflake was designed for simplicity, with few performance tuning options. This article summarizes the top five best practices to maximize query performance.
Customers with Kafka clusters struggle to understand what is happening in their Kafka implementations. There are out-of-the-box solutions like Ambari and Cloudera Manager which provide some high-level monitoring; however, most customers find these tools to be insufficient for troubleshooting purposes. These tools also fail to provide insight/visibility down to the applications acting as consumers that are processing Kafka data streams.
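The excerpt doesn't name a specific tool for that consumer-level visibility, but as a minimal sketch of what it involves, Kafka's own AdminClient can compute per-partition consumer lag. The group id and broker address below are placeholders:

```java
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringDeserializer;

// Minimal sketch: per-partition lag (log-end offset minus committed offset)
// for one consumer group, using Kafka's admin and consumer APIs.
public class ConsumerLagCheck {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            // Committed offsets for the group we want to inspect.
            Map<TopicPartition, OffsetAndMetadata> committed =
                admin.listConsumerGroupOffsets("my-consumer-group")
                     .partitionsToOffsetAndMetadata().get();

            // A throwaway consumer, used only to fetch log-end offsets.
            props.put("group.id", "lag-checker");
            props.put("key.deserializer", StringDeserializer.class.getName());
            props.put("value.deserializer", StringDeserializer.class.getName());
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                Map<TopicPartition, Long> endOffsets =
                    consumer.endOffsets(committed.keySet());
                committed.forEach((tp, offset) -> {
                    long lag = endOffsets.get(tp) - offset.offset();
                    System.out.printf("%s lag=%d%n", tp, lag);
                });
            }
        }
    }
}
```

Persistently growing lag on specific partitions is the kind of application-level signal the excerpt says high-level cluster monitors tend to miss.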
Today, when we create a Hive table, it is a common technique to partition the table across different values and ranges to improve query performance and reduce maintenance cost. However, Hive cannot directly access, in a single query, a table whose data is spread across different storage media and different clusters. This becomes a need when the data volume grows too large to fit a single storage medium or cluster, and also when users need to take into account the f…
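For readers unfamiliar with the partitioning technique the excerpt refers to, here is a hypothetical sketch using the Hive JDBC driver; the table, columns, and connection details are illustrative, not from the excerpt:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

// Hypothetical sketch: create a date-partitioned Hive table and run a query
// that benefits from partition pruning.
public class HivePartitionExample {
    public static void main(String[] args) throws Exception {
        // Older hive-jdbc versions require explicit driver registration.
        Class.forName("org.apache.hive.jdbc.HiveDriver");
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:hive2://localhost:10000/default", "hive", "");
             Statement stmt = conn.createStatement()) {

            // Partition by event date: each day's rows land in their own
            // directory, so filters on dt scan only the matching directories.
            stmt.execute(
                "CREATE TABLE IF NOT EXISTS events (" +
                "  user_id BIGINT, action STRING) " +
                "PARTITIONED BY (dt STRING) " +
                "STORED AS ORC");

            // Partition pruning: only the dt='2019-05-01' partition is read.
            ResultSet rs = stmt.executeQuery(
                "SELECT count(*) FROM events WHERE dt = '2019-05-01'");
            if (rs.next()) {
                System.out.println("rows on 2019-05-01: " + rs.getLong(1));
            }
        }
    }
}
```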
The aim of usability testing is very simple: ask participants to test the application, collect quantitative data from the test results, and figure out how the application can be improved. Often, testers or observers make mistakes that let critical defects slip through, and recovering the application from such defects can be costly and time-consuming. In this article, we will discuss 13 common mistakes that happen during usability testing and how to guard against them.
In this series, we will define the basic terms that every developer needs to know about testing. The purpose is to give all team members a shared understanding of the fundamental terminology of quality assurance and all related processes. This will improve communication and review quality, and further increase the testing capabilities of each team member.
Have you ever wondered how Hibernate keeps track of changes made to an entity, what exactly happens at flush time, and why Hibernate is sometimes too slow? In this article, I’m going to explain what actually happens under the hood and how to tune it.
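As a minimal sketch of the dirty-checking mechanism in question (the entity and persistence-unit names are hypothetical, and the entity would need to be registered in persistence.xml), note that no explicit save call appears anywhere; the UPDATE is generated at flush time:

```java
import javax.persistence.Entity;
import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Id;
import javax.persistence.Persistence;

// Hypothetical entity; JPA requires a no-arg constructor.
@Entity
class Book {
    @Id Long id;
    String title;
    protected Book() {}
}

// Sketch of Hibernate's dirty checking: at flush time, each managed entity
// is compared against the snapshot taken when it was loaded, and UPDATEs
// are issued for entities whose state has changed.
public class DirtyCheckingDemo {
    public static void main(String[] args) {
        // "demo-unit" is a placeholder persistence unit.
        EntityManagerFactory emf =
            Persistence.createEntityManagerFactory("demo-unit");
        EntityManager em = emf.createEntityManager();

        em.getTransaction().begin();
        Book book = em.find(Book.class, 1L);  // snapshot taken here
        book.title = "Updated title";         // entity is now "dirty"
        // commit() triggers a flush: Hibernate detects the changed field
        // and issues an UPDATE without any explicit persist() call.
        em.getTransaction().commit();

        em.close();
        emf.close();
    }
}
```

This mechanism is also one reason flushes can get slow: by default, dirty checking inspects every entity in the persistence context, so large sessions pay that cost on every flush.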
Introduction. SD-WAN (software-defined wide area network) is probably the most successful business adoption of SDN architecture, and it continues to grow: last year IDC forecast the market to reach $4.5 billion by 2022. As with many new technologies, SD-WAN deployment also brings new challenges, especially for network operations teams who are well trained in traditional WAN management but not very familiar with this “new” WAN.