LeadershIP 2019

The sixth annual LeadershIP conference brings together industry leaders, regulators, and academics to discuss innovation and intellectual property policy. Washington, D.C. – March 26, 2019. For meetings with TIRIAS Research at or around the conference, please contact Principal Analyst Jim McGregor (jim@tiriasresearch.com).


NVIDIA PLASTER Deep Learning Framework

“PLASTER” encompasses seven major challenges for delivering AI-based services: Programmability, Latency, Accuracy, Size of Model, Throughput, Energy Efficiency, and Rate of Learning. This paper explores each of these challenges in the context of NVIDIA’s deep learning (DL) solutions.

PLASTER as a whole is greater than the sum of its parts. Anyone interested in developing and deploying AI-based services should factor in all of PLASTER’s elements to arrive at a complete view of deep learning performance. Addressing the challenges described in PLASTER is important in any DL solution, and it is especially useful for developing and delivering the inference engines underpinning AI-based services. Each section of this paper includes a brief description of how each framework component is measured and an example of a customer leveraging NVIDIA solutions to tackle critical problems with machine learning.

Download this paper for free without registration (CLICK HERE)

Or register with TIRIAS Research by adding this report to our shopping cart and checking out…

AMD Optimizes EPYC Memory with NUMA

TIRIAS Research has published a new white paper, AMD Optimizes EPYC Memory with NUMA, that looks at how AMD designed the EPYC server chip to balance system cost, die area, memory bandwidth, and memory latency. AMD achieved that balance by using the efficiencies of multichip module (MCM) technology and the company’s new Infinity Fabric (IF) technology....

Dell EMC Accelerates Pace in Machine Learning

In 2017, Dell EMC and NVIDIA announced a strategic agreement to jointly develop new datacenter products based on NVIDIA’s Volta generation GPUs, specifically for high-performance computing (HPC), data analytics, and artificial intelligence (AI) workloads. TIRIAS Research believes the agreement, coupled with continuous product line updates, will generate momentum for Dell EMC in ML applications.

The Instantaneous Cloud: Emerging Consumer Applications of 5G Wireless Networks

TIRIAS Research has published a new white paper, The Instantaneous Cloud: Emerging Consumer Applications of 5G Wireless Networks, sponsored by NGCodec. TIRIAS Research tracks the intersection of emerging technology and the invention of new applications. Emerging 5G networks will carry consumer context and input into the cloud, and stream contextual or graphically intensive experiences back down...

Qualcomm Centriq 2400 Server TCO: Redis Key Value Store

This paper examines the total cost of ownership (TCO) of servers using the Qualcomm Centriq 2400 system-on-chip (SoC) running the Armv8 instruction set architecture (ISA). TIRIAS Research compares an estimated three-year TCO for servers based on the Qualcomm Centriq 2452 SoC against a mainstream x86-based server using Intel Xeon Gold 5120 processors. The performance basis for this comparison is the Redis in-memory database. The target audiences for this TCO comparison are social media, Software-as-a-Service (SaaS), and Platform-as-a-Service (PaaS) providers.
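As a rough illustration of how a three-year server TCO comparison of this kind can be structured, the sketch below sums acquisition, energy, and administration costs over the service life, then normalizes by sustained Redis throughput. All function names and numbers here are placeholders for illustration only, not figures or formulas from the paper itself.

```python
# Illustrative three-year server TCO sketch.
# Every parameter value below is a hypothetical placeholder,
# not a number taken from the TIRIAS Research paper.

def server_tco(capex_usd, avg_power_kw, usd_per_kwh, admin_usd_per_year, years=3):
    """Total cost of ownership: acquisition + energy + administration."""
    energy_usd = avg_power_kw * 24 * 365 * years * usd_per_kwh
    return capex_usd + energy_usd + admin_usd_per_year * years

def tco_per_ops(tco_usd, redis_ops_per_sec):
    """Normalize TCO by sustained Redis throughput (USD per op/s)."""
    return tco_usd / redis_ops_per_sec

# Compare two hypothetical server configurations with made-up inputs.
tco_a = server_tco(capex_usd=10_000, avg_power_kw=0.4,
                   usd_per_kwh=0.10, admin_usd_per_year=1_000)
tco_b = server_tco(capex_usd=12_000, avg_power_kw=0.6,
                   usd_per_kwh=0.10, admin_usd_per_year=1_000)
print(tco_per_ops(tco_a, 1_000_000))  # cost per op/s, config A
print(tco_per_ops(tco_b, 1_100_000))  # cost per op/s, config B
```

Normalizing total cost by measured throughput is what lets two servers with different chip architectures and price points be compared on a single cost-per-performance axis.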

Download this paper from Qualcomm Datacenter Technologies (QDT)


  Or register with TIRIAS Research by adding this report to our shopping cart and checking out...