Which GPU should I buy for Deep Learning?


GPU and Deep Learning
A Graphics Processing Unit, or GPU, is a specialized processor built for high-speed parallel processing. It sounds technical, but the name has become more familiar as GPUs have grown in applicability and popularity. Originally designed to enhance the visual experience in gaming and complex graphical user interfaces (GUIs), their scope of application has expanded dramatically, and GPUs are now used in many applications that matter to a great many people. What we are interested in here is their suitability for Deep Learning tasks.

Much of the processing power in a Deep Learning system is consumed by a relatively small set of operations, large matrix multiplications in particular, and the GPU is designed to perform exactly these tasks quickly and efficiently. GPUs do not replace the more versatile CPU; they complement it by handling very specific, computation-heavy problems.
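To make that concrete, here is a minimal sketch of offloading one such operation to the GPU. It assumes PyTorch is installed and that an NVIDIA GPU with working CUDA drivers is present; otherwise it falls back to the CPU.

import torch

# A large matrix multiplication is typical of the handful of
# operations that dominate deep learning workloads.
a = torch.randn(4096, 4096)
b = torch.randn(4096, 4096)

c_cpu = a @ b                      # computed on the CPU

if torch.cuda.is_available():
    a_gpu = a.to("cuda")           # copy the data to GPU memory
    b_gpu = b.to("cuda")
    c_gpu = a_gpu @ b_gpu          # the same operation, run on the GPU's parallel cores
    torch.cuda.synchronize()       # wait for the GPU to finish
    print("GPU result shape:", tuple(c_gpu.shape))
else:
    print("No CUDA GPU detected; only the CPU result was computed.")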

Are all GPUs created equal?
In practice, the GPUs currently on the market can be compared in terms of several key features: processor (clock) speed, the number of GPU cores, memory capacity and speed, and memory bandwidth. Each of these features is important, and depending on the application, improving any one of them can yield faster overall processing.

With so many options, it's essential to do some research first to choose the best GPU for the particular task. The obvious starting point is the brand, or the manufacturer. Does one brand of GPU outperform another brand for deep learning projects? Let's take a look at some important points.

Market share
GPUs are available from a number of manufacturers, most notably NVIDIA and AMD. In fact, according to the research firm Jon Peddie Research, the two together account for almost 100% of the market. Between these competitors, NVIDIA GPUs outsell AMD's by a ratio of almost 2:1, and there is a reason for that: a combination of capability and support.

Support
Whether your application is written in-house or by a third party, it is important to make the programmer's job as simple as possible. When a software developer gets strong support for a project, the result tends to be a better product. Many programmers can build excellent, reliable applications; fewer people realize that a poorly supported component is much harder to work with. That leads to longer development times and a greater chance that bugs will arise and hinder future development.

A typical deep learning project is built on existing software, including a variety of API libraries that can be called on. In many cases, that foundation is the result of extensive development that has evolved over generations of production, testing, and modification. No matter where someone stands between the designer and the end user, they end up relying on developers to bring their products to life. In turn, the developers rely on both the strength of the product and the support needed to make it work.

GPU acceleration libraries
From the point of view of a deep learning developer, what does that support look like? One important factor is the availability of GPU acceleration libraries such as cuBLAS and cuRAND. These are part of NVIDIA's Compute Unified Device Architecture (CUDA), a parallel computing platform created by NVIDIA to help developers use GPU resources. Another important part of this NVIDIA-specific package is cuFFT, a GPU-side counterpart to FFTW (the Fastest Fourier Transform in the West), a CPU-only FFT library. These libraries form only part of a huge toolkit with an ever-expanding knowledge base. So why is all of this important?
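As a rough illustration, the sketch below reaches these libraries from Python through CuPy, a package that is not mentioned above but is a common thin wrapper over cuBLAS, cuFFT and cuRAND (it assumes CuPy and a CUDA-capable GPU are installed):

import cupy as cp

# cuRAND: generate random numbers directly in GPU memory.
x = cp.random.rand(1024, 1024, dtype=cp.float32)

# cuBLAS: dense matrix multiplication on the GPU.
y = x @ x.T

# cuFFT: the GPU counterpart of what FFTW provides on the CPU.
spectrum = cp.fft.fft(x, axis=1)

print(y.shape, spectrum.dtype)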

It matters because the biggest names in Deep Learning are tied to CUDA. When deep learning frameworks are compared, one of the main features examined is whether the package supports the CUDA architecture.

Deep learning and numerical computing frameworks
Developers use deep learning frameworks to harness the power of this technology through high-level programming interfaces. By using a language like Python, software developers can work more abstractly and worry less about technical details. Although the mathematically intensive functions are written in languages like C++, those functions are accessible through the top-level APIs. Developers using well-supported frameworks benefit from previous efforts in research, development, and testing. As of this writing, the most popular framework for deep learning applications is TensorFlow.
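As a small example of that abstraction, the sketch below (assuming TensorFlow 2.x is installed; the layer sizes are arbitrary) describes a neural network in a few lines of Python, while the numerical work runs in the framework's compiled C++ and CUDA kernels:

import tensorflow as tf

# A tiny fully connected classifier. The Python code only describes the
# model; the heavy math is dispatched to optimized C++ and, when a GPU
# is available, CUDA kernels.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()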

TensorFlow
TensorFlow is widely praised for simplifying and abstracting deep learning tasks. Developed by the Google Brain team, TensorFlow is an open-source library that makes machine learning faster and easier. This popular framework makes extensive use of the CUDA architecture; in fact, without CUDA, the full power of your GPU cannot be unlocked in TensorFlow applications.
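A quick way to confirm that TensorFlow can actually see a CUDA-capable GPU is a check along these lines (a minimal sketch, assuming a TensorFlow 2.x installation):

import tensorflow as tf

# An empty list means TensorFlow will silently fall back to the CPU.
gpus = tf.config.list_physical_devices("GPU")
print("Built with CUDA:", tf.test.is_built_with_cuda())
print("Visible GPUs:", gpus)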

To learn more about TensorFlow, read the TensorFlow topic in the Server World blog.

PyTorch
PyTorch is a scientific computing package built for speed and flexibility in deep learning projects. During development it can be used as a GPU-capable alternative to NumPy, the CPU-only Python library that neural network code relies on heavily for math operations. PyTorch also relies on the CUDA libraries for GPU acceleration, and as of this writing, support for AMD GPUs is not available under this framework.
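The sketch below shows that NumPy-like role and the CUDA dependence in practice (a minimal example, assuming PyTorch is installed):

import numpy as np
import torch

# Data prepared with NumPy lives in CPU memory.
weights = np.random.rand(3, 3).astype(np.float32)

# PyTorch mirrors much of the NumPy API but can move the same data to a GPU.
t = torch.from_numpy(weights)
device = "cuda" if torch.cuda.is_available() else "cpu"
t = t.to(device)

print("Running on:", device)
print(t @ t)   # the same matrix product, GPU-accelerated when available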

Microsoft Cognitive Toolkit (formerly CNTK)
The Microsoft Cognitive Toolkit is a deep learning framework originally developed for internal use at Microsoft. Since being released to everyone as an open-source package, CNTK has become one of the most widely known deep learning frameworks. Although many of its users have reportedly moved to TensorFlow and PyTorch, the toolkit is still noted for its ease of use and good compatibility. The package supports both CPU and GPU operation, although GPU acceleration requires NVIDIA's proprietary cuDNN library.
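For completeness, selecting the GPU in the Cognitive Toolkit might look like the sketch below. This is only an illustration, assuming the cntk Python package and an NVIDIA GPU with cuDNN are installed (the toolkit is no longer actively developed):

import cntk as C

# Prefer the first GPU; fall back to the CPU if no usable GPU is found.
try:
    ok = C.device.try_set_default_device(C.device.gpu(0))
except Exception:
    ok = False
if not ok:
    C.device.try_set_default_device(C.device.cpu())
print("Default device:", C.device.use_default_device())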

Speed
When it comes to computers, the first thing most people think about is speed. Whatever the application, quick training times are essential for Deep Learning models: the faster the system can be trained, the sooner you have results. This of course begs the question: how fast is a GPU?

When it comes to GPUs, performance is really a combination of raw computational power, memory, and bandwidth. In general, though, the higher the number of FLOPS (floating-point operations per second), the faster the processor. Recently, Tom's Hardware published a ranking of the speeds of every GPU available on the market. Notably, NVIDIA's GPUs occupy the first six spots, giving the brand a clear advantage in speed.
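As a very rough way to see this number in practice, the sketch below (assuming PyTorch and a CUDA GPU; the matrix size is arbitrary) times a large matrix multiplication and converts the elapsed time into achieved FLOPS:

import time
import torch

n = 8192
a = torch.randn(n, n, device="cuda")
b = torch.randn(n, n, device="cuda")

torch.matmul(a, b)                   # warm-up so CUDA start-up cost is not timed
torch.cuda.synchronize()

start = time.perf_counter()
torch.matmul(a, b)
torch.cuda.synchronize()             # wait for the GPU to finish before stopping the clock
elapsed = time.perf_counter() - start

# An n-by-n matrix multiplication needs roughly 2 * n^3 floating point operations.
print(f"Achieved roughly {2 * n**3 / elapsed / 1e12:.1f} TFLOPS")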

Memory
Regardless of whether a task is performed by the CPU or the GPU, it requires memory (RAM). The processor performs the critical calculations, and the results of each pass need to be stored somewhere. For demanding tasks like deep learning applications, a significant amount of memory is required. Without enough RAM, performance drops dramatically and the raw processing power cannot be fully utilized.

When it comes to GPUs and their RAM, there are two types: integrated and dedicated. An integrated GPU does not have its own memory; instead, it shares the on-board memory used by the CPU. In contrast, a dedicated GPU performs its calculations in its own RAM. While a dedicated GPU costs more, with additional memory typically in the 2 GB to 12 GB range, it has important advantages.

First, a dedicated GPU is not adversely affected by a heavily loaded CPU. Since it does not have to share RAM, it can run in parallel without waiting for CPU-related operations to complete. Next, and perhaps more importantly, a dedicated GPU can accommodate higher-performance memory chips. That increase in memory speed feeds directly into bandwidth, another important characteristic used to evaluate overall performance.
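On a system with a dedicated NVIDIA GPU, that memory can be inspected from Python; a minimal sketch, assuming a PyTorch installation with CUDA support:

import torch

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    total_gb = props.total_memory / 1024**3
    used_gb = torch.cuda.memory_allocated(0) / 1024**3
    print(f"{props.name}: {total_gb:.1f} GB dedicated RAM, "
          f"{used_gb:.2f} GB currently allocated by this process")
else:
    print("No dedicated CUDA GPU found; computations would share system RAM.")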

Bandwidth
There is no discussing random access memory without mentioning bandwidth and how it affects performance. Bandwidth is a combination of the memory's speed, the width of the memory bus, and the type of memory. Each of these matters on its own, but the short version is that when bandwidth is not up to standard, the GPU spends a lot of idle time waiting for the RAM to respond. It is not surprising that a lower-priced GPU from the same manufacturer may differ only in bandwidth, so this is an important feature to consider.
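The usual back-of-the-envelope calculation is: peak bandwidth in GB/s = effective memory data rate (in Gbit/s per pin) x bus width (in bits) / 8. The sketch below uses purely illustrative numbers for a hypothetical card with 14 Gbps GDDR6 memory on a 256-bit bus:

# Hypothetical example values, not the specification of any particular card.
effective_rate_gbps = 14    # data transfers per pin, in Gbit/s (typical of GDDR6)
bus_width_bits = 256        # width of the memory interface

# Each transfer moves bus_width_bits bits; divide by 8 to convert bits to bytes.
bandwidth_gb_s = effective_rate_gbps * bus_width_bits / 8
print(f"Theoretical peak bandwidth: {bandwidth_gb_s:.0f} GB/s")   # 448 GB/s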

Which GPU brand should I choose?
In today's technology landscape, NVIDIA GPUs are the best choice for anyone interested in Deep Learning. There is no doubt that AMD makes great hardware and powerful processors, but in terms of the most important factors, GPU speed and support, nothing competes with NVIDIA at this point.

Why? Ultimately, the development of a deep learning application depends on support, and once the system is in place, the question becomes processing speed. The speed of a GPU is determined by a number of components, and memory and bandwidth are just as important as the GPU clock speed. Whether you are a developer, a vendor, or an end user, from whatever angle you look at it, a fast GPU is key to getting the best results.
