Don't use that graphics card for compute! NVIDIA has provoked its big customers



2017-12-27 18:28:44

Reprinted with permission from AI Technology Headquarters (rgznai100)

By Dove

NVIDIA has blown up everyone's WeChat Moments. What happened?

Word is that NVIDIA recently and quietly modified its end-user license agreement (EULA), forbidding the use of consumer-grade GeForce graphics cards for deep learning in data centers. What are you supposed to use instead? Its high-end Tesla series, of course.

So what is the difference between GeForce and Tesla?

GeForce GTX 1080: Pascal architecture; 2,560 CUDA cores; 8 TFLOPS (single precision); 8 GB GDDR5X at 320 GB/s; max 180 W.

Tesla P100: Pascal architecture; 3,584 CUDA cores; 9.3 TFLOPS (single precision); 16 GB HBM2 at 732 GB/s; max 250 W.

A senior industry figure focused on AI chips told AI Technology Headquarters that, on a single-card basis, GeForce performance can actually exceed Tesla: GeForce has some features cut, but runs at higher clock frequencies. And many of the features that are cut, such as ECC memory and double-precision support, have little to do with deep learning.

As for doing deep learning on GeForce, "the streets are full of 1080 Tis training neural networks; the cost-performance couldn't be better," the insider stressed.

In a word: GeForce and Tesla share a similar architecture, GeForce can even come out ahead, and yet Tesla costs ten times as much. Would you buy it?
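To make the cost-performance point concrete, here is a rough back-of-the-envelope comparison using the single-precision throughput figures quoted above. Note the prices are illustrative assumptions based only on the article's "ten times the price" claim, not official list prices.

```python
# Rough GFLOPS-per-dollar comparison. Throughput figures come from the
# spec list above; prices are illustrative assumptions (the article only
# says Tesla costs roughly 10x what GeForce does).
specs = {
    "GeForce GTX 1080": {"tflops_fp32": 8.0, "price_usd": 700},   # assumed price
    "Tesla P100":       {"tflops_fp32": 9.3, "price_usd": 7000},  # assumed price
}

for name, s in specs.items():
    gflops_per_dollar = s["tflops_fp32"] * 1000 / s["price_usd"]
    print(f"{name}: {gflops_per_dollar:.2f} GFLOPS per dollar")
# → GeForce GTX 1080: 11.43 GFLOPS per dollar
# → Tesla P100: 1.33 GFLOPS per dollar
```

Under these assumed prices, the consumer card delivers roughly 8-9x more raw single-precision throughput per dollar, which is exactly why the EULA change stings.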

Well, sorry: as long as you are a data center, you have to buy it whether you like it or not!

Isn't this naked extortion? Why can't the cheap option be used? Isn't NVIDIA simply exploiting its market dominance to force users to pay up?

Of course, it should be pointed out that NVIDIA only restricts using GeForce for deep learning in data centers; non-commercial users such as universities and research institutes will not be affected.

Either way, the news set off ripples, and overseas communities exploded. Forum comments fall mainly into three camps: condemnation, calm analysis, and outright support.

The editor has sorted the three camps' opinions as follows:

A three-way dispute: condemnation, calm, and support

"So any deep learning project in a data center must buy the high-priced Tesla? The cheap and capable GeForce series can no longer be used?"

"Why does the price go up 10x just because the card sits in a data center?"

"How are startups supposed to survive this?"

"Sinister NVIDIA. I'd bet the next step is to make software such as cuDNN full of bugs when running on GeForce, so that users can't do deep learning on GeForce at all."

"Time to turn to Intel's Nervana and AMD."

The calm camp analyzes it like this:

"When GeForce cards on Volta or newer architectures come out, they will probably have only CUDA cores and no Tensor Cores. cuDNN will optimize performance around Tensor Cores, since Tensor Cores were never meant for 3D graphics in the first place."

The supporters put it like this:

"NVIDIA is doing nothing wrong here, and it is hard for anyone to compete with it. This is precisely a trap for those chip makers who think there is an opening in server chips and are betting on that area. NVIDIA can then free its hands to work on edge and on-device intelligence. Even after hardware vendors ship a product, NVIDIA has various means to respond, such as market pricing."

"NVIDIA began investing in the Volta architecture at least five years ago, spending around 3 billion dollars all told. Whether in vision or execution, it leads its rivals by at least three years. My guess is that its next-generation GPU will be stronger still."

Whatever the forums say, one reality cannot be ignored: even as NVIDIA's power is felt everywhere, Intel and AMD are sparing no effort to catch up, and each has its own advantages.

How long can NVIDIA's market monopoly last? Amid the growing clamor, where do Intel and AMD actually stand? And what will ultimately decide dominance in deep learning chips?

Let's look at this three-kingdoms chip war through the eyes of Tim Dettmers.

The following content is from Tim Dettmers (translated and compiled by AI Technology Headquarters; translator: Lin Chunmian).

Can NVIDIA's dominance last?

With the release of the NVIDIA Titan V graphics card, deep learning hardware is now entering a transition period.

Competition in the deep learning chip market is already heated, and both AMD and Intel Nervana are expected to challenge NVIDIA's dominance in the new year. For consumers, I cannot yet recommend which hardware is ideal. The most prudent choice is to wait until the current hardware transition period is over, which may take anywhere from 3 to 9 months.

So why have we just stepped into a transition period for deep learning hardware?

NVIDIA outmaneuvered its competition earlier, achieving a near-monopoly of the deep learning hardware market, and its lead should let it defend that position for another 1 to 2 years. The new-generation Titan V graphics card sells for 3,000 US dollars, and with its Tensor Cores it will satisfy the performance requirements of deep learning well.

but, because of Ti