NVIDIA’s (NVDA) CEO Jen-Hsun Huang on Q3 2018 Results – Earnings Call Transcript



Q3 2018 Earnings Conference Call

November 9, 2017 17:00 ET


Executives

Simona Jankowski – VP, IR

Colette Kress – EVP & CFO

Jen-Hsun Huang – President & CEO


Analysts

Toshiya Hari – Goldman Sachs

Stacy Rasgon – Bernstein

C. J. Muse – Evercore

Vivek Arya – Bank of America Merrill Lynch

Joseph Moore – Morgan Stanley

Craig Ellis – B. Riley & Company

Christopher Caso – Raymond James

Matthew Ramsay – Canaccord Genuity

Hans Mosesmann – Rosenblatt Securities


Operator

Good afternoon. My name is Victoria, and I am your conference operator for today. Welcome to NVIDIA's financial results conference call. [Operator Instructions] I will now turn the call over to Simona Jankowski, Vice President of Investor Relations, to begin your conference.

Simona Jankowski

Thank you. Good afternoon, everyone, and welcome to NVIDIA's conference call for the third quarter of fiscal 2018. With me on the call today from NVIDIA are Jen-Hsun Huang, President and Chief Executive Officer; and Colette Kress, Executive Vice President and Chief Financial Officer.

I would like to remind you that our call is being webcast live on NVIDIA's Investor Relations website. It is also being recorded. You can hear a replay by telephone until November 16, 2017. The webcast will be available for replay up until next quarter's conference call to discuss Q4 and full year fiscal 2018 financial results.

The contents of today's call are NVIDIA's property; they cannot be reproduced or transcribed without our prior written consent.

During this call, we may make forward-looking statements based on current expectations. These are subject to a number of significant risks and uncertainties, and our actual results may differ materially. For a discussion of factors that could affect our future financial results and business, please refer to the disclosure in today's earnings release, our most recent Forms 10-K and 10-Q and the reports that we may file on Form 8-K with the Securities and Exchange Commission. All our statements are made as of today, November 9, 2017, based on information currently available to us. Except as required by law, we assume no obligation to update any such statements.

During this call, we will discuss non-GAAP financial measures. You can find a reconciliation of these non-GAAP financial measures to GAAP financial measures in our CFO Commentary, which is posted on our website.

With that, let me turn the call over to Colette.

Colette Kress

Thanks, Simona. We had a great quarter, with record revenue in each of our four market platforms. And every measure of profit hit record levels, reflecting the leverage of our model.

Data center revenue of $501 million more than doubled from a year ago on the strong adoption of our Volta platform and early traction with our inferencing portfolio. Q3 revenue reached $2.64 billion, up 32% from a year earlier, up 18% sequentially and well above our outlook of $2.35 billion. From a reporting segment perspective, GPU revenue grew 31% from last year to $2.22 billion. Tegra processor revenue rose 74% to $419 million.

Let's start with our Gaming business. Gaming revenue was $1.56 billion, up 25% year-on-year and up 32% sequentially. We saw strong demand across all regions and form factors. Our Pascal-based GPUs remained the platform of choice for gamers, as evidenced by the strong demand for our GeForce GTX 10-Series products. We launched the GeForce GTX 1070 Ti, which became available last week. It complements our strong holiday lineup, ranging from the entry-level GTX 1050 to the flagship GTX 1080 Ti. A wave of great titles is arriving for the holidays, driving enthusiasm in the market.

We collaborated with Activision to bring Destiny 2 to the PC early in the month. PlayerUnknown's Battlegrounds, popularly known as [indiscernible], continues to be one of the year's most successful titles. We are closely aligned with PUBG to ensure that GeForce is the best way to play the game, including bringing ShadowPlay Highlights to its 20 million players. Last weekend, Call of Duty: World War II had a strong debut. And Star Wars Battlefront 2 will be [indiscernible]. E-sports remains one of the most important secular growth drivers in the Gaming market, with a fan base that now exceeds 350 million.

Last weekend, the League of Legends World Championship was held in Beijing's National Stadium, the Bird's Nest, where the 2008 Olympic Games were held. More than 40,000 fans attended live. And online viewers were said to break last year's record of 43 million, following in 18 languages.

GPU sales also benefited from continued cryptocurrency mining. We met some of this demand with a dedicated board in our OEM business and a portion with GeForce GTX boards, though it is difficult to quantify. We remain nimble in our approach to the cryptocurrency market. It is volatile, and it does not and will not distract us from focusing on our core Gaming market. Lastly, the Nintendo Switch console continues to gain momentum since launching in March and also contributed to growth.

Moving to data center; our data center business had an excellent quarter. Revenue of $501 million more than doubled from last year and rose 20% on the quarter, on strong traction of the new Volta architecture. Shipments of the Tesla V100 GPU began in Q2 and ramped significantly in Q3, driven primarily by demand from cloud service providers and high-performance computing.

As we have noted before, Volta delivers 10x the deep learning performance of our Pascal architecture, which was launched just a year earlier, far outpacing Moore's Law. The V100 is being broadly adopted by every major server OEM and cloud provider. In China, Alibaba, Baidu and Tencent announced that they are incorporating the V100 in their datacenter and cloud service infrastructures. In the U.S., Amazon Web Services announced that V100 instances are now available in four of its regions. Oracle Cloud has just added Tesla P100 GPUs to its infrastructure offerings and plans to expand to V100 GPUs. We expect support for the V100 from other major cloud providers as well.

In addition, all major server OEMs announced support for the V100; [indiscernible], Hewlett-Packard Enterprise, IBM and Supermicro are incorporating it in servers. And China's top server OEMs — Huawei, Inspur and Lenovo — have adopted our HGX server architecture to build a new generation of accelerated datacenters with V100 GPUs.

Our new offerings for the AI inference market are also gaining momentum. The recently launched TensorRT programmable inference acceleration platform opened a new market opportunity for us, improving the performance and reducing the cost of AI inferencing by orders of magnitude compared with CPUs. It supports every major deep learning framework, every network architecture and any level of network complexity. More than 1,200 companies are already using our inference platform, including Amazon, Microsoft, Facebook, Google, Alibaba, Baidu, JD.com, [indiscernible], Hikvision and Tencent.

During the quarter, we announced that the NVIDIA GPU Cloud container registry, or NGC, is now available through Amazon's cloud and will soon be supported by other cloud platforms. NGC helps developers get started with deep learning development through no-cost access to a comprehensive, easy-to-use, fully optimized deep learning software stack. It enables instant access to the most widely used GPU-accelerated frameworks. We also continued to see strong growth in our HPC business. Next-generation supercomputers such as the U.S. Department of Energy's Summit system, expected to come online next year, leverage Volta's industry-leading performance, and our pipeline is strong.

The past few weeks have been exceptionally busy for us. We have hosted five major GPU Technology Conferences, in Beijing, Munich, Taipei, Tel Aviv and Washington, with another next month in Tokyo.

In a strong indication of the growing importance of GPU-accelerated computing, more than 22,000 developers, data scientists and others will come this year to our GTCs, including the first event in Silicon Valley; that is up 10x in just five years. Other key metrics show similar gains. Over the same period, the number of NVIDIA GPU developers has grown 15x to 645,000; and the number of CUDA downloads this year is up 5x to 1.8 million.

Moving to Professional Visualization; third quarter revenue grew to $239 million, up 15% from a year ago and up 2% sequentially, driven by demand for high-end real-time rendering, simulation and more powerful mobile workstations. The defense and automotive industries grew strongly, as did demand for professional VR solutions driven by the Quadro P5000 and P6000 GPUs. Among key customers, Audi and BMW are deploying VR in auto showrooms. And the U.S. Army, Navy and Homeland Security are using VR for mission training.

Last month, we announced early access to NVIDIA Holodeck, the intelligent VR collaboration platform. Holodeck enables designers, developers and their customers to come together virtually from anywhere in the world in a highly realistic, collaborative and physically simulated environment. Future updates will address the growing demand for the development of deep learning systems and virtual environments. In automotive, revenue grew to $144 million, up 13% year-over-year and up slightly from last quarter.

Among key developments this quarter, we announced DRIVE PX Pegasus, the world's first AI computer for enabling Level 5 driverless vehicles. Pegasus will deliver over 320 trillion operations per second, more than 10x its predecessor. It is powered by four high-performance AI processors in a supercomputer that is the size of a license plate. NVIDIA DRIVE is being used by over 25 companies to develop fully autonomous robotaxis, and DRIVE PX Pegasus will become their path to production. It is designed for [indiscernible] certification, the industry's highest safety level, and will be available in the second half of 2018.

We also released the DRIVE IX SDK for delivering intelligent experiences inside the vehicle. DRIVE IX provides a platform for car companies to create an always-engaged AI co-pilot. It uses deep learning networks to track head movement and gaze, and it will hold a conversation with the driver using advanced speech recognition, lip reading and natural language understanding. We believe this will set the standard for the next generation of infotainment systems, a market that is just beginning to develop.

Finally, we announced that DHL, the world's largest mail and package delivery service, and [indiscernible], one of the world's leading automotive suppliers, will deploy a test fleet of autonomous delivery trucks next year using the NVIDIA DRIVE PX platform. DHL will outfit electric light trucks with the ZF ProAI self-driving system based on our technology.

Now turning to the rest of the income statement; Q3 GAAP gross margin was 59.5% and non-GAAP was 59.7%, both up sequentially and year-over-year, reflecting continued growth in value-added platforms. GAAP operating expenses were $674 million and non-GAAP operating expenses were $570 million, in line with our outlook and up 19% year-on-year. Investing in our key market opportunities — including Gaming, AI and self-driving cars — is essential to our future.

GAAP operating income was a record $895 million, up 40% from a year ago. Non-GAAP operating income was $1.01 billion, up 42% from a year ago. GAAP net income was a record $838 million and EPS was $1.33, up 55% and 60%, respectively, from a year earlier. Non-GAAP net income was $833 million and EPS was $1.33, up 46% and 41%, respectively, from a year earlier, reflecting revenue strength as well as gross margin and operating margin expansion. We have returned $1.16 billion to shareholders so far this fiscal year through a combination of quarterly dividends and share repurchases.

We have announced an increase in our quarterly dividend of $0.01 to an annualized $0.60, effective with our Q4 fiscal year '18 dividend. We are also pleased to announce that we intend to return another $1.25 billion to shareholders in fiscal 2019 through quarterly dividends and share repurchases. Our quarterly cash flow from operations reached record levels, surpassing $1 billion for the first time to reach $1.16 billion.

Now turning to the outlook for the fourth quarter of fiscal 2018; we expect revenue to be $2.65 billion, plus or minus 2%. GAAP and non-GAAP gross margins are expected to be 59.7% and 60%, respectively, plus or minus 50 basis points. GAAP and non-GAAP operating expenses are expected to be approximately $722 million and $600 million, respectively. GAAP and non-GAAP OI&E are both expected to be nominal. GAAP and non-GAAP tax rates are both expected to be 17.5%, plus or minus 1%, excluding discrete items. Other financial details are included in the CFO Commentary and other information available on our website.

We will now open the call for questions. Please limit your question to one. Operator, would you please poll for questions? Thank you.

Question-and-Answer Session


Operator

[Operator Instructions] Your first question comes from the line of Toshiya Hari with Goldman Sachs.

Toshiya Hari

Jen-Hsun, three months ago, you described the July quarter as a transition quarter for your data center business. And clearly, you guys have ramped very nicely into October. But if you can talk a little bit about the outlook for the next couple of quarters in data center? And particularly on the inferencing side. I know you guys are really excited about that opportunity. So if you can share customer feedback and what your expectations are into next year in inferencing, that would be great.

Jen-Hsun Huang

Yes. As you know, we started ramping Volta very strongly this last quarter. And we started the ramp the quarter before. And since then, every major cloud provider — from Amazon, Microsoft and Google to Baidu, Alibaba and Tencent and even, recently, Oracle — has announced support for Volta, and we'll be providing Volta for their internal use of deep learning as well as for external public cloud services. We also announced that every major server computer maker in the world now supports Volta and is in the process of taking Volta out to market. HP and Dell and IBM and Cisco and Huawei in China, Inspur in China, Lenovo — they have all announced that they will be building families of servers around the Volta GPU.

So I think this ramp is just the first part of supporting the build-out of GPU-accelerated servers from our company for data centers all over the world as well as cloud service providers all over the world. The applications for these GPU servers have now grown to many markets. I've spoken about the major segments of our Tesla GPUs. There are five of them that I talk about often.

The first one is high-performance computing, where the market is $11 billion or so. It is one of the faster growing parts of IT, because more and more people are using high-performance computing for doing their product development or searching for insights or predicting the market or whatever it is. And today, we represent about 15% of the world's top 500 supercomputers. And I've repeatedly said, and I believe this completely, and I think it is becoming increasingly true, that every single supercomputer in the future will be accelerated in some way. So this is a fairly significant growth opportunity for us.

The second is deep learning training, which is very, very much like high-performance computing. You need to do computing at a very large scale. You're performing trillions and trillions of iterations. The models are getting larger and larger. Every single year, the amount of data that we're training with is growing. And the difference between a computing platform that is fast versus not could mean the difference between building a $20 million data center of high-performance computing servers for training and a $200 million one. And so with the money that we save and the capability we provide — really, the value is incredible.

The third segment — and this is the segment that you just mentioned — has to do with inference, which is when you're done developing a network, you then deploy it to the hyperscale datacenters to support the billions and billions of queries that consumers make to the Internet every day. And this is a brand-new market for us. 100% of the world's inference is done on CPUs today. We announced very recently, this last quarter really, the TensorRT 3 inference acceleration platform, and together with our Tensor Core GPU instruction set architecture, we're able to speed up networks by a factor of 100.

Now the way to think about that is, imagine whatever amount of workload you've got — if you can speed it up using our platform by a factor of 100, how much can you save?

The other way to think about it is that the networks are getting larger and larger, and they're so complex now. And we know that every network on the planet will run on our architecture, because they were trained on our architecture today. And so whether it's CNNs or RNNs or GANs or autoencoders or all of the variations of those, whatever precision you need to support, whatever the size of the network, we have the ability to support them. And so you can either scale out your hyperscale datacenters to support more traffic, or you can reduce your cost tremendously, or both at the same time.

The fourth segment of our data center business is taking all of that capability — what I just talked about, whether it's HPC, training or inference — and turning it inside out and making it available in the public cloud. There are thousands of startups now that exist because of AI. Everybody recognizes the importance of this new computing model. And as a result of this new tool, this new capability, all of these problems that were unsolvable in the past are now solvable. And so you can see startups cropping up all over the west, all over the east — there are thousands of them.

And these companies either would rather not use their scarce financial assets to go build high-performance computing centers, or they don't have the skills to build out a high-performance platform the way the big Internet companies can. And so these cloud providers, these cloud platforms, are just a fantastic resource for them. It gets rented by the hour. In conjunction with that — and I mentioned that all the service providers have taken it to market — we created a registry in the cloud that containerizes these really complicated software stacks. Every one of these frameworks, with the different versions of our GPUs and different acceleration layers and different optimization techniques — we have containerized all of that for every single version and every single type of framework in the marketplace.

And we put that up in the registry — the cloud registry called the NVIDIA GPU Cloud. And so all you have to do is download that into the cloud service provider that we have certified, and with just one click, you are doing deep learning. So that's the cloud service providers. The way to estimate that is that there are clearly tens of billions of dollars being invested in these AI startups. And some large proportion of the funding they raise will eventually have to go toward high-performance computing, whether they build it themselves or they rent it in the cloud. And so I think that is a multibillion-dollar opportunity for us.

And then lastly — and this is probably the largest of all the opportunities — the vertical industries. Whether it's automotive companies that are developing their supercomputers to train for self-driving cars, or healthcare companies that are now taking advantage of artificial intelligence to do better diagnostics and analysis of disease, to manufacturing companies for in-line inspection, to robotics, to large logistics companies — Colette mentioned DHL earlier. The way to think about that is all of these companies planning to deliver products to you through this massive network of delivery systems — it's the world's largest airplane [indiscernible], and whether it's Uber or Didi or Lyft or Amazon or DHL or UPS or FedEx, they all have high-performance computing problems that are now moving to deep learning.

And so these are really exciting opportunities for us — and that last one is just the vertical industries. I mean, all of these segments we are now able to start addressing because we have put our GPUs in the cloud, and all of our OEMs are in the process of taking these platforms out to market. And we have the ability now to address high-performance computing and deep learning training as well as inference using one common platform. And so I think we have been steadfast in our excitement about accelerated computing for data centers. And I think this is just the beginning of it all.


Operator

Your next question comes from the line of Stacy Rasgon with Bernstein Research.

Stacy Rasgon

I had a question on your Gaming seasonality into Q4. It's normally up a bit. I was wondering, do you see, I guess, drivers that would drive a lack of the normal seasonal trends, given how strong it has been sequentially and year-over-year? And I guess as a related question, do you see your Volta volumes in Q4 exceeding Q3?

Jen-Hsun Huang

Let's see. I will answer the last one first and then work toward the first one. I think the guidance that we provided, we feel comfortable with. But if you think about Volta, it is just at the beginning of the ramp, and it is going to ramp into the market opportunities I talked about. And so my hope is that we continue to grow. And there is every evidence that the markets that we serve, that we are addressing with Volta, are very large markets. And so there are a lot of reasons to be hopeful about the future growth opportunities for Volta. We've primed the pump. The cloud service providers have either announced the availability of Volta or announced its upcoming availability. They are all racing to get Volta out through the cloud because customers are clamoring for it. We have primed the pump with the OEMs, and some of them are sampling now and some of them are racing to get Volta into production in the marketplace. And so I think the foundation, the demand, is there.

The urgent need for accelerated computing is there because Moore's Law is not scaling anymore. And then we have primed the pump. So the demand is there, the need is there, and the foundation for getting Volta to market is primed. With respect to Gaming — what drives our Gaming business? Remember, our Gaming business is sold one unit at a time to millions and millions of people. And what drives our Gaming business is several things. As you know, e-sports is incredibly, incredibly vibrant, and the reason why e-sports is so unique is that people want to win, and having better gear helps. The latency that they expect is incredibly low, and performance drives down latency, and they want to be able to react as fast as they can. People want to win, and they want to make sure that the gear they use is not the reason why they didn't win.

The second growth driver for us is content — the quality of content. And boy, if you look at Call of Duty or Destiny 2 or PUBG, the content just looks amazing. The AAA content just looks amazing. And one of the things that is really unique about video games is that in order to enjoy the content and the fidelity of the content — the quality of the production value at its fullest — you need the best gear. It's very different from streaming video; it's very different from watching movies, where, with streaming videos, it is what it is. But for video games, of course, it's not. And so when AAA titles come out in the later part of the year, it helps to drive platform adoption. And then lastly, increasingly, social is becoming a big part of the growth dynamics of Gaming. People recognize how beautiful these video games are.

And so they want to share their brightest moments with people, they want to share the levels they discover, they want to take pictures of the amazing graphics inside. And it is one of the major drivers — the major driver, really — of YouTube and people watching other people play video games, these broadcasters. And now, with our Ansel, the world's first in-game virtual reality and surround and digital camera, we have the ability to take pictures and share them with people. And so I think all of these different drivers are helping our Gaming business. And I am optimistic about Q4. It looks like it is going to be a great quarter.


Operator

Your next question comes from the line of C.J. Muse with Evercore.

C.J. Muse

I was hoping to ask a near-term and a longer-term question. On the near term, you talked about the health on the demand side for Volta. Curious if you are seeing any kind of restrictions on the supply side, whether it's wafers or access to high-bandwidth memory, et cetera. And then the longer-term question really revolves around CUDA. You've talked about that as being a sustainable competitive advantage for you guys entering the year. And now that we have moved beyond HPC and hyperscale training to more into inference and GPU as a service, and you've hosted GTCs around the globe, curious if you could elaborate on how you are seeing that advantage, how you have seen it evolve over the year and how you are thinking about CUDA as the AI standard?

Jen-Hsun Huang

Yes, thanks a lot, C.J. Well, everything that we build is hard. Volta is the single largest processor that humanity has ever made: 21 billion transistors, 3D packaging, the fastest memories on the planet, and all of that in a couple hundred watts — which basically says it is the most energy-efficient form of computing that the world has ever known. And one single Volta replaces hundreds of CPUs. And so it is energy-efficient, it saves an enormous amount of money and it gets the job done really, really fast, which is just one of the reasons why GPU-accelerated computing is so popular now. With respect to the outlook for our architecture: as you know, we are a one-architecture company. And that is so vitally important. And the reason for that is that there is so much software and there are so many tools created on top of this one architecture.

On the inference side — on the training side, we have a whole stack of software and optimizing compilers and numeric libraries that are completely optimized for one architecture called CUDA. On the inference side, the optimizing compilers take these large, vast computational graphs that come out of all of these frameworks — and these computational graphs are getting larger and larger, and their numerical precision differs from one type of network to another, from one type of application to another. Your numerical precision requirements for a self-driving car, where lives are at stake, versus counting the number of people crossing the street — counting something versus trying to detect and track something very subtle in all weather conditions — are very, very different problems.

And so the types of networks are changing all the time, and they are getting larger all the time. The numerical precision is different for different applications. And we have different compute performance levels as well as energy availability levels, so these inference compilers are likely to be some of the most complex software in the world. And so the fact that we have one singular architecture to optimize for — whether it's HPC for numerics, molecular dynamics and computational chemistry and biology and astrophysics, all the way to training to inference — gives us just enormous leverage. And that is the reason why NVIDIA can be an 11,000-person company.

And, arguably, perform at a level that is 10x that. And the reason for that is that we have one singular architecture that is accruing benefits over time, instead of three, four, five different architectures where your software team is broken up into all these different, small, subcritical-mass pieces. And so it is a huge advantage for us. And it is a huge advantage for the industry.

So people who support CUDA know that the next-generation architecture will simply get a benefit and go along for the ride that technology advancement provides and affords them, okay? So I think it is an advantage that is growing exponentially, frankly. And I am excited about it.


Operator

Your next question comes from the line of Vivek Arya with Bank of America.

Vivek Arya

Congratulations on the strong results and the consistent execution. Jen-Hsun, in the last few months, we have seen a number of announcements from Intel, from Xilinx and others describing different approaches to the AI market. My question is, how does the customer make that decision, whether to use a GPU or an FPGA or an ASIC, right? What can remain a competitive differentiator over the long term? And does your position in the training market also then maybe give you a leg up when they consider solutions for the inference part of the problem?

Jen-Hsun Huang

Yes, thanks, Vivek. So first of all, we have one architecture, and people know our commitment to our GPUs, our commitment to CUDA, our commitment to all of the software stacks that run on top of our GPUs — every single one of the 500 applications, every numerical solver, every CUDA compiler, every toolchain across every single operating system on every single computing platform. We are completely dedicated to it. We support the software for as long as we shall live. And as a result, the benefits of their investment in CUDA just continue to accrue. You have no idea how many people send me notes about how they literally take out their old GPU, put in a new GPU, and without lifting a finger, things got 2x, 3x, 4x faster than what they were doing before — incredible value to customers. The fact that we are singularly focused and completely dedicated to this one architecture in an unwavering way allows everybody to trust us and know that we will support it for as long as we shall live, and that is the benefit of a one-architecture strategy.

When you have four or five different architectures to support that you offer to your customers, and you ask them to pick the one that they like the best, you are essentially saying that you are not sure which one is the best. And we all know that nobody is going to be able to support five architectures forever. And as a result, something has to give, and it would be really unfortunate for a customer to have chosen the wrong one. And if there are five architectures, surely, over time, 80% of them will be wrong. And so I think that our advantage is that we are singularly focused. With respect to FPGAs, I think FPGAs have their place. And we use FPGAs here at NVIDIA to prototype things and — but an FPGA is a chip design. It is able to be a chip for — it is extremely good at being a flexible substrate for being any chip, and so that is its advantage.

Our advantage is that we have a programming environment. And writing software is a lot easier than designing chips. And if it is within the domains that we focus on — for example, we are not focused on network packet processing, but we are very focused on deep learning, we are very focused on high performance and parallel numeric analysis — if we are focused on those domains, our platform is really quite unbeatable. And so that is how you think through that. I hope that was helpful.


Your next question comes from Atif Malik with Citi.

Atif Malik

Colette, on the last call, you mentioned that crypto was $150 million in the OEM line in the July quarter. Can you quantify how much crypto was in the October quarter? And expectations for the January quarter directionally? And just longer term, why should we think that crypto will not impact gaming demand in the future? If you could just talk about the steps that have been taken with respect to having the different mode and all that?

Colette Kress

So in our results, in the OEM results, our specific crypto boards equated to about $70 million of revenue, which is comparable to the $150 million that we saw last quarter.

Jen-Hsun Huang

Yes. Our long term, Atif — well, first of all, thanks for that. The longer-term way to think about that is crypto is small for us but not zero. And I believe that crypto will be around for some time, kind of like today. There will be new currencies emerging, existing currencies will grow in value. The interest in mining these new emerging currencies and the crypto algorithms that emerge is going to continue. And so I think for some time, we are going to see that crypto will be a small but not zero, small but not zero part of our business. The — when you think about crypto in the context of our company overall, the thing to remember is that we are the largest GPU computing company in the world.

And our overall GPU business is really sizable, and we have multiple segments. There is data center, and I have already talked about the five different segments within data center. There's [indiscernible], and even that has multiple segments within it, whether it is rendering or computer-aided design or broadcast, in a workstation, in a laptop or in a data center — the architecture is rather different. And of course, you know that we have high performance computing, you know that we have our autonomous machine business, self-driving cars and robotics.

And you know, of course, that we have gaming; and so these different segments are all quite large and growing. And so my sense is that though crypto will be here to stay, it will remain small, not zero.


Your next question comes from the line of Joe Moore with Morgan Stanley.

Joseph Moore

Just following up on that last question. You mentioned that some of the crypto market had moved to traditional gaming. What drives that? Is there a shortage of availability of the specialized crypto product? Or is it just that there is a preference for the gaming-oriented crypto solutions?

Jen-Hsun Huang

Yes, Joe, I appreciate you asking that. Here's the reason why. So what happens is when a crypto — when a currency — digital currency market becomes very large, it entices somebody to build a custom ASIC for it. And of course, Bitcoin is the perfect example of that. Bitcoin is incredibly easy to design in its specialized chip form. But then what happens is a few different players start to monopolize it. As a result, it chases everybody out of the mining market, and it encourages a new currency to evolve, to emerge. And with the new currency, the only way to get people to mine it is if it is hard to mine, okay? You have got to put some effort into it. However, you want lots of people to try to mine it.

And so therefore, the platform that is good for it, the ideal platform for digital, new emerging digital currencies turns out to be a CUDA GPU. And the reason for that is because there are several hundred million NVIDIA GPUs in the marketplace. If you want to create a new cryptocurrency algorithm, optimizing for our GPUs is really quite ideal. It's hard to do. It's hard to do; therefore, you need a lot of computation to do it. And yet there are enough GPUs in the marketplace, it is such an open platform, that the barriers to entry for somebody to get in and start mining are very low.

And so those are the cycles of these digital currencies, and that is the reason why I say that digital currency — crypto usage of GPUs — will be small but not zero for some time. And it is small because when it gets big, somebody will be able to build a custom ASIC. But if somebody builds a custom ASIC, there will be a new emerging cryptocurrency. So it ebbs and flows.


Your next question comes from the line of Craig Ellis with B. Riley.

Craig Ellis

Jen-Hsun, congratulations on data center annualizing at $2 billion, it is a huge milestone. I wanted to follow up with a question on some of your comments regarding data center partners. Because as I look back over the last five years, I just do not see any precedent for the momentum that you have in the marketplace right now between your server partners, white box partners, hyperscale partners that are deploying it, hosted, et cetera. And so my question is, relative to the doubling that we have seen year-on-year in each of the last two years, what does that partner expansion mean for data center growth? And then if I could sneak one more in, two new products were just announced in the gaming platform, the 1070 Ti and a Collector's Edition Titan Xp. What does that mean for the gaming platform?

Jen-Hsun Huang

Yes, Craig, thanks a lot. Let's see. We have never created a product that is as broadly supported by the industry and has grown nine consecutive quarters, doubled year-over-year, and with partnerships of the scale that we are seeing. We have just never created a product like that before. And I think the reason for that is severalfold. The first is that it is true that CPU scaling has come to an end. That is just the laws of physics. The end of Moore's Law is just the laws of physics. And yet, the world of software development and the world — the problems that computing can help solve — are growing faster than at any time before.

Nobody has ever seen a large-scale planning problem like Amazon before. Nobody has ever seen a large planning problem like Didi before — the number of millions of taxi rides per week is just staggering. And so nobody has ever seen large problems like these before, large-scale problems like these before; and so high performance computing and accelerated computing using GPUs has become recognized as the path forward. And so I think that that is, at the highest level, the most important parameter.

Second is artificial intelligence and its emergence and application to solving problems that we historically thought were unsolvable. Solving the unsolvable problems is a real realization. I mean, that is happening across almost every industry we know, whether it is Internet service providers to healthcare, to manufacturing, to transportation, logistics — you just name it, financial services. And so I think artificial intelligence is a real tool. Deep learning is a real tool that can help solve some of the world's unsolvable problems.

And I think that our dedication to high performance computing and this one singular architecture, our seven-year head start, if you will, in deep learning, and our early recognition of the importance of this new computing approach — both the timing of it, the fact that it was naturally a perfect fit for the skills that we have, and then the incredibly — the incredible effectiveness of this approach — I think has really created the perfect conditions for our architecture.

And so I think — I really appreciate you noticing that. But this is definitely the most successful product line in the history of our company.


Your next question comes from the line of Chris Caso with Raymond James.

Christopher Caso

I have a question on the automotive market and the outlook there. And interestingly, with the other segments growing as quickly as they are, auto is becoming a smaller percentage of revenue now. And certainly, the design traction seems very positive. Can you talk about the ramp in terms of the auto revenue — when we might see that getting back to a similar percentage of revenue? Is that growing more quickly? Do you think that is likely to happen over the next year with some of these design wins coming out? Or is that something we should be waiting for over a number of years?

Jen-Hsun Huang

I appreciate that, Chris. So the way to think about that is, as you know, we have really, really reduced our emphasis on infotainment, even though that was the primary part of our revenues, so that we could take, literally, hundreds of engineers — and including the processors that we are building now, a couple of 2,000, 3,000 engineers — working on our autonomous machine and artificial intelligence platform for this market, to take advantage of the position we have and to go after this amazing revolution that is about to happen. I happen to believe that everything that moves will be autonomous someday. And it could be a bus, a truck, a shuttle, a car. Everything that moves will be autonomous someday; it could be a delivery vehicle, it could be little robots that are moving around warehouses, it could be delivering a pizza to you. And we felt that these — this was such an incredibly great challenge and such an important compute problem that we decided to dedicate ourselves to it.

Over the next several years — and if you look at our DRIVE PX platform today, there are over 200 companies that are working on it. 125 startups are working on it. And these companies are mapping companies, they are Tier 1s, they are OEMs, they are shuttle companies, car companies, trucking companies, taxi companies. And this last quarter, we announced an extension of our DRIVE PX platform to include DRIVE PX Pegasus, which is now the world's first auto-grade full [indiscernible] platform for robotaxis. And so I think our position is really excellent, and the investment has proven to be one of the best ever. And so I think in terms of revenues, my expectation is that this coming year, we will enjoy revenues as a result of the supercomputers that customers have to buy for training their networks, for simulating — all of these autonomous vehicles driving — and for developing their self-driving cars.

And we will see fairly large quantities of development systems being sold this coming year. The year after that, I think, is the year when you are going to see the robotaxis ramping, and our economics in each robotaxi is several thousand dollars. And then starting, I would say, late 2022, 2021, you are going to start to see the first fully automated autonomous cars, what people call level 4 cars, starting to hit the road. And so that is kind of how I see it. Just next year is simulation environments, development systems, supercomputers. And then the year after that is robotaxis. And then a year or two after that will be all the self-driving cars.


Your next question comes from the line of Matt Ramsay with Canaccord Genuity.

Matthew Ramsay

I have, I guess, a two-part question on gross margin. Colette, I remember — I do not know if it was maybe three years ago, three and a half years ago at Analyst Day — you guys were talking about gross margins in the mid-50s, and that was inclusive of the Intel payment. And now you are hitting numbers at 60% excluding that. I want to — if you could talk a little bit about how the mix of the data center business and some others drives gross margin going forward? And maybe, Jen-Hsun, you could talk a little bit about — you talked about both being such a big chip in terms of transistor count — how you are thinking about taking costs out of that product as you ramp it into gaming next year, and the effects on gross margins.

Colette Kress

Thanks, Matt, for the question. Yes, we have been on a steady stream of increasing gross margins over the years. But that is the evolution of the overall model — the model of the value-added platforms that we sell, inclusive of the overall ecosystem of work that we do, the software that we enable in so many of these platforms that we bring to market. Data center is one of them; our ProViz, another one. And if you think about all of the work that we do in terms of gaming, that overall development of the ecosystem — so this has been continuing to increase our gross margin. Mix is more of a statement in terms of each quarter: we have a different mix in terms of our products, and some of them have a little bit of seasonality.

And depending on when some of these platforms come to market, we can have a mix change within some of those subsets. It is still going to be our focus as we go forward, in terms of growing gross margins as best as we can. You can see, in terms of our guidance for Q4, which we feel comfortable with, that we will increase it as well.

Jen-Hsun Huang

Yes. With respect to yield improvement, the way to think about that is we do it in several ways. The first thing is I am just incredibly proud of the technology team that we have in VLSI, and they get us ready for these brand-new nodes, whether it is the process readiness, through all of the circuit readiness, the packaging, the memory readiness. The readiness is so incredible — incredibly important for us, because these processors that we are creating are really, really hard. They are the largest things in the world. And so we get one shot at it. And so the team does everything they can to essentially prepare us. And by the time that we take off a product for real, we know for certain that we can build it. And so the technology team in our company is just world-class. Absolutely world-class, there is nothing like it. Then once we go into production, we get the benefit of ramping up the products. And as yields improve, we will surely benefit from the associated cost.

But that is not really where the focus is. I mean, in the final analysis, the real focus for us is to continue to improve the software stack on top of our processors. And the reason for that is each one of our processors carries with it an enormous amount of memory and systems and networking and the whole data center. For most of our data center products, if we can improve the throughput of a data center by another 50% — or in our case, oftentimes, we will improve something from 2x to 4x — the way to think about that is that billion-dollar data center just improved its productivity by a factor of two.

And all of the software work that we do on top of CUDA, and the incredible work that we do with optimizing compilers and graph analytics — all of that stuff then all of a sudden translates to value to our customers, not measured in millions of dollars but measured in hundreds of millions of dollars. And that is really the leverage of accelerated computing.


Your next question comes from the line of Hans Mosesmann with Rosenblatt.

Hans Mosesmann

Jen-Hsun, can you comment on some of the issues this week regarding Intel, their renewed interest in getting into the graphics space, and their relationship at the chip level with AMD?

Jen-Hsun Huang

Yes, thanks, Hans. There is a lot of news out there. I guess some of the things I take away: first of all, Raj leaving AMD is a great loss for AMD. And it is a recognition by Intel, probably, that the GPU is just incredibly, incredibly important now. The modern GPU is not a graphics accelerator. The modern GPU — we just left the word G, the letter G, in there. But these processors are domain-specific parallel accelerators. And they are enormously complex. They are the most complex processors built by anybody on the planet today. And that is the reason why IBM uses our processors for the world's largest supercomputers; that is the reason why every single cloud — every major cloud, every major server maker in the world — has adopted NVIDIA GPUs. It is just incredibly hard to do. The amount of software engineering that goes on top of it is significant as well.

So if you look at the way we do things, we plan a roadmap about five years out. It takes about three years to build a new generation, and we build several GPUs at the same time. And on top of that, there are some 5,000 engineers working on system software and numeric libraries and solvers and compilers and graph analytics and cloud platforms and virtualization stacks, in order to make this computing architecture useful to all of the people that we serve. And so when you think about it from that perspective, it is just an enormous undertaking — arguably, the most significant undertaking of any processor in the world today. And that is the reason why we are able to speed up applications by a factor of 100. You do not walk in with a new widget and a few transistors and all of a sudden speed up applications by a factor of 100 or 50 or 20. That is just something that is impossible unless you do the type of innovation that we do.

And then lastly, with respect to the chip that they built together, I think it goes without saying now that the energy efficiency of Pascal GeForce, the Max-Q design technology, and all of the software that we created has really set a new design point for the industry. It is now possible to build a state-of-the-art gaming notebook with the most modern GeForce processors and be able to deliver gaming experiences that are many times better than a console, in 4K, and have that be in a laptop that is 18 millimeters thin. The combination of Pascal and Max-Q has really raised the bar. And I think that that is really the essence of it.


Unfortunately, we have run out of time. I will now turn the call over to you for closing remarks.

Jen-Hsun Huang

We had another great quarter. Gaming is one of the fastest-growing entertainment industries, and we are well positioned for the holidays. AI is becoming increasingly widespread in many industries throughout the world, and we hope to lead the way, with all major cloud providers and computer makers moving to deploy Volta, and we are building the future of autonomous driving. We expect robotaxis using our technology to hit the road in just a couple of years. We look forward to seeing many of you at SC17 this weekend. Thank you for joining us.


This concludes today's conference call. You may now disconnect.

Copyright policy: All transcripts on this site are the copyright of Seeking Alpha. However, we view them as an important resource for bloggers and journalists, and are excited to contribute to the democratization of financial information on the Internet. (Until now, investors have had to pay thousands of dollars in subscription fees for transcripts.) So our reproduction policy is as follows: You may quote up to 400 words of any transcript on the condition that you attribute the transcript to Seeking Alpha and either link to the original transcript or to www.SeekingAlpha.com. All other use is prohibited.

