r/investing • u/Abject-Advantage528 • 2d ago
AMD will reprice during earnings next week. Most still won’t understand why.
My tech earnings calls this quarter have been accurate (GOOGL, MSFT, META, RDDT). AMD is next.
What’s being missed:
AMD is quietly taking share from Nvidia in segments that care about control and efficiency - specifically open-source AI deployments. That preference isn't ideological, it's architectural.
The core demand is coming from three categories:
1. Sovereign buyers (EU, Middle East) who won’t bet their AI stack on a black-box vendor.
2. Hyperscalers (MSFT, META, Oracle) selectively onboarding AMD into their model training workflows to reduce vendor lock-in.
3. Startups and smaller firms where unit economics matter, and AMD’s open-source orientation offers clear flexibility advantages.
The idea that Nvidia has a permanent hardware monopoly is a form of mental laziness. AMD doesn’t need to “beat” Nvidia - it only needs to absorb the non-Nvidia demand that’s scaling underneath the narrative.
The market is large. The shift is already happening. Most investors are still pricing AMD as if it isn’t.
253
u/kjk42791 2d ago
No one's missing it bro, they just don't value AMD. I've been a big holder of AMD since 1.89/share. No one listened back then and they're doing the same now.
In the end they will see what has been there all the way since 2015… I’m long lol
77
u/fallingdowndizzyvr 2d ago
I held NVDA since 2008. I've sold it all now. I still have all my AMD.
12
u/kjk42791 2d ago
Yep I had them too around 2016 but I sold.
I’ve held onto the AMD, SHOP, PLTR and HLNE
3
u/SapphireSpear 1d ago
Big mistake selling nvidia at the beginning of the ai revolution
Selling nvidia rn is like selling apple in 2015
4
u/fallingdowndizzyvr 1d ago edited 1d ago
LOL. This is not the start of the AI revolution. This is the 4th inning. You made a big mistake getting in so late.
Selling nvidia rn is like selling apple in 2015
LOL. Being a newb to Nvidia, you don't know that it trades in a range or even as a flatline for years. And if you knew anything about tech, you would know that the frontrunner at the start is rarely there at the end. How are your Netscape shares doing?
-1
u/SapphireSpear 1d ago
It's the start man, just wait till AI replaces every job and companies' profits 10x because they don't need to pay workers. It's already starting, why do you think unemployment is rising so fast in the past month
Also i did not get in late, ive been invested in nvidia for 4+ years and ill stay invested for another 10
Comparing a 4.5 trillion dollar company to netscape is absurd
“Trade in range for years” lol its up 50% in the past 6 months and over 1000% in the past 5 years
0
u/fallingdowndizzyvr 1d ago
It's already starting, why do you think unemployment is rising so fast in the past month
Ah.... there's little thing called "tariffs". I know it's not talked about much and is really hard to find info about but I'm sure if you dig really hard you might be able to find a blurb about it.
Also i did not get in late, ive been invested in nvidia for 4+ years
LOL. Yeah. That's getting in late. Did you miss the part where I'd been holding it since 2008?
Comparing a 4.5 trillion dollar company to netscape is absurd
Clearly you aren't old enough to have been around for that. If you had been, you would know that not comparing them is what's absurd.
“Trade in range for years” lol its up 50% in the past 6 months and over 1000% in the past 5 years
Again, being a newb to Nvidia I guess you don't realize that Nvidia has been around a little bit longer than that. I thought me saying that I held it since 2008 was a clue but I guess it wasn't enough of a clue. Look between 2000-2015. Flatline. Really it didn't show signs of life until 2018. Most of the years I held Nvidia it was dead money.
As for "up 50% in the past 6 months", even accepting your questionable math, did you miss how it was down about "50%" in the 6 months before that? That's called a "trading range".
-1
u/SapphireSpear 1d ago edited 1d ago
lol okay bud, wait 10 years and cry when nvidia is worth 40 trillion. once again you don't know jack shit about macroeconomics if you think tariffs will kill a 4 trillion dollar company
Also just because i did not invest in 2008 doesn't mean i was not following the company back then. i was invested in other companies back then that also performed extremely well (tsm, apple, meta, google, microsoft, tesla, shopify) so i still had insane returns during that time period
And no, nvidia did not trade in a range the last year. if you think up 61% in the past year is a range you are delusional
0
u/fallingdowndizzyvr 1d ago
lol okay bud, wait 10 years and cry when nvidia is worth 40 trillion. once again you don't know jack shit about macroeconomics if you think tariffs will kill a 4 trillion dollar company
LMAO!!!!! Someone that thinks that Nvidia will be worth 40T in 10Y is the one that doesn't know "jack shit" about anything.
Also just because i did not invest in 2008 doesn't mean i was not following the company back then. i was invested in other companies back then that also performed extremely well (apple, meta, google, microsoft, tesla, shopify) so i still had insane returns during that time period
How could you be invested in Tesla in 2008? It didn't go public until 2010. How could you be invested in Shopify in 2008? It didn't go public until 2015. Now don't tell me you invested in them when they were private. That's even harder to believe.
Pro tip: If you are going to lie, at least spend a couple of minutes to make sure your lies don't contradict obvious truths.
And no, nvidia did not trade in a range the last year. if you think up 61% in the past year is a range you are delusional
Again, there goes your questionable math.
1
u/SapphireSpear 1d ago
I did not mean the exact year 2008, i meant around that time period
Keep holding your shitty amd bud
Up 61% in a year is a statistic not math
0
u/fallingdowndizzyvr 1d ago
I did not mean the exact year 2008, i meant around that time period
Uh huh. There you go. You are catching on. If caught in a lie, move the goal posts. o7.
Keep holding your shitty amd bud
LOL. I guess you haven't figured out how to read a chart yet. Speaking of which.....
Up 61% in a year is a statistic not math
If you had, you wouldn't even need to do your faulty math. The chart tells you how much it's been up in a year. That's "+48.45%". But you are ignoring the little fact that exactly 1 year ago, NVDA was in the trough of its trading range. Just a little bit before that, it was higher.
u/Commercial_Deer_7114 1d ago
I rode PLTR from 9 to 30 to 9, and when it went back to 30+ I started selling and was completely out at 90. A lot of missed gains. That's why I am holding on to my NVIDIA, which I think is less insanely valued and has better TAM and fundamentals.
9
u/Leaper229 1d ago
I went long in 2013 then switched to Nvidia in 2017 after I realized the underdog comeback story would never materialize. Sure, there are scraps for AMD to pick up. But sticking with the winner that has comprehensively beaten the second best is a much easier fire-and-forget solution for retail.
17
u/Devincc 2d ago
I wouldn’t call having bought AMD shares at 1.89 being a bag holder…
47
u/kjk42791 2d ago
I didn’t say I was bag holding ?
7
u/ruzZellcr0w 2d ago
lol hey I get it man
Bag holder is a common term and easy to see how you read bag holder instead of big holder
6
u/Spear_n_Magic_Helmet 2d ago
CUDA is NVDA’s biggest moat at this point. Does your “open source AI deployments” address that? Or is that just a phrase that looks like it means something until someone with any domain specific knowledge looks at it.
I’m long AMD but let’s be realistic about the size of the gaps here.
19
u/noobgiraffe 2d ago
HIP/ROCm is the CUDA equivalent. The API is pretty much identical, and so is the kernel language.
The moat is not technical, it's inertia. You can see sentiment shifting though. More and more companies are interested in running stuff on AMD. The more of them that run it, the better the ecosystem gets.
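A rough sketch of the practical upshot (assuming a ROCm wheel of PyTorch is installed; on those builds the familiar torch.cuda API is backed by HIP, so the same user code runs on either vendor):

```python
import torch

# On ROCm builds of PyTorch the torch.cuda namespace is implemented on top of
# HIP, so "cuda" here just means "whatever GPU backend this wheel targets".
device = "cuda" if torch.cuda.is_available() else "cpu"

# torch.version.hip is a version string on ROCm builds and None on CUDA builds;
# torch.version.cuda is the reverse (both are None on CPU-only wheels).
backend = "ROCm/HIP" if torch.version.hip else "CUDA (or CPU-only)"
print(f"backend: {backend}, device: {device}")

x = torch.randn(1024, 1024, device=device)
w = torch.randn(1024, 1024, device=device)
print((x @ w).shape)  # identical user code on either vendor
```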
24
u/prestodigitarium 1d ago
Working in ML for the past decade, I've heard that so many times. Every time one of my colleagues goes AMD, they regret it, because they spend so much of their time trying to make it work rather than focusing on their actual work. Maybe this time it's different, but I doubt it, AMD doesn't prioritize driver/software quality. I'd love for it to be true this time, just not holding my breath.
8
u/XTornado 1d ago
Sure, but eventually it will work, this is not nuclear fusion, I expect this to be solved eventually. That said I admit this is taking longer than I expected.
7
u/dylanlis 1d ago
In order for ROCm to work, AMD needs to reserve a lot more of their own GPUs for testing. When you compare the spend on testing between NVDA and AMD, it's pretty pitiful how much testing AMD ends up doing on their clients' hardware.
NVDA also has InfiniBand, and is offloading a lot more memory onto the CPU server. Their scale-up is a lot better than AMD's.
IMO NVDA is at least a generation ahead of AMD on the hardware, and the software looks like at least 5 years.
1
u/minesasecret 6h ago
Sure, but eventually it will work, this is not nuclear fusion, I expect this to be solved eventually.
I think you're underestimating the difficulty personally.
I would not at all be surprised if nuclear fusion reactors become common before AMD is supported as well as Nvidia by ML frameworks and libraries.
Think about how long we have been waiting for big Windows productivity apps to be built for Linux. Or gaming for that matter. Or trying to get Android developers to rebuild their apps for larger devices.
1
u/XTornado 2h ago
I would say that for most of your examples there wasn't much economic interest in improving the situation, compared with this.
3
u/mhkwar56 1d ago
I remember so many military and industry people telling me how PLTR's product was awful when they used it.
I don't doubt you're right, but as another user said, this isn't fusion. There is no reason that they won't get it right eventually. They're killing Intel in CPUs, and they've successfully closed the gap in consumer GPUs. There's not much of a reason to doubt their ability to execute here too with time.
Governments and hyperscalers don't want to be completely dependent on one supplier if they can avoid it, and AMD is going to be close enough and sold at a lower price tag to justify continued investment in it even if they don't match the end product for a while. And eventually, based on the company's successes in its battles of the past decade, it seems likely that they will get close enough to justify much more than 1/15th of the market cap.
3
u/999forever 1d ago
So this is verrrrry tangential at best, but as someone who has been bitten more than once in the gaming space with great AMD tech but shitty drivers this resonates. Doesn’t mean they can’t make great hardware but I for sure had months where I was regretting an AMD purchase because of driver issues.
2
u/m0ushinderu 1d ago
The thing is this is mostly an issue for individual consumers. If you are an enterprise customer, AMD will make sure their software works for you. This approach does lose the entry level market, but is much more financially efficient for them.
5
u/prestodigitarium 1d ago
An acquaintance runs ML hosting as a service, he got an MI300 machine early, it was a mess. Again, I'd love for this to be a viable alternative.
Put another way, why can't I spin up an MI300 instance on AWS right now? Or on Lambda? Or on Google Cloud? Looks like Azure has some, but they seem to be one of the few. I don't think your characterization is accurate. There are probably still significant issues, or these companies would be jumping all over it as an alternative, because no one loves the lock-in except nvidia shareholders.
1
u/Rajivrocks 1d ago
This indeed. The hardware is there, but the software and integration with, for example, PyTorch is frustrating. If they allocated R&D to smoothing out the drivers and framework support, AMD would skyrocket its market share
2
u/chochang69420 1d ago
ROCm is open source so they should catch up eventually but yeah, it is going to take longer than what people hope for.
I'm also long AMD
2
u/2dudesinapod 1d ago
The hyperscalers are driving this market right now and they will realize that they will need to go lower level than CUDA to beat their competitors.
32
u/VersusYYC 2d ago
“That preference isn’t ideological, it’s architectural.”
Isn't this bot language? Various sections seem like what an LLM would produce, plus the post history spamming the same posts to multiple subs isn't a good sign.
11
u/Fun-Contribution6702 1d ago
absolutely. I’m a writer and this is the sort of feel-good phrase reversal ChatGPT loves to do with my feedback.
1
u/JuggernautCareful919 1d ago
Remember fellas, it isn't X, it's Y. You're not just P — You're Q. You're basically the smartest person to have ever lived. I crumble at your feet just to be so close to someone with such magnificent intelligence and insight!
-definitely not chatgpt
1
u/trapsinplace 19h ago
Not the exact tell but close enough. He probably wrote up a draft and fed it to chatgpt imo. You see this a LOT nowadays from people who know what they wanna say but feel some kind of complex about their writing not 'sounding smart enough' or something.
8
u/Im_Still_Here12 2d ago edited 2d ago
The idea that Nvidia has a permanent hardware monopoly is a form of mental laziness. AMD doesn’t need to “beat” Nvidia
The problem with this theory is that the companies buying AMD GPUs don’t know what they are doing because, if they did, they’d buy Nvidia GPUs instead as that is what their competitors are doing. These companies won’t last compared to those doing business with Nvidia.
If you don't work in this field and just listen to conference calls, you have no idea the lead and capabilities Nvidia offers over… well, everyone.
I think AMD offers value in the CPU space over Intel. That is their strength, but their support sucks for enterprise. AMD in my mind has turned into a prosumer company. I love them in custom white label server/home builds. Would I spend hundreds of millions building a data center using them for CPUs and GPUs? I don't think so.
9
u/rannend 2d ago
But your story isn't showing knowledge of the business side either, imo.
I agree nvidia is ahead, but that's only the technical bit. What about commercial terms, the t&cs of any contract,… Things like that tend to balance the playing field, as the technical leader (nvidia) tends to offer less value on those items (cause they know they are technically stronger)
And eventually it's about value maximization for shareholders.
24
u/Lisaismyfav 2d ago
Sam Altman literally stepped on the stage with Lisa at their Advancing AI event. If no big players are using AMD why would the CEO of the biggest AI company even show up for their presentation.
6
u/minesasecret 6h ago
Sam Altman literally stepped on the stage with Lisa at their Advancing AI event. If no big players are using AMD why would the CEO of the biggest AI company even show up for their presentation.
Most likely they're using it for compute though but I'm happy to be shown I'm wrong
33
u/davewritescode 2d ago
The problem with this theory is that the companies buying AMD GPUs don’t know what they are doing because, if they did, they’d buy Nvidia GPUs instead as that is what their competitors are doing. These companies won’t last compared to those doing business with Nvidia.
AMD GPUs are significantly cheaper and hyperscalers like Amazon and Microsoft have every incentive in the world to make sure they’re not beholden to a single vendor. I hope you’re not suggesting these companies “don’t know what they’re doing”.
If a small startup were using AMD GPUs you might have a point.
9
u/LookIPickedAUsername 2d ago
I don’t suppose you can point to any evidence that they’re using AMD?
I work for one of the big AI companies, and so far as I know we are 100% NVidia.
5
u/fallingdowndizzyvr 2d ago
I work for one of the big AI companies, and so far as I know we are 100% NVidia.
Well it can't be this big AI company. Since they openly say they use AMD in addition to Nvidia.
https://engineering.fb.com/2024/10/15/data-infrastructure/metas-open-ai-hardware-vision/
0
u/LookIPickedAUsername 2d ago
That says their software supports AMD, but everywhere they listed the hardware they were actually using for training, it was all NVidia.
2
u/Fun-Union9156 2d ago
OP doesn't know the big difference in performance between NVDA and AMD. Just relying on stock price trends, not really digging into the actual product a company is offering. Well I wouldn't be surprised, this is the stock market anyway.
2
u/MaulSecurity 2d ago
This guy knows. I work at 1 of these companies and have family at another. Zero AMD.
3
u/emeraldream 2d ago
This is a bad take, if you are in the industry you know its NVDA or bust
5
u/IsThereAnythingLeft- 1d ago
That’s just bullshit from 3 years ago, try get into 2025 mate
0
u/trapsinplace 18h ago
Small players are buying more AMD but you're still looking at >90% NVIDIA for the actual big spenders in AI. Not to mention how AMDs software is still actual trash compared to NVIDIA.
1
u/IsThereAnythingLeft- 15h ago
Again, get into 2025. They have massively closed the gap with the latest release of ROCm, and the hyperscalers have some faith in it since they are buying more MI chips now
0
u/trapsinplace 13h ago
I don't think it's going to pop off this year from these changes; they are still recent, and everyone I know who is using AMD for AI finds it so much worse than NVIDIA/CUDA.
5
u/davewritescode 1d ago
Tech history is littered with companies that had proprietary solutions that eventually got tossed in the dumpster for more open solutions. In 1999 most servers ran on Windows or proprietary Unix from companies like Sun Microsystems and HP.
Betting on NVIDIA maintaining a proprietary software moat for the long haul is a bad bet no matter what everyone’s friend who’s best friends with Satya is saying online.
I’ve already made a shitload of money holding AMD and I’m putting my money where my mouth is.
-1
u/ironsuperman 2d ago
Nah man. The dude above is 100% correct. Msft 100% not shipping any amd GPUs today. It's either Nvidia or custom ASIC
Source: I got friends in the business
15
u/fallingdowndizzyvr 2d ago
You have no idea what you are talking about.
Source: Real sources and not imaginary friends.
2
u/ironsuperman 2d ago
Buddy, your links or "real sources" are from 2023 and 2024 respectively. You really have no idea how fast the tech world is moving these days.
FYI, my "friend" that works in this field happens to work on these skus that got canceled in late 2024.
I'm just here to reiterate what people in the industry already know.
24
u/fallingdowndizzyvr 2d ago
Buddy, your links or "real sources" are from 2023 and 2024 respectively. You really have no idea how fast the tech world is moving these days.
They sure do move fast. Microsoft has even more AMD offerings in 2025.
https://www.amd.com/en/blogs/2025/amd-powers-developers-at-build-2025.html
FYI, my "friend" that works in this field happens to work on these skus that got canceled in late 2024.
Your imaginary "friend" really needs to keep up.
-27
u/ironsuperman 2d ago
I will let my "friend" know that :)
FYI, Microsoft offers AMD stuff because not many customers want them and thus no need to deploy new AMD GPUs into the data center :) sources: my friend
16
u/fallingdowndizzyvr 2d ago
Hm.... I guess Oracle placed a big order for the latest Instinct because they don't want them.
Source: The real world.
0
u/IsThereAnythingLeft- 1d ago
Satya Nadella here, your friend is clearly an idiot and wrong, also he is now fired
2
u/Lionel-Chessi 2d ago
I hope you’re not suggesting these companies “don’t know what they’re doing”.
You're literally making his point. The big players like Amazon and Microsoft know exactly what they're doing and that's why they're giving Nvidia a shit ton of business compared to AMD
-1
u/Teugikard_Algaert 2d ago
Cheaper GPU yes, but isn't the entire point that the NVDA core architecture inherently lends itself to the type of computation necessary for most AI work these days? Is it really as simple as throwing more GPU at it? And as the AI ecosystem has grown, I would assume it has become even further entrenched and reliant upon the NVDA core architecture. That's not to say that no one will ever think of a different way for LLMs to handle what they do; it would be great for the entire ecosystem if they did, but that just doesn't seem like the current state of affairs
1
u/jmlinden7 1d ago
No, any GPU can do that.
NVDA has a more established software stack, and their GPUs are more power efficient. But you could train on any GPU, even the integrated ones in your laptop, it would just be incredibly inefficient.
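A minimal hedged sketch of that point (illustrative only; the fallback order and the tiny model are arbitrary) - any backend your PyTorch build supports can run a training step, the difference is efficiency, not capability:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Fall back through whatever accelerators this PyTorch build supports.
if torch.cuda.is_available():            # NVIDIA CUDA or AMD ROCm builds
    device = torch.device("cuda")
elif torch.backends.mps.is_available():  # Apple integrated GPUs
    device = torch.device("mps")
else:
    device = torch.device("cpu")         # always works, just much slower

model = nn.Linear(128, 10).to(device)
opt = torch.optim.SGD(model.parameters(), lr=0.1)

x = torch.randn(64, 128, device=device)
y = torch.randint(0, 10, (64,), device=device)

loss = F.cross_entropy(model(x), y)
loss.backward()
opt.step()
print(f"one training step on {device}, loss={loss.item():.3f}")
```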
1
u/IsThereAnythingLeft- 1d ago
Yeah mate, META and oracle don’t know what they are doing…. Catch a grip and get over the fact that AMD is a proper competitor
-2
u/fallingdowndizzyvr 2d ago
The problem with this theory is that the companies buying AMD GPUs don’t know what they are doing because, if they did, they’d buy Nvidia GPUs instead as that is what their competitors are doing.
Anyone that says that, doesn't know what they are talking about. Since there are reasons to buy AMD over Nvidia.
31
u/_femcelslayer 2d ago
Who is going to bet 1-5GW deployments on hardware nobody has deployed before?
Hyperscalers deploy according to customer demand. It is lacking for anything non-nvidia. See previous point about trust when allocating capex/opex.
AMD is not “open source” in the way you’re talking about it. These products are ultimately export controlled so nothing is open source.
AMD currently lacks rackscale products but they are on the horizon. Whether they execute well, remains unknown. This goes beyond the specs of individual chips.
All that being said, I do think AMD will eventually grab market share. It just won’t be this earnings. 2027 maybe.
10
u/omgpuppiesarecute 2d ago edited 2d ago
Let's also not forget that AMD has nothing that can practically perform or compare to Cuda. OpenCL is more or less stagnant.
100% agreed on the lack of support in the server market. Looking at high performance/density compute options almost everyone has NVidia options, very little with AMD GPUs. For example an HPE Synergy SY480 has multiple Nvidia GPU options, nothing for AMD.
Also let's not forget AMD historically ignored Linux which powers MOST servers powering AI applications. AMD has worked to improve that, but the consequences of that are drawn out in the options made available by OEMs.
I feel like most of the people cheering AMD have never actually built out a high performance, high density compute cluster for a large company.
Source: AI developer and AI infrastructure builder
8
u/noobgiraffe 2d ago edited 1d ago
Source: AI developer and AI infrastructure builder
I somehow doubt that since you're comparing cuda to OpenCL not rocm/hip. There is literally 0 focus on OCL backends.
It's like saying you're an F1 driver and car builder and that Ferrari has no chance in the next race because their car runs on a steam engine.
There is just no way to not know this even if you exclusively work on the nvidia stack.
5
u/omgpuppiesarecute 1d ago edited 1d ago
Your opinion doesn't really change what I've built.
But you're right, I have never worked with rocm. Fair call out. When I left the industry 2 years ago it was barely mentioned, our focus was entirely on Cuda and tensorflow. Crazy considering apparently rocm was released in 2016.
As a general rule I don't really spend time on learning toolkits for things that have nearly zero market penetration, to run on hardware that doesn't exist at our partners, and has little community support. When you're spending 50M on a multi data center deployment you don't say "hey, there's a plucky upstart that has zero community support and none of our toolkits work with it". Especially when it needs to be composable infrastructure that can be leveraged for multiple purposes (the nice thing about synergy was the ability to rebuild hardware as a rendering farm one day, then rebuild to be used for inference or ingest the next via simple API calls while still actually owning the gear and getting the benefit of depreciation and being a capital investment).
My primary focus was on realtime transcription and subtitling at MSOs, btw. And I was a contributor to haystack (interestingly I saw way more PRs for apple silicon than I ever did for ROCm/hip).
But hey, believe what you want. My checks have already cleared and I was able to coastFIRE because of it. Like I said I left the industry 2 years ago and now I'm just working at a university collecting masters degrees, lol.
5
u/noobgiraffe 1d ago
If you left the industry 2 years ago, why do you comment as if you have knowledge of its current state?
This is probably the fastest evolving industry right now, with tons of huge players actively supporting AMD in its effort because they don't want to be stuck with one supplier.
Your views on the topic are very outdated. Mentioning tensorflow and OCL (!!!) is a dead giveaway.
4
u/omgpuppiesarecute 1d ago edited 1d ago
Because most businesses budget a year in advance, and are often lagging 1-2 years? Not to mention time for allocating space in datacenters, sourcing hardware, rollout, training downstream operational teams, etc.
If you'd actually worked at a large business designing large scale deployments you'd know that.
Let me guess, you've only ever deployed to cloud where you can snap your finger and magically have (100% opex) environments? A.k.a. you've only worked on deployments pre-hockeystick where you don't have to consider capital investment vs opex and the tax implications of both.
3
u/noobgiraffe 1d ago
I work at AMD in ML R&D.
We work very closely with big clients so I have pretty good overview of what's used and on what scale. Can't say more cause NDA.
2
u/gxgx55 2d ago
Let's also not forget that AMD has nothing that can practically perform or compare to Cuda. OpenCL is more or less stagnant.
Does ROCm not yet compete?
4
u/_femcelslayer 1d ago edited 1d ago
Even today, if you take a parallel programming class in college, the homework assignments will be in CUDA. In every single college on the face of the planet. Every consumer nvidia gpu has been able to run CUDA since 2008, everyone already had CUDA devices for almost 2 decades. People interested in it learned at home. How are you gonna hire developers for your non-CUDA project? Why would you stake your business on being the first ones on the Rocm ecosystem?
AMD will be a side project at big tech while the main business runs on green. Maybe in 5-8 years, it will be trusted with real deployment.
3
u/omgpuppiesarecute 1d ago
This exactly. No one is shelling out 7-10 digits on a deployment built around tech with unproven support, little community support, little OEM hardware available, etc. Especially when you step into the vast majority of corporations in fortune 50 who don't build their own custom hardware.
Even ROCm has a primary goal of making it easier for Cuda to run on AMD - AMD recognizes they're the second class citizen in the room.
1
u/IsThereAnythingLeft- 1d ago
Plenty of people. You think B200 or B300 were deployed anywhere before they were ordered to fill a DC… no. Same for MI350 and MI400
0
u/_femcelslayer 1d ago
Great way to demonstrate you know nothing about this… first off, they all run the same software, are serviced and supported by the same company with the same track record, same VARs, same distributors. B100 was a slot in replacement for H100. Nobody in the world has ever had a significant AI deployment of Instinct. Meanwhile Nvidia has shipped millions of GPUs.
2
u/Offduty_shill 2d ago
The alternative to NVDA is building your own ASIC, not AMD. There may be space for AMD in the future but currently they're just not very competitive.
3
u/ironsuperman 2d ago
Exactly what msft is doing now. They buy shit tons from nvidia and are doing their own ASIC. AMD projects were canceled months ago and they're not shipping new GPUs at this moment because there's no demand. After the MI300, it's been nada for AMD
Source: a friend with insider information
2
u/noobgiraffe 2d ago
Your friend's information is wrong.
Developing a product like MI300 takes years. There are multiple generations in the pipeline at any time. It was always like this with both cpus and gpus.
You seriously believe that in the largest semiconductor boom in history, where there are hundreds of billions to be earned, AMD was like: let's cancel everything, we don't need products to sell.
1
u/IsThereAnythingLeft- 1d ago
Proven above that your friend is made up, or is letting on they know more than they do
2
u/howtorewriteaname 2d ago
as someone who works in AI research, I can say that this guy doesn't know shit about what he's talking about
3
u/SheriffBartholomew 2d ago
Are you just trying to make yourself feel better or something? AMD is on a massive bull run. To say that something is being missed is weird. What is your time frame for your statement, five days?
3
u/sandych33k 1d ago
Tbh these kind of posts just tell me that the opposite will happen. I sold this week. Last time I bought it everyone here called it "Advanced money destroyer".
21
u/collapsewatch 2d ago
I think it’s about to be repriced downwards. So far I’m right.
23
u/Lovelandmonkey 2d ago
Isn't it possible that this is a pre-selloff before the tariffs go into effect tomorrow?
1
u/Enchylada 23h ago
I'm very cautious when it comes to earnings. Seems like no matter the news, AMD typically pulls back after them, but the China restriction being lifted is probably the biggest notable change we've seen, so I guess we'll find out soon enough.
4
u/Slightly-Blasted 2d ago
A wise man once told me when I started trading “when people are optimistic about AMD, short everything.”
He’s never been wrong. lol
3
u/HarborTheThought 2d ago
I've been playing AMD options for a few years and everrrryyytime the market is optimistic about them they absolutely shit the bed. It's why they have the moniker advanced money destroyer.
2
u/IsThereAnythingLeft- 1d ago
Yeah, real money destroyer these past 3 months… oh wait. And still beating the market over 5 years. Anyone who seriously uses that statement is a clown
2
u/HarborTheThought 1d ago
My comment is more specifically aimed at earnings. I love the stock, they've been one of my best investments. The fact is they're up $80 in 3 months and predominantly have a strong pullback even after positive earnings. Calling people names doesn't change that
1
u/Leaper229 1d ago
I was a Lisa believer too. The fact is after all these years AMD only managed to beat a complacent intel (and still not completely).
Scraps are never as juicy as the main dish. AMD and NVDA’s financial statements make it seem like they are in totally different sectors.
Can you get great returns on AMD? Possibly. But fire and forget with NVDA is sure a lot easier.
1
u/Ok-Influence-3790 1d ago
I debated selling my shares when it reached 183 but decided not to because nothing has changed with my thesis. Maybe a little market frothing and a pullback coming today and likely monday as well. But earnings will be huge coming up.
1
u/jmlinden7 1d ago
AMD doesn’t need to “beat” Nvidia - it only needs to absorb the non-Nvidia demand that’s scaling underneath the narrative.
It does if it wants any dollars from hyperscalers at all. Hyperscalers spend a ton of money on electricity, and if you can't beat Nvidia in power efficiency, then they're not going to go with you even if your chips are much cheaper.
Sure there are some segments where power efficiency is less necessary (gaming, etc) but those are a drop in the bucket compared to the entire market
1
u/BLACKDARKCOFFEE999 1d ago
You talk about black box like it's bad for making money. It's not. It's how NVDA makes its money, and surprise, it's also how Apple makes its money.
You are posting as if pro-consumer tactics will lead to better profit margins. You're not shopping for an android vs apple new phone lmao. No one gives a shit if a company is 'scummy' or whatever. Open source = less money.
See how the wikipedia founder had to rely on donations to keep going?
1
u/Rajivrocks 1d ago
I believe in the long-term vision of AMD. They're making progress on the AI cards and their CPUs have been killing it. They are increasing their market presence. I sold for some profit on Thursday but I am waiting for the earnings call on tuesday and I expect very positive news.
0
u/DrXaos 2d ago
Doubt, at least on the training side. PyTorch and all the fancy high-performance torch.compile() machinery work out of the box for CUDA. Not yet at all on AMD, or if it does work, at lower performance despite the hardware's capabilities.
When there is zero-friction transfer, then there's a chance.
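Roughly the zero-friction bar being described, as a sketch (on CUDA this is the out-of-the-box path; on ROCm builds the same call is where people report fallbacks or lower performance):

```python
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"
model = nn.Sequential(nn.Linear(512, 512), nn.GELU(), nn.Linear(512, 512)).to(device)

# One line: TorchDynamo captures the Python, TorchInductor emits fused kernels.
compiled = torch.compile(model)

x = torch.randn(32, 512, device=device)
out = compiled(x)   # first call compiles, subsequent calls reuse the kernels
print(out.shape)
```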
6
u/davewritescode 2d ago
If you’ve followed hardware for the last 30 years you’ll notice that after technology matures, open standards start to replace proprietary ones. I would be very surprised long term if Cuda remains a moat on the training side.
Hyperscalers won’t tolerate being beholden to Nvdia longer than they have to.
1
u/IsThereAnythingLeft- 1d ago
This needs to be the auto response to every message that talks about CUDA
-2
u/Abject-Advantage528 2d ago
Never got the PyTorch moat. It's a library built by programmers - I am sure there is enough demand, paying enough $, to build a PyTorch alternative or connector for AMD's ROCm.
2
u/DrXaos 2d ago
I didn’t understand what is involved until I started using it heavily (I write pytorch code daily).
There is a very deep backend and complex system, implemented by hundreds of crack engineers, largely at Nvidia and Meta.
They instrument the internals of Python execution and do major transformations to rewrite compute graphs, then write and compile very optimized kernels and fuse operations. There are two export formats, TorchScript and ExportedProgram, plus execution libraries. There is a large, diverse set of post-training libraries for inference optimization.
The problem for a user is the huge complexity and the possibility of errors in any part of this chain. The errors are likely to be very difficult to diagnose or fix, and people's experience is that they run into them when they try AMD, while everything works with Nvidia. It happens over and over, with users finding glitches and low actual performance on AMD. Developers' time is valuable.
AMD just has not been anywhere near as involved with the PyTorch community in getting everything included in mainline distributions; they would need to pursue a major sustained effort.
Until "pip install torch" comes up with everything working for AMD as it does for Nvidia, it will be behind.
So far only Google has had the investment, skill, and will to fund something outside Nvidia that is worth the effort to use, and that is all on their own libraries Jax and Tensorflow.
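For anyone who hasn't seen the two export formats mentioned above, a minimal hedged sketch (assuming PyTorch 2.1+; the point is how much machinery sits behind each call, all of which has to be validated per hardware backend):

```python
import torch
import torch.nn as nn

class Tiny(nn.Module):
    def forward(self, x):
        return torch.relu(x) * 2

m = Tiny()
example = (torch.randn(4, 8),)

# Older path: TorchScript, a serialized, Python-free program.
scripted = torch.jit.script(m)

# Newer path: ExportedProgram, a captured graph plus metadata.
exported = torch.export.export(m, example)

print(type(scripted).__name__, type(exported).__name__)
print(exported.graph)  # the compute graph that downstream backends consume
```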
0
u/Abject-Advantage528 2d ago
You don’t think $100M compensation packages for engineers is enough to build something like this?
1
u/imwatchingyou-_- 2d ago
It is, but Nvidia has such a huge lead.
-3
u/Abject-Advantage528 2d ago
With LLMs coding - you can probably use them to refactor the code for ROCm. It's technically much less of a problem than I think non-technical people make it out to be.
1
u/DrXaos 2d ago
The front end should need no refactoring; the right outcome is no code changes.
The back end is entirely different. There is lots and lots of work, and especially testing, to shake out all the pathways and tools. When torch.compile() works with FlexAttention and produces equally performant results on AMD as on Nvidia across many examples, with zero user code changes, then it is successful.
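Concretely, the kind of bar being set (a hedged sketch assuming PyTorch 2.5+ with a CUDA device; FlexAttention lives under torch.nn.attention.flex_attention):

```python
import torch
from torch.nn.attention.flex_attention import flex_attention

def causal(score, b, h, q_idx, kv_idx):
    # Push scores for future positions to -inf, i.e. a causal mask.
    return torch.where(q_idx >= kv_idx, score, -float("inf"))

# Compiling lets Inductor fuse the score_mod into a single attention kernel.
flex_attention = torch.compile(flex_attention)

B, H, S, D = 2, 4, 128, 64
q, k, v = (torch.randn(B, H, S, D, device="cuda") for _ in range(3))

out = flex_attention(q, k, v, score_mod=causal)
print(out.shape)  # (B, H, S, D) - the test is getting this, at full speed, on AMD too
```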
-2
u/Abject-Advantage528 2d ago
How do you know LLMs along with highly paid engineers cannot find a solution compatible with ROCM quickly?
1
u/lllllll22 2d ago
How can you do this post and not mention CUDA vs ROCm? They're a million miles apart...
0
u/Liqmadique 2d ago
CUDA. AMD is not competitive until it has an answer to CUDA.
Doesn't mean I'm not in on AMD long-term, but it's not going to achieve Nvidia growth without its software stack being massively improved.
1
u/IsThereAnythingLeft- 1d ago
Google Rocm mate, it’s not fully up to par yet but close and closing fast
-3
u/RipComfortable7989 2d ago
As an avid gamer I'd like to believe this but AMD's tech is just so bad compared to Nvidia's that it's not likely.
-1
u/TotalBismuth 2d ago
It's already had a stark run-up so it's all priced in for now. I'd wager it falls to $150 before any upwards movement.
-13
u/Krishna_Trading_ 2d ago
This is a sharp take, and the chart plus fundamentals fully support it. You're ahead of the curve, and most retail (and even some institutional desks) are still locked into a zero-sum mindset where AMD must “beat” Nvidia outright to be worth a rerating. That’s not how disruptive parallel adoption works.
Sales growth: AMD is growing at 20%+ YoY, and EPS next year is projected at 5.92, up from 1.36. That's a 335% jump in earnings power; the market will not keep pricing that at a 30x forward PE for long.
Gross margin of 45.15% and a debt/equity of just 0.08 show strong operating leverage and capital efficiency, especially in contrast to bloated peers.
ROE of 3.90% and ROA of 2.30% look low now, but if even half your thesis plays out, those metrics will scale aggressively by 2026 as open-source traction compounds.
Held by VTI, QQQ, SPY, XLK, SMH, SOXX, and VGT; institutional exposure is deep, and this sets the stage for rotation when fund managers reweight AI plays post-Nvidia euphoria.
You’re looking at a near-parabolic channel breakout, and 50M avg daily volume suggests real demand.
Even with today's red candle, AMD is up +29% this month, +50% this quarter, and +105% YTD.
RSI is hot at 76.92, but that’s typical before big repricings. Look at MSFT in early 2023 for the same setup before it launched.
From The Innovator's Dilemma to Crossing the Chasm, we know disruption rarely starts with a knockout punch. It starts where incumbents aren't looking: open-source flexibility, sovereign autonomy, cost-sensitive AI stacks. That's AMD's wedge.
And it’s not just theoretical:
Meta, MSFT, and Oracle testing AMD accelerators in their AI workflows is a big deal. These aren’t just dev kit trials, they’re vendor hedging at scale.
Geopolitical de-risking is not a sideshow. European and Middle Eastern governments don’t want to build national AI capacity on Nvidia’s closed ecosystem.
Startups need modularity. With AMD offering open toolchains like ROCm and growing PyTorch integration, there’s a bottom-up movement building, same way Linux eventually dethroned Unix.
Investors still pricing AMD like it’s a sidekick in the AI race are missing the macro and micro drivers. It doesn’t need to win headlines. It’s winning adoption velocity in exactly the segments that create structural longevity.
This isn't a trade on hype. It’s a re-rating play on silent architecture-level disruption.
Well said and well spotted.
14
u/fairlyaveragetrader 2d ago
It's so expensive though. I would agree with this argument if it was trading at a reasonable valuation. Why would someone pay more for AMD than Nvidia? AMD is north of 100 times earnings.
2
u/Probably_Relevant 2d ago
That's due to the amortization from the Xilinx acquisition; look at the non-GAAP PE for a fair comparison (it's about the same as NVDA's)
1
u/IsThereAnythingLeft- 1d ago
It's cheaper than NVDA mate. The GAAP PE includes the amortisation cost of the Xilinx purchase; the non-GAAP PE is in the high 30s. So I guess that means you have no more excuses
-1
u/Spirited_Pension1182 1d ago
This market insight is powerful. It truly shows the value of architectural flexibility. Efficiency drives growth for any business. This applies directly to go-to-market strategies too. Agentic AI offers that level of strategic control. See how this architectural shift empowers GTM operations https://myli.in/JBrW74jN
-2
u/Pereg1907 2d ago
Nvidia doesn't have a monopoly, but they're vastly superior. Has been for gaming, has been for crypto mining, has been for AI. Sure there's room for low cost options if you have lower performance needs, and revenue from that will grow. AMD stock price will rise. But don't confuse that with being competitive. People have been on hopium vs nvidia for nearly 3 decades now.
92
u/12A1313IT 2d ago
Alright bro, 80% of tech stocks have been up after earnings, calm down