Micron: The AI Memory Supercycle!
US memory manufacturer experiencing AI-driven, Nvidia-like demand growth for its chips!
Welcome to Global Equity Briefing, my weekly investing newsletter.
I am Ray, a passionate investor and equity analyst. And today I am covering Micron!
For 50+ years, the memory industry was viewed as a brutal, cyclical commodity market.
Companies in this space were prone to extreme boom-and-bust cycles, where periods of high profitability were quickly followed by devastating oversupply and price collapses.
However, the rise of AI has fundamentally changed this dynamic.
Micron is transitioning from a cyclical commodity manufacturer into a structural enabler of AI.
Last quarter, the company grew revenues by an insane 196% and net income by 770%, reaching a net margin of 58%!
These types of results are unheard of for an industrial company of such scale and remind us of the stratospheric growth of Nvidia in 2024.
In this Micron Deep Dive, I will examine what is driving the growth and whether the company remains an attractive long-term investment opportunity.
Let’s begin.
1. Business Model
2. Manufacturing Strategy
3. Customers
4. Risks
5. Opportunities
6. Financials
7. Valuation
8. Valuation Model
9. Conclusion
1. Business Model
Micron is the only major American manufacturer of memory chips.
During the 1980s and 1990s, the memory industry went through brutal price wars initiated by Japanese and later South Korean competitors. Dozens of American memory makers were forced into bankruptcy or exited the market.
Micron survived through a relentless focus on cost-efficiency and strategic acquisitions. The 1998 purchase of Texas Instruments’ memory business and the 2013 acquisition of Japan’s Elpida Memory were key moments.
The Elpida acquisition, in particular, doubled Micron’s capacity and secured its position as a primary supplier for Apple’s iPhone.
In 2026, Micron’s core products include:
DRAM
HBM
NAND Flash
NOR Memory
Let’s expand on them.
1.1. DRAM
DRAM is the short-term memory of any computer system.
It provides a high-speed workspace where the processor stores data for immediate access. Unlike storage, DRAM is volatile: the data is lost when power is removed.
DRAM is by far the largest product category, generating $45.6B in LTM revenue, 78.5% of the total!
This is also the primary growth area for the company, with Micron growing DRAM revenues by a mind-boggling 207% in Q1 2026.
The growth is driven by the fact that AI servers require far more memory than traditional enterprise servers. While a standard server might require a few hundred gigabytes of RAM, an AI-optimized server needs up to 8 times the DRAM to handle the massive datasets and parameters of AI models.
Simply put, thanks to the AI boom, DRAM has transitioned from a supporting component to the primary enabler of LLMs.
Micron’s current leadership in this space comes from its 1-gamma node, which is the most advanced DRAM manufacturing process in the world.
The 1-gamma node is expected to become the highest-volume node in company history and is on track to represent the majority of the DRAM bit mix by the middle of calendar 2026.
This technology provides several benefits for AI:
Bit Density: a 45% increase over the 1-beta node, allowing for more memory in the same space.
Latency: 17% lower latency for AI inference tasks.
Energy Efficiency: 24% better energy efficiency, reducing data center power costs.
As AI models grow in size, they require more DRAM capacity per server. Previous-generation servers used several hundred gigabytes of DRAM, but new AI-optimized platforms are pushing requirements into the terabyte range.
This creates a structural increase in the total addressable market for DRAM, on top of the usual replacement cycles.
In short, by utilizing ASML’s EUV machines, Micron has been able to increase bit density and improve power efficiency, a must for data centers where electricity is a major cost driver.
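To put the capacity shift in perspective, here is a back-of-the-envelope sketch. The 8x multiplier is the figure quoted above; the 512GB baseline server and the 1-trillion-parameter model are my own illustrative assumptions:

```python
# Back-of-the-envelope DRAM sizing. The 8x multiplier comes from the
# article; the 512 GB baseline and the 1T-parameter model are my own
# illustrative assumptions.

standard_server_gb = 512                 # assumed traditional enterprise server
ai_server_gb = standard_server_gb * 8    # the up-to-8x figure quoted above
print(f"AI server DRAM: {ai_server_gb / 1024:.1f} TB")   # 4.0 TB

# Why the jump: weights alone are enormous. A 1-trillion-parameter model
# stored in FP16 (2 bytes per parameter) needs:
weights_tb = 1e12 * 2 / 1e12
print(f"Model weights only: {weights_tb:.1f} TB")        # 2.0 TB
```

Even before counting activations, caches, and datasets, a trillion-parameter model eats through terabytes of memory, which is why per-server DRAM content keeps climbing.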
1.2. HBM
High-bandwidth memory (HBM) is a specialized type of DRAM that uses a vertical stacking architecture to achieve extreme data transfer speeds.
Instead of placing memory chips side-by-side, HBM stacks multiple DRAM dies on top of each other.
This stack is then placed on the same package as the AI processor, allowing for a much wider interface and shorter signal paths.
The main use cases for HBM include:
AI training
High-performance computing
Advanced graphics rendering
In AI systems, HBM is the most critical component because it feeds data to the GPU.
Without HBM, the GPU would be starved of data, making it effectively useless for advanced AI. Micron entered the market with HBM3E, which offers approximately 1.2 TB/s of bandwidth per stack and up to 36GB of capacity.
Micron’s HBM3E is particularly competitive because it uses 30% less power than similar products from competitors. But the industry is now moving toward HBM4, which Micron began shipping in volume during Q1 2026.
HBM4 brings a major architectural change: bandwidth more than doubles to 2.8 TB/s per stack, and maximum capacity rises to 64GB.
AI desperately needs HBM4 because model parameter counts are growing faster than memory bandwidth can keep up.
OpenAI’s and Anthropic’s trillion-parameter AI models are hitting a physical limit with the older 1024-bit interfaces. By doubling the interface width to 2048-bit, HBM4 delivers massive throughput without needing to run each pin at higher speeds and voltages.
This helps manage the thermal wall, which is the heat trapped inside the stack that can damage the chips. Micron’s HBM4 is designed into the Nvidia Vera Rubin platform.
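The bandwidth math behind these figures is simple to sanity-check. In the sketch below, the interface widths come from the discussion above, while the per-pin data rates are my own assumptions, chosen to roughly reproduce the quoted stack bandwidths:

```python
# Peak per-stack bandwidth = interface width (bits) x per-pin data rate.
# Interface widths (1024- vs 2048-bit) are from the article; the per-pin
# rates are assumptions picked to roughly reproduce the quoted numbers.

def stack_bandwidth_tbs(width_bits: int, pin_rate_gbps: float) -> float:
    """Peak bandwidth of one HBM stack in TB/s."""
    return width_bits * pin_rate_gbps / 8 / 1000   # bits->bytes, GB/s->TB/s

print(f"HBM3E: {stack_bandwidth_tbs(1024, 9.6):.2f} TB/s")   # ~1.23 TB/s
print(f"HBM4:  {stack_bandwidth_tbs(2048, 11.0):.2f} TB/s")  # ~2.82 TB/s
```

Doubling the interface width at a similar per-pin rate is what doubles the bandwidth without pushing signaling speed, voltage, and heat higher.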
HBM revenues are reported within the overall DRAM segment, so we don’t know the exact figure, but HBM is likely a large share of the segment and the majority of its growth.
Independent analysts estimate that Micron made about $6-10B in HBM revenues last year, roughly 16-27% of total group revenues.
1.3. NAND Flash
NAND flash is the permanent, non-volatile storage technology that retains data even when power is turned off.
It is the core technology used in solid-state drives, smartphones, and memory cards.
Unlike DRAM, which is built for speed, NAND is built for high-capacity storage. The main use cases include storing operating systems, applications, and large datasets.
For AI, NAND is used to store the massive amounts of data that are needed to train models.
The NAND segment generated $12.1B in LTM revenue, about 21% of the total!
Similar to DRAM, the NAND segment has also experienced stratospheric AI-driven growth, with revenues growing by an incredible 169% in Q1 2026.
As AI data centers are increasingly moving away from previous-generation hard disk drives and toward all-flash storage to speed up data access, Micron has released several products to support this trend:
245TB 6600 ION SSD: This is the highest-capacity data center drive available, allowing for a massive reduction in the number of server racks needed for storage (see the quick math after this list).
122TB E3.S SSD: This drive offers 67% more server rack density than previous form factors and 37% better energy efficiency than using multiple hard drives.
G9 NAND: Micron’s latest NAND technology provides the high speeds necessary for AI data lakes and vector databases.
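The rack-reduction claim is easy to sanity-check. The 245TB figure is from the list above; the 30TB hard drive is my assumed comparison point:

```python
# Drives needed to hold 100 PB of raw capacity. The 245TB SSD figure is
# from the article; the 30TB nearline HDD is an assumed comparison point.
# RAID, spares, and formatting overhead are ignored.

target_tb = 100 * 1000   # 100 PB in TB
print(f"245TB SSDs needed: {target_tb / 245:,.0f}")   # ~408
print(f"30TB HDDs needed:  {target_tb / 30:,.0f}")    # ~3,333
```

Roughly 8x fewer drives means far fewer racks, less floor space, and less power per petabyte stored.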
One of the growing uses for NAND in AI is in inference.
During the inference process, AI models generate a large amount of temporary data called the KV cache to remember the context of a conversation.
As context windows grow to millions of tokens, this cache can become too large to fit in DRAM. High-speed NAND SSDs are now used to store this cache, which prevents the model from having to recalculate the entire context from scratch when a user asks a follow-up question.
This offloading helps improve the efficiency of AI systems and reduces the total cost of ownership for data center operators.
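To see why the KV cache outgrows DRAM at long contexts, here is a rough sizing sketch. The formula is the standard one for transformer KV caches; the model dimensions are hypothetical, loosely sized like a large open-weights model rather than any specific product:

```python
# Rough KV-cache sizing for transformer inference. The formula is the
# standard one; the model dimensions below are hypothetical, loosely
# sized like a large open-weights model, not any specific product.

def kv_cache_gib(tokens, layers=80, kv_heads=8, head_dim=128, dtype_bytes=2):
    # 2 = one Key tensor + one Value tensor per layer
    return 2 * tokens * layers * kv_heads * head_dim * dtype_bytes / 2**30

for context in (128_000, 1_000_000, 10_000_000):
    print(f"{context:>10,} tokens -> {kv_cache_gib(context):,.0f} GiB")
# ~39 GiB, ~305 GiB, ~3,052 GiB
```

At multi-million-token contexts the cache alone exceeds the DRAM of most servers, which is exactly the gap that high-speed NAND is being used to fill.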
1.4. NOR Memory
NOR memory is another type of non-volatile storage, but it is optimized for high-speed reading of small amounts of code rather than large datasets.
It allows a processor to execute code directly from memory, a feature known as execute-in-place. The main use cases for NOR are storing boot code for computers, firmware for automotive systems, and control software for industrial equipment.
Simply put, while NAND is used for large files, NOR is used for instant-on performance.
When an AI-powered car starts, the safety-critical microcontrollers must boot up in milliseconds to ensure sensors and cameras are active.
Micron’s automotive NOR flash is engineered to meet these demands, providing the durability and speed required for advanced driver assistance systems, sensors, and gateways.
Meanwhile, in the AI data center, NOR memory is used for the fundamental boot-up process of servers and networking switches.
The revenues from this product are reported within the NAND flash segment. While they are not experiencing explosive AI-driven growth, the demand trends still look strong and favorable, thanks to growing usage in automotive systems.
Furthermore, the market for NOR flash is currently experiencing a supply squeeze.
Because major manufacturers like Micron and Samsung are focusing their factory capacity on more profitable HBM and advanced DRAM, the availability of NOR is tightening. This has led to longer lead times and firmer pricing for NOR products across all end markets, including industrial and medical sectors.
For Micron, this supply-demand imbalance in the broader memory market helps maintain strong profitability even in its smaller product lines.
1.5. Cyclicality
Micron operates in the semiconductor memory industry, which has long been defined by high capital intensity, rapid technological obsolescence, and a notorious boom-and-bust cycle.
This cycle is driven by how inelastic both demand and supply are in the short run.
When demand is high, prices rise rapidly as users of memory chips race to secure supply. This is exactly what we are seeing now. Memory is an essential input: the final product can’t ship without it.
It takes years and $10-20B to build a new fab, so memory makers can’t quickly increase supply, which drives prices even higher.
When prices are high, memory buyers look for ways to reduce their memory requirements, cutting specs or changing designs.
High prices also incentivize suppliers to increase production, so memory manufacturers all race to invest in new capacity at the same time. The additional supply takes years to reach the market, and when it arrives, it arrives all at once.
Supply explodes, and pricing collapses.
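This lag-driven dynamic can be illustrated with a toy cobweb-style model. Everything below is a deliberately simplified sketch with made-up coefficients, just to show how investment lags alone produce oscillating prices:

```python
# A toy cobweb-style simulation of the memory cycle: capacity is committed
# at today's price but only comes online after a multi-year build lag.
# Every coefficient here is illustrative, not calibrated to real data.

lag = 3                                   # years from decision to output
supply = [90.0] * lag                     # capacity already in the pipeline

prices = []
for year in range(12):
    price = max(200 - supply[year], 0)    # high supply -> low price
    prices.append(price)
    supply.append(20 + 1.1 * price)       # high price -> aggressive expansion

print([round(p) for p in prices])
# [110, 110, 110, 59, 59, 59, 115, 115, 115, 53, 53, 53]
```

Prices never settle: everyone invests at the peak, and the new supply lands on the market at the same time.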
The same factors that make increasing supply quickly difficult also make decreasing it a non-starter. Once built, fabs must run constantly: the equipment they spent billions on depreciates whether or not it is used, while interest costs pile up.
In this industry, raw material input costs are a smaller share of total costs than equipment depreciation.
Simply put, memory makers lose more money by shutting down fabs than by selling each chip at a loss.
Furthermore, Micron’s business is also complicated by the bullwhip effect!
This is a supply chain phenomenon where small changes in consumer demand for end products (like smartphones or PCs) produce increasingly larger swings in orders as one moves up the supply chain to the semiconductor manufacturers.