The three major memory makers gathered at SEMICON, with the Korean firms calling for closer Taiwan-South Korea cooperation

With the explosive growth of generative AI, memory is no longer just an auxiliary component but a starting point and key driver of artificial intelligence. At the memory forum of SEMICON Taiwan 2025, SK Hynix, Samsung and Micron, the world's three largest memory manufacturers, therefore shared the same stage. Through keynote speeches, they discussed the need to redefine the role of memory in the AI era, so as to jointly confront the severe challenges AI infrastructure faces in performance, power consumption and scalability, and to shape the future of AI through innovation and cooperation.

SK Hynix leads AI performance and efficiency with "full-stack memory"

SK Hynix Vice President Choi Junlong emphasized that the AI market is developing at an astonishing speed, with dozens or even hundreds of AI models expanding into thousands of downstream applications. This creates enormous demands on AI infrastructure, spanning performance, scalability and power efficiency. Memory, especially high-bandwidth memory (HBM), has become the core of the AI opportunity. SK Hynix is transforming from a single memory supplier into a provider of "full-stack memory" solutions, and aims to lead the industry with this vision.

AI infrastructure also faces key constraints such as bandwidth, power consumption, thermal density and physical space. By 2030, the power consumed by data centers is expected to increase three to six times compared with 2023, with AI workloads accounting for 65% of total power demand; power consumption has become the most important consideration in the AI industry. Choi Junlong pointed out that HBM's share of the power consumed by a single AI rack is rising from 12% today to 18% or more in the future, so that, by his estimate, improving HBM power efficiency by 10% can save 10% of the rack's total power consumption. Memory bandwidth is an even greater bottleneck for AI training and inference.
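As an illustrative back-of-envelope sketch of the reasoning above: under a simple proportional model, the rack-level saving from a memory-efficiency gain scales with HBM's share of rack power. The function and numbers below are assumptions for illustration, not figures from the talk.

```python
# Simple proportional model: if HBM draws a given share of rack power,
# an X% cut in HBM power saves (share * X)% of total rack power.
# All values here are illustrative assumptions, not quoted figures.

def rack_power_savings(hbm_share: float, hbm_efficiency_gain: float) -> float:
    """Fraction of total rack power saved when HBM's own power draw
    falls by `hbm_efficiency_gain`, given HBM consumes `hbm_share`
    of the rack's power."""
    return hbm_share * hbm_efficiency_gain

# Example: HBM at 18% of rack power, 10% HBM efficiency improvement.
saving = rack_power_savings(hbm_share=0.18, hbm_efficiency_gain=0.10)
print(f"rack-level saving: {saving:.1%}")
```

Under this model the rack-level benefit grows as HBM's share of rack power grows, which is why memory efficiency becomes more important as that share climbs from 12% toward 18% or more.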

Choi Junlong emphasized that to address these challenges, SK Hynix continues to push the technological frontier. Compared with HBM3, its latest HBM generation increases bandwidth by more than 200% and power efficiency by up to 40%. SK Hynix was the first to launch HBM3E, with a capacity of 36GB and bandwidth exceeding 2TB/s, setting a new industry benchmark. In the future, HBM will no longer be mere memory: it will embed new features and processing capabilities, transforming from a passive component into part of active intelligence, with memory tailored with dedicated features for each customer and AI chip.

Finally, SK Hynix stated clearly that Taiwan has a unique vertically integrated ecosystem covering wafer foundry, packaging, testing and system integration, and is the only region able to provide a full-stack AI platform. SK Hynix will use its HBM leadership to supply Taiwan's AI ecosystem with the memory it needs, jointly building the future of AI infrastructure, and emphasized deepening its integration with Taiwan's AI industry.

Samsung meets AI challenges with customized solutions and efficiency innovation

Jangseok Choi, vice president of Samsung Memory Product Planning, said in his speech that the compute demand of AI models has grown 4.7 times per year over the past 15 years, while computing efficiency, memory capacity and bandwidth have grown far more slowly, creating a serious bottleneck. The GPU shortage triggered by OpenAI's image-generation feature is a vivid illustration of insufficient AI infrastructure. Moreover, by 2030 the power consumption of global data centers may double compared with last year, and could be even higher if AI adoption exceeds expectations.

To address the four major challenges of the AI era (performance, infrastructure, power consumption and workload management), Samsung proposed its "Bespoke HBM" solutions: it can tailor HBM's characteristics and features to a customer's specific needs, whether the priority is maximum bandwidth, cost-effectiveness or power efficiency. Samsung presents itself as the only company able to provide memory, logic, wafer foundry and advanced packaging as comprehensive in-house services, a unique value proposition.

On the performance front, Jangseok Choi said Samsung is developing PIM (processing-in-memory) technology, which offloads certain tasks from the GPU to significantly reduce power consumption and improve overall system performance. For example, LPDDR6 PIM delivers 2.6 times the performance of conventional solutions at half the energy consumption. In addition, given generative AI's enormous data demands, Samsung launched a new tiered AI storage solution comprising a performance tier, a storage tier and a capacity tier, and developed SOCAMM2 modules based on LPDDR memory for data-center scale-out and efficiency optimization. Going forward, Samsung will move HBM packaging from thermo-compression bonding to hybrid copper bonding, achieving higher stack counts, better heat dissipation and higher signal integrity.

At the end of his speech, Jangseok Choi thanked Taiwan's semiconductor industry for its invaluable support, which he said has been crucial to Samsung's current leading position. Facing increasingly complex industry challenges, Samsung's continued cooperation with Taiwan's industry is indispensable for creating innovative solutions and advancing technology, and the company hopes to build a better future together with Taiwan.

Micron's memory innovation and breakthroughs in 3D DRAM

Micron Vice President Nirmal Ramamurthy pointed out that AI is bringing enormous economic value and industry change, but the complexity of AI models keeps rising, demanding staggering memory capacity, with parameter counts expected to reach the trillions in the near future. At present, computing performance (growing 1.6 times every two years) is outpacing memory performance, widening the gap between the two and requiring memory to scale faster and better. Energy efficiency is another key constraint on AI's sustainability, with data centers projected to account for a substantial share of U.S. energy demand by 2030, much of it driven by AI.

Nirmal Ramamurthy emphasized that new AI workloads are driving server architecture to evolve: in dedicated compute servers, the GPU is tightly coupled with HBM, while memory such as LPDDR is connected to the CPU through high-performance interconnects. Micron stressed that AI's success relies on a complete memory hierarchy, including high-bandwidth memory (HBM), balanced main memory (such as LPDDR and DDR), expandable large-capacity memory, and AI-optimized storage. Micron offers a broad memory product portfolio to meet the diverse needs of AI computing systems.

Nirmal Ramamurthy further noted that memory technology must optimize performance, capacity and power consumption together to serve needs ranging from the data center to edge AI devices. HBM's success has been built on process breakthroughs, innovative design and advanced packaging architectures. Meanwhile, the DRAM cell faces technical challenges such as feature scaling approaching atomic-level limits, rising capacitor aspect ratios, and shrinking sensing margins.

To overcome these limits, Micron expects DRAM to move toward a "3D DRAM" structure similar to 3D NAND, increasing cell density through vertical stacking. This technology requires high-performance CMOS, complex wafer bonding, and innovative process equipment. Micron is also exploring emerging memories such as 1T-1C ferroelectric memory, which, with performance close to DRAM and good cost-effectiveness, is expected to play a role as expansion memory. Micron also actively uses AI and modeling to accelerate its own technology development.


