China’s DeepSeek has emerged as a top challenger to U.S. AI leaders, demonstrating models that claim to offer comparable performance at a fraction of the cost. The company’s mobile app, released in early January, has topped the iPhone charts in major markets including the US, UK, and China. Founded in 2023 by Liang Wenfeng, former head of the AI-driven quant hedge fund High-Flyer, DeepSeek releases its models as open source and incorporates a reasoning feature that articulates its thinking before responding.

Wall Street’s reaction has been mixed. While Jefferies warns that DeepSeek’s efficient approach “punctures some of the capex euphoria” that followed the recent spending commitments from Meta and Microsoft, each running to $60 billion or more this year, Citi questions whether such results were really achieved without advanced GPUs. Goldman Sachs sees broader implications, suggesting the development could reshape competition between established tech giants and startups by lowering barriers to entry.

Here’s how Wall Street analysts are reacting to DeepSeek, in their own words (emphasis mine):

Jefferies: DeepSeek’s power implications for AI training puncture some of the capex euphoria that followed the commitments from Stargate and Meta last week. With DeepSeek delivering performance comparable to GPT-4o for a fraction of the computing power, there are potential negative implications for the builders, as pressure on AI players to justify ever-increasing capex plans could weigh on data center revenue and profit growth.

If smaller models can work well, it is potentially positive for smartphones. We are bearish on AI smartphones, as AI has gained no traction with consumers. More hardware upgrades (DRAM, packaging, etc.) are needed to run bigger models on the phone, which will raise costs. AAPL’s model is in fact MoE-based, but its 3bn parameters are still too small to make the service useful to consumers. Hence, DeepSeek’s success offers some hope, but there is no impact on the near-term outlook for AI smartphones.

China is the only market pursuing LLM efficiency because of chip constraints. Trump/Musk likely recognize that the risk of further restrictions is to force China to innovate faster. Therefore, we think it likely that Trump will relax the AI diffusion policy.

Citi: While DeepSeek’s achievement could be groundbreaking, we question the notion that its feats were done without the use of advanced GPUs to fine-tune it and/or build the underlying LLMs the final model is based on. While the dominance of US companies on the most advanced AI models could potentially be challenged, we estimate that in an inevitably more restrictive environment, US access to more advanced chips is an advantage. Thus, we don’t expect leading AI companies to move away from advanced GPUs, which provide more attractive $/TFLOPs at scale. We see the recent AI capex announcements like Stargate as a nod to the need for advanced chips.

Bernstein: In short, we believe that 1) DeepSeek did not “build OpenAI for $5M”; 2) the models look fantastic, but we don’t think they are miracles; and 3) the resulting panic over the weekend seems overblown. Our own initial reaction does not include panic (far from it). If we acknowledge that DeepSeek may have reduced the cost of achieving equivalent model performance by, say, 10x, we would also note that current model cost trajectories are increasing by about that much every year anyway, which cannot continue forever. In that context, we need innovations like this (MoE, distillation, mixed precision, etc.) if AI is to continue progressing. And for those looking for AI adoption, as semi analysts we are firm believers in the Jevons paradox (i.e., that efficiency gains generate a net increase in demand), and believe any new compute capacity unlocked is far more likely to be absorbed by rising usage and demand than to dent long-term spending, as we do not believe compute needs are anywhere close to their limit in AI. Nor do we believe the rest of the world’s labs are standing still (frankly, we don’t know what the other labs have been using to develop and deploy their own models, but we can’t believe they have not considered or even employed similar strategies themselves).
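Bernstein’s Jevons-paradox argument is easy to make concrete with a little arithmetic. The sketch below is illustrative only: the 10x cost reduction is the hypothetical from the note, while the demand-elasticity value and the constant-elasticity demand curve are assumptions chosen for the example, not figures from any analyst.

```python
# Illustrative Jevons-paradox arithmetic (all numbers hypothetical).
# If compute gets 10x cheaper and demand is price-elastic (elasticity > 1),
# usage rises so much that total spend increases rather than falls.

def demand(price, elasticity=1.4):
    """Constant-elasticity demand curve: quantity proportional to price^-elasticity."""
    return price ** -elasticity

old_price, new_price = 1.0, 0.1  # 10x cheaper per unit of compute (assumed)

old_quantity = demand(old_price)
new_quantity = demand(new_price)

print(f"usage multiplier: {new_quantity / old_quantity:.1f}x")  # ~25x more compute used
print(f"spend multiplier: {(new_quantity * new_price) / (old_quantity * old_price):.1f}x")  # ~2.5x more spend
```

With an elasticity below 1, the same arithmetic gives falling total spend, which is precisely the dividing line the bulls and bears are arguing over.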
Jevons paradox strikes again! As AI gets more efficient and accessible, we will see its use skyrocket, turning it into a commodity we just can’t get enough of. https://t.co/omecophdiz - Satya Nadella (@satyanadella) January 27, 2025

Morgan Stanley: We have not confirmed the veracity of these reports, but if accurate, and advanced LLMs can indeed be developed for a fraction of previous investment, we could eventually see generative AI run on smaller and smaller computers (downsizing from supercomputers to workstations, office computers, and finally personal computers), and the supply chain could benefit from the accompanying rise in demand for related products (chips and SPE, i.e., semiconductor production equipment) as generative AI proliferates.

Goldman Sachs: With the latest developments, we also see 1) potential competition between capital-rich internet giants and start-ups, given lowered barriers to entry, especially with the recently launched new models; 2) a shift from training to more inferencing, with increased emphasis on post-training (including reasoning and reinforcement learning capabilities) that requires significantly lower computational resources than pre-training; and 3) the potential for further global expansion by Chinese players, given their performance and cost/price competitiveness.

We continue to expect the race for AI applications/AI agents to continue in China, especially amongst to-C applications, e.g., Tencent’s creation of the Weixin (WeChat) super-app. Amongst to-C applications, ByteDance has been leading the way, launching 32 AI applications over the past year. Amongst them, Doubao is China’s most popular AI chatbot so far, with the highest MAU (c.70mn), and was recently upgraded with its Doubao 1.5 Pro model. We believe incremental revenue streams (subscriptions, advertising) and a sustainable path to monetization/positive unit economics amongst applications/agents will be key.

For the infrastructure layer, investor focus has centered on whether there will be a near-term mismatch between market expectations on AI capex and computing demand, should there be significant improvements in cost/model computing efficiency. For China’s cloud/data center players, we believe the focus for 2025 will center on chip availability and the ability of cloud service providers (CSPs) to deliver improving revenue contribution from AI-driven cloud growth and, beyond infrastructure/GPU renting, how AI workloads and AI-related services could contribute to growth and margins going forward. We remain positive on long-term AI demand growth, as a further lowering of computing/training/inference costs could drive higher AI adoption. See also Theme #5 of our key themes report for our base/bear scenarios for BBAT (Baidu, ByteDance, Alibaba, Tencent) capex estimates depending on chip availability, where in the base case we expect aggregate BBAT capex growth to decelerate in 2025 (GSe: +38% yoy) to a slightly more moderate pace vs. 2024, driven by continued investment in AI infrastructure.

JPMorgan: Above all, much is being made of DeepSeek’s research paper and the model’s efficiency. It is unclear to what extent DeepSeek is leveraging High-Flyer’s ~50k Hopper GPUs (similar in size to the cluster on which OpenAI is believed to be training GPT-5), but what seems likely is that they are dramatically reducing costs (inference costs for the V2 model, for example, are claimed to be 1/7 that of GPT-4 Turbo). Their subversive (though not new) claim, which started to hit US AI names this week, is that “more investment does not equal more innovation.” Liang: “Right now I don’t see any new approaches, but big firms do not have a clear upper hand. Big firms have existing customers, but their cash-flow businesses are also their burden, and this makes them vulnerable to disruption at any time.” And when asked about the fact that GPT-5 has still not been released: “OpenAI is not a god; it won’t necessarily always be at the forefront.”
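JPMorgan’s 1/7 inference-cost figure translates directly into a budget comparison. The sketch below only takes the claimed ratio at face value; the dollar price per million tokens and the monthly token volume are made-up placeholders, not quoted rates.

```python
# Back-of-envelope on the claimed inference-cost gap (ratio from the note,
# all other numbers are illustrative placeholders).

CLAIMED_COST_RATIO = 1 / 7  # DeepSeek-V2 vs. GPT-4 Turbo, per the note

def monthly_cost(tokens_per_month, dollars_per_million_tokens):
    """Simple linear pricing: cost scales with tokens served."""
    return tokens_per_month / 1_000_000 * dollars_per_million_tokens

incumbent_price = 10.0  # $/1M tokens (assumed)
challenger_price = incumbent_price * CLAIMED_COST_RATIO

tokens = 5_000_000_000  # 5B tokens/month (assumed workload)
print(f"incumbent:  ${monthly_cost(tokens, incumbent_price):,.0f}")   # $50,000
print(f"challenger: ${monthly_cost(tokens, challenger_price):,.0f}")  # ~$7,143
```

At that spread, the question the analysts raise is not whether buyers notice, but whether cheaper inference shrinks total capex or, per the Jevons argument above, expands the workload until spend recovers.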
UBS: Throughout 2024, the first year we saw massive AI workloads driving IDC demand in China, more than 80-90% of incremental demand was AI-driven and came mostly from 1-2 hyperscaler customers, i.e., AI training demand that is sensitive to utility costs rather than user latency. If AI training and inference costs come down significantly, we would expect more end users to leverage AI to improve their business or develop new use cases, especially retail customers. Such IDC demand means more focus on location (as user latency matters more than utility costs), and hence greater pricing power for IDC operators with abundant resources in tier-1 and satellite cities. Meanwhile, a more diversified customer portfolio would also imply greater pricing power.

We will update this story as more analyst reactions come in.