DeepSeek's open source model, R1, tanked Nvidia's stock and sent the company's consumer app rocketing to the top of the App Store. Last month, DeepSeek said it trained the model using a data center of roughly 2,000 Nvidia GPUs over about two months, at a cost of around $5.5 million. Last week, it published a paper showing that its latest model's performance matches the most advanced reasoning models in the world. Those models were trained in data centers that cost billions, stocked with faster, pricier chips.

Reactions in the tech industry to the high-performance, cheaply built model have been mixed. Pat Gelsinger, for one, took to X with glee, posting, "Thank you DeepSeek."

"Wisdom is learning the lessons we thought we already knew. DeepSeek reminds us of three important learnings from computing history: 1) Computing obeys the gas law. Making it dramatically cheaper will expand the market for it... The market is getting it wrong, this will make AI..." (@PGelsinger, January 27, 2025)

Gelsinger, the former Intel CEO and longtime hardware engineer, is now chairman of Gloo, a messaging and engagement platform for churches. He left Intel in December after four years spent trying to chase Nvidia with alternative AI chips such as the Gaudi 3 AI accelerator.

Gelsinger writes that DeepSeek should remind the tech industry of its most important lessons: lower costs mean wider adoption; ingenuity flourishes under constraints; and "Open wins. DeepSeek will help reset the increasingly closed world of foundational AI model work," he writes. OpenAI and Anthropic are both closed source.

Gelsinger told TechCrunch that R1 is so impressive that Gloo has decided not to use, and pay for, OpenAI. Gloo is building an AI service called Kallm, which will offer a chatbot and other services.

"My Gloo engineers are running R1 today," he said. "They could have run o1 (well, they can only access o1, through the APIs)."

Within two weeks, however, Gloo expects to have rebuilt Kallm "from scratch with our own foundational model that's open source," he said. "That's exciting."

He said he thinks DeepSeek will make AI so affordable that it won't just be everywhere; good AI will be everywhere.
"I want better AI in my Oura Ring. I want better AI in my phone. I want better AI in my embedded devices, like the voice recognition in my EV," he said.

Gelsinger's reaction may differ from that of others who aren't enjoying the fact that foundational models now have a far more affordable challenger. AI has been getting more expensive, not less.

Others have taken issue with DeepSeek's claimed numbers, arguing that training must have cost more than stated. Some doubt it could have managed without higher-end chips, which US export restrictions bar from China. Others have poked holes in its performance, finding areas where other models do better. Still others believe OpenAI's next model, o3, will outperform R1 when it's released, and that the status quo will be restored.

Gelsinger shrugs it all off. "You'll never have full transparency, given that most of the work was done in China," he said. "But still, all the evidence is that it's 10-50x cheaper to train than o1."

DeepSeek proves that AI can advance "through engineering creativity, not by throwing more hardware and compute resources at the problem. That's exciting," he said.

As for R1 coming from a Chinese developer, with the attendant concerns about privacy and censorship, Gelsinger metaphorically shrugged. "Having the Chinese remind us of the power of open ecosystems may be a bit embarrassing for our community, for the Western world," he said.