After Chinese AI lab DeepSeek released AI models that rival the best from Meta, Google, and OpenAI, Silicon Valley experienced its first freakout of 2025.
DeepSeek claims to have built its models quickly and efficiently (though some are skeptical of those claims), and it provides those models at a fraction of the price charged by American AI companies. The development has rattled not only the tech giants but also the highest levels of the US government, which worry that China is pulling ahead in an AI arms race.
Robert Nishihara, co-founder of AI infrastructure startup Anyscale, said in an interview with TechCrunch: "I wouldn't be surprised if many AI labs have war rooms going right now."
DeepSeek's rise marks an inflection point for Silicon Valley's AI landscape. AI CEOs, founders, researchers, and investors told TechCrunch that DeepSeek's models carry major implications for American AI policy. These experts also say the models are an indicator of accelerating AI progress.
Ravid Shwartz-Ziv, an assistant professor at New York University's Center for Data Science, said in an interview: "DeepSeek is overhyped. But it's still very interesting, and there's a lot we can take away from it."
Kian Katanforoosh, CEO of Workera and an adjunct lecturer at Stanford, said that one of DeepSeek's key innovations in creating its R1 model was "pure reinforcement learning."
Katanforoosh compared DeepSeek's breakthrough to a kid figuring out not to touch a hot plate by accidentally getting burned.
"[A kid] might touch a hot plate, get burned, and quickly learn not to do it again," Katanforoosh said in a text message. "That's pure reinforcement learning: learning from feedback [...] DeepSeek's approach is all about letting the model learn through experience alone."
DeepSeek appears to rely on reinforcement learning more heavily than other cutting-edge AI models do. OpenAI also used reinforcement learning techniques to develop o1, which it revealed just weeks before DeepSeek announced R1. OpenAI claims that its upcoming o3 model achieves even better performance using a similar approach, along with additional compute.
Reinforcement learning is among the most promising methods for improving AI foundation models, according to Katanforoosh. The term "foundation model" generally refers to an AI model trained on vast amounts of data, such as images and text scraped from the web. It seems likely that other AI labs will keep pushing the limits of reinforcement learning to improve their models, especially given DeepSeek's success.
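The kind of learning-from-feedback Katanforoosh describes can be illustrated with a minimal sketch: an agent that is never told the right answer, only rewarded or penalized after acting. The "hot plate" scenario, the reward values, and all function names below are hypothetical illustrations of the general technique, not DeepSeek's or OpenAI's actual training setup.

```python
import random

def pure_feedback_learning(episodes=2000, alpha=0.1, epsilon=0.1, seed=0):
    """Toy reinforcement learning: a two-action bandit learned from reward alone.

    Action 0 = "touch the hot plate" (reward -1), action 1 = "avoid it"
    (reward +1). The agent is never told which action is correct; it only
    observes the reward after each choice and updates its value estimates.
    """
    rng = random.Random(seed)
    q = [0.0, 0.0]                   # estimated value of each action
    rewards = {0: -1.0, 1: +1.0}     # environment feedback (made-up values)
    for _ in range(episodes):
        # epsilon-greedy: mostly exploit current estimates, sometimes explore
        if rng.random() < epsilon:
            action = rng.randrange(2)
        else:
            action = 0 if q[0] > q[1] else 1
        r = rewards[action]
        q[action] += alpha * (r - q[action])  # incremental value update
    return q

q = pure_feedback_learning()
print(q)  # the "avoid" action ends up with the higher estimated value
```

After enough episodes the value estimate for "avoid" climbs toward +1 while "touch" sinks toward -1, so the greedy policy stops touching the plate; the agent has learned entirely from experience, with no labeled examples.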
Just a few months ago, AI companies found themselves struggling to improve the performance of their foundation models. But the success of methods such as reinforcement learning and test-time scaling suggests that AI progress may be picking back up.
"R1 gives me more confidence that the pace of progress will stay high," AI2 researcher Nathan Lambert said in an interview with TechCrunch.
R1, which can be downloaded and run on any machine that meets the hardware requirements, matches or beats o1 on a number of AI benchmarks. While this isn't the first time the performance gap has narrowed between "closed" models such as OpenAI's and openly available models, the speed at which DeepSeek did it has stunned the industry.
This may push the United States to increase its investment in open or even fully open-source AI in order to compete with China. Martin Casado, a general partner at Andreessen Horowitz (a16z), told TechCrunch that DeepSeek proves just how "wrongheaded" the regulatory rationale of the last two years has been.
"For AI, I think this just shows us that [the United States] is not alone," Casado said in an interview. "Competitive solutions can come from anywhere, but especially from China. We should be investing heavily in our own innovation rather than hindering it."
Casado appeared to be referring to California's recently vetoed bill SB 1047 and former President Biden's AI executive order, recently repealed by President Trump, both of which a16z aggressively opposed. a16z has argued that both measures prioritized preventing far-fetched AI doomsday scenarios at the expense of American innovation. More broadly, Silicon Valley largely succeeded in beating back the "AI doom movement" in 2024. The real concern around AI, a16z and others have repeatedly argued, is the United States losing its competitive edge to China.
DeepSeek's rise appears to have given that view more weight.
Not for nothing: a16z is heavily invested in many of the open AI world's biggest players, including Databricks, Mistral, and Black Forest Labs. The venture firm may also play an outsized role in advising the Trump administration on AI; former a16z partner Sriram Krishnan is now Trump's senior AI policy adviser.
President Trump said on Monday that DeepSeek should serve as a "wake-up call" for American AI companies, while also praising the Chinese AI lab's open approach. That aligns closely with a16z's position on AI.
"DeepSeek R1 is AI's Sputnik moment," a16z co-founder Marc Andreessen wrote in a post on X, referring to the launch of the Soviet Earth-orbiting satellite decades ago that pushed the United States to invest seriously in its space program.
DeepSeek's rise also seems to have changed the minds of open AI skeptics such as former Google CEO Eric Schmidt. Just last year, Schmidt raised concerns about the proliferation of Western open AI models around the globe. But in an op-ed published on Tuesday, Schmidt said DeepSeek's rise marks a "turning point" in the global AI race, and he called for further investment in American open AI.
It's important not to overstate DeepSeek's achievements, however.
For instance, some analysts are skeptical of DeepSeek's claim that it trained one of its frontier models, DeepSeek V3, for just $5.6 million (a pittance in the AI industry) using roughly 2,000 older Nvidia GPUs. The Chinese AI lab didn't spring up overnight, after all, and DeepSeek reportedly has a stockpile of more than 50,000 Nvidia Hopper GPUs.
DeepSeek's models have flaws, too. According to a test by NewsGuard, an organization that rates the reliability of news and information sites, R1 gives inaccurate answers or non-answers 83% of the time when asked about news-related topics. A separate test found that R1 refuses to answer 85% of prompts related to China, possibly a consequence of the government censorship to which AI models developed in the country are subject.
Then there are the claims of IP theft. OpenAI says it has evidence that DeepSeek used OpenAI's AI models to train its own, via a process called distillation. If true, that would be a violation of OpenAI's terms, and it would also make DeepSeek's achievements somewhat less impressive; Berkeley researchers, for example, recently created a distilled reasoning model for just $450. (OpenAI, for its part, is currently being sued by a number of parties for allegedly violating copyright in training its own models.)
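Distillation, the technique OpenAI alleges was used, generally means training a small "student" model to imitate a larger "teacher" model's output distribution rather than learning from raw data. A minimal sketch of the core idea follows; the logit values, temperature, and function names are made-up illustrations of the general method, not any lab's actual recipe.

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw model scores (logits) into a probability distribution."""
    exps = [math.exp(z / temperature) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between the teacher's softened outputs and the student's.

    In distillation, the student is trained to minimize this quantity, so it
    comes to reproduce the teacher's behavior without ever seeing the
    teacher's weights or original training data.
    """
    p = softmax(teacher_logits, temperature)  # teacher's soft targets
    q = softmax(student_logits, temperature)  # student's predictions
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

teacher = [2.0, 0.5, -1.0]   # hypothetical teacher logits for one input
mimic   = [2.1, 0.4, -0.9]   # a student that closely matches the teacher
naive   = [0.0, 0.0, 0.0]    # an untrained student
print(distillation_loss(teacher, mimic))  # small
print(distillation_loss(teacher, naive))  # larger
```

Training would repeat this comparison over many inputs and adjust the student's parameters to shrink the loss, which is why a capable teacher can make a cheap student (like the $450 Berkeley model) look far more expensive than it was.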
Still, DeepSeek moved the needle with more efficient models, and it innovated. Lambert noted that R1, unlike o1, reveals its "thinking process" to users. Lambert has observed that some users place more trust in AI reasoning models when they can see that internal process, during which the models "explain their work."
Now, all eyes are on how US policymakers and AI labs respond.