On Monday, January 27, a little-known Chinese startup called DeepSeek sent shockwaves and panic through Silicon Valley and the global stock market with the launch of its generative artificial intelligence (AI) model, which rivals the models of tech giants like OpenAI, Meta and Google. Its AI assistant became the No. 1 downloaded app in the U.S., surprising an industry that assumed only big Western companies could dominate AI.
Many AI-related stocks, including Nvidia, took a hit as investors reevaluated the competitive landscape. But what truly rattled the market is that DeepSeek developed its AI model at a fraction of the cost of models like ChatGPT and Gemini. The launch of DeepSeek has been dubbed “AI’s Sputnik moment” in the global race to harness the power of AI.
To break down what this development could mean for the future of AI and how it could impact society, we spoke with Arun Rai, Director of the Center for Digital Innovation at Georgia State’s Robinson College of Business.
How is DeepSeek’s AI technology different and how was it so much cheaper to develop?
AI development has long been a game of brute force—bigger models, more computing power, and cutting-edge chips. OpenAI, Google DeepMind, and Anthropic have spent billions training models like GPT-4, relying on top-tier Nvidia GPUs (A100/H100) and massive cloud supercomputers.
DeepSeek took a different approach. Instead of relying on expensive high-end chips, they optimized for efficiency, proving that powerful AI can be built through smarter software and hardware optimization.
Key differences include:
- DeepSeek’s model doesn’t activate all its parameters at once like GPT-4. Instead, it uses a technique called Mixture-of-Experts (MoE), which works like a team of specialists rather than a single generalist model. When asked a question, only the most relevant parts of the AI “wake up” to respond, while the rest stay idle. This drastically reduces computing needs.
- They also designed their model to work on Nvidia H800 GPUs—less powerful but more widely available than the export-restricted H100/A100 chips, and much cheaper. DeepSeek also used PTX (Parallel Thread Execution), Nvidia’s low-level, assembly-like GPU instruction set, which lets developers control more directly how the AI interacts with the chip. This allowed them to squeeze more performance out of less powerful hardware, another reason they didn’t need the most advanced Nvidia chips to get state-of-the-art results.
- Training was also optimized to reduce expensive human fine-tuning. Most AI models, including GPT-4, rely on large teams of human reviewers to manually refine responses, ensuring quality and safety. This is time-consuming and expensive. DeepSeek automated much of this process using reinforcement learning, meaning the AI learns more efficiently from experience rather than requiring constant human oversight.
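The Mixture-of-Experts idea above can be illustrated with a minimal sketch. This is a toy, not DeepSeek’s actual architecture: a small gating layer scores all experts, but only the top-k are executed and combined, so most of the model’s parameters stay idle for any given input. All names and sizes here are illustrative.

```python
import numpy as np

def moe_forward(x, gate_w, experts, k=2):
    """Toy Mixture-of-Experts forward pass: run only the top-k experts.

    x: input vector of size d
    gate_w: (d, n_experts) gating weights, one score column per expert
    experts: list of (d, d) expert weight matrices
    """
    scores = x @ gate_w                    # one relevance score per expert
    top = np.argsort(scores)[-k:]          # indices of the k best-scoring experts
    weights = np.exp(scores[top])
    weights /= weights.sum()               # softmax over the selected experts only
    # Combine outputs of just the chosen experts; the others never run.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

rng = np.random.default_rng(0)
d, n_experts = 4, 8
x = rng.normal(size=d)
gate_w = rng.normal(size=(d, n_experts))
experts = [rng.normal(size=(d, d)) for _ in range(n_experts)]
y = moe_forward(x, gate_w, experts, k=2)   # only 2 of 8 experts execute
```

With k=2 of 8 experts active, only a quarter of the expert parameters do any work per input, which is the source of the compute savings described above.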
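The idea of replacing human reviewers with automatic feedback can also be sketched in miniature. This is a deliberately simplified toy, not DeepSeek’s training code: a programmatic checker scores sampled answers against a verifiable ground truth, and the “policy” shifts its preferences toward answers that earn reward, with no human labels involved. The function names and the candidate setup are invented for illustration.

```python
import math
import random

def automatic_reward(answer, expected):
    # A programmatic check stands in for a human reviewer.
    return 1.0 if answer == expected else 0.0

def train(candidates, expected, steps=2000, lr=0.1, seed=0):
    """Toy reinforcement loop over a fixed set of candidate answers."""
    rng = random.Random(seed)
    prefs = {c: 0.0 for c in candidates}   # unnormalized log-preferences
    for _ in range(steps):
        # Sample an answer in proportion to current preference (softmax).
        weights = [math.exp(prefs[c]) for c in candidates]
        answer = rng.choices(candidates, weights=weights)[0]
        r = automatic_reward(answer, expected)
        # Reinforce rewarded answers, discourage unrewarded ones.
        prefs[answer] += lr * (r - 0.5)
    return max(prefs, key=prefs.get)

best = train(["2", "3", "4"], expected="4")
```

Because the reward comes from an automatic check rather than a paid reviewer, the loop can run millions of times at negligible marginal cost, which is the economic point of the approach described above.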
How did the launch of DeepSeek happen?
DeepSeek’s emergence wasn’t gradual—it was sudden and unexpected. Founded in 2023, the company went from startup to industry disruptor in little more than a year with the launch of its reasoning-focused large language model, DeepSeek-R1.
The U.S. government had imposed trade restrictions on advanced Nvidia AI chips (A100/H100) to slow global competitors’ AI progress. But DeepSeek adapted. Forced to work with less powerful but more available H800 GPUs, the company optimized its model to run on lower-end hardware without sacrificing performance.
DeepSeek didn’t just launch an AI model—it reshaped the AI conversation, showing that optimization, smarter software, and open access can be just as transformative as massive computing power.
There’s been a lot of buzz about DeepSeek being an “open-source model.” What does open source mean and what impact does that have?
AI models vary in how much access they allow, ranging from fully closed, paywalled systems to open-weight to completely open-source releases. DeepSeek’s approach stands at the farthest end of openness—one of the most unrestricted large-scale AI models yet.
Most AI models are tightly controlled. OpenAI’s GPT-4, Google DeepMind’s Gemini, and Anthropic’s Claude are all proprietary, meaning access is restricted to paying customers through APIs. Their underlying technology, architecture, and training data are kept private, and their companies control how the models are used, enforcing safety measures and preventing unauthorized modifications.
Some AI models, like Meta’s Llama 2, are open-weight but not fully open source. The model weights are publicly available, but license agreements restrict certain commercial uses and large-scale deployment. Developers must agree to specific terms before using the model, and Meta still maintains oversight over who can use it and how.
DeepSeek’s model is different. It imposes no restrictions. Anyone—from independent researchers to private companies—can fine-tune and deploy the model without permission or licensing agreements.
This approach has major advantages. It democratizes AI innovation by giving startups, researchers, and developers access to cutting-edge AI without licensing fees. It encourages global AI development, allowing independent AI labs to improve the model. And it breaks the monopoly of large AI firms, offering a powerful alternative to proprietary, paywalled AI models.
But it also introduces significant risks. Unlike proprietary AI, where companies can monitor and restrict harmful applications, DeepSeek’s model can be repurposed by anyone, including bad actors. This raises concerns about misinformation, deepfake production, and AI-generated fraud. Without built-in safeguards, open AI systems could be used for mass disinformation, cyberattacks, or social manipulation.
DeepSeek’s move has reignited a debate: Should AI models be fully open, or should companies enforce restrictions to prevent misuse? Some see DeepSeek’s release as a win for AI accessibility and openness, driving innovation, while others warn that unrestricted AI could lead to unintended consequences and new risks that no one can control.
Is the launch of DeepSeek something to panic over or be excited about?
The launch of DeepSeek marks a transformative moment for AI—one that brings both exciting opportunities and important challenges. It has opened new possibilities for AI development while also raising fresh questions about security, responsibility, and control.
On one hand, DeepSeek’s open-source release expands access to cutting-edge AI like never before, which could lead to faster breakthroughs in fields like science, health care, and business. DeepSeek’s efficiency-first approach also challenges the assumption that only companies with billions in computing power can build leading AI models. If this method scales, it could redefine how AI is developed globally. At the same time, its unrestricted availability introduces complex risks.
What are the concerns with DeepSeek?
DeepSeek’s launch has raised critical questions about security, control, and ethical responsibility. The main concerns center on national security, intellectual property, and misuse.
Unlike proprietary AI models, DeepSeek’s open-source approach allows anyone to modify and deploy it without oversight. This raises fears that bad actors could use it for misinformation campaigns, deepfakes, or AI-driven cyberattacks. The U.S. Navy was the first to ban DeepSeek, citing security concerns over potential data access by the Chinese government.
Since then, Texas, Taiwan, and Italy have also restricted its use, while regulators in South Korea, France, Ireland, and the Netherlands are reviewing its data practices, reflecting broader concerns about privacy and national security. Similar concerns were at the center of the TikTok controversy, where U.S. officials worried that data from an app used by millions of Americans could be accessed by the Chinese government.
The debate isn’t just about DeepSeek—it’s about how open AI should be. Can AI be both widely accessible and responsibly managed? That question will shape the future of AI policy and innovation.
How does regulation play a role in the development of AI?
AI regulation is at a crossroads. Governments are racing to balance innovation with security, trying to foster AI development while preventing misuse. But the challenge is that AI is evolving faster than laws can keep up.
In the U.S., regulation has focused on export controls and national security, but one of the biggest challenges in AI regulation is who takes responsibility for open models. As AI continues to advance, policymakers face a dilemma—how to encourage progress while preventing risks. Should AI models be open and accessible to all, or should governments enforce stricter controls to limit potential misuse? The answers will shape how AI is developed, who benefits from it, and who holds the power to regulate its impact.
How could DeepSeek’s impact on the AI landscape ultimately impact society?
DeepSeek’s impact on AI isn’t just about one model—it’s about who has access to AI and how that changes innovation, competition, and governance.
By making a powerful AI model open-source, DeepSeek has lowered the barrier to AI development, enabling more researchers, startups, and organizations to build and deploy AI without relying on big tech firms or government-backed research labs. It also challenges the idea that AI progress depends solely on massive computing power, proving that smarter software and hardware optimization can rival brute-force approaches.
At the same time, decentralization makes AI harder to regulate. Without a central authority controlling its deployment, open AI models can be used and modified freely—driving both innovation and new risks.
DeepSeek has forced a key question to the forefront: Will AI’s future be shaped by a handful of well-funded Western firms and government-backed AI research labs, or by a broader, more open ecosystem? That choice will determine not just who has access to AI, but how it reshapes society.
Georgia State University
Citation:
Q&A: How DeepSeek is changing the AI landscape (2025, February 5)
retrieved 5 February 2025
from https://techxplore.com/news/2025-02-qa-deepseek-ai-landscape.html