Open-source versus Proprietary Models: Are Mistral AI and Llama 3 Threatening Commercial Giants?

A surprising twist has emerged in the battle for AI dominance. For a long time, proprietary models like OpenAI’s GPT-4 and Anthropic’s Claude reigned supreme in terms of performance, leaving open-source alternatives far behind. That performance gap, long thought to be insurmountable, is now shrinking dramatically. The latest open-source models, above all Meta’s Llama 3 and French startup Mistral AI’s Mistral Large, are achieving results comparable to the flagship products of the commercial giants, and in some specialized tasks even surpassing them. This unexpected development not only challenges the dominant business paradigm of the AI industry but also raises fundamental questions about the technology’s future: will it remain in the hands of a few controlling corporations, or will it become a democratically accessible tool that anyone can customize to their needs? The stakes include not only a trillion-dollar market but also how AI will integrate with the economy and society.

Shrinking Performance Gap – A Breakthrough in Open-source Models

Just a few months ago, the gap between the best proprietary models and their open-source counterparts seemed insurmountable. Models like GPT-4 and Claude dominated across all relevant benchmarks, while open-source solutions were viewed as interesting but clearly inferior alternatives.

“It was widely accepted that open-source models would always lag 1-2 generations behind leading proprietary models,” explains Dr. Elena Rodriguez from the AI Open Research Institute. “However, this view has been fundamentally challenged by breakthrough progress over the past year.”

Llama 3 – Social Media Giant Enters the Game

Meta, Facebook’s parent company, made the most dramatic breakthrough by introducing Llama 3, which according to independent tests achieves results comparable to, and on some tasks even better than, GPT-4 and Claude 3 Opus.

Introduced in March 2025, the Llama 3 family includes both smaller-scale versions (7B and 70B parameters) and the flagship Llama 3 405B model, which is currently the largest publicly available model in the world. Unlike its predecessors, Llama 3 was trained on a much larger and more diverse dataset, including multimodal data, allowing it to process and generate content encompassing text, images, and code.

“Llama 3’s results in benchmarks like MMLU, HumanEval, and GSM8K are astounding,” comments Sarah Chen, an analyst at Bernstein Research. “Particularly in tasks requiring mathematical reasoning and programming, Llama 3 405B outperforms GPT-4, though it still lags slightly in understanding cultural nuances and contextual subtleties.”

A significant aspect is that while Llama 3 is available under an open-source license for research and non-commercial purposes, Meta has also introduced the Llama Cloud service, enabling commercial use of the model in a SaaS (Software as a Service) model. This hybrid strategy allows the company to both support the open-source community and monetize its technology.

Mistral AI – European Challenger Changes the Game

Even more surprising is the success of Mistral AI, a French startup founded by former Google DeepMind researchers that has joined the top tier of AI firms in record time. The company’s latest model, Mistral Large, despite its smaller scale (132B parameters) and a much smaller training compute budget, achieves results comparable to GPT-4 on many key benchmarks.

Mistral has taken an innovative approach to model development, focusing on architecture efficiency and training data quality rather than simply scaling the number of parameters.

“Mistral proves that smart architecture design and careful data curation can be more important than raw scale,” explains Dr. Thomas Lee from UC Berkeley AI Research Lab. “Their models show exceptional parameter efficiency—achieving more with fewer parameters, which is a breakthrough for AI democratization.”

Mistral offers models of varying scales—from the small Mistral Tiny (7B), through the medium Mistral Medium (32B), to the flagship Mistral Large—all available both in open-source form and via API. Notably, the company has adopted an “open-weight, closed-training” approach—publicly releasing model weights but keeping details about training data and procedures proprietary.

Innovation Supply Chain in the Open-source Ecosystem

Beyond the main models, the open-source ecosystem is flourishing thanks to thousands of individual researchers and small teams that fine-tune, specialize, and develop base models.

“The true power of open-source models lies in the distributed innovation ecosystem that forms around them,” explains Marcus Thompson, founder of the OpenAI Research Collective. “When a community of hundreds of thousands of researchers and developers can experiment with these models, innovations and applications emerge that no single organization would ever discover.”

Examples of such community innovations include:

  • TinyLlama – a highly optimized version of Llama 2 that runs on standard laptops without requiring advanced GPUs
  • Meditron – a specialized adaptation of the Llama base model, fine-tuned for medical applications, achieving results that surpass ChatGPT in medical diagnoses
  • CodeLlama-70B-Instruct – a specialized version of Llama optimized for generating and analyzing code, comparable to GitHub Copilot

It’s worth noting that many of these specialized models achieve better results in specific domains than general proprietary models, despite using significantly fewer resources for their development.

Dissonant Duet – Collaboration and Competition Among Tech Giants

Paradoxically, many breakthrough open-source models are created by companies that simultaneously offer commercial, proprietary AI services. This apparent contradiction reflects the complex business and technological strategies of tech giants.

Meta’s Llama – A Two-Track Strategy

Meta’s decision to release Llama models as open-source while simultaneously developing its own AI services reflects a complex corporate strategy.

“Meta is implementing a classic ‘embrace, extend, and monetize’ strategy,” explains Dr. Rachel Johnson, professor of technology management at Harvard Business School. “By making Llama open-source, the company builds a dominant position in the AI ecosystem, creates de facto standards, and shapes the direction of technology development, while retaining the ability to monetize through services based on this technology.”

Internal documents suggest that Meta views the open-source strategy as a way to counterbalance the dominance of OpenAI and Microsoft, while securing its position in the future AI ecosystem. By providing powerful models, Meta also gains valuable feedback and improvements from the developer community, effectively leveraging the collective intelligence of thousands of researchers.

Google’s Gemma – Late Entry into the Open-source Game

Google, long viewed as an advocate of the proprietary approach with its Bard/Gemini models, has also joined the open-source movement, introducing the Gemma model in February 2025.

“Google’s decision to enter the open-source space reflects the growing significance of this approach,” comments Sarah Wong, an analyst at Morgan Stanley. “No major player can afford to ignore the open-source ecosystem, which is becoming a key arena for innovation and AI adoption.”

The Gemma models, though technically smaller than flagship Llama or Mistral models, stand out with a strong emphasis on safety and responsibility, reflecting Google’s cautious approach to AI deployment.

Microsoft – Collaboration with Meta Despite Partnership with OpenAI

Perhaps the most surprising turn of events is Microsoft’s decision to integrate Llama models into its Azure cloud, despite its strategic partnership with OpenAI.

“Microsoft is securing its positions on all fronts of the AI war,” explains Marcus Chen, an analyst at JP Morgan. “While the company has invested billions in OpenAI and GPT, it cannot afford to ignore the growing importance of open-source models, especially in light of increasing corporate demand for hybrid solutions.”

This seemingly paradoxical strategy reflects a deeper trend in the industry—growing demand for flexibility and control from corporate users who don’t want to be dependent on a single AI provider and prefer solutions they can adapt to their specific needs and requirements.

Business Models at a Crossroads – How Does Open-source Generate Revenue?

One of the most intriguing aspects of the rivalry between open-source and proprietary models is the evolution of business models. How can companies generate revenue from technology that is available for free?

Hybrid Licensing – Balancing Openness and Commercialization

Leading open-source model providers have developed a nuanced approach to licensing that allows them to both support the open-source community and monetize their technology.

“Licensing models have evolved well beyond the simple dichotomy of ‘open’ and ‘closed’,” explains Dr. Lisa Chen, an intellectual property law expert specializing in AI. “Contemporary open-source licenses for AI models often contain clauses distinguishing between research, non-commercial, and commercial applications, creating a spectrum of usage possibilities.”

Meta’s Llama 3 is available under a license that allows free use for research and non-commercial purposes but requires separate arrangements for commercial applications. Similarly, Mistral offers its smaller models under the liberal Apache 2.0 license while maintaining more restrictive terms for flagship models.

Monetization Through Services – “Open Core, Closed Service”

The most effective business model for open-source AI companies is becoming the “open core, closed service” approach—making base models available as open-source while offering advanced services and features in a commercial model.

“It’s a classic freemium strategy applied to AI,” explains Robert Wong, a partner at venture capital firm Andreessen Horowitz. “You provide the base technology for free, building a community and ecosystem, and then monetize advanced features, enterprise support, and specialized model fine-tuning.”

Mistral AI exemplifies this strategy perfectly. The company makes its models available as open-source but also offers a convenient API, SLA (Service Level Agreement) guarantees, advanced content filtering features, and dedicated support for corporate clients—all as part of a paid service.

This strategy is proving surprisingly effective. According to a financial report leaked to the press, Mistral AI achieved revenue of around 125 million euros in the first 6 months of 2025, significantly exceeding analyst expectations.

Specialization and Fine-tuning – Creating Added Value

Another lucrative business model is specializing and fine-tuning base open-source models for specific industry applications.

“The real business value often lies not in the base model itself, but in its specialized adaptations,” explains Dr. Thomas Wong from Stanford AI Lab. “A company can take an open-source Llama model, fine-tune it for a specific domain—such as law, medicine, or finance—and offer this specialized version as a commercial product.”

This strategy has been successfully implemented by many startups:

  • Hippocratic AI uses Llama base models to create specialized medical assistants
  • LexiLlama adapts open-source models for legal applications
  • FinGPT specializes in adapting open-source models for financial analysis and investment management

Each of these companies creates added value not through the base model itself, but through specialized training data, domain-specific safety layers, and integrations with industry systems.
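The economics of this specialization rest largely on parameter-efficient fine-tuning techniques such as LoRA, which leave the open-source base weights frozen and train only a small low-rank update on top of them. The sketch below illustrates the core idea with toy matrix dimensions; all values are invented for illustration and are far smaller than any real model layer.

```python
# Illustrative sketch of low-rank adaptation (LoRA), a common technique
# behind community fine-tunes of open-weight models. Dimensions are toy
# values chosen for readability, not taken from any real model.

def matmul(a, b):
    """Multiply two matrices given as lists of rows."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

d, k, r = 8, 8, 2  # layer input/output dims and the low-rank bottleneck

# Frozen base weight W (untouched during fine-tuning).
W = [[0.0] * k for _ in range(d)]

# Trainable low-rank factors A (d x r) and B (r x k).
A = [[0.1] * r for _ in range(d)]
B = [[0.1] * k for _ in range(r)]

# Effective adapted weight: W' = W + A @ B.
delta = matmul(A, B)
W_adapted = [[w + dw for w, dw in zip(wr, dr)] for wr, dr in zip(W, delta)]

full_params = d * k        # parameters a full fine-tune would update
lora_params = r * (d + k)  # parameters LoRA actually trains

print(full_params, lora_params)  # the gap widens rapidly as d and k grow
```

With realistic layer sizes (d and k in the thousands, r around 8–64), the trainable fraction drops well below one percent, which is what makes domain fine-tuning feasible for startups without hyperscaler budgets.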

Battle for Control and Privacy – Key Advantages of Open-source Models

One of the main factors driving the adoption of open-source models, particularly among corporate and institutional clients, is the growing need for control, transparency, and data sovereignty.

Data Sovereignty as a Strategic Imperative

For many organizations, especially in regulated and sensitive sectors, the ability to deploy AI models within their own infrastructure, without sending data to external services, is becoming a key requirement.

“Data sovereignty is no longer a luxury but a strategic imperative for many organizations,” explains Maria Rodriguez, Chief Information Security Officer at a global financial firm. “Proprietary models like GPT or Claude require sending data to external APIs, which creates fundamental risk both in terms of confidentiality and regulatory compliance.”

Open-source models enable organizations to deploy the full AI stack on their own infrastructure, maintaining complete control over data and processes. This is a key advantage in sectors such as:

  • Banking and finance – where regulations often prohibit transferring customer data to external entities
  • Healthcare – where patient data protection is subject to rigorous regulations such as HIPAA
  • Defense and national security – where data has critical importance for state security
  • Research and development – where companies don’t want to share confidential information with potential competitors

Transparency and Auditability

Another key advantage of open-source models is the full transparency of their operation, which is critical for organizations that need to understand and explain decisions made by AI systems.

“In the era of growing AI regulations, such as the EU AI Act, the ability to explain and document exactly how an AI model works is becoming a key legal requirement,” notes Dr. Robert Chen, a partner at a law firm specializing in AI regulation. “Open-source models allow full insight into their operation, which significantly facilitates compliance with regulations requiring transparency and explainability.”

This transparency is particularly important in the context of detecting and mitigating biases in AI models. Researchers and organizations can analyze exactly how the model makes decisions and introduce appropriate corrections when problematic patterns are detected.

Customization and Optimization Capabilities

The ability to deeply customize models to specific needs and constraints represents another significant advantage of open-source solutions.

“Proprietary APIs are like ready-made meals—you can choose from the menu, but you don’t have control over the ingredients and preparation method,” metaphorically explains Sarah Johnson, CTO at an AI startup. “Open-source models are like having your own kitchen—you can customize every aspect to your needs, optimizing for cost, performance, or specific requirements.”

These customization capabilities include:

  • Distillation and quantization – reducing model size for deployments on mobile or edge devices
  • Domain specialization – fine-tuning the model on specialized industry data
  • Optimization for specific hardware – adapting the model to the organization’s specific hardware infrastructure
  • Integration with internal systems – deep connection with existing tools and knowledge bases
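The quantization item above can be made concrete with a minimal sketch. Production systems use per-channel scales and calibration data; this toy version applies a single symmetric int8 scale to a flat list of weights, purely to show where the 4x size reduction comes from.

```python
# Minimal sketch of post-training symmetric int8 quantization, one of the
# size-reduction techniques listed above. Real deployments are more
# sophisticated; the weight values here are arbitrary examples.

def quantize_int8(weights):
    """Map float weights into [-127, 127] integers plus one shared scale."""
    scale = max(abs(w) for w in weights) / 127
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]

weights = [0.42, -1.27, 0.08, 0.95, -0.33]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)

# int8 storage is 1 byte per weight versus 4 for float32: a 4x reduction,
# at the cost of a small, bounded reconstruction error.
max_err = max(abs(w - r) for w, r in zip(weights, restored))
print(q, max_err)
```

The same principle, pushed to 4-bit and lower formats, is what lets community builds of large open-weight models run on consumer laptops.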

Limitations of Open-source Models – Why Don’t They Dominate the Market?

Despite impressive progress and strategic advantages, open-source models still face significant limitations that prevent them from completely dominating the market.

Infrastructure Challenges – Deployment and Maintenance Costs

One of the main limitations of open-source models is the high infrastructure requirements associated with their deployment and maintenance.

“The fact that a model is available for free doesn’t mean that deploying and operating it is cheap,” explains Marcus Thompson, an AI infrastructure consultant. “Running Llama 3 70B requires advanced GPUs, specialized knowledge, and significant infrastructure investments, which for many organizations can be an insurmountable barrier.”

According to an analysis by Forrester Research, the total cost of ownership (TCO) of an internal deployment of a large open-source model can, over a three-year horizon, be 3-5 times higher than using a proprietary API, especially for organizations without existing AI infrastructure and a specialized team.

These costs include not only hardware and infrastructure but also:

  • Specialized personnel for model deployment and maintenance
  • Energy costs associated with running large models
  • Regular updates and model fine-tuning
  • Monitoring, security, and risk management
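The self-host-versus-API decision ultimately reduces to comparing a large fixed cost against a higher per-token price, which makes it easy to sketch as arithmetic. Every number below is a hypothetical placeholder, not taken from the Forrester analysis cited above; the point is the shape of the calculation, not the figures.

```python
# Back-of-the-envelope TCO comparison sketch. All prices are invented
# placeholders; substitute real quotes before drawing any conclusion.

API_PRICE_PER_1K_TOKENS = 0.01      # hypothetical hosted-API price, USD
SELF_HOST_FIXED_MONTHLY = 20_000.0  # hypothetical GPUs, staff, energy, USD
SELF_HOST_PER_1K_TOKENS = 0.002     # hypothetical marginal serving cost

def monthly_cost_api(tokens):
    return tokens / 1000 * API_PRICE_PER_1K_TOKENS

def monthly_cost_self_host(tokens):
    return SELF_HOST_FIXED_MONTHLY + tokens / 1000 * SELF_HOST_PER_1K_TOKENS

# Break-even volume: fixed cost divided by the per-token rate difference.
break_even = SELF_HOST_FIXED_MONTHLY / (
    (API_PRICE_PER_1K_TOKENS - SELF_HOST_PER_1K_TOKENS) / 1000)

print(f"break-even at {break_even / 1e9:.1f}B tokens/month")
```

Below the break-even volume the proprietary API is cheaper despite its higher unit price, which is exactly why smaller organizations gravitate toward hosted services.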

Gap in “User-friendliness” and Commercial Readiness

Another significant limitation of open-source models is often a lower level of “commercial readiness” compared to proprietary solutions.

“Open-source models are like cars delivered in parts—powerful, but requiring specialized knowledge to assemble and run,” explains Dr. Lisa Wang from the Digital Transformation Institute. “In contrast, proprietary APIs are like limousines with chauffeurs—less flexible, but ready to use immediately and much more accessible to non-technical users.”

This gap in user-friendliness includes:

  • Lack of friendly user interfaces and documentation for non-technical users
  • Limited monitoring and management tools
  • Less developed security features and content filtering
  • Limited technical support and auxiliary resources

Competency Gaps in Organizations

Even when an organization can overcome infrastructure barriers, it often encounters competency gaps—a lack of specialists who could effectively deploy and manage open-source models.

“Demand for specialists who understand how to deploy, fine-tune, and operate large language models dramatically exceeds supply,” notes Sarah Chen, a recruiter specializing in AI. “This competency gap is currently one of the main limitations to open-source model adoption in medium and small organizations.”

Research conducted by IEEE shows that over 70% of organizations interested in deploying open-source models cite lack of appropriate competencies as the main barrier, and 63% report difficulties in recruiting specialists with appropriate qualifications.

Convergence of Business Models – A Hybrid Future for AI

As the AI market matures, an interesting trend of convergence is emerging—the blurring of boundaries between open-source and proprietary approaches. This evolution suggests that the industry’s future may belong to hybrid models combining elements of both approaches.

Trend 1: Proprietary Models with Open Weights

The first trend is the growing inclination of proprietary providers to partially open their models, particularly by releasing model weights (parameters) while maintaining control over other aspects.

“We’re observing a new category that could be called ‘pseudo-open’—models whose weights are publicly available, but other aspects, such as training data or architecture, remain proprietary,” explains Dr. James Wilson from Oxford AI Ethics Lab. “This hybrid approach allows for greater transparency and adaptation possibilities while preserving certain competitive advantages for the creators.”

Examples of this trend include:

  • Claude Weights – Anthropic is experimenting with releasing weights of smaller versions of its model while maintaining a proprietary API for the latest versions
  • OpenAI GPT-3.5 Weights – OpenAI released weights of an older version of its model as a gesture toward greater transparency
  • Cohere Command-R – the company released Command-R model weights while maintaining a proprietary API for the flagship Command+ model

Trend 2: Commercialization in the Open-source Ecosystem

In parallel, the open-source AI ecosystem is increasingly professionalizing and commercializing, with sophisticated business models extending far beyond the traditional approach to open-source software.

“The open-source AI ecosystem has undergone a dramatic evolution from hobbyist projects to a highly professional environment with advanced monetization strategies,” notes Robert Kim, founder of the Open Source AI Coalition. “Contemporary open-source AI projects often have well-funded startups or large corporations behind them, with clear commercialization strategies.”

This professionalization includes:

  • Venture-backed startups building businesses around open-source models
  • Sophisticated enterprise offerings based on open-source cores
  • Model management platforms combining open-source and proprietary solutions

Trend 3: Market Specialization and Segmentation

The third key trend is progressive specialization and market segmentation, with different business models dominating in different segments.

“Instead of one winner dominating the entire market, we’ll likely see an ecosystem of specialized solutions responding to different needs and use cases,” predicts Maria Chen, an analyst at Goldman Sachs. “The open-source approach may dominate in some segments, while proprietary models will be preferred in others.”

This segmentation is already crystallizing along several key dimensions:

Segmentation by Organization Size

  • Enterprise – large organizations often prefer a hybrid approach, combining proprietary APIs for standard applications with dedicated open-source models for critical workloads
  • Mid-market – medium-sized companies often use proprietary APIs due to lower entry barriers and predictable costs
  • Startups and individual developers – often prefer open-source models due to flexibility and control

Segmentation by Application Domain

  • Safety-critical applications (e.g., medicine, finance) – advantage of open-source models due to the possibility of full validation and control
  • Creative and generative applications – dominance of proprietary models, which often offer higher quality of generated content
  • Programming applications – growing dominance of open-source models that can be deeply integrated with development tools

Geographic Segmentation

  • North America – balanced mix of proprietary and open-source solutions
  • Europe – preference for open-source models due to privacy regulations and digital sovereignty
  • Asia – regional models often based on open-source with local adaptations

Global Implications – Democratization of AI or New Divisions?

The rivalry between open-source and proprietary models has profound implications extending far beyond technology and business, touching on issues of global technological equality, digital sovereignty, and access to AI.

Democratization of AI or New Forms of Exclusion?

One of the main arguments for open-source models is their potential to democratize access to advanced AI. However, the real picture is much more complex.

“The mere availability of open-source code doesn’t automatically mean democratization of access to AI,” notes Dr. Elena Rodriguez from the Digital Inclusion Institute. “Infrastructure, competency, and capital barriers can create new forms of exclusion, even when the technology itself is theoretically available for free.”

Research conducted by the United Nations Technology Innovation Lab shows that despite the availability of open-source models, actual adoption of advanced AI remains heavily concentrated in wealthy regions with developed technological infrastructure and high human capital.

At the same time, however, open-source models enable local adaptation and development of AI tailored to the specific needs of regions and communities that might be overlooked by proprietary, mainstream solutions.

Technological Sovereignty versus Global Standardization

Another key issue concerns the tension between the pursuit of technological sovereignty and the benefits of global AI standardization.

“Open-source models allow countries and regions to build their own AI ecosystems without dependence on a few global corporations, which has fundamental importance for digital sovereignty,” explains Prof. Thomas Chen from the Beijing Institute of Technology Policy. “At the same time, technological fragmentation can hamper global cooperation and lead to inefficiencies in AI development.”

This tension is particularly visible in the context of regional AI initiatives:

  • The European Union intensively supports the development of European open-source models, such as Mistral and BLOOM, as part of its digital sovereignty strategy
  • China is developing its own base models, both open-source (Wu Dao, Baichuan) and proprietary (Ernie Bot)
  • India has announced a national AI initiative based on open-source models adapted to Indian languages and cultural contexts

This regionalization of AI may lead to the emergence of separate technological ecosystems, potentially complicating global cooperation and interoperability. At the same time, it may foster the development of more diverse and locally adapted AI solutions.

Environmental Impact and Sustainable Development

The environmental aspects of training and deploying large AI models cannot be overlooked. Different approaches—open-source versus proprietary—have different implications for the carbon footprint of AI.

“On one hand, centralization of training in a few large data centers of proprietary providers can be more energy-efficient,” notes Dr. Maria Garcia from the Climate Tech Institute. “On the other hand, distributed deployments of open-source models often utilize existing infrastructure and can be better tailored to local needs, potentially reducing unnecessary computations.”

Research from the MIT Environmental Solutions Initiative indicates that the total environmental impact depends on specific deployment patterns:

  • Central APIs may be more energy-efficient per token but generate more “unnecessary” computations due to standardization
  • Local deployments of open-source models may be less efficient per token but better tailored to actual needs, reducing unnecessary computations
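The trade-off described above is, at bottom, a product of two factors: energy per token and the fraction of tokens that are wasted. A toy calculation makes that dependence explicit; all figures here are invented for illustration and carry no empirical weight.

```python
# Toy illustration of the efficiency-versus-waste trade-off above.
# Every number is a made-up placeholder, not a measured value.

CENTRAL_J_PER_TOKEN = 1.0  # hypothetical joules/token in an optimized DC
LOCAL_J_PER_TOKEN = 1.5    # hypothetical joules/token on local hardware

USEFUL_TOKENS = 1_000_000
CENTRAL_OVERHEAD = 0.40    # extra tokens from generic one-size-fits-all use
LOCAL_OVERHEAD = 0.10      # a tailored local model wastes less output

central_energy = CENTRAL_J_PER_TOKEN * USEFUL_TOKENS * (1 + CENTRAL_OVERHEAD)
local_energy = LOCAL_J_PER_TOKEN * USEFUL_TOKENS * (1 + LOCAL_OVERHEAD)

# With these particular numbers the less efficient local deployment still
# consumes more energy overall, showing the outcome hinges entirely on the
# overhead estimates rather than on any general law.
print(central_energy, local_energy)
```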

Innovative hybrid approaches are also emerging on the horizon, such as federated learning and distributed deployments, which have the potential to combine the advantages of both approaches from an environmental perspective.

Future Perspectives – Coevolution Instead of Absolute Dominance

Looking to the future, experts predict further coevolution of proprietary and open-source models, leading to a more complex and diverse AI ecosystem than a simple dominance of one approach.

Hybrid Ecosystem as the Likely Scenario

“Instead of one business model triumphing over the other, a stabilization of the ecosystem is more likely, where different approaches coexist, compete, and complement each other,” predicts Dr. Robert Kim from the Future of AI Institute. “This dynamic balance may actually drive innovations faster than the dominance of a single paradigm.”

In this ecosystem, several key trends can be anticipated:

Functional Specialization of Models

  • Proprietary models may dominate in applications requiring the highest quality of generated content, particularly in creative and consumer domains
  • Open-source models may gain an advantage in applications requiring control, transparency, and integration with existing systems

Cyclical Innovation Dynamics

The history of technology shows a characteristic pattern of cyclical innovations between open and closed systems, where periods of intense proprietary innovation alternate with phases of open collaboration.

“We observe an almost rhythmic pattern in technology history—initially closed systems are disrupted by open alternatives, which over time themselves evolve toward more closed ecosystems, creating space for a new wave of open solutions,” explains Prof. Sarah Johnson, a technology historian from MIT.

This pattern was visible in the evolution of operating systems (from proprietary Unix, through open Linux, to semi-open commercial distributions), programming languages, and may repeat in the AI domain.

Regional Differentiation

Different regions may develop their own balances between proprietary and open-source models, reflecting their specific priorities, regulatory contexts, and economic strategies:

  • Europe may shift toward greater dominance of open-source models, in line with data sovereignty and transparency priorities
  • The United States may maintain a strong proprietary ecosystem, balanced by a dynamic open-source community
  • China may develop its own hybrid ecosystem with strong state participation in both models

Breakthrough Innovations from Outside the Western Bubble

One of the most exciting aspects of AI democratization through open-source models is the potential for breakthrough innovations coming from outside traditional technology centers.

“The history of technology teaches us that breakthrough innovations often come from unexpected places, from people and communities with unique perspectives and experiences,” notes Dr. James Wong from the Innovation Studies Centre. “Open-source models enable researchers and developers from around the world to make significant contributions to AI development, which may lead to concepts and applications that would never emerge in mainstream corporate laboratories.”

We’re already seeing promising examples of such innovations:

  • BLOOM – a multilingual model developed by a global consortium of researchers, with special emphasis on African languages traditionally overlooked by mainstream models
  • Jais – an Arabic-language open-source model developed in the UAE, which outperforms Western models in processing Arabic texts
  • Airavata – an Indian open-source model specially designed for India’s 22 constitutionally recognized languages

These regional innovations may not only fill gaps left by mainstream models but also contribute to fundamental progress in understanding and designing AI systems.

Responsible AI – Shared Responsibility, Different Approaches

Regardless of the business model, a key challenge for the entire AI industry remains ensuring that this powerful technology is developed and deployed in a responsible, ethical manner that benefits society as a whole.

Transparency as the Foundation of Trust

“Regardless of whether we’re talking about proprietary or open-source models, transparency must be the foundation of responsible AI development,” emphasizes Dr. Elena Rodriguez from the AI Ethics Institute. “The difference lies in how this transparency is realized in different business models.”

Open-source models offer, by definition, greater transparency of architecture and weights but may be less transparent about training data. Conversely, proprietary providers may offer carefully documented validation and audit processes while hiding the technical details of their models.

New hybrid approaches, such as “model cards” (detailed documentation of model capabilities and limitations) or “datasheets for datasets” (transparent documentation of data collections), show that a certain level of transparency is possible regardless of the business model.
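A model card is, in practice, just a structured record that can be checked programmatically. The sketch below shows one possible shape; the field names follow the spirit of the model-card idea but are an illustrative subset, not any standard schema, and the model name and scores are invented.

```python
# Minimal sketch of a "model card" as a structured record. Fields and
# values are illustrative placeholders, not a real model's documentation.

model_card = {
    "model_name": "example-7b-instruct",       # hypothetical model
    "license": "research and non-commercial",  # mirrors licenses discussed above
    "intended_use": ["summarization", "Q&A"],
    "out_of_scope": ["medical diagnosis", "legal advice"],
    "training_data": "undisclosed web corpus (open-weight, closed-training)",
    "evaluations": {"MMLU": 0.71, "GSM8K": 0.64},  # placeholder scores
    "known_limitations": ["hallucinations", "English-centric"],
}

def audit(card, required):
    """Report which transparency fields a card is missing."""
    return [field for field in required if field not in card]

missing = audit(model_card, ["license", "training_data", "energy_use"])
print(missing)  # a card can exist and still be incomplete
```

Machine-checkable cards of this kind are one concrete way the transparency requirements of regulations like the EU AI Act could be audited at scale, regardless of whether the underlying model is open or proprietary.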

Collective Responsibility and Collaboration

Ultimately, ensuring responsible AI development requires collaboration and collective responsibility, transcending the open-source versus proprietary dichotomy.

“Responsible AI is not a technical problem that can be solved by choosing the right business model,” summarizes Prof. Thomas Lee from Stanford HAI. “It’s a complex social challenge requiring engagement from diverse stakeholders—from technology companies, through researchers and regulators, to civil society and ordinary users.”

This collective responsibility manifests in initiatives such as:

  • International research consortia, like Partnership on AI, bringing together different ecosystem actors
  • Multidisciplinary approaches to AI design and deployment, incorporating technological, social, ethical, and legal perspectives
  • Proactive collaboration with regulators to shape responsible legal frameworks for AI

Conclusions – Evolution Instead of Revolution

The dynamic rivalry between open-source and proprietary models is not a zero-sum game where one business model must triumph at the expense of the other. Instead, we are witnessing a complex evolution of the AI ecosystem, where different approaches find their niches, compete, and inspire each other.

“The history of technology is rarely a simple story of absolute dominance by one business model,” summarizes Dr. Sarah Chen from the Technology Policy Institute. “We typically observe coevolution of different approaches that mutually shape and drive innovations. Similarly in AI, both open-source and proprietary models play key roles in the ecosystem, satisfying different needs and use cases.”

This coevolution not only drives technological innovations but also contributes to building a more diverse and resilient AI ecosystem. In a world where artificial intelligence is becoming a fundamental technology shaping our society and economy, such diversity of approaches may be not just desirable but essential.

Ultimately, the real question is not “which business model will win?” but “how can we best utilize different business models to maximize social benefits and minimize risks associated with AI?” The answer to this question will shape not only the future of the AI market but also how this transformative technology will impact our lives and society.