From Silicon Valley to sovereign stacks: The global AI power shift
As large language models struggle with bias and multilingual performance, small and community-driven AI projects offer an alternative path.

Written by Navanwita Sachdev
Just as there can be no single human being without bias and context, there probably can never be an Artificial Intelligence (AI) model that keeps everyone content. Challenges remain in ensuring fair representation in AI training data and in balancing government involvement. As these developments unfold, the future of global technology governance will depend on collaboration, innovation, and a commitment to inclusivity across linguistic and cultural boundaries.
As per The Algorithm, at RightsCon, a digital rights conference held in Taiwan, civil society organizations from around the world, including the US, discussed the loss of US government funding for global digital rights work. Discussions included observations that American big tech companies, which have users far beyond US borders, seem to be reconsidering how much they engage with and invest in smaller user bases, particularly non-English-speaking communities. So much so that policymakers and business leaders in Europe are rethinking their reliance on US-based tech, giving rise to a new interest in developing homegrown alternatives, particularly for AI.
For example, India is preparing to roll out an indigenous Indian web browser, with three homegrown projects selected for productization under the Ministry of Electronics and Information Technology's (MeitY) Indian Web Browser Development Challenge (IWBDC). Union Minister for Electronics and Information Technology Ashwini Vaishnaw selected three projects, submitted by Zoho Corporation, Team PING, and Team Ajna, to build a browser that meets India's security, language, and usability needs.
Some political scientists argue that the policy shift under US President Donald Trump's administration has pushed the country toward "competitive authoritarianism". Many see the loss of US government funding as a significant setback for global digital rights work.

Social media content moderation systems already use automation and large language models (LLMs), but they are falling short. They have failed, for example, to detect gender-based violence in India, South Africa, and Brazil. If platforms rely even more on LLMs for moderation, that detection gap is likely to widen: LLMs are already being used to moderate content even though they handle these cases poorly.
“LLMs are helpful for flagging patterns, detecting spam, and offering human moderation teams a scalable assist, but they’re not yet nuanced enough for judgment-based calls. The trade-off between over-moderation and under-protection is real. You need models that can learn domain-specific boundaries and cultural nuances or which can mitigate bias. This again brings us closer to the value SLMs could offer,” says Ruban Phukan, CEO of GoodGist, an agentic AI company, referring to small language models.
The problem arises largely because many AI systems are trained primarily on English-language data, particularly American English. A study found that ChatGPT performed worse in Chinese and Hindi than in English and Spanish for health-related queries. Multilingual language models perform poorly with non-Western languages.
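One contributing factor, offered here as an illustration rather than a finding from the study, is how text is encoded before a model ever sees it. Byte-level encodings such as UTF-8 spend one byte per English character but three per Devanagari character, so the same amount of Hindi text turns into a much longer sequence of units. A quick sketch (the example sentences are made up for illustration):

```python
# Illustrative sketch: UTF-8 uses 1 byte per English character but 3 per
# Devanagari character, so byte-oriented tokenizers see far longer
# sequences for Hindi than for comparable English text -- one factor
# behind weaker multilingual performance.
english = "How do I treat a fever?"
hindi = "बुखार का इलाज कैसे करें?"  # rough Hindi equivalent (assumed example)

for label, text in [("English", english), ("Hindi", hindi)]:
    chars = len(text)                      # Unicode code points
    nbytes = len(text.encode("utf-8"))     # bytes the encoder produces
    print(f"{label}: {chars} chars -> {nbytes} UTF-8 bytes "
          f"({nbytes / chars:.1f} bytes/char)")
```

The English sentence encodes at 1.0 byte per character, while the Hindi one needs roughly 2.6, meaning byte-budgeted context windows and tokenizers trained mostly on English systematically shortchange such scripts.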
“As LLMs get better at detecting the nuances of human language, they will become better tools for content moderation. However, they cannot be left alone, since the number of false positives is likely to skyrocket; we’ve seen this happen with Meta’s new implementation of AI for moderation on Facebook,” explains JD Raimondi, Head of Data Science at Making Sense.
Some support community-driven AI approaches, such as small language models and chatbots. Indian startup Shhor AI has developed a content moderation API for Indian vernacular languages. Other solutions include Mozilla’s volunteer-led effort to collect training data in languages other than English. Lelapa AI is building AI models for African languages.
According to Raimondi, SLMs level the field: most companies can now access a language model for a fraction of the price, without a huge investment in equipment or reliance on other providers, albeit at the cost of lower (but still acceptable) performance.
“They are also easier to train and adapt, cheaper to deploy and change, and simpler to distribute. Furthermore, until hardware gets better, SLMs are the ones that are likely to be present in mobile devices running without an internet connection,” he adds.
It is also worth remembering that shifts in technology occur all the time.
Sumedh Nadendla, a venture capitalist at Pacific Alliance Ventures, points out that the definitions of small and large change over time as technology improves. “What is large today might be small twenty years from now. The DeepSeek-V3 model has 671 billion parameters with a context length of 128,000 right now. I think that’s just going to be the lower base in 20 years. What is LLM today might be SLM 20 years from now,” he says.
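The scale gap Nadendla describes can be made concrete with some rough arithmetic. Assuming (as an illustration, not a figure from the article) a typical ~3-billion-parameter small model and 4-bit weight quantization, the difference between phone-sized and data-center-sized becomes obvious:

```python
# Back-of-the-envelope memory math (illustrative assumptions: weight
# storage only, ignoring activations and KV cache).
def model_memory_gb(params_billion: float, bits_per_weight: int) -> float:
    """Approximate weight storage for a model at a given quantization."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

# A ~3B-parameter small model at 4-bit: about 1.5 GB -> phone-friendly.
print(f"3B   @ 4-bit: {model_memory_gb(3, 4):.1f} GB")
# DeepSeek-V3's 671B parameters, even at 4-bit: about 335 GB -> data center.
print(f"671B @ 4-bit: {model_memory_gb(671, 4):.1f} GB")
```

On these assumptions, the small model fits comfortably in a phone's memory while the frontier model needs hundreds of gigabytes, which is why, as Raimondi notes, SLMs are the ones likely to run offline on mobile devices for now.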
AI has also become the new competitive arena of global tech. The recent Paris AI Summit involved major power plays, an indication that AI has become a source of geopolitical competition more than a cause for international cooperation and multilateral governance. France's President Emmanuel Macron clearly used the summit as a platform to depict France as Europe's AI leader and the star host of the summit.
However, European companies that are spending big on generative AI will need to start showing returns on their massive outlays by next year, because risk investors who have paid sky-high prices to join the market boom could lose patience.
Andy Yen, CEO of Proton, recently said that Trump’s policies are accelerating Europe’s push for tech sovereignty. In fact, several countries have announced “sovereign AI” initiatives, looking to keep their data away from the US. The European Union appointed its first commissioner for tech sovereignty, security, and democracy in November. Europe is working on the “Euro Stack,” a digital infrastructure initiative. The Indian government has developed “India Stack,” a digital infrastructure system that includes Aadhaar. Dutch lawmakers recently passed motions to reduce dependence on US tech providers.
But are governments the right parties to decide which languages and perspectives should be included in AI training data?
“Yes... but carefully,” replies Phukan. “Governments should help ensure transparency, access to diverse language datasets, and create safety regulations. But heavy-handed involvement can risk slowing innovation. A supportive policy environment that promotes open-source SLMs, especially in underrepresented languages, could be transformative. We need guardrails, not gatekeepers,” he adds.
Government involvement in language model development could be problematic if it determines which languages or perspectives are prioritized.
According to a study, minimal regulation for AI never maximizes actual consumer welfare. For example, the study points out that China and the EU use a top-down command-and-control approach to AI regulation, while Russia and the UK use bottom-up industry self-regulation based on AI ethics; AI policy for these countries therefore cannot be the same. The study advises that stringent AI regulation should be chosen under high or low foreign competition, while looser regulation is recommended under intermediate foreign competition.
So, both context and global perspective matter for an AI model. And we may have no choice but to keep working on one indefinitely as the human race continues to evolve.