
Sovereign AI gets boost from new NVIDIA microservices

By Simon Osuji
August 27, 2024
in Artificial Intelligence


To ensure AI systems reflect local values and regulations, nations are increasingly pursuing sovereign AI strategies: developing AI utilising their own infrastructure, data, and expertise. NVIDIA is lending its support to this movement with the launch of four new NVIDIA Neural Inference Microservices (NIM).

These microservices are designed to simplify the creation and deployment of generative AI applications, supporting regionally-tailored community models. They promise deeper user engagement through an enhanced understanding of local languages and cultural nuances, leading to more accurate and relevant responses.

This move comes amidst an anticipated boom in the Asia-Pacific generative AI software market. ABI Research forecasts a surge in revenue from $5 billion this year to a staggering $48 billion by 2030.

Among the new offerings are two regional language models: Llama-3-Swallow-70B, trained on Japanese data, and Llama-3-Taiwan-70B, optimised for Mandarin. These models are designed to possess a more thorough grasp of local laws, regulations, and cultural intricacies.

Further bolstering the Japanese language offering is the RakutenAI 7B model family. Built upon Mistral-7B and trained on both English and Japanese datasets, they are available as two distinct NIM microservices for Chat and Instruct functions. Notably, Rakuten’s models have achieved impressive results in the LM Evaluation Harness benchmark, securing the highest average score among open Japanese large language models between January and March 2024.

Training LLMs on regional languages is crucial for improving output quality. By accurately reflecting cultural and linguistic subtleties, these models enable more precise and nuanced communication. Compared to base models like Llama 3, these regional variants demonstrate superior performance in understanding Japanese and Mandarin, handling regional legal tasks, answering questions, and translating and summarising text.

This global push for sovereign AI infrastructure is evident in significant investments from nations including Singapore, the UAE, South Korea, Sweden, France, Italy, and India.

“LLMs are not mechanical tools that provide the same benefit for everyone. They are rather intellectual tools that interact with human culture and creativity. The influence is mutual where not only are the models affected by the data we train on, but also our culture and the data we generate will be influenced by LLMs,” said Rio Yokota, professor at the Global Scientific Information and Computing Center at the Tokyo Institute of Technology.

“Therefore, it is of paramount importance to develop sovereign AI models that adhere to our cultural norms. The availability of Llama-3-Swallow as an NVIDIA NIM microservice will allow developers to easily access and deploy the model for Japanese applications across various industries.”

NVIDIA’s NIM microservices enable businesses, government bodies, and universities to host native LLMs within their own environments. Developers benefit from the ability to create sophisticated copilots, chatbots, and AI assistants. Available with NVIDIA AI Enterprise, these microservices are optimised for inference using the open-source NVIDIA TensorRT-LLM library, promising enhanced performance and deployment speed. 
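In practice, a self-hosted NIM microservice is typically consumed over an OpenAI-compatible HTTP API. The minimal sketch below, using only the Python standard library, shows what a client call might look like; the endpoint URL and model identifier are illustrative assumptions, not details from this article.

```python
# Hedged sketch: a deployed NIM microservice generally exposes an
# OpenAI-compatible chat-completions route on the host running it.
# The endpoint URL and model name below are assumed placeholders.
import json
import urllib.request

NIM_ENDPOINT = "http://localhost:8000/v1/chat/completions"  # assumed local deployment

def build_chat_request(model: str, prompt: str, max_tokens: int = 256) -> dict:
    """Build an OpenAI-style chat-completions payload for a self-hosted NIM."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

def ask(model: str, prompt: str) -> str:
    """POST the payload to the local microservice and return the reply text."""
    req = urllib.request.Request(
        NIM_ENDPOINT,
        data=json.dumps(build_chat_request(model, prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

# Usage (requires a running microservice; model name is a placeholder):
#   reply = ask("llama-3-swallow-70b-instruct", "日本の祝日を3つ教えてください。")
```

Because the interface mirrors the OpenAI chat-completions schema, existing client tooling can usually be pointed at the local endpoint with only a base-URL change.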

Performance gains are evident with the Llama 3 70B NIM microservice (the base for the new Llama-3-Swallow-70B and Llama-3-Taiwan-70B offerings), which boasts up to 5x higher throughput. This translates into reduced operational costs and improved user experiences through minimised latency.

(Photo by BoliviaInteligente)


Tags: ai, artificial intelligence, development, llm, microservices, nim, Nvidia, sovereign ai


