Image credit: ARM/AI-generated | Adobe Stock

Chinese scientists have leveraged Llama, Meta’s publicly available large language model (LLM), to train a new AI model specialising in military applications.

In an exclusive report published earlier today [November 1, 2024], Reuters said that top Chinese research institutions associated with the People’s Liberation Army (PLA) used the public Llama model to develop an AI tool called ChatBIT for military applications.

ChatBIT appears to have outperformed several other AI models, according to a research paper reviewed exclusively by Reuters.

The study claims that the new Chinese AI tool outperformed some other AI models that were nearly 90% as capable as OpenAI’s powerful ChatGPT-4.

However, the paper did not define how ChatBIT’s performance was measured, nor did it specify whether the model has been put into operation.

ChatBIT Trained on Small Dataset

The paper noted that ChatBIT was fine-tuned and "optimised for dialogue and question-answering tasks in the military field.”

Image Credit: Llama - DIVERSITY (AI-gen) | Adobe Stock. Meta - rarrarorro | Adobe Stock

"In the future, through technological refinement, ChatBIT will not only be applied to intelligence analysis but also ... strategic planning, simulation training and command decision-making will be explored," the study stated.

ChatBIT’s full set of capabilities and computing power remains unknown. The researchers reported that the model incorporated only around 100,000 military dialogue records, an extremely small dataset compared with the volumes of data on which other large language models (LLMs) are trained.

"That's a drop in the ocean compared to most of these models (that) are trained with trillions of tokens so … it really makes me question what do they actually achieve here in terms of different capabilities," expressed Joelle Pineau, a vice president of AI Research at Meta and a professor of computer science at McGill University in Canada.

The study’s six authors include Geng Guotong and Li Weiwei from the Academy of Military Science’s (AMS) Military Science Information Research Center and the National Innovation Institute of Defense Technology, as well as academics from the Beijing Institute of Technology and Minzu University.

AI for Warfare Technologies

Sunny Cheung, associate fellow at the Jamestown Foundation specialising in China's emerging and dual-use technologies, including AI, told Reuters:

"It's the first time there has been substantial evidence that PLA military experts in China have been systematically researching and trying to leverage the power of open-source LLMs, especially those of Meta, for military purposes.”

Meta imposes restrictions on its open-source AI models like Llama to prevent misuse. Users must request a licence to gain access to Meta’s AI services.

The tech giant prohibits the use of its models for military, warfare, nuclear industries or applications, and espionage, as well as for activities subject to US defence export controls, weapons development, and any other content intended to “incite and promote violence.”

Such concerns about how generative AI models could be coaxed into producing aggressive or warfare-related content continue to surface in US courts.

Adept users often find a way around such limitations. 

For instance, EM360Tech reported in September on a hacker who tricked OpenAI’s ChatGPT into bypassing its ethical guidelines and producing instructions for making an explosive fertilizer bomb.

Recently, US President Joe Biden finalised restrictions to curb US investment in China’s advanced technology sector.

The investment ban will apply to Chinese AI, semiconductor and quantum computing technologies, with the aim of preventing US money from funding Chinese technology that might be backing the military.

William Hannas, lead analyst at Georgetown University's Center for Security and Emerging Technology (CSET), told Reuters:

"Can you keep them (China) out of the cookie jar? No, I don't see how you can. There is too much collaboration going on between China's best scientists and the U.S.' best AI scientists for them to be excluded from developments.”