Alibaba Cloud Open-Sources More LLMs with Diverse Sizes and Multimodal Features

Alibaba Cloud, the digital technology and intelligence backbone of Alibaba Group, announced that it has open-sourced two large language models (LLMs), Qwen-72B and Qwen-1.8B, the 72-billion-parameter and 1.8-billion-parameter versions of its proprietary foundation model Tongyi Qianwen, on its artificial intelligence (AI) model community ModelScope and on the collaborative AI platform Hugging Face.

In addition, Alibaba Cloud has made available more multimodal LLMs, including Qwen-Audio, a pre-trained audio understanding model, and Qwen-Audio-Chat, its conversationally fine-tuned version, for research and commercial purposes.

As of today, the cloud computing pioneer has contributed LLMs in a range of sizes, with parameter counts of 1.8B, 7B, 14B, and 72B, as well as multimodal LLMs with audio and visual understanding capabilities.

“Building up an open-source ecosystem is critical to promoting the development of LLMs and the building of AI applications. We aspire to become the most open cloud and make generative AI capabilities accessible to everyone,” said Jingren Zhou, CTO at Alibaba Cloud. “To achieve that goal, we’ll continue to share our cutting-edge technology and facilitate the development of the open-source community together with our partners.”

Pre-trained on over 3 trillion tokens, the 72-billion-parameter model outperforms other major open-source models on ten benchmarks, including Massive Multitask Language Understanding (MMLU), which measures a model’s multitask accuracy; HumanEval, which tests code generation capabilities; and GSM8K, a benchmark of arithmetic problems.

Qwen-72B outperforms other major open-source models in ten benchmarks

Accomplishing Even Intricate Tasks

The model also exhibits proficiency in tackling a variety of intricate tasks, including role-playing and language style transfer, that is, the ability of the LLM to assume a specific role or persona and generate contextually relevant responses consistent with that persona. Such features can be useful in AI applications such as personalised chatbots.

Companies and research institutions can access the Qwen-72B model’s code, model weights, and documentation, and use them free of charge for research purposes. For commercial use, the models are free for companies with fewer than 100 million monthly active users.
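As a rough illustration, the sketch below loads the released weights with the Hugging Face transformers library; the repository name Qwen/Qwen-72B and the trust_remote_code flag follow the public model card, but readers should check the card for exact requirements, since the full-precision 72B weights need multiple high-memory GPUs.

from transformers import AutoModelForCausalLM, AutoTokenizer

# Repository name as listed on Hugging Face; the same weights are mirrored on ModelScope.
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen-72B", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen-72B",
    device_map="auto",  # shard the 72B weights across available GPUs
    trust_remote_code=True,
).eval()

inputs = tokenizer("Large language models are", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))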

Alibaba Cloud also announced that it has open-sourced the 1.8-billion-parameter version of its LLM, which can run at the edge. The lightweight LLM enables inference on edge devices with constrained computational resources, making it possible to deploy on end devices such as mobile phones.

The smaller-sized version, with its lower computing resource requirements, can be useful for individuals looking for a more cost-effective, easy-to-deploy way to use LLMs. The 1.8B model is currently available for research purposes only.
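A similar sketch in half precision gives a sense of the smaller model’s memory footprint; the repository name Qwen/Qwen-1_8B is taken from the Hugging Face model card at the time of writing, and the size figure is approximate.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen-1_8B", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen-1_8B",
    torch_dtype=torch.float16,  # roughly 3.5 GB of weights in fp16, within reach of small GPUs
    device_map="auto",
    trust_remote_code=True,
).eval()

inputs = tokenizer("On-device language models can", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=48)[0], skip_special_tokens=True))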

To offer LLMs that can process a greater variety of input formats, Alibaba Cloud also announced that it has open-sourced Qwen-Audio and Qwen-Audio-Chat, two models with enhanced audio understanding capabilities, for research and commercial purposes.

Qwen-Audio can understand text and audio input in diverse formats, including human speech, natural sound, and music, and produce text as output. It is capable of performing over 30 audio processing tasks, such as multi-language transcription, speech editing, and audio caption analysis. Its conversationally fine-tuned version, Qwen-Audio-Chat, supports multiple rounds of question answering based on the audio and performs diverse audio-oriented tasks, such as detecting emotions and tones in human speech.
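The conversational audio model is meant to be used through a chat-style interface. The sketch below follows the usage pattern published in the Qwen-Audio repository; the audio path is a placeholder, and the helper methods (from_list_format, chat) come from the model’s own remote code, so the exact interface may change.

from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen-Audio-Chat", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen-Audio-Chat", device_map="auto", trust_remote_code=True
).eval()

# First turn: ask a question about a local audio clip (placeholder path).
query = tokenizer.from_list_format([
    {"audio": "example.wav"},
    {"text": "What is the speaker saying, and what emotion do you hear?"},
])
response, history = model.chat(tokenizer, query=query, history=None)
print(response)

# Second turn: follow up on the same clip, reusing the conversation history.
response, history = model.chat(tokenizer, "Transcribe the speech word for word.", history=history)
print(response)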

A Continuing Commitment to Offering LLMs

The initiative marks another step by Alibaba Cloud to offer the open-source community multimodal large language models that can understand data types beyond text. Earlier this year, it announced the launch of the open-source large vision language model Qwen-VL and its chat version Qwen-VL-Chat, which can understand visual information and perform visual tasks.

The open-sourced LLMs, including Qwen-7B, Qwen-14B, and Qwen-VL and their conversationally fine-tuned versions, have been downloaded a combined total of more than 1.5 million times on Alibaba Cloud’s open-source AI model community ModelScope and on Hugging Face since August. ModelScope has become the largest AI model community in China, boasting over 2.8 million active developers and more than 100 million model downloads to date.

For more information, please check out the details of Qwen-72B and Qwen-1.8B on their ModelScope, Hugging Face, and GitHub pages.

A demo of Qwen-Audio is also available online.
