Chinese scientists from the Beijing Academy of Artificial Intelligence (BAAI), along with dozens of colleagues from other organizations, have created what they describe as the largest natural language processing (NLP) model to date, surpassing comparable models from Google and OpenAI. The generative deep-learning model WuDao 2.0 was created as part of China's push to increase its technological competitiveness on the world stage.
WuDao 2.0 is a pre-trained AI model that uses 1.75 trillion parameters. For comparison, the GPT-3 model from OpenAI uses 175 billion parameters, and the Switch Transformer presented by Google in January uses 1.6 trillion. Parameters are the variables a machine-learning model learns during training; as a rule, models with more parameters can capture more knowledge and achieve higher accuracy on a wider range of tasks.
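To make the headline numbers concrete, here is a minimal sketch of how parameter counts like these are tallied. The layer sizes below are hypothetical and are not taken from WuDao 2.0 or GPT-3; the point is only that each dense layer contributes a weight matrix plus a bias vector, and the totals add up quickly.

```python
def count_parameters(layer_sizes):
    """Count weights and biases in a stack of dense layers.

    layer_sizes: e.g. [768, 3072, 768] describes two dense layers,
    one mapping 768 -> 3072 units and one mapping 3072 -> 768.
    """
    total = 0
    for in_dim, out_dim in zip(layer_sizes, layer_sizes[1:]):
        # Each layer holds an (in_dim x out_dim) weight matrix
        # plus an out_dim-long bias vector.
        total += in_dim * out_dim + out_dim
    return total

# A toy two-layer feed-forward block (hypothetical sizes):
print(count_parameters([768, 3072, 768]))  # → 4722432
```

A single small block like this already holds millions of parameters; models such as GPT-3 or WuDao 2.0 stack hundreds of far wider layers to reach hundreds of billions or trillions.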
The developers demonstrated how WuDao 2.0 uses its capabilities to simulate conversational speech, write poems, recognize images, and generate text. According to reports, WuDao 2.0 was trained in English and Chinese on 4.9 TB of images and text, including 1.2 TB of text in English and Chinese.
"These sophisticated models, trained on gigantic data sets, only require a small amount of new data to master a specific capability, because they can transfer knowledge they have already acquired to new tasks," said BAAI researcher Blake Yan.