
Microsoft unveils Kosmos-1, a new AI model to rival ChatGPT

Microsoft's AI researchers wrote in a paper that a big convergence of language, multimodal perception, action, and world modelling is a key step toward artificial general intelligence, and introduced Kosmos-1, a Multimodal Large Language Model (MLLM) that can perceive general modalities.

Edited By: India TV Tech Desk, Noida | Published on: March 03, 2023 19:07 IST
Image Source: Microsoft

As the war over artificial intelligence (AI) chatbots heats up, Microsoft has unveiled Kosmos-1, a new AI model. The new model can respond to visual cues or images, in addition to text prompts.


The multimodal large language model (MLLM) can help users with an array of new tasks, including visual question answering and image captioning.
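To give a sense of what such a multimodal prompt can look like, here is a minimal, hypothetical Python sketch; the class and function names are invented for illustration and do not reflect Microsoft's published interface.

```python
# Conceptual sketch (not Microsoft's actual API): a multimodal prompt for an
# MLLM such as Kosmos-1 interleaves an image with text, so the same model can
# handle image captioning or visual question answering.

from dataclasses import dataclass
from typing import List, Union
from PIL import Image


@dataclass
class MultimodalPrompt:
    """A prompt whose segments may be text strings or images."""
    segments: List[Union[str, Image.Image]]


def caption_prompt(image: Image.Image) -> MultimodalPrompt:
    # Image captioning: the image comes first, followed by a text cue.
    return MultimodalPrompt(segments=[image, "An image of"])


def vqa_prompt(image: Image.Image, question: str) -> MultimodalPrompt:
    # Visual question answering: image, then the question, then an answer cue.
    return MultimodalPrompt(segments=[image, f"Question: {question} Answer:"])
```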

Kosmos-1 could pave the way for the next stage beyond ChatGPT's text prompts.


Microsoft's AI researchers wrote in the paper: "A big convergence of language, multimodal perception, action, and world modelling is a key step toward artificial general intelligence. In this work, we introduce Kosmos-1, a Multimodal Large Language Model (MLLM) that can perceive general modalities, learn in context and follow instructions."

The paper further suggested that multimodal perception, or knowledge acquisition and ‘grounding’ in the real world, is needed to move beyond ChatGPT-like capabilities to artificial general intelligence (AGI), reports ZDNet.


The paper further reads, "More importantly, unlocking multimodal input greatly widens the applications of language models to more high-value areas, such as multimodal machine learning, document intelligence, and robotics." 

The goal is to align perception with LLMs so that the models are able to see and talk, according to an IANS report.
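In practice, this kind of alignment is often done by projecting a vision encoder's output features into the language model's embedding space, so that image and text tokens sit in a single sequence. The PyTorch sketch below shows that general idea under assumed dimensions and component names; it illustrates the common MLLM recipe, not Kosmos-1's actual architecture.

```python
# A minimal sketch, assuming a vision encoder that produces patch features and
# a decoder-only language model. The dimensions and the class name are
# illustrative assumptions, not Kosmos-1's real components.

import torch
import torch.nn as nn


class PerceptionAligner(nn.Module):
    def __init__(self, vision_dim: int = 1024, text_dim: int = 2048):
        super().__init__()
        # Linear projection mapping vision features into the LM embedding space.
        self.projection = nn.Linear(vision_dim, text_dim)

    def forward(self, image_features: torch.Tensor,
                text_embeddings: torch.Tensor) -> torch.Tensor:
        # image_features: (batch, num_patches, vision_dim)
        # text_embeddings: (batch, num_tokens, text_dim)
        image_tokens = self.projection(image_features)
        # Prepend the projected image tokens to the text tokens, producing one
        # sequence the language model can attend over ("see and talk").
        return torch.cat([image_tokens, text_embeddings], dim=1)


# Usage: fused = PerceptionAligner()(vision_features, token_embeddings)
```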

Experimental results showed that Kosmos-1 achieved impressive performance on language understanding and generation, and even on tasks where it is fed document images directly.

It also showed good results on perception-language tasks, including multimodal dialogue, visual question answering and image captioning, as well as vision tasks such as image recognition with descriptions (classification specified via text instructions).

The Microsoft team said, "We also show that MLLMs can benefit from cross-modal transfer, i.e., transfer knowledge from language to multimodal, and from multimodal to language. In addition, we introduce a dataset of Raven IQ test, which diagnoses the nonverbal reasoning capability of MLLMs."
