Microsoft's AI makes the Mona Lisa rap. Photo courtesy: X page video grab

Mona Lisa is rapping in a new viral video; here's how Microsoft made it possible with AI

| @indiablooms | Apr 21, 2024, at 07:35 pm

The iconic Mona Lisa is no longer just smiling; she can now sing and even rap, thanks to new artificial intelligence technology unveiled by Microsoft.

Last week, Microsoft researchers detailed a new AI model they have developed that can take a still image of a face and an audio clip of someone speaking and automatically create a realistic-looking video of that person speaking, CNN reported.

The resulting videos are strikingly lifelike, complete with lip-syncing and natural face and head movements.

In one demo video, researchers showed how they animated the Mona Lisa to recite a comedic rap by actor Anne Hathaway, the American news channel reported.

Describing the model, named VASA-1, Microsoft said: "We introduce VASA, a framework for generating lifelike talking faces of virtual characters with appealing visual affective skills (VAS), given a single static image and a speech audio clip. Our premiere model, VASA-1, is capable of not only producing lip movements that are exquisitely synchronised with the audio, but also capturing a large spectrum of facial nuances and natural head motions that contribute to the perception of authenticity and liveliness."

"The core innovations include a holistic facial dynamics and head movement generation model that works in a face latent space, and the development of such an expressive and disentangled face latent space using videos. Through extensive experiments including evaluation on a set of new metrics, we show that our method significantly outperforms previous methods along various dimensions comprehensively. Our method not only delivers high video quality with realistic facial and head dynamics but also supports the online generation of 512x512 videos at up to 40 FPS with negligible starting latency. It paves the way for real-time engagements with lifelike avatars that emulate human conversational behaviours," the company said.
