At Google I/O last year, Google announced that it’s exploring a new technology called MUM (Multitask Unified Model) internally to help its ranking systems better understand language.
Dubbed “a new AI milestone for understanding information,” MUM is designed to make it easier for Google to answer complex needs in search.
Google promised MUM is 1,000 times more powerful than its NLP transfer learning predecessor, BERT.
It uses a model called T5, the Text-To-Text Transfer Transformer, to reframe NLP tasks into a unified text-to-text format and develop a more comprehensive understanding of knowledge and information.
According to Google, MUM can be applied to document summarization, question answering, and classification tasks such as sentiment analysis.
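To see what that text-to-text framing looks like in practice, here is a minimal sketch using the publicly released T5 checkpoints via the Hugging Face Transformers library. MUM itself is not publicly available, so the "t5-small" checkpoint and the example prompts below are purely illustrative; the point is that summarization, translation, and sentiment classification are all expressed as plain text in, plain text out.

```python
# A minimal sketch of T5's text-to-text framing, using the public
# Hugging Face Transformers library. MUM itself is not available;
# "t5-small" and these prompts are illustrative only.
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# Different NLP tasks, all phrased as text with a task prefix.
prompts = [
    "summarize: MUM is a multitask model that Google says can understand "
    "information across languages and formats to answer complex queries.",
    "translate English to German: Search is getting smarter every year.",
    "sst2 sentence: The new search features are genuinely helpful.",  # sentiment
]

for prompt in prompts:
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=40)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The quality of t5-small’s answers isn’t the point; what matters is that one model handles several tasks through the same text-in, text-out interface, which is the foundation Google describes for MUM.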
Clearly, MUM is a major priority inside the Googleplex – and something that important to the search team had better be on the SEO industry’s radar, as well.
But is it a ranking factor in Google’s search algorithms?
The Claim: MUM As A Ranking Factor
Many who read the news about MUM when it was first revealed naturally wondered how it might impact search rankings (especially their own).
Google makes thousands of updates to its ranking algorithms each year, and while the vast majority go unnoticed, some are impactful.
BERT is one such example.
Rolled out worldwide in 2019, it was hailed by Google itself as the most important update in five years.
And sure enough, BERT impacted about 10% of search queries.
RankBrain, rolled out in the spring of 2015, is another example of an algorithmic update that had a substantial impact on the SERPs.
Now that Google is talking about MUM, it’s clear that SEO professionals and the clients they serve should take note.
Roger Montti recently wrote about a patent he believes could provide more insight into MUM’s inner workings.
That makes for an interesting read if you want to take a peek at what may be under the hood.
For now, let’s just consider whether MUM is a ranking factor.
The Evidence For MUM As A Ranking Factor
RankBrain rolled out roughly six months before Google announced it. And most updates aren’t announced or confirmed at all.
However, Google has gotten better at sharing impactful updates before they happen.
For example, the BERT model was first open-sourced in November 2018, rolled out for English-language queries in October 2019, and rolled out worldwide that December.
We had even more time to prepare for the Page Experience signal and Core Web Vitals, which were announced over a year ahead of the eventual rollout in June 2021.
Google has already said MUM is coming and it’s going to be a big deal.
But could MUM be responsible for a rankings drop many sites experienced in the spring and summer of 2021?
The Evidence Against MUM As A Ranking Factor
In his May 2021 introduction to MUM, Pandu Nayak, Google Fellow and Vice President of Search, made it clear that the technology isn’t in play. Not yet, anyway:
“Today’s search engines aren’t quite sophisticated enough to answer the way an expert would. But with a new technology called Multitask Unified Model, or MUM, we’re getting closer to helping you with these types of complex needs. So in the future, you’ll need fewer searches to get things done.”
The timeline given then as to when MUM-powered features and updates would go live was “in the coming months and years.”
When asked whether the industry would get a heads up when MUM goes live in search, Google Search Liaison Danny Sullivan said yes.
Yes, as with BERT, I’m sure we’ll let every know. We won’t be mum on MUM.
— Danny Sullivan (@dannysullivan) May 20, 2021
More recently, Nayak explained how Google is using AI in Search and wrote:
“While we’re still in the early days of tapping into MUM’s potential, we’ve already used it to improve searches for COVID-19 vaccine information, and we’ll offer more intuitive ways to search using a combination of both text and images in Google Lens in the coming months.
These are very specialized applications — so MUM is not currently used to help rank and improve the quality of search results like RankBrain, neural matching and BERT systems do.”
He also added that any future applications of MUM will be subject to a rigorous evaluation process, with special attention paid to the responsible use of AI.
MUM As A Ranking Factor: Our Verdict
Bottom line: Google doesn’t use MUM as a search ranking signal. It’s a language AI model built on Google’s open source neural network architecture, Transformer.
Google will train MUM, as it did BERT, on large datasets and then fine-tune it for specific applications on smaller datasets. This is the approach it’s testing with MUM’s use for improving COVID-19 vaccine search results.
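For a rough feel of that pretrain-then-fine-tune pattern, here is a hedged sketch that adapts a small, publicly available T5 checkpoint to a handful of task-specific examples using Hugging Face’s Trainer. The checkpoint name, the toy question/answer pairs, and the hyperparameters are all hypothetical; Google’s actual MUM pipeline and training data are not public.

```python
# Sketch of the generic pretrain-then-fine-tune pattern, not Google's
# actual MUM pipeline. Checkpoint, data, and settings are illustrative.
import torch
from transformers import (T5Tokenizer, T5ForConditionalGeneration,
                          Trainer, TrainingArguments)

tokenizer = T5Tokenizer.from_pretrained("t5-small")            # pretrained on a large corpus
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# Tiny, hypothetical task-specific dataset: query -> short answer.
pairs = [
    ("question: Is the vaccine free?", "Yes, at participating providers."),
    ("question: How many doses are needed?", "Two doses for most vaccines."),
]

class PairDataset(torch.utils.data.Dataset):
    def __len__(self):
        return len(pairs)

    def __getitem__(self, idx):
        src, tgt = pairs[idx]
        enc = tokenizer(src, truncation=True, padding="max_length",
                        max_length=64, return_tensors="pt")
        labels = tokenizer(tgt, truncation=True, padding="max_length",
                           max_length=16, return_tensors="pt").input_ids
        labels[labels == tokenizer.pad_token_id] = -100  # ignore padding in the loss
        return {"input_ids": enc.input_ids.squeeze(0),
                "attention_mask": enc.attention_mask.squeeze(0),
                "labels": labels.squeeze(0)}

args = TrainingArguments(output_dir="t5-finetuned",
                         num_train_epochs=1,
                         per_device_train_batch_size=2,
                         logging_steps=1)
Trainer(model=model, args=args, train_dataset=PairDataset()).train()
```

The design choice here mirrors what Google describes: the expensive, general-purpose language understanding comes from pretraining, while the small labeled dataset only steers the model toward one narrow application.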
Google has mentioned specific ways in which it may be used in the (near) future, including:
- Surfacing insights based on its deep knowledge of the world.
- Surfacing helpful subtopics for deeper exploration.
- Breaking down language barriers by transferring knowledge across languages.
- Simultaneously understanding information from different formats like webpages, pictures and more.
How will you optimize for MUM?
That remains to be seen.
What is for sure: Google search’s intelligence is growing by leaps and bounds.
As Google’s search algorithms become more sophisticated and better able to determine the intent and nuance of language, attempts at trickery and manipulation will be less and less effective (and likely easier to detect).
With an NLP technology 1,000 times more powerful than BERT on the horizon, optimizing for human experience is more important than ever.
If you want to get ahead of MUM, focus on what your content means for the people whose needs it’s intended to meet.
The machines are inching ever closer to experiencing that content just as your intended reader or viewer does.
Featured Image: Paulo Bobita/