The great 2020 Gmail outage: A tale of two blackouts, and lessons learned

Last week was an unhappy one for Google. It suffered two major outages; the root cause of each could concisely be described as “oops”.

For 47 minutes on Dec. 14, many Google Cloud services appeared to be down. They actually weren't, but nobody could authenticate to them, so they were effectively inaccessible. Then, on Monday and Tuesday, for a combined total of six hours and 41 minutes by Google's count, Gmail bounced emails sent to some gmail.com addresses, saying those addresses did not exist.

The company has now released detailed reports of what went wrong, and they offer lessons for every IT shop. Kudos to Google for their transparency in describing the incidents in detail, including the embarrassing bits.

Here’s what happened.

Authentication fail

The 47 minutes Google techs would likely love to forget began in October, during a migration of the User ID Service (which maintains unique identifiers for each account and handles OAuth authentication credentials) to a new quota management system. As part of that migration, a change registered the service with the new system. That was fine. However, parts of the old system remained in place, and they erroneously reported usage of the User ID Service as zero. Nothing happened at that point because of an existing grace period on enforcing quota restrictions.

On Dec. 14, the grace period expired.

Suddenly, that zero-usage figure had consequences. The User ID Service stores account data in a distributed database (it uses Paxos protocols to coordinate updates) and rejects authentication requests when it detects outdated data. Believing usage was zero, the quota management system reduced the storage available to the database, which blocked writes. Within minutes, the majority of read operations were returning outdated data, generating authentication errors. And to make life more interesting for the technicians trying to troubleshoot, some of their internal tools were affected as well.

Google does have safety checks in place that should detect unintended quota changes, but the edge case of zero usage was not covered. Lesson: even if it seems improbable, take those edge cases into account.
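
For illustration, here is a minimal sketch, in Python, of the sort of sanity check that covers this edge case: before enforcing a quota change, compare the newly reported usage against recent history and hold anything implausible, such as a sudden drop to zero, for human review. The names, fields and thresholds are assumptions made for this example, not Google's actual system.

```python
# Illustrative only: a plausibility check on reported usage before a quota
# change is enforced. All names and thresholds here are assumptions.
from dataclasses import dataclass


@dataclass
class QuotaReading:
    service: str
    reported_usage: int   # storage the service currently reports using
    recent_average: int   # trailing average from earlier monitoring data


def safe_to_enforce(reading: QuotaReading, max_drop_ratio: float = 0.5) -> bool:
    """Return True only if the new reading looks plausible enough to act on."""
    # The edge case that bit Google: a reading of zero for a service that was
    # clearly busy recently should trigger review, not automatic enforcement.
    if reading.reported_usage == 0 and reading.recent_average > 0:
        return False
    # Likewise, refuse to act automatically on any sudden drop beyond a set ratio.
    if reading.recent_average > 0:
        drop = 1 - reading.reported_usage / reading.recent_average
        if drop > max_drop_ratio:
            return False
    return True


# Example: the User ID Service suddenly reports zero usage.
reading = QuotaReading("user-id-service", reported_usage=0, recent_average=10_000_000)
assert not safe_to_enforce(reading)   # enforcement is held for review instead
```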

To get things moving again, Google took several steps. First, it disabled the quota management system in one datacentre; when that quickly improved the situation, it disabled the system everywhere five minutes later. Within six minutes, most services had returned to normal. Some suffered lingering impact; you can see the whole list here.

But now the real work begins. In addition to fixing the root cause, Google is implementing a number of changes, including:

  1. Reviewing its quota management automation to prevent fast implementation of global changes (see the sketch after this list)
  2. Improving monitoring and alerting to catch incorrect configurations sooner
  3. Improving the reliability of tools and procedures for posting external communications during outages that affect internal tools
  4. Evaluating and implementing improved write-failure resilience in the User ID Service database
  5. Improving the resilience of GCP services to more strictly limit the impact on the data plane during User ID Service failures
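
On the first point, preventing fast global changes generally means rolling a change out in stages and watching for trouble before moving on. A minimal sketch of that idea follows; the region names, health check and apply step are hypothetical placeholders, not Google's tooling.

```python
# Illustrative staged rollout: apply a change one region at a time and stop
# if the region looks unhealthy afterwards. Everything here is a placeholder.
import time

REGIONS = ["us-east1", "europe-west1", "asia-east1"]  # hypothetical rollout order


def apply_change(region: str) -> None:
    print(f"applying quota configuration change in {region}")


def region_healthy(region: str) -> bool:
    # In a real system this would consult error rates and SLO dashboards.
    return True


def staged_rollout(soak_seconds: int = 0) -> None:
    for region in REGIONS:
        apply_change(region)
        time.sleep(soak_seconds)      # let the change soak before continuing
        if not region_healthy(region):
            print(f"halting rollout: {region} degraded after the change")
            return                    # stop here instead of going global


staged_rollout()
```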

Gmail bounces

The Gmail failure hit in two waves. On Monday, Google Engineering began receiving internal user reports of delivery errors and traced them to a recent code change in an underlying configuration system, which caused an invalid domain name (instead of gmail.com) to be supplied to the inbound SMTP service. When the Gmail accounts service checked the affected addresses, it could not find a valid user, so it generated SMTP error 550, a permanent error that, for many automated mailing systems, results in the address being removed from their lists. The code change was reversed, which corrected the situation.
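
The reason a 550 is so damaging is that SMTP reply codes in the 500 range signal permanent failures, while 400-range codes are temporary and get retried. Here is a rough sketch of the common mailing-list behaviour described above; the list-management logic is illustrative, not any particular product's code.

```python
# Illustrative bounce handling: 5xx codes are permanent failures, 4xx are
# temporary. The unsubscribe-on-permanent-failure rule is what makes a
# spurious 550 from Gmail so costly for legitimate recipients.
def handle_bounce(address: str, smtp_code: int, subscribers: set[str]) -> str:
    if 500 <= smtp_code < 600:
        # Permanent failure ("no such user"): many list managers drop the
        # address immediately so they stop mailing a dead mailbox.
        subscribers.discard(address)
        return "removed"
    if 400 <= smtp_code < 500:
        # Temporary failure: keep the address and retry later.
        return "retry"
    return "delivered"


subscribers = {"alice@gmail.com", "bob@example.com"}
print(handle_bounce("alice@gmail.com", 550, subscribers))   # removed
print(subscribers)                                          # {'bob@example.com'}
```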

On Tuesday, the configuration system was updated again (Google does not say whether it was the same change, re-applied, or another buggy one), and bounces started again. The changes were reversed, and Google has committed to the following:

  1. Update the existing configuration difference tests to detect unexpected changes to the SMTP service configuration before applying the change (see the sketch after this list).
  2. Improve internal service logging to allow more accurate and faster diagnosis of similar types of errors.
  3. Implement additional restrictions on configuration changes that may affect production resources globally.
  4. Improve static analysis tooling for configuration differences to more accurately project differences in production behaviour.
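
To make the first item concrete, a configuration-difference test diffs the proposed configuration against the one currently running and blocks changes to fields that should never move silently, such as the domain handed to inbound mail routing. A minimal sketch follows; the field names are assumptions, not Gmail's actual configuration schema.

```python
# Illustrative configuration-difference check. Field names are hypothetical.
PROTECTED_FIELDS = {"inbound_domain"}


def config_diff(old: dict, new: dict) -> dict:
    """Return the fields whose values differ between two config versions."""
    return {key: (old.get(key), new.get(key))
            for key in old.keys() | new.keys()
            if old.get(key) != new.get(key)}


def check_change(old: dict, new: dict) -> list[str]:
    """Return human-readable problems; an empty list means safe to apply."""
    problems = []
    for field, (before, after) in config_diff(old, new).items():
        if field in PROTECTED_FIELDS:
            problems.append(f"protected field '{field}' changed: {before!r} -> {after!r}")
    return problems


old_cfg = {"inbound_domain": "gmail.com", "max_message_bytes": 26_214_400}
new_cfg = {"inbound_domain": "example.invalid", "max_message_bytes": 26_214_400}
print(check_change(old_cfg, new_cfg))
# ["protected field 'inbound_domain' changed: 'gmail.com' -> 'example.invalid'"]
```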

If you’d like to read Google’s full report on the Gmail outage, you’ll find it here. The authentication failure report is here.

Lynn Greiner
Lynn Greiner has been interpreting tech for businesses for over 20 years and has worked in the industry as well as writing about it, giving her a unique perspective into the issues companies face. She has both IT credentials and a business degree.
