Service Health

This page provides status information on the services that are part of Google Cloud. Check back here to view the current status of the services listed below. If you are experiencing an issue not listed here, please contact Support. Learn more about what's posted on the dashboard in this FAQ. For additional information on these services, please visit https://cloud.google.com/.

Incident affecting Google BigQuery, Apigee, Google Compute Engine, Google Kubernetes Engine, Cloud Memorystore, Google Cloud Bigtable, Persistent Disk, Google Cloud Dataflow, Google Cloud Networking, Google Cloud Pub/Sub, Google Cloud SQL, Cloud Filestore, Cloud Data Fusion, Cloud Load Balancing, Memorystore for Redis

We are experiencing an issue with Persistent Disk affecting multiple services in us-central1-b

Incident began at 2022-05-06 01:30 and ended at 2022-05-06 12:06 (all times are US/Pacific).

Previously affected location(s)

Iowa (us-central1)

16 May 2022 16:12 PDT

INCIDENT REPORT

Summary:

On 6 May 2022 at 01:30 US/Pacific, multiple Google Cloud services experienced issues in the us-central1 region. These issues were mostly isolated to us-central1-b for zonal services, but some regional services experienced degradation until their traffic could be shifted away from the impacted zone. Most Google Cloud services recovered automatically after the underlying problem was resolved. We sincerely apologize for the impact to your service or application. We have completed an internal investigation and are taking immediate steps to improve the quality and reliability of our services. If you believe that your services experienced an SLA violation as a result of this incident, please contact us.

Root Cause:

Google Cloud systems are built on a zonal distributed storage system called Colossus, which replicates data across a large number of individual storage servers called D Servers. In this incident, a background job responsible for repacking storage objects began, as part of its normal operation, to retry those repack operations more aggressively. This increased the load on the Colossus system in the zone, including the number of open connections to the D Servers.

The sudden increase in connection load to D Servers caused a small number of servers to unexpectedly crash due to high memory pressure. This led our automated management systems to remove them from the serving fleet for Colossus. This further reduced the number of D Servers available to handle the rising traffic loads and increased the traffic latency within the Colossus system in the impacted zone.

This significant increase in latency subsequently impacted our customers’ performance across a range of Google Cloud services that are built atop Colossus, including Persistent Disk, BigQuery, and many others.

This zonal incident impacted some regional services due to the specific failure mode. When a Colossus cluster is marked down, the regional services receive proactive notification and automatically shift traffic away from the cluster. Since this cluster was still up, but with variable latency for some operations, the regional services received no proactive notification and were unable to automatically shift traffic away from the cluster. As a result, the impact to a number of regional services was prolonged, because the impacted cluster had to be manually removed from serving.
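For illustration only, and not a description of Google-internal code: the sketch below shows the general class of safeguard described in the prevention items that follow, namely retries with exponential backoff, jitter, and a cap on concurrent operations, so that a retrying background job cannot multiply connection load on its storage backend. All names and constants are hypothetical.

    # Illustrative sketch only: bounded retries with exponential backoff, full
    # jitter, and a global in-flight cap. Names and limits are hypothetical and
    # do not describe any real Google-internal component.
    import random
    import threading
    import time

    MAX_ATTEMPTS = 5
    BASE_DELAY_S = 0.5
    MAX_DELAY_S = 30.0

    # Bound how many repack-style operations may be outstanding at once, so
    # retries cannot multiply into unbounded connection load on the backend.
    _inflight = threading.BoundedSemaphore(value=64)

    def call_with_backoff(operation):
        """Run `operation`, retrying transient failures at most MAX_ATTEMPTS times."""
        for attempt in range(MAX_ATTEMPTS):
            with _inflight:  # blocks if too many operations are already in flight
                try:
                    return operation()
                except (ConnectionError, TimeoutError):
                    if attempt == MAX_ATTEMPTS - 1:
                        raise
            # Exponential backoff with full jitter keeps retry waves from
            # synchronizing across many callers.
            delay = min(MAX_DELAY_S, BASE_DELAY_S * (2 ** attempt))
            time.sleep(random.uniform(0, delay))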

Remediation and Prevention:

Google engineers were alerted to the issue on Friday, 6 May 2022 at 01:54 US/Pacific and immediately started an investigation.

Google engineers stopped the background traffic and, to restore capacity, re-added the impacted D Servers to the serving fleet, mitigating the issue at 12:06 US/Pacific.

Google is committed to quickly and continually improving our technology and operations to prevent service disruptions. We are taking the following steps to prevent this or similar issues from happening again:

  • Investigate and add additional protections in the D Servers to decrease memory pressure during high network traffic load periods.

  • Improve the retry logic for the storage object repacking job to ensure that it cannot overload the Colossus system within a zone.

  • Extend the automated D Server management systems to better handle crash loop conditions and quickly restore D Servers to production once they become healthy.

  • Google's regional services are designed to tolerate zonal failure while staying within their service level objectives. The nature of this failure was not properly handled by some regional services. We are committed to investigating the behavior of each regional service impacted in this outage to ensure that fault tolerance gaps are properly addressed.

Detailed Description of Impact:

Some customers may have experienced high latency or errors in multiple Google Cloud services in the impacted region.

  • BigQuery [us-central1 and US multi-region]: Customers saw increased query, import, and export latencies and errors. The overall duration of impact was 6 hours 5 minutes in us-central1 and 4 hours 34 minutes in US multi-region.

  • Cloud Bigtable [us-central1-b zone]: A small number of customers in us-central1-b experienced elevated latency and errors, as well as replication delays, for a duration of 10 hours, 36 minutes. A very small percentage of the affected Cloud Bigtable customers had residual impact for an additional 6 hours, 40 minutes.

  • Cloud Pub/Sub [us-central1 region]: Customers may have seen missing backlog stats metrics for subscriptions against topics with messages published to us-central1 for a duration of 2 hours, 41 minutes. Since the impact was based on the message publish region, the subscribers could have been in regions other than us-central1.

  • Google Cloud Load Balancer (GCLB) [us-central1-b zone]: New load balancer creations and modifications or deletions of existing components with backends in us-central1-b may have been delayed or not taken effect until the outage was resolved. The total impact duration for GCLB is 4 hours, 46 minutes.

  • Google Compute Engine (GCE) [us-central1-b zone]: Customers may have seen issues with instance availability in us-central1-b due to some input/output (I/O) operations on Persistent Disk Standard disks being stuck for over one minute. Additionally, Regional Persistent Disk Standard disks with a replica in us-central1-b may have been briefly affected due to delays in failover (see the regional disk sketch after this list). A small number of instances may have experienced brief loss of network connectivity to other Google Cloud services following live migration events. The total impact duration for GCE is around 5 hours, 25 minutes.

  • Cloud Datastream [us-central1 region]: Customers may have seen streams enter a "Failed" state in the Datastream UI, seen no new data ingested by Datastream into Google Cloud Storage buckets, had duplicate data loaded into Google Cloud Storage, or seen metrics not being reported. The impact was region-wide for a duration of 7 hours, 40 minutes because the underlying Kafka cluster was not over-provisioned enough: losing one zone pushed the cluster to 100% utilization until it had fully copied its data to a new zone.

  • Cloud Filestore [us-central1-b zone]: Many Filestore instance creation operations failed. Additionally, a small number of instances were unresponsive for the duration of the incident. Some instances suffered performance impact.

  • Cloud Memorystore [us-central1-b zone]: Redis nodes in us-central1-b may have been unavailable for a duration of 3 hours, 54 minutes.

  • Cloud SQL [us-central1-b zone]: Customers may not have been able to connect to their instance in us-central1-b through the Cloud SQL Auth proxy for a duration of 3 hours, 5 minutes.

  • Google Kubernetes Engine (GKE) [us-central1-b zone]: Customers may have experienced issues interacting with their clusters' control planes. New workloads may not have been scheduled. Auto scaling may not have been operational. The total impact duration for GKE is 7 hours, 11 minutes.

  • Apigee [us-central1 region]: Customers may have seen errors for their API traffic caused by datastore errors. Apigee is internally redundant across zones within the region, but due to the high-latency failure mode in us-central1-b, engineers were not able to remove the impacted zone from the regional cluster. The total impact duration for Apigee is 2 hours, 55 minutes.

  • Dataflow [us-central1 region]: New Dataflow jobs may have failed to start in us-central1-b. Jobs already running in us-central1-b may have been stuck or delayed, but restarting the jobs would have automatically routed them to a healthy zone starting at 05:00 US/Pacific if the customer was using auto zone placement. The total impact duration for Dataflow is 2 hours, 55 minutes.

  • Cloud Data Fusion (CDF) [us-central1 region]: Customers may have experienced Data Fusion instance creation failures, instance availability issues, and higher data processing pipeline failure rates because Persistent Disk (PD) issues caused Data Fusion backend services to become unhealthy. The total impact duration was about 7 hours.
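As noted in the GCE entry above, Regional Persistent Disk keeps replicas in two zones of a region and can fail over when one zone degrades. The minimal sketch below is provided for illustration only; it assumes the gcloud CLI is installed and authenticated, and the project, disk name, size, and zone choices are placeholders.

    # Hypothetical helper: create a Persistent Disk replicated across two zones
    # by shelling out to the gcloud CLI. All names below are placeholders.
    import subprocess

    def create_regional_disk(project, name, size_gb=200,
                             region="us-central1",
                             replica_zones=("us-central1-a", "us-central1-c")):
        """Create a pd-standard disk whose data is replicated in two zones."""
        subprocess.run(
            [
                "gcloud", "compute", "disks", "create", name,
                "--project", project,
                "--region", region,
                "--replica-zones", ",".join(replica_zones),
                "--size", f"{size_gb}GB",
                "--type", "pd-standard",
            ],
            check=True,
        )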

6 May 2022 16:12 PDT

Mini Incident Report

We apologize for the inconvenience this service disruption may have caused. We would like to provide some information about this incident below. Please note, this information is based on our best knowledge at the time of posting and is subject to change as our investigation continues. If you have experienced impact outside of what is listed below, please reach out to Google Support by opening a case at https://cloud.google.com/support or via the help article at https://support.google.com/a/answer/1047213.

(All Times US/Pacific)

Incident Start: 06 May 2022 01:30

Incident End: 06 May 2022 12:06

Duration: 10 hours, 36 minutes

Affected Services and Features:

  • BigQuery
  • Cloud Pub/Sub
  • Google Cloud Load Balancer (GCLB)
  • Google Compute Engine (GCE)
  • Datastream
  • Cloud Filestore
  • Cloud Memorystore
  • Cloud SQL
  • Apigee
  • Cloud Dataflow
  • Cloud Data Fusion (CDF)
  • Cloud Bigtable
  • Google Kubernetes Engine (GKE)

Regions/Zones: us-central1-b

Description:

Multiple Google Cloud services experienced issues in the us-central1 region beginning Friday, 6 May 2022 at 01:30 PT. These issues were predominantly isolated to us-central1-b for zonal services, but some regional services experienced degradation until their traffic could be shifted away from the impacted zone. Most services recovered automatically after the underlying problem was resolved.

The issues were triggered by an unexpected increase in normally occurring background traffic in the Google Cloud distributed storage infrastructure[1] within the us-central1-b zone. The system automatically directed load away from backend file servers that were impacted by this load increase. This subsequently reduced the overall traffic capacity in the zone. Google engineers mitigated the issue by stopping the background traffic and marking the impacted file servers as available in order to increase capacity.

Customer Impact:

How Customers Experienced the Issue: Some customers may have experienced high latency or errors in multiple Google Cloud services in the impacted region.

  • BigQuery: Customers may have seen increased query delays and/or failures.
  • Cloud Bigtable: Customers may have experienced elevated latency and errors.
  • Cloud Pub/Sub: Customers may have seen missing metrics for backlog statistics.
  • Google Cloud Load Balancer (GCLB): New load balancer creations, as well as modifications or deletions of existing components, with backends in us-central1-b may have been delayed or not taken effect until the outage was resolved.
  • Google Compute Engine (GCE): Customers may have seen issues with instance availability in us-central1-b due to some input/output (I/O) operations in Persistent Disk being stuck for over one minute. A small number of instances may have experienced brief loss of network reachability to other Google Cloud services following live migration events.
  • Datastream: Customers may have seen streams enter a "Failed" state in the Datastream UI, noticed no new data ingested by Datastream into Google Cloud Storage buckets, or seen metrics not being reported.
  • Cloud Filestore: Customers may have experienced hung tasks in Filestore instances.
  • Cloud Memorystore: Redis nodes in us-central1-b may have been unavailable.
  • Cloud SQL: Customers may not have been able to connect to their instance in us-central1-b through the proxy-server.
  • Google Kubernetes Engine (GKE): Customers may have experienced issues interacting with the control plane. New workloads may not have been scheduled. Auto scaling may not have been operational.
  • Apigee: Customers may have seen errors for their API traffic caused by datastore errors.
  • Dataflow: Some Dataflow batch and streaming jobs in us-central1 may have been stuck or delayed.
  • Cloud Data Fusion (CDF): CDF operations, like instance creation and pipeline launch, may have failed in the us-central1 region due to an issue on Compute Engine.

[1] https://cloud.google.com/blog/products/storage-data-transfer/a-peek-behind-colossus-googles-file-system

6 May 2022 12:09 PDT

The issue with Cloud Data Fusion, Cloud Filestore, Cloud Memorystore, Google BigQuery, Google Cloud Dataflow, Google Cloud Networking, Google Cloud Pub/Sub, Google Cloud SQL, Google Kubernetes Engine, Persistent Disk, and Apigee has been resolved for all affected projects as of Friday, 2022-05-06 12:06 US/Pacific.

Products with Narrow Impact:

  • Google Cloud Bigtable: The outage is currently affecting less than 2% of customers in us-central1-b. Mitigation is ongoing, and support will continue to work with the affected customers through resolution.

We will publish an Incident Report once we have completed our internal investigation.

We thank you for your patience while we worked on resolving the issue.

6 May 2022 11:16 PDT

Summary: We are experiencing an issue with Persistent Disk affecting multiple services in us-central1-b

Description: We are experiencing an issue with multiple cloud services including BigQuery, Cloud Networking, Cloud SQL, Google Kubernetes Engine (GKE), Cloud Filestore, Cloud Bigtable, Cloud Memorystore, Apigee, Cloud Dataflow services, Cloud Data Fusion (CDF) beginning at Friday, 2022-05-06 01:20 US/Pacific in us-central1-b.

Mitigation is completed, and most of the affected services have recovered.

We will provide more information by Friday, 2022-05-06 12:30 US/Pacific.

Products Recovered:

  • BigQuery Engine
  • Cloud Pub/Sub
  • Cloud Networking
  • Compute Engine
  • Datastream
  • Cloud Filestore
  • Cloud Memorystore
  • Cloud SQL
  • Cloud Data Fusion
  • Apigee
  • Dataflow
  • GKE

Products Still Recovering:

  • Google Cloud Bigtable: Mitigation is ongoing for a small number of customers who are experiencing elevated latency and errors.

Diagnosis: Customers might see connectivity issues in us-central1-b

Workaround: Move the workloads to a different zone if possible
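One common way to apply the workaround above for a zonal workload is to snapshot its Persistent Disk and recreate the disk in a healthy zone, then start a replacement VM there. The sketch below is illustrative only; it assumes the gcloud CLI is installed and authenticated, and the project, disk, snapshot, and zone names are placeholders.

    # Hypothetical sketch of relocating a disk out of the impacted zone via a
    # snapshot. Assumes gcloud is installed and authenticated; names are placeholders.
    import subprocess

    def gcloud(*args):
        subprocess.run(["gcloud", "compute", *args], check=True)

    def move_disk(project, disk, snapshot,
                  src_zone="us-central1-b", dst_zone="us-central1-c"):
        # Snapshot the disk in the impacted zone (may be slow while I/O is degraded).
        gcloud("disks", "snapshot", disk,
               "--project", project, "--zone", src_zone,
               "--snapshot-names", snapshot)
        # Recreate the disk from the snapshot in a healthy zone; a replacement VM
        # can then be created in dst_zone with this disk attached.
        gcloud("disks", "create", disk,
               "--project", project, "--zone", dst_zone,
               "--source-snapshot", snapshot)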

6 May 2022 10:57 PDT

Summary: We are experiencing an issue with Persistent Disk affecting multiple services in us-central1-b

Description: We are experiencing an issue with multiple cloud services including BigQuery, Cloud Networking, Cloud SQL, Google Kubernetes Engine (GKE), Cloud Filestore, Cloud Bigtable, Cloud Memorystore, Apigee, Cloud Dataflow services, Cloud Data Fusion (CDF) beginning at Friday, 2022-05-06 01:20 US/Pacific in us-central1-b.

Mitigation is completed, and most of the affected services have recovered.

We will provide more information by Friday, 2022-05-06 11:30 US/Pacific.

Products Recovered:

  • BigQuery Engine
  • Cloud Pub/Sub
  • Cloud Networking
  • Compute Engine
  • Datastream
  • Cloud Filestore
  • Cloud Memorystore
  • Cloud SQL
  • Cloud Data Fusion
  • Apigee
  • Dataflow
  • GKE

Products Still Recovering:

  • Google Cloud Bigtable: Mitigation is ongoing for a small number of customers who are experiencing elevated latency and errors.

Diagnosis: Customers might see connectivity issues in us-central1-b

Workaround: Move the workloads to a different zone if possible

6 May 2022 10:28 PDT

Summary: We are experiencing an issue with Persistent Disk affecting multiple services in us-central1-b

Description: We are experiencing an issue with Persistent Disk affecting multiple services including BigQuery, Cloud Networking, Cloud SQL, Google Kubernetes Engine (GKE), Cloud Filestore, Cloud Bigtable, Cloud Memorystore, Apigee, Cloud Dataflow services, Cloud Data Fusion (CDF) beginning at Friday, 2022-05-06 01:20 US/Pacific in us-central1-b.

Mitigation is completed and most of the affected services have recovered.

We will provide more information by Friday, 2022-05-06 11:00 US/Pacific.

Products Recovered:

  • BigQuery Engine
  • Cloud Pub/Sub
  • Cloud Networking
  • Compute Engine
  • Datastream
  • Cloud Filestore
  • Cloud Memorystore
  • Cloud SQL
  • Cloud Data Fusion
  • Apigee
  • Dataflow
  • GKE

Products Still Recovering:

  • Google Cloud Bigtable: A small number of customers may still be experiencing elevated latency and errors.

Diagnosis: Customers might see connectivity issues in us-central1-b

Workaround: Move the workloads to a different zone if possible

6 May 2022 09:55 PDT

Summary: We are experiencing an issue with Persistent Disk affecting multiple services in us-central1-b

Description: We are experiencing an issue with Persistent Disk affecting multiple services including BigQuery, Cloud Networking, Cloud SQL, Google Kubernetes Engine (GKE), Cloud Filestore, Cloud Bigtable, Cloud Memorystore, Apigee, Cloud Dataflow services, Cloud Data Fusion (CDF) beginning at Friday, 2022-05-06 01:20 US/Pacific in us-central1-b.

Mitigation is completed and most of the affected services have recovered.

We will provide more information by Friday, 2022-05-06 10:30 US/Pacific.

Products Recovered:

  • BigQuery Engine
  • Cloud Pub/Sub
  • Cloud Networking
  • Compute Engine
  • Datastream
  • Cloud Filestore
  • Cloud Memorystore
  • Cloud SQL
  • Cloud Data Fusion
  • Apigee
  • Dataflow
  • GKE

Products Still Recovering:

  • Google Cloud Bigtable: Customers may be experiencing elevated latency and errors.

Diagnosis: Customers might see connectivity issues in us-central1-b

Workaround: Move the workloads to a different zone if possible

6 May 2022 09:26 PDT

Summary: We are experiencing an issue with Persistent Disk affecting multiple services in us-central1-b

Description: We are experiencing an issue with Persistent Disk affecting multiple services including BigQuery, Cloud Networking, Cloud SQL, Google Kubernetes Engine (GKE), Cloud Filestore, Cloud Bigtable, Cloud Memorystore, Apigee, Cloud Dataflow services, Cloud Data Fusion (CDF) beginning at Friday, 2022-05-06 01:20 US/Pacific in us-central1-b.

Mitigation is completed and most of the affected services have recovered.

We will provide more information by Friday, 2022-05-06 10:00 US/Pacific.

Products Recovered:

  • BigQuery Engine
  • Cloud Pub/Sub
  • Cloud Networking
  • Compute Engine
  • Datastream
  • Cloud Filestore
  • Cloud Memorystore
  • Cloud SQL
  • Apigee
  • Dataflow

Products Still Recovering:

  • Google Cloud Bigtable: Customers may be experiencing elevated latency and errors.

  • Google Kubernetes Engine: Customers may experience issues interacting with the control plane. New workloads won’t be scheduled. Auto scaling may not be operational.

  • Cloud Data Fusion: CDF operations such as instance creation and pipeline launch might fail in the us-central1 region due to a Compute Engine issue.

Diagnosis: Customers might see connectivity issues in us-central1-b

Workaround: Move the workloads to a different zone if possible

6 May 2022 08:56 PDT

Summary: We are experiencing an issue with Persistent Disk affecting multiple services in us-central1-b

Description: We are experiencing an issue with Persistent Disk affecting multiple services including BigQuery, Cloud Networking, Cloud SQL, Google Kubernetes Engine (GKE), Cloud Filestore, Cloud Bigtable, Cloud Memorystore, Apigee, Cloud Dataflow services, Cloud Data Fusion (CDF) beginning at Friday, 2022-05-06 01:20 US/Pacific in us-central1-b.

Mitigation is completed and we see that most of the affected services have recovered.

We will provide more information by Friday, 2022-05-06 09:30 US/Pacific.

Products Recovered:

  • BigQuery Engine, Cloud Pub/Sub, Cloud Networking, Compute Engine, Datastream, Cloud Filestore, Cloud Memorystore, Cloud SQL, Apigee, Dataflow

Products Still Recovering:

  • Google Cloud Bigtable: Customers may be experiencing elevated latency and errors.

  • Google Kubernetes Engine: Customers may experience issues interacting with the control plane. New workloads won’t be scheduled. Auto scaling may not be operational.

  • Cloud Data Fusion: CDF operations such as instance creation and pipeline launch might fail in the us-central1 region due to an issue on Compute Engine.

Diagnosis: Customers might see connectivity issues in us-central1-b

Workaround: Move the workloads to a different zone if possible

6 May 2022 08:27 PDT

Summary: We are experiencing an issue with Persistent Disk affecting multiple services in us-central1-b

Description: We are experiencing an issue with Persistent Disk affecting multiple services including BigQuery, Cloud Networking, Cloud SQL, Google Kubernetes Engine (GKE), Cloud Filestore, Cloud Bigtable, Cloud Memorystore, Apigee, Cloud Dataflow services, Cloud Data Fusion (CDF) beginning at Friday, 2022-05-06 01:20 US/Pacific in us-central1-b.

Mitigation is completed and we see that most of the affected services have recovered.

We will provide more information by Friday, 2022-05-06 09:00 US/Pacific.

Products Recovered:

BigQuery Engine, Cloud Pub/Sub, Cloud Networking, Compute Engine, Datastream, Cloud Filestore, Cloud Memorystore, Cloud SQL, Apigee, Dataflow

Products Still Recovering:

  • Google Cloud Bigtable: Customers may be experiencing issues.

  • Google Kubernetes Engine: Customers may experience issues interacting with the control plane. New workloads won’t be scheduled. Auto scaling may not be operational.

  • Cloud Data Fusion: CDF operations such as instance creation and pipeline launch might fail in the us-central1 region due to an issue on Compute Engine.

Diagnosis: Customers might see connectivity issues in us-central1-b

Workaround: Move the workloads to a different zone if possible

6 May 2022 08:01 PDT

Summary: We are experiencing an issue with Persistent Disk affecting multiple services in us-central1-b

Description: We are experiencing an issue with Persistent Disk affecting multiple services including BigQuery, Cloud Networking, Cloud SQL, Google Kubernetes Engine (GKE), Cloud Filestore, Cloud Bigtable, Cloud Memorystore, Apigee, Cloud Dataflow services, Cloud Data Fusion (CDF) beginning at Friday, 2022-05-06 01:20 US/Pacific in us-central1-b.

Mitigation is completed and we see that most of the affected services have recovered.

We will provide more information by Friday, 2022-05-06 08:30 US/Pacific.

Products Recovered:

BigQuery Engine, Cloud Pub/Sub, Cloud Networking, Compute Engine, Datastream, Cloud Filestore, Cloud Memorystore, Cloud SQL, Apigee, Dataflow

Products Still Recovering:

  • Google Kubernetes Engine: Customers may experience issues interacting with the control plane. New workloads won’t be scheduled. Auto scaling may not be operational.

  • Cloud Data Fusion: CDF operations such as instance creation and pipeline launch might fail in the us-central1 region due to an issue on Compute Engine.

Diagnosis: Customers might see connectivity issues in us-central1-b

Workaround: Move the workloads to a different zone if possible

6 May 2022 07:25 PDT

Summary: We are experiencing an issue with Persistent Disk affecting multiple services in us-central1-b

Description: We are experiencing an issue with Persistent Disk affecting multiple services including BigQuery, Cloud Networking, Cloud SQL, GKE, Cloud Filestore, Cloud Bigtable, Cloud Memorystore, Apigee, Cloud Dataflow services, Cloud Data Fusion beginning at Friday, 2022-05-06 01:20 US/Pacific in us-central1-b.

Mitigation is completed and we see that most of the affected services have recovered.

We will provide more information by Friday, 2022-05-06 08:00 US/Pacific.

Product Impact:

  • BigQuery Engine: Customers may see increased query latencies and/or failures.

  • Cloud Pub/Sub: Customers may see missing metrics for backlog statistics.

  • Cloud Networking: Customers may see connectivity issues.

  • Compute Engine: Customers may see issues with VM availability in us-central1-b.

  • Datastream: Customers may see streams enter a "Failed" state on the Datastream UI, notice no new data ingested by Datastream into the GCS bucket, or see metrics not being reported.

  • Cloud Filestore: Customers may experience many hung tasks in Filestore VMs.

  • Cloud Memorystore: Redis nodes in us-central1-b may be unavailable.

  • Cloud SQL: Customers may not be able to connect to their instance in us-central1-b through the proxy server.

  • Apigee: Customers may see 5XX errors for their API traffic caused by datastore errors.

Diagnosis: Customers might see connectivity issues in us-central1-b

Workaround: Move the workloads to a different zone if possible

6 May 2022 06:54 PDT

Summary: We are experiencing an issue with Persistent Disk affecting multiple services in us-central1-b

Description: We are experiencing an issue with Persistent Disk affecting multiple services including BigQuery, Cloud Networking, Cloud SQL, GKE, Cloud Filestore, Cloud Bigtable, Cloud Memorystore, Apigee, Cloud Dataflow services beginning at Friday, 2022-05-06 01:20 US/Pacific in us-central1-b.

Mitigation work is currently underway by our engineering team. We see partial recovery for some services.

We will provide more information by Friday, 2022-05-06 07:30 US/Pacific.

Product Impact:

  • BigQuery Engine: Customers may see increased query latencies and/or failures.

  • Cloud Pub/Sub: Customers may see missing metrics for backlog statistics.

  • Cloud Networking: Customers may see connectivity issues.

  • Compute Engine: Customers may see issues with VM availability in us-central1-b.

  • Datastream: Customers may see streams enter a "Failed" state on the Datastream UI, notice no new data ingested by Datastream into the GCS bucket, or see metrics not being reported.

  • Cloud Filestore: Customers may experience many hung tasks in Filestore VMs.

  • Cloud Memorystore: Redis nodes in us-central1-b may be unavailable.

  • Cloud SQL: Customers may not be able to connect to their instance in us-central1-b through the proxy server.

Diagnosis: Customers might see connectivity issues in us-central1-b

Workaround: Move the workloads to a different zone if possible

6 May 2022 06:20 PDT

Summary: We are experiencing an issue with Persistent Disk affecting multiple services in us-central1-b

Description: We are experiencing an issue with Persistent Disk affecting multiple services including BigQuery, Cloud Networking, Cloud SQL, GKE Control Plane, Cloud Filestore, Cloud Bigtable, Cloud Memorystore, Apigee, Cloud Dataflow services beginning at Friday, 2022-05-06 01:20 US/Pacific in us-central1-b.

Mitigation work is currently underway by our engineering team.

We will provide more information by Friday, 2022-05-06 07:00 US/Pacific.

Diagnosis: Customers might see connectivity issues in us-central1-b

Workaround: Move the workloads to a different zone if possible

6 May 2022 06:04 PDT

Summary: We are experiencing an issue with Persistent Disk affecting multiple services in us-central1-b

Description: We are experiencing an issue with Persistent Disk affecting multiple services including BigQuery, Cloud Networking, Cloud SQL, and GKE beginning at Friday, 2022-05-06 01:20 US/Pacific in us-central1-b.

Mitigation work is currently underway by our engineering team.

We will provide more information by Friday, 2022-05-06 06:30 US/Pacific.

Diagnosis: Customers might see connectivity issues in us-central1-b

Workaround: Move the workloads to a different zone if possible

6 May 2022 05:58 PDT

Summary: We are experiencing an issue with Persistent Disk affecting multiple services in us-central1-b

Description: Mitigation work is currently underway by our engineering team.

We will provide more information by Friday, 2022-05-06 06:30 US/Pacific.

Diagnosis: Customers might see connectivity issues in us-central1-b

Workaround: Move the workloads to a different zone if possible

6 May 2022 05:45 PDT

Summary: We are experiencing an issue with Persistent Disk affecting multiple services in us-central1

Description: We are experiencing an issue with Persistent Disk affecting multiple services beginning at Friday, 2022-05-06 01:20 US/Pacific in us-central1. Our engineering team continues to investigate the issue.

We will provide an update by Friday, 2022-05-06 06:30 US/Pacific with current details.

We apologize to all who are affected by the disruption.

Diagnosis: Some I/O operations in Persistent Disk Standard devices are stuck for a long time (>1 min)

Workaround: Move the workloads to a different zone if possible

6 May 2022 05:01 PDT

Summary: We are experiencing an issue with Persistent Disk affecting multiple services in us-central1

Description: We are experiencing an issue with Persistent Disk affecting multiple services beginning at Friday, 2022-05-06 01:20 US/Pacific in us-central1.

Our engineering team continues to investigate the issue.

We will provide an update by Friday, 2022-05-06 06:30 US/Pacific with current details.

We apologize to all who are affected by the disruption.

Diagnosis: Some I/O operations in Persistent Disk Standard devices are stuck for a long time (>1 min)

Workaround: Move the workloads to a different zone if possible