This page provides status information on the services that are part of Google Cloud. Check back here to view the current status of the services listed below. If you are experiencing an issue not listed here, please contact Support. Learn more about what's posted on the dashboard in this FAQ. For additional information on these services, please visit https://cloud.google.com/.
Incident affecting Google BigQuery, Apigee, Cloud Filestore, Cloud Load Balancing, Cloud Run, Data Catalog, Datastream, Google App Engine, Google Cloud Composer, Google Cloud Dataflow, Google Cloud Datastore, Google Cloud Functions, Google Cloud Networking, Google Cloud Pub/Sub, Google Cloud SQL, Google Cloud Tasks, Google Compute Engine, Google Kubernetes Engine, Persistent Disk, Virtual Private Cloud (VPC), Cloud Data Fusion, Google Cloud Bigtable
Incident began at 2022-06-04 10:47 and ended at 2022-06-04 17:03 (all times are US/Pacific).
Previously affected location(s)
South Carolina (us-east1)
Date
Time
Description
6 Jun 2022
02:14 PDT
Mini Incident Report
We apologize for the inconvenience this service disruption/outage may have caused. We would like to provide some information about this incident below. Please note, this information is based on our best knowledge at the time of posting and is subject to change as our investigation continues. If you have experienced impact outside of what is listed below, please reach out to Google Support by opening a case at https://cloud.google.com/support.
(All Times US/Pacific)
Incident Start: 04 June 2022 10:47
Incident End: 04 June 2022 17:03
Duration: 6 hours 16 minutes
Affected Services and Features:
Multiple Google Cloud Services
Regions/Zones: us-east1-c
Description:
Due to elevated latencies, several Google Cloud products experienced increased error rates for a period of 3 hours and 4 minutes. The issues were triggered by high memory consumption on the file storage system in the Google Cloud distributed storage infrastructure [1] within us-east1-c. These issues were predominantly isolated to us-east1-c for zonal services, but some regional services experienced degradation until their traffic was redirected away from the impacted zone.
BigQuery : Customers would have experienced failures or slowness in their requests in the us-east1 region.
Cloud Dataflow: Customers would have experienced slow or stuck jobs in us-east1.
App Engine: Customers would have experienced brief periods of unavailability.
Cloud Tasks: Customers would have experienced elevated latencies with task delivery in us-east1.
Cloud Pub/Sub: Customers would have experienced elevated error rate for Pub/Sub operations in us-east1.
Persistent Disk: Customers would have experienced slow or stuck disk reads in the us-east1-c zone leading to unresponsive instances. A small number of Regional Persistent Disk volumes experienced slow disk reads in the us-east1 region.
Cloud Bigtable: Customers would have experienced elevated latency or errors in the us-east1-c zone.
Cloud Run: A small number of customers in us-east1 would have experienced brief periods of unavailability.
Google Compute Engine: Customers would have experienced elevated latencies during instance creation or deletion operations in us-east1 and us-east4.
Google Cloud Networking: Customers would have experienced delays in connecting to newly created instances or delays in propagation of new changes to firewall programming in us-east1. Network traffic for existing instances was not affected. Customers would have also experienced delays in Cloud Load Balancing creation or modification.
Cloud Functions: Customers would have experienced elevated availability issues in us-east1.
Cloud Composer: Customers would have experienced failures with new Cloud Composer environment creations in us-east1 and us-east4. Existing Cloud Composer environments were not affected.
Cloud Datastore: Customers would have experienced elevated query latencies in us-east1.
4 Jun 2022
17:09 PDT
The issue with Apigee, Cloud Data Fusion, Cloud Filestore, Cloud Load Balancing, Cloud Run, Data Catalog, Datastream, Google App Engine, Google BigQuery, Google Cloud Bigtable, Google Cloud Composer, Google Cloud Dataflow, Google Cloud Datastore, Google Cloud Functions, Google Cloud Networking, Google Cloud Pub/Sub, Google Cloud SQL, Google Cloud Tasks, Google Compute Engine, Google Kubernetes Engine, Persistent Disk, Virtual Private Cloud (VPC) has been resolved for all affected users as of Saturday, 2022-06-04 17:03 US/Pacific.
Most service impact was mitigated by 2022-06-04 13:20 US/Pacific.
We thank you for your patience while we worked on resolving the issue.
Description: We believe the issue with Apigee, Cloud Data Fusion, Cloud Filestore, Cloud Load Balancing, Cloud Run, Data Catalog, Datastream, Google App Engine, Google BigQuery, Google Cloud Bigtable, Google Cloud Composer, Google Cloud Dataflow, Google Cloud Datastore, Google Cloud Functions, Google Cloud Networking, Google Cloud Pub/Sub, Google Cloud SQL, Google Cloud Tasks, Google Compute Engine, Google Kubernetes Engine, Persistent Disk, Virtual Private Cloud (VPC) is partially resolved.
We believe most service impact was mitigated by 2022-06-04 13:30 US/Pacific. We do not have an ETA for full resolution at this point.
We will provide an update by Saturday, 2022-06-04 18:00 US/Pacific with current details.
Diagnosis: Please see the additional details below for product specific impact where available:
BigQuery: [Mitigated] Elevated BigQuery latency and errors on Queries.
Dataflow: [Mitigated] Some customers will see slowness or stuck behavior in their Dataflow batch and streaming jobs.
Persistent Disk: [Mitigated] 1% of PD-Standard (PD-HDD) disks in us-east1-c and regional disks in us-east1 are experiencing slow or stuck disk reads (operations hanging indefinitely), which may cause instances to become unresponsive.
Cloud Bigtable: Elevated latency and errors in us-east1-c.
Cloud Run: [Mitigated] Elevated unavailability.
Compute Engine: No impact outside of side effects from underlying Persistent Disk impact.
Cloud Networking: [Mitigated] The entire us-east1-c zone has delayed firewall programming for new data plane changes, as well as delays launching or SSHing into instances. Existing programming should continue to work.
Cloud Composer: [Mitigated] Environment creations fail because of errors creating App Engine apps; additional regions may be affected.
Datastore: [Mitigated] Increased latency, particularly for queries.
Data Catalog: [Mitigated]
Workaround: In general, retry failed requests, or where possible use a region other than us-east1 for regional services and a zone other than us-east1-c for zonal services (a retry sketch follows this list). Please see the additional details below for product specific workarounds where available:
Dataflow: If feasible, customers can consider restarting jobs in an unimpacted zone.
Persistent Disk: Use a different zone or PD-SSD disks.
Cloud Networking: If you are using load balancers or an HA configuration, fail over away from us-east1-c.
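As an illustration of the general retry guidance above, the following Python sketch shows client-side retries with exponential backoff and jitter. It is a minimal sketch only; call_regional_api is a placeholder for whichever regional request is failing, and production code should catch only transient (retryable) errors rather than all exceptions.

import random
import time


def call_with_backoff(call_regional_api, max_attempts=5, base_delay=1.0, max_delay=32.0):
    """Retry a flaky regional call with exponential backoff and jitter."""
    for attempt in range(1, max_attempts + 1):
        try:
            return call_regional_api()
        except Exception:  # in practice, narrow this to transient errors (e.g. HTTP 5xx)
            if attempt == max_attempts:
                raise  # give up after the final attempt
            delay = min(max_delay, base_delay * 2 ** (attempt - 1))
            time.sleep(delay + random.uniform(0, 1))  # jitter avoids synchronized retries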
Description: We believe the issue with Apigee, Cloud Data Fusion, Cloud Filestore, Cloud Load Balancing, Cloud Run, Data Catalog, Datastream, Google App Engine, Google BigQuery, Google Cloud Bigtable, Google Cloud Composer, Google Cloud Dataflow, Google Cloud Datastore, Google Cloud Functions, Google Cloud Networking, Google Cloud Pub/Sub, Google Cloud SQL, Google Cloud Tasks, Google Compute Engine, Google Kubernetes Engine, Persistent Disk, Virtual Private Cloud (VPC) is partially resolved.
We believe most service impact was mitigated by 2022-06-04 13:30 US/Pacific. We do not have an ETA for full resolution at this point.
We will provide an update by Saturday, 2022-06-04 17:00 US/Pacific with current details.
Diagnosis: Please see the additional details below for product specific impact where available:
BigQuery: [Mitigated] Elevated BigQuery latency and errors on Queries.
Dataflow: [Mitigated] Some customers will see slowness or stuck behavior in their Dataflow batch and streaming jobs.
Persistent Disk: [Mitigated] 1% of PD-Standard (PD-HDD) disks in us-east1-c and regional disks in us-east1 are experiencing slow or stuck disk reads (operations hanging indefinitely), which may cause instances to become unresponsive.
Cloud Bigtable: Elevated latency and errors in us-east1-c.
Cloud Run: [Mitigated] Elevated unavailability.
Compute Engine: No impact outside of side effects from underlying Persistent Disk impact.
Cloud Networking: [Mitigated] The entire us-east1-c zone has delayed firewall programming for new data plane changes, as well as delays launching or SSHing into instances. Existing programming should continue to work.
Cloud Composer: [Mitigated] Environment creations fail because of errors creating App Engine apps; additional regions may be affected.
Datastore: [Mitigated] Increased latency, particularly for queries.
Data Catalog: [Mitigated]
Workaround: In general, retry failed requests, or where possible use a region other than us-east1 for regional services and a zone other than us-east1-c for zonal services. Please see the additional details below for product specific workarounds where available:
Dataflow: If feasible, customers can consider restarting jobs in an unimpacted zone.
Persistent Disk: Use a different zone or PD-SSD disks (see the sketch after this list).
Cloud Networking: If you are using load balancers or an HA configuration, fail over away from us-east1-c.
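For the Persistent Disk workaround above, the following Python sketch shows how a replacement PD-SSD disk could be created in a zone other than us-east1-c, assuming the google-cloud-compute client library; the project, disk name, size, and target zone are illustrative placeholders.

from google.cloud import compute_v1

project = "my-project"        # placeholder project ID
target_zone = "us-east1-b"    # any zone other than the impacted us-east1-c

disk = compute_v1.Disk(
    name="replacement-disk",  # placeholder disk name
    size_gb=500,              # placeholder size
    # PD-SSD instead of PD-Standard, per the workaround above
    type_=f"projects/{project}/zones/{target_zone}/diskTypes/pd-ssd",
)

client = compute_v1.DisksClient()
operation = client.insert(project=project, zone=target_zone, disk_resource=disk)
operation.result()  # block until the disk is created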
Description: We believe the issue with Apigee, Cloud Data Fusion, Cloud Filestore, Cloud Load Balancing, Cloud Run, Data Catalog, Datastream, Google App Engine, Google BigQuery, Google Cloud Bigtable, Google Cloud Composer, Google Cloud Dataflow, Google Cloud Datastore, Google Cloud Functions, Google Cloud Networking, Google Cloud Pub/Sub, Google Cloud SQL, Google Cloud Tasks, Google Compute Engine, Google Kubernetes Engine, Persistent Disk, Virtual Private Cloud (VPC) is partially resolved.
We believe we have addressed the root cause and are seeing substantial improvements. Currently we are working to verify mitigation for each of the services. We do not have an ETA for full resolution at this point.
We will provide an update by Saturday, 2022-06-04 15:30 US/Pacific with current details.
Diagnosis: Please see the additional details below for product specific impact where available:
BigQuery: [Mitigated] Elevated BigQuery latency and errors on Queries.
Dataflow: Some customers will see slowness or stuck behavior in their Dataflow batch and streaming jobs.
App Engine: [Mitigated] Elevated unavailability.
Cloud Tasks: [Mitigated] Elevated unavailability.
Cloud Pub/Sub: Increased errors and/or latency.
Persistent Disk: 1% of PD-Standard (PD-HDD) disks in us-east1-c and regional disks in us-east1 are experiencing slow or stuck disk reads (operations hanging indefinitely), which may cause instances to become unresponsive.
Cloud Bigtable: Elevated latency and errors in us-east1-c.
Cloud Run: [Mitigated] Elevated unavailability.
Compute Engine: No impact outside of side effects from underlying Persistent Disk impact.
Cloud Networking: The entire us-east1-c zone has delayed firewall programming for new data plane changes, as well as delays launching or SSHing into instances. Existing programming should continue to work.
Cloud Composer: Environment creations fail because of errors creating App Engine apps; additional regions may be affected.
Datastore: [Mitigated] Increased latency, particularly for queries.
Data Catalog: [Mitigated]
Workaround: In general, retry failed requests, or where possible use a region other than us-east1 for regional services and a zone other than us-east1-c for zonal services. Please see the additional details below for product specific workarounds where available:
Dataflow: If feasible, customers can consider restarting jobs in an unimpacted zone.
Persistent Disk: Use a different zone or PD-SSD disks.
Cloud Networking: If you are using load balancers or an HA configuration, fail over away from us-east1-c (see the sketch after this list).
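For the Cloud Networking workaround above, the following Python sketch shows one way to fail a global backend service away from us-east1-c by removing that zone's backends, assuming the google-cloud-compute client library; the project and backend service names are placeholders, and the same change can be made in the Console or with gcloud.

from google.cloud import compute_v1

project = "my-project"                    # placeholder project ID
backend_service_name = "web-backend-svc"  # placeholder backend service

client = compute_v1.BackendServicesClient()
service = client.get(project=project, backend_service=backend_service_name)

# Keep only backends whose instance group or NEG is outside the impacted zone.
remaining = [b for b in service.backends if "/zones/us-east1-c/" not in b.group]

patch_body = compute_v1.BackendService(
    backends=remaining,
    fingerprint=service.fingerprint,  # optimistic-locking fingerprint from the get()
)
operation = client.patch(
    project=project,
    backend_service=backend_service_name,
    backend_service_resource=patch_body,
)
operation.result()  # wait for the change to be applied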
Description: Mitigation work is still underway by our engineering team.
We will provide more information by Saturday, 2022-06-04 14:00 US/Pacific.
Diagnosis: Please see the additional details below for product specific impact where available:
BigQuery: [Mitigated] Elevated BigQuery latency and errors on Queries.
Dataflow: Some customers will see slowness or stuck behavior in their Dataflow batch and streaming jobs.
App Engine: [Mitigated] Elevated unavailability.
Cloud Tasks: [Mitigated] Elevated unavailability.
Cloud Pub/Sub: Increased errors and/or latency.
Persistent Disk: 1% of PD-Standard (PD-HDD) disks in us-east1-c and regional disks in us-east1 are experiencing slow or stuck disk reads (operations hanging indefinitely), which may cause instances to become unresponsive.
Cloud Bigtable: Elevated latency and errors in us-east1-c.
Cloud Run: [Mitigated] Elevated unavailability.
Compute Engine: No impact outside of side effects from underlying Persistent Disk impact.
Cloud Networking: The entire us-east1-c zone has delayed firewall programming for new data plane changes, as well as delays launching or SSHing into instances. Existing programming should continue to work.
Cloud Composer: Environment creations fail because of errors creating App Engine apps; additional regions may be affected.
Datastore: [Mitigated] Increased latency, particularly for queries.
Data Catalog: [Mitigated]
Workaround: In general, retry failed requests, or where possible use a region other than us-east1 for regional services and a zone other than us-east1-c for zonal services. Please see the additional details below for product specific workarounds where available:
Dataflow: If feasible, customers can consider restarting jobs in an unimpacted zone (see the sketch after this list).
Persistent Disk: Use a different zone or PD-SSD disks.
Cloud Networking: If you are using load balancers or an HA configuration, fail over away from us-east1-c.
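For the Dataflow workaround above, the following Python sketch shows how a resubmitted Beam pipeline could be pinned to an unimpacted region (or zone) using standard pipeline options; the project, bucket, and region values are placeholders, and the pipeline body is a trivial example.

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions(
    runner="DataflowRunner",
    project="my-project",                # placeholder project ID
    temp_location="gs://my-bucket/tmp",  # placeholder staging bucket
    region="us-central1",                # any region other than the impacted us-east1
    # worker_zone="us-east1-b",          # alternatively, stay in us-east1 but avoid -c
)

with beam.Pipeline(options=options) as pipeline:
    (pipeline
     | beam.Create(["resubmitted", "in", "an", "unimpacted", "region"])
     | beam.Map(print))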
Description: Mitigation work is still underway by our engineering team.
We will provide more information by Saturday, 2022-06-04 14:00 US/Pacific.
Diagnosis: Please see the additional details below for product specific impact where available:
BigQuery: Elevated BigQuery latency and errors on Queries.
Dataflow: Some customers will see slowness or stuck behavior in their Dataflow batch and streaming jobs.
App Engine: [Mitigated] Elevated unavailability.
Cloud Tasks: [Mitigated] Elevated unavailability.
Cloud Pub/Sub: Increased errors and/or latency.
Persistent Disk: 1% of PD-Standard (PD-HDD) disks in us-east1-c and regional disks in us-east1 are experiencing slow or stuck disk reads (operations hanging indefinitely), which may cause instances to become unresponsive.
Cloud Bigtable: Elevated latency and errors in us-east1-c.
Cloud Run: [Mitigated] Elevated unavailability.
Compute Engine: No impact outside of side effects from underlying Persistent Disk impact.
Cloud Networking: The entire us-east1-c zone has delayed firewall programming for new data plane changes, as well as delays launching or SSHing into instances. Existing programming should continue to work.
Workaround: In general, retry failed requests, or where possible use a region other than us-east1 for regional services and a zone other than us-east1-c for zonal services. Please see the additional details below for product specific workarounds where available:
Dataflow: If feasible, customers can consider restarting jobs in an unimpacted zone.
Persistent Disk: Use a different zone or PD-SSD disks.
Cloud Networking: If you are using load balancers or an HA configuration, fail over away from us-east1-c.
Description: Mitigation work is still underway by our engineering team.
We will provide more information by Saturday, 2022-06-04 13:00 US/Pacific.
Diagnosis: Please see the additional details below for product specific impact where available:
BigQuery: Elevated BigQuery latency and errors on Queries.
Dataflow: Some customers will see slowness or stuck behavior in their Dataflow batch and streaming jobs.
App Engine: [Mitigated] Elevated unavailability.
Cloud Tasks: [Mitigated] Elevated unavailability.
Cloud Pub/Sub: Increased errors and/or latency.
Persistent Disk: 1% of PD-Standard (PD-HDD) disks in us-east1-c and regional disks in us-east1 are experiencing slow or stuck disk reads (operations hanging indefinitely), which may cause instances to become unresponsive.
Cloud Bigtable: Elevated latency and errors in us-east1-c.
Cloud Run: [Mitigated] Elevated unavailability.
Compute Engine: No impact outside of side effects from underlying Persistent Disk impact.
Cloud Networking: The entire us-east1-c zone has delayed firewall programming for new data plane changes, as well as delays launching or SSHing into instances. Existing programming should continue to work.
Workaround: In general, retry failed requests, or where possible use a region other than us-east1 for regional services and a zone other than us-east1-c for zonal services. Please see the additional details below for product specific workarounds where available:
Dataflow: If feasible, customers can consider restarting jobs in an unimpacted zone.
Persistent Disk: Use a different zone or PD-SSD disks.
Cloud Networking: If you are using load balancers or an HA configuration, fail over away from us-east1-c.
Description: Mitigation work is currently underway by our engineering team.
We do not have an ETA for mitigation at this point.
We will provide more information by Saturday, 2022-06-04 13:00 US/Pacific.
Diagnosis:
Please see the additional details below for product specific impact where available:
BigQuery: Elevated BigQuery latency and errors on Queries.
Dataflow: Some customers will see slowness or stuck behavior in their Dataflow batch and streaming jobs.
App Engine: Elevated unavailability.
Cloud Tasks: Elevated unavailability.
Cloud Pub/Sub: Increased errors and/or latency.
Persistent Disk: 1% of PD-Standard (PD-HDD) disks in us-east1-c and regional disks in us-east1 are experiencing slow or stuck disk reads (operations hanging indefinitely), which may cause instances to become unresponsive.
Cloud Bigtable: Elevated latency and errors in us-east1-c.
Workaround: Retry failed requests, or where possible use a region other than us-east1 for regional services and a zone other than us-east1-c for zonal services.
Description: Our engineering team continues to investigate the issue, which appears to be impacting multiple products in us-east1.
We will provide an update by Saturday, 2022-06-04 12:30 US/Pacific with current details.
Diagnosis: Please see the following details for product specific impact:
BigQuery: Elevated BigQuery latency and errors on Queries.
Dataflow: Some customers will see slowness or stuck behavior in their Dataflow batch and streaming jobs.
App Engine: Elevated unavailability.
Cloud Tasks: Elevated unavailability.
Cloud Pub/Sub: Elevated unavailability.
Persistent Disk: A small number of PD-Standard (PD-HDD) disks in us-east1-c only are experiencing slow or stuck disk reads (operations hanging indefinitely), which may cause instances to become unresponsive.