Service Health

This page provides status information on the services that are part of Google Cloud. Check back here to view the current status of the services listed below. If you are experiencing an issue not listed here, please contact Support. Learn more about what's posted on the dashboard in this FAQ. For additional information on these services, please visit https://cloud.google.com/.

Incident affecting Google BigQuery, Apigee, Cloud Filestore, Cloud Load Balancing, Cloud Run, Data Catalog, Datastream, Google App Engine, Google Cloud Composer, Google Cloud Dataflow, Google Cloud Datastore, Google Cloud Functions, Google Cloud Networking, Google Cloud Pub/Sub, Google Cloud SQL, Google Cloud Tasks, Google Compute Engine, Google Kubernetes Engine, Persistent Disk, Virtual Private Cloud (VPC), Cloud Data Fusion, Google Cloud Bigtable

us-east1: Elevated errors affecting multiple services.

Incident began at 2022-06-04 10:20 and ended at 2022-06-04 13:24 (all times are US/Pacific).

Previously affected location(s)

South Carolina (us-east1)

Date Time Description
6 Jun 2022 02:14 PDT

We apologize for the inconvenience this service disruption may have caused. We would like to provide some information about this incident below. Please note, this information is based on our best knowledge at the time of posting and is subject to change as our investigation continues. If you have experienced impact outside of what is listed below, please reach out to Google Support by opening a case at https://cloud.google.com/support or via this help article: https://support.google.com/a/answer/1047213.

(All Times US/Pacific)

Incident Start: 04 June 2022 10:20

Incident End: 04 June 2022 13:24

Duration: 3 hours 4 minutes
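
The stated duration is simply the difference between the start and end timestamps above; a quick check with Python's standard datetime module confirms the arithmetic:

```python
from datetime import datetime

# Incident window as reported (US/Pacific, same calendar day)
start = datetime(2022, 6, 4, 10, 20)
end = datetime(2022, 6, 4, 13, 24)

delta = end - start
hours, remainder = divmod(int(delta.total_seconds()), 3600)
minutes = remainder // 60
print(f"Duration: {hours} hours {minutes} minutes")  # Duration: 3 hours 4 minutes
```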

Affected Services and Features:

BigQuery

Regions/Zones: us-east1

Description:

BigQuery experienced elevated latency for 3 hours and 4 minutes. The preliminary root cause was an outage in an internal Google storage system, which caused some BigQuery data operations in the affected cell to fail.

Customer Impact:

Customers might have experienced failures or slowness in their requests to BigQuery's us-east1 region.
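The standing guidance throughout this incident was to retry failed requests. As an illustrative sketch only (not an official client-library snippet), a generic exponential-backoff wrapper along these lines could be placed around any regional request; `fn` is a stand-in for your own call:

```python
import random
import time

def retry_with_backoff(fn, max_attempts=5, base_delay=1.0, max_delay=30.0):
    """Retry fn() with exponential backoff and full jitter on any exception."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the last error
            # Exponential backoff, capped at max_delay, with full jitter.
            delay = min(max_delay, base_delay * (2 ** attempt))
            time.sleep(random.uniform(0, delay))
```

A wrapper of this kind smooths over transient elevated error rates during an incident; persistent failures are better handled by failing over to another region, as the workarounds below suggest.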

4 Jun 2022 17:09 PDT

The issue with Apigee, Cloud Data Fusion, Cloud Filestore, Cloud Load Balancing, Cloud Run, Data Catalog, Datastream, Google App Engine, Google BigQuery, Google Cloud Bigtable, Google Cloud Composer, Google Cloud Dataflow, Google Cloud Datastore, Google Cloud Functions, Google Cloud Networking, Google Cloud Pub/Sub, Google Cloud SQL, Google Cloud Tasks, Google Compute Engine, Google Kubernetes Engine, Persistent Disk, Virtual Private Cloud (VPC) has been resolved for all affected users as of Saturday, 2022-06-04 17:03 US/Pacific.

Most service impact was mitigated by 2022-06-04 13:20 US/Pacific.

We thank you for your patience while we worked on resolving the issue.

4 Jun 2022 16:43 PDT

Summary: us-east1: Elevated errors affecting multiple services.

Description: We believe the issue with Apigee, Cloud Data Fusion, Cloud Filestore, Cloud Load Balancing, Cloud Run, Data Catalog, Datastream, Google App Engine, Google BigQuery, Google Cloud Bigtable, Google Cloud Composer, Google Cloud Dataflow, Google Cloud Datastore, Google Cloud Functions, Google Cloud Networking, Google Cloud Pub/Sub, Google Cloud SQL, Google Cloud Tasks, Google Compute Engine, Google Kubernetes Engine, Persistent Disk, Virtual Private Cloud (VPC) is partially resolved.

We believe most service impact was mitigated by 2022-06-04 13:30 US/Pacific. We do not have an ETA for full resolution at this point.

We will provide an update by Saturday, 2022-06-04 18:00 US/Pacific with current details.

Diagnosis: Please see the additional details below for product specific impact where available:

  • BigQuery: [Mitigated] Elevated BigQuery latency and errors on Queries.
  • Dataflow: [Mitigated] Some customers may see their Dataflow batch and streaming jobs become stuck or slow.
  • App Engine: [Mitigated] Elevated unavailability.
  • Cloud Tasks: [Mitigated] Elevated unavailability.
  • Cloud Pub/Sub: [Mitigated] Increased errors and/or latency.
  • Persistent Disk: [Mitigated] 1% of PD-Standard (PD-HDD) disks in us-east1-c, and regional disks in us-east1, are experiencing slow or stuck disk reads (operations hanging indefinitely), which may cause instances to become unresponsive.
  • Cloud Bigtable: Elevated latency and errors in us-east1-c.
  • Cloud Run: [Mitigated] Elevated unavailability.
  • Compute Engine: No impact outside of side effects from underlying Persistent Disk impact.
  • Cloud Networking: [Mitigated] The entire us-east1-c zone has delayed firewall programming for new data plane changes, as well as delays when launching instances or connecting to them over SSH. Existing programming should continue to work.
  • Cloud Functions: [Mitigated] Elevated unavailability.
  • Cloud Composer: [Mitigated] Environment creation fails because of errors creating App Engine apps; additional regions may be affected.
  • Datastore: [Mitigated] Increased latency, particularly for queries.
  • Data Catalog: [Mitigated]

Workaround: In general, retry failed requests, use a region other than us-east1 for regional services, or use a zone other than us-east1-c for zonal services where possible. Please see the additional details below for product-specific workarounds where available:

  • Dataflow: If feasible, customers can consider restarting their jobs in an unimpacted zone.
  • Persistent Disk: Use a different zone or PD-SSD disks.
  • Cloud Networking: If you are using load balancers or an HA configuration, fail over away from us-east1-c.
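
The workaround bullets above amount to steering work away from the impaired zone. A minimal sketch of that selection logic, assuming a hypothetical list of available zones (this is illustrative only, not a Google Cloud API):

```python
def pick_zone(preferred, impaired, available):
    """Return the preferred zone unless it is impaired; otherwise fall back
    to another healthy zone in the same region, if one is available."""
    if preferred not in impaired:
        return preferred
    # Zone names are "<region>-<letter>", e.g. "us-east1-c" -> region "us-east1".
    region = preferred.rsplit("-", 1)[0]
    for zone in available:
        if zone.startswith(region + "-") and zone not in impaired:
            return zone
    return None  # no healthy zone left in the region: fail over to another region

# e.g. pick_zone("us-east1-c", {"us-east1-c"}, ["us-east1-b", "us-east1-c", "us-east1-d"])
# returns "us-east1-b"
```
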
4 Jun 2022 15:31 PDT

Summary: us-east1: Elevated errors affecting multiple services.

Description: We believe the issue with Apigee, Cloud Data Fusion, Cloud Filestore, Cloud Load Balancing, Cloud Run, Data Catalog, Datastream, Google App Engine, Google BigQuery, Google Cloud Bigtable, Google Cloud Composer, Google Cloud Dataflow, Google Cloud Datastore, Google Cloud Functions, Google Cloud Networking, Google Cloud Pub/Sub, Google Cloud SQL, Google Cloud Tasks, Google Compute Engine, Google Kubernetes Engine, Persistent Disk, Virtual Private Cloud (VPC) is partially resolved.

We believe most service impact was mitigated by 2022-06-04 13:30 US/Pacific. We do not have an ETA for full resolution at this point.

We will provide an update by Saturday, 2022-06-04 17:00 US/Pacific with current details.

Diagnosis: Please see the additional details below for product specific impact where available:

  • BigQuery: [Mitigated] Elevated BigQuery latency and errors on Queries.
  • Dataflow: [Mitigated] Some customers may see their Dataflow batch and streaming jobs become stuck or slow.
  • App Engine: [Mitigated] Elevated unavailability.
  • Cloud Tasks: [Mitigated] Elevated unavailability.
  • Cloud Pub/Sub: [Mitigated] Increased errors and/or latency.
  • Persistent Disk: [Mitigated] 1% of PD-Standard (PD-HDD) disks in us-east1-c, and regional disks in us-east1, are experiencing slow or stuck disk reads (operations hanging indefinitely), which may cause instances to become unresponsive.
  • Cloud Bigtable: Elevated latency and errors in us-east1-c.
  • Cloud Run: [Mitigated] Elevated unavailability.
  • Compute Engine: No impact outside of side effects from underlying Persistent Disk impact.
  • Cloud Networking: [Mitigated] The entire us-east1-c zone has delayed firewall programming for new data plane changes, as well as delays when launching instances or connecting to them over SSH. Existing programming should continue to work.
  • Cloud Functions: [Mitigated] Elevated unavailability.
  • Cloud Composer: [Mitigated] Environment creation fails because of errors creating App Engine apps; additional regions may be affected.
  • Datastore: [Mitigated] Increased latency, particularly for queries.
  • Data Catalog: [Mitigated]

Workaround: In general, retry failed requests, use a region other than us-east1 for regional services, or use a zone other than us-east1-c for zonal services where possible. Please see the additional details below for product-specific workarounds where available:

  • Dataflow: If feasible, customers can consider restarting their jobs in an unimpacted zone.
  • Persistent Disk: Use a different zone or PD-SSD disks.
  • Cloud Networking: If you are using load balancers or an HA configuration, fail over away from us-east1-c.

4 Jun 2022 15:31 PDT

Summary: us-east1: Elevated errors affecting multiple services.

Description: We believe the issue with Apigee, Cloud Data Fusion, Cloud Filestore, Cloud Load Balancing, Cloud Run, Data Catalog, Datastream, Google App Engine, Google BigQuery, Google Cloud Bigtable, Google Cloud Composer, Google Cloud Dataflow, Google Cloud Datastore, Google Cloud Functions, Google Cloud Networking, Google Cloud Pub/Sub, Google Cloud SQL, Google Cloud Tasks, Google Compute Engine, Google Kubernetes Engine, Persistent Disk, Virtual Private Cloud (VPC) is partially resolved.

We believe we have addressed the root cause and are seeing substantial improvements. Currently we are working to verify mitigation for each of the services. We do not have an ETA for full resolution at this point.

We will provide an update by Saturday, 2022-06-04 15:30 US/Pacific with current details.

Diagnosis: Please see the additional details below for product specific impact where available:

  • BigQuery: [Mitigated] Elevated BigQuery latency and errors on Queries.
  • Dataflow: Some customers may see their Dataflow batch and streaming jobs become stuck or slow.
  • App Engine: [Mitigated] Elevated unavailability.
  • Cloud Tasks: [Mitigated] Elevated unavailability.
  • Cloud Pub/Sub: Increased errors and/or latency.
  • Persistent Disk: 1% of PD-Standard (PD-HDD) disks in us-east1-c, and regional disks in us-east1, are experiencing slow or stuck disk reads (operations hanging indefinitely), which may cause instances to become unresponsive.
  • Cloud Bigtable: Elevated latency and errors in us-east1-c.
  • Cloud Run: [Mitigated] Elevated unavailability.
  • Compute Engine: No impact outside of side effects from underlying Persistent Disk impact.
  • Cloud Networking: The entire us-east1-c zone has delayed firewall programming for new data plane changes, as well as delays when launching instances or connecting to them over SSH. Existing programming should continue to work.
  • Cloud Functions: [Mitigated] Elevated unavailability.
  • Cloud Composer: Environment creation fails because of errors creating App Engine apps; additional regions may be affected.
  • Datastore: [Mitigated] Increased latency, particularly for queries.
  • Data Catalog: [Mitigated]

Workaround: In general, retry failed requests, use a region other than us-east1 for regional services, or use a zone other than us-east1-c for zonal services where possible. Please see the additional details below for product-specific workarounds where available:

  • Dataflow: If feasible, customers can consider restarting their jobs in an unimpacted zone.
  • Persistent Disk: Use a different zone or PD-SSD disks.
  • Cloud Networking: If you are using load balancers or an HA configuration, fail over away from us-east1-c.

4 Jun 2022 13:57 PDT

Summary: us-east1: Elevated errors affecting multiple services.

Description: Mitigation work is still underway by our engineering team.

We will provide more information by Saturday, 2022-06-04 14:00 US/Pacific.

Diagnosis: Please see the additional details below for product specific impact where available:

  • BigQuery: [Mitigated] Elevated BigQuery latency and errors on Queries.
  • Dataflow: Some customers may see their Dataflow batch and streaming jobs become stuck or slow.
  • App Engine: [Mitigated] Elevated unavailability.
  • Cloud Tasks: [Mitigated] Elevated unavailability.
  • Cloud Pub/Sub: Increased errors and/or latency.
  • Persistent Disk: 1% of PD-Standard (PD-HDD) disks in us-east1-c, and regional disks in us-east1, are experiencing slow or stuck disk reads (operations hanging indefinitely), which may cause instances to become unresponsive.
  • Cloud Bigtable: Elevated latency and errors in us-east1-c.
  • Cloud Run: [Mitigated] Elevated unavailability.
  • Compute Engine: No impact outside of side effects from underlying Persistent Disk impact.
  • Cloud Networking: The entire us-east1-c zone has delayed firewall programming for new data plane changes, as well as delays when launching instances or connecting to them over SSH. Existing programming should continue to work.
  • Cloud Functions: [Mitigated] Elevated unavailability.
  • Cloud Composer: Environment creation fails because of errors creating App Engine apps; additional regions may be affected.
  • Datastore: [Mitigated] Increased latency, particularly for queries.
  • Data Catalog: [Mitigated]

Workaround: In general, retry failed requests, use a region other than us-east1 for regional services, or use a zone other than us-east1-c for zonal services where possible. Please see the additional details below for product-specific workarounds where available:

  • Dataflow: If feasible, customers can consider restarting their jobs in an unimpacted zone.
  • Persistent Disk: Use a different zone or PD-SSD disks.
  • Cloud Networking: If you are using load balancers or an HA configuration, fail over away from us-east1-c.

4 Jun 2022 12:50 PDT

Summary: us-east1: Elevated errors affecting multiple services.

Description: Mitigation work is still underway by our engineering team.

We will provide more information by Saturday, 2022-06-04 14:00 US/Pacific.

Diagnosis: Please see the additional details below for product specific impact where available:

  • BigQuery: Elevated BigQuery latency and errors on Queries.
  • Dataflow: Some customers may see their Dataflow batch and streaming jobs become stuck or slow.
  • App Engine: [Mitigated] Elevated unavailability.
  • Cloud Tasks: [Mitigated] Elevated unavailability.
  • Cloud Pub/Sub: Increased errors and/or latency.
  • Persistent Disk: 1% of PD-Standard (PD-HDD) disks in us-east1-c, and regional disks in us-east1, are experiencing slow or stuck disk reads (operations hanging indefinitely), which may cause instances to become unresponsive.
  • Cloud Bigtable: Elevated latency and errors in us-east1-c.
  • Cloud Run: [Mitigated] Elevated unavailability.
  • Compute Engine: No impact outside of side effects from underlying Persistent Disk impact.
  • Cloud Networking: The entire us-east1-c zone has delayed firewall programming for new data plane changes, as well as delays when launching instances or connecting to them over SSH. Existing programming should continue to work.
  • Cloud Functions: [Mitigated] Elevated unavailability.

Workaround: In general, retry failed requests, use a region other than us-east1 for regional services, or use a zone other than us-east1-c for zonal services where possible. Please see the additional details below for product-specific workarounds where available:

  • Dataflow: If feasible, customers can consider restarting their jobs in an unimpacted zone.
  • Persistent Disk: Use a different zone or PD-SSD disks.
  • Cloud Networking: If you are using load balancers or an HA configuration, fail over away from us-east1-c.

4 Jun 2022 12:01 PDT

Summary: us-east1: Elevated errors affecting multiple services.

Description: Mitigation work is still underway by our engineering team.

We will provide more information by Saturday, 2022-06-04 13:00 US/Pacific.

Diagnosis: Please see the additional details below for product specific impact where available:

  • BigQuery: Elevated BigQuery latency and errors on Queries.
  • Dataflow: Some customers may see their Dataflow batch and streaming jobs become stuck or slow.
  • App Engine: [Mitigated] Elevated unavailability.
  • Cloud Tasks: [Mitigated] Elevated unavailability.
  • Cloud Pub/Sub: Increased errors and/or latency.
  • Persistent Disk: 1% of PD-Standard (PD-HDD) disks in us-east1-c, and regional disks in us-east1, are experiencing slow or stuck disk reads (operations hanging indefinitely), which may cause instances to become unresponsive.
  • Cloud Bigtable: Elevated latency and errors in us-east1-c.
  • Cloud Run: [Mitigated] Elevated unavailability.
  • Compute Engine: No impact outside of side effects from underlying Persistent Disk impact.
  • Cloud Networking: The entire us-east1-c zone has delayed firewall programming for new data plane changes, as well as delays when launching instances or connecting to them over SSH. Existing programming should continue to work.
  • Cloud Functions: [Mitigated] Elevated unavailability.

Workaround: In general, retry failed requests, use a region other than us-east1 for regional services, or use a zone other than us-east1-c for zonal services where possible. Please see the additional details below for product-specific workarounds where available:

  • Dataflow: If feasible, customers can consider restarting their jobs in an unimpacted zone.
  • Persistent Disk: Use a different zone or PD-SSD disks.
  • Cloud Networking: If you are using load balancers or an HA configuration, fail over away from us-east1-c.

4 Jun 2022 11:43 PDT

Summary: us-east1: Elevated errors affecting multiple services.

Description: Mitigation work is currently underway by our engineering team.

We do not have an ETA for mitigation at this point.

We will provide more information by Saturday, 2022-06-04 13:00 US/Pacific.

Diagnosis: Please see the additional details below for product specific impact where available:

  • BigQuery: Elevated BigQuery latency and errors on Queries.
  • Dataflow: Some customers may see their Dataflow batch and streaming jobs become stuck or slow.
  • App Engine: Elevated unavailability.
  • Cloud Tasks: Elevated unavailability.
  • Cloud Pub/Sub: Increased errors and/or latency.
  • Persistent Disk: 1% of PD-Standard (PD-HDD) disks in us-east1-c, and regional disks in us-east1, are experiencing slow or stuck disk reads (operations hanging indefinitely), which may cause instances to become unresponsive.
  • Cloud Bigtable: Elevated latency and errors in us-east1-c.

Workaround: Retry failed requests, use a region other than us-east1 for regional services, or use a zone other than us-east1-c for zonal services where possible.

4 Jun 2022 11:22 PDT

Summary: us-east1: Elevated errors affecting multiple services.

Description: Our engineering team continues to investigate the issue, which appears to be impacting multiple products in us-east1.

We will provide an update by Saturday, 2022-06-04 12:30 US/Pacific with current details.

Diagnosis: Please see the following details for product specific impact:

  • BigQuery: Elevated BigQuery latency and errors on Queries.
  • Dataflow: Some customers may see their Dataflow batch and streaming jobs become stuck or slow.
  • App Engine: Elevated unavailability.
  • Cloud Tasks: Elevated unavailability.
  • Cloud Pub/Sub: Elevated unavailability.
  • Persistent Disk: A small number of PD-Standard (PD-HDD) disks in us-east1-c only are experiencing slow or stuck disk reads (operations hanging indefinitely), which may cause instances to become unresponsive.

Workaround: None at this time.

4 Jun 2022 10:55 PDT

Summary: us-east1: Elevated errors affecting multiple services.

Description: Our engineering team continues to investigate the issue, which appears to be impacting multiple products in us-east1.

We will provide an update by Saturday, 2022-06-04 12:00 US/Pacific with current details.

Diagnosis:

  • BigQuery: Elevated BigQuery errors.
  • Dataflow: Some customers may see their Dataflow batch and streaming jobs become stuck or slow.

Workaround: None at this time.

4 Jun 2022 10:40 PDT

Summary: us-east1: Elevated BigQuery errors.

Description: We are experiencing an issue with Google BigQuery beginning at Saturday, 2022-06-04 10:20 US/Pacific.

Our engineering team continues to investigate the issue.

We will provide an update by Saturday, 2022-06-04 11:45 US/Pacific with current details.

We apologize to all who are affected by the disruption.

Diagnosis: None at this time.

Workaround: None at this time.