Google Cloud Status Dashboard

This page provides status information on the services that are part of Google Cloud Platform. Check back here to view the current status of the services listed below. If you are experiencing an issue not listed here, please contact Support. Learn more about what's posted on the dashboard in this FAQ. For additional information on these services, please visit cloud.google.com.

Google Cloud Dataflow Incident #20001

We are investigating an issue with Cloud Dataflow.

Incident began at 2020-02-22 13:25 and ended at 2020-02-22 17:25 (all times are US/Pacific).

Date Time Description
Feb 22, 2020 17:25

The issue with Cloud Dataflow has been resolved for all affected users as of Saturday, 2020-02-22 17:24 US/Pacific.
If you have a job in an unhealthy state (either failed or queued for more than 6 hours), please restart it so that it runs correctly, or contact Support for more information.

We thank you for your patience while we worked on resolving the issue.
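
To make "restart it" concrete: the sketch below is one way to find jobs left in that state programmatically, assuming the Dataflow REST API (v1b3) via the google-api-python-client library and Application Default Credentials. PROJECT_ID and REGION are placeholders, and this is an illustrative sketch rather than an official tool; jobs it prints would then be resubmitted the same way they were originally launched.

```python
from datetime import datetime, timedelta, timezone

from googleapiclient.discovery import build

PROJECT_ID = "my-project"   # placeholder: your GCP project ID
REGION = "us-central1"      # placeholder: the regional endpoint of your jobs

dataflow = build("dataflow", "v1b3")
jobs_api = dataflow.projects().locations().jobs()

cutoff = datetime.now(timezone.utc) - timedelta(hours=6)
unhealthy = []
page_token = None
while True:
    response = jobs_api.list(
        projectId=PROJECT_ID, location=REGION, filter="ALL", pageToken=page_token
    ).execute()
    for job in response.get("jobs", []):
        state = job.get("currentState", "")
        # createTime is RFC 3339; drop fractional seconds before parsing.
        created = datetime.strptime(
            job["createTime"][:19], "%Y-%m-%dT%H:%M:%S"
        ).replace(tzinfo=timezone.utc)
        if state == "JOB_STATE_FAILED" or (
            state == "JOB_STATE_QUEUED" and created < cutoff
        ):
            unhealthy.append(job)
    page_token = response.get("nextPageToken")
    if not page_token:
        break

# Each job listed here is a candidate for resubmission.
for job in unhealthy:
    print(job["id"], job.get("name"), job["currentState"], sep="  ")
```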

Feb 22, 2020 16:31

Description: We are investigating an issue with Cloud Dataflow.
Jobs using Dataflow Shuffle may fail or (when using Flexible Resource Scheduling) be queued indefinitely.
Mitigation work by our engineering team is still underway. We do not yet have an ETA for a full recovery.

We will provide more information by Saturday, 2020-02-22 17:30 US/Pacific at the latest.

Workaround: Retry failed jobs by canceling any pipelines affected by the incident and re-running them.
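
As a hedged illustration of the cancel step (re-running then means resubmitting the pipeline the same way it was originally launched), one option is the Dataflow REST API (v1b3) through google-api-python-client; PROJECT_ID, REGION, and JOB_ID below are placeholders. The same cancellation can also be requested from the Cloud Console or gcloud.

```python
from googleapiclient.discovery import build

PROJECT_ID = "my-project"           # placeholder: your GCP project ID
REGION = "us-central1"              # placeholder: the job's regional endpoint
JOB_ID = "your-affected-job-id"     # placeholder: an affected Dataflow job ID

dataflow = build("dataflow", "v1b3")

# Request cancellation of the affected job; Dataflow moves it toward
# JOB_STATE_CANCELLED once the request is processed.
dataflow.projects().locations().jobs().update(
    projectId=PROJECT_ID,
    location=REGION,
    jobId=JOB_ID,
    body={"requestedState": "JOB_STATE_CANCELLED"},
).execute()
```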

Feb 22, 2020 15:31

Description: We are investigating an issue with Cloud Dataflow.
Jobs using Dataflow Shuffle may fail or (when using Flexible Resource Scheduling) be queued indefinitely.
Mitigation work by our engineering team is still underway.

We will provide more information by Saturday, 2020-02-22 16:30 US/Pacific at the latest.

Workaround: Retry failed jobs by canceling any pipelines affected by the incident and re-running them.

Feb 22, 2020 14:31

Description: We are investigating an issue with Cloud Dataflow.
Jobs using Dataflow Shuffle may fail or (when using Flexible Resource Scheduling) be queued indefinitely.
Mitigation work by our engineering team is still underway.

We will provide more information by Saturday, 2020-02-22 15:30 US/Pacific at the latest.

Workaround: Retry failed jobs by canceling any pipelines affected by the incident and re-running them.

Feb 22, 2020 14:09

Description: We are investigating an issue with Cloud Dataflow.
Jobs using Dataflow Shuffle may fail or (when using Flexible Resource Scheduling) be queued indefinitely.
Mitigation work by our engineering team is currently underway.

We will provide more information by Saturday, 2020-02-22 14:30 US/Pacific.

Workaround: Retry failed jobs by canceling any pipelines affected by the incident and re-running them.
