PG&E's Repeat Failures at Mission Substation Expose a Deeper Operational Blind Spot
When the same infrastructure fails more than once, the question isn't just what broke — it's why no one fixed it.
Nearly three months after a fire at a Mission District substation knocked out power to 137,000 San Francisco residents in December 2025, city officials are still pressing for answers. The outage — one of the most disruptive in recent San Francisco history — has triggered a formal push for investigation, with municipal leaders demanding accountability from Pacific Gas & Electric over what caused the failure and, more pointedly, why it happened at a location with a documented history of similar incidents.
That last part is the real story here.
A single substation fire is a serious operational event. A recurring fire at the same substation is an institutional failure. The distinction matters enormously, not just for PG&E's regulators and ratepayers, but for the broader utility industry as it grapples with aging infrastructure, escalating climate risk, and the accelerating demands of electrification.
The Problem Isn't Just the Fire
When a critical piece of infrastructure fails repeatedly at the same location, it signals one of a few uncomfortable truths: the root cause was never properly identified the first time, corrective actions were insufficient or poorly implemented, or the asset was deprioritized in capital planning despite known risk indicators. In any of these scenarios, the failure is not simply technical — it is procedural.
A substation that has caught fire before is not an unknown risk. It is a documented risk. The question regulators should be asking is what data PG&E had, when they had it, and what decisions were made — or deferred — as a result.
This is precisely where process intelligence and operational visibility become non-negotiable. Utilities generate enormous volumes of maintenance records, inspection reports, work order histories, and equipment performance data. The challenge is rarely a lack of data — it is whether that data is being translated into action before an asset fails catastrophically, not after.
What Better Operational Intelligence Looks Like
Forward-thinking utilities are increasingly deploying continuous monitoring frameworks that flag deteriorating assets based on historical incident patterns, maintenance cycle deviations, and real-time sensor inputs. When a substation has a prior fire event in its record, that history should automatically elevate its risk classification and trigger more aggressive inspection and remediation timelines. It should not take a second blackout affecting six figures of customers to prompt a review.
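Here is a minimal sketch of what that kind of history-driven escalation rule could look like. The data model, field names, thresholds, and intervals are all illustrative assumptions, not PG&E's actual systems or any regulator's standard; the point is simply that a prior fire in an asset's own record should mechanically tighten its risk class and inspection cadence rather than wait on ad hoc review.

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical incident and asset records; field names and thresholds are
# illustrative only, not any utility's actual data model or policy.
@dataclass
class Incident:
    asset_id: str
    occurred: date
    category: str  # e.g. "fire", "overload", "insulation_failure"

@dataclass
class Asset:
    asset_id: str
    risk_class: str = "standard"          # "standard" | "elevated" | "critical"
    inspection_interval_days: int = 365   # default annual inspection
    incidents: list[Incident] = field(default_factory=list)

def reclassify(asset: Asset) -> Asset:
    """Elevate risk class and shorten the inspection interval when the
    asset's own history contains one or more prior fire events."""
    fires = [i for i in asset.incidents if i.category == "fire"]
    if len(fires) >= 2:
        asset.risk_class = "critical"
        asset.inspection_interval_days = 30
    elif len(fires) == 1:
        asset.risk_class = "elevated"
        asset.inspection_interval_days = 90
    return asset

# Usage: an asset with one documented fire is never treated as "standard" again.
substation = Asset(
    asset_id="SUB-001",  # illustrative identifier
    incidents=[Incident("SUB-001", date(2020, 1, 1), "fire")],  # illustrative date
)
print(reclassify(substation).risk_class)              # -> "elevated"
print(reclassify(substation).inspection_interval_days)  # -> 90
```

A rule this simple is obviously not a substitute for real condition monitoring, but encoding it explicitly means the escalation happens automatically when the record is updated, instead of depending on someone remembering the last fire during the next capital planning cycle.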
The San Francisco case also raises a service quality dimension that utilities can no longer afford to treat as secondary. Prolonged outages disproportionately affect vulnerable populations — those who depend on medical equipment, lack resources to relocate, or live in older housing stock without backup capacity. The 137,000 figure is not just a statistic; it represents a service obligation that was not met, and a trust deficit that compounds with every delayed explanation.
A Regulatory Reckoning in Progress
San Francisco's push for investigation may ultimately produce fines, mandated audits, or accelerated infrastructure spending. But the more durable outcome — for PG&E and the industry — would be a genuine shift toward predictive, data-driven asset management that treats prior failure as a forward-looking risk signal rather than a closed case.
The utilities that will lead in the next decade are not those that respond best to crises. They are those that build the operational intelligence to see crises coming — and make the organizational commitment to act before 137,000 people lose power on a December night.