In order for Facebook IP ranges to be blackholed by a 'configuration error', the following conditions would have to be met (or breached).
Background
- A major network like the one Facebook has will peer with other network providers at multiple locations, in multiple DCs
- Large networks like this will have out-of-band (OOB) access to border routers (think a modem via a mobile phone link)
- Large companies with critical assets have very strict change control procedures
Change control is a key area to understand with regard to this outage. Most people might not be aware, but it is often quite an involved process, incorporating a number of checks and failsafes.
Consider a 'normal' change for a large company.
- Someone in the business decides that a change needs to be made for whatever reason, so they raise a change request.
- This initial change request will often be high level, and a business approver will have to sign it off as required and worth the risk.
- Technical engineers will then create a more detailed change request, which will include such things as:
  - Details of what is going to be changed, why, and on what equipment
  - Whether the change poses a risk to critical services
  - A detailed change script of the actual changes to be performed, including a backout plan should the change fail
  - Key services identified beforehand, with a test plan to be run before the change (to make sure it's all OK) and re-run after the change has been made
- Another engineer will peer review the detailed change request and provide technical approval, or push back on anything that might be wrong or unclear. They also provide assurance that the technical changes being made will actually achieve the stated business goals.
- Once all this is done, the change will go to a final approval team who have a 'big picture' view and can juggle changes between the various data centers, ensuring there are no overlaps between different change requests that could cause unexpected issues.
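The approval gates above can be sketched as a tiny state machine. This is a hypothetical illustration only - the class, gate names, and fields are invented for this post, not anyone's actual change-management tooling:

```python
# Hypothetical sketch of the approval gates described above. Class, gate
# names, and fields are invented for illustration - not real FB tooling.
from dataclasses import dataclass, field

APPROVAL_GATES = ["business", "technical_peer_review", "final_approval_team"]

@dataclass
class ChangeRequest:
    summary: str
    change_script: str            # the exact commands to be run
    backout_plan: str             # how to revert if the change fails
    risk_to_critical_services: bool
    approvals: list = field(default_factory=list)

    def approve(self, gate: str) -> None:
        # Gates must be passed in order; skipping one is rejected.
        expected = APPROVAL_GATES[len(self.approvals)]
        if gate != expected:
            raise ValueError(f"expected approval from {expected!r}, got {gate!r}")
        self.approvals.append(gate)

    def schedulable(self) -> bool:
        # Only a fully approved change gets an out-of-hours window.
        return self.approvals == APPROVAL_GATES
```

The point of modelling it this way is that no single person can push a change straight to production: each gate has to be cleared, in order, before scheduling is even possible.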
Once all this is approved, the change will be scheduled for an out-of-hours window, depending on which timezone the relevant DC is in.
It is highly unlikely changes of this nature would be made at all of Facebook's datacenters at the same time.
https://www.datacenters.com/facebook-data-center-locations
And certainly not within business hours.
Even assuming all this was done, the person conducting the change would be using the out-of-band connectivity to perform it. This is done so that if they make a mistake, or hit an undocumented bug in the IOS code (it happens), they are not kicked off the device and can still remediate the problem.
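The "don't get kicked off the device" concern is also why some platforms support a commit-then-confirm pattern (e.g. Junos's `commit confirmed`): the change auto-reverts unless the operator confirms within a timeout. A rough Python sketch of the idea, with invented names - not vendor code:

```python
# Sketch of the commit-then-confirm pattern: apply a change, then roll it
# back automatically unless the operator confirms they still have access.
import threading

class GuardedCommit:
    """Apply a change and schedule an automatic rollback."""

    def __init__(self, apply_fn, revert_fn, timeout_s: float = 600.0):
        apply_fn()  # push the change to the device
        # If confirm() never arrives (operator got cut off), roll back.
        self._timer = threading.Timer(timeout_s, revert_fn)
        self._timer.start()

    def confirm(self) -> None:
        """Operator still has access - keep the change."""
        self._timer.cancel()
```

If the change severs the operator's own connectivity, `confirm()` never runs and the device reverts itself - which is exactly why a change that stays broken for hours looks so odd.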
The very fact that engineers could not get into the building to fix the problem once it started means that no one was actually making the change.
The above procedure is typical for a large enterprise with public-facing critical assets - Facebook's policies are likely to be even tighter.
TL;DR - For this 'mis-configuration' to be a thing, all of the checks would have to have missed the potential issue, and the change would have to have been implemented simultaneously in all of Facebook's datacenters where they peer with other internet providers - all by some kind of automated system with no human oversight, during normal business hours.
It simply isn't feasible.
UPDATE: Here is more information on the 'official' version:
https://www.theregister.com/2021/10/06/facebook_outage_explained_in_detail/
Tech anon here:
This is an excellent breakdown. Just a couple points.
OOB connections are still run to DCs via POTS (telephone, dial-up modem) purely as a contingency. The engineer would not have to be physically on-site to make a change via OOB, so the FB statement that they couldn't get in to fix the issue is sus.
The notion that FB would not have an OOB (or similar failsafe) network - likely multiple - operating at each DC is absurd. They surely do. Whatever happened either originated outside FB's internal infrastructure, or from within it but outside their control.
Business hours are different across the globe; FB traffic may actually be at its lowest mid-day Monday. That said, OP is right: something like this would have been rolled out in phases to hit the low-traffic times in each timezone, and to mitigate potential unknown risks.
Even if FB's internal processes had completely broken down, an outage still would not have caused this effect.
It reeks of an attack by a sophisticated (likely state) actor.
If this had been a mistake, the routers would have been announcing their new (incorrect) routes.
Since this was an attack, the routers totally withdrew their routes.
Big difference. The internet is designed to self-heal. The simplicity of this attack screams of an ingenious penetration... and "locking them out of their own buildings" is just ironic icing on the cake.
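The announce-vs-withdraw distinction above is visible right in the BGP wire format (RFC 4271): a single UPDATE message carries withdrawn prefixes and newly announced prefixes (NLRI) in separate fields. A minimal, purely illustrative Python sketch - example prefix, no session handling, and note that a real announcement would also need mandatory path attributes (ORIGIN, AS_PATH, NEXT_HOP):

```python
# Minimal sketch (not a real BGP speaker) of the wire-format difference
# between announcing and withdrawing a prefix in a BGP UPDATE (RFC 4271).

def encode_prefix(prefix: str, length: int) -> bytes:
    """Encode an IPv4 prefix as <bit-length><packed octets> per RFC 4271."""
    octets = bytes(int(o) for o in prefix.split("."))
    return bytes([length]) + octets[: (length + 7) // 8]

def bgp_update(withdrawn=(), path_attrs=b"", nlri=()) -> bytes:
    """Build a BGP UPDATE: header + withdrawn routes + attributes + NLRI."""
    w = b"".join(encode_prefix(p, l) for p, l in withdrawn)
    n = b"".join(encode_prefix(p, l) for p, l in nlri)
    body = (len(w).to_bytes(2, "big") + w
            + len(path_attrs).to_bytes(2, "big") + path_attrs + n)
    # 19-byte header: 16-byte marker, 2-byte length, 1-byte type (2 = UPDATE)
    header = b"\xff" * 16 + (19 + len(body)).to_bytes(2, "big") + b"\x02"
    return header + body

# A withdrawal carries prefixes ONLY in the withdrawn-routes field:
withdraw = bgp_update(withdrawn=[("203.0.113.0", 24)])
# An announcement carries prefixes in the NLRI field instead:
announce = bgp_update(nlri=[("203.0.113.0", 24)])
```

Peers treat the two oppositely: the first removes 203.0.113.0/24 from their tables; the second (once valid path attributes are attached) installs it - which is why a mass withdrawal makes the prefixes simply vanish from the internet rather than point somewhere wrong.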