Calling BS on the excuse for why it went down. Enterprise data centers (colos), especially in CA, are redundant per disaster-recovery best practices. A commercial DC like this will have on-site electrical generation along with a minimum of two power providers. There is also redundancy in the cooling systems.
What I would be interested in is the data archival. There should be full data backups going back a minimum of a year (bare minimum), along with daily, weekly, and monthly sets. All of the bots and related activity, I would think, would have been captured, or at least evidence of that activity would be visible.
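A daily/weekly/monthly rotation like that is usually a grandfather-father-son (GFS) scheme. Here's a minimal sketch of the retention check; the window lengths (14 days of dailies, ~3 months of weeklies, a year of monthlies) are assumptions for illustration, not any company's actual policy:

```python
from datetime import date, timedelta

def gfs_keep(backup_date: date, today: date) -> bool:
    """Grandfather-father-son check: keep dailies for 14 days,
    weeklies (Sunday sets) for ~3 months, monthlies (1st of month)
    for a year. Windows are illustrative assumptions."""
    age = (today - backup_date).days
    if age <= 14:                                 # daily tier
        return True
    if backup_date.weekday() == 6 and age <= 90:  # weekly tier (Sundays)
        return True
    if backup_date.day == 1 and age <= 365:       # monthly tier
        return True
    return False

# Walk back a year and see which sets a GFS policy would still hold.
today = date(2022, 11, 28)
kept = [d for d in (today - timedelta(days=n) for n in range(366))
        if gfs_keep(d, today)]
```

The point being: even with aggressive pruning, a year-old monthly full should still exist somewhere, so historical bot activity shouldn't just vanish.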
My reading comprehension was off with your post; the whole down-for-a-week thing doesn't make sense. From a DR perspective, a site like this (and I'm sure they're internally supported rather than outsourced) would fail over to their warm site (not a temporary site, but a running spare). That I can see, but I cannot see a DC/colo (I'm sure Twitter is running in a third-party colo) in San Jose being down for a week. Very few enterprise-class companies of this caliber run their own DCs; most rent cage space in colo DCs. So a colo being down for a week means multiple other companies would be down too. That doesn't happen, and if it does, it's major news. Colos are redundant and hardened, especially in CA due to the seismic activity: multiple feeds to the demarc, redundant power providers, conditioned power with battery backup, and on-site power generation with a minimum of 30 days of diesel on site to run it. DCs don't go down for a week without making all kinds of downstream noise.
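For scale, a 30-day diesel reserve is a serious amount of fuel. A rough back-of-envelope with assumed numbers (a single 2 MW genset at roughly 140 gal/hr full load, a common rule-of-thumb burn rate; not any specific facility's actuals):

```python
# Back-of-envelope fuel sizing for a 30-day runtime target.
# Assumed: one 2 MW generator burning ~140 gallons of diesel per hour
# at full load (~0.07 gal/hr per kW rule of thumb).
burn_gal_per_hr = 140
hours = 30 * 24                          # 30-day runtime target
fuel_needed = burn_gal_per_hr * hours    # total on-site gallons required
print(fuel_needed)                       # → 100800
```

That's ~100k gallons per generator, which is why colos have large fixed tanks plus refueling contracts; the tanks themselves are a visible, auditable thing, not something you can hand-wave away.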
Twitter's entire datacenter went down last Monday and they didn't inform anyone company-wide until Friday? Calling bullcrap on that.
I had similar thoughts, but maybe they just suck at it.
Any datacenter team I know would be scrambling to get it back up. A week+ down is crazy.
Yeah that's why this makes no sense, even if they have 2-3 other supporting DCs.