Why was Sabisu not affected by the Amazon Web Services S3 outage?

It’s all about the architecture being fault-tolerant by design.

On February 28th, 2017, Amazon Web Services lost S3 in the US-East region, which is basically a huge datacenter in Northern Virginia.

You can read their full explanation here, but it all came down to human error compounded by the scale of S3 usage across the internet. Only AWS has the exact figures, but some estimates put the number of sites affected by this one outage at 150,000.

S3 is a great place to put all kinds of data. Sabisu uses it to store raw files from customers: MS Excel workbooks, highly structured process and sensor data, images, fragments of documents and so on. Right now Sabisu holds many terabytes of structured and unstructured data split across various S3 buckets.
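
Purely as an illustration of that split (this isn’t Sabisu’s code, and the bucket and key names are made up), writing structured and unstructured files to separate buckets with the AWS SDK for Python looks something like this:

```python
# Illustrative only: raw customer files written to separate S3 buckets
# by data type. Bucket names and keys are hypothetical.
import boto3

s3 = boto3.client("s3")

# A structured sensor reading goes to one bucket...
s3.put_object(
    Bucket="example-structured-data",
    Key="plant-a/sensor-42/2017-02-28.csv",
    Body=b"timestamp,value\n2017-02-28T09:00:00Z,42.1\n",
)

# ...while an unstructured document goes to another.
with open("report.xlsx", "rb") as f:
    s3.put_object(
        Bucket="example-unstructured-data",
        Key="customer-x/report.xlsx",
        Body=f,
    )
```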

A good chunk of that is in US-East and was taken out by the S3 outage. So what happened?

Well, nothing much.

If you were a user working with the Sabisu browser client, you’d have been totally unaware. All the cloud Sabisu Units continued to operate, as each runs its own integral storage and redundancy.

Customers with a hybrid-cloud implementation linked to US-East would have found their Appliance aggregating data locally, waiting for S3 to come back. No problem. That’s exactly what it’s designed to do if the connection is lost or connection quality dips.
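
For the curious, here is a minimal sketch of that store-and-forward behaviour. It assumes the boto3 SDK, and the bucket name, spool directory and function names are hypothetical; it illustrates the pattern, not Sabisu’s implementation. Data is always written to local storage first and only flushed to S3 when the endpoint responds:

```python
# Illustrative sketch of local aggregation with deferred S3 upload.
# Bucket name, directory and function names are hypothetical.
import os
import uuid
import boto3
from botocore.exceptions import ClientError, EndpointConnectionError

BUCKET = "example-appliance-data"   # hypothetical bucket
SPOOL_DIR = "/var/spool/appliance"  # local buffer while S3 is unreachable

os.makedirs(SPOOL_DIR, exist_ok=True)
s3 = boto3.client("s3")

def store(payload: bytes) -> None:
    """Always write locally first; durability never depends on S3 being up."""
    with open(os.path.join(SPOOL_DIR, str(uuid.uuid4())), "wb") as f:
        f.write(payload)

def flush_to_s3() -> None:
    """Periodically push spooled files; leave them in place if S3 is down."""
    for name in os.listdir(SPOOL_DIR):
        path = os.path.join(SPOOL_DIR, name)
        try:
            s3.upload_file(path, BUCKET, name)
            os.remove(path)  # only delete once the upload has succeeded
        except (EndpointConnectionError, ClientError):
            return  # S3 unavailable or rejecting writes: try again next cycle
```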

Of course, customers running an On-Premise implementation would have seen no effect at all. Their servers are only lightly tethered to Sabisu Central and, as with Units, each has its own redundancy arrangements.

If you were a customer with a Unit in US-East, publishing an MS Excel document through Go, then perhaps you’d have seen a slight delay as Go tried to write to S3.
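
That kind of delay is what you’d expect from a write that retries with backoff rather than failing outright. A rough sketch of the pattern, again with hypothetical names and timings rather than Sabisu’s actual code:

```python
# Illustrative only: a write that retries with exponential backoff while S3
# is slow or unavailable, which surfaces to the user as a short delay.
import time
import boto3
from botocore.exceptions import ClientError, EndpointConnectionError

s3 = boto3.client("s3")

def publish_workbook(path: str, bucket: str, key: str, attempts: int = 5) -> None:
    for attempt in range(attempts):
        try:
            s3.upload_file(path, bucket, key)
            return
        except (EndpointConnectionError, ClientError):
            # Back off 1s, 2s, 4s, 8s... before giving up entirely.
            time.sleep(2 ** attempt)
    raise RuntimeError("S3 still unavailable after %d attempts" % attempts)

publish_workbook("monthly-report.xlsx", "example-go-publish", "reports/monthly-report.xlsx")
```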

We monitored Sabisu performance throughout the outage and there were no problems to report; indeed, none were reported.

Contact Us

We’re always interested in hearing from you, so if you have any comments or suggestions, feel free to get in touch.
