Complete network hardware outage and storage issues
Incident Report for Civo
This incident has been resolved.
Posted Sep 28, 2020 - 10:41 BST
We've replaced the faulty network hardware and the network is back up and running. The storage platform had some issues, but we've rebooted almost all instances, which should resolve most of them. If you have an instance you can't SSH to, please try rebooting it (up to three times, if necessary) via Civo's API or control panel. If you can still SSH in but the root filesystem is read-only, you can either try rebooting from SSH (again, up to three times) or run `sudo fsck /dev/vda1`, which works on a read-only filesystem. We'll be keeping an eye on everything for a while, but the hardware issues are resolved and the storage cluster is behaving normally again.
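As a quick check before rebooting, the read-only condition can be confirmed from inside the guest. A minimal sketch, assuming a Linux instance with `/proc/mounts` available and `/dev/vda1` as the root device (as named above):

```shell
#!/bin/sh
# Check whether the root filesystem is mounted read-only,
# and print the suggested repair command if it is.
root_opts=$(awk '$2 == "/" { print $4; exit }' /proc/mounts)
case "$root_opts" in
  ro|ro,*)
    echo "root filesystem is read-only"
    echo "suggested repair: sudo fsck /dev/vda1"
    ;;
  *)
    echo "root filesystem is read-write"
    ;;
esac
```

On a healthy instance this reports a read-write root; on an affected one it points you at the `fsck` step before you fall back to rebooting.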
Posted Sep 06, 2020 - 08:59 BST
We are continuing to work on a fix for the storage issue.
Posted Sep 06, 2020 - 06:37 BST
We have fixed the network issue, but have noticed that a storage issue remains.
Posted Sep 06, 2020 - 06:37 BST
We have managed to resolve the network issue and we are bringing up instances as fast as we can. Unfortunately, the nature of the issue has caused problems with some of the storage on our platform. While the storage platform is in service, some instances appear to have gone into a read-only mode (and it is possible that some data is unrecoverable).

If you have an instance in this state, a few reboots may fix it, and we're working hard on getting the Civo API and control panel back up so that you can do that even if you can't SSH in to check for a read-only filesystem. If the affected instance is a node within a k3s cluster, it may be easier to recycle that node. If it's a master node and a few reboots don't fix it, then sadly your cluster will need rebuilding completely. If you are an IaaS customer in this state and a few reboots don't fix it, unfortunately you will need to delete and rebuild your instance.
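For a k3s worker, recycling generally means draining the node, deleting its Kubernetes node object, and then recreating the underlying instance. A dry-run sketch of the cluster-side steps (the node name `worker-1` is a placeholder, and the `kubectl` commands assume you can still reach a healthy master):

```shell
#!/bin/sh
# Dry-run sketch: print the commands to recycle an unhealthy k3s worker.
# Each step is echoed rather than executed; drop the echo to run for real.
recycle_node() {
  node="$1"
  # Evict workloads from the broken node
  echo "kubectl drain $node --ignore-daemonsets --delete-emptydir-data"
  # Remove the node object so a replacement can join cleanly
  echo "kubectl delete node $node"
}
recycle_node worker-1
```

After these steps, delete and recreate the underlying instance via the Civo control panel or API and let the replacement rejoin the cluster.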

However, we would recommend trying to reboot an instance a few times first. If you have a snapshot of your instance, we would recommend rebuilding from it.

Once again we apologise for the inconvenience caused.
Posted Sep 06, 2020 - 05:11 BST
We are still experiencing a major network issue with the platform and we are trying to recover the networking as fast as we can. We apologise for the inconvenience caused and will update this thread when we have more information.
Posted Sep 06, 2020 - 02:13 BST
This is affecting all incoming network traffic, not just our cluster, so unfortunately all customer instances are likely to be affected. We're looking into it.
Posted Sep 05, 2020 - 17:55 BST
A number of customer instances/cluster nodes may also be inaccessible at the moment. We are investigating the cause and which instances are affected.
Posted Sep 05, 2020 - 17:53 BST
The Civo website and the Civo API are unresponsive at the moment. We are investigating the reason and will update as soon as we can.
Posted Sep 05, 2020 - 17:37 BST
This incident affected: API, Neutron, and Storage.