Troubleshooting

Where is the report from the cioctl report command?

The output of the cioctl report command is saved as report.txz in the /var/lib/storidge directory.
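For example, you can generate a report and verify its location with:

cioctl report
ls -l /var/lib/storidge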

Please forward the report to support@storidge.com with details of the error you are troubleshooting.

Insufficient cluster capacity available to create this vdisk

Error message: "Fail: Add vd: Insufficient cluster capacity available to create this vdisk. Use smaller size"

If you are running a Storidge cluster on virtual servers or VMs, this error comes from a data collection process that creates twenty volumes and runs fio to collect performance data for Storidge's QoS feature.

The Storidge software normally runs this data collection only on physical servers. However, the data collection can also be triggered on virtual servers or VMs that are not yet on the supported list.

Please run the cioctl report command and forward the report in the /var/lib/storidge directory to support@storidge.com. The report command will collect configuration information and logs, including information on the virtual server. When forwarding the report, please request that the virtual server be added to the supported list.

cio node ls shows node in maintenance mode and missing the node name. How do I recover the node?

This situation is likely the result of a node being cordoned or shut down for maintenance, after which the cluster was rebooted or power cycled.

After the cluster reboots, the node that was previously in maintenance mode will stay in maintenance mode. The output of the cio node ls command may look something like this:

root@u1:~# cio node ls
NODENAME             IP                NODE_ID    ROLE       STATUS      VERSION
                     192.168.3.95      d12a81bd   sds        maintenance
u3                   192.168.3.29      7517e436   backup1    normal      V1.0.0-2986
u4                   192.168.3.91      91a78c14   backup2    normal      V1.0.0-2986
u1                   192.168.3.165     a11314f0   storage    normal      V1.0.0-2986
u5                   192.168.3.160     888a7dd3   storage    normal      V1.0.0-2986

To restore the cordoned node, you can:

  1. Log in to the cordoned node and run cioctl node uncordon to rejoin it to the cluster

  2. Uncordon the node by running cioctl node uncordon <IP address> from any node. In the example above, run cioctl node uncordon 192.168.3.95 (see the example after this list). The Storidge software identifies nodes by IP address and does not depend on identifiers that can be changed by users, e.g. hostname.

  3. Reset or power cycle the cordoned node and it will automatically rejoin the cluster after rebooting
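For example, using the cluster above, uncordon the node from any healthy node and then verify that it has rejoined:

cioctl node uncordon 192.168.3.95
cio node ls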

dockerd: msg="Node 085d698b3d2e/10.0.2.235, added to failed nodes list"

Error message: dockerd: time="2019-10-19T03:18:22.862011422Z" level=info msg="Node 085d698b3d2e/10.0.2.235, added to failed nodes list"

The error message indicates that internode cluster traffic is being interrupted. This could be the result of a network interface failure, or of network bandwidth being saturated by too much incoming data. Either condition impacts the ability of the Storidge cluster to maintain state.

Suggestions are:

  1. Monitor bandwidth usage for each instance to confirm whether network bandwidth is being exhausted (see the sketch after this list). Entries in syslog indicating nodes added to the failed list, iSCSI connection issues, or missing heartbeats are also signs of network congestion.

  2. If there is only one network interface per instance, it will be carrying incoming data streams, orchestrator internode traffic, and Storidge data traffic.

For use cases handling a lot of front-end data, consider splitting the storage traffic onto a separate network, e.g. use instances with two network interfaces: assign one interface to front-end traffic and the second to the storage network.

When creating the Storidge cluster, you can specify which network interface to use with the --ip flag, e.g. run cioctl create --ip 10.0.1.51. When you run the cioctl node join command on the storage nodes, it will suggest an IP address from the same subnet.

  3. Verify whether incoming data is going to just one node. Consider approaches such as a load balancer to spread incoming data across multiple nodes.

  4. Calculate the amount of network bandwidth that will be generated by your use case. Verify that the network interface is capable of sustaining the data throughput. For example, a 10GigE interface can sustain about 700MB/s.

  5. In calculations for data throughput, note that for every 100MB/s of incoming data, an additional multiple of that throughput is consumed by data replication. For 2-copy volumes, 100MB/s will be written to the local node and another 100MB/s will go through the network interface to other nodes as replicated data, i.e. a 100MB/s incoming data stream results in 200MB/s of used network bandwidth.
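To make items 1, 4 and 5 concrete, below is a minimal sizing sketch. It assumes 2-copy volumes, the ~700MB/s figure above, and a hypothetical 300MB/s ingest rate, and uses sysstat's sar utility (not part of Storidge) to watch interface throughput:

# watch per-interface throughput, refreshed every second (requires the sysstat package)
sar -n DEV 1

# hypothetical sizing for 2-copy volumes on a single 10GigE (~700MB/s) interface:
#   incoming data stream:        300MB/s
#   replication to other nodes:  300MB/s  (the second copy crosses the network)
#   total network bandwidth:     600MB/s, approaching the ~700MB/s limit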

dockerd: level=warning msg="failed to create proxy for port 9999: listen tcp :9999: bind: address already in use"

Error message: dockerd: time="2019-10-10T17:35:59.961861284Z" level=warning msg="failed to create proxy for port 9999: listen tcp :9999: bind: address already in use"

The error message indicates a network port conflict between services. The example above indicates that port number 9999 is being used by more than one service on the node.

Verify there are no conflicts with the port numbers used by the Storidge cluster.
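To find which processes are bound to the conflicting port, one option (assuming the iproute2 ss utility is installed) is:

ss -tlnp | grep ':9999'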

"iscsid: Kernel reported iSCSI connection 2:0 error"

Error message: iscsid: Kernel reported iSCSI connection 2:0 error (1020 - ISCSI_ERR_TCP_CONN_CLOSE: TCP connection closed)

The error message indicates an iSCSI connectivity issue between cluster nodes. This could be the result of conflicts such as duplicate iSCSI initiator names, or of other networking issues.

For a multi-node cluster to function correctly, the iSCSI initiator name on each node must be unique. Display the iSCSI initiator name on each node by running cat /etc/iscsi/initiatorname.iscsi, and confirm they are different.
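For example, a quick check across the nodes of the earlier example, assuming passwordless SSH and the hostnames from the cio node ls output above:

for n in u1 u3 u4 u5; do ssh $n cat /etc/iscsi/initiatorname.iscsi; done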

If the iSCSI initiator name is not unique, you can change it with:

echo "InitiatorName=`/sbin/iscsi-iname`" > /etc/iscsi/initiatorname.iscsi

Since the iSCSI initiator name is used to set up connections to iSCSI targets during cluster initialization, it must be made unique before running cioctl create to start a cluster.

"Fail: node is already a member of a multi-node cluster"

Error message: Fail: node is already a member of a multi-node cluster

This error message in syslog indicates an attempt to add a node that is already a member of the cluster. Check your script or playbook to verify that the cioctl join command is being issued to a storage (worker) node and not the primary (sds) node; the cio node ls command shown below lists each node's role.

This error can result in the related messages below, which indicate that the Storidge CIO kernel modules were incorrectly unloaded, breaking cluster initialization.

[DFS] dfs_exit:18218:dfs module unloaded

[VD ] vdisk_exit:2916:vd module unloaded
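Before issuing cioctl join from a script or playbook, you can confirm current membership and each node's role with the cio node ls command shown earlier:

cio node ls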

Get http://172.23.8.104:8282/metrics: dial tcp 172.23.8.104:8282: connect: connection refused

Error message: connect: connection refused

Getting a "Connection refused" error on requests to an API endpoint likely means that the API server on that node is not running.

Run ps aux | grep cio-api to confirm. If cio-api is not listed, run cio-api & on the node to restart the API server.
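A minimal sketch of the check, restart, and verification; the IP address and port are taken from the error message above and will differ on your cluster:

ps aux | grep cio-api
cio-api &
curl http://172.23.8.104:8282/metrics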

Also run cioctl report to generate a cluster report which will be saved to file /var/lib/storidge/report.txz. Please forward the cluster report to support@storidge.com with details of the error for analysis.
