NDS-4600 - All That Is Known
This document, which covers what we know of the NDS-4600, was written with help from /u/Offspring.
NDS-4600 Changing Zoning
Overview
Thanks to a note and a good find by /u/DerUlmer, there is now a known way to change the zoning on the NDS-4600. See the "Changing the zoning" section of this post:
https://blog.carlesmateo.com/2019/06/07/dealing-with-performance-degradation-on-zfs-draid-rebuilds-when-migrating-from-a-single-processor-to-a-multiprocessor-platform/
Steps
Note: I've not tested this (yet).
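For reference, here is a minimal sketch of how one might script against the shelf's management CLI, assuming (as per the admin-access notes further down) that it speaks plain telnet-style text on TCP/23 with no login. The IP address and the "zone show" command are placeholders for illustration only; the real zoning commands are the ones described in the linked post, and I've not run this against live hardware.

```python
#!/usr/bin/env python3
"""Minimal sketch: talk to the NDS-4600 management card over its telnet-style
CLI (TCP/23, no login per the notes further down). The zoning command below is
a PLACEHOLDER -- the real command names come from the linked blog post and
none of this has been tested against live hardware."""

import socket

SHELF_MGMT_IP = "192.168.1.50"   # assumption: whatever IP your mgmt card has
MGMT_PORT = 23                   # basic telnet, no login (see notes below)

def send_cli(ip: str, port: int, command: str, timeout: float = 5.0) -> str:
    """Open a raw TCP connection, send one CLI line, return whatever comes back."""
    with socket.create_connection((ip, port), timeout=timeout) as sock:
        sock.sendall(command.encode("ascii") + b"\r\n")
        sock.settimeout(timeout)
        chunks = []
        try:
            while True:
                data = sock.recv(4096)
                if not data:
                    break
                chunks.append(data)
        except socket.timeout:
            pass  # no more output within the timeout window
    return b"".join(chunks).decode("ascii", errors="replace")

if __name__ == "__main__":
    # "zone show" is a hypothetical command used purely for illustration;
    # substitute the actual zoning commands from the linked post.
    print(send_cli(SHELF_MGMT_IP, MGMT_PORT, "zone show"))
```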
The One Problem With My Racks & The NDS-4600-JD-05
My zero-U PDUs get covered up because my rack isn't particularly deep.
NDS-4600 - SATA Drive Failures In Linux
One issue I've recently run into with a failed SATA drive in one of my NDS-4600 units is that Linux frequently tries to recover the drive by resetting the bus. This takes out a few other disks in the group with it. The resulting IO timeouts cause problems for my Ceph OSDs using those disks.
It should be noted that only some types of disk failure trigger this: the Linux kernel only issues host bus resets in certain cases (I think), and I suspect the errors on the other disks are caused by the failing disk itself.
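When this happens it helps to know in advance which disks share a SCSI host, and therefore which Ceph OSDs a reset can take out together. A minimal sketch, written from memory of the sysfs layout rather than tested on an NDS-4600, that groups block devices by the SCSI host they hang off:

```python
#!/usr/bin/env python3
"""Minimal sketch: group block devices by their SCSI host so you can see which
other disks (and therefore which Ceph OSDs) share a path with a failing drive.
When the kernel resets that host/bus, everything in the same group is at risk
of IO timeouts. Verify the sysfs paths on your own box."""

import os
import re
from collections import defaultdict

def disks_by_scsi_host() -> dict[str, list[str]]:
    """Map 'hostN' -> list of block devices reached through that SCSI host."""
    groups: defaultdict[str, list[str]] = defaultdict(list)
    for dev in sorted(os.listdir("/sys/block")):
        if not dev.startswith("sd"):
            continue  # only care about SCSI/SATA disks here
        # The resolved sysfs path contains the SCSI host the disk hangs off,
        # e.g. .../host7/port-7:0/expander-7:0/.../block/sdq
        path = os.path.realpath(f"/sys/block/{dev}")
        match = re.search(r"/(host\d+)/", path)
        if match:
            groups[match.group(1)].append(dev)
    return dict(groups)

if __name__ == "__main__":
    for host, disks in disks_by_scsi_host().items():
        print(f"{host}: {' '.join(disks)}")
```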
23TiB On CephFS & Growing
Original post on Reddit
Hardware
Previously I posted about the used 60-bay DAS units I recently acquired and racked. Since then I've figured out the basics of using them and have them up and working.
Admin Access To NEWISYS NDS-4600-JD
- One IP per mgmt card
- TCP/23
  - Basic telnet
  - No login
  - Can control various backplane options
  - Working on finding out how to control backplane mode
- TCP/1138
  - Not TLS (openssl s_client causes it to restart)
  - Telnet??
  - Need to fuzz this (see the probe sketch below)
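A minimal probe sketch for the two known ports, assuming a hypothetical management IP; it just grabs whatever banner each service volunteers. Given that openssl s_client alone made the TCP/1138 service restart, treat even this gentle poking as potentially disruptive.

```python
#!/usr/bin/env python3
"""Minimal sketch: poke the mgmt card's two known TCP ports and dump whatever
banner (if any) each one sends. Purely exploratory -- the TCP/1138 service is
still unidentified and has restarted under openssl s_client, so be gentle.
The IP address below is an assumption."""

import socket

MGMT_IP = "192.168.1.50"   # assumption: your mgmt card's IP

def grab_banner(ip: str, port: int, timeout: float = 3.0) -> bytes:
    """Connect, wait briefly, and return whatever the service volunteers."""
    try:
        with socket.create_connection((ip, port), timeout=timeout) as sock:
            sock.settimeout(timeout)
            try:
                return sock.recv(4096)
            except socket.timeout:
                return b""          # port open but silent
    except OSError as exc:
        return f"<connect failed: {exc}>".encode()

if __name__ == "__main__":
    for port in (23, 1138):
        print(f"TCP/{port}: {grab_banner(MGMT_IP, port)!r}")
```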
Admin Access To NEWISYS NDS-4600-JD
Guess what has telnet enabled and something listening on TCP/1138. Telnet is obvious, but I'm still working out what TCP/1138 is.
Two NDS-4600-JD-05
These two NDS-4600-JD-05 units each have space for 60 3.5" drives and four 6 Gbps SAS ports on each of their two controllers. The plan is to connect two R610s (eventually R620s) to each unit, with the DAS partitioned so that each R610/R620 gets 30 disks (well, 15 disks on each of a pair of redundant 6 Gbps SAS links). Each of the 30 disks will run one Ceph OSD. Half of the 30 disks will be 8TB and half will be either 3TB or 2TB disks.
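For rough sizing, a back-of-the-envelope calculation of raw capacity per host under that split. Raw only: Ceph replication or erasure-coding overhead is not accounted for, and the exact mix of 2TB vs 3TB disks is still open.

```python
#!/usr/bin/env python3
"""Back-of-the-envelope raw capacity per R610/R620 under the split above:
30 disks per host, half 8 TB and half 2-3 TB. Raw capacity only -- Ceph
replication/EC overhead is not included."""

TB = 10**12   # drive vendors advertise decimal terabytes
TIB = 2**40

def raw_bytes(big_disks: int = 15, big_tb: int = 8,
              small_disks: int = 15, small_tb: int = 3) -> int:
    """Total raw bytes for the given mix of large and small drives."""
    return big_disks * big_tb * TB + small_disks * small_tb * TB

if __name__ == "__main__":
    for small_tb in (2, 3):
        total = raw_bytes(small_tb=small_tb)
        print(f"15x8TB + 15x{small_tb}TB = {total / TIB:.1f} TiB raw per host")
```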