Cloud Computing
The real problem with cloud computing isn't what you'd think: It's the inability to use important troubleshooting tools like percussive maintenance.
Need to run a reweight by utilization but can't with octopus? Try this...
ceph osd reweight-by-utilization 0.5 -- 150
Up until now I've been cooling the server room, which is thermally (and physically) isolated from the rest of my house, with a large mini-split. However, the mini-split has reached its limits; the room now has 9-10 kW of gear in it.
Installing Bacula on CentOS 7 using the Community RPMs (from Bacula.org) for 9.6.7 and other versions is a bit annoying to get Postgres...
yum install bacula-postgresql --exclude=bacula-mysql --exclude=mariadb
Errors you might get otherwise...
A few of my Ceph nodes with 60-ish disks each were experiencing frequent reboots. It turns out kernel.nmi_watchdog was rebooting them because the disks were stalling the kernel under very high load. Turning it off via `echo "kernel.nmi_watchdog=0" > /etc/sysctl.d/99-watchdog.conf` solved the problem, although I suspect there are better ways to tune the NMI watchdog to fix this. I'm being lazy.
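Note that a file in /etc/sysctl.d/ only takes effect at boot or when the sysctl files are reloaded, so a sketch of applying it both immediately and persistently looks like this:

```shell
# Disable the NMI watchdog right now (runtime change, lost on reboot):
sysctl -w kernel.nmi_watchdog=0

# Persist it across reboots:
echo "kernel.nmi_watchdog=0" > /etc/sysctl.d/99-watchdog.conf

# Re-read all sysctl.d files to confirm the persistent setting parses cleanly:
sysctl --system
```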
Hosts in my cluster all have two 10GbE links to a switch, bonded with LACP. That gives them an aggregate of 20Gb/s, though any single flow is still limited to 10Gb/s. But really, even 5Gb/s would be enough for most hosts. The kernel in most boxes, combined with drivers and such, isn't going to be able to push 20Gb/s, much less 40Gb/s, so faster isn't worth it.
Note: Don't bother with IPoIB. You're better off with 2x 10GbE via LACP. Although if the choice is between 40Gb/s InfiniBand running IPoIB and a couple of 1GbE links, go with IPoIB.
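For reference, a minimal sketch of that kind of LACP bond on a CentOS 7-style box (the interface names and IP address are placeholders; layer3+4 hashing helps spread multiple flows across both links, since a single flow still rides one 10GbE member):

```shell
# Bond master: 802.3ad (LACP) with layer3+4 transmit hashing.
cat > /etc/sysconfig/network-scripts/ifcfg-bond0 <<'EOF'
DEVICE=bond0
TYPE=Bond
BONDING_MASTER=yes
BONDING_OPTS="mode=802.3ad miimon=100 xmit_hash_policy=layer3+4"
BOOTPROTO=static
IPADDR=10.0.0.10
PREFIX=24
ONBOOT=yes
EOF

# One member port (repeat for the second 10GbE port):
cat > /etc/sysconfig/network-scripts/ifcfg-ens1f0 <<'EOF'
DEVICE=ens1f0
TYPE=Ethernet
MASTER=bond0
SLAVE=yes
ONBOOT=yes
EOF
```

The switch side needs a matching LACP port-channel on those two ports, or the bond will fall back to a single active link.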
Why these drives? They're the main data drives I've had and used for years. The only ones I'm going to remove soon are a subset of the 2TB drives with high spin times. Some of them have more than 8.5 years of spin time. I'll probably remove any disk with more than 6 years of spin time as a preventive measure.
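Spin time is SMART attribute 9 (Power_On_Hours), so finding the removal candidates can be sketched roughly like this. The device list and the awk column are assumptions; check your smartctl output format before trusting it:

```shell
# Flag disks with more than ~6 years of spin time as removal candidates.
MAX_HOURS=$((6 * 365 * 24))   # ~6 years = 52560 hours

for dev in /dev/sd{a..f}; do              # illustrative device list
    # RAW_VALUE is typically the 10th column of `smartctl -A` attribute rows.
    hours=$(smartctl -A "$dev" | awk '/Power_On_Hours/ {print $10}')
    if [ -n "$hours" ] && [ "$hours" -gt "$MAX_HOURS" ]; then
        echo "$dev: $hours hours of spin time, candidate for removal"
    fi
done
```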
One GREAT option for improving OSD performance with spinning disks, especially slow ones, is to put the BlueStore WAL (journal) on a redundant array of SSDs. If you've got SSD space to spare you could even put the RocksDB on SSD too, but that needs a LOT more space. The WAL only needs a couple of GiB per OSD; RocksDB needs far more.
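Creating an OSD laid out that way can be sketched with ceph-volume; the volume group and LV names here are made up, and the --block.db line is optional if you only have room for the WAL:

```shell
# Spinner holds the data; WAL (and optionally RocksDB) land on SSD-backed LVs.
ceph-volume lvm create --bluestore \
    --data /dev/sdb \
    --block.wal ssd-vg/wal-sdb \
    --block.db  ssd-vg/db-sdb
```

If the SSD array backing those LVs dies, every OSD with a WAL/DB on it dies with it, hence the emphasis on the SSDs being redundant.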
A couple of months back I changed the value of "osd_memory_target" for all of my OSDs from 4GiB to 1.5GiB. That change has stopped all RAM-related issues on my cluster. While I suspect (but can't prove) a small performance drop, it's well worth it in my case.
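On releases with the centralized config database, that change can be applied cluster-wide in one shot; 1.5GiB is 1610612736 bytes (osd.0 below is just an example):

```shell
# Lower the memory target for every OSD via the monitors' config store.
ceph config set osd osd_memory_target 1610612736   # 1.5 GiB in bytes

# Spot-check what a given OSD actually picked up:
ceph config get osd.0 osd_memory_target
```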