For logical recovery, I presume R-TT is the winner...
Any disagreements amongst sys-dev labs?
- Which product(s) do you feel are best? Which program identifies a RAID layout best?
(...or is this just a manual skill that should be cultivated?)
Cloning a healthy drive (OS-independent...)
- I.e., ddrescue seems best for -free- if you're comfortable with Linux.
- For cloning Win10 from spinning disk to SSD (even if they differ in size, so long as the GB used fit)?
- Is there a scenario in which HDDSuperClone wins?
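For reference, here's the minimal two-pass ddrescue workflow I have in mind; a sketch, not a prescription -- the device names are placeholders, so verify source and target with lsblk before running anything destructive:

```shell
# Clone /dev/sdX (source) onto /dev/sdY (target), keeping a map file so the
# run can be interrupted and resumed. Placeholder device names!

# Pass 1: grab everything readable quickly, skipping bad areas (-n = no scrape,
# -f = force writing to a block device).
ddrescue -f -n /dev/sdX /dev/sdY rescue.map

# Pass 2: go back for the bad areas with direct disk access (-d) and 3 retries.
ddrescue -f -d -r3 /dev/sdX /dev/sdY rescue.map
```

The map file is what makes ddrescue shine over plain dd: pass 2 only touches the sectors pass 1 couldn't read.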
Of course I use hardware imaging equipment -- but when duplicating many RAID drives, it'd still be nice to clone 8 drives simultaneously on two computers, in under a day, for less than the cost of a single DDI4. So we tried something out, and got it done with inexpensive equipment...
Cloning 32TB in 17 hrs on 2 machines that cost $300 ea... was cool!
- Dual, hot swappable PSUs
- 8 LFF slots
- Each slot compatible with BOTH SATA & SAS drives.
Hardware I used, what I still want, and how I'd shop for these items: Below is the HW I have, what I'm setting up, what I need to do with it, and the other things I want...
4x Dell T320 -- STANDARD CONFIGURATION
• ESXi, with VMs for Win10, Win7, and Ubuntu; R-Studio, UFS Pro, & ...
• 8 LFF drive slots (eyeing the T830 / T840 as prices come down, with 18 LFF, dual CPUs, + more PCIe slots)
• Removed the Dell PERC (RAID-only) controller
• Installed a standard HBA
• SFP+ with 10GbE & SFP28 (peer to peer until SFP28 switches become affordable).
• QSFP+ 40GbE switches operate at a noise level (dB) that'll drive you insane -- so again, peer to peer!!
• USB 3.1 Gen2 (when local transfers are possible/necessary)
~ UNIT cost, CONFIGURED: < $1,000 ea, excluding SW licensing...
** TASKS REMAINING **
• Getting USB 3.1 Gen2 working in all OSes
• Getting SFP+ (pref. SFP28) working in all OSes
• Finding the optimal PCIe SFP28 manufacturer so all the networking is reliable and OS-compatible.
**I'm new to SFP+. Though I have assistance, I prefer to minimize dependency.**
Remaining T320:
• 8x LFF HDD slots (compatible with both SATA & SAS)
-- used with 8x SAS drives (reliability)
• SAS offers higher reliability, and the versatility lets you get the BEST deals.
• SAS drives are ideal as "processing targets": logical processing and safely holding data awaiting the customer's container.
• They are made to higher standards -- for less.
• Also remain cognizant that NVMe drives consume 4x the PCIe lanes -- yet SAS3 is 2x SATA3 bandwidth.
ADVANTAGES
• The setup can simultaneously clone up to 4 HDDs (2.5"/3.5") PER machine (via ddrescue etc.)
• ESXi VMs running Linux handle the imaging jobs -- concurrently imaging via DOS for the DDI4, AND Atola...
• ...while processing RAID analysis in one window and performing a logical recovery in another.
• Physical "target" space may be moot; with 2x 10GbE ports and 2x USB 3.1 Gen2, transfer speed is a non-issue.
WITH MEMORY ISOLATION, EXTREME CASES ARE POSSIBLE
Copying up to 4 drives per machine = 8x 4TB (full images) via ddrescue in 4 CLI windows each:
32TB copied within 17 hours -- and could have done 48TB.
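For context, those numbers work out to a very ordinary per-drive rate, which is exactly why this works on cheap hosts. A quick back-of-envelope check (my arithmetic, decimal TB assumed; integer shell math rounds down):

```shell
# Sanity-check the claimed rate: 8 drives x 4 TB = 32 TB in 17 hours.
total_mb=$(( 32 * 1000000 ))      # 32 TB expressed in MB (decimal units)
seconds=$(( 17 * 3600 ))          # 17 hours in seconds
agg=$(( total_mb / seconds ))     # aggregate throughput, MB/s
per_drive=$(( agg / 8 ))          # average per-drive rate, MB/s
echo "aggregate: ${agg} MB/s, per drive: ${per_drive} MB/s"
# → aggregate: 522 MB/s, per drive: 65 MB/s
```

~65 MB/s per drive is well within what any healthy 3.5" HDD sustains sequentially, so the drives (not the $300 host) are the bottleneck -- the 48TB figure is plausible for the same reason.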
I see NO reason why not -- it's perhaps PLAUSIBLE to run additional tasks in other VMs simultaneously:
- DDI4 via its DOS VM, with dedicated access to its card (PCIe-slot dependent).
- Atola (this one is almost obvious -- so long as no conflicting versions of .NET are installed).
- Maybe even... PC-3000 (with sufficient & INSULATED RAM) & an available PCIe slot.
- Logical image-processing, based on RAM & CPU resources, of course.
WHY do this?
• Some people are constrained by the cost of imaging / recovery systems.
• Some people are constrained by access to quality help, knowledge, or advertising.
• Some people are bursting at the seams, with insufficient ROOM for equipment, etc.
THERE ARE ALWAYS SCALABILITY CONSTRAINTS IN BUSINESS.
- Where land is cheap, it's clients with the budget.
- Where rent is high, it's your competition, reputation, success rate, & economy of space.
- I like the idea of a computer's resources being the limiting function, not something arbitrary.
- Additionally, I regard enterprise-grade construction that's cheap and versatile (SAS/SATA) as great!
- The dual-CPU configuration's even better -- you'd have many more PCIe slots...
- Even subtle things like iDRAC features.
- General build quality (cables) and the rarity of crashes.
The 4th Dell T320 is a ZFS box -- RAIDZ2
• Set up with ESXi to run FreeNAS (unless I see good reason to switch to Linux + ZFS)...
• 96GB DDR3
• 8x 10TB IBM-HGST (NEW SAS drives)
• Formatted: ~55TB of usable space (this could be ~80TB after compression)
• Compression enabled; auto-skips when ineffective & maximizes where effective. So far, so good.
• Protected by double parity.
• SFP28 PCIe card.
• Will buy an SFP28 / QSFP100 switch when I can afford one; til then, peer to peer it is.
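If I do switch from FreeNAS to plain Linux + ZFS, the equivalent pool is only a couple of commands. A sketch, assuming the eight SAS drives enumerate as sdb..sdi ("tank" and the device names are placeholders -- use /dev/disk/by-id/ paths in practice so the pool survives device renumbering):

```shell
# RAIDZ2 across 8 drives: any 2 can fail without data loss. Placeholder names!
zpool create tank raidz2 sdb sdc sdd sde sdf sdg sdh sdi

# lz4 bails out early on incompressible blocks, which is the
# "auto-skips when ineffective" behavior described above.
zfs set compression=lz4 tank

zpool list tank   # 80TB raw; ~60TB data capacity before ZFS overhead
```

8x 10TB minus 2 parity drives leaves 6x 10TB = 60TB of data capacity, and ZFS metadata/overhead brings usable space into the mid-50s TB -- consistent with the ~55TB formatted figure above.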
More on this subject elsewhere, maybe in the Lounge -- as this has already drifted afar.
If anyone gives a darn, lemme know, and I'll follow up on what works. I know computers are cheap. But in some areas you don't have unlimited space for stuff... and in scalability, to grow means to assume substantially more liability (a 5-yr lease for twice as much space, more employees, etc.), all because you were a little too big for a place. This is what makes efficiency so important in high-rent areas...