All times are UTC - 5 hours [ DST ]




 Post subject: more on hardware setups, things to think about: efficiency!
PostPosted: November 11th, 2018, 14:20 

Joined: March 20th, 2018, 7:55
Posts: 65
Location: Los Angeles
First I was cloning one drive per machine; then 2 per; and then I realized some of it still wasn't efficient.
"Screw this! Let's do what we can as efficiently as possible." So we cloned 8 drives at once -- on just two units. NO. Slowdown. (Truly.)

Fav app for cloning a healthy drive, with an OS that works on all computers?
- I.e., ddrescue seems best for -free- if you're comfortable with Linux.
- For cloning, say, Win10 from spinning disk to SSD (even if they differ in size, as long as the used GB fit)?
- Is there a scenario in which HDDSuperClone wins?
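For what it's worth, here's a rough sketch of how I'd script the ddrescue route as a two-pass clone (a hypothetical helper, not anyone's official tooling; the device paths are placeholders -- verify everything against your own setup):

```python
# Sketch: build ddrescue command lines for a two-pass clone.
# The flags are standard GNU ddrescue options:
#   -f  force overwrite of an output device
#   -n  skip the scraping phase on the first (fast) pass
#   -d  use direct disc access on the retry pass
#   -r3 retry bad areas up to 3 times
# /dev/sdX and /dev/sdY below are placeholders, not a recommendation.

def ddrescue_passes(src, dst, mapfile):
    """Return the command lines for a fast pass plus a retry pass."""
    fast = ["ddrescue", "-f", "-n", src, dst, mapfile]
    retry = ["ddrescue", "-f", "-d", "-r3", src, dst, mapfile]
    return fast, retry

fast, retry = ddrescue_passes("/dev/sdX", "/dev/sdY", "clone.map")
print(" ".join(fast))   # ddrescue -f -n /dev/sdX /dev/sdY clone.map
print(" ".join(retry))  # ddrescue -f -d -r3 /dev/sdX /dev/sdY clone.map
```

The fast `-n` pass grabs everything readable quickly; the `-d -r3` pass goes back for the stubborn sectors, resuming from the same mapfile.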

I use hardware imaging equipment -- but when duplicating many RAID drives, it's still nice to be able to clone 8 drives simultaneously on two computers in under a day, for less than the cost of a single DDI4.

Essentially: cloning 32TB in 17 hrs on 2 machines that, in that state, cost me $300 each... with:
- Dual, hot-swappable PSUs
- 8 LFF slots
- Each slot compatible with BOTH: SATA & SAS drives.
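A quick back-of-the-envelope check on that figure (my own arithmetic, assuming 8x 4TB drives fully imaged across the 2 machines):

```python
# Sanity-check the per-drive rate implied by 32 TB over 17 hours on 8 drives.
TB = 10**12                # decimal terabytes, as drive vendors count them
total_bytes = 32 * TB
drives = 8
hours = 17

per_drive_bytes = total_bytes // drives            # 4 TB each
rate_mb_s = per_drive_bytes / (hours * 3600) / 1e6
print(round(rate_mb_s))    # ~65 MB/s sustained per drive, in parallel
```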


Hardware I used -- what I still want, and how I'd shop to buy these items:
(some hard-earned tricks will be left out -- but they're there for the diligent)

4x Dell T320 -- STANDARD CONFIGURATION
• ESXi hosting Win10, Win7, and Ubuntu VMs
• R-Studio, UFS Pro, etc.
• 8 LFF drive slots (T830 / T840 as prices come down: 18 LFF, dual SP CPUs, more PCIe slots)
• Remove the Dell PERC
• Install a standard HBA
• SFP+ with 10GbE & SFP28 (peer to peer until SFP28 switches are affordable)
• SFP+ & SFP28 are limited by switch costs
• Limiting factor for QSFP 40GbE? The dB levels are irritating!
• QSFP+ used peer to peer for the same reason
• USB 3.1 Gen 2 (when possible, as well)

Avg UNIT cost, CONFIGURED: < $1,000 ea., excluding SW licensing.
PROBLEMS TO WORK OUT STILL
• Getting USB 3.1 Gen 2 working in all OSes
• Getting SFP+ (pref. SFP28) working in all OSes
• Finding the optimal PCIe SFP28 mfr. so all the networking is reliable and OS-compatible.
**I'm new to SFP+. Though I have assistance, I prefer to minimize dependency**


Remaining T320s
• 8x LFF HDD slots (compatible with both) -- used with 8x SAS drives (reliability)
• SAS HDDs offer optimal 'street value' -- not only as targets to process (& safely store) logical recoveries
• SAS drives are excellent "targets": logical processing, temp storage for recoveries
• They are made to higher standards -- for less.
• Also remain cognizant that NVMe drives consume 4 PCIe lanes apiece -- yet SAS3 is 2x SATA3 bandwidth
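The arithmetic behind that bullet, sketched out (my numbers; PCIe 3.0 is assumed, and the encoding overheads are the textbook ones):

```python
# Rough per-link throughput after encoding overhead, in MB/s.
sata3 = 6e9 * 8 / 10 / 8 / 1e6          # 6 Gb/s line rate, 8b/10b encoding
sas3  = 12e9 * 8 / 10 / 8 / 1e6         # 12 Gb/s line rate, 8b/10b encoding
pcie3_lane = 8e9 * 128 / 130 / 8 / 1e6  # 8 GT/s per lane, 128b/130b encoding
nvme_x4 = 4 * pcie3_lane

print(sas3 / sata3)     # SAS3 is exactly 2x SATA3 -> 2.0
print(round(nvme_x4))   # one NVMe drive can eat ~3938 MB/s of lane budget
```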

ADVANTAGES
• This setup'll simultaneously clone ≤4 HDDs (2.5"/3.5") PER machine (via ddrescue, etc.)

• One VM running Linux, imaging easy jobs -- concurrently imaging via DOS for DDI4, AND Atola...
• While processing RAID analysis in another window, and performing a logical recovery in another.
• Physical "target" space may be moot; with 2x 10GbE ports and 2x USB 3.1 Gen 2... transfer = non-issue.

WITH MEMORY ISOLATION, EXTREME CASES ARE POSSIBLE

With 2 units -- I copied 8 HGST 4TB drives (full images) in 17 hours via ddrescue in 4 CLI windows.
I see NO reason we couldn't (AND regard it as PLAUSIBLE) run other tasks in other VMs simultaneously:
- DDI4 via its DOS, plausibly (perhaps dependent on available PCIe slots).
- Atola (easily).
- Perhaps a PC-3000 iteration, with sufficient RAM & PCIe slots.
- Clearly, logical image-processing.

WHY do this?
• Some people are constrained by the cost of imaging / recovery systems.
• Some people are constrained by access to quality help, knowledge, or advertising.
• Some people are bursting at the seams, with insufficient ROOM for equipment, etc.

This provides better efficiency in the cost per computer.
It also gives you enterprise-grade hardware, inexpensively, with SAS/SATA combo ports.
With dual-CPU configurations, you'd have many PCIe slots...
And subtleties, things like:
• iDRAC
• The need for but ONE high-end SFP+ card per unit.

The 4th Dell T320 is a ZFS box -- RAIDZ2
• Set up with ESXi to run FreeNAS (until I figure out for myself the problems causing the exodus)...
• 96GB DDR3
• 8x 10TB IBM-HGST (NEW SAS drives)
• Formatted: ~55TB of space for storage (this could be ~80TB after compression)
• Compression enabled; it auto-disables where inefficient, & is maximized where the CPU can keep up. So far, so good.
• Protected by double parity
• PCI card with 10GbE SFP+ and SFP28 (SFP28 switches are pricey, but peer to peer works)
• 56GbE or QSFP28 (100GbE) -- peer to peer again... and of course, via switch when prices drop.
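The ~55TB figure roughly checks out; here's a sketch of the RAIDZ2 math (ignoring ZFS metadata and slop-space overhead):

```python
# Usable space for 8x 10TB drives in RAIDZ2 (2 drives' worth of parity).
drives, parity = 8, 2
drive_bytes = 10 * 10**12                     # vendors' decimal terabytes
raw_data = (drives - parity) * drive_bytes    # 60 TB of data capacity
tib = raw_data / 2**40                        # what the OS reports, in TiB
print(round(tib, 1))   # ~54.6 TiB before ZFS overhead -- "~55TB"
```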


More on this subject elsewhere, maybe the lounge -- as this has already drifted afar.



I'd like to get a deal on a T640, dual CPU (SP, Silver):
A max 2-CPU T640 system would have 96 PCIe lanes (48 per CPU).

EACH CPU would additionally have a separate 100Gb/s pipe (for, say, a QSFP card).
https://software.intel.com/en-us/articl ... l-overview
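What 96 lanes buys you, in rough terms (my arithmetic; the per-card lane widths are typical values, not T640-specific):

```python
# Carve a dual-CPU 96-lane budget into typical add-in cards.
lanes = 2 * 48
hba_x8, nic_x8, nvme_x4 = 8, 8, 4

# e.g. two HBAs plus two SFP28/QSFP NICs, and the rest as NVMe:
remaining = lanes - 2 * hba_x8 - 2 * nic_x8
print(remaining)              # 64 lanes left over
print(remaining // nvme_x4)   # room for up to 16 NVMe x4 devices
```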


BUYING TIPS:
- VERIFY THAT YOUR SFP+ / QSFP+ CARD IS ON THE HCL OF THE DISTRO YOU USE.
- MAKE SURE THERE ARE DRIVERS FOR THE DEVICE - AND THAT IT MATCHES THE PCIe VERSION YOU CHOOSE.
- IF YOU CAN AFFORD TO DO SO, GET 8 LFF OR HIGHER.
- IF YOU CAN AFFORD TO DO SO, GET 2 CPUs.
- PICK A PLATFORM YOU CAN PURCHASE MULTIPLES OF.
- TELECOM GEAR IS INEXPENSIVE; HW IS SOLD UPON LEASE END - AND FLOODS THE MARKET.
- SWITCHES REQUIRE MODULES, WHICH CAN BE VERY VERY EXPENSIVE. IF THEY MUST BE OEM...
- THERE ARE PRODUCTS WHICH WILL MAKE A $20 CABLE WORK, INSTEAD OF A $700 CABLE. :)

• eBay: UltraStar, SAS, new (not refurb), in the thick packing, with 3yr+ warranty.
• Thick = OEM ALUMINUM packing. Request a serial #. Check the warranty to verify it's not refurb'd.
• I may be dogmatic about HGSTs or SAS -- but their value in the 4TB - 10TB range is GREAT in my book.
• Not a bad idea to ensure the HBA supports FreeNAS, etc. (without having to reprogram it).


Coming down the pike? (Technically here now -- but price drops expected within 2 months!)
- T630 &/or 18 LFF drives in ONE unit.
- Intel's SP 3647 CPU (I like the Silver version: good value, 48 PCIe lanes!)
- Dual, HOT-SWAPPABLE PSUs are common.
- Available in RACK OR TOWER CONFIG. **
- For now, an 8-bay drive array will do for ZFS; I look forward to the 18-bay. :) **


**If you intend to use QSFP in a room shared with people, buy one that's returnable.**
** QSFP SWITCHES I'VE USED / READ ABOUT ARE VERY LOUD & DIFFICULT TO MOD FANS ON **
Please let me know if anyone finds an SFP+ 10/28/40/56/100 MANAGED SWITCH
which allows replacement of OEM fans with Noctuas, or which is quiet to begin with.


SFP28 is attractive; it offers 25Gb/s at SFP+ dB levels (quieter than 1GbE switches with Noctua fans) -- and if a QSFP+ switch includes replaceable fans, it may be equally quiet!

SFP28 can be just a software upgrade on some switches to enable the protocol, I believe; I've also seen discussions which imply it's a license for 40GbE to advance to 56GbE... and so on.

It absolutely sounds like overkill for now... but 2 years ago NO ONE was thinking that 5GB/s for personal computers, at an added $500 - $1,000 (dual, RAIDed), would so easily occur. Now it has... and Intel's Optane and 3D XPoint haven't even hit their stride, and the competition's counter-response hasn't either.

Gb internet is what we're waiting for. The speed is everywhere; integrating it all is on the horizon. Take advantage of telecom gear pricing, and the auspices we have. ;)

