Channel: ServeTheHome.com Forums - RAID Controllers and Host Bus Adapters
Viewing all 147 articles

PERC 6/i and 3TB drives

I searched some and couldn't find an answer to this question, so here goes.

I recently acquired a Dell PowerEdge T610 with a PERC 6/i RAID controller in it. (YAY!)

I then proceeded to purchase eight 3TB WD Red drives from Newegg.

The drives arrived, I put them in the server, pressed Ctrl+R when prompted to enter the RAID controller setup, and found that the PERC 6/i only sees them as 2TB drives :(

Feeling like a right dolt, I did a bit of research and found that Dell doesn't offer a firmware update that would allow the PERC 6/i to see the full 3TB.
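For context on why the card tops out at 2TB: the commonly cited cause (not something Dell documents for the PERC 6/i specifically) is that older controller firmware addresses drives with a 32-bit LBA count of 512-byte sectors, which caps any single device at 2TiB regardless of its real size. A quick sketch of the arithmetic:

```python
# 2TB ceiling arithmetic: a 32-bit sector counter over 512-byte
# logical sectors can address at most 2^32 * 512 bytes.
SECTOR_SIZE = 512          # bytes per logical sector
MAX_SECTORS = 2 ** 32      # largest count a 32-bit LBA field holds

max_bytes = SECTOR_SIZE * MAX_SECTORS
print(max_bytes)               # 2199023255552
print(max_bytes / 2 ** 40)     # 2.0 (TiB) - what the PERC reports per drive
```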

Does anyone know if it is possible to flash/cross flash this card to a firmware that will see the full capacity of 3tb drives?

Any help would be greatly appreciated, as I would like to avoid purchasing an H700. My wife was generous enough to agree to my buying $1200 worth of hard drives, I'd hate to have to sink another $500 into this project.

Thanks again,

Taylor

Intel RMS25KB080 (LSI 2308 HBA) - Abysmal RAID 10, "background initialize"

Hi all,

I finally got my new server setup going, with the Intel S2600COE motherboard with the included RMS25KB080 RAID host adapter (it appears to be a standard LSI 2308 controller with Intel firmware), and 8x Seagate Constellation CS 3TB drives.

I've got the machine up and running, and everything looks peachy... I've installed Windows, drivers, updates, the usual routine, and now I've set up the RAID controller to have a single 12TB RAID 10 array.

Only problem now is, it appears to be atrociously slow... way slower than a single drive for writes, and about the same as a single drive for reads (50MB/s and 250MB/s, respectively).

In the RAID BIOS and in "Intel RAID Web Console 2", it reports the array is "Optimal" but also running "Background Initialize: 0%"... and it's been on 0% for hours!

I don't understand... it's a newly created array - by implication the data is already mirrored and consistent, by virtue of there not being any yet. Does it really need to run this initialization? Will it really take days/weeks to complete? Will the performance definitely dramatically increase when it completes, or does this seem like a problem elsewhere?
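For what it's worth, a background initialize really does walk every mirrored stripe of the array even when there's no data yet, so on 12TB it can take a long time. A rough back-of-the-envelope estimate, using an assumed (not measured) throttled BGI rate:

```python
# Rough BGI duration estimate: the controller must touch every byte
# of usable capacity, throttled so foreground I/O keeps working.
usable_tb = 12            # RAID 10 usable capacity from this setup
bgi_rate_mb_s = 50        # assumed throttled rate - illustrative only

seconds = usable_tb * 1_000_000 / bgi_rate_mb_s
print(f"~{seconds / 3600:.0f} hours")   # ~67 hours at 50 MB/s
```

So days for a first pass over 12TB is not by itself abnormal - but a progress counter pinned at exactly 0% for many hours points at a stuck job rather than a slow one.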

Should be sticky: Samsung 840 and 840 Pro are not LSI MegaRAID compatible

LSI will not support the 840 or 840 Pro with LSI MegaRAID controllers, 9260 through 9271.

The problem manifests itself with two specific drives; oddly, both are competitors to a company called SandForce.

Samsung 840/840 Pro - arguably the best SSD from a stable company (sorry OCZ, but your reputation is tarnished; the Vector is a great drive, though).

Intel DC S3700.

Odd how these two competitors are the biggest threat to SandForce - and Intel is a huge LSI reseller, no less.

So - What's the problem?

1. The controller will not let you alter the Disk Cache policy. So if it is off, the massive 512MB cache on the Samsung 840 Pro (larger capacities) is not enabled*

2. Strange performance - typically when building a RAID set using Ctrl+H (WebBIOS), the array will perform at 1-2MB/s on the AS SSD 4K write test, taking 7 minutes to complete that portion of the test.

3. Using the 5.6 firmware, MSM, and drivers (all dated Mar 02 2013) with Windows 2008 R2 we could replicate this. We consulted WebHostingTalk and LowEndTalk, and this is a known problem affecting both drive makers' products. LSI knew about this with the previous set of drivers and did not update anything in the Mar 02 2013 release of firmware and drivers.

4. The controller randomly resets to Read Ahead on reboot. Why the heck would I set an SSD to Read Ahead? That's silly and against the FastPath guidelines. More than that gets reset, but I won't say more.

5. Creating the RAID volume via MSM, however, may not cause this problem (WTF!). Benchmarks after creating a volume with the 9266 as secondary and the 9260 as primary show normal performance. The 9266 is about twice as fast as the 9260 without FastPath. FastPath doesn't really seem to work that well with 2 drives (or a 4-drive RAID 10). More on this**

6. You must cold shutdown this controller between changes - firmware, settings, anything. Shut down and pull the power plug for a minute. Seriously. Every time.

7. Why would this affect the DC S3700 drive as well? It's on the approved list. Watch it fall off the list soon. LSI, we have Google and we can go back in time to retrieve your older approved lists - the 840 Pro was on the list a few months ago.

The symptoms are quite simple: flaky. The AS SSD 4K write taking 7 minutes is slower than a 5400rpm drive. The drive lights are lit solid. This can be replicated on the 9260 by setting "Enable Disk Cache Policy" to Disable - but the option is greyed out on the 9266. Both controllers use the same 5.6 driver and 5.6 MSM. As a matter of fact, don't the 9260 and 9266 share the same firmware nowadays? I don't know the answer.

The symptom also presents as "BLOCK SSD WRITE CACHE CHANGE: YES" when these drives are present (visible via storcli show all, or MegaCli -AdpAllInfo). That's a pretty straight-up obvious message - it pretty much goes along with the "Enable Disk Cache Policy" option being greyed out in MSM.

** We applied a FastPath/CacheCade 2 trial key to the card and it then allowed changing the policy. However, you should not have to buy this key.

* The 9260 had a dead battery; the 9266 had no CV module, but the cache offload (CacheVault) was enabled in advanced software options.

So really, LSI, why don't you go back and fix your firmware and controllers? The folks at WebHostingTalk are probably some of your larger customers. The original firmware on the Samsung 840/840 Pro did have GC issues, but it was replaced a month or so ago with new firmware. The DC S3700 should have no such problems, especially since it has a supercapacitor.

The MegaRAID controllers are pretty interesting. If you build a RAID-1 or 1+0 you can kind of see what's going on. It's like the controller is tuning itself based on the type of reads, queue depth, latency, and linear/random mix, to use the drives for read-ahead. Most RAID 1+0 systems will read from all 4 drives if you build a 4-drive array. It seems that LSI is dynamically choosing to read from 2, 3 [blip], 3, or 4 drives based on the block size, type of read, and read/write mix. Unless I have the read/write thing backwards, in which case it's reading from all 4 but writing to 2, 3, or 4 disks. You can replicate this by building a RAID-10 and watching ATTO (QD4, QD10), AS SSD, and CDM - all popular benchmarks. This is very different from every other RAID manufacturer, and I wonder if it is part of their secret sauce or part of their problem.
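The adaptive-read behavior described above can be pictured with a toy scheduler: each mirror pair can satisfy a read from either copy, and a least-busy-copy policy naturally shifts between touching 2, 3, or 4 spindles depending on how the reads land. This is only an illustrative model - LSI's actual heuristic is not public:

```python
from collections import defaultdict

# Toy RAID-10 read scheduler: route each read to the less-busy copy
# of its mirror pair (a plausible policy, not LSI's documented one).
def schedule_reads(pairs, reads):
    queue = defaultdict(int)          # outstanding I/Os per drive
    for pair_idx in reads:
        drive = min(pairs[pair_idx], key=lambda d: queue[d])
        queue[drive] += 1
    return dict(queue)

pairs = [("d0", "d1"), ("d2", "d3")]  # two mirror pairs = 4-drive RAID 10
load = schedule_reads(pairs, [0] * 6 + [1] * 2)
print(load)   # {'d0': 3, 'd1': 3, 'd2': 1, 'd3': 1}
```

With a skewed read pattern like this, a benchmark would see pair 0's drives working hard and pair 1's barely at all - which looks exactly like the controller "choosing" how many drives to read from.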

For reference, regarding my above opinion: I believe LSI needs to work on their drivers (Win2k8R2) and firmware. We noticed you could change the C: or E: drive cache policy in Windows with the 9266. The 9260 would refuse when I tried this. Likewise, the HP P400/P410/P420 [the latter two are PMC-Adaptec] would refuse to allow us to change these settings. That is the normal behavior, since the RAID card should be in control of these settings.

Back to the reference:
A 9260 w/FastPath is just as fast as a 9266 without, and quite honestly it is stable with the 840 Pro/840 and cheap refurbished/used.
PMC-Adaptec PM8020-based HP P420 - I got a deal on these; nobody wants them because they have tiny heatsinks, overheat quickly, and go offline - nothing a little arctic goop and a creative copper bridge or active cooling can't fix [at idle the P420 is 58C; the 9266 idles at 68C in the same slot on a DL380 G7 - put that in your pipe and smoke it].

So the P420, a rather new card not designed for the G7, with 0% read-ahead and 100% write-back, outruns the 9266. I suspect with more drives it would outrun it further. It can enable the drive write cache on the 840/840 Pro. It does not mind that I'm running non-HP drives. I am sure the 1GB supercap-backed cache helps with writes, but quite honestly, at the end of the day what matters is which controller is consistent and performs well.

Folks might notice the 71605E (RAID 1/10/1E and JBOD at the same time) or 71605H (HBA only) use a newer version of this chip. It has 16 ports per card standard. It requires no FastPath to go faster because it has 16 ports - 16 840 Pros will go faster than the 9266/9271 w/FastPath. The Adaptec (PMC) cards also support JBOD+RAID at the same time. And their CacheCade equivalent on the expensive 71605Q. [The HP P420 enables RAID-6/60 and CacheCade-1.0-style read-only caching with the SAAP 2.0 key.]

There are other options, but as we all know, HP pretty much uses PMC-Adaptec in their servers now, and everyone else on the planet uses LSI.

I hope someone else reads this - perhaps someone at LSI, and definitely anyone buying Samsung 840/840 Pros and an expensive MegaRAID card - to save them the time and trouble I've wasted.

I am not the only one. Google "840 pro 9266 webhostingtalk" or "840 pro cachecade webhostingtalk", or look around here. Folks are having to cross-flash their LSI-branded cards to PERC to get them to perform? How odd is that?

As always, have a nice day!

HBA with FreeNAS

Hi, I have some doubts and I think this is the best place to resolve them :)

Let me explain the hardware I have and the idea I had for the build:

- HP ProLiant ML110, with 16GB RAM
- HP SAS expander card
- HP Smart Array P410 RAID card

At first I thought I could use this to set up a drive server running FreeNAS with ZFS RAID, but I have seen on some websites that this is not possible.

I have the opportunity to acquire an LSI 3041E-R HBA at a good price - could I use it to run FreeNAS software RAID?
* Another question: can I connect the LSI card to the HP SAS Expander card?

HP RAID Controllers on the Intel 2600GZ Platform (P212 and P420)

Just as a note to folks who were wondering: dba and I spent a good 2+ hours today trying to get the HP P420 and HP P212 controllers working on an Intel 2600GZ platform. We tried just about everything we could think of, to no avail.

The drive cannot be extended because the number of clusters will exceed the maximum

I guess I finally crossed the 16TB barrier with Win 7 x64 and 8x 3TB Hitachi 5K3000 drives. I currently have 64K stripes on the RAID card, 4096 bytes per cluster, and 512-byte sectors.

I expanded the array from 7 to 8 3TB drives in RAID6 using an old HighPoint 3520 IOP348 card, but cannot extend the volume in Windows. How can I convert the cluster size without losing data, short of buying a multi-thousand-dollar array to transfer the data to while I rebuild?
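The error message is the NTFS cluster-count limit: the filesystem tracks clusters with a 32-bit count, so the maximum volume size is roughly cluster_size x 2^32. With the default 4KB cluster that works out to 16TiB, which an 8x3TB RAID6 exceeds. The arithmetic:

```python
# NTFS volume ceiling = cluster size * ~2^32 clusters (32-bit count).
CLUSTERS = 2 ** 32

for cluster_kb in (4, 8, 16, 32, 64):
    max_tib = cluster_kb * 1024 * CLUSTERS // 2 ** 40
    print(f"{cluster_kb:>2}KB clusters -> {max_tib:>3} TiB max volume")
# 4KB -> 16 TiB (the wall hit here); 64KB -> 256 TiB
```

Windows' own tools can't change the cluster size in place, so the usual answer is back up, reformat with 8KB or larger clusters, and restore; some third-party partition tools claim in-place conversion, but treat that as a risk rather than a fix.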

HP SAS Expanders Daisy Chained

Can anyone tell me if this will work? I want to use two HP expander cards, one in each Supermicro SC846 case. Case one will have 2x IBM M1015 cards plus the SAS expander; case two will just have the SAS expander connected to 24 drives. I will then connect the two together using the external SAS port on both cards. Also, can anyone point me in the right direction for a simple board that will power the SAS expander in case two?

Will this be the year of 12gbps SATA and SAS?

Wanted to get some perspective on this. If so, I don't mind buying used 6Gbps stuff on eBay, but I won't buy 3Gbps and won't buy new 6Gbps.

Whitebox motherboard for IBM M5015?

Hi, 1st post here.

I'm planning on buying an IBM M5015 or M5014. I was considering an M1015, but not having a battery and no write-back cache seems to be an issue.

My question is whether the M5015 would work with a desktop motherboard (Asus or Intel)?

I want a vSphere whitebox with four drives in RAID10, and maybe a second RAID1 of 128GB SSDs.


Many thanks
Oliver

LSI SAS2308 (LSI 9207/9217) HBA/RAID SGPIO

Does this card support SGPIO via Sideband for a Supermicro SAS-743-TQ Backplane?

I ordered an LSI 3ware SFF-8087-to-SATA cable with sideband, but the sideband connector isn't wired properly and the bag says "Custom i2c with sideband".

Where can I get a compatible forward fanout cable with the sideband wired correctly for this backplane?

I'm fairly certain this card should support it, as the LSI 2008 does SGPIO.

I really want SGPIO or I2C working so I get a buzzer alert, drive-failure LEDs, etc.
It's also handy for drive identification.

I can get SGPIO working perfectly with the onboard SATA so I know the backplane is fine.

Lost RAID 6 on LSI 9266-8i

I was messing around with the 9266-8i I just got. I wanted to load Dell H710 firmware so I could have array power-down.

The firmware didn't take, so I reloaded the LSI firmware, but I lost all the advanced options, including the ability to do RAID 6/60.

Does anybody know how to re-enable the standard card options?

Highpoint Rocket 750 - 40 Port PCI-E 2.0 x8 HBA

Here's the PR on xbitlabs.com

HighPoint Launches 40-Port Serial ATA-6Gb/s Controller Card - X-bit labs

Looks interesting. However, theoretical per-drive bandwidth in a perfect world is about 100MB/s.
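That 100MB/s figure follows directly from the slot: PCIe 2.0 carries roughly 500MB/s per lane after 8b/10b encoding, so an x8 card has about 4GB/s to split across 40 drives:

```python
# PCIe 2.0 x8 bandwidth budget split across 40 drives.
lanes = 8
mb_per_lane = 500          # PCIe 2.0 effective throughput per lane
drives = 40

per_drive = lanes * mb_per_lane / drives
print(f"~{per_drive:.0f} MB/s per drive")   # ~100 MB/s with all drives busy
```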

I've had some HighPoint 2320s in the past and was less than impressed with them. I'm also leery of their driver support for various OSes, as I seem to recall the drivers being binary blobs in the past.

However, planned pricing is $739, which isn't bad.

</editorial>

Intel RES2CV240 vs RES2SV240 SAS Expander Differences

Looking for some advice on Intel SAS Expanders

Intel RES2CV240 - around $270. Listed as a 24-port expander, but it looks like 7 SFF-8087 connections mounted at a right angle to the PCB.
Intel RES2CV240 | eBay


Intel RES2SV240 - around $300. Looks like 6 SFF-8087 connectors that are parallel to the PCB.
Intel RES2SV240 | eBay

Is the C model really only 24 ports? What is the 7th connector for? Or are these just configuration and price differences?

Which expander for an LSI 9240-8i?

LSI 9240-8i Not Working

Hello guys,

I'm new here, but I heard this forum is a great place to get help with RAID controllers.

So, initially I had bought an IBM M1015 RAID controller on eBay. It needed the "feature unlock key" in order to add RAID5 abilities - and when I had that hooked up, it continually bluescreened until I removed the feature unlock key and did RAID0 instead.

So I ordered myself one of these:
LSI MegaRAID Internal Low-Power SATA/SAS 9240-8i 6Gb/s PCI-Express 2.0 RAID Controller Card, Single - Newegg.com

The RAID BIOS does show up, and it can see the RAID0 from my previous controller. However, Windows gives me an Error 10 "Device cannot start" message when I try to install the drivers - and even if I forcibly install them using the files downloaded from LSI's site, it says those drivers are already installed.

I tried updating the firmware to the newest (December 2012) version - using an older computer, to ensure that it would work. I still get Error 10. I'm trying to install a Linux distribution on that machine to see if it is indeed a hardware problem or just Windows being dumb - but both Fedora and CentOS hung during installation, so I can't even test it.

Additionally, all 8 of the attached hard drives show constant (100%) activity, even when the computer is idling. Does this sound like a DOA card or other hardware problem to you guys? The motherboard I have is an ASUS M5A97 (see info here). I called LSI and they said that while they haven't tested any ASUS AMD boards, the worst that would happen is that the RAID BIOS wouldn't show up correctly (which it does).

So... what do you guys suggest? Is this a dead card? Or is it a common problem? Should I ditch that model and get a different one? I'm all ears.

Stretching Infiniband a bit too far (ZFS & GbE & NFSoRDMA) ?

Ok,

So I am trying to work out the best way to both move and share data over the InfiniBand network while also making the data accessible to LAN-connected machines.

Requirements:
1. ZFS-protected base storage.
2. Fast InfiniBand transfers between the data-creation server (CentOS) and the data-sharing server (Windows Server).
3. The data-sharing server should present the data to the LAN (SMB / NFS, etc.).

My current thoughts are:
Solaris 11.1 ZFS -> SRP target -> Windows SBS 2011 Ess -> NTFS disks -> NFSoRDMA -> CentOS server.

If that chain works, the CentOS server can dump data onto the mounted NFS share, which will update the Windows 'pseudo' disks - really disks on the Solaris SAN. How would I then present the Windows 'pseudo' disks to the LAN from the Windows server?

My second thought would be:
Solaris 11.1 ZFS -> NFSoRDMA -> Windows SBS 2011 Ess & CentOS server.

Seems simpler, but I would still need to find a way of bridging the RDMA / LAN networks, and the storage would not be managed by the Windows server (the DC for the network, access rights, etc.).

Any suggestions for viable alternatives? It needs to be supported by ConnectX or ConnectX-2 cards.

Update: I guess an upload cache, NFSoRDMA-mounted on the Windows and CentOS boxes, may work, but it would require an extra step to upload from the cache on the Windows server to the LAN share.

RB

Areca 1882 RAID5 Volume Issue - Help

Hi,
I installed the trial version of HDSentinel on my Windows 2012 server earlier, and it appears to have borked one of my RAID sets/volumes on my Areca 1882IX-12 (FW: V1.51).

11 out of 16 drives dropped from the card after the software was started, with Device Removed/Device Failed/Time Out Error messages displayed in the logs.



I uninstalled the software, rebooted the server, and activated all of the failed disks.
The issue is that a 3-disk RAID5 array is now in a failed state and there are two RAID sets with the same name.
One of them contains 2 of the original member disks and also shows a missing device - but no volume set. The other has 3 disks present (1 original and 2 which were previously pass-through drives). This RAID set has the original volume set attached, but is in a failed state.



I assume the above happened because "Hot Plugged Disk For Rebuilding" is enabled by default (I've subsequently disabled it).

I'd just moved 1TB of data to the array which isn't backed up elsewhere (it's not critical, but it would be a pain to replace), so ideally I'd like to recover it.
I don't have R-Studio, but I'm quite happy to buy it if it helps the situation.

I've emailed Areca support, and although I've had initial contact I haven't heard back from them in 2 days.

I've seen a similar thread over on [H]ard|Forum (and also posted this question over there, but haven't had a response): Areca 1880i RAID Set/Volume Set Missing - Help Needed - [H]ard|Forum. That issue was resolved by running a LeVeL2ReScUe (then a SIGNAT and several volume checks) - should I just do that, or should I make an image of the disks first with R-Studio?
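Whichever route you take, imaging the member disks first is cheap insurance: metadata-rewriting rescue operations have no undo. A minimal sketch of sector-level imaging in Python (plain dd works just as well); the device and destination paths are placeholders, and the source should be opened read-only from a live environment while the array is offline:

```python
def image_disk(device, image_path, chunk=4 * 1024 * 1024):
    """Copy a block device (or any file) to an image file, 4MB at a time."""
    copied = 0
    with open(device, "rb") as src, open(image_path, "wb") as dst:
        while True:
            buf = src.read(chunk)
            if not buf:
                break
            dst.write(buf)
            copied += len(buf)
    return copied

# Hypothetical usage - substitute your real device nodes:
# image_disk("/dev/sdb", "/mnt/backup/sdb.img")
```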

Thanks for your help

Recovering IBM MR10is

Hi All,
I was trying to flash an IBM ServeRAID MR10is controller with the MegaRec utility. As recommended on various forums, I erased the internal flash first. Everything went OK - MegaRec confirmed erasing the 4MB flash chip. At that moment, just as I was ready to flash the new firmware back... a short power outage spoiled the whole thing. The controller now has an empty flash.
The controller didn't come back online after power was restored. On reboot, MegaRec recognizes the chip as an LSI 1078DE, but initialization fails with a timeout. I have been trying for a few days with four motherboards with different chipsets, RAM sizes, and RAM mappings - no luck :( I always get the same timeout while initializing the host bus adapter. I also tried the controller's on-board jumper; it changes the PCI ID but brings no luck.
I am afraid I need another utility for my LSI 1078DE card, but I have no idea where to find it. Has anybody had any experience recovering these chips? Many thanks in advance!

IBM M1015 -> LSI IT firmware - no drives detected

Hey Guys

First post here - long time reader :)

I'm trying to flash my newly bought M1015 controller and use it for disk passthrough in VMware.

I think (or thought) that I had flashed it correctly, but the controller doesn't find any disks attached to it.

I bought a 3ware breakout cable to connect my disks: 3ware Multilane Serial ATA / SAS cable - 60 cm (CBL-SFF8087OCR-06M)

I flashed with the latest package from SAS2008 (LSI9240/9211) Firmware files - Projects, Tools, Utilities & Customized INFs - LaptopVideo2Go Forums

I'm running on a Supermicro X9SCM-F motherboard and did the flashing through the EFI shell.

Here is a printout of the sas2flash.efi -list command:



Any thoughts? I really need some help here :)

LSI 9271-8iCC, FastPath and RAID5 question

I'm planning on setting up a RAID5 array with 4x Intel 520 480GB SSDs and a MegaRAID SAS 9271-8iCC card. FastPath will be enabled too.

What I'm not able to clearly understand from the LSI card's online documentation is whether I can add more drives to the RAID5 array in the future (without destroying and rebuilding the array, of course). Googling also didn't help much...

Does anybody know about this?
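For reference on the capacity side: RAID5 usable space is (n - 1) x drive size, so each successful expansion adds exactly one drive's worth. Whether the 9271 firmware permits that expansion online is the real question - MegaRAID cards have historically offered Online Capacity Expansion, but confirm against LSI's documentation for this model:

```python
# RAID5 usable capacity: one drive's worth is consumed by parity.
def raid5_usable_gb(drives: int, drive_gb: int) -> int:
    assert drives >= 3, "RAID5 needs at least 3 drives"
    return (drives - 1) * drive_gb

print(raid5_usable_gb(4, 480))   # 1440 GB with the initial 4 drives
print(raid5_usable_gb(5, 480))   # 1920 GB after adding a fifth
```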

Thanks