This repository has been archived by the owner on Feb 8, 2024. It is now read-only.

Write performance degrades overtime on ssd disks #83

Open
mozg31337 opened this issue Sep 27, 2014 · 4 comments

Comments

@mozg31337

I've been using EnhanceIO for a few weeks now, following some pretty good benchmark results with EnhanceIO using SSD disks with HDDs in the backend.

However, a few days ago I noticed that both of my SSD cache disks were performing very poorly. I was not getting over 30MB/s writes, with disk utilisation at 100% according to iostat. The disk performance was well over 450MB/s in the past.

While investigating, I noticed that reads still perform very well. The average disk read speed (tested with dd and iflag=direct) is still around 480MB/s, just like before.

As a test, I repartitioned one of the SSDs, created an ext4 partition, created a file and filled it with 0s up to the full disk size, and removed the file afterwards. After that I ran fstrim on the mounted fs and it trimmed about 510GB. Following this procedure, my write performance was back to 450MB/s+.
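For reference, the restore procedure described above can be sketched roughly like this (a sketch, not a tested script: DEV and MNT are placeholders, and running it destroys all data on the device):

```shell
#!/bin/sh
# Rough sketch of the trim-restore procedure above. DEV and MNT are
# placeholders: set DEV to the cache SSD and MNT to an empty mount
# point before running. Everything on DEV is destroyed.
DEV=${DEV:-/dev/sdX}
MNT=${MNT:-/mnt/ssd}
if [ ! -b "$DEV" ]; then
    echo "DEV is not a block device; set DEV and MNT first"
else
    mkfs.ext4 -F "$DEV"                       # fresh filesystem on the SSD
    mount "$DEV" "$MNT"
    dd if=/dev/zero of="$MNT/zerofill" bs=1M  # runs until the disk is full
    rm "$MNT/zerofill"
    fstrim -v "$MNT"                          # discard the now-free blocks
    umount "$MNT"
fi
```

The fstrim step is what actually restores write speed: it tells the SSD which blocks are free so its controller can erase them ahead of time.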

After following the above procedure for both SSDs and recreating the EnhanceIO cache, I was back in business with good performance.

Has anyone experienced similar behaviour? Is there a way to deal with this problem?

Thanks

@davidebaldini

That's a good question and I second you on waiting for others' experience or opinions.
That is, bump.

@Ape

Ape commented Aug 12, 2015

Does EnhanceIO ever do discards to the SSD device?

@sammcj
Contributor

sammcj commented Sep 10, 2015

I'm assuming that as the flash memory reached 100% utilisation it slowed down, since discards are not issued by EnhanceIO (as @Ape suggested). It also seems that the project is now largely unmaintained :(
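For what it's worth, you can at least check whether the kernel considers a device discard-capable by looking at its block queue attributes (a sketch; it just lists every visible block device):

```shell
#!/bin/sh
# A nonzero discard_max_bytes means the kernel can pass discards
# (TRIM) down to that device; 0 means discards are impossible there.
# Whether EnhanceIO ever issues them is a separate question.
for q in /sys/block/*/queue/discard_max_bytes; do
    [ -e "$q" ] || continue        # no block devices visible
    printf '%s: %s\n' "$q" "$(cat "$q")"
done
```

If the cache SSD shows 0 here (common behind hardware RAID controllers), no software layer can trim it at all.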

@bash99

bash99 commented Dec 22, 2016

I have a similar condition: write performance slows down a lot after a week of running. (I benchmarked a lot before putting it into production; it was very fast in the benchmarks, and we wrote a lot in the first few days.)

But it looks like something between EnhanceIO and the SSD. The configuration is behind an H710 RAID card: a 6-disk RAID10 of HDDs, and a RAID1 of two 3710 SSDs. The RAID10 is set to writeback on the RAID card; the SSD RAID1 is set to writethrough. I used parted --align optimal to allocate 0%-100% as the cache device.

Either dd if=/dev/zero of=test bs=4k count=1024 oflag=direct or fio can confirm the slowdown:
dd only gets 2.4 MB/s, or about 600 IOPS.
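Those two figures are consistent, for what it's worth: 2.4 MB/s of 4 KiB direct writes works out to roughly 600 writes per second:

```shell
#!/bin/sh
# 2.4 MB/s divided by dd's 4096-byte block size gives the write rate
# per second (integer division).
bytes_per_sec=2400000
block_size=4096
echo $((bytes_per_sec / block_size))   # prints 585, i.e. the ~600 IOPS above
```

So the bottleneck really is the per-write latency of the degraded SSD, not throughput as such.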

If I set the SSD RAID1 to writeback on the H710, dd gets 7000 IOPS (thanks to the RAID card cache!), so it seems to be an SSD problem.

But after I delete the eio device and mount the SSD RAID1 with ext4, or test /dev/sdb directly, performance is still great even without the hardware RAID cache!

But after recreating the eio device, the slowdown is still there, even if I create /dev/sdb1 with only 75% of the SSD's space.

As I'm using an H710, which doesn't support TRIM or secure-erase on SSD disks, the only workaround I have found is to recreate the eio cache with -b 8192; performance is restored to good but not great, and there are still a few slowdowns when repeating the test many times.
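For anyone trying the same workaround, recreating the cache with the larger block size might look roughly like this (a hedged sketch: device paths and the cache name are placeholders, and the eio_cli option names are taken from the EnhanceIO README, so double-check them against your version):

```shell
#!/bin/sh
# Hypothetical recreation of an EnhanceIO cache with an 8 KiB block
# size (the -b 8192 workaround above). /dev/sdc (backing HDD array),
# /dev/sdb1 (SSD partition) and hdd_cache are placeholders.
if command -v eio_cli >/dev/null 2>&1; then
    eio_cli delete -c hdd_cache
    eio_cli create -d /dev/sdc -s /dev/sdb1 -m wb -b 8192 -c hdd_cache
else
    echo "eio_cli not found"
fi
```

A larger cache block means fewer, bigger SSD writes per cached region, which plausibly reduces pressure on a drive that can never be trimmed.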
