F2FS revisited



Since I've been asked recently why Armbian doesn't ship with F2FS by default I thought let's have a look again at what there is to gain from F2FS (a filesystem specially designed for use with flash media, created by Samsung a few years ago). For the history of the F2FS discussion please use the search function: https://forum.armbian.com/index.php?/search/&q=f2fs

 

Armbian has fully supported F2FS partitions for a long time now, but we don't provide OS images with F2FS for a couple of reasons:

 

  • F2FS still doesn't support resizing, so our default installation variant (ship with a minimized rootfs that gets resized automagically on first boot to the size of the SD card / eMMC) wouldn't work
  • F2FS requires a pretty recent kernel and is therefore not an option for default images (since most downloads use the legacy kernel)
  • Unfortunately those installations that would benefit the most from faster random IO (writes) are the ones using the most outdated kernels (Allwinner A20/H3 boards, which many people use as a 'desktop linux' where performance heavily depends on fast random writes)

 

To use F2FS with Armbian you need to choose a SoC family that is supported by mainline kernel (or at least 4.4 LTS) and then build an image yourself using these options (choosing $FIXED_IMAGE_SIZE to be less than the capacity of the final installation media!):

ROOTFS_TYPE=f2fs FIXED_IMAGE_SIZE=n
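
Just as an example, a full build call for an Orange Pi One against the dev branch could look like this (only a sketch assuming the usual compile.sh KEY=value syntax; board, branch and the 3800 MiB size are made up and the size has to stay below the capacity of the target SD card / eMMC):

./compile.sh BOARD=orangepione BRANCH=dev ROOTFS_TYPE=f2fs FIXED_IMAGE_SIZE=3800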

In the past I tested this many times and none of my tests showed performance improvements great enough to justify another installation variant, but... let's have a look again and focus on the claims first. F2FS was designed to solve certain problems with flash based media and promises better performance and higher longevity. Measuring the latter is somewhat questionable at least, but performance can be tested. So I decided to pick 3 different SD cards that roughly represent the kind of flash media people out there use:

 

  • Crap: An Intenso 4GB Class 4 SD card
  • Average: A random SanDisk 8GB card
  • Superior: An expensive SanDisk Extreme Plus with 16GB

 

For the tests I used an OrangePi One with kernel 4.10 and a Debian Jessie variant to be able to use a scripted OpenMediaVault installation as some sort of real-world benchmark (the script is roughly based on the results of our OMV PoC). The other 3 benchmarks are our usual iozone3 call, then ioping and a mixed workload measured with fio. The test script is here: https://pastebin.com/pdex14L9
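
The exact parameters are in the linked script; roughly the calls are of this kind (a sketch from memory, not the verbatim script -- block sizes, file sizes and runtimes here are assumptions):

# iozone: sequential and random tests with O_DIRECT, fsync included in timing
iozone -e -I -a -s 100M -r 4k -r 1024k -i 0 -i 1 -i 2
# ioping: request latency / IOPS on the mounted filesystem
ioping -c 20 .
# fio: mixed workload, 75% random reads / 25% random writes
fio --name=mixed --rw=randrw --rwmixread=75 --bs=4k --size=500M \
    --direct=1 --ioengine=libaio --runtime=60 --time_based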

 

First results with an average SanDisk 8 GB card (debug output with F2FS and with EXT4):

                                    F2FS    EXT4
     iozone 4K random write IOPS     208     196
      iozone 4K random read IOPS    1806    1842
iozone 1MB sequential write KB/s    7161   10318
 iozone 1MB sequential read KB/s   22429   22476
                   ioping k iops    1.42    1.31
                  fio write iops     128     132
                   fio read iops     384     395
    OMV installation time in sec     886     943

I consider benchmark numbers that vary by less than 10% as identical and then it's easy: ext4 outperforms F2FS, since all results are identical except that sequential writes are ~40% faster with ext4. Test results in detail: F2FS and EXT4. I'm really not impressed by any of the differences -- only two things are interesting: the faster sequential writes with ext4, and the very low random write performance at 16K blocksize (something we noticed with a lot of SD cards already, see the first post in the 'SD card performance' thread).

 

At the moment I'm not that impressed by the performance gains (that might change later once the crappy 4GB card has finished) and just want to point out that there are other criteria too for choosing a filesystem on systems running with a high potential for bit flips (due to users using crappy chargers, bad DRAM clock settings when not using Armbian and so on). Just to give an idea please read through the PDF linked here: https://news.ycombinator.com/item?id=11469535 (ext4 turned out to be more than 1,000 times more reliable than F2FS when fuzzed with AFL).

 

BTW: mkfs.f2fs info at image creation time (no idea whether something could be 'tuned' here):

Info: Debug level = 0
Info: Label = 
Info: Segments per section = 1
Info: Sections per zone = 1
Info: Trim is enabled
Info: sector size = 512
Info: total sectors = 7649280 (3735 MB)
Info: zone aligned segment0 blkaddr: 512
Info: format version with
  "Linux version 4.4.0-72-generic (buildd@lcy01-17) (gcc version 5.4.0 20160609 (Ubuntu 5.4.0-6ubuntu1~16.04.4) ) #93-Ubuntu SMP Fri Mar 31 14:07:41 UTC 2017"
Info: Discarding device
Info: Discarded 7649280 sectors
Info: Overprovision ratio = 3.300%
Info: Overprovision segments = 126 (GC reserved = 68)
Info: format successful
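
For the record, mkfs.f2fs does expose a few knobs that could be experimented with (a sketch based on the mkfs.f2fs options I'm aware of; whether any of this actually helps on SD cards is an open question):

# -l sets the label, -o the overprovision ratio in percent, -t 1 keeps trim/discard enabled
mkfs.f2fs -l armbian -o 5 -t 1 /dev/mmcblk0p1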

 


13 minutes ago, tkaiser said:
  • F2FS still doesn't support resizing so our default installation variant (ship with a minimized rootfs that gets resized on first boot automagically to the size of SD card / eMMC) wouldn't work

Resize is supported by the kernel, but the userspace part (in f2fs-tools) is not present in Xenial and I'm not sure if it is present in Stretch. It is definitely present in Debian testing branch, which is ahead of frozen Stretch.

 

Edit: Also not sure if online resize is supported.
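
For reference, the offline path with the userspace tool would presumably look like this (just a sketch assuming resize.f2fs from a recent f2fs-tools; the partition has to be grown first with fdisk/parted and the -t value -- the new size in 512-byte sectors -- is made up):

umount /dev/mmcblk0p1
fsck.f2fs /dev/mmcblk0p1
resize.f2fs -t 15523840 /dev/mmcblk0p1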


2 minutes ago, zador.blood.stained said:

Resize is supported by the kernel

Sure, but we need the userspace part too. And only if both parts are in place can we rely on this. :)

 

At least I don't want to waste a single second of my life diagnosing/supporting first boot issues caused by relying on unreliable tools.


Now results with the 16GB SanDisk Extreme Plus SD card. Debug logs here for F2FS and EXT4:

                                    F2FS    EXT4
     iozone 4K random write IOPS     746     764
      iozone 4K random read IOPS    1616    1568
iozone 1MB sequential write KB/s   20354   21988
 iozone 1MB sequential read KB/s   22474   22548
                   ioping k iops    1.57    1.55
                  fio write iops     311     399
                   fio read iops     934    1197
    OMV installation time in sec     862     791

No performance difference between F2FS and ext4 as before; funnily, the fio numbers are again better with ext4 than with F2FS (this is a synthetic benchmark trying to emulate a mixed workload of 75% reads and 25% writes) and this time the OMV installation also finished faster with ext4 than with F2FS.

 


You can't really fix poor random performance with a file system. If the hardware sux, there is little to be gained.

In theory you could fix random write performance with some additional write cache, by collecting all random writes in a RAM buffer and then dumping them to the flash later.

https://wiki.cc.gatech.edu/epl/index.php/FlashFire used to be used back in the EeePC days (those had awful flash "ssds") and it did mitigate stutters a little bit as far as I can remember.
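
On Linux a milder version of the same idea is possible just by letting the page cache hold dirty data longer, so random writes get merged before they hit the card (a sketch with made-up values; the price is more data loss on a power failure):

# flush dirty pages only after 10 minutes instead of the default 30 seconds
echo 60000 > /proc/sys/vm/dirty_expire_centisecs
# allow up to half of RAM to be filled with dirty pages before forcing writeback
echo 50 > /proc/sys/vm/dirty_ratio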


13 minutes ago, manuti said:

I don't want to be polemic but the last Phoronix test shows different figures related to F2FS

 

That article compares F2FS and BTRFS on multiple SSDs. Here we have a comparison between F2FS and EXT4 on a single SD card. Apples and oranges.


3 hours ago, manuti said:

I don't want to be polemic but the last Phoronix test shows different figures related to F2FS

 

Phoronix as usual produces only numbers without meaning. If there's one thing to learn from that 'benchmark' it's that stuff like 'Copy on Write' matters (he used the btrfs defaults, where CoW is enabled, for a database test -- any administrator out there doing this would be fired immediately) and that default block sizes matter. Btrfs uses 4K (and in case smaller files get changed btrfs always has to read those 4K first and then write them back entirely) while F2FS at least in the past used 512 bytes (no idea which record size he used in his test; not even telling this disqualifies the whole 'benchmark').
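
Just as a side note to the CoW point (not something tested here): on btrfs the sane thing for a database directory is to disable copy-on-write, roughly like this (the path is just an example, and chattr +C only takes effect on empty files/directories):

# disable CoW for everything created below the (still empty) database directory
chattr +C /var/lib/mysql
# or mount the whole filesystem with CoW disabled from the start
mount -o nodatacow /dev/sdb1 /mnt/btrfs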

 

With such a setup as his (which is something completely different compared to our SBC situation, as @jernej already pointed out) there would be a lot of interesting stuff to test (eg. adjusting filesystem block sizes so they match database record sizes or page and/or erase block sizes of the SSDs). But it's just the usual Phoronix clickbait throwing out some weird numbers in as little time as possible and in passive benchmarking mode. It's data but not information.


And in the meantime the crappy 4GB card was also able to finish the tests. Debug log for EXT4 and for F2FS. Full storage test logs for EXT4 and F2FS.

                                    F2FS    EXT4
     iozone 4K random write IOPS      38      33
      iozone 4K random read IOPS    1454    1461
iozone 1MB sequential write KB/s    8644    9506
 iozone 1MB sequential read KB/s   15638   14834
                   ioping k iops    1.06    1.19
                  fio write iops      31      28
                   fio read iops      93      85
    OMV installation time in sec    3729    1480

Again no real performance difference. The only number that really varies is the OMV installation time. What does this mean? No idea, but since I did the F2FS test after EXT4 we might just be watching this crappy card dying :)


[Image: SD_Cards.jpg]

 

From left to right: Good, average and crap. The numbers combined below:

                                        Good            Average           Crap
                                    F2FS    EXT4      F2FS    EXT4      F2FS    EXT4
     iozone 4K random write IOPS     746     764       208     196        38      33
      iozone 4K random read IOPS    1616    1568      1806    1842      1454    1461
iozone 1MB sequential write KB/s   20354   21988      7161   10318      8644    9506
 iozone 1MB sequential read KB/s   22474   22548     22429   22476     15638   14834
                   ioping k iops    1.57    1.55      1.42    1.31      1.06    1.19
                  fio write iops     311     399       128     132        31      28
                   fio read iops     934    1197       384     395        93      85
    OMV installation time in sec     862     791       886     943      3729    1480

If I look at these numbers I immediately stop thinking about filesystems and start to focus on hardware instead. Obviously avoiding crappy cards and choosing good or at least average ones helps a lot with performance.

 

Interestingly, a task like the OMV installation that does a lot of disk IO is somewhat affected by (random) storage performance, but not that much. Installation duration with the good and the average SD card is more or less the same. Then we should keep in mind when looking at sequential speeds that most if not all SBCs are currently limited to ~23MB/s here anyway (the ASUS Tinkerboard being one of the rare exceptions). So it's useless to get an SD card rated for 90MB/s since it will be bottlenecked by the interface anyway.

 

Still, random IO performance seems to be the most interesting number to focus on, and I would assume the cards showing the best performance/price ratio are still Samsung's EVO/EVO+ with 32GB or 64GB (can't test right now since all the Samsungs I own are busy in installations).

 

One final word regarding F2FS 'real-world' performance. This filesystem comes from an Android vendor and maybe the situation with Android, where apps constantly issue fsync calls, is really different compared to ext4. But for our use cases this doesn't matter that much (especially keeping in mind that Armbian ships with ext4 defaults that try to minimize writes to the SD card -- for some details regarding this you might want to read through the article and especially the comments here: http://tech.scargill.net/a-question-of-lifespan/)
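
To illustrate what is meant by write-minimizing defaults (a sketch, not the literal Armbian fstab -- the exact options and values here are from memory):

# /etc/fstab: no atime updates, journal commit only every 10 minutes
/dev/mmcblk0p1  /  ext4  defaults,noatime,nodiratime,commit=600,errors=remount-ro  0  1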


3 hours ago, jernej said:

That article compares F2FS and BTRFS

Yes, I know, but I never thought of F2FS as an option for a GNU/Linux system, and having found two benchmarks so close in time I thought I'd take a look at all these numbers.


4 hours ago, manuti said:

Recently I saw a sysbench that is really mind blowing: http://sirlagz.net/2016/10/27/exagear-benchmarking-part-1/

... or not. Again, it's a benchmark. It doesn't necessarily hit the bottleneck in the system it was designed to measure, it doesn't tell you where the bottleneck is that is responsible for the numbers you are getting and it doesn't take all the factors into account.

 

4 hours ago, manuti said:

Yes, after reading some of your "real" benchmarks I'm starting to be very agnostic about all this kind of stuff.

Well, results in this thread show some numbers related to the I/O performance, nothing more.

If you wanted to really compare filesystems and not underlying storage, you would need to test a lot of other things:

  • CPU utilization by the FS driver in different scenarios
  • memory utilization by the FS driver in different scenarios
  • storage space overhead for the internal FS data
  • comparison of available FS tweaks that may affect the 3 previous points

and for the "scenarios" you may want to test

  • creating a large number of files and directories
  • removing a large number of files and directories
  • renaming or changing metadata for a large number of files and directories
  • appending or removing data in existing files
  • traversing a large directory tree

 

So in the end, if the I/O performance numbers do not differ very much it doesn't mean that the filesystems will show similar results in different real-world scenarios not related to reading or writing file contents.
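
As an illustration of the first two "scenario" bullets above, a trivial (and purely illustrative) sketch would be timing mass creation and removal of small files instead of raw throughput:

cd /mnt/test && mkdir meta && cd meta
time bash -c 'for i in $(seq 1 10000); do echo x > file_$i; done; sync'
time bash -c 'rm -f file_*; sync'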


8 hours ago, zador.blood.stained said:

Well, results in this thread show some numbers related to the I/O performance, nothing more.

 

Exactly. And the focus of this comparison was trying to get a clue about average 'real world' Armbian scenarios, and also about the claim that F2FS would outperform ext4 on slow storage. I was interested in using a limited set of tests to get a clue whether it's worth investigating F2FS as an option for the OS images we provide, and even more how the SD card 'class' (good, average, crappy) affects all this.

 

For example booting times: I provided the 'debug logs' for a reason (to be able to spot mistakes I made and to search for further stuff in the logs). And if we do a search for two common strings in the dmesg output we get confirmation that the booting times of this mainline OS image are affected by neither the random nor the sequential performance of the storage media used:

dmesg|grep $string               Good            Average            Crap
                             F2FS    EXT4      F2FS    EXT4      F2FS    EXT4
'Starting Various fixups'   12.19   12.32     12.88   12.09     12.32   12.19
           'Mounted /tmp'   12.38   12.51     13.12   12.27     12.51   12.38

Another interesting test that reflects a real-world Armbian scenario would be the installation of kernel headers. We already know that users running off really crappy SD cards report hours or even days for this to finish. But I don't know whether I should waste further time producing numbers here since

 

  • the results are already known (don't be stupid, simply use storage with fast random (write) IO, and this is done in minutes)
  • the approach is totally wrong. With passive benchmarking I could come back with numbers that show that the SanDisk Extreme Plus finishes this in 90 seconds while the crappy Class 4 card takes 20 hours. But instead we should focus on improving this (we already started with a 10 minute timeout for header compilation, and if this timeout is triggered we should add a permanent motd warning telling the user to immediately replace his SD card since it's way too slow)

For me the above tests are sufficient to draw the following conclusions already:

  • due to the lack of real advantages and two major disadvantages, a switch to F2FS for any default OS image is not justified (yet)
  • the 'storage performance' issue is more one of documentation and educating users than of choosing filesystems. Instead of encouraging users to try out crappy storage (the 'Class 4 SD card' they found somewhere in the drawer) we should get better at pointing out that choosing a good new SD card is the way to go

 


8 hours ago, tkaiser said:

Another interesting test that reflects a real-world Armbian scenario would be installation of kernel headers.

 

Since I had to wait for other stuff and already had 3 installations on SD cards I thought let's give it a try (with the fast cards first, then letting the crappy 4GB card run for the next hours). Surprisingly my Phoronix style 'benchmark' already finished (times in seconds below):

                                 Good            Average            Crap
                             F2FS    EXT4      F2FS    EXT4      F2FS    EXT4
apt purge linux-headers        17       7        21       9        35      10
apt install linux-headers      37     119       196     165       205     289

I simply ran the two calls above to deinstall the linux-headers package and reinstall it later (no idea whether that relates to the usual real-world scenario with Armbian where most likely the kernel headers get updated instead). The duration in seconds results from looking at /var/log/dpkg.log and subtracting the timestamp of the 'startup packages purge' / 'startup archives unpack' entries from the 'startup packages configure' entry (no idea whether that's correct).
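
The 'math' was roughly this (a sketch assuming the default dpkg.log timestamp format and GNU date; the grep patterns are from memory):

start=$(grep 'startup archives unpack' /var/log/dpkg.log | tail -1 | awk '{print $1" "$2}')
end=$(grep 'startup packages configure' /var/log/dpkg.log | tail -1 | awk '{print $1" "$2}')
echo "$(( $(date -d "$end" +%s) - $(date -d "$start" +%s) )) seconds"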

 

It should be noted that 3 of the runs were done with a full OMV installation on the SD card; after I re-imaged the SD cards the test ran against a fresh Armbian install (so the numbers are without meaning anyway). I also don't trust the 37 seconds for 'apt install linux-headers' on the good SD card with F2FS that much -- most likely it's an error caused by me reading the timestamps in dpkg.log wrong and it should be 60 seconds more (I can't check any more since I've overwritten the SD card for the ext4 test).

 

Anyway it's somewhat surprising to me that kernel header compilation was that fast even on my crappy SD card (maybe it's not crappy enough? Maybe what usually happens -- an apt upgrade -- generates a totally different IO pattern?). But it's still obvious that good or average SD cards should be preferred and crappy ones should be avoided regardless of the FS used.

 


@tkaiser

From what I have seen performance is similar to ext4, and I'm not sure if it still suffers the level of performance degradation over time that it once had.

Just wondering if you Armbian guys ever did any tests on block wear vs. ext4, as I've never really seen many real-world examples.
In a way F2FS should take a dumb old cheap SD card and use the board's CPU to provide high-level wear-levelling algorithms and over-provisioning.
Not really sure how you could capture block wear as a comparison aside from the perf tests, however.

