Showing results for 'ERASE CMD38'.

Found 4 results

  1. Depends. IMO the vast majority of problems with suddenly dying flash media (SD cards or USB pendrives) is related to fake flash: flash memory products that fake their capacity, so all of a sudden they stop working once the total amount of data written exceeds the drive's real capacity (see here for a tool to check for this; a sketch follows at the end of this post). If you manage to buy genuine flash products (it's not the brand that matters but where you buy -- with eBay, AliExpress & Co. the chances of getting fake flash are most probably well above 50%), there are still huge differences. Pine's cheap low-capacity FORESEE eMMC modules are way slower than the Samsung or SanDisk ones Pine and other SBC vendors use for their higher-capacity modules. But no idea about reliability, since AFAIK all you can do here is trust and believe: without extensive testing it's impossible to predict the longevity of SD cards, eMMC and USB flash storage.

     My personal take on this:

     • Try to minimize writes to flash storage ('write amplification' matters here: keep stuff like logs in RAM and write them out only once per hour, and so on)
     • When using low-end flash storage, prefer media that supports TRIM (that's SD cards and most probably also eMMC supporting ERASE CMD38 -- you can then TRIM them manually from time to time, and maybe we'll get filesystem drivers in Linux that support TRIM on (e)MMC too in the future)
     • Weigh eMMC against A1-rated SD cards
     • If huge amounts of important data need to be written to the media, always use SSDs that can be queried with SMART. The better ones provide a SMART attribute that indicates how much the storage is already worn out

     Some more info:
     https://docs.armbian.com/User-Guide_Getting-Started/#how-to-prepare-a-sd-card
     https://forum.armbian.com/topic/6444-varlog-file-fills-up-to-100-using-pihole/?do=findComment&comment=50833
     https://forum.openmediavault.org/index.php/Thread/24128-Asking-for-a-opinion/?postID=182858#post182858
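     For illustration, a minimal sketch of such a capacity check with the F3 suite (assuming F3 is the tool referred to above; the device path is only an example, and f3probe destroys all data on the device):

        # WARNING: destructive test -- double-check the device path first
        sudo f3probe --destructive --time-ops /dev/sdX

     f3probe then reports the usable size it measured versus the size the drive announces, so faked capacity shows up immediately.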
  2. Please be careful since this is an entirely different workload compared to 'using SD cards for the rootfs': storing images and video is more or less only sequential IO, coming with a write amplification close to the optimum of 1. This means that the amounts of data written at the filesystem layer, the block device layer and the flash layer are almost identical. With our use case of running a Linux installation on the SD card this looks totally different, since the majority of writes are of very small sizes, and write amplification with such small chunks of data is way higher: 1 byte changed at the filesystem or block device layer can generate a whole 8K write at the flash layer (a worst-case 1:8192 ratio of 'data that has changed' vs. 'real writes to flash cells'). Please see here for the full details on why this is important, how it matters and what it affects: https://en.wikipedia.org/wiki/Write_amplification

     Our most basic take on that in Armbian happens at the filesystem layer due to mount options, since we use 'noatime,nodiratime,commit=600' by default (an example fstab entry follows at the end of this post). What do they do?

     • noatime prevents the filesystem from generating writes when data is only accessed (by default, access times are recorded in filesystem metadata, which leads to updated filesystem structures and therefore unnecessary writes every time filesystem objects are merely read)
     • nodiratime does the same for directories (not that relevant though, since noatime already covers directories too)
     • commit=600 is the most important one, since it tells the filesystem to flush changes back to disk/card only every 600 seconds (10 minutes)

     Increasing the commit interval from the default 5 to 600 seconds means the majority of writes wait in DRAM and are flushed to disk only every 10 minutes. Those changes sit in the page cache (see here for the basics) and show up as so-called 'dirty pages'. So the amount of dirty pages grows for 10 minutes and drops to 0 after the changes have been flushed to disk. This can be watched nicely with monitoring tools or something as simple as:

        watch -n 5 grep Dirty /proc/meminfo

     While @Fourdee tries to explain 'dirty pages' as something bad or even an indication of degraded performance, it's exactly the opposite: this is just how Linux basics work, with a tunable set for a specific use case (rootfs on SD card).

     To elaborate on the effect, let's think about small changes affecting only 20 bytes every minute. With filesystem defaults (commit interval = 5 seconds) this results in 80KB written within 10 minutes (each write affects at least a whole flash page, and that's AFAIK 8K or 16K on most cards, so at least 8K * 10 writes), while with a 10-minute commit interval only a single 8KB page will be written. Ten times less wear.

     But unfortunately it's even worse with installations where users run off low-capacity SD cards. To my knowledge we still have no TRIM functionality with MMC storage (SD cards, eMMC) in Linux, so once the total amount of data written to the card exceeds its native capacity, the card's controller has no clue how to distinguish between free and occupied space and therefore has to start deleting (there's no overwrite with flash storage, see for example this explanation). All new writes may now affect not just pages but whole so-called 'erase blocks', which can be much larger (4MB or 16MB, for example, on all the cards I use). This is explained here, for example. In such a case (amount of writes exceeds the card's native capacity) we're talking about writes affecting erase blocks that might be 4MB in size.

     With the above example of 20 bytes changing every minute, the default commit interval of 5 seconds would now result in even 40MB written at the flash layer, while with a 10-minute commit interval it's 4MB (all with just 200 bytes having changed in reality). So if you really care about the longevity of your card: buy good cards with capacities much 'larger than needed', clone them from time to time to another card, and then perform a TRIM operation manually by using your PC or Mac and the SD Association's 'SD Formatter' to do a quick erase there. This sends ERASE (CMD38) for all flash pages to the card's controller, which afterwards treats all pages as really empty, so new writes to the card no longer trigger handling of whole erase blocks but happen at the page-size level again (until the card's capacity has been fully written once more, then you would need to repeat the process).

     There's a downside to an increased commit interval, as usual, and it concerns unsafe power-offs / crashes: everything that sits in the page cache and has not yet been flushed back to disk/card is lost in case of a power loss or similar. On the bright side, the higher commit interval makes it less likely that you run into filesystem corruption, since filesystem structures on disk are also updated only every 10 minutes.

     Besides that, we try to cache other stuff in RAM as much as possible (e.g. browser caches and user profiles using 'profile-sync-daemon'), and the same goes for log files, which are among the candidates showing the worst write amplification possible when logs are allowed to be updated on 'disk' every few seconds (unfortunately we can't just throw logs away as DietPi does by default, so we have to fiddle around with stuff like our log2ram implementation, which shows lots of room for improvement).
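     As a minimal sketch of what the mount options discussed above look like in practice (device path and the remaining fields are assumptions; Armbian images ship with this preconfigured, usually referencing the partition by UUID):

        # rootfs on SD card/eMMC with relaxed flushing to reduce write amplification
        /dev/mmcblk0p1  /  ext4  defaults,noatime,nodiratime,commit=600,errors=remount-ro  0  1

     A changed /etc/fstab entry can be applied without rebooting via 'sudo mount -o remount /'.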
  3. Usually we use 'ondemand' in Armbian (interactive is only useful on some sunxi legacy kernels) but apply the following tweaks: https://github.com/armbian/build/blob/751aa7194f77eabcb41b19b8d19f17f6ea23272a/packages/bsp/common/etc/init.d/armhwinfo#L82-L94

     So the eMMC on this board is not as slow as on earlier board revisions (I had the opportunity to verify this recently, since I let f3write/f3read run yesterday and the tool reported an average write speed of just 6.3MB/s on my BPi M2+; a usage sketch follows at the end of this post). And what you observed is only some form of eMMC-internal caching: as long as the amount of data is small enough, write speeds are high, only to drop after some time. This is easy to test for by combining two benchmark runs:

        iozone -e -I -a -s 3000M -r 16384k -i 0 ; iozone -e -I -a -s 100M -r 16384k -i 0

     The 2nd iozone call will report the real write speed without any caching involved. Similar effects can be observed with small/cheap SSDs too (I have five small 120/128 GB ones lying around here just for testing; four show write performance of 300+ MB/s but slow down to as low as 60 MB/s after a certain amount of data has been written; only my most recent one, a Transcend TS120GMTS420, is able to retain write performance of 500+ MB/s regardless of how much data is written).

     And of course every write to flash media shortens its life, but the FTL (flash translation layer) takes care of wear leveling to spread writes evenly across flash cells, so the most important thing to keep in mind is 'write amplification'. And yes, on most flash media performance degradation happens once the total amount of data written exceeds the storage capacity, since the FTL (controller), lacking TRIM, has no clue which pages contain data and which do not, and starts to move around unused data as part of garbage collection and wear leveling (that's why quality SD cards can be considered the better choice compared to eMMC: with SD cards you can use the SD Association's SD Formatter after some time and send an ERASE CMD38 to the device, which restores factory performance -- see the last 3 paragraphs here).
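     For reference, a rough sketch of such an f3write/f3read run (the mount point is an assumption; f3write fills the free space on the mounted card with test files which f3read then reads back and verifies, both reporting average transfer speeds):

        # point the tools at the mount point of the card/eMMC under test
        f3write /mnt/test
        f3read /mnt/test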
  4. So this is a 64GB Samsung. EVO, EVO+, Pro or Pro+? Anyway, the 1% reserve is still present, and this is not 'the average' SD card: Samsung cards all contain rather good controllers and NAND dies.

     Regarding 'reserve sectors' and overprovisioning: if a card claims to be n bytes in capacity, it internally has a larger capacity. This extra space is used as reserve (if the controller detects bad sectors, reserve sectors are mapped in) and to allow somewhat ok-ish write performance when the card gets full. On flash media you can't overwrite directly; it's always a very time-consuming read/erase/write cycle. The number of times a flash cell can be written is determined by the count of program/erase (P/E) cycles it is designed for, and the controller has to take care that all flash cells wear out equally (wear leveling). Since the controller has no idea which sectors contain real data and which do not (there's no TRIM support for SD cards), as soon as you have completely filled the card once (all space partitioned), the controller from then on considers every sector to contain useful data (even if you deleted the data in the meantime -- without TRIM the controller can't know what's empty, so the whole capacity is considered in use). Now only the 'reserve sectors' are available to perform read/erase/write cycles, and if this amount of sectors is small, things slow down a lot on average SD cards (not on those more recent Samsungs). Just check articles explaining SSDs and keep in mind that SD cards behave like crappy SSDs from a decade ago: slow/primitive controllers and no TRIM support.

     BTW: this is the only great use case for the SD Association's 'SD Formatter'. The tool is meant to format SD cards appropriately (it partitions them and chooses the 'correct' filesystem, which is either FAT or exFAT), which is obviously pretty useless from Armbian's perspective, since burning an OS image as the next step overwrites both the partition table and any filesystems present before. So why use SD Formatter in the first place? Because it implements ERASE CMD38: it tells the card's controller that no sector/block of the card contains real data any more, so everything can be considered empty. On 'the average' SD card this might also restore horribly low performance back to 'factory default' performance. More recent SD card controllers, especially when paired with many reserve sectors, aren't that much affected. With Armbian's auto-resize behaviour leaving at least 1% unpartitioned, all that happens in the background is that we ensure more sectors are available for the controller's housekeeping, which prevents older cards from becoming slow as hell once they've been 'filled completely'.

     Edit: a small note regarding 'SD cards don't support TRIM'. The SD protocol defines a block erase command, and tools like fstrim are supposed to do the job. Whether your kernel + SD card combination supports it, a simple 'sudo fstrim -v /' might tell (see the sketch below). Whether this has the desired effect is a different question though (see this attempt to test it -- I'm not sure the method is sufficient, since marking data segments as erased should not involve overwriting them; the point is just that the SD card's controller knows that specific sectors/pages can be added to the wear-leveling pool since they're marked 'empty' now).
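     As a sketch, such a quick check might look like this (the device name is an assumption; zeros in the DISC-GRAN/DISC-MAX columns mean the kernel exposes no discard support for the device):

        # does the kernel expose discard/TRIM support for this card?
        lsblk --discard /dev/mmcblk0
        # if it does, trim free space on the mounted rootfs and report how much was trimmed
        sudo fstrim -v /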