Hardware considerations

Storage model

A storage model captures the key physical aspects of how data is structured in a data store. A filesystem is the logical structure that organizes data on top of the storage device.

The filesystem assumes several features or limitations of the storage device and utilizes them or applies measures to guarantee reliability. BTRFS in particular is based on a COW (copy-on-write) mode of writing, i.e. not updating data in place but rather writing a new copy to a different location and then atomically switching the pointers.
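
To make the idea concrete, here is a minimal sketch of a copy-on-write update using plain file IO, with fsync() standing in for the device flush discussed below; the block size, offsets and pointer layout are invented for illustration and do not correspond to btrfs internals:

    import os

    BLOCK_SIZE = 4096  # illustrative block size

    def cow_update(fd, new_data, free_offset, pointer_offset):
        """Copy-on-write update: the old block is never overwritten in
        place. A full new copy is written elsewhere, made durable, and
        only then does a single small write switch the pointer to it."""
        assert len(new_data) == BLOCK_SIZE
        os.pwrite(fd, new_data, free_offset)  # write the new copy
        os.fsync(fd)                          # make it durable first
        ptr = free_offset.to_bytes(8, "little")
        os.pwrite(fd, ptr, pointer_offset)    # atomically switch the pointer
        os.fsync(fd)

A crash before the final pointer write leaves the old copy intact; a crash after it leaves the new one, so a consistent version survives either way.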

In an ideal world, the device does what it promises. The filesystem assumes that this may not be true, so additional mechanisms are applied to either detect misbehaving hardware or get valid data by other means. The devices may (and do) apply their own detection and repair mechanisms, but we won’t assume any.

The following assumptions about storage devices are considered (sorted by importance; the numbers are used for reference below):

  1. atomicity of reads and writes of blocks/sectors (the smallest unit of data the device presents to the upper layers)

  2. there’s a flush command that instructs the device to forcibly order writes before and after the command; alternatively there’s a barrier command that facilitates the ordering but may not flush the data

  3. data sent to write to a given device offset will be written without further changes to the data and to the offset

  4. writes can be reordered by the device, unless explicitly serialized by the flush command

  5. reads and writes can be freely reordered and interleaved

The consistency model of BTRFS builds on these assumptions. The logical data updates are grouped into a generation, written to the device, serialized by the flush command, and then the super block is written, ending the generation. No logical links among the metadata comprising a consistent view of the data may cross the generation boundary.
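
A sketch of this commit ordering under the same conventions (fsync() standing in for the flush command; the superblock fields and layout are made up for illustration):

    import os
    import struct

    def commit_generation(fd, dirty_blocks, super_offset, generation, root_ptr):
        """Write all updates belonging to one generation, flush, then write
        the superblock that ends the generation, then flush again."""
        for offset, data in dirty_blocks:  # new COW copies, never in place
            os.pwrite(fd, data, offset)
        os.fsync(fd)                       # the whole generation is durable
        # a single-sector superblock write publishes the new root and generation
        sb = struct.pack("<QQ", generation, root_ptr).ljust(512, b"\0")
        os.pwrite(fd, sb, super_offset)
        os.fsync(fd)                       # the generation is now closed

If the flush really flushes (assumption 2), a crash can only lose the whole uncommitted generation; it can never mix two generations.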

When things go wrong

No or partial atomicity of block reads/writes (1)

  • Problem: partial block contents are written (a torn write), e.g. due to a power glitch or other electronics failure during the read/write

  • Detection: checksum mismatch on read

  • Repair: use another copy or rebuild from multiple blocks using some encoding scheme (see the sketch below)
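
A sketch of that detect-and-repair path for a redundant profile such as RAID1; zlib.crc32 stands in for the default CRC32C checksum, and the offsets of the copies are assumed to be known:

    import os
    import zlib

    BLOCK_SIZE = 4096

    def read_with_repair(fd, copy_offsets, expected_csum):
        """Return the first stored copy whose checksum matches; a torn or
        corrupted copy is simply skipped. A real filesystem would also
        rewrite the bad copies from the good one."""
        for offset in copy_offsets:
            data = os.pread(fd, BLOCK_SIZE, offset)
            if zlib.crc32(data) == expected_csum:
                return data
        raise IOError("all copies failed checksum verification")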

The flush command does not flush (2)

This is perhaps the most serious problem, and one that is impossible for a filesystem to mitigate without limitations and design restrictions. In the worst case, writes from one generation bleed into another while the filesystem still considers the generations isolated. A crash at any point would then leave the data on the device in an inconsistent state, without any hint of what exactly got written and what is missing, leading to stale metadata link information.

Devices usually honor the flush command, but for performance reasons may do internal caching, where the flushed data are not yet persistently stored. A power failure could lead to a similar scenario as above, although it’s less likely that later writes would be written before the cached ones. This is beyond what a filesystem can take into account. Devices or controllers are usually equipped with batteries or capacitors to write the cache contents even after power is cut (a battery-backed write cache).

Data get silently changed on write (3)

Such a thing should not happen frequently, but it still can happen spuriously due to the complex internal workings of devices or physical effects of the storage media itself.

  • Problem: while the data are written atomically, the contents get changed

  • Detection: checksum mismatch on read

  • Repair: use another copy or rebuild from multiple blocks using some encoding scheme

Data get silently written to another offset (3)

This would be another serious problem, as the filesystem has no information when it happens. For that reason, the measures have to be taken ahead of time. This problem is also commonly called a ghost write.

The metadata blocks have the checksum embedded in the blocks, so a correct atomic write to the wrong offset would not corrupt the checksum; it’s merely likely that after reading such a block the data inside would not be consistent with the rest. To rule that out, there’s a block number embedded in the metadata block. It’s the logical block number, because this is what the logical structure expects and verifies.
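
A sketch of such a self-describing metadata block; the header layout here is invented (the real btrfs header has more fields), but it shows how an internally valid block read from the wrong offset gets rejected:

    import struct
    import zlib

    HDR = struct.Struct("<QI")  # hypothetical header: logical bytenr + checksum

    def pack_metadata_block(logical_bytenr, payload):
        return HDR.pack(logical_bytenr, zlib.crc32(payload)) + payload

    def verify_metadata_block(block, expected_bytenr):
        bytenr, csum = HDR.unpack(block[:HDR.size])
        payload = block[HDR.size:]
        if zlib.crc32(payload) != csum:
            raise ValueError("checksum mismatch: corrupted block")
        if bytenr != expected_bytenr:
            # an internally consistent block at the wrong location:
            # this is where a ghost write is caught
            raise ValueError("block claims bytenr %d, expected %d"
                             % (bytenr, expected_bytenr))
        return payload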

The following is based on information publicly available, user feedback, community discussions or bug report analyses. It’s not complete and further research is encouraged when in doubt.

Main memory

The data structures and raw data blocks are temporarily stored in computer memory before they get written to the device. It is critical that memory is reliable because even simple bit flips can have vast consequences and lead to damaged structures, not only in the filesystem but in the whole operating system.

Based on experience in the community, memory bit flips are more common than one would think. When one happens, it’s reported by the tree-checker or by a checksum mismatch after reading blocks. There are some very obvious instances of bit flips, e.g. in an ordered sequence of keys in metadata blocks, where we can easily infer from the other data which values got damaged and how (see the sketch below). However, fixing that is not straightforward and would require cross-referencing data from the entire filesystem to see the scope.
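
An illustrative sketch of that inference on an ordered key sequence; the real tree-checker only detects such damage and does not attempt repair:

    def find_bit_flip(keys):
        """Find an ordering violation in a strictly increasing key list and
        guess which single bit flip in which key would restore the order."""
        for i in range(1, len(keys)):
            if keys[i] <= keys[i - 1]:   # ordering violated here
                for j in (i - 1, i):     # either neighbour may be the victim
                    for bit in range(64):
                        fixed = keys[:j] + [keys[j] ^ (1 << bit)] + keys[j + 1:]
                        if all(a < b for a, b in zip(fixed, fixed[1:])):
                            return j, bit
        return None

    # key 0x1000 with bit 30 flipped reads as 0x40001000, breaking the order
    print(find_bit_flip([0x0800, 0x40001000, 0x2000, 0x3000]))  # -> (1, 30)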

If available, ECC memory should lower the chances of bit flips, but this type of memory is not available in all cases. A memory test should be performed in case there’s a visible bit flip pattern, though this may not detect a faulty memory module, because the actual load of the system could be the factor making the problems appear. In recent years, attacks on how memory modules operate have been demonstrated (rowhammer), achieving flips of specific bits. While these were targeted, they show that a series of reads or writes can affect unrelated parts of memory.

Block group profiles with redundancy (like RAID1) will not protect against memory errors, as the blocks are first stored in memory and then written to all devices from that same source.

A filesystem mounted read-only will almost never write to the underlying block device (with highly unlikely exceptions). One exception is a tree-log that needs to be replayed during mount (and before the read-only mount takes place); working memory is needed for that and can be affected by bit flips. There’s also a theoretical case where a bit flip changes the filesystem status from read-only to read-write.

What to do:

  • run memtest; note that sometimes memory errors happen only when the system is under a heavy load that the default memtest cannot trigger

  • memory errors may appear as the filesystem going read-only due to a “pre-write” check that verifies metadata before it gets written and fails some basic consistency checks

  • newly built systems should be tested before being put into production use, ideally with the IO/CPU load that will run on the system later; this applies especially to systems that will utilize overclocking or special performance features

Direct memory access (DMA)

Another class of errors is related to DMA (direct memory access) performed by device drivers. While this could be considered a software error, the data transfers that happen without CPU assistance may accidentally corrupt other pages. Storage devices utilize DMA for performance reasons, and the filesystem structures and data pages are passed back and forth, making errors possible in case page lifetimes are not properly tracked.

There are lots of quirks (device-specific workarounds) in Linux kernel drivers (regarding not only DMA) that are added as problems are found. The quirks may avoid specific errors or disable some features to avoid worse problems.

What to do:

  • use up-to-date kernel (recent releases or maintained long term support versions)

  • as this may be caused by faulty drivers, keep the systems up-to-date

Rotational disks (HDD)

Rotational HDDs typically fail at the level of individual sectors or small clusters. Read failures are caught at the levels below the filesystem and are returned to the user as EIO - Input/output error. Reading the blocks repeatedly may return the data eventually, but this is better done by specialized tools; the filesystem takes the result of the lower layers. Rewriting the sectors may trigger internal remapping, but this inevitably leads to data loss.
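
Purely for illustration, this is the kind of retry loop such a specialized tool performs, assuming a raw device opened read-only and failures surfacing as EIO:

    import errno
    import os

    def stubborn_read(fd, length, offset, attempts=16):
        """Retry a read that fails with EIO; marginal sectors sometimes
        return data on a later pass. Dedicated rescue tools do this far
        more carefully than a filesystem ever should."""
        for _ in range(attempts):
            try:
                return os.pread(fd, length, offset)
            except OSError as e:
                if e.errno != errno.EIO:
                    raise
        raise OSError(errno.EIO, "unreadable after %d attempts" % attempts)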

Disk firmware is technically software but from the filesystem perspective is part of the hardware. IO requests are processed, and caching or various other optimizations are performed, which may lead to bugs under high load or unexpected physical conditions or unsupported use cases.

Disks are connected by cables with two ends, both of which can cause problems when not attached properly. Data transfers are protected by checksums and the lower layers try hard to transfer the data correctly or not at all. The errors from badly connecting cables may manifest as a large amount of failed read or write requests, or as short error bursts, depending on physical conditions.

What to do:

  • check smartctl for potential issues

Solid state drives (SSD)

The mechanism of information storage is different from HDDs and this affects the failure mode as well. The data are stored in cells grouped in large blocks with a limited number of resets and other write constraints. The firmware tries to avoid unnecessary resets and performs optimizations to maximize the storage media lifetime. The known techniques are deduplication (blocks with the same fingerprint/hash are mapped to the same physical block), compression, or internal remapping and garbage collection of used memory cells. Due to the additional processing, there are measures to verify the data, e.g. by ECC codes.
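
A toy model of the firmware-level deduplication mentioned above; real firmware is opaque and vendor-specific, so the structure below is purely illustrative:

    import hashlib

    class DedupFTL:
        """Blocks with the same fingerprint share one physical block, so
        repeated contents cost a remap instead of a real media write."""
        def __init__(self):
            self.by_hash = {}    # fingerprint -> physical block number
            self.mapping = {}    # logical block -> physical block
            self.next_phys = 0   # counts actual media writes

        def write(self, logical, data):
            fingerprint = hashlib.sha256(data).digest()
            phys = self.by_hash.get(fingerprint)
            if phys is None:              # unseen contents: a real write
                phys = self.next_phys
                self.next_phys += 1
                self.by_hash[fingerprint] = phys
            self.mapping[logical] = phys  # duplicates only update the map

    ftl = DedupFTL()
    for lba in range(1000):
        ftl.write(lba, b"\0" * 4096)  # identical blocks: one media write
    print(ftl.next_phys)              # 1; unique (e.g. encrypted) data: 1000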

Observations of failing SSDs show that the whole electronics fails at once or a lot of data is affected (e.g. everything stored on one chip). Recovering such data may need specialized equipment, and reading the data repeatedly does not help, as it may with HDDs.

There are several technologies of the memory cells with different characteristics and price. The lifetime is directly affected by the type and frequency of data written. Writing “too much” distinct data (e.g. encrypted) may render the internal deduplication ineffective and lead to a lot of rewrites and increased wear of the memory cells.

There are several technologies and manufacturers, so it’s hard to describe them in general, but there are some that exhibit similar behaviour:

  • an expensive SSD will use more durable memory cells and is optimized for reliability and high load

  • a cheap SSD is designed for a lower load (“desktop user”) and is optimized for cost; it may employ the optimizations and/or extended error reporting partially or not at all

It’s not possible to reliably determine the expected lifetime of an SSD due to a lack of information about how it works or a lack of reliable stats provided by the device.

Metadata writes tend to be the biggest component of lifetime writes to an SSD, so there is some value in reducing them. Depending on the device class (high end/low end), features like the DUP block group profile may affect the reliability either way:

  • high end devices are typically more reliable, and using the single profile for both data and metadata could be suitable to reduce device wear

  • low end devices could lack the ability to identify errors, so additional redundancy at the filesystem level (checksums, DUP) could help

Only users who consume 50 to 100% of the SSD’s actual lifetime writes need to be concerned by the write amplification of btrfs DUP metadata. Most users will be far below 50% of the actual lifetime, or will write the drive to death and discover how many writes 100% of the actual lifetime was. SSD firmware often adds its own write multipliers that can be arbitrary and unpredictable and dependent on application behavior, and these will typically have far greater effect on SSD lifespan than DUP metadata. It’s more or less impossible to predict when an SSD will run out of lifetime writes to within a factor of two, so it’s hard to justify wear reduction as a benefit.
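
A rough back-of-the-envelope version of this argument; every number below is invented for illustration only:

    # Hypothetical figures, chosen only to show the scale of the effect.
    rated_tbw       = 300.0  # endurance rating of the drive, TB written
    daily_writes_tb = 0.02   # ~20 GB/day, a light desktop workload
    metadata_share  = 0.5    # assumed fraction of writes that are metadata

    single_profile = daily_writes_tb                         # metadata stored once
    dup_profile    = daily_writes_tb * (1 + metadata_share)  # DUP doubles metadata

    print(rated_tbw / single_profile / 365)  # ~41 years to reach the rating
    print(rated_tbw / dup_profile / 365)     # ~27 years: both far beyond a
                                             # realistic drive lifetime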

What to do:

  • run smartctl or self-tests to look for potential issues

  • keep the firmware up-to-date

NVM Express, non-volatile memory (NVMe)

NVMe is a type of persistent memory usually connected over a system bus (PCIe) or a similar interface, with speeds an order of magnitude faster than SSDs. It is also a non-rotating type of storage, and is not typically connected by a cable. It’s not a SCSI type device either, but rather a complete specification for a logical device interface.

In a way, the errors could be compared to a combination of the SSD class and regular memory. Errors may exhibit as random bit flips or IO failures. There are tools to access the internal log (nvme log and nvme-cli) for a more detailed analysis.

There are separate error detection and correction steps performed, e.g. on the bus level, and in most cases errors never make it to the filesystem level. Once that happens, it could mean there’s some systematic error like overheating or a bad physical connection of the device. You may want to run self-tests (using smartctl).

Drive firmware

Firmware is technically still software, but embedded into the hardware. As all software has bugs, so does firmware. Storage devices can update the firmware to fix known bugs. In some cases it’s possible to avoid certain bugs by quirks (device-specific workarounds) in the Linux kernel.

A faulty firmware can cause a wide range of corruptions, from small and localized ones to large ones affecting lots of data. Self-repair capabilities may not be sufficient.

What to do:

  • check for firmware updates in case there are known problems; note that updating firmware can be risky in itself

  • use up-to-date kernel (recent releases or maintained long term support versions)

SD flash cards

There are a lot of devices with low power consumption that thus use storage media based on low power consumption too, typically flash memory stored on a chip enclosed in a detachable card package. An improperly inserted card may be damaged by electrical spikes when the device is turned on or off. The chips storing the data may in turn be damaged permanently. All types of flash memory have a limited number of rewrites, so the data are internally translated by an FTL (flash translation layer). This is implemented in firmware (technically software) and is prone to bugs that manifest as hardware errors.
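
A toy sketch of what an FTL does (wear-aware remapping of logical blocks); real FTLs are opaque firmware, which is exactly why their bugs look like hardware errors:

    class WearLevelFTL:
        """Every logical overwrite is redirected to the least-worn free
        physical block instead of rewriting the same cells in place."""
        def __init__(self, nblocks):
            self.erase_count = [0] * nblocks  # wear per physical block
            self.mapping = {}                 # logical -> physical
            self.free = set(range(nblocks))

        def write(self, logical):
            old = self.mapping.get(logical)
            if old is not None:               # the old copy becomes garbage
                self.erase_count[old] += 1    # and needs an erase to reuse
                self.free.add(old)
            phys = min(self.free, key=self.erase_count.__getitem__)
            self.free.remove(phys)
            self.mapping[logical] = phys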

Adding redundancy, like using DUP profiles for both data and metadata, can help in some cases, but a full backup might be the best option once problems appear, and replacing the card could be required as well.

Hardware as the main source of filesystem corruptions

If you use unreliable hardware and don’t know about that, don’t blame the filesystem when it tells you.