Volume management

A BTRFS filesystem can be created on top of a single block device or span multiple block devices. Devices can then be added, removed or replaced on demand. Data and metadata are organized in allocation profiles with various redundancy policies. There's some similarity with traditional RAID levels, which could be confusing to users familiar with the traditional meaning; due to the similarity, the RAID terminology is nevertheless used widely in the documentation. See mkfs.btrfs(8) for more details and the exact profile capabilities and constraints.
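For illustration, a multi-device filesystem with explicit profiles can be created at mkfs time; the device paths below are hypothetical, and the profile options are documented in mkfs.btrfs(8):

```shell
# Create a filesystem over two devices (hypothetical paths), with
# data striped (raid0) and metadata mirrored (raid1).
mkfs.btrfs -d raid0 -m raid1 /dev/sdx /dev/sdy
```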

Device management works on a mounted filesystem. Devices can be added, removed or replaced using the commands provided by btrfs device and btrfs replace.
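As a sketch, a failing device can be replaced while the filesystem stays mounted (device names here are hypothetical):

```shell
# Replace /dev/sda with /dev/sdc online; -r avoids reading from the
# source device where another good copy exists (useful if it is failing).
btrfs replace start -r /dev/sda /dev/sdc /mnt

# The operation runs in the background; check its progress.
btrfs replace status /mnt
```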

The profiles can also be changed, provided there's enough workspace to do the conversion, using the btrfs balance command, namely its convert filter.

Type

The block group profile type is the main distinction of the information stored on the block device. User data are called Data; the internal data structures managed by the filesystem are called Metadata and System.

Profile

A profile describes an allocation policy based on the redundancy/replication constraints in connection with the number of devices. The profile applies to data and metadata block groups separately. E.g. single, RAID1.

RAID level

Where applicable, the level refers to a profile that matches constraints of the standard RAID levels. At the moment the supported ones are: RAID0, RAID1, RAID10, RAID5 and RAID6.

Typical use cases

Starting with a single-device filesystem

Assume we've created a filesystem on the block device /dev/sda with the profile single/single (data/metadata). The device size is 50GiB and we've used the whole device for the filesystem. The mount point is /mnt.

The amount of data stored is 16GiB, and 2GiB has been allocated for metadata.
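The current allocation can be inspected at any point (the exact output layout varies by btrfs-progs version, so it is not reproduced here):

```shell
# Show per-profile allocation (Data, Metadata, System) on the mounted fs.
btrfs filesystem df /mnt
```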

Add new device

We want to increase the total size of the filesystem and keep the profiles. The size of the new device /dev/sdb is 100GiB.

$ btrfs device add /dev/sdb /mnt

The amount of free data space increases by less than 100GiB, because some of the new space is allocated for metadata.
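Note that adding a device does not move existing block groups; they stay on /dev/sda until rewritten. If you want the existing data spread across both devices right away, an optional full balance can be run (see btrfs-balance(8)):

```shell
# Optional: redistribute existing block groups over all devices.
# --full-balance confirms the (potentially long) unfiltered run.
btrfs balance start --full-balance /mnt
```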

Convert to RAID1

Now we want to increase the redundancy level of both data and metadata, but we'll do that in steps. Note that the device sizes are not equal, and we'll use that to show the capabilities of split data/metadata and independent profiles.

The constraint for RAID1 gives us at most 50GiB of usable space, with exactly 2 copies stored on the devices.
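The 50GiB figure follows from the 2-copy constraint: every chunk needs a copy on two different devices, so with two devices the smaller one is the limit. A quick sanity check of that arithmetic, using the sizes from this example:

```shell
# Device sizes from the example (GiB).
dev_a=50
dev_b=100

# RAID1 keeps exactly 2 copies on different devices; with two devices
# the usable space is bounded by the smaller device.
raid1_usable=$(( dev_a < dev_b ? dev_a : dev_b ))
echo "RAID1 usable: ${raid1_usable}GiB"
```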

First we’ll convert the metadata. As the metadata occupy less than 50GiB and there’s enough workspace for the conversion process, we can do:

$ btrfs balance start -mconvert=raid1 /mnt

This operation can take a while, because all metadata have to be moved and all block pointers updated. Depending on the physical locations of the old and new blocks, the disk seeking is the key factor affecting performance.

You'll note that the system block group has also been converted to RAID1; this normally happens because the system block group also holds metadata (the physical-to-logical mappings).

What changed:

  • available data space decreased by 3GiB, usable roughly (50 - 3) + (100 - 3) = 144 GiB

  • metadata redundancy increased
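The space estimate in the first point can be checked directly (3GiB per device is now held by the RAID1 metadata copies; figures from this example):

```shell
# Per-device space left for single-profile data after RAID1 metadata
# reserved 3GiB on each device (example figures).
data_a=$(( 50 - 3 ))
data_b=$(( 100 - 3 ))
data_total=$(( data_a + data_b ))
echo "usable data space: ${data_total}GiB"
```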

IOW, the unequal device sizes allow for combined space for data yet improved redundancy for metadata. If we decide to increase redundancy of data as well, we’re going to lose 50GiB of the second device for obvious reasons.

$ btrfs balance start -dconvert=raid1 /mnt

The balance process needs some workspace (i.e. a free device space without any data or metadata block groups) so the command could fail if there’s too much data or the block groups occupy the whole first device.
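Before starting the conversion it is worth checking that enough unallocated space remains on each device (the usage report includes a per-device Unallocated figure):

```shell
# Check unallocated (workspace) per device before the balance.
btrfs filesystem usage /mnt
```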

The device size of /dev/sdb as seen by the filesystem remains unchanged, but the logical space from 50-100GiB will be unused.

Remove device

Device removal must satisfy the profile constraints, otherwise the command fails. For example:

$ btrfs device remove /dev/sda /mnt
ERROR: error removing device '/dev/sda': unable to go below two devices on raid1

In order to remove a device in this case, you first need to convert the profile:

$ btrfs balance start -mconvert=dup -dconvert=single /mnt
$ btrfs device remove /dev/sda /mnt