Handling Ceph "no space left" errors: the 100000-file directory limit

张佃栋 | Ceph, Linux operations | 2018-08-03 | 821 views

While importing files into CephFS, writes failed with "no space left" even though the disks had plenty of free space. The official documentation explains:

A directory fragment is eligible for splitting when its size exceeds mds_bal_split_size (default 10000). Ordinarily this split is delayed by mds_bal_fragment_interval, but if the fragment size exceeds a factor of mds_bal_fragment_fast_factor times the split size, the split will happen immediately (holding up any client metadata IO on the directory).

mds_bal_fragment_size_max is the hard limit on the size of directory fragments. If it is reached, clients will receive ENOSPC errors if they try to create files in the fragment. On a properly configured system, this limit should never be reached on ordinary directories, as they will have split long before. By default, this is set to 10 times the split size, giving a dirfrag size limit of 100000. Increasing this limit may lead to oversized directory fragment objects in the metadata pool, which the OSDs may not be able to handle.

A directory fragment is eligible for merging when its size is less than mds_bal_merge_size. There is no merge equivalent of the "fast splitting" explained above: fast splitting exists to avoid creating oversized directory fragments; there is no equivalent issue to avoid when merging. The default merge size is 50.
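The split/merge thresholds described above can be sketched as a small decision function. This is an illustrative model, not actual Ceph source; the constant names mirror the documented options, and the fast-split factor of 1.5 is an assumed default:

```python
# Sketch of the MDS dirfrag split/merge decision described in the docs.
# Values mirror the documented defaults; 1.5 for the fast factor is assumed.
MDS_BAL_SPLIT_SIZE = 10000
MDS_BAL_FRAGMENT_FAST_FACTOR = 1.5
MDS_BAL_FRAGMENT_SIZE_MAX = 10 * MDS_BAL_SPLIT_SIZE  # hard limit: 100000
MDS_BAL_MERGE_SIZE = 50

def dirfrag_action(frag_size: int) -> str:
    """Return what the MDS would do for a fragment of this size."""
    if frag_size >= MDS_BAL_FRAGMENT_SIZE_MAX:
        return "ENOSPC"         # clients creating files here get ENOSPC
    if frag_size > MDS_BAL_SPLIT_SIZE * MDS_BAL_FRAGMENT_FAST_FACTOR:
        return "split-now"      # immediate split, client metadata IO held up
    if frag_size > MDS_BAL_SPLIT_SIZE:
        return "split-delayed"  # split after mds_bal_fragment_interval
    if frag_size < MDS_BAL_MERGE_SIZE:
        return "merge"          # fragment eligible for merging
    return "ok"
```

This makes the failure mode in this article concrete: a directory that somehow reaches 100000 entries in one fragment stops accepting new files, even though splitting should normally have happened long before.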


So the dirfrag size limit for a CephFS directory is 100000 entries. To raise it, increase the mds_bal_fragment_size_max parameter, keeping in mind the warning above about oversized dirfrag objects in the metadata pool.
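A persistent way to raise the limit is in ceph.conf on the MDS nodes; the value 200000 below is only an example:

```ini
[mds]
# hard limit on entries per directory fragment (default 100000);
# raising it risks oversized dirfrag objects in the metadata pool
mds_bal_fragment_size_max = 200000
```

To apply it at runtime without a restart, injecting the setting into the running MDS daemons should also work, e.g. `ceph tell mds.* injectargs '--mds_bal_fragment_size_max 200000'` (on recent releases, `ceph config set mds mds_bal_fragment_size_max 200000` is the equivalent).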

This article is from 张佃栋's blog; please credit the source and include a link when reposting.

Permalink: https://zhangdd.com/741.html
