How to Select Distributed File System for Linux

There are dozens of file systems, and all of them provide user interfaces for data storage. Each system is good in its own way, but there is considerable confusion about how to select a distributed file system for Linux.

In this article we try to resolve that confusion and help you select a distributed file system for Linux based on a set of practical criteria.

In this age of high load and petabytes of data to process, it is fairly easy to find what you need: you only have to think about distributed data, load balancing, multi-node read-write mounting, and the other charms of clustering.

 

Objective: to organize distributed file storage
– Without custom-built kernels, modules, or patches,
– With the ability to mount in multi-node read-write mode,
– POSIX compatibility,
– Failover,
– Compatibility with the technologies already in use,
– A reasonable overhead for I/O operations compared with local file systems,
– Ease of configuration, maintenance, and administration.

In this article we use Proxmox and OpenVZ container virtualization. It is convenient, it is fast, and for our projects and our realities it has more advantages than similar products. The storage itself is mounted over FC.

OCFS2

We had successful experience with this file system in the past, so we decided to try it first. Proxmox has recently moved to a Red Hat-based kernel in which OCFS2 support is disabled. The module is present in the kernel, but the OpenVZ and Proxmox forums do not recommend using it. We tried anyway and rebuilt the kernel: module version 1.5.0, a cluster of four physical machines running Debian Squeeze, Proxmox 2.0beta3, kernel 2.6.32-6-pve. For our tests we used stress. The problems have remained the same for several years: the initial setup takes half an hour at most, but under load the cluster can spontaneously fall apart, leading to a total kernel panic on all servers at once. In a single day the test machines rebooted a total of five times. It can be cured, but bringing the system to a working state is quite hard, and we also had to recompile the kernel to enable OCFS2. A minus.
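
For reference, the basic OCFS2 bring-up on a shared block device and the kind of load we applied look roughly like the sketch below; the device path, label, slot count, and stress parameters are illustrative, not our exact values:

    # /etc/ocfs2/cluster.conf must already describe all four nodes
    /etc/init.d/o2cb online ocfs2           # bring the o2cb cluster stack online on each node
    mkfs.ocfs2 -N 4 -L shared /dev/sdb1     # run once: 4 node slots on the shared FC LUN
    mount -t ocfs2 /dev/sdb1 /mnt/shared    # repeat the mount on every node

    # load generation with the stress utility (parameters are examples only)
    ( cd /mnt/shared && stress --cpu 4 --io 2 --hdd 2 --hdd-bytes 1G --timeout 3600 )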

GFS2

Although the kernel is Red Hat-based and the module is enabled by default, we could not get anywhere here either. The problem is Proxmox, which in its second version came up with its own cluster, complete with bells and whistles, to store its configuration files. It uses cman, corosync, and other packages that gfs2-tools depends on, but everything is rebuilt specifically for pve. As a result, the gfs2 tooling cannot simply be installed from the stock packages: the package manager first offers to remove the whole of Proxmox, which we could not do. We spent three hours fighting the dependencies and managed to win, but it all ended in another kernel panic. An attempt to adapt the packages to Proxmox to solve our problems was not successful either, and after two more hours we decided to abandon the idea.
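
For comparison, on a stock Red Hat-style cluster stack (rather than the packages rebuilt for pve) the standard GFS2 workflow would be roughly the following; the cluster name, volume name, and journal count are placeholders:

    apt-get install gfs2-tools cman                            # on Proxmox 2.0 this conflicts with the pve packages
    mkfs.gfs2 -p lock_dlm -t mycluster:gfsvol -j 4 /dev/sdb1   # one journal per node that will mount it
    mount -t gfs2 /dev/sdb1 /mnt/gfs2                          # requires cman/corosync to be running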

CEPH

This is the one we settled on for now. It is POSIX-compatible, offers high performance and excellent scalability, and takes some bold and interesting approaches in its implementation. The file system consists of the following components:

1. Clients. The users of the data.

2. Metadata servers. They cache and synchronize the distributed metadata. With their help, a client knows at any moment in time where the data it needs is located; the metadata servers also handle the placement of new data.

3. Object storage cluster. Here both data and metadata are stored as objects.

4. Cluster monitors. They monitor the health of the whole system.

Actual file I/O takes place directly between a client and the object storage cluster: high-level POSIX operations (open, close, rename) are handled by the metadata servers, while ordinary POSIX operations (read and write) go straight to the storage cluster. There can be several instances of any component, depending on the tasks the administrator faces. The file system can be mounted either directly through a kernel module or through FUSE.

From the user's perspective, Ceph is transparent: users simply have access to a huge storage system and are unaware of the metadata servers, monitors, and individual devices that make up the massive storage pool. They just see a mount point on which standard file I/O can be performed. From the administrator's perspective, the cluster can be extended transparently by adding any number of the required components: monitors, storage nodes, metadata servers. The developers proudly call Ceph an ecosystem.

GPFS, Lustre, other file systems, and overlay solutions were not considered this time: they are either very difficult to set up, no longer actively developed, or not suitable for the job.
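
The two ways of mounting mentioned above look roughly like this in practice; the monitor address, client name, and secret file are placeholders rather than values from our setup:

    # kernel client (mount.ceph)
    mount -t ceph 10.0.0.1:6789:/ /mnt/ceph -o name=admin,secretfile=/etc/ceph/admin.secret

    # FUSE client
    ceph-fuse -m 10.0.0.1:6789 /mnt/ceph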

Configuration and testing

The configuration was standard, taken entirely from the Ceph wiki. Overall, the file system left a good impression. We assembled a 2 TB array, half SAS and half SATA disks (the block devices are exported over FC), with the partitions formatted as ext3.
The Ceph storage was mounted inside 12 virtual machines on four hardware nodes, read-write on all mount points. The stress tests have been running normally for the fourth day, with I/O averaging about 75 MB/s on writes at peak. We have not yet looked at the other features of Ceph (and there are still quite a lot of them), and there are also problems with FUSE. Although the developers warn that the system is experimental and should not be used in production, we believe that if you really want to, you can experiment with it.
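
For completeness, a minimal ceph.conf in the spirit of the Ceph wiki of that time is sketched below; host names, addresses, and paths are placeholders, and current Ceph releases ship different deployment tools, so treat this only as an illustration:

    cat > /etc/ceph/ceph.conf <<'EOF'
    [mon.a]
        host = node1
        mon addr = 10.0.0.1:6789
        mon data = /srv/ceph/mon.a
    [mds.a]
        host = node1
    [osd.0]
        host = node1
        osd data = /srv/ceph/osd.0
        osd journal = /srv/ceph/osd.0.journal
    EOF

    mkcephfs -a -c /etc/ceph/ceph.conf    # old-style cluster initialisation over ssh
    service ceph -a start                 # start the mon, mds, and osd daemons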

