
List:       gluster-users
Subject:    Re: [Gluster-users] [Gluster-Maintainers] Proposal to mark few features as Deprecated / SunSet from 
From:       Sankarshan Mukhopadhyay <sankarshan.mukhopadhyay@gmail.com>
Date:       2019-03-20 1:50:38
Message-ID: CAJWA-5Z3DR25Vt=D+qoQJRXPFx6y0dgZwHDEFHSfhpPET25K9Q@mail.gmail.com

Now that there is sufficient detail in place, could a Gluster team
member file an RHBZ and post it back to this thread?

On Wed, Mar 20, 2019 at 2:51 AM Jim Kinney <jim.kinney@gmail.com> wrote:
> 
> Volume Name: home
> Type: Replicate
> Volume ID: 5367adb1-99fc-44c3-98c4-71f7a41e628a
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x 2 = 2
> Transport-type: tcp,rdma
> Bricks:
> Brick1: bmidata1:/data/glusterfs/home/brick/brick
> Brick2: bmidata2:/data/glusterfs/home/brick/brick
> Options Reconfigured:
> performance.client-io-threads: off
> storage.build-pgfid: on
> cluster.self-heal-daemon: enable
> performance.readdir-ahead: off
> nfs.disable: off
> 
> 
> There are 11 other volumes and all are similar.
> 
> 
> On Tue, 2019-03-19 at 13:59 -0700, Vijay Bellur wrote:
> 
> Thank you for the reproducer! Can you please let us know the output of `gluster volume info`?
>
> Regards,
> Vijay
> 
> On Tue, Mar 19, 2019 at 12:53 PM Jim Kinney <jim.kinney@gmail.com> wrote:
> 
> This Python script will fail when writing to a file in a GlusterFS FUSE-mounted directory.
> 
> import mmap
>
> # write a simple example file
> with open("hello.txt", "wb") as f:
>     f.write("Hello Python!\n")
>
> with open("hello.txt", "r+b") as f:
>     # memory-map the file, size 0 means whole file
>     mm = mmap.mmap(f.fileno(), 0)
>     # read content via standard file methods
>     print mm.readline()  # prints "Hello Python!"
>     # read content via slice notation
>     print mm[:5]  # prints "Hello"
>     # update content using slice notation;
>     # note that new content must have same size
>     mm[6:] = " world!\n"
>     # ... and read again using standard file methods
>     mm.seek(0)
>     print mm.readline()  # prints "Hello  world!"
>     # close the map
>     mm.close()
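>
> For reference, here is the same reproducer in Python 3 syntax (bytes literals
> and print(); the mmap calls themselves are unchanged), since the failure
> reportedly hits both Python 2.7 and 3+. This is just a convenience rendering,
> not a different test case:
>
> import mmap
>
> # write a simple example file
> with open("hello.txt", "wb") as f:
>     f.write(b"Hello Python!\n")
>
> with open("hello.txt", "r+b") as f:
>     # memory-map the file, size 0 means whole file
>     mm = mmap.mmap(f.fileno(), 0)
>     print(mm.readline())  # prints b"Hello Python!\n"
>     print(mm[:5])         # prints b"Hello"
>     # new content must be the same size as the slice it replaces
>     mm[6:] = b" world!\n"
>     mm.seek(0)
>     print(mm.readline())  # prints b"Hello  world!\n"
>     mm.close()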
> 
> On Tue, 2019-03-19 at 12:06 -0400, Jim Kinney wrote:
> 
> Native mount issue with multiple clients (CentOS 7, GlusterFS 3.12).
> 
> Seems to hit Python 2.7 and 3+. The user tries to open file(s) for write in a
> long-running process, and the system eventually times out.
> Switching to NFS stops the error.
> 
> No bug notice yet. Too many pans on the fire :-(
> 
> On Tue, 2019-03-19 at 18:42 +0530, Amar Tumballi Suryanarayan wrote:
> 
> Hi Jim,
> 
> On Tue, Mar 19, 2019 at 6:21 PM Jim Kinney <jim.kinney@gmail.com> wrote:
> 
> 
> Issues with GlusterFS FUSE mounts cause problems with Python file opens for
> write. We have to use NFS to avoid this.
>
> I really want to see better back-end tools to facilitate cleaning up after
> GlusterFS failures. If the system is going to use hard-linked IDs, we need a
> mapping of ID to file to fix things. That option is now on for all exports;
> it should be the default. If a host is down and users delete files by the
> thousands, Gluster _never_ catches up. Finding path names for IDs across even
> a 40TB mount, much less the 200+TB one, is a slow process. A network outage
> of 2 minutes meant one system didn't get the call to recursively delete
> several dozen directories, each with several thousand files.
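>
> As a rough illustration of the ID-to-file mapping (not an official Gluster
> tool, just a sketch): for regular files, each brick keeps a hard link under
> .glusterfs/<first two hex chars>/<next two>/<gfid>, so matching device and
> inode numbers against a walk of the brick recovers the user-visible path.
> The brick path below comes from the volume info above; the GFID is a
> placeholder. It is also slow on large bricks, which is exactly the problem:
>
> import os
>
> def gfid_to_path(brick_root, gfid):
>     """Best-effort GFID -> path lookup on one brick (regular files only)."""
>     link = os.path.join(brick_root, ".glusterfs", gfid[0:2], gfid[2:4], gfid)
>     target = os.stat(link)  # raises if the GFID link is missing
>     for dirpath, dirnames, filenames in os.walk(brick_root):
>         if ".glusterfs" in dirnames:
>             dirnames.remove(".glusterfs")  # skip gluster's internal tree
>         for name in filenames:
>             candidate = os.path.join(dirpath, name)
>             st = os.lstat(candidate)
>             if st.st_dev == target.st_dev and st.st_ino == target.st_ino:
>                 return candidate
>     return None
>
> # Hypothetical usage; the GFID here is a placeholder.
> print(gfid_to_path("/data/glusterfs/home/brick/brick",
>                    "0f0f0f0f-0000-0000-0000-0000000000ab"))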
> 
> 
> Are you talking about some issues in the geo-replication module, or some
> other application using the native mount? Happy to take the discussion
> forward about these issues.
> Are there any bugs open on this?
> 
> Thanks,
> Amar
> 
> nfs
> On March 19, 2019 8:09:01 AM EDT, Hans Henrik Happe <happe@nbi.dk> wrote:
> 
> Hi,
> 
> While looking into something else, I stumbled over this proposal. Being a
> shop that is going into "Leaving GlusterFS" mode, I thought I would give my
> two cents.
>
> While being partially an HPC shop with a few Lustre filesystems, we chose
> GlusterFS for an archiving solution (2-3 PB) because we could find files in
> the underlying ZFS filesystems if GlusterFS went sour.
>
> We have used that access to the underlying files plenty, because of the
> continuous instability of GlusterFS. Meanwhile, Lustre has been almost
> effortless to run, and mainly for that reason we are planning to move away
> from GlusterFS.
>
> Reading this proposal kind of underlined that "Leaving GlusterFS" is the
> right thing to do. While I never understood why GlusterFS has been in
> feature-crazy mode instead of stabilizing mode, taking away crucial features
> is something I don't get. With RoCE, RDMA is getting mainstream. Quotas are
> very useful, even though the current implementation is not perfect. Tiering
> also makes a lot of sense, but for large files it should not be at a per-file
> level.
>
> To be honest, we only use quotas. We got scared of trying out new performance
> features that could potentially open up a new set of issues.
> Sorry for being such a buzzkill. I really wanted it to be different.
> 
> Cheers,
> Hans Henrik
> 
> On 19/07/2018 08.56, Amar Tumballi wrote:
> 
> Hi all,
> 
> Over the last 12 years of Gluster, we have developed many features and
> continue to support most of them to this day. But along the way, we have
> figured out better ways of doing things, and some of these features are no
> longer actively maintained.
>
> We are now thinking of cleaning up some of these 'unsupported' features and
> marking them as 'SunSet' (i.e., to be removed entirely from the codebase in
> subsequent releases) in the next upcoming release, v5.0. The release notes
> will provide options for smoothly migrating to the supported configurations.
>
> If you are using any of these features, do let us know so that we can help
> you with migration. Also, we are happy to guide new developers who want to
> work on components that are not actively maintained by the current set of
> developers.
> List of features hitting sunset:
> 
> 'cluster/stripe' translator:
> 
> This translator was developed very early in the evolution of GlusterFS and
> addressed one of the most common questions about distributed filesystems:
> "What happens if one of my files is bigger than the available brick? Say I
> have a 2 TB hard drive exported in GlusterFS and my file is 3 TB." While it
> served that purpose, it was very hard to handle failure scenarios and give
> users a really good experience with this feature. Over time, Gluster solved
> the problem with its 'Shard' feature, which addresses it in a much better way
> on top of the existing, well-supported stack. Hence the proposal for
> deprecation.
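>
> For illustration only, a small sketch of the arithmetic behind sharding,
> which is why it sidesteps the "file bigger than a brick" problem: the file is
> split into fixed-size blocks that can be placed on different bricks, with
> every block after the first stored under the hidden /.shard directory as
> <gfid>.<index>. The block size and GFID below are placeholders; check
> features.shard-block-size on your own volume:
>
> SHARD_BLOCK_SIZE = 64 * 1024 * 1024  # assumed 64 MiB shard block size
>
> def shard_index(offset, block_size=SHARD_BLOCK_SIZE):
>     """Index of the shard block containing a given byte offset."""
>     return offset // block_size
>
> def shard_location(gfid, index):
>     # block 0 lives at the file's own path; later blocks under /.shard
>     return "<base file path>" if index == 0 else "/.shard/%s.%d" % (gfid, index)
>
> offset = 3 * 1024 ** 4               # a byte 3 TiB into a large file
> idx = shard_index(offset)            # -> 49152 with 64 MiB blocks
> print(shard_location("0f0f0f0f-0000-0000-0000-0000000000ab", idx))
>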
> If you are using this feature, do write to us, as it needs a proper
> migration from the existing volume to a new, fully supported volume type
> before you upgrade.
> 'storage/bd' translator:
> 
> This feature got into the code base five years back with this patch [1]. The
> plan was to use a block device directly as a brick, which would make handling
> disk-image storage in GlusterFS much easier.
>
> As the feature is not receiving further contributions and we are not seeing
> any user traction, we would like to propose it for deprecation.
>
> If you are using the feature, plan to move to a supported Gluster volume
> configuration, and get your setup into a 'supported' state before upgrading
> to your new Gluster version.
> 'RDMA' transport support:
> 
> Gluster started supporting RDMA while ib-verbs was still new, and the very
> high-end infrastructure of that time was using InfiniBand. Engineers worked
> with Mellanox to get the technology into GlusterFS for faster data migration
> and data copying. Since current kernels achieve very good speed with the
> IPoIB module itself, and the experts in this area no longer have bandwidth to
> maintain the feature, we recommend migrating your volume to a TCP (IP-based)
> network.
>
> If you are successfully using the RDMA transport, do get in touch with us to
> prioritize a migration plan for your volume. The plan is to work on this
> after the release, so that by version 6.0 we will have cleaner transport code
> which only needs to support one type.
> 'Tiering' feature
> 
> Gluster's tiering feature was intended to provide an option to keep your
> 'hot' data in a different location than your cold data, so you can get better
> performance. While the feature saw some users, it needs much more attention
> to become completely bug-free. At this time we do not have any active
> maintainers for it, and hence suggest removing its 'supported' tag.
>
> If you are willing to take it up and maintain it, do let us know, and we will
> be happy to assist you.
>
> If you are already using the tiering feature, make sure to run gluster volume
> tier detach for all the bricks before upgrading to the next release. Also, we
> recommend using features like dm-cache in your LVM setup to get the best
> performance from the bricks.
> 'Quota'
> 
> This is a call-out for the 'Quota' feature, to let you all know that it will
> move to a 'no new development' state. While this feature is actively in use
> by many people, the challenges in the accounting mechanisms involved have
> made it hard to achieve good performance, and the amount of extended-attribute
> get/set operations the feature requires is far from ideal. Hence we recommend
> that users move towards setting quota on the backend bricks directly (i.e.,
> XFS project quota), or use different volumes for different directories, etc.
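>
> If you want to see where that accounting lives, the following Python 3 sketch
> (hypothetical brick-side path; the exact quota xattr names vary by release,
> so none are assumed here) lists the trusted.* extended attributes Gluster
> keeps on a brick directory. Run it on a brick, usually as root:
>
> import os
>
> path = "/data/glusterfs/home/brick/brick"  # placeholder brick-side directory
>
> for name in os.listxattr(path):
>     if name.startswith("trusted."):
>         # print each gluster-owned attribute and its raw value
>         print("%-45s %r" % (name, os.getxattr(path, name)))
>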
> As the feature won't be deprecated immediately, it doesn't need a migration
> plan when you upgrade to a newer version; but if you are a new user, we
> wouldn't recommend enabling the quota feature. By the release dates, we will
> publish a guide to the best alternatives to Gluster's current quota feature.
>
> Note that if you want to contribute to the feature, we have a
> project-quota-based issue open [2]. We are happy to get contributions and to
> help shape a newer approach to Quota.
> 
> ________________________________
> 
> This is the initial set of features we propose to take out of 'fully
> supported' status. While we work on making the user/developer experience of
> the project much better by providing a well-maintained codebase, we may come
> up with a few more features that we consider moving out of support, so keep
> watching this space.
> [1] - http://review.gluster.org/4809
> 
> [2] - https://github.com/gluster/glusterfs/issues/184
> 
> Regards,
> 
> Vijay, Shyam, Amar
> 
> 
> 
> _______________________________________________
> Gluster-users mailing list
> Gluster-users@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
>
> 
> --
> Sent from my Android device with K-9 Mail. All tyopes are thumb related and
> reflect authenticity.
>
> --
>
> James P. Kinney III
>
> Every time you stop a school, you will have to build a jail. What you
> gain at one end you lose at the other. It's like feeding a dog on his
> own tail. It won't fatten the dog.
> - Speech 11/23/1900 Mark Twain
>
> http://heretothereideas.blogspot.com/
> 
> _______________________________________________
> Gluster-users mailing list
> Gluster-users@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
> 
> --
> 
> James P. Kinney III
> 
> Every time you stop a school, you will have to build a jail. What you
> gain at one end you lose at the other. It's like feeding a dog on his
> own tail. It won't fatten the dog.
> - Speech 11/23/1900 Mark Twain
> 
> http://heretothereideas.blogspot.com/
> 
_______________________________________________
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

