Description of problem:
When we compare the execution times of the same command on master vs. v3.3.0, the difference is drastic.

With master:
15:45:56 :( ⚡ time gluster volume set r2 performance.read-ahead off
volume set: success

real    0m1.259s
user    0m0.051s
sys     0m0.020s

root - ~
15:46:13 :) ⚡ time gluster volume set r2 performance.read-ahead off
volume set: success

real    0m1.310s
user    0m0.052s
sys     0m0.018s

root - ~
15:46:40 :) ⚡ time gluster volume set r2 performance.read-ahead off
volume set: success

real    0m1.250s
user    0m0.059s
sys     0m0.019s

With 3.3.0:
root - ~
15:49:16 :) ⚡ time gluster volume set r2 performance.read-ahead off
Set volume successful

real    0m0.081s
user    0m0.051s
sys     0m0.016s

root - ~
15:49:21 :) ⚡ time gluster volume set r2 performance.read-ahead off
Set volume successful

real    0m0.088s
user    0m0.062s
sys     0m0.016s

root - ~
15:49:24 :) ⚡ time gluster volume set r2 performance.read-ahead off
Set volume successful

real    0m0.081s
user    0m0.056s
sys     0m0.011s

Volume information:
root - ~
15:49:25 :) ⚡ gluster v i

Volume Name: r2
Type: Replicate
Volume ID: 44413bc7-04d3-43f1-a6fa-202a1629cd93
Status: Stopped
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: pranithk-laptop:/home/gfs/r2_0
Brick2: pranithk-laptop:/home/gfs/r2_1
Options Reconfigured:
performance.read-ahead: off
diagnostics.brick-log-level: DEBUG
diagnostics.client-log-level: DEBUG

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:
The latency is due to the O_SYNC opens and fsync calls in the glusterd store, which are necessary for durability.
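To make the cost concrete, here is a minimal sketch of that write path, assuming a hypothetical store_write_osync() helper with simplified error handling (this is not glusterd's actual code): the temp file is opened with O_SYNC, so every write() blocks until the data reaches disk, before the file is renamed over the target and the parent directory is fsync'd.

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* Sketch only: write a temp file opened with O_SYNC, rename it over
 * the target, then fsync the parent directory so the rename itself
 * is durable.  With O_SYNC, every write() below blocks until the
 * data is on disk, which is where the per-command latency comes from. */
int store_write_osync(const char *tmp, const char *dst,
                      const char *buf, size_t len)
{
    int fd = open(tmp, O_CREAT | O_TRUNC | O_WRONLY | O_SYNC, 0600);
    if (fd < 0)
        return -1;
    if (write(fd, buf, len) != (ssize_t)len) {  /* synchronous write */
        close(fd);
        return -1;
    }
    close(fd);
    if (rename(tmp, dst) < 0)                   /* atomic replace */
        return -1;
    int pdir = open(".", O_RDONLY);             /* parent dir, simplified
                                                 * here as the cwd */
    if (pdir >= 0) {
        fsync(pdir);                            /* make the rename durable */
        close(pdir);
    }
    return 0;
}

Each synchronous write adds a disk round trip, which, repeated across the store files glusterd rewrites per command, can account for the latency shown above.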
REVIEW: http://review.gluster.org/7370 (mgmt/gluster: Use fsync instead of O_SYNC) posted (#2) for review on master by Pranith Kumar Karampuri (pkarampu)
Performance results with this new change:

Without this change:
[root@localhost ~]# for i in {1..3}; do time gluster volume set r2 performance.read-ahead off; done
volume set: success

real    0m1.897s
user    0m0.233s
sys     0m0.088s
volume set: success

real    0m2.241s
user    0m0.228s
sys     0m0.081s
volume set: success

real    0m2.544s
user    0m0.130s
sys     0m0.047s

With the change:
[root@localhost ~]# for i in {1..3}; do time gluster volume set r2 performance.read-ahead off; done
volume set: success

real    0m0.569s
user    0m0.132s
sys     0m0.038s
volume set: success

real    0m0.485s
user    0m0.130s
sys     0m0.034s
volume set: success

real    0m0.840s
user    0m0.125s
sys     0m0.031s
REVIEW: http://review.gluster.org/7370 (mgmt/gluster: Use fsync instead of O_SYNC) posted (#3) for review on master by Pranith Kumar Karampuri (pkarampu)
COMMIT: http://review.gluster.org/7370 committed in master by Anand Avati (avati)
------
commit 3a35f975fceb89c5ae0e8e3e189545f6fceaf6e5
Author: Pranith Kumar K <pkarampu>
Date:   Thu May 1 10:29:54 2014 +0530

    mgmt/gluster: Use fsync instead of O_SYNC

    Glusterd uses O_SYNC to write to a temp file, then renames it to the
    actual file and performs an fsync on the parent directory. Until this
    rename happens, syncing the writes to the file can be deferred. In
    this patch, the O_SYNC open of the temp file is removed and an fsync
    of the fd is performed before the rename.

    Change-Id: Ie7da161b0daec845c7dcfab4154cc45c2f49d825
    BUG: 908277
    Signed-off-by: Pranith Kumar K <pkarampu>
    Reviewed-on: http://review.gluster.org/7370
    Reviewed-by: Krishnan Parthasarathi <kparthas>
    Reviewed-by: Raghavendra Bhat <raghavendra>
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Anand Avati <avati>
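For illustration, a sketch of the revised pattern the commit describes, using the same hypothetical helper shape as the earlier store_write_osync() sketch (not the actual glusterd code): the O_SYNC flag is dropped and a single fsync() on the fd is issued just before the rename, so durability is preserved while the kernel is free to batch the intermediate writes.

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* Revised sketch: the temp file is opened WITHOUT O_SYNC, so the
 * kernel may defer the writes, and one fsync() on the fd just before
 * the rename flushes everything in a single disk round trip. */
int store_write_fsync(const char *tmp, const char *dst,
                      const char *buf, size_t len)
{
    int fd = open(tmp, O_CREAT | O_TRUNC | O_WRONLY, 0600); /* no O_SYNC */
    if (fd < 0)
        return -1;
    if (write(fd, buf, len) != (ssize_t)len) {  /* buffered write */
        close(fd);
        return -1;
    }
    if (fsync(fd) < 0) {                        /* one flush, before rename */
        close(fd);
        return -1;
    }
    close(fd);
    if (rename(tmp, dst) < 0)                   /* atomic replace */
        return -1;
    int pdir = open(".", O_RDONLY);             /* parent dir, simplified
                                                 * here as the cwd */
    if (pdir >= 0) {
        fsync(pdir);                            /* make the rename durable */
        close(pdir);
    }
    return 0;
}

Since writes to the temp file only matter once the rename publishes it, deferring them to a single flush point changes nothing about crash safety, which is why the numbers above improve by roughly 3-4x with no loss of durability.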
A beta release for GlusterFS 3.6.0 has been made available [1]. Please verify whether this release resolves this bug report for you. If the glusterfs-3.6.0beta1 release does not have a resolution for this issue, leave a comment in this bug and move the status to ASSIGNED. If this release fixes the problem for you, leave a note and change the status to VERIFIED.

Packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure (possibly an "updates-testing" repository) for your distribution.

[1] http://supercolony.gluster.org/pipermail/gluster-users/2014-September/018836.html
[2] http://supercolony.gluster.org/pipermail/gluster-users/
This bug is being closed because a release that should address the reported issue has been made available. If the problem is still not fixed with glusterfs-3.6.1, please reopen this bug report.

glusterfs-3.6.1 has been announced [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://supercolony.gluster.org/pipermail/gluster-users/2014-November/019410.html
[2] http://supercolony.gluster.org/mailman/listinfo/gluster-users