Bug 823768 - nfs: showmount does not display the new volume
Summary: nfs: showmount does not display the new volume
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: GlusterFS
Classification: Community
Component: nfs
Version: pre-release
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Assignee: Vinayaga Raman
QA Contact:
URL:
Whiteboard:
Duplicates: 823763
Depends On:
Blocks:
 
Reported: 2012-05-22 06:44 UTC by Saurabh
Modified: 2016-01-19 06:10 UTC
CC: 3 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2012-07-30 07:12:40 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Saurabh 2012-05-22 06:44:33 UTC
Description of problem:
[root@localhost ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/vda3              16G  3.8G   11G  26% /
tmpfs                1004M     0 1004M   0% /dev/shm
/dev/vda1             194M   26M  158M  15% /boot
/dev/loop0            1.9G  1.2G  586M  68% /mnt/test
/dev/vdb1              98G   33M   98G   1% /export
[root@localhost ~]# mount 
/dev/vda3 on / type ext4 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw,rootcontext="system_u:object_r:tmpfs_t:s0")
/dev/vda1 on /boot type ext3 (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
/dev/loop0 on /mnt/test type ext4 (rw,user_xattr)
/dev/vdb1 on /export type xfs (rw)

###########################################################################

[root@localhost ~]# gluster volume info dist-rep
 
Volume Name: dist-rep
Type: Distributed-Replicate
Volume ID: 3fffafbc-484c-4092-83ff-2365a68e9521
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 172.17.251.90:/export/dr
Brick2: 172.17.251.92:/export/drr
Brick3: 172.17.251.91:/export/ddr
Brick4: 172.17.251.93:/export/ddrr
[root@localhost ~]# 
[root@localhost ~]# 
[root@localhost ~]# 
[root@localhost ~]# gluster volume status
Status of volume: dist-rep
Gluster process						Port	Online	Pid
------------------------------------------------------------------------------
Brick 172.17.251.90:/export/dr				24009	Y	22627
Brick 172.17.251.92:/export/drr				24009	Y	30745
Brick 172.17.251.91:/export/ddr				24009	Y	10915
Brick 172.17.251.93:/export/ddrr			24009	Y	31409
NFS Server on localhost					38467	Y	22633
Self-heal Daemon on localhost				N/A	Y	22638
NFS Server on 172.17.251.91				38467	Y	10920
Self-heal Daemon on 172.17.251.91			N/A	Y	10927
NFS Server on 172.17.251.93				38467	Y	31415
Self-heal Daemon on 172.17.251.93			N/A	Y	31420
NFS Server on 172.17.251.92				38467	Y	30751
Self-heal Daemon on 172.17.251.92			N/A	Y	30756
 
[root@localhost ~]# rpcinfo -p
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100005    3   tcp  38465  mountd
    100005    1   tcp  38466  mountd
    100003    3   tcp  38467  nfs
    100024    1   udp  48503  status
    100024    1   tcp  48767  status
    100021    4   tcp  38468  nlockmgr
    100021    1   udp    766  nlockmgr
    100021    1   tcp    768  nlockmgr


[root@localhost ~]# showmount -e 0
Export list for 0:
/test *
[root@localhost ~]# 

Version-Release number of selected component (if applicable):

3.3.0qa42
How reproducible:
found once 

Steps to Reproduce:
1. create a loop device with an ext4 filesystem (see the command sketch after this list)
2. create a volume on top of this device
3. install 3.3.0qa42
4. create a new volume, after formatting the available storage with xfs
5. showmount -e 0
6. stop glusterd on all nodes
7. clear out the data from /var/lib/glusterd
8. restart glusterd
9. showmount -e 0
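
For reference, a rough shell sketch of these steps on one node; the loop image path and size, mount points, and brick sub-directories are illustrative assumptions, not necessarily the exact ones used here (the IPs, volume names, and devices match the output above):

# steps 1-2: loop device with ext4, used as the brick for the first volume ("test")
dd if=/dev/zero of=/var/tmp/loop.img bs=1M count=2048
losetup /dev/loop0 /var/tmp/loop.img
mkfs.ext4 /dev/loop0
mkdir -p /mnt/test /mnt/test/brick
mount /dev/loop0 /mnt/test
gluster volume create test 172.17.251.90:/mnt/test/brick
gluster volume start test

# steps 3-4: after installing 3.3.0qa42, format the spare disk with xfs and
# create the second volume (dist-rep, 2 x 2 distributed-replicate)
mkfs.xfs -f /dev/vdb1
mkdir -p /export
mount /dev/vdb1 /export
gluster volume create dist-rep replica 2 \
    172.17.251.90:/export/dr 172.17.251.92:/export/drr \
    172.17.251.91:/export/ddr 172.17.251.93:/export/ddrr
gluster volume start dist-rep

# step 5: only /test is listed; dist-rep is missing
showmount -e 0

# steps 6-9: wipe glusterd state on all nodes and retry
service glusterd stop
rm -rf /var/lib/glusterd/*
service glusterd start
showmount -e 0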
  
Actual results:
Steps 5 and 9 do not display the newly created volume in the list.

Expected results:
The newly created volume should be displayed in the showmount export list.

Additional info:

Even restarting the volume does not make it appear in the list.

Comment 1 Saurabh 2012-05-22 07:05:35 UTC
Killing the NFS process with kill <pid> and restarting the volume makes showmount display the correct information on that node, whereas on the other node of the cluster the information is still stale. This is still a bug, because starting the volume also restarts the NFS server, and the export information should be updated at that point.
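
A minimal sketch of the workaround and check described above; the pid and volume name are taken from the status output earlier in this report and would differ on another setup:

gluster volume status dist-rep   # note the pid of "NFS Server on localhost"
kill 22633                       # kill that gluster NFS server process
gluster volume stop dist-rep     # answer the confirmation prompt with "y"
gluster volume start dist-rep    # starting the volume respawns the NFS server
showmount -e 0                   # export list is now correct on this node only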

Comment 2 Rajesh 2012-05-22 07:47:21 UTC
*** Bug 823763 has been marked as a duplicate of this bug. ***

Comment 3 Krishna Srinivas 2012-05-22 08:26:51 UTC
Talked to Saurabh.
A stale NFS process is still alive, but its pid file has been deleted (/var/lib/glusterd was cleaned up), so the "glusterd stop" command never kills this NFS process. I have asked Saurabh to reproduce this issue; if it is not reproducible we can close this bug.
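
To illustrate: glusterd tracks its NFS server through a pid file under /var/lib/glusterd, so once that directory is wiped it no longer knows about (and cannot stop) the old server. A rough sketch of checking for the orphaned process; the pid-file path shown is the usual 3.x location and is an assumption here:

ls /var/lib/glusterd/nfs/run/nfs.pid   # gone after step 7 wiped /var/lib/glusterd
ps aux | grep glusterfs | grep nfs     # ...yet the old NFS server process is still alive
kill <pid>                             # <pid> taken from the ps output above
# restarting the volume (as in Comment 1) then brings up a fresh NFS server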

