Bug 831699 - Handle multiple networks better
Summary: Handle multiple networks better
Keywords:
Status: CLOSED DEFERRED
Alias: None
Product: GlusterFS
Classification: Community
Component: glusterd
Version: 3.3-beta
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: low
Target Milestone: ---
Assignee: bugs@gluster.org
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2012-06-13 15:32 UTC by Jeff Darcy
Modified: 2018-12-01 14:37 UTC
CC: 7 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2014-12-14 19:40:28 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Jeff Darcy 2012-06-13 15:32:11 UTC
This is kind of a continuation of our long-standing problem with names vs. IPs for identifying cluster nodes.  Configurations with separate front-end and back-end networks are becoming ever more common, and users are often running into some predictable problems.  For today's example, see the following ridiculous URL.

http://community.gluster.org/q/we-have-a-glusterfs-3-3-system-using-a-10gbe-san-being-mounted-by-clients-on-a-1gb-san-the-peers-are-all-on-192-168-12-0-24-however-the-clients-are-on-192-168-10-0-24-with-no-access-to-192-168-12-0-24-is-there-anyway-to-mount-this-using-the-mount--t-glusterfs-rather-than-nfs-it-looks-like-the-peer-ip-is-pushed-to-the-client-which-then-can-t-reach-those-ips-and-fails-can-i-have-two-ips-listed-per-glusterfs-node-on-different-networks
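
To make the failure concrete: in a setup like the one above, the client volfile that glusterd hands to a FUSE mount contains one protocol/client entry per brick, roughly like the trimmed, hypothetical fragment below, and the remote-host values in it are the back-end addresses the clients cannot reach.

    volume myvol-client-0
        type protocol/client
        option transport-type tcp
        # back-end address, unreachable from the 192.168.10.0/24 clients
        option remote-host 192.168.12.11
        option remote-subvolume /bricks/brick1
    end-volume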

What I propose is that we add a new command (and underlying infrastructure) for "gluster peer alias OLD_NAME_OR_ADDR NEW_NAME_OR_ADDR [public|private] [primary]".  This allows users to manage aliases themselves, instead of e.g. having names supersede addresses as a side effect of "peer probe".  The public/private/primary options have the following meanings.

* A name/address marked "public" will *not* be used for internal cluster communication.

* A name/address marked "private" will *not* be presented to clients e.g. as the remote-host value for a protocol/client translator.

* Among the names/addresses that are valid for a particular purpose, the one marked "primary" (if present) will be used unless overridden.  For example, servers will generally probe one another using the primary address, but "gluster peer probe $not_primary_address" will override that.
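
To make the semantics concrete, here is a purely illustrative sketch of how the proposed command might be used for a two-network setup like the one above (hostnames and addresses are made up, and the command does not exist yet):

    # front-end name: presented to clients, never used for internal cluster traffic
    gluster peer alias 192.168.12.11 server1.example.com public
    # back-end name: used for peer traffic, never handed to clients; "primary"
    # makes it the default address for internal communication
    gluster peer alias 192.168.12.11 server1-backend.example.com private primary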

Long term, we might need to deal with multiple address groups instead of just public/private, perhaps tied into multi-tenant identities and more complex policies, all the way up to something almost like a VLAN.  This proposal is merely an intermediate step to address the needs that seem to exist in the field today.

Comment 1 Jin Zhou 2013-02-04 21:02:47 UTC
Can we do the following?
1. Create a new configuration file for inter-node communication only.
   Something like this (a filled-in example with made-up addresses follows after point 4):

   node1_FQDN:
       IP1_addr  Priority
       IB1_addr  Priority
       IP2_addr  Priority
   node2_FQDN:
       IP1_addr  Priority
       IB1_addr  Priority
       IP2_addr  Priority
   node3_FQDN:
       IP1_addr  Priority
       IB1_addr  Priority
       IP2_addr  Priority

2. All inter-node communication will try those IP/IB addresses in priority order. These configuration files exist only on the RHS nodes, and they are used only by gluster processes.

3. FUSE clients are not allowed to use these addresses/names if they appear in this file.

4. By default, if this file is empty (or does not exist), inter-node communication behaves just like today: it uses the same IP addresses as the FUSE clients.
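
For concreteness, an illustrative instance of the file sketched in point 1 (node names, addresses and priority values are made up; lower number = higher priority) might look like:

    node1.example.com:
        192.168.12.11   1    # 10GbE back-end, preferred for inter-node traffic
        10.10.0.11      2    # IB address, fallback
        192.168.10.11   3    # front-end network, last resort
    node2.example.com:
        192.168.12.12   1
        10.10.0.12      2
        192.168.10.12   3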

Thanks
Jin Zhou

Comment 2 Andrew Hatfield 2013-02-05 01:47:50 UTC
I like the idea of having multiple groups, rather than just public/private.

That way the user has the option to define any number of access mechanisms, including back-end (BE) replication.

Comment 3 Niels de Vos 2014-11-27 14:53:42 UTC
The version that this bug has been reported against does not get any updates from the Gluster Community anymore. Please verify whether this report is still valid against a current (3.4, 3.5 or 3.6) release and update the version, or close this bug.

If there has been no update before 9 December 2014, this bug will be closed automatically.

