Introduction
Oracle Sun Cluster is the software that provides high availability across two Solaris servers. It manages resources and guarantees their availability by moving them from one server to the other when a problem occurs.
Version
The Sun Cluster version is tied to the Solaris version, so an update must cover both at once: the Solaris version and the Sun Cluster version are upgraded in the same operation.
To check the Sun Cluster version, run the following commands:
phys-schost# clnode show-rev
4.0
phys-schost# clnode show-rev -v
Oracle Solaris Cluster 4.0 for Solaris 11 sparc
ha-cluster/data-service/apache :4.0.0-0.21
ha-cluster/data-service/dhcp :4.0.0-0.21
ha-cluster/data-service/dns :4.0.0-0.21
ha-cluster/data-service/ha-ldom :4.0.0-0.21
ha-cluster/data-service/ha-zones :4.0.0-0.21
ha-cluster/data-service/nfs :4.0.0-0.21
ha-cluster/data-service/oracle-database :4.0.0-0.21
ha-cluster/data-service/tomcat :4.0.0-0.21
ha-cluster/data-service/weblogic :4.0.0-0.21
ha-cluster/developer/agent-builder :4.0.0-0.21
ha-cluster/developer/api :4.0.0-0.21
ha-cluster/geo/geo-framework :4.0.0-0.21
ha-cluster/geo/manual :4.0.0-0.21
ha-cluster/geo/replication/availability-suite :4.0.0-0.21
ha-cluster/geo/replication/data-guard :4.0.0-0.21
ha-cluster/geo/replication/sbp :4.0.0-0.21
ha-cluster/geo/replication/srdf :4.0.0-0.21
ha-cluster/group-package/ha-cluster-data-services-full :4.0.0-0.21
ha-cluster/group-package/ha-cluster-framework-full :4.0.0-0.21
ha-cluster/group-package/ha-cluster-framework-l10n :4.0.0-0.21
ha-cluster/group-package/ha-cluster-framework-minimal :4.0.0-0.21
ha-cluster/group-package/ha-cluster-framework-scm :4.0.0-0.21
ha-cluster/group-package/ha-cluster-framework-slm :4.0.0-0.21
ha-cluster/group-package/ha-cluster-full :4.0.0-0.21
ha-cluster/group-package/ha-cluster-geo-full :4.0.0-0.21
ha-cluster/group-package/ha-cluster-geo-incorporation :4.0.0-0.21
ha-cluster/group-package/ha-cluster-incorporation :4.0.0-0.21
ha-cluster/group-package/ha-cluster-minimal :4.0.0-0.21
ha-cluster/group-package/ha-cluster-quorum-server-full :4.0.0-0.21
ha-cluster/group-package/ha-cluster-quorum-server-l10n :4.0.0-0.21
ha-cluster/ha-service/derby :4.0.0-0.21
ha-cluster/ha-service/gds :4.0.0-0.21
ha-cluster/ha-service/logical-hostname :4.0.0-0.21
ha-cluster/ha-service/smf-proxy :4.0.0-0.21
ha-cluster/ha-service/telemetry :4.0.0-0.21
ha-cluster/library/cacao :4.0.0-0.21
ha-cluster/library/ucmm :4.0.0-0.21
ha-cluster/locale :4.0.0-0.21
ha-cluster/release/name :4.0.0-0.21
ha-cluster/service/management :4.0.0-0.21
ha-cluster/service/management/slm :4.0.0-0.21
ha-cluster/service/quorum-server :4.0.0-0.21
ha-cluster/service/quorum-server/locale :4.0.0-0.21
ha-cluster/service/quorum-server/manual/locale :4.0.0-0.21
ha-cluster/storage/svm-mediator :4.0.0-0.21
ha-cluster/system/cfgchk :4.0.0-0.21
ha-cluster/system/core :4.0.0-0.21
ha-cluster/system/dsconfig-wizard :4.0.0-0.21
ha-cluster/system/install :4.0.0-0.21
ha-cluster/system/manual :4.0.0-0.21
ha-cluster/system/manual/data-services :4.0.0-0.21
ha-cluster/system/manual/locale :4.0.0-0.21
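The long package list above can be filtered with standard tools. A minimal sketch (the here-string embeds a few sample lines mimicking the `clnode show-rev -v` output; on a live cluster node you would pipe the real command instead):

```shell
# List only the data-service agent packages and their versions.
# Sample lines embedded for illustration; on a cluster node run:
#   clnode show-rev -v | awk '/^ha-cluster\/data-service\// {print $1, $2}'
sample='ha-cluster/data-service/nfs :4.0.0-0.21
ha-cluster/data-service/oracle-database :4.0.0-0.21
ha-cluster/system/core :4.0.0-0.21'

# Keep only the data-service packages (drops ha-cluster/system/core here)
printf '%s\n' "$sample" | awk '/^ha-cluster\/data-service\// {print $1, $2}'
```

The same filtering works for any of the package groups shown above (group-package, ha-service, etc.) by changing the pattern.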
Resource status
With the following command we can check the state of every cluster resource, including the node on which each resource is running.
The command reports the status of the nodes, the interconnect paths, the quorum, the resource groups, the resources, the disks, and the zone clusters.
# cluster status
=== Cluster Nodes ===
--- Node Status ---
Node Name Status
--------- ------
phys-schost-1 Online
phys-schost-2 Online
=== Cluster Transport Paths ===
Endpoint1 Endpoint2 Status
--------- --------- ------
phys-schost-1:nge1 phys-schost-2:nge1 Path online
phys-schost-1:e1000g1 phys-schost-2:e1000g1 Path online
=== Cluster Quorum ===
--- Quorum Votes Summary ---
Needed Present Possible
------ ------- --------
3 3 4
--- Quorum Votes by Node ---
Node Name Present Possible Status
--------- ------- -------- ------
phys-schost-1 1 1 Online
phys-schost-2 1 1 Online
--- Quorum Votes by Device ---
Device Name Present Possible Status
----------- ------- -------- ------
/dev/did/rdsk/d2s2 1 1 Online
/dev/did/rdsk/d8s2 0 1 Offline
=== Cluster Device Groups ===
--- Device Group Status ---
Device Group Name Primary Secondary Status
----------------- ------- --------- ------
schost-2 phys-schost-2 - Degraded
--- Spare, Inactive, and In Transition Nodes ---
Device Group Name Spare Nodes Inactive Nodes In Transition Nodes
----------------- ----------- -------------- --------------------
schost-2 - - -
=== Cluster Resource Groups ===
Group Name Node Name Suspended Status
---------- --------- --------- ------
test-rg phys-schost-1 No Offline
phys-schost-2 No Online
test-rg phys-schost-1 No Offline
phys-schost-2 No Error--stop failed
test-rg phys-schost-1 No Online
phys-schost-2 No Online
=== Cluster Resources ===
Resource Name Node Name Status Message
------------- --------- ------ -------
test_1 phys-schost-1 Offline Offline
phys-schost-2 Online Online
test_1 phys-schost-1 Offline Offline
phys-schost-2 Stop failed Faulted
test_1 phys-schost-1 Online Online
phys-schost-2 Online Online
Device Instance Node Status
--------------- ---- ------
/dev/did/rdsk/d2 phys-schost-1 Ok
/dev/did/rdsk/d3 phys-schost-1 Ok
phys-schost-2 Ok
/dev/did/rdsk/d4 phys-schost-1 Ok
phys-schost-2 Ok
/dev/did/rdsk/d6 phys-schost-2 Ok
=== Zone Clusters ===
--- Zone Cluster Status ---
Name Node Name Zone HostName Status Zone Status
---- --------- ------------- ------ -----------
sczone schost-1 sczone-1 Online Running
schost-2 sczone-2 Online Running
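When scanning this output for problems, it is useful to show only the resource lines that are not Online. A hedged sketch under the assumption that, as in the table above, the status message is the last field on each line (sample lines embedded; on a live node pipe `cluster status` instead):

```shell
# Print only resource lines whose status message is not "Online".
# Sample lines mimic the resource table above; on a cluster node:
#   cluster status | awk '$NF != "Online"'
sample='test_1 phys-schost-1 Online Online
phys-schost-2 Stop failed Faulted'

# Keeps the Faulted line, drops the healthy one
printf '%s\n' "$sample" | awk '$NF != "Online"'
```

Note that continuation lines (a node name with no resource name, as in the output above) are printed as-is, so the resource they belong to is the one named on the nearest preceding full line.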
Sun Cluster configuration
The following command displays the Sun Cluster configuration: a very detailed listing of every element and its properties.
phys-schost# cluster show
=== Cluster ===
Cluster Name: cluster-1
clusterid: 0x4DA2C888
installmode: disabled
heartbeat_timeout: 10000
heartbeat_quantum: 1000
private_netaddr: 172.11.0.0
private_netmask: 255.255.248.0
max_nodes: 64
max_privatenets: 10
num_zoneclusters: 12
udp_session_timeout: 480
concentrate_load: False
global_fencing: prefer3
Node List: phys-schost-1
Node Zones: phys-schost-2:za
=== Host Access Control ===
Cluster name: cluster-1
Allowed hosts: phys-schost-1, phys-schost-2:za
Authentication Protocol: sys
=== Cluster Nodes ===
Node Name: phys-schost-1
Node ID: 1
Enabled: yes
privatehostname: clusternode1-priv
reboot_on_path_failure: disabled
globalzoneshares: 3
defaultpsetmin: 1
quorum_vote: 1
quorum_defaultvote: 1
quorum_resv_key: 0x43CB1E1800000001
Transport Adapter List: net1, net3
--- Transport Adapters for phys-schost-1 ---
Transport Adapter: net1
Adapter State: Enabled
Adapter Transport Type: dlpi
Adapter Property(device_name): net
Adapter Property(device_instance): 1
Adapter Property(lazy_free): 1
Adapter Property(dlpi_heartbeat_timeout): 10000
Adapter Property(dlpi_heartbeat_quantum): 1000
Adapter Property(nw_bandwidth): 80
Adapter Property(bandwidth): 10
Adapter Property(ip_address): 172.16.1.1
Adapter Property(netmask): 255.255.255.128
Adapter Port Names: 0
Adapter Port State(0): Enabled
Transport Adapter: net3
Adapter State: Enabled
Adapter Transport Type: dlpi
Adapter Property(device_name): net
Adapter Property(device_instance): 3
Adapter Property(lazy_free): 0
Adapter Property(dlpi_heartbeat_timeout): 10000
Adapter Property(dlpi_heartbeat_quantum): 1000
Adapter Property(nw_bandwidth): 80
Adapter Property(bandwidth): 10
Adapter Property(ip_address): 172.16.0.129
Adapter Property(netmask): 255.255.255.128
Adapter Port Names: 0
Adapter Port State(0): Enabled
--- SNMP MIB Configuration on phys-schost-1 ---
SNMP MIB Name: Event
State: Disabled
Protocol: SNMPv2
--- SNMP Host Configuration on phys-schost-1 ---
--- SNMP User Configuration on phys-schost-1 ---
SNMP User Name: foo
Authentication Protocol: MD5
Default User: No
Node Name: phys-schost-2:za
Node ID: 2
Type: cluster
Enabled: yes
privatehostname: clusternode2-priv
reboot_on_path_failure: disabled
globalzoneshares: 1
defaultpsetmin: 2
quorum_vote: 1
quorum_defaultvote: 1
quorum_resv_key: 0x43CB1E1800000002
Transport Adapter List: e1000g1, nge1
--- Transport Adapters for phys-schost-2 ---
Transport Adapter: e1000g1
Adapter State: Enabled
Adapter Transport Type: dlpi
Adapter Property(device_name): e1000g
Adapter Property(device_instance): 2
Adapter Property(lazy_free): 0
Adapter Property(dlpi_heartbeat_timeout): 10000
Adapter Property(dlpi_heartbeat_quantum): 1000
Adapter Property(nw_bandwidth): 80
Adapter Property(bandwidth): 10
Adapter Property(ip_address): 172.16.0.130
Adapter Property(netmask): 255.255.255.128
Adapter Port Names: 0
Adapter Port State(0): Enabled
Transport Adapter: nge1
Adapter State: Enabled
Adapter Transport Type: dlpi
Adapter Property(device_name): nge
Adapter Property(device_instance): 3
Adapter Property(lazy_free): 1
Adapter Property(dlpi_heartbeat_timeout): 10000
Adapter Property(dlpi_heartbeat_quantum): 1000
Adapter Property(nw_bandwidth): 80
Adapter Property(bandwidth): 10
Adapter Property(ip_address): 172.16.1.2
Adapter Property(netmask): 255.255.255.128
Adapter Port Names: 0
Adapter Port State(0): Enabled
--- SNMP MIB Configuration on phys-schost-2 ---
SNMP MIB Name: Event
State: Disabled
Protocol: SNMPv2
--- SNMP Host Configuration on phys-schost-2 ---
--- SNMP User Configuration on phys-schost-2 ---
=== Transport Cables ===
Transport Cable: phys-schost-1:e1000g1,switch2@1
Cable Endpoint1: phys-schost-1:e1000g1
Cable Endpoint2: switch2@1
Cable State: Enabled
Transport Cable: phys-schost-1:nge1,switch1@1
Cable Endpoint1: phys-schost-1:nge1
Cable Endpoint2: switch1@1
Cable State: Enabled
Transport Cable: phys-schost-2:nge1,switch1@2
Cable Endpoint1: phys-schost-2:nge1
Cable Endpoint2: switch1@2
Cable State: Enabled
Transport Cable: phys-schost-2:e1000g1,switch2@2
Cable Endpoint1: phys-schost-2:e1000g1
Cable Endpoint2: switch2@2
Cable State: Enabled
=== Transport Switches ===
Transport Switch: switch2
Switch State: Enabled
Switch Type: switch
Switch Port Names: 1 2
Switch Port State(1): Enabled
Switch Port State(2): Enabled
Transport Switch: switch1
Switch State: Enabled
Switch Type: switch
Switch Port Names: 1 2
Switch Port State(1): Enabled
Switch Port State(2): Enabled
=== Quorum Devices ===
Quorum Device Name: d3
Enabled: yes
Votes: 1
Global Name: /dev/did/rdsk/d3s2
Type: shared_disk
Access Mode: scsi3
Hosts (enabled): phys-schost-1, phys-schost-2
Quorum Device Name: qs1
Enabled: yes
Votes: 1
Global Name: qs1
Type: quorum_server
Hosts (enabled): phys-schost-1, phys-schost-2
Quorum Server Host: 10.11.114.83
Port: 9000
=== Device Groups ===
Device Group Name: testdg3
Type: SVM
failback: no
Node List: phys-schost-1, phys-schost-2
preferenced: yes
numsecondaries: 1
diskset name: testdg3
=== Registered Resource Types ===
Resource Type: SUNW.LogicalHostname:2
RT_description: Logical Hostname Resource Type
RT_version: 4
API_version: 2
RT_basedir: /usr/cluster/lib/rgm/rt/hafoip
Single_instance: False
Proxy: False
Init_nodes: All potential masters
Installed_nodes: <All>
Failover: True
Pkglist: <NULL>
RT_system: True
Global_zone: True
Resource Type: SUNW.SharedAddress:2
RT_description: HA Shared Address Resource Type
RT_version: 2
API_version: 2
RT_basedir: /usr/cluster/lib/rgm/rt/hascip
Single_instance: False
Proxy: False
Init_nodes: <Unknown>
Installed_nodes: <All>
Failover: True
Pkglist: <NULL>
RT_system: True
Global_zone: True
Resource Type: SUNW.HAStoragePlus:4
RT_description: HA Storage Plus
RT_version: 4
API_version: 2
RT_basedir: /usr/cluster/lib/rgm/rt/hastorageplus
Single_instance: False
Proxy: False
Init_nodes: All potential masters
Installed_nodes: <All>
Failover: False
Pkglist: <NULL>
RT_system: True
Global_zone: True
Resource Type: SUNW.haderby
RT_description: haderby server for Oracle Solaris Cluster
RT_version: 1
API_version: 7
RT_basedir: /usr/cluster/lib/rgm/rt/haderby
Single_instance: False
Proxy: False
Init_nodes: All potential masters
Installed_nodes: <All>
Failover: False
Pkglist: <NULL>
RT_system: True
Global_zone: True
Resource Type: SUNW.sctelemetry
RT_description: sctelemetry service for Oracle Solaris Cluster
RT_version: 1
API_version: 7
RT_basedir: /usr/cluster/lib/rgm/rt/sctelemetry
Single_instance: True
Proxy: False
Init_nodes: All potential masters
Installed_nodes: <All>
Failover: False
Pkglist: <NULL>
RT_system: True
Global_zone: True
=== Resource Groups and Resources ===
Resource Group: HA_RG
RG_description: <Null>
RG_mode: Failover
RG_state: Managed
Failback: False
Nodelist: phys-schost-1 phys-schost-2
--- Resources for Group HA_RG ---
Resource: HA_R
Type: SUNW.HAStoragePlus:4
Type_version: 4
Group: HA_RG
R_description:
Resource_project_name: SCSLM_HA_RG
Enabled{phys-schost-1}: True
Enabled{phys-schost-2}: True
Monitored{phys-schost-1}: True
Monitored{phys-schost-2}: True
Resource Group: cl-db-rg
RG_description: <Null>
RG_mode: Failover
RG_state: Managed
Failback: False
Nodelist: phys-schost-1 phys-schost-2
--- Resources for Group cl-db-rg ---
Resource: cl-db-rs
Type: SUNW.haderby
Type_version: 1
Group: cl-db-rg
R_description:
Resource_project_name: default
Enabled{phys-schost-1}: True
Enabled{phys-schost-2}: True
Monitored{phys-schost-1}: True
Monitored{phys-schost-2}: True
Resource Group: cl-tlmtry-rg
RG_description: <Null>
RG_mode: Scalable
RG_state: Managed
Failback: False
Nodelist: phys-schost-1 phys-schost-2
--- Resources for Group cl-tlmtry-rg ---
Resource: cl-tlmtry-rs
Type: SUNW.sctelemetry
Type_version: 1
Group: cl-tlmtry-rg
R_description:
Resource_project_name: default
Enabled{phys-schost-1}: True
Enabled{phys-schost-2}: True
Monitored{phys-schost-1}: True
Monitored{phys-schost-2}: True
=== DID Device Instances ===
DID Device Name: /dev/did/rdsk/d1
Full Device Path: phys-schost-1:/dev/rdsk/c0t2d0
Replication: none
default_fencing: global
DID Device Name: /dev/did/rdsk/d2
Full Device Path: phys-schost-1:/dev/rdsk/c1t0d0
Replication: none
default_fencing: global
DID Device Name: /dev/did/rdsk/d3
Full Device Path: phys-schost-2:/dev/rdsk/c2t1d0
Full Device Path: phys-schost-1:/dev/rdsk/c2t1d0
Replication: none
default_fencing: global
DID Device Name: /dev/did/rdsk/d4
Full Device Path: phys-schost-2:/dev/rdsk/c2t2d0
Full Device Path: phys-schost-1:/dev/rdsk/c2t2d0
Replication: none
default_fencing: global
DID Device Name: /dev/did/rdsk/d5
Full Device Path: phys-schost-2:/dev/rdsk/c0t2d0
Replication: none
default_fencing: global
DID Device Name: /dev/did/rdsk/d6
Full Device Path: phys-schost-2:/dev/rdsk/c1t0d0
Replication: none
default_fencing: global
=== NAS Devices ===
Nas Device: nas_filer1
Type: sun_uss
nodeIPs{phys-schost-2}: 10.134.112.112
nodeIPs{phys-schost-1}: 10.134.112.113
User ID: root
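The full `cluster show` listing is long, so individual properties are easier to read when extracted. A sketch under the assumption that, as in the output above, properties appear as `name: value` pairs (sample lines embedded; on a live node pipe `cluster show` instead, and note that some releases also accept object-type filters such as `cluster show -t global`):

```shell
# Pull the heartbeat settings out of `cluster show`-style output.
# Sample lines mimic the === Cluster === section above; on a node:
#   cluster show | awk -F': *' '$1 ~ /heartbeat/ {print $1 "=" $2}'
sample='heartbeat_timeout: 10000
heartbeat_quantum: 1000
private_netaddr: 172.11.0.0
private_netmask: 255.255.248.0'

# -F': *' splits each line on the colon plus any following spaces
printf '%s\n' "$sample" | awk -F': *' '$1 ~ /heartbeat/ {print $1 "=" $2}'
```

Changing the pattern (for example to `/private_net/`) extracts any other group of settings the same way.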