Preface
This section covers the deployment of a rabbitmq high-availability cluster on centos7. We prepare three servers for the cluster. A rabbitmq cluster does not provide high availability on its own, so we also configure mirrored queues on the rabbitmq servers, which replicate messages across multiple nodes in the cluster, improving availability and fault tolerance and avoiding a single point of failure.
rabbitmq high availability cluster server planning
Hostname | IP | Service |
---|---|---|
hadoop101 | 192.168.10.101 | rabbitmq |
hadoop102 | 192.168.10.102 | rabbitmq |
hadoop103 | 192.168.10.103 | rabbitmq |
①Upload the erlang and rabbitmq installation packages to servers hadoop101, hadoop102, and hadoop103
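For example, assuming the rpm packages are in the current directory on your local machine and you upload them to /opt (the target directory is only an illustration):
scp erlang-26.2.3-1.el7.x86_64.rpm rabbitmq-server-3.13.0-1.el8.noarch.rpm root@hadoop101:/opt/
scp erlang-26.2.3-1.el7.x86_64.rpm rabbitmq-server-3.13.0-1.el8.noarch.rpm root@hadoop102:/opt/
scp erlang-26.2.3-1.el7.x86_64.rpm rabbitmq-server-3.13.0-1.el8.noarch.rpm root@hadoop103:/opt/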
②Install the erlang environment on each server with the rpm command
Command:
sudo rpm -ivh erlang-26.2.3-1.el7.x86_64.rpm
Check whether erlang is installed successfully:
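For example, you can query the installed package or print the erlang version:
rpm -qa | grep erlang
erl -version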
③Use the rpm command to install the rabbitmq server
Command:
sudo rpm -ivh rabbitmq-server-3.13.0-1.el8.noarch.rpm
④Start rabbitmq server
Command:
# Set the rabbitmq server to start automatically at boot
systemctl enable rabbitmq-server
# Start the rabbitmq server
systemctl start rabbitmq-server
# View the rabbitmq server status
systemctl status rabbitmq-server
# Stop the rabbitmq server
systemctl stop rabbitmq-server
# Restart the rabbitmq server
systemctl restart rabbitmq-server
⑤Enable rabbitmq's web management console (the rabbitmq_management plugin)
- Enable the management plugin on each rabbitmq server
rabbitmq-plugins enable rabbitmq_management
- Use a browser to access the rabbitmq management UI
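By default the management UI listens on port 15672, so you can browse to, for example, http://hadoop101:15672. If firewalld is running, you may need to open the port first, e.g.:
firewall-cmd --permanent --add-port=15672/tcp
firewall-cmd --reload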
- Add an account for accessing rabbitmq
# Create user
rabbitmqctl add_user <username> <password>
# Set user role
rabbitmqctl set_user_tags <user> <role>
# Set user permissions
rabbitmqctl set_permissions [-p <vhostpath>] <user> <conf> <write> <read>
# View users
rabbitmqctl list_users
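For example, to create the admin account used below (the password here is only a placeholder, choose your own):
rabbitmqctl add_user admin <password>
rabbitmqctl set_user_tags admin administrator
rabbitmqctl set_permissions -p / admin ".*" ".*" ".*"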
- Use the newly created admin account to log in to the web UI
⑥Configure the mapping between host names and IP addresses so that the servers can reach each other by host name (see the sketch below). For passwordless SSH access between servers and copying files between them, please refer to the author's earlier blog posts
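A minimal sketch, run as root on every node, using the addresses from the planning table above:
cat >> /etc/hosts << 'EOF'
192.168.10.101 hadoop101
192.168.10.102 hadoop102
192.168.10.103 hadoop103
EOF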
⑦Distribute the erlang cookie from hadoop101 to hadoop102 and hadoop103 so that every node in the cluster uses the same cookie
scp /var/lib/rabbitmq/.erlang.cookie root@hadoop102:/var/lib/rabbitmq/.erlang.cookie
scp /var/lib/rabbitmq/.erlang.cookie root@hadoop103:/var/lib/rabbitmq/.erlang.cookie
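After copying, make sure the cookie on hadoop102 and hadoop103 is still owned by the rabbitmq user and readable only by it, otherwise the node may fail to start, then restart the service so the new cookie takes effect:
chown rabbitmq:rabbitmq /var/lib/rabbitmq/.erlang.cookie
chmod 400 /var/lib/rabbitmq/.erlang.cookie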
⑧Execute the following commands on the hadoop102 and hadoop103 nodes respectively to join the nodes to the cluster
- Start the rabbitmq service in the background
rabbitmq-server -detached
- Shut down rabbitmq server
rabbitmqctl stop_app
- Reset the rabbitmq server (this returns the node to a blank state, clearing its data)
rabbitmqctl reset
- Join the node to the cluster via hadoop101
rabbitmqctl join_cluster rabbit@hadoop101
- Start application
rabbitmqctl start_app
- View cluster status
rabbitmqctl cluster_status
⑨Remove a rabbitmq node from the cluster, taking hadoop103 as an example
- Stop the application on the hadoop103 node
rabbitmqctl -n rabbit@hadoop103 stop_app
- On either hadoop101 or hadoop102, remove hadoop103 from the cluster
rabbitmqctl forget_cluster_node rabbit@hadoop103
- View the cluster status
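rabbitmqctl cluster_status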
⑩Create a mirrored-queue policy to synchronize messages across the rabbitmq cluster nodes
- By default a rabbitmq cluster is not highly available: queue data is not replicated between nodes. The mirrored queue (Mirror Queue) mechanism replicates a queue to other nodes, so if the node hosting the queue fails, the queue automatically fails over to one of its mirrors and the service remains available
- Add a mirroring policy; multiple policies can be added according to actual needs, for example the one shown below
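A minimal sketch of such a policy, mirroring every queue to all nodes with automatic synchronization (the policy name ha-all and the pattern "^" are only examples):
rabbitmqctl set_policy ha-all "^" '{"ha-mode":"all","ha-sync-mode":"automatic"}'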
- Check whether the policy is in effect
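For example, list the policies on any node (the policy is also visible on the Admin → Policies page of the management UI):
rabbitmqctl list_policies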
- Stop node hadoop101; the cluster can still be used normally: failover occurs and the queue is served by the other nodes
- Restart node hadoop101 and the number of replicas is restored
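For example, bring the node back with one of the start commands used earlier:
systemctl start rabbitmq-server
# or start it in the background directly
rabbitmq-server -detached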
Conclusion
At this point, the tutorial on building a rabbitmq high-availability cluster comes to an end. See you in the next issue.