Linux Systems: Configuring NIC Bonding

江湖有缘

I. Check the Local System Environment

1. Check the system version

[root@Server001 ~]# cat /etc/os-release 
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"

CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"
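
A quick alternative, assuming a systemd-based host such as CentOS 7, is to let hostnamectl and uname summarize the same information; the bonding driver ships with the kernel, so the kernel release is the part that actually matters:

# Print an OS / kernel / architecture summary
hostnamectl
# Print only the running kernel release
uname -r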

2. Check the server's network interfaces

[root@Server001 network-scripts]# ifconfig  -a
bond0: flags=5123<UP,BROADCAST,MASTER,MULTICAST>  mtu 1500
        inet 192.168.30.122  netmask 255.255.255.0  broadcast 192.168.30.255
        ether a6:ad:e5:84:f0:6e  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.3.55  netmask 255.255.255.0  broadcast 192.168.3.255
        inet6 fe80::2a6e:d4ff:fe89:8720  prefixlen 64  scopeid 0x20<link>
        ether 28:6e:d4:89:87:20  txqueuelen 1000  (Ethernet)
        RX packets 2256  bytes 439140 (428.8 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 428  bytes 68770 (67.1 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::2a6e:d4ff:fe8a:3299  prefixlen 64  scopeid 0x20<link>
        ether 28:6e:d4:8a:32:99  txqueuelen 1000  (Ethernet)
        RX packets 1617  bytes 386452 (377.3 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 7  bytes 586 (586.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth2: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::2a6e:d4ff:fe88:f490  prefixlen 64  scopeid 0x20<link>
        ether 28:6e:d4:88:f4:90  txqueuelen 1000  (Ethernet)
        RX packets 1617  bytes 386452 (377.3 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 7  bytes 586 (586.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
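
If the net-tools package (which provides ifconfig) is not installed, the same inventory can be taken with the iproute2 tools that ship with CentOS 7; for example:

# List every interface with its link state and MAC address
ip link show
# List interfaces together with their IPv4/IPv6 addresses
ip addr show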


II. Create the NIC Configuration Files

1. Change into the NIC configuration directory

[root@Server001 ~]# cd /etc/sysconfig/network-scripts/
[root@Server001 network-scripts]# ls
ifcfg-bond0  ifdown-eth   ifdown-ppp       ifdown-tunnel  ifup-ippp   ifup-post    ifup-TeamPort      network-functions-ipv6
ifcfg-eth0   ifdown-ippp  ifdown-routes    ifup           ifup-ipv6   ifup-ppp     ifup-tunnel
ifcfg-lo     ifdown-ipv6  ifdown-sit       ifup-aliases   ifup-isdn   ifup-routes  ifup-wireless
ifdown       ifdown-isdn  ifdown-Team      ifup-bnep      ifup-plip   ifup-sit     init.ipv6-global
ifdown-bnep  ifdown-post  ifdown-TeamPort  ifup-eth       ifup-plusb  ifup-Team    network-functions

2. Copy the eth0 configuration file

[root@Server001 network-scripts]# cp ifcfg-eth0 ifcfg-eth1
[root@Server001 network-scripts]# cp ifcfg-eth0 ifcfg-eth2
[root@Server001 network-scripts]# cp ifcfg-eth0 ifcfg-bond0

3. Edit the bond0 configuration file

[root@Server001 network-scripts]# cat ifcfg-bond0 
DEVICE=bond0
BOOTPROTO=none
TYPE=Bond
ONBOOT=yes
IPADDR=192.168.30.122
NETMASK=255.255.255.0
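
As an alternative to the modprobe.d file created later in this article, CentOS 7's network scripts also accept the driver options directly in the master's ifcfg file through BONDING_OPTS. A minimal sketch of that variant, using the same mode=1 and miimon=100 values, might look like this (not what this walkthrough uses, just an option to be aware of):

# /etc/sysconfig/network-scripts/ifcfg-bond0 (BONDING_OPTS variant)
DEVICE=bond0
TYPE=Bond
BONDING_MASTER=yes
BOOTPROTO=none
ONBOOT=yes
IPADDR=192.168.30.122
NETMASK=255.255.255.0
BONDING_OPTS="mode=1 miimon=100"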

4. Edit the eth1 configuration file

[root@Server001 network-scripts]# cat ifcfg-eth1
DEVICE=eth1
BOOTPROTO=none
TYPE=Ethernet
MASTER=bond0
SLAVE=yes

5. Edit the eth2 configuration file

[root@Server001 network-scripts]# cat ifcfg-eth2
DEVICE=eth2
BOOTPROTO=none
TYPE=Ethernet
MASTER=bond0
SLAVE=yes
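
Because the two slave files differ only in the device name, they can also be generated in one short loop; a minimal sketch, assuming eth1 and eth2 are the slaves (ONBOOT=yes is added here so the slaves also come up at boot):

# Overwrite the slave configs for eth1 and eth2 with identical content
for dev in eth1 eth2; do
  cat > /etc/sysconfig/network-scripts/ifcfg-$dev <<EOF
DEVICE=$dev
BOOTPROTO=none
TYPE=Ethernet
ONBOOT=yes
MASTER=bond0
SLAVE=yes
EOF
done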

III. Create the Bonding Module Configuration File

1. Edit bonding.conf

[root@node network-scripts]# vim /etc/modprobe.d/bonding.conf
[root@node network-scripts]# cat /etc/modprobe.d/bonding.conf
alias bond0 bonding
options bonding mode=1 miimon=100

Note on mode: mode=0 is balance-rr (round-robin), mode=1 is active-backup, mode=3 is broadcast, and mode=4 is 802.3ad (LACP link aggregation); the driver also supports mode=2 (balance-xor), mode=5 (balance-tlb), and mode=6 (balance-alb).
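
After bond0 is up, the mode and polling interval the kernel is actually using can be read back from sysfs to confirm these options took effect; assuming the bond device is named bond0:

# Show the bonding mode in effect (should report "active-backup 1" here)
cat /sys/class/net/bond0/bonding/mode
# Show the MII link-monitoring interval in milliseconds
cat /sys/class/net/bond0/bonding/miimon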

2. Stop the NetworkManager service

systemctl stop NetworkManager
systemctl disable NetworkManager
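
To confirm NetworkManager is really out of the way before the network service is restarted, its state can be checked with:

# Expected output after the steps above: "inactive" and "disabled"
systemctl is-active NetworkManager
systemctl is-enabled NetworkManager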

3. Load the bonding module

[root@Server001 network-scripts]# lsmod |grep bonding
[root@Server001 network-scripts]#  modprobe bonding
[root@Server001 network-scripts]#  lsmod |grep bonding
bonding               152656  0 
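
The module's documented parameters (mode, miimon, and so on) can be listed with modinfo if you ever need to check a parameter name:

# List the bonding module's parameters and their descriptions
modinfo -p bonding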

4. Restart the network service

systemctl restart network
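
After the restart it is worth confirming that bond0 holds the address and that both slaves report bond0 as their master; a quick check using the names from this article:

# bond0 should now carry 192.168.30.122
ip addr show bond0
# Each slave's output should contain "master bond0"
ip link show eth1
ip link show eth2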

IV. Check the NIC Bonding Status

1. Check the local NICs again

[root@Server001 network-scripts]# ifconfig 
bond0: flags=5187<UP,BROADCAST,RUNNING,MASTER,MULTICAST>  mtu 1500
        inet 192.168.30.122  netmask 255.255.255.0  broadcast 192.168.30.255
        inet6 fe80::2a6e:d4ff:fe8a:3299  prefixlen 64  scopeid 0x20<link>
        ether 28:6e:d4:8a:32:99  txqueuelen 1000  (Ethernet)
        RX packets 2426  bytes 748394 (730.8 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 13  bytes 838 (838.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.3.55  netmask 255.255.255.0  broadcast 192.168.3.255
        inet6 fe80::2a6e:d4ff:fe89:8720  prefixlen 64  scopeid 0x20<link>
        ether 28:6e:d4:89:87:20  txqueuelen 1000  (Ethernet)
        RX packets 2853  bytes 740694 (723.3 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 478  bytes 75189 (73.4 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth1: flags=6211<UP,BROADCAST,RUNNING,SLAVE,MULTICAST>  mtu 1500
        ether 28:6e:d4:8a:32:99  txqueuelen 1000  (Ethernet)
        RX packets 2229  bytes 689858 (673.6 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 13  bytes 838 (838.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth2: flags=6211<UP,BROADCAST,RUNNING,SLAVE,MULTICAST>  mtu 1500
        ether 28:6e:d4:8a:32:99  txqueuelen 1000  (Ethernet)
        RX packets 2243  bytes 690766 (674.5 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

2. Check the bond status

[root@Server001 network-scripts]# cat  /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: eth1
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: eth1
MII Status: up
Speed: Unknown
Duplex: Unknown
Link Failure Count: 0
Permanent HW addr: 28:6e:d4:8a:32:99
Slave queue ID: 0

Slave Interface: eth2
MII Status: up
Speed: Unknown
Duplex: Unknown
Link Failure Count: 0
Permanent HW addr: 28:6e:d4:88:f4:90
Slave queue ID: 0
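
The same information is exposed through sysfs, which is convenient for scripting; for example:

# List the interfaces currently enslaved to bond0
cat /sys/class/net/bond0/bonding/slaves
# Show which slave is active (meaningful in active-backup mode)
cat /sys/class/net/bond0/bonding/active_slave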

V. Test Network Connectivity

1. Ping the server from a local client

ping 192.168.30.122
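
For the failover tests in the next two sections it helps to leave a continuous ping running on the client while the interfaces are toggled; assuming a Linux client with iputils ping, for example:

# Continuous ping with a timestamp per reply; stop with Ctrl-C to see the loss summary
ping -D 192.168.30.122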

The ping from the local client to 192.168.30.122 succeeds.

VI. Shut Down eth1 and Test Connectivity

1. Shut down the eth1 interface

[root@Server001 network-scripts]# ifdown eth1
[root@Server001 network-scripts]# 
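
Note that ifdown also tears down the interface configuration. If the goal is only to simulate a link failure, administratively downing the link with iproute2 is an alternative (a sketch, not what this article does):

# Take the link down without touching the ifcfg file
ip link set eth1 down
# Bring it back up afterwards
ip link set eth1 up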

2. Check connectivity from the local client

The client can still ping the server (192.168.30.122) normally.

3. Check the current bond0 status

The active slave has switched to eth2, which now carries the traffic.

[root@Server001 network-scripts]# cat  /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: eth2
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: eth2
MII Status: up
Speed: Unknown
Duplex: Unknown
Link Failure Count: 0
Permanent HW addr: 28:6e:d4:88:f4:90
Slave queue ID: 0
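
To watch the switchover happen in real time, the active-slave line can be polled while an interface is taken down; for example:

# Refresh the active-slave line every second
watch -n 1 "grep 'Currently Active Slave' /proc/net/bonding/bond0"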

VII. Shut Down eth2 and Test Connectivity

1. Bring eth1 back up and shut down eth2

[root@Server001 network-scripts]# ifup eth1
[root@Server001 network-scripts]# ifdown eth2

2. Test connectivity from the local client

The client can still ping the server normally.

3. Check the current bond0 status

The active slave has switched back to eth1, which now carries the traffic.

[root@Server001 network-scripts]# cat  /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: eth1
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: eth1
MII Status: up
Speed: Unknown
Duplex: Unknown
Link Failure Count: 0
Permanent HW addr: 28:6e:d4:8a:32:99
Slave queue ID: 0