BlueFusion Guru
Joined: 08 Mar 2006 Posts: 371
Posted: Sun Jan 17, 2016 3:15 am Post subject: Routing tables with kernel 4.4
I've discovered an odd thing on one of my PCs after upgrading from the 4.1.12 kernel to 4.4...
The following commands are used on this PC to allow SSH connections from both the LAN and the WAN. Outbound WAN access is restricted to the VPN only, so this exemption must be added so that SSH connections can establish when logging in remotely from the internet.
Code:
iptables -A OUTPUT -t mangle -p tcp --sport 22 -j MARK --set-mark=1
sysctl -w net.ipv4.conf.bond0.rp_filter=0
ip route add default via 10.2.1.1 dev bond0 table novpn
ip rule add fwmark 1 lookup novpn
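In case it helps anyone debugging the same thing, here is a rough sketch of how each piece of that setup can be sanity-checked after boot (not output from my box; the table name, gateway, and interface are just the values from the commands above):

```shell
#!/bin/sh
# Sanity-check sketch for the fwmark-based routing setup above.
# Table "novpn", gateway 10.2.1.1 and interface bond0 come from the post.
rules=$(ip rule show 2>/dev/null)
echo "$rules"                              # should include a "fwmark 0x1 lookup novpn" line
ip route show table novpn 2>/dev/null      # should show "default via 10.2.1.1 dev bond0"
cat /proc/sys/net/ipv4/conf/bond0/rp_filter 2>/dev/null  # should print 0
iptables -t mangle -L OUTPUT -n -v 2>/dev/null || true   # MARK rule with nonzero packet counters
```

If the rule, the table entry, and the rp_filter value all look right but SSH still resets, the problem is more likely in how the kernel is using them than in the configuration itself.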
On kernel 4.4, with everything else (commands and software versions) being the same, I often get one of the following errors when attempting to SSH into this box while the firewall and VPN are active.
Code:
ssh_exchange_identification: read: Connection reset by peer
write: Connection reset by peer

I say often because, if I keep trying, it eventually connects and then everything works fine. Only SSH is affected; no other connections have any issues. It usually takes between 7 and 15 attempts for SSH to establish a connection.
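To put a number on the flakiness, a small retry loop can count the failed attempts before one succeeds (a generic sketch; "phoenix" is just a placeholder for the affected host):

```shell
#!/bin/sh
# Count failed SSH handshake attempts before one succeeds, capped at 30.
# HOST is a placeholder hostname; point it at the affected box.
HOST=${HOST:-phoenix}
attempt() { ssh -o BatchMode=yes -o ConnectTimeout=3 "$HOST" true 2>/dev/null; }

fails=0
while [ "$fails" -lt 30 ] && ! attempt; do
    fails=$((fails + 1))
done
echo "failed attempts before success: $fails"
```

Running this on both kernels would make the comparison concrete instead of "7 to 15 attempts" from memory.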
On a hunch, I rebooted back into the 4.1.12 kernel (gentoo-sources), and it worked perfectly again without any other changes.
Is there a change in how routing tables are handled in the later kernels, or is this a bug?
_________________
i7-940 2.93GHz | ASUS P6T Deluxe (v.1) | 24GB Triple Channel RAM | nVidia GTX660
4x 4TB Seagate NAS HDD (Btrfs raid5) | 2x 120GB Samsung 850 EVO SSD (Btrfs raid1)
hydrapolic Tux's lil' helper
Joined: 07 Feb 2008 Posts: 126
Posted: Tue Jan 19, 2016 7:06 pm Post subject:
I also have this problem. It really seems like only the start of the SSH communication misbehaves; once connected, it works just fine.
Can you please share your /etc/conf.d/net? My configuration is:
Code:
config_eth0="null"
config_eth1="null"
slaves_bond0="eth0 eth1"
config_bond0="null"
bridge_xenbr0="bond0"
config_xenbr0="10.1.1.2/24"
routes_xenbr0="default via 10.1.1.1"
dns_servers_xenbr0="10.1.1.1"
dns_domain_xenbr0="example.com"
BlueFusion Guru
Joined: 08 Mar 2006 Posts: 371
Posted: Tue Jan 19, 2016 7:26 pm Post subject:
Sure thing, here's my /etc/conf.d/net:
Code:
config_eth0="null"
config_eth1="null"
config_eth2="null"
slaves_bond0="eth0 eth1 eth2"
mode_bond0="802.3ad"
mac_bond0="00:18:f3:13:84:df"
config_bond0="dhcp"
The IP is assigned by DHCP from my router. I have 3 NICs bonded to a Netgear "Smart" switch, which has the LAG configured.
Code:
rich@phoenix ~ $ ifconfig bond0
bond0: flags=5187<UP,BROADCAST,RUNNING,MASTER,MULTICAST> mtu 1500
inet 10.2.1.12 netmask 255.255.255.0 broadcast 10.2.1.255
inet6 <<removed>> prefixlen 64 scopeid 0x0<global>
inet6 fe80::ea26:46c2:f5d2:5098 prefixlen 64 scopeid 0x20<link>
ether 00:18:f3:13:84:df txqueuelen 0 (Ethernet)
RX packets 36029097 bytes 31029428228 (28.8 GiB)
RX errors 0 dropped 54895 overruns 0 frame 0
TX packets 18695206 bytes 2935792706 (2.7 GiB)
TX errors 0 dropped 3 overruns 0 carrier 0 collisions 0
Code:
rich@phoenix ~ $ cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)
Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer3+4 (1)
MII Status: up
MII Polling Interval (ms): 0
Up Delay (ms): 0
Down Delay (ms): 0
802.3ad info
LACP rate: slow
Min links: 0
Aggregator selection policy (ad_select): stable
Active Aggregator Info:
Aggregator ID: 3
Number of ports: 2
Actor Key: 9
Partner Key: 55
Partner Mac Address: 00:22:3f:8b:ec:bf
Slave Interface: eth0
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:18:f3:13:84:df
Slave queue ID: 0
Aggregator ID: 1
Actor Churn State: churned
Partner Churn State: churned
Actor Churned Count: 2
Partner Churned Count: 2
details actor lacp pdu:
system priority: 0
port key: 9
port priority: 255
port number: 1
port state: 69
details partner lacp pdu:
system priority: 65535
oper key: 1
port priority: 255
port number: 1
port state: 1
Slave Interface: eth1
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:18:f3:13:84:df
Slave queue ID: 0
Aggregator ID: 3
Actor Churn State: none
Partner Churn State: none
Actor Churned Count: 0
Partner Churned Count: 0
details actor lacp pdu:
system priority: 0
port key: 9
port priority: 255
port number: 2
port state: 61
details partner lacp pdu:
system priority: 32768
oper key: 55
port priority: 128
port number: 5
port state: 61
Slave Interface: eth2
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:18:f3:13:84:df
Slave queue ID: 0
Aggregator ID: 3
Actor Churn State: none
Partner Churn State: none
Actor Churned Count: 0
Partner Churned Count: 0
details actor lacp pdu:
system priority: 0
port key: 9
port priority: 255
port number: 3
port state: 61
details partner lacp pdu:
system priority: 32768
oper key: 55
port priority: 128
port number: 4
port state: 61
hydrapolic Tux's lil' helper
Joined: 07 Feb 2008 Posts: 126
Posted: Wed Jan 20, 2016 6:30 am Post subject:
Hi @BlueFusion, what hardware are you running that on? On my side it's a Supermicro X10DRW with igb networking (Intel PCI Express Gigabit Ethernet).
I've tested disabling the bond and removing iptables; neither helped.
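If bonding and iptables are both ruled out, one way to narrow it down further might be to watch the handshake on the wire and see which side sends the reset. A generic capture along these lines (interface name is whatever your setup uses, not something from this thread):

```shell
# Capture SYN and RST packets on the SSH port while reproducing the
# failure; -c 20 stops after 20 matching packets.  bond0 is the
# interface from BlueFusion's setup -- adjust to yours.
tcpdump -c 20 -ni bond0 'tcp port 22 and (tcp[tcpflags] & (tcp-syn|tcp-rst) != 0)'
```

Whether the RST arrives from the peer or is generated locally (e.g. never seen on the wire at all) would say a lot about where the 4.4 behavior change lives.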
hydrapolic Tux's lil' helper
Joined: 07 Feb 2008 Posts: 126
Posted: Thu Mar 24, 2016 8:47 am Post subject:
I can no longer reproduce this on 4.4.6. Can you?