
Resolved
0 votes
Tonight (18/02/20-19/02/20) the following updated packages are being released:

  • app-dns - updated version to bind to configured interfaces only and not 0.0.0.0
  • app-events - csplugin-events was updated a couple of weeks ago. This brings the updated filter live and also resets the red event counter on the top right of the webconfig. The count included a huge amount of logging which the updated filter removed.

* = the package is also being released to Business at the same time.

Packages available for testing

  • yum-marketplace-plugin - updated to work with a distributed mirrorlist. If testing, the main thing is to check that yum and the Marketplace still work. Also check whether it still works if you point mirrorlist.clearos.com to 67.205.144.164 in your hosts file. This is the big test (there are 9 other IPs as well, but testing one should be fine).
  • app-dns - Gateway Management/DNSThingy is going to be updated and this is needed for correct interoperation with the AD Connector.
  • app-active-directory - this must be installed with app-dns from updates-testing. It allows correct operation with both the old and new versions of Gateway Management. Install with:
    yum update app-dns app-active-directory --disablerepo=* --enablerepo=clearos-updates-testing,clearos-paid-testing

  • app-network - code merge complete. Seems to work for everything except external VLANs and a few quirks.

    • Now allows you to set up Wireless and Cellular interfaces. You will also need app-wireless to configure a NIC as an access point; otherwise, manual configuration is needed for WiFi and Cellular devices.
    • I've tweaked it for kernel-mode PPPoE (much faster and lower resource usage). For the moment we are not forcibly converting PPPoE interfaces, but if you edit an interface it will switch to kernel mode.
    • Hides irrelevant interfaces from app-network-report such as docker0, veth* and ifb*.
    • Numerous other changes since the last 2.6.0 release.
    • Do not use the update if you use VLANs on external interfaces.
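
The mirrorlist check described for yum-marketplace-plugin above can be sketched as follows. This is a hedged sketch: 67.205.144.164 is one of the mirrorlist IPs named above, and the exact hosts-file pinning line is my assumption.

```shell
# Pin mirrorlist.clearos.com to one of the mirror IPs (run as root).
echo '67.205.144.164 mirrorlist.clearos.com' >> /etc/hosts
# Both of these should still succeed via the pinned mirror.
yum clean all
yum makecache
# Undo the pin afterwards.
sed -i '/mirrorlist.clearos.com/d' /etc/hosts
```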


Unless detailed otherwise, packages available for testing can be installed with:
yum update --enablerepo=clearos-updates-testing {package-name}

Packages being worked on:

  • app-storage
  • app-sia - In theory it worked, but in practice it does not; fixes are needed. I have an idea to get it working in Gateway mode, but it is deathly slow.
  • app-openvpn to add three configuration parameters (client-to-client, "push block-outside-dns" and to force all traffic through the VPN). This is being worked on by an external contributor.
  • app-attack-detector to add a button beside each banned IP so you can unban it. This is being worked on by an external contributor.
  • kimchi 3.0 and wok 3.0. I can build and install wok and get it running with one Python error. Kimchi builds but cannot be installed as I am missing a couple of dependencies. Kimchi is a great VM manager front end for KVM/libvirt. I have a feeling EPEL is going to end up being a brick wall, so I've stopped on this one.
  • app-network - bug #41 should now be fixed. Two more bugs to go. Team Canada will be working on it.
  • nextcloud - upstream v18
  • Gateway Management/DNSThingy - Big update. Also DNSThingy is rebranding to AdamOne. Files are available for testing but subject to a discussion at the moment. More info as soon as I get it.
Wednesday, February 19 2020, 09:26 AM
Responses (11)
  • Accepted Answer

    Wednesday, February 19 2020, 07:30 PM - #Permalink
    I've pushed a temporary fix to disable it and it is synchronising to the repos. You can try updating with:
    yum clean all && yum update app-dns
    If you see app-dns-2.7.11-1.v7, update.

    I really do need help testing the app, because each time it has failed for users it has been OK on all the machines I tested on - at least three here and two in the US.

    @Blake Andreasen and anyone else with this problem, please also give the output to:
    ip a | grep '^\S'
    grep '...IF' /etc/clearos/network.conf


    But most of all I need testers when I put a fix into updates-testing. I should do it tomorrow and will post to this thread.
  • Accepted Answer

    Wednesday, February 19 2020, 03:04 PM - #Permalink
    New problem with the app-dns update:

    Feb 19 09:58:19 c7 kernel: IPv6: ADDRCONF(NETDEV_UP): enp4s4: link is not ready
    Feb 19 09:58:33 c7 dnsmasq: dnsmasq: unknown interface enp4s4
    Feb 19 09:58:33 c7 systemd: dnsmasq.service: main process exited, code=exited, status=2/INVALIDARGUMENT
    Feb 19 09:58:33 c7 systemd: Unit dnsmasq.service entered failed state.
    Feb 19 09:58:33 c7 systemd: dnsmasq.service failed.
    Feb 19 09:58:33 c7 dnsmasq[9391]: unknown interface enp4s4
    Feb 19 09:58:33 c7 dnsmasq[9391]: FAILED to start up


    enp4s4 is an external interface, a dhcp client, and the cable to it is currently unplugged.


    ... Whatever we're trying to solve by not binding to 0.0.0.0, is there any other imaginable resolution? Am I going to be able to change the binding back manually?

    I have an application where I actually need it bound to an external interface (appropriately firewalled, of course) -- is this going to eventually become a problem?


    In any case, the system throwing this problem at me this morning is actually at my house, and (the opinions of my game-addicted roommate aside) non-critical, so I'll leave it broken until I get home in case further information is needed, but I bet you already know what's up since the update is so recent.
  • Accepted Answer

    Wednesday, February 19 2020, 02:41 PM - #Permalink
    yum-marketplace-plugin has been fast-tracked into Community and is just sync'ing to the repos.
  • Accepted Answer

    Wednesday, February 19 2020, 05:08 PM - #Permalink
    Please can you give the contents of /etc/clearos/network.conf and the output of "ifconfig"?

    To disable it, please set auto_configure to "no" in /etc/clearos/dns.conf and delete /etc/dnsmasq.d/bind.conf. Then restart dnsmasq. Alternatively, delete the interface in the IP Settings.
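
    A minimal shell sketch of those steps (assuming auto_configure appears as a plain "key = value" line in dns.conf, and running as root):

```shell
# Back up dns.conf, then flip auto_configure to "no".
cp /etc/clearos/dns.conf /etc/clearos/dns.conf.bak
sed -i 's/^auto_configure.*/auto_configure = no/' /etc/clearos/dns.conf
# Remove the generated binding fragment and restart dnsmasq.
rm -f /etc/dnsmasq.d/bind.conf
systemctl restart dnsmasq
```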
  • Accepted Answer

    Wednesday, February 19 2020, 05:31 PM - #Permalink
    Nick Howitt wrote:

    Please can you give the contents of /etc/clearos/network.conf and the output of "ifconfig"?

    To disable it, please set auto_configure to "no" in /etc/clearos/dns.conf and delete /etc/dnsmasq.d/bind.conf. Then restart dnsmasq. Alternatively, delete the interface in the IP Settings.


    network.conf:
    # Network mode
    MODE="gateway"

    # Network interface roles
    EXTIF="enp4s4 enp4s7"
    LANIF="enp0s25 enp0s25.6 enp0s25.9"
    DMZIF=""
    HOTIF="enp0s25.4091 enp0s25.4092 enp0s25.4093 enp0s25.4094"

    # Domain and Internet Hostname
    DEFAULT_DOMAIN="(redacted)"
    INTERNET_HOSTNAME="(redacted)"

    # Extra LANS
    EXTRALANS=""

    # ISP Maximum Speeds
    ENP0S26U1U6_MAX_DOWNSTREAM=1000
    ENP0S26U1U6_MAX_UPSTREAM=1000
    ENP4S7_MAX_DOWNSTREAM=10240
    ENP4S7_MAX_UPSTREAM=87040
    ENP4S6_MAX_DOWNSTREAM=67150
    ENP4S6_MAX_UPSTREAM=5750
    ENP4S4_MAX_DOWNSTREAM=1024
    ENP4S4_MAX_UPSTREAM=1024
    ENP4S5_MAX_DOWNSTREAM=0
    ENP4S5_MAX_UPSTREAM=0


    ifconfig:
    enp0s25: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
    inet 172.27.72.127 netmask 255.255.255.0 broadcast 172.27.72.255
    inet6 fe80::227:eff:fe15:a53 prefixlen 64 scopeid 0x20<link>
    ether 00:27:0e:15:0a:53 txqueuelen 1000 (Ethernet)
    RX packets 682739433 bytes 298619407242 (278.1 GiB)
    RX errors 0 dropped 9061 overruns 0 frame 0
    TX packets 1264701954 bytes 1634446678579 (1.4 TiB)
    TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
    device interrupt 20 memory 0xf3300000-f3320000

    enp0s25.6: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
    inet 10.171.42.127 netmask 255.255.255.0 broadcast 10.171.42.255
    inet6 fe80::227:eff:fe15:a53 prefixlen 64 scopeid 0x20<link>
    ether 00:27:0e:15:0a:53 txqueuelen 1000 (Ethernet)
    RX packets 188868 bytes 50976086 (48.6 MiB)
    RX errors 0 dropped 0 overruns 0 frame 0
    TX packets 173455 bytes 39449024 (37.6 MiB)
    TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

    enp0s25.9: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
    inet 192.168.9.1 netmask 255.255.255.128 broadcast 192.168.9.127
    inet6 fe80::227:eff:fe15:a53 prefixlen 64 scopeid 0x20<link>
    ether 00:27:0e:15:0a:53 txqueuelen 1000 (Ethernet)
    RX packets 244220949 bytes 33019508424 (30.7 GiB)
    RX errors 0 dropped 22420 overruns 0 frame 0
    TX packets 380126909 bytes 516294854108 (480.8 GiB)
    TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

    enp0s25.4091: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
    inet 10.13.47.17 netmask 255.255.255.248 broadcast 10.13.47.23
    inet6 fe80::227:eff:fe15:a53 prefixlen 64 scopeid 0x20<link>
    ether 00:27:0e:15:0a:53 txqueuelen 1000 (Ethernet)
    RX packets 321632 bytes 23205160 (22.1 MiB)
    RX errors 0 dropped 0 overruns 0 frame 0
    TX packets 586628 bytes 582632790 (555.6 MiB)
    TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

    enp0s25.4092: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
    inet 172.5.16.65 netmask 255.255.255.252 broadcast 172.5.16.67
    inet6 fe80::227:eff:fe15:a53 prefixlen 64 scopeid 0x20<link>
    ether 00:27:0e:15:0a:53 txqueuelen 1000 (Ethernet)
    RX packets 219337 bytes 42647743 (40.6 MiB)
    RX errors 0 dropped 0 overruns 0 frame 0
    TX packets 409925 bytes 514454146 (490.6 MiB)
    TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

    enp0s25.4093: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
    inet 10.172.66.1 netmask 255.255.255.0 broadcast 10.172.66.255
    inet6 fe80::227:eff:fe15:a53 prefixlen 64 scopeid 0x20<link>
    ether 00:27:0e:15:0a:53 txqueuelen 1000 (Ethernet)
    RX packets 0 bytes 0 (0.0 B)
    RX errors 0 dropped 0 overruns 0 frame 0
    TX packets 38 bytes 3637 (3.5 KiB)
    TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

    enp0s25.4094: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
    inet 172.10.52.1 netmask 255.255.255.0 broadcast 172.10.52.255
    inet6 fe80::227:eff:fe15:a53 prefixlen 64 scopeid 0x20<link>
    ether 00:27:0e:15:0a:53 txqueuelen 1000 (Ethernet)
    RX packets 0 bytes 0 (0.0 B)
    RX errors 0 dropped 0 overruns 0 frame 0
    TX packets 38 bytes 3637 (3.5 KiB)
    TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

    enp4s4: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
    ether 00:80:c8:ca:4e:6d txqueuelen 1000 (Ethernet)
    RX packets 2661602 bytes 2095300121 (1.9 GiB)
    RX errors 6 dropped 0 overruns 0 frame 0
    TX packets 2400834 bytes 704268411 (671.6 MiB)
    TX errors 42 dropped 0 overruns 6 carrier 36 collisions 0

    enp4s7: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
    inet (redacted) netmask 255.255.248.0 broadcast 255.255.255.255
    inet6 fe80::280:c8ff:feca:4e70 prefixlen 64 scopeid 0x20<link>
    ether 00:80:c8:ca:4e:70 txqueuelen 30 (Ethernet)
    RX packets 2597228318 bytes 1822385117704 (1.6 TiB)
    RX errors 4071 dropped 0 overruns 0 frame 0
    TX packets 1095332976 bytes 481607698117 (448.5 GiB)
    TX errors 37661 dropped 0 overruns 55 carrier 37607 collisions 0

    lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
    inet 127.0.0.1 netmask 255.0.0.0
    inet6 ::1 prefixlen 128 scopeid 0x10<host>
    loop txqueuelen 1000 (Local Loopback)
    RX packets 23940427 bytes 5358498354 (4.9 GiB)
    RX errors 0 dropped 69 overruns 0 frame 0
    TX packets 23940427 bytes 5358498354 (4.9 GiB)
    TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0


    Deleting/disabling the interface isn't an option (at least not a good one), as it would require manual intervention whenever that network comes and goes. I will proceed with the other provided workaround when I get home.
  • Accepted Answer

    Wednesday, February 19 2020, 05:45 PM - #Permalink
    Researching - please can you also give the output of "ip a"?

    I'll push an update tonight to undo the implementation.
  • Accepted Answer

    Wednesday, February 19 2020, 06:29 PM - #Permalink
    Nick Howitt wrote:

    Researching - please can you also give the output of "ip a"?

    I'll push an update tonight to undo the implementation.


    ip a:
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
    valid_lft forever preferred_lft forever
    2: enp4s4: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast state DOWN group default qlen 1000
    link/ether 00:80:c8:ca:4e:6d brd ff:ff:ff:ff:ff:ff
    3: enp4s5: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 00:80:c8:ca:4e:6e brd ff:ff:ff:ff:ff:ff
    4: enp4s6: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 00:80:c8:ca:4e:6f brd ff:ff:ff:ff:ff:ff
    5: enp4s7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 30
    link/ether 00:80:c8:ca:4e:70 brd ff:ff:ff:ff:ff:ff
    inet (redacted)/21 brd 255.255.255.255 scope global dynamic enp4s7
    valid_lft 46368sec preferred_lft 46368sec
    inet6 fe80::280:c8ff:feca:4e70/64 scope link
    valid_lft forever preferred_lft forever
    6: enp0s25: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:27:0e:15:0a:53 brd ff:ff:ff:ff:ff:ff
    inet 172.27.72.127/24 brd 172.27.72.255 scope global enp0s25
    valid_lft forever preferred_lft forever
    inet6 fe80::227:eff:fe15:a53/64 scope link
    valid_lft forever preferred_lft forever
    7: vboxnet0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 0a:00:27:00:00:00 brd ff:ff:ff:ff:ff:ff
    8: enp0s25.6@enp0s25: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:27:0e:15:0a:53 brd ff:ff:ff:ff:ff:ff
    inet 10.171.42.127/24 brd 10.171.42.255 scope global enp0s25.6
    valid_lft forever preferred_lft forever
    inet6 fe80::227:eff:fe15:a53/64 scope link
    valid_lft forever preferred_lft forever
    10: ip_vti0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1000
    link/ipip 0.0.0.0 brd 0.0.0.0
    183: enp0s25.9@enp0s25: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:27:0e:15:0a:53 brd ff:ff:ff:ff:ff:ff
    inet 192.168.9.1/25 brd 192.168.9.127 scope global enp0s25.9
    valid_lft forever preferred_lft forever
    inet6 fe80::227:eff:fe15:a53/64 scope link
    valid_lft forever preferred_lft forever
    227: enp0s25.4091@enp0s25: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:27:0e:15:0a:53 brd ff:ff:ff:ff:ff:ff
    inet 10.13.47.17/29 brd 10.13.47.23 scope global enp0s25.4091
    valid_lft forever preferred_lft forever
    inet6 fe80::227:eff:fe15:a53/64 scope link
    valid_lft forever preferred_lft forever
    229: enp0s25.4092@enp0s25: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:27:0e:15:0a:53 brd ff:ff:ff:ff:ff:ff
    inet 172.5.16.65/30 brd 172.5.16.67 scope global enp0s25.4092
    valid_lft forever preferred_lft forever
    inet6 fe80::227:eff:fe15:a53/64 scope link
    valid_lft forever preferred_lft forever
    230: enp0s25.4093@enp0s25: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:27:0e:15:0a:53 brd ff:ff:ff:ff:ff:ff
    inet 10.172.66.1/24 brd 10.172.66.255 scope global enp0s25.4093
    valid_lft forever preferred_lft forever
    inet6 fe80::227:eff:fe15:a53/64 scope link
    valid_lft forever preferred_lft forever
    231: enp0s25.4094@enp0s25: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:27:0e:15:0a:53 brd ff:ff:ff:ff:ff:ff
    inet 172.10.52.1/24 brd 172.10.52.255 scope global enp0s25.4094
    valid_lft forever preferred_lft forever
    inet6 fe80::227:eff:fe15:a53/64 scope link
    valid_lft forever preferred_lft forever
  • Accepted Answer

    Wednesday, February 19 2020, 09:09 PM - #Permalink
    I've looked into the problem you're trying to solve and it looks hairy.

    As long as you're using bind-interfaces, if you specify an interface that is currently unplugged, dnsmasq won't start and you get today's problem. If you omit interfaces that aren't up, it won't bind to them when they come up. I feel like this should be dnsmasq's problem to fix (why crash and burn when told to use an interface that's temporarily unplugged?), and I will be very interested in seeing your solution.
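
    For what it's worth, dnsmasq itself has a bind-dynamic option that binds to interface addresses as they come and go, rather than refusing to start when a named interface is down the way bind-interfaces does. A sketch of what that might look like in dnsmasq.conf (interface names are examples from this thread; whether ClearOS can adopt this is an open question):

```
# Bind to addresses dynamically as interfaces appear/disappear,
# instead of failing at startup on a downed interface.
bind-dynamic
interface=enp0s25
interface=enp4s4
```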

    I generally avoid being a guinea pig but this feels like a very low-risk experiment, not to mention I've been affected by the last two problems anyway... so when you've got something worked up that you want tried, I have two systems I can try it on, both rather complex but both quite different from each other.


    Nick Howitt wrote:

    I've pushed a temporary fix to disable it and it is synchronising to the repos. [...] But most of all I need testers when I put a fix into updates-testing.
  • Accepted Answer

    Wednesday, February 19 2020, 09:34 PM - #Permalink
    I'm stuck on this one for the moment. There may not be a fix tomorrow. It is getting hairier, as you've seen; I've now observed the same. I need to find some other sort of trick. The alternative is to step backwards and add except-interfaces for docker0 in app-docker, but I don't know what to do about virbr (from KVM), and there are other interfaces I want to exclude (if you see my write-up on running an AD Domain Controller in the KB, I don't want it to bind to virtual IPs).
  • Accepted Answer

    Thursday, February 20 2020, 12:49 PM - #Permalink
    I am completely pulling what we were trying to do. There is a further update in updates-testing. Please install and test:
    yum update app-dns --disablerepo=* --enablerepo=clearos-updates-testing
    You are looking for app-dns-2.7.12-1.v7.

    We may have to go down a different route and explicitly disallow interfaces like virbr*, docker* and br-* instead of explicitly allowing interfaces.
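
    A sketch of what that exclusion route might look like as a dnsmasq config fragment (dnsmasq accepts a trailing '*' wildcard in --interface/--except-interface; the exact file location is an assumption):

```
# Listen everywhere except virtual/container interfaces.
except-interface=virbr*
except-interface=docker*
except-interface=br-*
```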
  • Accepted Answer

    Friday, February 21 2020, 12:42 PM - #Permalink
    Nick Howitt wrote:

    I am pulling completely what we were trying to do. There is a further update in updates testing. Please install and test: [...] You are looking for app-dns-2.7.12-1.v7.


    It hasn't angered anything at home. I'll know about the other location later on today.