
Optimizing Performance for Proxy and Content Filter

This guide covers configuration tasks designed to help optimize the flow of information through the content filter. The default configuration of the content filter allows ClearOS to run on very modest hardware without overwhelming it, and is tuned for networks with fewer than 100 users. If your user count is higher, you are experiencing slowness, and you have robust hardware (server grade, with more than 1 GB of RAM), then this guide is for you.

The Content Filter is a marriage of multiple technologies including but not limited to:

  • Core Operating System
  • Proxy services
  • Content listing/classification services
  • Firewall services
  • Anti-virus services
  • Web services
  • Directory services

Since many technologies are involved, slowness in any single subsystem can make the whole experience feel slow. This guide addresses some of the most common causes.

Capacity Planning

The largest single consumer of resources when using the Content Filter is the filter engine itself, DansGuardian. Each DansGuardian thread consumes about 5 MB of RAM, so with 2 GB of RAM the most simultaneous users you could support would be about 400 before memory becomes the bottleneck. Note that this means simultaneous users; many networks do not have that many users actively hitting the internet all at once.
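The arithmetic above can be sketched in shell. The 2 GB figure is illustrative; substitute your own memory size:

```shell
# Rough capacity estimate: each DansGuardian child uses ~5 MB of RAM.
RAM_MB=2048        # illustrative: a 2 GB system
PER_CHILD_MB=5     # approximate RAM per DansGuardian child
MAX_SIMULTANEOUS=$((RAM_MB / PER_CHILD_MB))
echo "$MAX_SIMULTANEOUS"   # roughly 400 simultaneous users
```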

Optimizing DansGuardian

The performance of DansGuardian can be tuned in the /etc/dansguardian-av/dansguardian.conf file. You will need to edit this file and change some of the parameters that control how DansGuardian runs.


There are several parameters that can be manipulated to improve performance. Among them are:

  • maxchildren
  • minchildren
  • minsparechildren
  • preforkchildren
  • maxsparechildren
  • maxagechildren


maxchildren

This parameter, more than any other, can change the performance of DansGuardian for better or worse. It sets the maximum number of child processes that DansGuardian can create; each web request is handled by a child process. By default there are up to 120 of these child processes on ClearOS, which is more than enough for most users. At roughly 5 MB each, this means about 600 MB of RAM can be used to service web requests. If the system does not physically have this much RAM, it will use swap space, which carries a performance cost.

There is a hard ceiling on the number of processes that can be defined here: DansGuardian will NOT create more than 1000 processes. This limit is hard-coded into the software and can only be changed with a patch. Please contact ClearCenter technical support if you require this.

Another potential workaround for the 1000 child process ceiling is to go distributed. This can be done easily if you are auto-distributing proxy configurations using WPAD.


minchildren

This parameter specifies the minimum number of child processes that should be running at any time, even when the system is idle. Raising it increases the load on the server even when idle, but having sufficient children waiting in the pool for requests can be important.


minsparechildren

This parameter defines the number of spare child processes that should be kept in reserve. If this reserve is drawn down, the system will prefork an additional set of children as defined below.


preforkchildren

When the system needs to use the last remaining spare children, this parameter defines how many new children to spin up in a block.


maxsparechildren

This parameter defines how many spare children can sit idle before unused children are eliminated.


maxagechildren

Child processes, like any process, can succumb to stale variables, memory usage quirks, and other process-related issues. This parameter defines the maximum number of requests a child process will handle before exiting; the default is 1000 requests. Increasing it can reduce the overhead of spinning up new processes, but it can also create a performance issue if processes become bloated for any reason.

Example change

Try these modifications to your /etc/dansguardian-av/dansguardian.conf file:

#maxchildren = 120
maxchildren = 180

#minchildren = 8
minchildren = 32

#minsparechildren = 4
minsparechildren = 8

#preforkchildren = 6
preforkchildren = 10

#maxsparechildren = 32
maxsparechildren = 64

#maxagechildren = 1000
maxagechildren = 10000
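If you prefer to script the edits, a sed sketch like the following can apply them. It is shown here against a scratch copy rather than the live file; on a real system you would point it at /etc/dansguardian-av/dansguardian.conf (after backing it up):

```shell
# Demonstration on a scratch copy with two of the default settings;
# adapt the path and add the remaining substitutions as needed.
conf=$(mktemp)
printf 'maxchildren = 120\nminchildren = 8\n' > "$conf"
sed -i -e 's/^maxchildren = 120/maxchildren = 180/' \
       -e 's/^minchildren = 8/minchildren = 32/' "$conf"
result=$(cat "$conf")
echo "$result"
rm -f "$conf"
```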

Validating Results

DansGuardian will use between 3 and 6 MB of RAM per child process. Using an average of 4.5 MB, you can plan capacity by ensuring that your system has enough RAM not only for DansGuardian's processes but also for the other services running on the machine. A system with 1000 simultaneous users would need about 4.5 GB of RAM for DansGuardian alone!

To estimate the number of processes the Content Filter is running at any given time, run the following:

ps aux | grep dansguardian-av | wc -l

The resulting number (minus 1 for the grep command itself) is the approximate number of DansGuardian processes running on your system at that moment.
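A common refinement (an alternative, not part of the original procedure) avoids counting the grep process itself by bracketing the first character of the pattern, so no manual subtraction is needed:

```shell
# The [d] trick: the grep command line contains the literal string
# "[d]ansguardian-av", which the pattern [d]ansguardian-av does not
# match, so grep never counts itself. On a system without DansGuardian
# this prints 0.
count=$(ps aux | grep '[d]ansguardian-av' | wc -l)
echo "$count"
```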

Another important factor is system load. If your system load is consistently 1.00 or higher per processor (for example, 4.00 on a four-processor system), then you need to add more resources, such as CPU, RAM, or faster disks, to overcome the condition causing the slowdown.
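To check the current load directly, read /proc/loadavg (the first three fields are the 1-, 5-, and 15-minute load averages) and compare the values against your processor count:

```shell
# 1-, 5- and 15-minute load averages, then the processor count.
cat /proc/loadavg
nproc
```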

To find out how many processors your system has, run the following:

cat /proc/cpuinfo | grep processor | wc -l


cat /proc/cpuinfo | grep processor

Optimizing Squid

ClearOS uses the Squid proxy server to handle caching of data used by the content filter. Performance issues with Squid can affect the content filter experience. The rule of thumb here: if your gateway is slow with the content filter off, then nothing you do to tune the content filter will make it perform any better.

For the most part, Squid is quite dynamic in handling varying loads but there are some key things to consider.

The attribute that affects Squid performance the most is the cache itself. If the cache is located on a slow device, then the delivery of all content will be limited by that device. For this reason, it is advised that you put your cache on the fastest read/write medium you can. If the disk is busy with other read/write operations, that too can slow web browsing.

Another thing to consider: the cache is particularly useful when bandwidth is slow or expensive, because it prevents repetitive downloads of the same data. But if your network speed exceeds the I/O performance of your cache media, the cache itself will slow the experience down. In such cases, some people run Squid cacheless.


This section shows how to turn off caching on your proxy server. This is useful if the performance of your cache disk is outstripped by the bandwidth of your ISP. Typically you only need this if you see slowness in your proxy server because the cache cannot write to disk fast enough.

Do I need to do this?

To test your disk speed, ensure that you have sufficient space in your cache directory (/var/spool/squid). You will need just over 10 GB of free space for this test (or you can adjust the test to use a smaller amount). Check with:

df -h

It is best to perform this test when the usage is low. Perform the following write test:

dd if=/dev/zero of=/var/spool/squid/10GB bs=1024 count=10240000 && rm -rf /var/spool/squid/10GB

You should get results similar to this:

10240000+0 records in
10240000+0 records out
10485760000 bytes (10 GB) copied, 84.292 s, 124 MB/s

The results show the write speed of data to that directory. If the internet speed at your location outstrips the cache's ability to write data, you may need to set up a no-cache option. As a rule of thumb, your cache disk's throughput (in bits) should exceed your bandwidth by a factor of 16:1. For example, if your disk speed is 100 MB/s (800 Mb/s), you can perform well with a 50 Mb/s pipe; if you have a 100 Mb/s pipe, your disk performance must exceed 200 MB/s.

For example, my drive performs at 124 MB/s, which means it should be sufficient for a 62 Mb/s ISP download pipe.
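That sizing rule can be checked with quick shell arithmetic, using the 124 MB/s measurement from the example above:

```shell
DISK_MB_PER_S=124                  # dd write result in MB/s
DISK_MBITS=$((DISK_MB_PER_S * 8))  # convert to megabits: 992 Mb/s
MAX_PIPE=$((DISK_MBITS / 16))      # apply the 16:1 rule of thumb
echo "$MAX_PIPE"                   # 62 Mb/s supportable pipe
```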

Please note that these are approximations designed for simple planning.

Speed things up

There are multiple ways of speeding up your cache. You could use a faster drive, such as an SSD, or use multiple drives (such as RAID 0 or RAID 10).

Caching is really designed to conserve bandwidth, but in situations where your bandwidth exceeds your disk performance you can simply turn off the cache. This does NOT affect reports, so it can be a real win in high-bandwidth situations.

To remove the cache you need to change a setting in a file that ClearOS regularly rewrites. To get around this, set up your proxy the way you want it, then commit the change and make the file read-only to ClearOS.

Edit your /etc/squid/squid.conf file with your favorite command line editor (e.g. vi, or nano). Find the following section:

# Uncomment and adjust the following to add a disk cache directory.
cache_dir ufs /var/spool/squid 10240 16 256

Add this line:

no_cache deny all

So that it looks like this:

# Uncomment and adjust the following to add a disk cache directory.
cache_dir ufs /var/spool/squid 10240 16 256
no_cache deny all

Restart the squid service

service squid restart

You should now be running in cacheless mode. To keep ClearOS from rewriting your change, set the immutable flag on the file with chattr +i /etc/squid/squid.conf. If you ever want to unlock squid.conf for editing again, remove the immutable flag with chattr -i /etc/squid/squid.conf.

Add more subnets

The Squid Web Proxy uses the /etc/squid/squid_lans.conf file to control which networks are allowed through the proxy. This file is dynamically generated from the LAN subnets of your network, as defined in the EXTIF parameter of the /etc/clearos/network.conf file, as well as from the EXTRA_LANS parameter. To add additional LANs, add the CIDR subnets to the EXTRA_LANS parameter in /etc/clearos/network.conf.
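As a hypothetical illustration (these subnets are placeholders, and the space-separated CIDR format is an assumption; check your own network.conf for the exact syntax), the entry in /etc/clearos/network.conf might look like this:

```shell
# /etc/clearos/network.conf (excerpt) -- placeholder subnets
EXTRA_LANS=" "
```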


Afterwards, restart the squid service:

systemctl restart squid
content/en_us/kb_o_optimizing_performance_for_proxy_and_content_filter.txt · Last modified: 2019/09/06 08:29 by dloper