
BLU Discuss list archive



SOLVED : Linux kernel tuning TCP/Memcached



For those interested, here are my findings and the solution.
 
I used JMeter to generate load on the servers, and my target was to be able
to run 96 concurrent users for 2048 requests while still staying above
700 requests per second. While this really only gives me about 60.5MM
requests a day, the previous setups were producing only a fraction of
that, so it seemed like a reasonable target. In short, I achieved it, but
it required changes to just about every part of the chain, from Apache to
PHP to memcached and Tomcat.
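
(For the record, the arithmetic behind that figure: 700 requests/second x
86,400 seconds/day = 60,480,000 requests, i.e. roughly 60.5MM.)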

In my initial tests, things looked excellent when I used tcp_tw_recycle,
but over time and under heavy load I noticed problems with empty result
pages every so often. Add to that the fact that tcp_tw_recycle breaks the
TCP protocol (notably for clients coming in through NAT), and I went
looking for a better solution. The biggest problem was that I started to
run out of ephemeral ports on the server. The rate at which I was hitting
the server, and the fact that each request was taking less than 2 seconds
from start to finish, meant that the default settings for the TCP stack
left me with a pile of connections in TIME_WAIT, each holding an ephemeral
port, and they were being released far more slowly than the test run was
spawning new ones. The end result was that I kept seeing "out of process,
cannot fork()" messages.
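
If you want to watch the same symptom on your own box, something along
these lines will show the TIME_WAIT pile-up and the ephemeral port range
the kernel has to play with (plain netstat and /proc, nothing exotic):

  # count sockets sitting in TIME_WAIT
  netstat -tan | grep TIME_WAIT | wc -l

  # the ephemeral port range available for outgoing connections
  cat /proc/sys/net/ipv4/ip_local_port_range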

so.....
Here is what I ended up with.

In Apache I set MaxRequestsPerChild to 4000, and changed the PHP code to
use persistent connections (both for the connections to memcached and for
the cURL connections to an internal Tomcat server). What this did was tie
each connection to the Apache child process rather than to the PHP script
itself, so that the one connection stays alive for all 4000 requests
served by that Apache child. This handled the connections to our internal
servers better, but it did nothing for the httpd threads themselves, nor
for the time it took to hit the limit. I then set tcp_fin_timeout to half
of its default value (30 instead of 60) and turned tcp_tw_reuse on.
Bingo. I hit my target.
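
For reference, the kernel side of that boils down to two sysctl calls.
Here is a quick sketch (the /etc/sysctl.conf and httpd.conf locations are
the stock CentOS/Apache ones; adjust for your own layout):

  # leave tcp_tw_recycle at its default of 0 -- it was the source of the empty pages
  /sbin/sysctl -w net.ipv4.tcp_tw_reuse=1
  /sbin/sysctl -w net.ipv4.tcp_fin_timeout=30

  # to make the settings survive a reboot, the same keys go in /etc/sysctl.conf:
  #   net.ipv4.tcp_tw_reuse = 1
  #   net.ipv4.tcp_fin_timeout = 30
  #
  # and the Apache side, in httpd.conf:
  #   MaxRequestsPerChild 4000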

I then tried removing each option in turn and looking at the result. If
I turned tcp_tw_reuse off, the test failed. If I set tcp_fin_timeout back
to its default (60), the test failed.

With all three options in play, my tests passed. The server did start to
grind, but it did not fail.


Richard


On Mon, 2010-08-02 at 18:36 -0400, theBlueSage wrote:

> Hi folks,
> 
> So I have been pushing the limits of my web servers and came across a
> setting that made everything fabulous! :) However, I can't understand why
> it would be off by default in Linux, and therefore wondered if anyone
> knew of a reason _not_ to use the setting.
> 
> Here is the story ...
> 
> I have a server running CentOS with Apache/PHP and memcached. When pushed
> under load (stress testing with JMeter) I could reliably break my setup.
> I have machines capable of handling 100+ million requests a day, but I am
> getting in the area of 3% of that. To cut a long story short, it appears
> that PHP was opening a separate TCP connection to the memcached instance
> for each request, and they started to stack up, i.e. when PHP had finished
> with them it left them in a FIN_WAIT state. When I pushed towards
> 300,000 requests in a minute, netstat showed a ridiculous number of
> connections to that port in FIN_WAIT (memcached here is running on port 11411):
> 
> 
> > [root@api01 ~]# netstat -tulnap |grep 11411 |wc -l
> > 28329
> 
> 
> Eventually this causes the memcached connections to fail: Apache gets
> upset, PHP gets nervous, memcached leaves the room, and I get zero
> results. After a lot of digging/reading I found the following 3 settings
> being suggested by a Linux tuning website:
> 
> /sbin/sysctl -w net.ipv4.tcp_tw_recycle=1 
> /sbin/sysctl -w net.ipv4.tcp_tw_reuse=1
> /sbin/sysctl -w net.ipv4.tcp_fin_timeout=10
> 
> Now, I am familiar with tcp_fin_timeout, but it had no effect on the
> performance. Nor did 'reuse'. However, 'recycle' was incredible. The
> performance rocked: I hit the 300,000 target without a problem, and here
> is my netstat result:
> 
> 
> > [root@api01 ipv4]# netstat -tulnap |grep 11411 |wc -l
> > 11
> > 
> 
> It never went higher than 11.
> 
> 
> I have since reset tcp_fin_timeout to something more reasonable (30)
> and also discovered that the 'reuse' option did not do much of anything.
> It is all down to 'recycle'.
> 
> So .... does anyone have any experience with this setting under load?
> Does it have any effect (good or bad) on Apache or NFS? I have read 'the
> interwebs' but wondered if there were any more first-hand explanations of
> why it might or might not be a good idea.
> 
> specs and goods on my server :
> 
> Server    : 4 x quad-core Intels, 26G RAM
> Kernel    : 2.6.18-164
> CentOS    : 5.4
> Apache    : 2.2.14
> PHP       : 5.2.11
> Memcached : 1.4.4
> 
> 
> Thanks for reading this; any suggestions welcome :)
> 
> Richard
>  
> 
> 
> 





