How to configure ActiveMQ to support a lot of concurrent connections

Posted by dstanley on January 19, 2010

In general, the out-of-the-box ActiveMQ configuration works very well for the majority of use cases. One area where you will want to tweak things is supporting large numbers of concurrent connections.

The good news is that ActiveMQ is well up to the task, and with the tweaks below you should be well on your way to supporting thousands of concurrent connections. Without further ado, here’s what you need to do:

1) In your broker’s activemq.xml, enable the nio transport.

<!-- The transport connectors ActiveMQ will listen on -->
   <transportConnectors>
      <!-- use the tcp port for network connectors only -->
      <transportConnector name="openwire" uri="tcp://localhost:61616"/>
      <!-- use the nio port for producers/consumers -->
      <transportConnector name="openwire nio" uri="nio://localhost:62828?useQueueForAccept=false"/>
   </transportConnectors>

2) Again in your broker’s activemq.xml, configure a destinationPolicy with optimizedDispatch=true, for example:

     <amq:policyEntry queue=">" optimizedDispatch="true"
        memoryLimit="128 mb">
        <amq:dispatchPolicy>
          <amq:strictOrderDispatchPolicy />
        </amq:dispatchPolicy>
        <amq:subscriptionRecoveryPolicy>
          <amq:timedSubscriptionRecoveryPolicy recoverDuration="360000" />
        </amq:subscriptionRecoveryPolicy>
     </amq:policyEntry>

For more info on optimizedDispatch, see Hiram’s article here.

3) At the OS level, increase the number of file descriptors available to the broker:

>ulimit -n 4096
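The ulimit call above only raises the limit for the current shell session. A quick way to inspect the current limits, plus a sketch of making the change permanent (the limits.conf entries assume a PAM-based Linux system, and the "activemq" user name is hypothetical):

```shell
# Show the current soft and hard limits on open file descriptors:
ulimit -Sn
ulimit -Hn

# `ulimit -n 4096` cannot exceed the hard limit and is lost when the shell
# exits. To make the new limit persistent, add entries like these to
# /etc/security/limits.conf (the "activemq" user is hypothetical):
#
#   activemq  soft  nofile  4096
#   activemq  hard  nofile  8192
#
# and verify after re-login with `ulimit -n`.
```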

4) Use a broker with a version >

5) Tweak your TCP/IP layer to support the connection load

On some operating systems you may need to tune the TCP/IP stack to handle the incoming connection load. Below are the settings I’ve found work well for Linux and Solaris.

Linux tcp tuning settings:

sudo /sbin/sysctl -w net.core.netdev_max_backlog=3000
sudo /sbin/sysctl -w net.ipv4.tcp_fin_timeout=15
sudo /sbin/sysctl -w net.core.somaxconn=3000

Note: net.core.netdev_max_backlog controls the size of the incoming packet queue awaiting upper-layer (Java) processing.
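These sysctl changes are lost on reboot. To persist them, on most Linux distributions you can add the equivalent lines to /etc/sysctl.conf (a sketch; reload afterwards with `sysctl -p`):

```text
# /etc/sysctl.conf
net.core.netdev_max_backlog = 3000
net.ipv4.tcp_fin_timeout = 15
net.core.somaxconn = 3000
```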

Solaris tcp tuning settings:

# ndd -set /dev/tcp tcp_fin_wait_2_flush_interval 67500
# ndd -set /dev/tcp tcp_keepalive_interval 30000
# ndd -set /dev/tcp tcp_conn_req_max_q 8000

Lastly, when testing I’ve found the ActiveMQ JMX MBeans and jconsole indispensable for monitoring thread counts and the number of concurrent connections. If all is configured properly, every 16 incoming connections should result in one new thread in the broker VM.
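As a rough command-line complement to jconsole, you can count transport threads in a broker thread dump. This is only a sketch: the dump below is fabricated so the pipeline can be run as-is (in practice you would use `jstack <broker-pid>`), and the exact thread-name pattern may differ between ActiveMQ versions.

```shell
# In practice: jstack <broker-pid> > /tmp/amq-dump.txt
# The sample dump below is fabricated for illustration.
cat > /tmp/amq-dump.txt <<'EOF'
"ActiveMQ Transport: tcp:///10.0.0.1:51234" daemon prio=5 tid=0x01 runnable
"ActiveMQ Transport: tcp:///10.0.0.2:51235" daemon prio=5 tid=0x02 runnable
"ActiveMQ BrokerService[localhost] Task" daemon prio=5 tid=0x03 waiting
EOF

# Count the connection-handling transport threads in the dump
grep -c '"ActiveMQ Transport:' /tmp/amq-dump.txt
```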

These minor changes should allow you to support several thousand concurrent connections on conventional hardware. If you happen to try the settings above let me know how it goes.





  1. Maarten Thu, 21 Jan 2010 07:43:56 EST

    Hi Dave,
    Thanks for the explanation, very helpful.
    One question though, about the nio connector line: what is the point of “useQueueForAccept” (couldn’t find anything on it), and do you absolutely need to specify a name for the connector?


  2. dstanley Thu, 21 Jan 2010 12:22:53 EST

    Hi Maarten,
    Glad it helped.

    The useQueueForAccept option controls whether an incoming socket is handled immediately or added to a work queue where it’s handled by a separate worker. In the case where you have a large spike of incoming concurrent connections (e.g. a server went down and clients are trying to reconnect), I found it worked better to handle each connection as it comes. Inevitably, if the load is high enough some connection attempts will time out, but if you are using the ActiveMQ failover transport (which you should), you can make this transparent to clients. In my testing I definitely got some connection setup failures when trying to connect >500 clients at the same time.

    On the name for the connector: the name is optional. ActiveMQ will default it internally to the connector’s uri if it’s not set. The name is handy for debugging as I think it’s used when logging.


  3. Oleg Kiorsak Fri, 12 Feb 2010 00:59:25 EST

    Hi Dave

    I am a bit confused regarding having both “openwire” and “openwire nio”
    and also regarding having another port “62828” (??)

    do we really need both?

    And also – what connection url should the clients use – “nio://localhost:62828?useQueueForAccept=false”?

    Please elaborate a bit more if possible

    Thank you!


    And also, we are connecting from thousands of .NET clients using “NMS”… and there is no protocol option of “nio:” in NMS…
    is it ok to use just “tcp:” then?
    As far as I understand, NIO is still TCP, and from the client’s side it should be transparent whether it’s handled using nio or “old io” on the server side?
    Am I right, or missing something important?

    please help!!
    thank you!


  4. Oleg Kiorsak Fri, 12 Feb 2010 01:20:14 EST

    I just wanted to try to express myself more clearly:

    I am confused between “use tcp port for network connectors only”
    and “use nio port for producers/consumers”

    isn’t producers/consumers also a network connector?

    or are there some other kinds of “network connectors”?

    Thank you!


  5. dstanley Fri, 12 Feb 2010 14:43:34 EST

    > I am confused between “use tcp port for network connectors only”
    > and “use nio port for producers/consumers”

    So on this I meant: have your network connector connect to the standard tcp transport rather than the nio transport.

    > isn’t producers/consumers also a network connector?
    No, in ActiveMQ land a network connector is a broker-to-broker connection. Network connectors connect two or more broker instances into a network of brokers. The subscription information is shared among all brokers in the network, so if you have a producer on broker A and a consumer on broker B, the messages will be transparently transferred from broker A to the consumer on broker B over the network connector.

  6. dstanley Fri, 12 Feb 2010 14:48:05 EST

    >do we really need both?

    No, you can just enable the nio transport and it will handle all incoming connections using nio.

    >And also – what connection url should the clients use – “nio://localhost:62828?useQueueForAccept=false”

    Just the regular tcp:// transport url for the producers and consumers. The nio setting configures how the incoming connection is handled on the broker side.
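    In other words, a client connection URI against the nio connector from the article would still use the tcp scheme, e.g. with the failover transport (host and port taken from the example config above):

    ```text
    failover:(tcp://localhost:62828)
    ```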

    > And also, we are connecting from thousands of .NET clients using “NMS”… and there is no protocol option of “nio:” in NMS…
    > is it ok to use just “tcp:” then?

    Yes absolutely.

    >as far as I understand NIO is still TCP and from the client’s side it should be transparent whether its handled using nio or “old io” on the server side?

    Yep, that’s right. It’s not a client-side concern; this is all on the server side. We are trying to minimize the number of broker threads created when you have a large incoming connection load.

    Hope that helps

  7. Oleg Kiorsak Sun, 14 Feb 2010 19:40:16 EST

    Thanks a lot Dave!

    It is all very clear now.

    One last question:

    >4) Use a broker with a version >

    I downloaded just a month ago and looked again now, and also in the console
    when it starts, and I don’t see “” anywhere… just “5.3.0”…

    where should I look to confirm that it is “….0.3” (assuming you actually meant that)?

    Also, I guess somewhat important – what Java version is recommended (we still use 1.5)

    The reason I want to be very pedantic is that I am following the instructions from here and configured everything as suggested, but I still see about 2 threads per connection as opposed to 1 thread per 16 connections…

    so does not seem that we got it quite right yet…

    and also I am quite consistently getting a rather scary java crash
    – have you ever seen anything like that?

    # An unexpected error has been detected by HotSpot Virtual Machine:
    # SIGBUS (0xa) at pc=0xffffffff7ee00c10, pid=20145, tid=3611
    # Java VM: Java HotSpot(TM) 64-Bit Server VM (1.5.0_16-b02 mixed mode)
    # Problematic frame:
    # C []
    # An error report file with more information is saved as hs_err_pid20145.log
    # If you would like to submit a bug report, please visit:

    Please help!

    Thank you!

  8. dstanley Tue, 16 Feb 2010 08:55:01 EST

    Hi Oleg,
    I used the fuse version for my testing, so you could try that. JDK 1.5 should be fine.

    In terms of the thread counts, are you using queues or topics? I just realized I only set a destination policy on the queue, but if you’re using topics you may need to also set up a destination policy on the topic. Remember to make sure you enabled optimizedDispatch.

    On the crash, I would need to see the stack from the hs_err_pid log, but you could also try mailing the activemq user list for that one. The folks on the list are usually very responsive.

    /Dave

  9. Oleg Kiorsak Wed, 17 Feb 2010 00:01:00 EST

    Thanks Dave!

    We are using queues…. not topics…

    Yes I think I will start posting a whole batch of questions to the mailing list forum very soon…

    Now, interestingly, when I reverted the settings (optimizedDispatch etc.)
    back to the “vanilla” out-of-the-box configuration, the crash error went away…

    and here is what is pleasantly surprising.. I am actually able to have and sustain 5000 clients (each a producer and a consumer) with just plain TCP (not NIO) and those plain original settings (just with RAM values increased all over the place)

    Are there any disadvantages to having it this way?
    Or should we actually try to get the recommendations in this post working…

    Also, I should mention that so far we have tried to have one IN queue and one OUT queue and have all 5000 clients use them… and do a selector on “WHERE CustomeHeaderCLIENTID=’BLAHBLAH'”… but it seems that the consumers then become a serious bottleneck when the queue has a backlog (10-20K messages queued up)

    Am I right in my understanding that this is to be expected, because the queue is not indexed in any way by the custom header we use for the selector, and therefore each consumer session does a FULL SCAN of sorts and potentially locks and blocks, and when we have 5000 of these it becomes a bottleneck…

    so we are going to change it now so that we have a dedicated queue for each consumer…

    (and as far as I understand ActiveMQ now _can_ have 5000 queues easily)?

    Are we on a right path here?

    i.e. would you recommend one huge queue with 5000 clients doing a “selector” on it, or 5000 dedicated queues instead…

    are there some “golden rules” regarding these kinds of considerations?

    Thank you!

    BTW, we are seriously contemplating getting help from FUSE, as we are a big PROGRESS customer already (for SonicMQ)… at least as a consulting/training engagement (btw, by any chance do you or other ActiveMQ committers from FUSE feel like traveling to Australia? – it would be great if we could get one of the actual engineers to interrogate here 😉

  10. dstanley Wed, 17 Feb 2010 10:56:58 EST

    I actually talked to Gary Tully about the behaviour you’re seeing, and it turns out there have been a number of recent upgrades to the way the dispatch threading works within the broker, which explains why you can get so many connections now with the OOTB config.

    The queue dispatch thread used to be per destination, but recent 5.3.x versions use a shared thread pool factory and respect the dedicated task runner configuration.

    In terms of the selector vs. partitioned-destination question, the latter (partitioning) will be better, as the selectors are limited by the pagedInMessages window.

    On the support question, that’s very exciting. I would say use this form when you’re ready to go down that road :-)


  11. Mahesh Sun, 27 Jun 2010 11:23:39 EST

    Hi Oleg Kiorsak,

    I have a requirement where there are around 800 topic consumers and one publisher. Can you please share your configuration? I am most interested in the memory settings you used to handle 5000 clients.

    For my case, messages are getting stuck for some subscribers.
    At least post some of the important tweaks you made for your scenario, and I will adapt them for topics.

    Thanks & Regards,


  13. Raúl Kripalani Fri, 03 Aug 2012 07:44:34 EST

    Dave, just a word of warning.

    There’s an error inside the article you reference (“Understanding the Threads Allocated in ActiveMQ”).

    The second-to-last sequence diagram and the one before it are reversed: the one that shows optimizedDispatch=true corresponds to consumer.dispatchAsync=false, and vice versa.

    Leads to further confusion on an already complex topic 😉