Frequently Asked Questions

Design Help

I've a rough idea of how ØMQ works, can you help me build my architecture?

This is what you pay experts for, and many of us working on ØMQ make our living like this. So why not budget a week or two of expert help into your project planning, and ask on the list for someone to come and help? If you're making an open source project, you will find the community more interested in giving their time for free.

The other option is to read the Guide, which has hundreds of worked examples and is now available as an O'Reilly book. Read it, work through the examples, and use them as the basis for your first experiments.

How do I make ØMQ do X, Y, or Z?

If you ask this question it usually means you've not read the Guide, or just skimmed it and not digested it. ØMQ is not a replacement for other messaging systems. It is a new way of designing distributed code. Really, the only way to do this profitably is to internalize how it works, which takes a few days at least. When you've digested it, you'll see your old problems in a new light.

OK, can you at least explain how the socket patterns work?

Each bind allows peers to connect, and the number of connections (on a single endpoint or across multiple endpoints) determines how the socket pattern behaves. PUSH and DEALER rotate outgoing messages across their connections. PUB multicasts to all subscribers. SUB, PULL, and DEALER fair-queue incoming messages from their connections. ROUTER fair-queues incoming messages and addresses each outgoing message to a specific peer. PAIR always talks to its single peer, if there is one.
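
As a minimal sketch of the rotation behaviour (assuming libzmq 3.2 and its C API; the port number is arbitrary), a single bound PUSH socket distributes outgoing messages round-robin across however many PULL peers have connected:

#include <zmq.h>

int main (void)
{
    void *ctx = zmq_ctx_new ();

    /* One bound PUSH socket; every PULL socket that connects becomes a peer. */
    void *push = zmq_socket (ctx, ZMQ_PUSH);
    zmq_bind (push, "tcp://*:5557");

    /* Outgoing messages are rotated (round-robin) across connected peers. */
    int i;
    for (i = 0; i < 10; i++)
        zmq_send (push, "work", 4, 0);

    zmq_close (push);
    zmq_ctx_destroy (ctx);
    return 0;
}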

Contributing

I want to send a patch, how do I do that?

See the contribution policy. We make this as simple as it possibly can be, and as long as you follow the rules, we'll accept your patch, no matter how surprising it might be.

What's the roadmap for ØMQ?

Just to keep collecting interesting problems and solving them one by one. Right now lots of people are thinking about security. Perhaps tomorrow it will be something else. In any case there's no guiding genius behind ØMQ, just a lot of very smart people solving very hard problems.

I've found a problem in ØMQ, should I file a bug report?

Sure, if you like. But honestly, there's a large chance it will just sit there unless you find a way to send us a patch. If you don't like the idea of opening up the code yourself, and the bug is critical, try asking on the zeromq-dev mailing list if anyone else wants to help you fix it. Offers of money will always make this discussion easier.

Why are there several repos (zeromq2-x, zeromq3-x, zeromq4-x)?

These repos are not for general use, but for contributors. We moved away from git branches because they are complex and fragile, and instead use a separate "release fork" repository for each major version. It may be unusual, but it has worked well.

Support

Where do I get paid support for ØMQ?

Email Pieter Hintjens at iMatix (ph@imatix.com) and explain what you need in terms of support (what languages, what platforms, what kind of service levels, and what kind of support). He'll either put you in touch with the right people or propose a package to suit your needs. iMatix brings together many of the experts who help build ØMQ.

License

Can I use ØMQ in my closed-source application?

Yes, ØMQ is licensed under the LGPL with a static linking exception. This license gives you the explicit right to link ØMQ with closed-source applications (as well as open source applications).

Can I use ØMQ in my commercial application?

Yes. You may use ØMQ in your commercial application as long as you publish any changes you make to the ØMQ source code itself. The license's static linking exception lets you redistribute the compiled library with your application under your own terms (not the LGPL).

Can I use pieces of the ØMQ code in my non-LGPL library or application?

If you include any ØMQ code in your closed source works, these become derived works of ØMQ and must be licensed under the LGPL. The LGPL does make an exception for header files.

Why does ØMQ use the LGPL and not a more liberal license like BSD?

We believe in community before software, and in our experience the LGPL is the right license to build the most successful community around this technology.

What about people or firms who will not or cannot use the LGPL?

They are free to choose any of the dozens of alternative products, or build their own.

Could we change the ØMQ license?

Yes, we could change to a later version of LGPL.

Who is the copyright owner of ØMQ? Maybe I can talk to him or her?

ØMQ is owned by each contributor. There are hundreds of owners. Feel free to track them down and ask them individually if they want to give you their work under a different license.

General

What version of libzmq should I use for a new project?

All new development should use 3.2. Any existing projects that rely on 2.2 (or earlier) should be migrated as time permits. While 2.2 is not going to be abandoned, the 3.2 series of releases is the new mainline of development.

How do I start the ØMQ daemon?

There are no services or daemons to start, unless you build them yourself. ØMQ is a library. Compile and link an example and run it. ØMQ applications speak to each other directly.
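
For example, this stand-alone program (a minimal sketch assuming libzmq 3.2; the port is arbitrary) is a complete "service" with nothing else running. A matching REQ client would simply connect to tcp://localhost:5555, send a request, and read the reply:

/* hello_server.c -- there is no broker or daemon; this process IS the service. */
#include <zmq.h>

int main (void)
{
    void *ctx = zmq_ctx_new ();
    void *rep = zmq_socket (ctx, ZMQ_REP);
    zmq_bind (rep, "tcp://*:5555");

    char buf [16];
    zmq_recv (rep, buf, sizeof buf, 0);   /* wait for a request */
    zmq_send (rep, "World", 5, 0);        /* send the reply     */

    zmq_close (rep);
    zmq_ctx_destroy (ctx);
    return 0;
}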

How do I use a regular BSD socket to communicate with a ØMQ socket?

ØMQ has its own wire-level protocol. In order for a regular socket to communicate with it, that regular socket would need to send messages conforming to that protocol. Additionally, please read the section in the guide detailing how ØMQ is not a neutral carrier for your data.

Can I use ØMQ to interact with normal sockets and, for example, be able to ping www.google.com from a ØMQ socket?

No. See prior answer about BSD sockets above.

Does ØMQ support IP multicast?

Yes, ØMQ includes Pragmatic General Multicast (RFC 3208) support using the excellent OpenPGM implementation.

Does ØMQ store messages on disk?

No. If you want to do this, you can build your own storage queues fairly simply.

Does ØMQ include APIs for serializing data to/from the wire representation?

No. This design decision adheres to the UNIX philosophy of "do one thing and do it well". In the case of ØMQ, that one thing is moving messages, not marshaling data to/from binary representations.

Some middleware products do provide their own serialization API. We believe that doing so leads to bloated wire-level specifications like CORBA (1055 pages). Instead, we've opted for the simplest wire formats possible, which ensure easy interoperability and efficiency, and reduce code (and bug) bloat.

If you wish to use a serialization library, there are plenty of them out there.

Note that serialization implementations might not be as performant as you might expect. You may need to benchmark your workloads with several serialization formats and libraries in order to understand performance and which format/implementation is best for your use case (ease of development must also be considered).

I'm trying to send data using the pgm/epgm transport and I get "Protocol not supported". Why?

You need to have ØMQ built with the --with-pgm option for this to be enabled.

How can I integrate ØMQ sockets with normal sockets? Or with a GUI event loop?

You can use the zmq_poll() function to poll for events on both ØMQ and normal sockets. The zmq_poll() function accepts a timeout, so if you need to process GUI events in the same application thread you can poll with a timeout and periodically handle those events. See also the reference documentation.
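
Here is a minimal sketch (assuming the 3.x C API, where the zmq_poll() timeout is in milliseconds; the 100 ms period and the poll_loop name are just illustrative) that polls one ØMQ socket and one regular file descriptor, waking up periodically so the same thread can also service GUI events:

#include <zmq.h>

/* Sketch: poll one ØMQ socket and one regular file descriptor, waking up
   every 100 ms so the same thread can also service GUI events. */
void poll_loop (void *zsocket, int regular_fd)
{
    zmq_pollitem_t items [2];
    items [0].socket = zsocket;  items [0].fd = 0;          items [0].events = ZMQ_POLLIN;
    items [1].socket = NULL;     items [1].fd = regular_fd; items [1].events = ZMQ_POLLIN;

    while (1) {
        if (zmq_poll (items, 2, 100) == -1)   /* timeout is in milliseconds (3.x) */
            break;
        if (items [0].revents & ZMQ_POLLIN) {
            char buf [256];
            zmq_recv (zsocket, buf, sizeof buf, 0);
            /* ... handle the ØMQ message ... */
        }
        if (items [1].revents & ZMQ_POLLIN) {
            /* ... read from the regular socket/descriptor ... */
        }
        /* ... pump GUI events here ... */
    }
}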

Why can't I use standard I/O multiplexing functions such as select() or poll() on ØMQ sockets?

A ØMQ socket is not a standard POSIX socket. It would be great if it were; however, POSIX doesn't provide a mechanism for simulating file descriptors in user space. To turn ØMQ sockets into POSIX file descriptors we would have to either move ØMQ into kernel space or patch the kernel to provide the needed functionality. In both cases we would have to sacrifice portability and stick to a single operating system. Note that from version 2.1 onwards there is a way to retrieve a file descriptor from a ØMQ socket (the ZMQ_FD socket option) that you can poll on; however, there are some serious caveats when using it. Check the documentation carefully before using this feature.
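
As a minimal sketch of that approach (assuming the 3.x C API), you retrieve the descriptor with ZMQ_FD and, whenever your external loop reports it readable, check ZMQ_EVENTS before receiving; the descriptor is edge-triggered, so you must keep draining the socket until ZMQ_EVENTS no longer reports ZMQ_POLLIN:

#include <zmq.h>
#include <stdio.h>

/* Sketch: hand the ØMQ socket's file descriptor to an external event loop.
   Caveat: the descriptor is edge-triggered, so after it signals you must
   check ZMQ_EVENTS and drain the socket until no more events are reported. */
void register_with_external_loop (void *socket)
{
    int fd;
    size_t fd_size = sizeof fd;
    zmq_getsockopt (socket, ZMQ_FD, &fd, &fd_size);
    /* ... add 'fd' to select()/poll()/your GUI loop here ... */

    /* Later, when the loop says 'fd' is readable: */
    int events;
    size_t events_size = sizeof events;
    zmq_getsockopt (socket, ZMQ_EVENTS, &events, &events_size);
    if (events & ZMQ_POLLIN)
        printf ("a message can be received without blocking\n");
}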

When sending a multipart message, my receiver doesn't get any message parts until the last part is sent. Is ØMQ broken?

The ØMQ library guarantees that multipart messages are sent and received atomically. A message cannot be delivered by the library to your application until all frames have been received. Obviously, this cannot happen while the sender still has parts pending or in queue.

There are two reasons for multipart message support in the library.

  1. Multipart messages give you scatter/gather-style transmission, so the application can avoid copying (potentially large) data into a single contiguous message. Zero copy is a huge performance win, particularly for large messages.
  2. Multipart support helps a protocol designer split a protocol into logically separate frames. This eases parsing on the receiving end, since not all languages handle packed binary structures as easily as C does.
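
A minimal sketch of sending and receiving a multipart message (assuming the 3.x C API; the frame contents and helper names are just illustrative):

#include <zmq.h>

/* Sketch: send a three-frame message atomically, then receive it.
   The receiver is handed nothing until every frame has arrived. */
void send_parts (void *sender)
{
    zmq_send (sender, "header", 6, ZMQ_SNDMORE);
    zmq_send (sender, "body",   4, ZMQ_SNDMORE);
    zmq_send (sender, "footer", 6, 0);               /* last frame: no SNDMORE */
}

void recv_parts (void *receiver)
{
    int more;
    size_t more_size = sizeof more;
    do {
        char buf [256];
        zmq_recv (receiver, buf, sizeof buf, 0);
        /* ZMQ_RCVMORE reports whether further frames of this message follow. */
        zmq_getsockopt (receiver, ZMQ_RCVMORE, &more, &more_size);
    } while (more);
}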

Does ØMQ buffer my entire message in memory before sending to its recipients?

Yes. So if you are sending multi-gigabyte messages, RAM will be consumed to hold them on both sides. The memory on the sender side is freed once the last part of the message has been sent. Also, see the point above.

Is it possible to receive EAGAIN or to block when sending a multi-part message?

Multipart messages are sent as an atomic unit. If one part is sent successfully, then the socket is guaranteed to not block or return EAGAIN until all parts have been sent. Of course, if you run out of memory in the meantime then the guarantee doesn't mean much, so don't try to send messages that exceed available memory.

Are you trying to tell me that ØMQ won't magically save me from out-of-memory conditions?

You are a perfect candidate for using AMQP or JMS. Please use Google to find more information on libraries that support those platforms and protocols.

Can I subscribe to messages using regex or wildcards?

No. Prefix matching only.
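
For instance (a sketch assuming the C API; the topic string is arbitrary), subscriptions are byte-wise prefix filters set with ZMQ_SUBSCRIBE:

#include <zmq.h>

/* Sketch: SUB sockets filter on byte-wise prefixes only. */
void subscribe_examples (void *sub)
{
    /* deliver every message whose body starts with "weather." */
    zmq_setsockopt (sub, ZMQ_SUBSCRIBE, "weather.", 8);

    /* an empty prefix matches everything */
    zmq_setsockopt (sub, ZMQ_SUBSCRIBE, "", 0);
}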

What portions of the ØMQ structures survive a call to fork()?

If you must fork, then the forked process should create its own context and its own sockets. It is not safe to share a context or sockets between a parent and its child.
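
A minimal sketch of the safe pattern (the endpoint and socket type are just illustrative): the child process builds everything from scratch and never touches the parent's context or sockets:

#include <zmq.h>
#include <unistd.h>

/* Sketch: after fork() the child builds its own context and sockets and
   never touches anything created by the parent. */
int main (void)
{
    void *parent_ctx = zmq_ctx_new ();
    /* ... parent creates and uses its own sockets on parent_ctx ... */

    if (fork () == 0) {
        /* child: do NOT reuse parent_ctx or the parent's sockets */
        void *child_ctx = zmq_ctx_new ();
        void *sock = zmq_socket (child_ctx, ZMQ_PUSH);
        zmq_connect (sock, "tcp://localhost:5557");
        /* ... child works with its own objects only ... */
        zmq_close (sock);
        zmq_ctx_destroy (child_ctx);
        _exit (0);
    }

    zmq_ctx_destroy (parent_ctx);
    return 0;
}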

Platform and Patterns

I want to write a program using ØMQ sockets. Which socket types should I use?

Please read the guide. It covers each socket pattern along with their use-cases. For further assistance, please join the mailing list or IRC and ask for help. Please do not be upset if the first question asked is, "Have you read the Guide yet?"

My multi-threaded program keeps crashing in weird places inside the ØMQ library. What am I doing wrong?

ØMQ sockets are not thread-safe. This is covered in some detail in the Guide.

The short version is that sockets should not be shared between threads. We recommend creating a dedicated socket for each thread.

For those situations where a dedicated socket per thread is infeasible, a socket may be shared if and only if each thread executes a full memory barrier before accessing it. Most languages provide a mutex or spinlock that executes the full memory barrier on your behalf.
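
A minimal sketch of the recommended pattern (assuming libzmq 3.2 with pthreads; the inproc endpoint name is arbitrary): the context is shared between threads, but each thread creates and uses only its own socket:

#include <zmq.h>
#include <pthread.h>

/* Sketch: the context may be shared across threads, but each thread
   creates and uses only its own socket. */
static void *worker (void *ctx)
{
    void *sock = zmq_socket (ctx, ZMQ_PULL);   /* this thread's own socket */
    zmq_connect (sock, "inproc://work");
    /* ... receive and process work on this thread only ... */
    zmq_close (sock);
    return NULL;
}

int main (void)
{
    void *ctx = zmq_ctx_new ();

    void *feeder = zmq_socket (ctx, ZMQ_PUSH); /* main thread's own socket  */
    zmq_bind (feeder, "inproc://work");        /* bind before inproc connects */

    pthread_t thread;
    pthread_create (&thread, NULL, worker, ctx);
    /* ... send work on 'feeder'; never touch the worker's socket ... */

    pthread_join (thread, NULL);
    zmq_close (feeder);
    zmq_ctx_destroy (ctx);
    return 0;
}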

For more information, please read the Guide.

Why do I see different behavior when I bind a socket versus connect a socket?

ØMQ creates a queue per underlying connection. For example, if your socket is connected to 3 peer sockets, there are 3 message queues behind it.

With bind, you allow peers to connect to you, thus you don't know how many peers there will be in the future and you cannot create the queues in advance. Instead, queues are created as individual peers connect to the bound socket.

With connect, ØMQ knows that there is going to be at least a single peer, so it can create a queue immediately. This applies to all socket types except ROUTER, where queues are only created after the peer we connect to has acknowledged our connection.

Consequently, when sending a message to a bound socket with no peers, or to a ROUTER with no live connections, there is no queue to store the message in.

When should I use bind and when connect?

As very general advice: use bind for the most stable points in your architecture, and use connect for the more volatile endpoints. For request/reply, the service provider is typically the point where you bind and the clients use connect, just like plain old TCP.

If you can't figure out which parts are more stable (e.g. in a peer-to-peer setup), think about a stable device in the middle that both sides can connect to.

The question of bind versus connect is often overemphasized. It's really just a matter of what the endpoints do and whether they are long-lived or not, and that depends on your architecture. So build your architecture to fit your problem, not the tool.

I'm worried about my application heartbeats queuing up behind lower-priority data and causing a disconnect. How can I send a message with higher priority so it jumps forward in the queue?

See the discussion of heartbeating in the Guide.

I need to have multiple sockets share a single TCP connection (host and port). How can I accomplish this?

This is being added to the next version of the ZMTP protocol. Today you can accomplish it using a proxy that sits between the external TCP address and your tasks.

I set a HWM (high water mark) for a socket but it isn't working!

That's not a question. Also, please read the man page for zmq_setsockopt() closely: certain socket options only take effect for subsequent zmq_bind/zmq_connect calls. We recommend setting all socket options before making any calls to zmq_bind/zmq_connect; that way you don't have to remember these tiny implementation details and can focus on writing great code.
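
A minimal sketch of that recommendation (assuming libzmq 3.2, where the send-side limit is the ZMQ_SNDHWM option; the endpoint, value, and helper name are just illustrative):

#include <zmq.h>

/* Sketch: ZMQ_SNDHWM (3.x) only affects connections made after it is set,
   so set it before calling zmq_connect() or zmq_bind(). */
void *make_bounded_pusher (void *ctx)
{
    void *push = zmq_socket (ctx, ZMQ_PUSH);

    int hwm = 1000;                                    /* measured in messages */
    zmq_setsockopt (push, ZMQ_SNDHWM, &hwm, sizeof hwm);

    zmq_connect (push, "tcp://localhost:5557");        /* connect afterwards */
    return push;
}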

Also, read the next question about HWM below.

How does the HWM (high water mark) work with any socket type?

It works the following way right now:

The I/O thread reads messages from the pipe and pushes them to the network. If the network cannot accept more data (e.g. TCP backpressure is applied), it stops reading messages from the pipe and waits until the network is ready to accept more.

In the application thread, messages are simply pushed to the pipe when zmq_send() is called. If the pipe is full (HWM is reached) the message is dropped.

The problem with this approach is that when you send a lot of messages in quick succession (e.g. small messages in a tight loop), they are stored in the pipe until it is full and subsequent messages are simply dropped. The sender is not even notified that messages are disappearing.

The main core developer is hopeful that some community members will volunteer to assist in replacing this mechanism with a rate flow control mechanism.

How can I flush all messages that are in the ØMQ socket queue?

There is no explicit command for flushing a specific message or all messages from the message queue. You may set ZMQ_LINGER to 0 and close the socket to discard any unsent messages.
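
A minimal sketch of that idiom (assuming the C API):

#include <zmq.h>

/* Sketch: discard anything still queued on a socket by setting ZMQ_LINGER
   to 0 before closing it. */
void close_and_discard (void *socket)
{
    int linger = 0;
    zmq_setsockopt (socket, ZMQ_LINGER, &linger, sizeof linger);
    zmq_close (socket);   /* pending outbound messages are dropped */
}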

When running "make check" on OSX, how do I fix the failures?

Please refer to the tuning guide for platform-specific tuning. In short, OS X ships with some low kernel defaults that the tests exceed, so they need to be tuned.

Performance

What is the optimal number of I/O threads for best performance?

The basic heuristic is to allocate 1 I/O thread in the context for every gigabit per second of data that will be sent and received (aggregated). Further, the number of I/O threads should not exceed (number_of_cpu_cores - 1).
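
A minimal sketch of that heuristic (assuming the libzmq 3.2 API, where the pool size is set with zmq_ctx_set(); the helper name and parameters are just illustrative):

#include <zmq.h>

/* Sketch: roughly one I/O thread per Gbit/s of aggregate traffic,
   capped at number_of_cpu_cores - 1 and never below 1. */
void *make_context (int expected_gbps, int cpu_cores)
{
    int io_threads = expected_gbps;
    if (io_threads > cpu_cores - 1)
        io_threads = cpu_cores - 1;
    if (io_threads < 1)
        io_threads = 1;

    void *ctx = zmq_ctx_new ();
    zmq_ctx_set (ctx, ZMQ_IO_THREADS, io_threads);   /* set before creating sockets */
    return ctx;
}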

The graph in the test results shows that ØMQ is slower than TCP/IP. What's the point then?

Obviously, you would expect a system working on top of TCP to have higher latency than raw TCP; anything else would be, simply speaking, supernatural. Throughput, however, is a different matter: ØMQ achieves higher throughput than raw TCP by using intelligent batching algorithms. Moreover, ØMQ delivers value on top of TCP: asynchronicity, message queueing, routing based on business logic, multicast, and so on.

How come ØMQ has higher throughput than TCP although it's built on top of TCP?

Avoiding redundant traversals of the networking stack improves throughput significantly. In other words, sending two messages down the networking stack in one go takes much less time than sending each of them separately. This technique is known as message batching.

When sending messages in batches you have to wait for the last one to send the whole batch. This would make the latency of the first message in the batch much worse, wouldn't it?

ØMQ batches messages in an opportunistic manner. Rather than waiting for a predefined number of messages and/or a predefined time interval, it sends all the messages available at that moment in one go. Imagine the network interface card is busy sending data. Once it is ready to send more, it asks ØMQ for new messages, and ØMQ hands over everything available at that moment. Does this harm the latency of the first message in the batch? No. The message couldn't have been sent any earlier anyway, because the network card was busy. On the contrary, the latency of subsequent messages improves, because sending a single batch to the card is faster than sending many small messages. And if the network card isn't busy, the message is sent straight away without waiting for following messages, so it gets the best possible latency.

ØMQ's latency is nice, but is there a way to make it even lower?

We are working on delivering ØMQ over alternative networking stacks, taking advantage of features like kernel bypass, avoiding TCP/IP overhead, using high-performance networking hardware, and so on. That way we can get latency as low as 10 microseconds.

Why am I only getting 40 Mbps performance when sending messages using PGM?

PGM uses rate-limiting on the sender side. By default this limit is set to 40 Mbps. You can set it using the ZMQ_RATE option to zmq_setsockopt().
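
A minimal sketch (assuming the 3.x C API, where ZMQ_RATE is expressed in kilobits per second; the multicast interface, group, rate value, and helper name are just placeholders):

#include <zmq.h>

/* Sketch: raise the PGM rate limit before connecting. ZMQ_RATE is
   expressed in kilobits per second. */
void *make_fast_pgm_publisher (void *ctx)
{
    void *pub = zmq_socket (ctx, ZMQ_PUB);

    int rate = 1000000;   /* 1,000,000 kbit/s = roughly 1 Gbit/s */
    zmq_setsockopt (pub, ZMQ_RATE, &rate, sizeof rate);

    /* interface and multicast group below are placeholders */
    zmq_connect (pub, "epgm://eth0;239.192.1.1:5555");
    return pub;
}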

Does the ØMQ library disable the Nagle algorithm (TCP NODELAY)?

Yes.

Monitoring

How do I determine how many messages are in queue?

This isn't possible. At any given time a message may be in the ØMQ sender queue, the sender's kernel buffer, on the wire, in the receiver's kernel buffer or in the receiver's ØMQ receiver queue. Furthermore, a ØMQ socket can bind and/or connect to many peers. Each peer may have different performance characteristics and therefore a different queue depth. Any "queue depth" number is almost certainly wrong, so rather than provide incorrect information the library avoids providing any view into this data.

How can I be notified that a peer has connected/disconnected from my socket?

ØMQ sockets can bind and/or connect to multiple peers simultaneously. The sockets also transparently provide asynchronous connection and reconnection facilities. At this time, none of the sockets will provide notification of peer connect/disconnect. This feature is being investigated for a future release.

How can I retrieve a list of all connected peers?

This is not supported.

How can I auto-discover services provided by a ØMQ-based application?

This type of facility is not supported by the library. Such a tool could be built on top of ØMQ.

Security

Is it true that it is not safe to use ØMQ over the internet because it will crash?

Earlier versions of the ØMQ library (before 2.1) were not very resilient against "fuzzing" attacks. A malformed packet or garbage data could cause an old version of the library to assert and exit. Since the release of 2.1, all reported cases of assertions caused by bad data have been fixed. If your testing uncovers a problem in this area, please file a bug report.

What security features does ØMQ support?

None at the moment but this is being added to the next version of the protocol. People have successfully built DTLS, CurveCP, and other security protocols over ØMQ.

I read somewhere that I have to run my application as root if I want to use PGM, is this true?

The epgm:// transport uses PGM encapsulated in UDP packets and does not require any special permissions.

If you need to use the raw pgm:// transport then your application must be able to create raw sockets, which means either running as root or with capabilities to do so. On a modern Linux distribution with capabilities enabled you can use the following to run an application with the CAP_NET_RAW capability enabled:

$ sudo execcap 'cap_net_raw=ep' pgmsend moo

For more details please see the relevant OpenPGM wiki page.

Backwards Compatibility

ØMQ/3.2.2 stable and later releases are compatible with 2.2 and 2.1. All other 3.x versions up to 3.2.1 are only compatible with themselves, which is unfortunate but was caused by releases 3.0 and 3.1 breaking the protocol without adding any version information. One symptom of mixing these incompatible releases (3.0.x, 3.1.x, 3.2.0, 3.2.1) with stable releases is that request-reply works in one direction only.

Why is the libzmq.so version not the same as the product version?

As the GNU libtool manual says, "Never try to set the interface numbers so that they correspond to the release number of your package."

Building

What packages do I need to build ØMQ on Ubuntu?

You need build-essential and, for versions before ØMQ/3.0, uuid-dev: sudo apt-get install build-essential uuid-dev. You can also run sudo apt-get build-dep libzmq0, which installs all the build dependencies of the libzmq0 Ubuntu package.

After cloning the github repository, I can't build the library because the 'configure' script doesn't exist! What do I do?

You need autotools installed for your OS so that the configure script can be generated. Run this command to generate it:

$ ./autogen.sh