Tom Barnes, Sunila Srivatsan, both from BEA Systems
Use cases: When to use messaging - load balancing, batching multiple operations into one, store and forward (aka reliable messaging in XML), pub/sub broadcast, integration between the J2EE platform and non-J2EE/non-Java systems. Exactly-once (transactional) versions of the above.
When not to use messaging: short-running request/response style operations on reliable services - instead invoke servlets, EJB architecture, or web services directly. Integration at message-bus boundaries (e.g., between different companies): use XML standards to specify message semantics and to inject foreign messages into the local bus.
- Location transparency - transparent access to JMS destinations and connections from any server in the cluster or any client connecting to cluster
- Connection load balancing - multiple servers host JMS client connections
- Message routing - a client stays connected to a single server in the cluster, client requests route to appropriate JMS server
- Distributed destinations - a single logical destination that represents multiple physical destinations.
Distributed destinations - vendor-specific feature. Logical destination that is a group of multiple physical destinations. Appears as, and should be used as, a regular destination. Increases availability for continuous JMS service. Responsible for distributing load across the different physical destinations on multiple server instances. Most common use case: MDB applications targeted to the same servers that host the distributed destination.
Routing and load balancing: Connection load balance on client side; Message routing and load balance on server side
Without a distributed destination, the sender must specify the physical destination of a message; with a distributed destination, no need to specify one.
Distributed destination load balancing. Round-robin or random algorithm used. Producers can load balance once on the first message or on every message. Consumers load balance once, when they are created. For distributed queues, messages are load balanced across member queues. Persistent messages go to the first member with a configured JMS store. Favors members already in a transaction. Favors members on the same server as the client.
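The two producer-side policies above (balance on every message vs. balance only on the first) can be sketched in plain Java. This is a simulation of the selection logic, not the JMS API; member names are made up:

```java
import java.util.List;

// Sketch of distributed-destination member selection for producers.
// roundRobin() re-picks a member on every send ("balance every message");
// balanceOnce() pins the member chosen for the first send.
public class MemberPicker {
    private final List<String> members;
    private int next = 0;          // round-robin cursor
    private String pinned = null;  // member kept when balancing only once

    public MemberPicker(List<String> members) { this.members = members; }

    // Round-robin: each call returns the next physical member in turn.
    public String roundRobin() {
        String m = members.get(next);
        next = (next + 1) % members.size();
        return m;
    }

    // Balance once: the first call picks a member; later calls reuse it.
    public String balanceOnce() {
        if (pinned == null) pinned = roundRobin();
        return pinned;
    }

    public static void main(String[] args) {
        MemberPicker p = new MemberPicker(List.of("queueA", "queueB", "queueC"));
        System.out.println(p.roundRobin()); // queueA
        System.out.println(p.roundRobin()); // queueB
    }
}
```

A real implementation would also apply the heuristics above (skip members without a store for persistent messages, prefer members already in the transaction), which this sketch omits.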
Connection-level fail-over: for most JMS vendors there is no fail-over for existing connections, and state is lost. Session fail-over: consumers receive exceptions on failure; best practice is to call Session.recover().
JMS is an exactly-once service, so the service can only be active on one server in a cluster. Automatic restart (HA framework, vendor-specific) and replicated message store approaches are provided by some JMS vendors.
Store-and-forward. Store requests in a local destination for eventual forwarding to a remote destination or service. Saves requests even if the remote destination is unavailable. Two styles of SAF - client-side and server-side. Destination-to-destination SAF is unnecessary if remote services can pull directly from the local destination, or when messages can be sent directly to a remote destination.
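The SAF pattern above can be sketched as a plain-Java simulation (not the JMS API): the sender always enqueues to a local store, and a forwarder drains the store whenever the remote destination is reachable.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Sketch of store-and-forward: send() never fails because the message is
// saved locally first; forward() moves stored messages to the remote side
// only when it is available. Field names are illustrative.
public class StoreAndForward {
    private final Deque<String> localStore = new ArrayDeque<>();
    final List<String> remote = new ArrayList<>();
    boolean remoteAvailable = false;

    // Store locally; delivery happens later, even if the remote is down now.
    public void send(String msg) { localStore.add(msg); }

    // Forwarder pass: drain the local store while the remote is up.
    public void forward() {
        while (remoteAvailable && !localStore.isEmpty()) {
            remote.add(localStore.remove());
        }
    }

    public static void main(String[] args) {
        StoreAndForward saf = new StoreAndForward();
        saf.send("order-1");       // remote is down; message is retained
        saf.forward();             // nothing delivered yet
        saf.remoteAvailable = true;
        saf.forward();             // now delivered
        System.out.println(saf.remote);
    }
}
```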
Performance - leveraging aggregation: inherent scalability can be leveraged by batching as much as possible - multiple network, disk, database, and transaction operations. Encourage concurrent clients: use MDB pools, use multiple senders and receivers. Use a single message to hold multiple requests - you get roughly the same messages per second with 1K messages as with 100-byte messages; not helpful with request sizes larger than a few K. Batch multiple messages into single transactions - allows messages to be grouped into single persistent I/Os; batched transactions may yield higher performance than non-transactional. Use asynchronous rather than synchronous consumers - allows JMS to push messages one-way rather than serve them via request/response. Tune the asynchronous message pipeline.
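The "single message holds multiple requests" idea can be sketched as a small batcher (plain Java; the batch size and delimiter are arbitrary choices for illustration):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: amortize per-message overhead by packing many small requests
// into one message body. Each flush() represents one JMS send.
public class RequestBatcher {
    private final int batchSize;
    private final List<String> pending = new ArrayList<>();
    final List<String> sentMessages = new ArrayList<>();

    public RequestBatcher(int batchSize) { this.batchSize = batchSize; }

    public void add(String request) {
        pending.add(request);
        if (pending.size() >= batchSize) flush();
    }

    // One "send" carries the whole batch, e.g. as a delimited payload.
    public void flush() {
        if (pending.isEmpty()) return;
        sentMessages.add(String.join("|", pending));
        pending.clear();
    }

    public static void main(String[] args) {
        RequestBatcher b = new RequestBatcher(3);
        for (int i = 1; i <= 7; i++) b.add("req" + i);
        b.flush(); // send the partial last batch
        System.out.println(b.sentMessages.size() + " messages for 7 requests");
    }
}
```

Seven requests become three sends instead of seven, which is where the per-message savings come from.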
Benchmark design: develop custom benchmarks to model applications - generic JMS benchmark kits often don't yield representative data. Apps use JMS in a variety of ways; message type, size, concurrency, locality, filtering, etc. are all significant, and app overhead has a significant impact. Design the benchmark to closely model the app. Run on the same hardware used in production. Use enough messages to test accurately. Give the system time to warm up before beginning measurements.
Locating bottlenecks: bottlenecks are often outside of JMS - DB, app processing, XML processing, transactions. The network can be a bottleneck; take into account non-JMS network traffic. JMS persistence can be a bottleneck - measure by temporarily forcing all messages to be non-persistent: configure JMS to treat all messages as non-persistent, or switch to asynchronous file persistence, or move the file store onto a virtual disk.
Database vs. file stores - JMS vendors typically offer a choice of file- or database-based persistence. Both have the same transaction semantics and guarantees, and the same app interface. All things being equal, file stores generally offer better throughput than a JDBC store.
Reducing transaction overhead: batch multiple messages into a transaction. Transactions are usually not needed for non-persistent messaging. Vendor optimizations aim at reducing database XA transaction overhead: the last-resource/last-participant optimization (not fully safe) - WebLogic has logging last resource, WebSphere has a shared data source with CMP. Replace transactions with "duplicate elimination" - the common transaction use case is exactly-once message forwarding, and send-retry with duplicate elimination can replace transactions there, but is not always faster.
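The send-retry-with-duplicate-elimination idea can be sketched in plain Java: the sender retries on failure (possibly delivering the same message twice), and the receiver drops anything whose ID it has already seen, giving an exactly-once effect without a transaction. This is a simulation under assumed names, not a vendor API:

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Sketch of receiver-side duplicate elimination keyed on a message ID.
// In practice the seen-ID set must itself be persisted (or bounded by a
// time window) to survive restarts - omitted here.
public class DedupReceiver {
    private final Set<String> seenIds = new HashSet<>();
    final List<String> processed = new ArrayList<>();

    // Returns true if the message was new and processed,
    // false if it was a duplicate from a sender retry.
    public boolean receive(String messageId, String body) {
        if (!seenIds.add(messageId)) return false; // already seen: drop
        processed.add(body);
        return true;
    }

    public static void main(String[] args) {
        DedupReceiver r = new DedupReceiver();
        r.receive("m1", "payload");
        r.receive("m1", "payload"); // retried send: silently discarded
        System.out.println(r.processed.size()); // 1
    }
}
```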
Tuning file stores: hardware solutions - dedicated disks, multiple stores, disks with battery-backed cache. Asynchronous file store writes - Java-based JMS products typically provide an asynchronous disk write setting. Unsafe in the event of an OS crash or power failure, but fast. Some JMS vendors enable this by default.
Windows disk caching - various MS Windows versions enable OS- or disk-level caching for direct writes by default. Unsafe in the event of an OS crash or power failure. Java-based vendors typically don't use direct writes; instead they explicitly force caches to flush as needed. Configure via hardware manager settings.
Throttling techniques: various techniques exist to smooth JMS server load and prevent system overload: configuring message quotas, configuring sender throttling, designing apps using a request/response pattern, and tuning application concurrency.
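A minimal sketch of the quota/sender-throttling idea (plain Java, with made-up names): the producer is refused (or would block) once the destination backlog hits a quota, and may resume as consumers drain messages.

```java
// Sketch of sender-side throttling against a destination quota: sends
// are refused while the backlog is at the high-water mark, letting
// consumers catch up instead of overloading the server.
public class ThrottledSender {
    private final int quota;   // max messages allowed in the destination
    private int backlog = 0;   // messages produced but not yet consumed
    int throttledSends = 0;    // how often the sender was pushed back

    public ThrottledSender(int quota) { this.quota = quota; }

    public boolean trySend() {
        if (backlog >= quota) { throttledSends++; return false; }
        backlog++;
        return true;
    }

    public void consumeOne() { if (backlog > 0) backlog--; }

    public static void main(String[] args) {
        ThrottledSender s = new ThrottledSender(2);
        System.out.println(s.trySend()); // true
        System.out.println(s.trySend()); // true
        System.out.println(s.trySend()); // false: quota reached
        s.consumeOne();
        System.out.println(s.trySend()); // true again
    }
}
```

Real vendors typically implement this by blocking or paging the producer rather than returning a boolean; the refusal-at-quota behavior is the point.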
Ordered message delivery - ensuring FIFO: JMS guarantees ordering between a single sender and a single consumer, but limits the guarantee in the event of consumer rollbacks or recovers. FIFO generally requires a single consumer per destination, which limits performance. Some vendors provide a "unit of work" or "unit of order" solution, which enables multiple named FIFO "sub-orderings" within a single destination - less configuration than using multiple destinations, and it supports multiple concurrent consumers; a standard alternative is to use a unique selector per consumer. Custom message ordering (priority, etc.) is vendor-specific, typically configurable on the destination.
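The core idea behind "unit of order" can be sketched as key-based routing (plain Java, not a vendor API): every message carrying the same order key maps to the same sub-ordering, and hence the same consumer, so per-key FIFO holds even with many concurrent consumers.

```java
// Sketch of unit-of-order routing: messages with equal order keys always
// land on the same consumer index, preserving FIFO within each key while
// different keys are processed concurrently. Key names are made up.
public class UnitOfOrderRouter {
    private final int consumers;

    public UnitOfOrderRouter(int consumers) { this.consumers = consumers; }

    // Deterministic key -> consumer mapping (floorMod avoids negatives).
    public int route(String orderKey) {
        return Math.floorMod(orderKey.hashCode(), consumers);
    }

    public static void main(String[] args) {
        UnitOfOrderRouter r = new UnitOfOrderRouter(4);
        // All messages for one customer stay in order on one consumer:
        System.out.println(r.route("customer-42") == r.route("customer-42")); // true
    }
}
```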
"Poison messages" are messages that are repeatedly redelivered. Apps may force redelivery via transaction rollbacks or timeouts, NAKs, or throwing exceptions from onMessage(). Throwing a runtime exception from MDB onMessage(): the container will destroy the current MDB instance, force message redelivery, and roll back the transaction if there is one.
Handling of problem messages impacts performance. Basic poison message handling is provided by the JMS API. Additional handling is commonly provided by JMS vendors - automatic redelivery delay, auto-delete after X attempts, error/dead-letter queues, scheduled message delivery.
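The "auto-delete after X attempts / dead-letter queue" behavior can be sketched in plain Java (a simulation with illustrative names, not a vendor API): a per-message failure counter, and a move to an error queue once the limit is hit.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of a redelivery limit: after maxAttempts failed deliveries a
// message is shunted to an error/dead-letter queue instead of being
// redelivered forever and burning consumer cycles.
public class RedeliveryLimiter {
    private final int maxAttempts;
    private final Map<String, Integer> attempts = new HashMap<>();
    final List<String> deadLetterQueue = new ArrayList<>();

    public RedeliveryLimiter(int maxAttempts) { this.maxAttempts = maxAttempts; }

    // Called when processing a message fails. Returns true if it will be
    // redelivered, false if it was moved to the dead-letter queue.
    public boolean onFailure(String messageId) {
        int n = attempts.merge(messageId, 1, Integer::sum);
        if (n >= maxAttempts) {
            deadLetterQueue.add(messageId);
            attempts.remove(messageId);
            return false;
        }
        return true;
    }

    public static void main(String[] args) {
        RedeliveryLimiter limiter = new RedeliveryLimiter(3);
        System.out.println(limiter.onFailure("poison")); // true: retry
        System.out.println(limiter.onFailure("poison")); // true: retry
        System.out.println(limiter.onFailure("poison")); // false: to DLQ
    }
}
```

A redelivery delay (not shown) would additionally pause between retries so transient downstream failures have time to clear.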
Introducing transactions and JMS. When to apply: exactly-once forwarding, exactly-once message processing; rarely makes sense for non-persistent messaging. Two types of transaction: local and global. Local = only JMS may participate in the transaction; supplied by JMS API "transacted sessions".
Ensuring JMS participates in a global transaction: use a JMS API XAConnectionFactory. Manual enlistment, automatic enlistment, assisted enlistment. For more info see "Integrating Remote JMS Providers".