Into the Wild with Servlet Async IO

By: Oracle Developers


Uploaded on 06/02/2015

The Servlet Async IO API was released into the wild more than a year ago and is a significantly different animal than the JVM’s async NIO. Most developers are as familiar with scaling web applications with async techniques as they are with scaling the Himalayas with an ice axe. The implementers of Jetty are your experienced guides for this session, which examines the beast in the real world. You will discover if developers will encounter a marmot or a yeti as they attempt to scale web applications with async techniques.

Author:
Greg Wilkins
Greg is the lead developer of the Jetty HTTP server and servlet container, as well as a key contributor to the CometD server-push framework and several other open source projects. Greg is an active participant in standards processes at the Java Community Process and the IETF HTTP/2 working group. Greg was a founder of Mort Bay Consulting and Webtide.com. He is now a senior architect at Intalio|Webtide.
View more trainings by Greg Wilkins at https://www.parleys.com/author/greg-wilkins-1

Find more related tutorials at https://www.parleys.com/category/developer-training-tutorials

Comments (4):

By anonymous    2017-09-20

It's not right to say that Netty is better than Tomcat. The implementations are different. Tomcat uses Java NIO to implement the Servlet 3.1 spec, while Netty also uses NIO but introduces a custom API. If you want insight into how Servlet 3.1 is implemented in Netty, watch this video https://youtu.be/uGXsnB2S_vc


By anonymous    2017-09-20

Suppose you have a Tomcat server that has 10 threads listening for client requests. If you have a client that invokes an endpoint that takes 5 seconds to respond, that client holds that thread for those 5 seconds. Add a few concurrent clients and you will soon run out of threads during those 5 seconds.

The situation is even worse, because during most of those 5 seconds your request is doing mostly I/O, which means your thread is blocked doing nothing but waiting.
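The starvation described above is easy to reproduce with a plain fixed-size pool as a stand-in for Tomcat's worker threads (a sketch, not Tomcat internals; the 200 ms sleep stands in for the 5-second endpoint):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ThreadStarvationDemo {

    // Pretend each request blocks ~200 ms waiting on I/O.
    static void blockingIo() {
        try {
            Thread.sleep(200);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    // Runs `requests` blocking requests on a pool of `threads` workers
    // and returns the total wall-clock time in milliseconds.
    static long serve(int threads, int requests) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        long start = System.nanoTime();
        List<Future<?>> futures = new ArrayList<>();
        for (int i = 0; i < requests; i++) {
            futures.add(pool.submit(ThreadStarvationDemo::blockingIo));
        }
        for (Future<?> f : futures) {
            f.get(); // wait for every "response"
        }
        pool.shutdown();
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) throws Exception {
        // With only 2 workers, the 3rd and 4th requests must wait for a
        // free thread, so 4 requests cost roughly two I/O waits, not one.
        System.out.println("2 threads, 4 requests: ~" + serve(2, 4) + " ms");
        System.out.println("4 threads, 4 requests: ~" + serve(4, 4) + " ms");
    }
}
```

With more concurrent clients than worker threads, total latency grows in steps of one full I/O wait, even though the CPU is idle the whole time.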

So the ability of Spring to use Callable, CompletableFuture, or ListenableFuture as the return type of a controller is precisely there to allow programmers to overcome this kind of problem to a certain extent.

Fundamentally, returning one of these types only releases the web server thread, making it available to another client, so you get to serve more clients in the same amount of time. However, that by itself may not be enough to implement a non-blocking I/O (aka NIO) API.
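A rough plain-Java analogue of that handoff (illustrative only, not Spring's real internals): the "request thread" gets a CompletableFuture back immediately and is free to pick up the next client, while the slow work finishes on a separate worker pool.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class AsyncHandoffDemo {

    // Worker pool standing in for the executor Spring would dispatch to.
    static final ExecutorService workers = Executors.newFixedThreadPool(4);

    // A "controller" that returns a CompletableFuture instead of a value:
    // the slow lookup runs on `workers`, not on the caller's thread.
    static CompletableFuture<String> handleRequest(String user) {
        return CompletableFuture.supplyAsync(() -> slowLookup(user), workers);
    }

    static String slowLookup(String user) {
        try {
            Thread.sleep(100); // pretend this is a slow database call
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return "profile of " + user;
    }

    public static void main(String[] args) throws Exception {
        // The request thread gets the future back immediately...
        CompletableFuture<String> response = handleRequest("alice");
        System.out.println("request thread is already free");
        // ...and the response is completed later, on a worker thread.
        System.out.println(response.get()); // get() only for the demo
        workers.shutdown();
    }
}
```

Note the caveat from the comment above: the lookup itself still blocks a thread, just a different one, which is why this alone is not yet non-blocking I/O.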

Most of these features come from the core functionality offered by the Servlet API and Servlet Async I/O, which Spring most likely uses under the hood. You may want to take a look at the following videos, which helped me understand this from the ground up:

Those videos explain the idea behind Servlet Async I/O and the end goal of implementing non-blocking web apps as well.

The holy grail here is to reach a point where the threads in your thread pool are never blocked waiting for I/O to happen. They are either doing CPU-bound work, or they're back in the pool, available to serve some other client. When you do I/O, you don't sit and wait; you register some form of callback that will tell you when the results are ready, and in the meantime your valuable CPU cores can work on something else. If you think it over, a Callable, a CompletableFuture, or a ListenableFuture is exactly that sort of callback object, which the Spring infrastructure uses under the hood to attend to a request on a separate thread.
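The "register a callback instead of waiting" idea is exactly what the JDK's asynchronous channels offer. A minimal sketch using AsynchronousFileChannel with a CompletionHandler (file I/O for simplicity, and assuming the file fits in a single read; the same handler pattern applies to socket channels):

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.AsynchronousFileChannel;
import java.nio.channels.CompletionHandler;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.concurrent.CompletableFuture;

public class CallbackIoDemo {

    // Reads a file without blocking the calling thread: instead of
    // waiting, we register a CompletionHandler that fires when the
    // bytes are ready.
    static CompletableFuture<String> readAsync(Path path) throws IOException {
        AsynchronousFileChannel channel =
                AsynchronousFileChannel.open(path, StandardOpenOption.READ);
        ByteBuffer buffer = ByteBuffer.allocate((int) channel.size());
        CompletableFuture<String> result = new CompletableFuture<>();
        channel.read(buffer, 0, null, new CompletionHandler<Integer, Void>() {
            @Override
            public void completed(Integer bytesRead, Void attachment) {
                buffer.flip();
                result.complete(StandardCharsets.UTF_8.decode(buffer).toString());
                try { channel.close(); } catch (IOException ignored) { }
            }

            @Override
            public void failed(Throwable exc, Void attachment) {
                result.completeExceptionally(exc);
                try { channel.close(); } catch (IOException ignored) { }
            }
        });
        return result; // the caller's thread is free to do other work now
    }

    public static void main(String[] args) throws Exception {
        Path tmp = Files.createTempFile("demo", ".txt");
        Files.write(tmp, "hello async".getBytes(StandardCharsets.UTF_8));
        String content = readAsync(tmp).get(); // get() only for the demo
        System.out.println(content);
        Files.delete(tmp);
    }
}
```

Between issuing the read and the callback firing, no pool thread is parked on the I/O, which is the property the whole comment is driving at.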

This increases your throughput, since you can serve more clients concurrently simply by making better use of your valuable CPU resources, particularly if you do it in a NIO way. As you can imagine, just moving the request to another thread, although beneficial (since you free a valuable Tomcat thread), would still be blocking; you'd just be moving the problem to another thread pool.

I believe this fundamental principle is also behind a good part of the work the Spring team is currently doing on Project Reactor, since in order to leverage this type of feature you need to introduce asynchronous programming into your APIs, and that is hard to do.

That's also the reason for the proliferation of frameworks and initiatives like Netty, RxJava, the Reactive Streams initiative, and Project Reactor. They are all seeking to promote this type of optimization and programming model.

There is also an interesting wave of new frameworks that leverage these powerful features and are trying to compete with, or even complement, Spring's still-limited functionality in this area. I'm talking about interesting projects like Vert.x and Ratpack, and while we're at it, this feature is one of the major selling points of Node.js as well.

