CppCon 2016: Michael Caisse “Asynchronous IO with Boost.Asio”

By: CppCon


Uploaded on 10/05/2016


Presentation Slides, PDFs, Source Code and other presenter materials are available at: https://github.com/cppcon/cppcon2016

Reactive systems are found everywhere. The temptation to implement them with legions of waiting threads can be strong; however, the result is nearly always disappointing. The Boost.Asio library provides a framework to handle asynchronous resources with specific classes directed toward networking, serial port I/O, timers and more. In this session we will introduce Asio and some best practices while implementing a simple TCP client and server.

Asio has been submitted to the C++ Standards Committee for inclusion and can be found in the Boost library collection or as a stand-alone version. Come and learn a better way to implement reactive systems with the Asynchronous I/O library.

Michael Caisse
Ciere, Inc.
Michael Caisse has been crafting code in C++ for 25 years. He is a regular speaker at various conferences and is passionate about teaching and training. Michael is the owner of Ciere Consulting, which provides software consulting and contracting services, C++ training, and project recovery for failing multidisciplinary engineering projects. When he isn't fighting with compilers or robots, he enjoys fencing with a sabre. :: ciere.com

Videos Filmed & Edited by Bash Films: http://www.BashFilms.com

Comments (6):

By anonymous    2017-09-20

Connection lifetime is a fundamental issue with boost::asio. Speaking from experience, I can assure you that getting it wrong causes "undefined behaviour"...

The asio examples use shared_ptr to ensure that a connection is kept alive whilst it may have outstanding handlers in an asio::io_service. Note that even in a single thread, an asio::io_service runs asynchronously to the application code, see CppCon 2016: Michael Caisse "Asynchronous IO with Boost.Asio" for an excellent description of the precise mechanism.

A shared_ptr enables the lifetime of a connection to be controlled by the shared_ptr reference count. IMHO it's not "cheating and cheating big", but an elegant solution to a complicated problem.

However, I agree with you that just using shared_ptr's to control connection lifetimes is not a complete solution since it can lead to resource leaks.

In my answer here: Boost async_* functions and shared_ptr's, I proposed using a combination of shared_ptr and weak_ptr to manage connection lifetimes. An HTTP server using a combination of shared_ptr's and weak_ptr's can be found here: via-httplib.

The HTTP server is built upon an asynchronous TCP server which uses a collection of (shared_ptr's to) connections, created on connects and destroyed on disconnects as you propose.

Original Thread

By anonymous    2017-09-20

Instead of using an io_service in a thread per pair of network cards, you may be better off wrapping your sockets in an asio::io_service::strand and using a single io_service in a thread pool, see: Strands: Use Threads Without Explicit Locking and Asynchronous IO with boost asio.

It is easiest to put the sockets and strands together in a class as in this example: Timer 5 example. There is some code that supports asio UDP sockets and strands here.

Original Thread

By anonymous    2017-09-20

No, you cannot safely call async_send_to multiple times in a row WITHOUT waiting for the write handler to be called. See Asynchronous IO with Boost.Asio for precisely why.

However, asio supports scatter/gather I/O, so you can call async_send_to with multiple buffers, e.g.:

typedef std::deque<boost::asio::const_buffer> ConstBuffers;

std::string msg_1("Blah");
std::string msg_n("Blah");

ConstBuffers buffers;
buffers.push_back(boost::asio::buffer(msg_1));  // gather: one datagram,
buffers.push_back(boost::asio::buffer(msg_n));  // several source buffers

socket_.async_send_to(buffers, tx_endpoint_, write_handler);

So you could increase your throughput by double buffering your message queue and using gathered writes...

Original Thread
