CppCon 2016: Howard Hinnant “A <chrono> Tutorial”

By: CppCon


Uploaded on 10/05/2016

http://CppCon.org

Presentation Slides, PDFs, Source Code and other presenter materials are available at: https://github.com/cppcon/cppcon2016

This talk starts very simple: with seconds. It explains exactly what a second is: what is under the hood, what it can do, and most importantly how and why it fails at compile time. Failing at compile time, instead of at run time is the critical design philosophy that runs through chrono.

Slowly the talk adds complexity: other units of time duration, custom time durations, conversions among durations, points in time, etc. With each addition, another layer of the chrono onion is peeled away.

By the end of the talk, you will understand both the importance of the simplicity of chrono, and the power available to you if you avoid the typical mistakes beginners make when first learning chrono. Even experts will find at least one undiscovered gem in this talk to take back to their own code. And you may see ways to transfer some of the chrono design principles into your own designs.

People who attend this talk will be especially well prepared for my later talk about time zones.

This talk is a prequel to my CppCon 2015 talk: https://www.youtube.com/watch?v=tzyGjOm8AKo

Howard Hinnant
Senior Software Engineer, Ripple
Lead author of several C++11 features including: move semantics, unique_ptr, and <chrono>.
Lead author on three open source projects:
A std::lib implementation: http://libcxx.llvm.org
An Itanium ABI implementation: http://libcxxabi.llvm.org
A date/time/timezone library: https://github.com/HowardHinnant/date

Videos Filmed & Edited by Bash Films: http://www.BashFilms.com

Comments (14):

By anonymous    2017-09-20

When using std::chrono::nanoseconds instead, it is cumbersome to specify, say, 10 minutes.

Actually, it doesn't make it cumbersome in the least:

#include <chrono>
#include <iostream>

void my_function(bool work_really_hard, std::chrono::nanoseconds timeout)
{
    std::cout << timeout.count() << '\n';
}

int
main()
{
    my_function(true, std::chrono::minutes(10));
}

Output:

600000000000

The only time you'll have trouble with nanoseconds is if you want to pass in something that won't exactly convert to nanoseconds, such as picoseconds, or duration<long, ratio<1, 3>> (1/3 second units).

Update

I intended this answer as additional information for an already accepted answer (by sehe) that I thought was good. sehe recommended a templated solution, which I also consider fine.

If you want to accept any std::chrono::duration, even one that you may have to truncate or round, then going with sehe's deleted answer is the way to go:

#include <chrono>
#include <thread>

template <typename Rep, typename Period>
void my_function(bool work_really_hard, std::chrono::duration<Rep, Period> timeout)
{
    // Do stuff, until timeout is reached.
    std::this_thread::sleep_for(timeout);
}

If for some reason you do not want to deal with templates and/or you are content with your clients having to specify only units that are exactly convertible to std::chrono::nanoseconds, then using std::chrono::nanoseconds as I show above is also completely acceptable.

All of the std::chrono "pre-defined" units:

hours
minutes
seconds
milliseconds
microseconds
nanoseconds

are implicitly convertible to nanoseconds, and will not involve any truncation or round off error. Overflow will not happen as long as you keep it within the two bright white lines (obscure reference to keeping your car in your own lane). As long as the duration is within +/- 292 years, you don't have to worry about overflow with these pre-defined units.
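To illustrate the point, here is a minimal sketch (not from the original answer) showing that these conversions compile without any cast:

#include <chrono>
#include <iostream>

int
main()
{
    using namespace std::chrono;
    // Every pre-defined unit coarser than nanoseconds assigns to
    // nanoseconds implicitly -- exact, no duration_cast needed.
    nanoseconds a = hours(1);         // 3600000000000 ns
    nanoseconds b = milliseconds(7);  //       7000000 ns
    std::cout << a.count() << '\n' << b.count() << '\n';
}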

The std-defined functions such as std::this_thread::sleep_for are templated as sehe suggests for exactly the reason of wanting to be interoperable with every chrono::duration imaginable (e.g. 1/3 of a floating-point femtosecond). It is up to the API designer to decide if they need that much flexibility in their own API.

If I've now managed to confuse you instead of clarify, don't worry too much. If you choose to use nanoseconds, things will either work exactly, with no truncation or round off error, or the client will get a compile time error. There will be no run time error.

void my_function(bool work_really_hard, std::chrono::nanoseconds timeout)
{
    std::cout << timeout.count() << '\n';
}

int
main()
{
    using namespace std;
    using namespace std::chrono;
    typedef duration<double, pico> picoseconds;
    my_function(true, picoseconds(100000.25));
}

test.cpp:15:9: error: no matching function for call to 'my_function'
        my_function(true, picoseconds(100000.25));
        ^~~~~~~~~~~
test.cpp:4:10: note: candidate function not viable: no known conversion from 'duration<double, ratio<[...], 1000000000000>>' to
      'duration<long long, ratio<[...], 1000000000>>' for 2nd argument
    void my_function(bool work_really_hard, std::chrono::nanoseconds timeout)
         ^
1 error generated.

And if the client gets a compile-time error, he can always use duration_cast to work around it:

my_function(true, duration_cast<nanoseconds>(picoseconds(100000.25)));  // Ok

For further details, please see:

http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2008/n2661.htm

Sweet update to the original code at the top of this answer. In C++1y, which we hope means C++14:

using namespace std::literals;
my_function(true, 10min);  // also ok, is equal to 600000000000 nanoseconds

Question: What would you recommend as a timeout of "infinity" (i.e. don't time out)

I would first try to use an API that didn't take a timeout, and which implied "doesn't time out." For example condition_variable::wait. If I had control over the API, I would create such a signature without a timeout.

Failing that, I would create a series of "large" chrono::durations:

typedef std::chrono::duration
<
    std::int32_t, std::ratio_multiply<std::chrono::hours::period, std::ratio<24>>
> days;

typedef std::chrono::duration
<
    std::int32_t, std::ratio_multiply<days::period, std::ratio<7>>
> weeks;

typedef std::chrono::duration
<
    std::int32_t, std::ratio_multiply<days::period, std::ratio<146097, 400>>
> years;

typedef std::chrono::duration
<
    std::int32_t, std::ratio_divide<years::period, std::ratio<12>>
> months;

And then I would use one of these large durations in my call, for example:

std::this_thread::sleep_for(years(3));

I would not try to push things to the maximum (you might break the underlying assumptions the OS has made, for example). Just arbitrarily pick something ridiculously large (like 3 years). That will catch your code reviewer's eye, and likely strike up an informative conversation. :-)

Now available in video: https://www.youtube.com/watch?v=P32hvk8b13M :-)

Original Thread

By anonymous    2017-09-20

This was done so that you get maximum flexibility along with compact size. If you need ultra-fine precision, you usually don't need a very large range. And if you need a very large range, you usually don't need very high precision.

For example, if you're trafficking in nanoseconds, do you regularly need to think about more than +/- 292 years? And if you need to think about a range greater than that, well microseconds gives you +/- 292 thousand years.

The macOS system_clock actually returns microseconds, not nanoseconds. So that clock can run for 292 thousand years from 1970 until it overflows.

The Windows system_clock has a precision of 100-ns units, and so has a range of +/- 29.2 thousand years.

If a couple hundred thousand years is still not enough, try out milliseconds. Now you're up to a range of +/- 292 million years.
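Those ranges are easy to verify for yourself. Here is a small sketch (not from the original answer; the 31,556,952-second year used for the conversion is the average Gregorian year) that prints each range in years:

#include <chrono>
#include <iostream>
#include <ratio>

int
main()
{
    using namespace std::chrono;
    // An average Gregorian year is 365.2425 days = 31'556'952 seconds.
    using years_d = duration<double, std::ratio<31556952>>;

    std::cout << years_d(nanoseconds::max()).count()  << " years\n";  // ~292
    std::cout << years_d(microseconds::max()).count() << " years\n";  // ~292 thousand
    std::cout << years_d(milliseconds::max()).count() << " years\n";  // ~292 million
}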

Finally, if you just have to have nanosecond precision out for more than a couple hundred years, <chrono> allows you to customize the storage too:

using dnano = duration<double, nano>;

This gives you nanoseconds stored as a double. If your platform supports a 128 bit integral type, you can use that too:

using big_nano = duration<__int128_t, nano>;

Heck, if you write overloaded operators for timespec, you can even use that for the storage (I don't recommend it though).

You can also achieve precisions finer than nanoseconds, but you'll sacrifice range in doing so. For example:

using picoseconds = duration<int64_t, pico>;

This has a range of only +/- 0.292 years (a few months), so you do have to be careful with that. It is great for timing things, though, if you have a source clock that gives you sub-nanosecond precision.

Check out this video for more information on <chrono>.

For creating, manipulating and storing dates with a range greater than the validity of the current Gregorian calendar, I've created this open-source date library which extends the <chrono> library with calendrical services. This library stores the year in a signed 16 bit integer, and so has a range of +/- 32K years. It can be used like this:

#include "date.h"

int
main()
{
    using namespace std::chrono;
    using namespace date;
    system_clock::time_point now = sys_days{may/30/2017} + 19h + 40min + 10s;
}

Update

In the comments below the question is asked how to "normalize" duration<int32_t, nano> into seconds and nanoseconds (and then add the seconds to a time_point).

First, I would be wary of stuffing nanoseconds into 32 bits. The range is just a little over +/- 2 seconds. But here's how I separate out units like this:

    using ns = duration<int32_t, nano>;
    auto n = ns::max();
    auto s = duration_cast<seconds>(n);
    n -= s;

Note that this only works if n is positive. To correctly handle negative n, the best thing to do is:

    auto n = ns::max();
    auto s = floor<seconds>(n);
    n -= s;

std::chrono::floor is introduced with C++17. If you want it earlier, you can grab it from here or here.

I'm partial to the subtraction operation above, as I just find it more readable. But this also works (if n is not negative):

    auto s = duration_cast<seconds>(n);
    n %= 1s;

The 1s is introduced in C++14. In C++11, you will have to use seconds{1} instead.

Once you have seconds (s), you can add that to your time_point.
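Putting those pieces together, a minimal sketch (not from the original answer) of the whole normalize-and-add step might look like this:

#include <chrono>
#include <cstdint>
#include <iostream>

int
main()
{
    using namespace std::chrono;
    using ns32 = duration<std::int32_t, std::nano>;

    ns32 n = ns32::max();                // some 32-bit nanosecond value (non-negative here)
    auto s = duration_cast<seconds>(n);  // whole seconds
    n -= s;                              // n keeps only the leftover sub-second part

    auto tp = system_clock::now();
    tp += s;                             // the whole seconds are added to the time_point

    std::cout << s.count() << "s + " << n.count() << "ns\n";
}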

Original Thread

By anonymous    2017-09-20

using namespace std::chrono;
meanWaiting = duration_cast<nanoseconds>(
              duration<double>{(maxWait * (maxDistance - distance) / maxDistance)});

The duration<double> turns your double, into seconds stored as a double. Next you cast those seconds to nanoseconds.

In C++17 you'll be able to replace duration_cast with another rounding mode if you want:

  • duration_cast: truncate towards zero.
  • floor: truncate towards negative infinity.
  • ceil: truncate towards positive infinity.
  • round: rounds towards nearest integral, to even on tie.

If it is important for your application, and you can't wait until C++17, open-source implementations of these are floating around and easy to find.
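To see all four modes side by side, here is a brief sketch (not from the original answer; it needs C++17, or the open-source floor/ceil/round mentioned above):

#include <chrono>
#include <iostream>

int
main()
{
    using namespace std::chrono;
    duration<double> d{-2.5};  // -2.5 seconds, floating-point based

    std::cout << duration_cast<seconds>(d).count() << '\n';  // -2 : toward zero
    std::cout << floor<seconds>(d).count() << '\n';          // -3 : toward -infinity
    std::cout << ceil<seconds>(d).count() << '\n';           // -2 : toward +infinity
    std::cout << round<seconds>(d).count() << '\n';          // -2 : nearest, tie to even
}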

In general, there are two cases when you need to use duration_cast, or some other explicit rounding mode:

  1. When you assign or copy from a fine duration to a coarse duration (e.g. nanoseconds to seconds), and

  2. When you assign or copy from a floating-point-based duration to an integral-based duration.

Both of the above conversions involve truncation error. And so <chrono> requires you to explicitly acknowledge that you want the truncation by using duration_cast.

For conversions that don't involve truncation error (e.g. integral seconds to integral nanoseconds, or integral duration to any floating point duration), you can use implicit conversion syntax.
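A short sketch (not from the original answer) contrasting the two situations:

#include <chrono>
#include <iostream>

int
main()
{
    using namespace std::chrono;

    nanoseconds a = seconds(3);              // implicit: coarse to fine, exact
    duration<double> b = milliseconds(250);  // implicit: floating-point destination

    // seconds c = milliseconds(250);        // error: would truncate, does not compile
    seconds c = duration_cast<seconds>(milliseconds(250));  // truncation acknowledged

    std::cout << a.count() << ' ' << b.count() << ' ' << c.count() << '\n';
}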

In the modified answer where meanWaiting is seconds instead of nanoseconds, the duration_cast is still required because you will be truncating the fractional part of the double-based duration:

meanWaiting = duration_cast<seconds>(
              duration<double>{(maxWait * (maxDistance - distance) / maxDistance)});

For a complete video tutorial on <chrono>, please see my CppCon 2016 talk: https://www.youtube.com/watch?v=P32hvk8b13M

Original Thread

By anonymous    2017-09-23

In that case, learn `<chrono>`, part of the std::lib. Here is a video tutorial to help you with that: https://www.youtube.com/watch?v=P32hvk8b13M

Original Thread

By anonymous    2017-10-08

high_resolution_clock is typically just a typedef of steady_clock. Check out [this video](https://www.youtube.com/watch?v=P32hvk8b13M&t=1s) for more information on why you should use steady_clock over high_resolution_clock, as well as more good information on chrono.

Original Thread

By anonymous    2017-10-08

timer::time_point elapsed_time; // this is my issue

Just from the name elapsed_time, this doesn't sound like a time_point. It sounds like a duration. Do this to fix that problem:

timer::duration elapsed_time;

This looks suspicious:

float waitTime = float(temp.ncycle)/1000;

Typically a time duration should have type std::chrono::duration<some representation, some period>. And you don't want to apply conversion factors like 1/1000 manually. Let <chrono> handle all conversions.

elapsed_time = timer::duration_cast<chrono::milliseconds>(clock_check - clock_wait);

duration_cast is not a static member function of system_clock. duration_cast is a namespace scope function. Use it like this:

elapsed_time = chrono::duration_cast<chrono::milliseconds>(clock_check - clock_wait);

If waitTime had a duration type, the .count() would be unnecessary here:

if (elapsed_time.count() > waitTime)

// Below is the line that is giving me trouble now. I get an error when casting.
// I don't know how to make duration_cast part of the timer declared in meta.h
float EndTime = float(timer::duration_cast <chrono::milliseconds>(end_time - startTime).count());

Best practice is to stay within the <chrono> type system instead of escaping to scalars such as float. You can get integral milliseconds with this:

auto EndTime = chrono::duration_cast<chrono::milliseconds>(end_time - startTime);

If you really want EndTime to be float-based milliseconds, that is easy too:

using fmilliseconds = chrono::duration<float, std::milli>;
fmilliseconds EndTime = end_time - startTime;

For more details, here is a video tutorial for the <chrono> library: https://www.youtube.com/watch?v=P32hvk8b13M


If this answer doesn't address your question, distill your problem down into a complete minimal program that others can copy/paste into their compiler and try out. For example I could not give you concrete advice on waitTime because I have no idea what temp.ncycle is.


Finally, and this is optional, if you would like an easier way to stream out durations for debugging purposes, consider using my free, open source, header-only date/time library. It can be used like this:

#include "date/date.h"
#include <iostream>
#include <thread>

using timer = std::chrono::system_clock;
timer::time_point clock_wait;
timer::time_point clock_check;
timer::duration elapsed_time;

int
main()
{
    using namespace std::chrono_literals;
    clock_wait = timer::now();
    std::this_thread::sleep_for(25ms); // simulate work
    clock_check = timer::now();
    elapsed_time = clock_check - clock_wait;
    using date::operator<<;  // Needed to find the correct operator<<
    std::cout << elapsed_time << '\n';  // then just stream it
}

which just output for me:

25729µs

The compile-time units of the duration are automatically appended to the run-time value to make it easier to see what you have. This prevents you from accidentally appending the wrong units to your output.

Original Thread

By anonymous    2017-11-06

You have identified the issue correctly. You can use:

return time_point_cast<steady_clock::duration>(ref + NTSC_FPS_t(frames));

which will truncate towards zero to steady_clock::duration precision (nanoseconds).

In C++17, you will have other rounding modes:

  • floor
  • ceil
  • round

You're welcome to use them from here if you want them prior to C++17: https://github.com/HowardHinnant/date/blob/master/include/date/date.h

If it helps, here is a video tutorial for <chrono>: https://www.youtube.com/watch?v=P32hvk8b13M

You could also use "date.h" to explore the units that do result from steady_clock::time_point + NTSC_FPS_t like this:

#include "date/date.h"
#include <iostream>

typedef std::chrono::duration<int, std::ratio<1001,30000>> NTSC_FPS_t;

int
main()
{
    using date::operator<<;
    auto tp = std::chrono::steady_clock::now() + NTSC_FPS_t{1};
    std::cout << tp.time_since_epoch() << '\n';
}

For me this just output:

4680675375035054[1/3000000000]s

Indicating that the sum of nanoseconds and NTSC_FPS_t has units of 1/3 of a nanosecond.
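That common unit can also be checked at compile time. A small sketch (not from the original answer; it assumes steady_clock::duration is nanoseconds, as it is on the major implementations):

#include <chrono>
#include <ratio>
#include <type_traits>

typedef std::chrono::duration<int, std::ratio<1001, 30000>> NTSC_FPS_t;

// The sum of two durations uses the coarsest tick that evenly divides both
// tick periods: lcm(1'000'000'000, 30'000) = 3'000'000'000, i.e. 1/3 of a nanosecond.
using sum_t = std::common_type<std::chrono::nanoseconds, NTSC_FPS_t>::type;
static_assert(std::ratio_equal<sum_t::period, std::ratio<1, 3000000000>>::value,
              "common tick is 1/3 of a nanosecond");

int
main()
{
}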

Original Thread

By anonymous    2017-11-27

std::chrono::microseconds timeout{myUsInterval};

Or if you really want the intmax_t rep (which is not needed in this example):

std::chrono::duration<std::intmax_t, std::micro> timeout{myUsInterval};

For a video tutorial of <chrono>, see: CppCon 2016: Howard Hinnant “A <chrono> Tutorial” on YouTube

Original Thread

By anonymous    2018-03-05

Here's another reference: https://www.youtube.com/watch?v=P32hvk8b13M

Original Thread

By anonymous    2018-03-26

These are `std::chrono` concepts. Perhaps this video tutorial about `chrono` would help? https://www.youtube.com/watch?v=P32hvk8b13M

Original Thread

By anonymous    2018-08-01

This line:

std::chrono::duration<unsigned int,std::ratio<1,1000>> today_day (ms.count());

is overflowing. The number of milliseconds since 1970 is on the order of 1.5 trillion. But unsigned int (on your platform) overflows at about 4 billion.

Also, depending on your platform, this line:

std::chrono::duration<system_clock::duration::rep,system_clock::duration::period> same_day(ns.count());

may introduce a conversion error. If you are using gcc, system_clock::duration is nanoseconds, and there will be no error.

However, if you're using llvm's libc++, system_clock::duration is microseconds and you will be silently multiplying your duration by 1000.

And if you are using Visual Studio, system_clock::duration is 100 nanoseconds and you will be silently multiplying your duration by 100.
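One way to sidestep both problems (a sketch, not from the original answer) is to stay inside the chrono type system and never feed a raw .count() into a duration of a different type:

#include <chrono>
#include <iostream>

int
main()
{
    using namespace std::chrono;

    // Milliseconds since 1970 as a real duration: roughly 1.5 trillion,
    // which would overflow a 32-bit unsigned rep.
    auto ms = duration_cast<milliseconds>(system_clock::now().time_since_epoch());

    // Back to the clock's native duration: the tick ratio is applied for you,
    // whether that native tick is 1ns, 100ns, or 1us.
    system_clock::duration native = ms;

    std::cout << ms.count() << " ms since epoch\n";
    std::cout << native.count() << " native ticks\n";
}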

Here is a video tutorial for <chrono> which may help: https://www.youtube.com/watch?v=P32hvk8b13M. It contains warnings about the use of .count() and .time_since_epoch().

Original Thread
