Daniel Elliott

Pacific++ 2017 trip report – Part 1

I’m writing this after attending my first ever C++ conference in person. I’ve watched countless hours of CppCon, C++Now and Meeting C++ talks on YouTube, so I was very excited when a C++ conference was announced in the country I live in, New Zealand. The speaker line-up was top calibre too: Chandler Carruth and Jason Turner have both delivered excellent talks at multiple conferences, so seeing them live was pretty cool.

I was able to say hello to Jason Turner in the morning as we were registering and collecting our badges. It was very nice to briefly meet Jason, given how much he contributes to the C++ community and that he co-hosts the C++ podcast CppCast. (More on his talk in part 2 of this trip report!)

I also met a developer from the New Zealand auction website Trade Me (NZ’s version of eBay) and someone who works for an air navigation service. It was interesting to hear what other industries are doing.

I had travelled with two of my colleagues from Weta Digital who are actual ‘proper’ developers. I am a Visual Effects Artist and C++ is my hobby (an extreme hobby, if you will), although I have started using C++ in my job now and very much enjoy it. It was a great chance to find out more about my colleagues’ roles within the company and how they use C++. I could write a whole other post on C++ use within the VFX industry.

The first day was a full day of five talks. Here is an overview…

 

Chandler Carruth

Google

First up was Chandler Carruth with his talk on LLVM and C++ toolchains. He started with the compilers and toolchains out in the wild that come with various distributions of Linux and how far behind (or ahead) they are, because this affects which C++ tools and language features are available on those default platforms. I wanted to let Chandler know about the VFX industry I work in, which has the VFX Reference Platform: a set of recommended compiler and library versions that companies should adhere to for a higher chance of compatibility. There is also a Google group if anyone wants to try to convince the industry to bump versions quicker (next year we are going UP to GCC 6.3, woop! No word on Clang yet, although it is used where possible to compile with binary compatibility).

He then gave many compelling reasons to use the latest versions of the LLVM tools and the Clang compiler. First we were shown that it is relatively easy to download the repository and compile it yourself (what wasn’t shown was how difficult it can be to get PATHs, dynamic library paths and include paths to play nicely when you have multiple compilers and libraries installed, but that was outside the scope of the talk).

clang-format was also given a mention (I’ve seen talks on this and even use the default options that come with Qt Creator, but I want to explore it further, including creating my own .clang-format file).

We were shown live demos of the sanitizer tools, which can analyse your C++ code for potential errors at run time. There was AddressSanitizer, which can catch access violations like reading past the end of an array; ThreadSanitizer, which can catch data races (ones that might not even seem like a problem because your code might compile and run fine 99.9 percent of the time); and MemorySanitizer, which can warn about uses of uninitialized memory.
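As a concrete example (mine, not one from the talk), this is the kind of off-by-one bug AddressSanitizer catches; building with Clang’s -fsanitize=address flag is all it takes:

```cpp
// Build roughly like: clang++ -g -fsanitize=address overflow.cpp && ./a.out
int main() {
    int* a = new int[3]{1, 2, 3};
    // Off-by-one read past the end of the heap allocation: ASan aborts with a
    // heap-buffer-overflow report pointing at this exact line.
    int x = a[3];
    delete[] a;
    return x;
}
```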

Another thing shown was ThinLTO and LLVM’s own linker, lld, which can optimize the whole program across translation units (I wasn’t sure how that is different from the standard -flto flag). Examples were shown using Google Benchmark (which I now use myself thanks to Chandler’s previous talk from CppCon a couple of years ago).
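For anyone who hasn’t tried Google Benchmark, this is roughly the shape of a micro-benchmark with it (my own habit after Chandler’s earlier talk, not something from this session; you need to link against the benchmark library):

```cpp
#include <benchmark/benchmark.h>
#include <vector>

static void BM_VectorPushBack(benchmark::State& state) {
    for (auto _ : state) {
        std::vector<int> v;
        for (int i = 0; i < state.range(0); ++i)
            v.push_back(i);
        // Stop the optimizer from deleting the work we are trying to measure.
        benchmark::DoNotOptimize(v.data());
        benchmark::ClobberMemory();
    }
}
BENCHMARK(BM_VectorPushBack)->Range(8, 8 << 10);

BENCHMARK_MAIN();
```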

Overall this showed that there are a lot of people out there working to make working with C++ easier. I wish more companies and industries were aware of what newer and better compilers and ecosystems can bring to people’s lives (even if it is a bit riskier to bump compiler versions at a large company).

Toby Allsopp

WhereScape Software – New Zealand

Toby gave pretty deep coverage of the Coroutines TS and the machinery needed to get the best use out of it. Having seen some simple examples of coroutine use in C# and Unity3D, I wasn’t aware of how coroutines in C++ were different. I can see now after this talk that there is a lot of work the programmer has to do to get full use out of coroutines.

Basically they boil down to three new keywords (co_await, co_yield and co_return) that give you hooks into how functions are suspended and resumed and what happens when they are. I didn’t fully understand everything, but I get that there are going to have to be libraries and community education on best practices, patterns and idioms for coroutines.
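To make that a little more concrete, here is a minimal hand-rolled generator sketch showing where those hooks live. I’ve written it with the C++20 spellings (the TS version of this machinery lives under std::experimental), and a real library type would need rather more care than this:

```cpp
#include <coroutine>
#include <iostream>

// The promise_type is where you customise suspension/resumption behaviour.
struct Generator {
    struct promise_type {
        int current_value{};
        Generator get_return_object() {
            return Generator{std::coroutine_handle<promise_type>::from_promise(*this)};
        }
        std::suspend_always initial_suspend() noexcept { return {}; }
        std::suspend_always final_suspend() noexcept { return {}; }
        std::suspend_always yield_value(int v) noexcept { current_value = v; return {}; }
        void return_void() noexcept {}
        void unhandled_exception() { std::terminate(); }
    };

    std::coroutine_handle<promise_type> handle;
    explicit Generator(std::coroutine_handle<promise_type> h) : handle(h) {}
    Generator(const Generator&) = delete;
    Generator& operator=(const Generator&) = delete;
    ~Generator() { if (handle) handle.destroy(); }

    bool next() { handle.resume(); return !handle.done(); }
    int value() const { return handle.promise().current_value; }
};

// The coroutine itself: co_yield suspends the function and hands back a value.
Generator counter(int limit) {
    for (int i = 0; i < limit; ++i)
        co_yield i;
}

int main() {
    auto gen = counter(3);
    while (gen.next())
        std::cout << gen.value() << '\n';  // prints 0, 1, 2
}
```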

Asynchronous generators seem to be the likeliest use for them.

Overall, this was a very in-depth talk on the machinery behind what coroutines give you, how you can hook into what is happening, and how you can write your own code to control what happens behind the scenes.

Will have to rewatch the video when it is posted.

 

Matt Bentley

plf library

I’m a bit of a lurker on the SG14 (the high performance and low latency study group for the C++ standard) Google group and was familiar with Matt from there and from his appearances on CppCast and CppCon, as well as him being a fellow New Zealander.

Matt presented details of his latest container, plf::list. It’s an almost drop-in replacement for std::list, but with better performance. It’s closely related to Matt’s other container, plf::colony, which is like a vector broken into chunks, allowing much faster insertion and erasure as well as a novel way to iterate the container using a cool thing called a jump-counting skip field. (The jump-counting skip field is similar to something used in the SDK of the VFX software Houdini: 3D particles are stored in contiguous memory chunks, deleted particles just mark parts of the memory as empty, and a ‘block advance’ iterator is provided to iterate and skip over the empty particles. Very similar to colony, but I don’t think they use a jump-counting skip field, so that’s something they could benefit from.)

plf::list has a similar design goal: better cache performance on today’s modern CPUs.

I’m a fan of Matt’s containers and his approach to making things faster. I like Matt’s talks and how he sometimes gets philosophical about why we are doing things.

Apparently std::list is still used in some industries, and Matt was spurred on by someone who had contacted him and asked if it was possible to do the same thing he had done for colony, but for linked lists.

One of the reasons I like the plf approach is that the containers contain cool tricks and machinery to get things done. For instance, to sort the list, Matt uses something called the “pointer-array sort hack”, which puts pointers to all of the nodes into an array (at a memory cost of N pointers for N elements in the list), sorts those pointers by the values of the elements they point to using a functor, and then walks the sorted array updating the links in the main list. This is faster partly because filling the pointer array is quick: the pointers are laid out in sequential, contiguous memory, which again is great for the cache.
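The idea is easy to sketch. This is just an illustration of the concept using std::list and splice() to do the relinking, not Matt’s actual plf::list implementation:

```cpp
#include <algorithm>
#include <iostream>
#include <list>
#include <vector>

// Gather a handle to every node into a contiguous array, sort the array by
// the values the handles point at, then relink the nodes in that order.
template <typename T>
void pointer_array_sort(std::list<T>& lst) {
    std::vector<typename std::list<T>::iterator> handles;
    handles.reserve(lst.size());                       // N extra "pointers"
    for (auto it = lst.begin(); it != lst.end(); ++it)
        handles.push_back(it);

    // Sorting a flat array of handles is far more cache-friendly than
    // merge-sorting the scattered nodes themselves.
    std::sort(handles.begin(), handles.end(),
              [](auto a, auto b) { return *a < *b; });

    // Relink: move each node, in sorted order, to the back of the list.
    for (auto it : handles)
        lst.splice(lst.end(), lst, it);
}

int main() {
    std::list<int> l{3, 1, 2};
    pointer_array_sort(l);
    for (int x : l) std::cout << x << ' ';             // prints: 1 2 3
}
```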

I think if I start a new project soon I will try out plf::colony and plf::list (if I need a list!).

 

Dean Michael Berris

Google

LLVM XRay

XRay is a tool that comes with the LLVM suite which patches your binary so you can record when functions are called and how long they take. This gives you a lot of information about what your program is actually doing. Dean showed how they do it with various assembly tricks, which were themselves quite interesting. Obviously, once the data is generated there needs to be some way to inspect it in a human-readable form. There are command line tools that can give you readouts from the data, and there are also open source graphical tools (one called a flame graph, I believe) which show the time spent in functions and the call stack as a bunch of horizontal and vertical bars: a wide bar is a long function, and a tall stack of bars means you have very deep calls in your program (i.e. one function calls another, which calls another, which calls… etc.). Overall a very good presentation and something I will add to my list to try in my own programs.
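From my reading of the LLVM XRay docs (I haven’t actually tried this yet, so treat it as a sketch), getting started looks roughly like this:

```cpp
// Build with instrumentation, roughly: clang++ -fxray-instrument demo.cpp
// Running with XRay logging enabled then produces a trace that the llvm-xray
// command line tool can turn into per-function timings.
#include <cstdio>

// Small functions are normally skipped by the instrumentation threshold; this
// Clang attribute forces entry/exit patch points to be inserted here anyway.
[[clang::xray_always_instrument]] void busy_work() {
    volatile long sum = 0;
    for (long i = 0; i < 1'000'000; ++i)
        sum += i;
    std::printf("sum = %ld\n", static_cast<long>(sum));
}

int main() {
    busy_work();
}
```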

Christian Blume

Serato

I really enjoyed this talk as, the night before, I had watched a talk by Hartmut Kaiser on the HPX library, the parallel programming style offered by libraries like that (and stlab), and the language support in the upcoming Concurrency TS.

Christian talked about his library Transwarp, which offers its own take on the extensions to std::future. For those who don’t know what a future is, it is basically an object representing a value that isn’t there yet. You can create one by launching an asynchronous task that runs a function on another thread. You get your std::future back immediately, but the other thread might not have finished computing the value yet, so the thread that invoked the async task is free to go on and do other work. At some later time you can call the future’s .get() method to retrieve the value (hoping that it is ready by that point).
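That basic workflow looks like this (a minimal sketch with plain std::async, nothing Transwarp-specific):

```cpp
#include <future>
#include <iostream>

int main() {
    // Launch work on another thread; the future comes back immediately.
    std::future<int> answer = std::async(std::launch::async, [] {
        return 6 * 7;                      // pretend this is expensive
    });

    // ... this thread is free to do other work here ...

    std::cout << answer.get() << '\n';     // blocks until the value is ready
}
```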

The main issue is that the current std::future in C++11/14/17 blocks when you call .get(), which is obviously bad for performance. The solution is to add a .then() member function to futures, which lets you chain them so that when one completes it automatically calls the next one, passing along its result. You can chain up multiple futures and also build a complex graph of futures with the when_all() and when_any() functions, which return another future that automatically runs when all, or any one, of its child futures has finished. The result is that code which looks like normal single-threaded code becomes parallelized. It is a very powerful, expressive technique and I think it will vastly improve the way people write multithreaded code.
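Here is a sketch of that continuation style written with the Concurrency TS spellings (std::experimental::future). Most standard libraries don’t actually ship &lt;experimental/future&gt; yet, so treat this as illustrating the shape of the API rather than something you can compile everywhere today; Transwarp and similar libraries provide their own equivalents:

```cpp
#include <experimental/future>
#include <iostream>

namespace ex = std::experimental;

int main() {
    ex::future<int> a = ex::make_ready_future(40);
    ex::future<int> b = ex::make_ready_future(2);

    // when_all yields a future that becomes ready once both inputs are ready;
    // .then() attaches the next step instead of blocking on .get().
    auto sum = ex::when_all(std::move(a), std::move(b))
                   .then([](auto ready) {
                       auto [fa, fb] = ready.get();   // both futures are ready here
                       return fa.get() + fb.get();
                   });

    std::cout << sum.get() << '\n';   // prints 42
}
```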

Transwarp has slightly different syntax and design from the other libraries and the Concurrency TS. Apparently it also has some extra advantages, like the ability to build these task graphs out of futures and then invoke and reschedule the graphs multiple times. I’ll need to explore that more as I don’t yet see the power.

The library also brings executors, which give you more control over the scheduling of the tasks. Again, I’ll need to read up more on them.

Overall, it seemed like something Christian had put a lot of work into, and the GitHub repo has a nice README with a comparison of Transwarp to the other libraries.

End of Day 1

 

Well, by the end I was extremely tired, so I headed back to my hotel room and watched some CppCon videos… zzzzzzzzzz
