[WIP] Introducing execution- and launch policies #300
base: development
Conversation
Related to launch policies: Task concept specification (see #193)
Thanks for getting the ball rolling. Questions inline.
```cpp
/**
 * Parallel non-sequential execution policy.
 */
class parallel_unsequenced_policy { };
```
What is the difference in semantics between `parallel_policy` and `parallel_unsequenced_policy`?
This is well described on cppreference.
```cpp
    InputIt           in_last,
    OutputIt          out_first);

dash::Future<ValueType *> copy(
    ExecutionPolicy && policy,
```
The execution policy is ignored atm, right?
Yes, I just pinned the branch as a PR to simplify discussion. It's work in progress (= [WIP] in the title).
Thanks. It perfectly matches the C++17 specs.
I think I got the required concepts figured out.

Execution Policies

In C++17, execution policies are introduced in algorithm function interfaces and passed as the first argument:

```cpp
template <class ExecutionPolicy, class InputIt, class UnaryFunction2>
void for_each(ExecutionPolicy&& policy, InputIt first, InputIt last,
              UnaryFunction2 f);
```

It is therefore trivial to specialize / overload algorithm interfaces for execution policies. The actual effort is in rather formal aspects. Which is nice, because science. In principle, this boils down to additional PGAS-specific dimensions for execution policies.
... you get the idea. Policies can be arbitrarily combined with

Launch Policies

STL concepts do not cover our requirements regarding launch policies:

```cpp
template <class Function, class... Args>
std::future<typename std::result_of<Function(Args...)>::type>
async(std::launch policy, Function&& fun, Args&&... args);
```

Our algorithm variants depend on launch policies, however. For example, asynchronous copying. I sketched out some alternatives, like:
So far, my initial approach based on phantom and container-proxy types looks like the best option to me. Instead of namespaces or function template specializations, algorithm variants are realized as overloads on iterator traits. An iterator for async operations would have its own distinct type:

```cpp
dash::Array<X> array(size);

// Default:
auto copy_end = dash::copy(array.begin(), array.end(), dest);
// -> dash::Array<X>::iterator

// Async:
auto copy_end = dash::copy(array.async.begin(), array.async.end(), dest);
// current implementation:
// -> dash::GlobAsyncIter<T>
// will be:
// -> dash::Future<dash::Array<X>::iterator>

// Lazy:
auto copy_end = dash::copy(array.deferred.begin(), array.deferred.end(), dest);
// -> dash::Array<X>::iterator
```

This is very close to the STL specs and there is no way to get it wrong for developers. But also, it is important to note that we now can actually define:

```cpp
// Developer writes:
auto fut_copy_end = dash::async(
    dash::launch::async,
    dash::copy<Foo>,
    array.begin(),
    array.end());

// --> resolves to:
auto fut_copy_end = dash::async(
    dash::launch::async,
    dash::copy<Foo>,
    array.begin().async,   // same as array.async.begin()
    array.end().async);    // same as array.async.end()
```

This is semantically and conceptually robust:
So long story short: there is a chance we can have the cake and eat it, too. In the current implementation in branch

Looking forward to your thoughts!
Thanks for this brain dump! I think I get the idea.

On Execution Policies

Keep in mind that DART will eventually have its own thread-pool, independent of OpenMP. OpenMP is just too restrictive to use as a general-purpose task scheduler (unfortunately). Executing tasks in DART and OpenMP in parallel likely leads to contention and performance degradation.

On Launch Policies

Now, how is this implemented for other DASH algorithms? Do we provide overrides for them as well? For example, combining

My idea of tasking in DASH currently is that calling

There are two concepts that I would like to raise awareness of:

We should definitely discuss this in Garching. I think we are arriving at a state where the tasking in DART is ready to be made available (though maybe not in its full feature-set). I've been holding it back until after the (ominous) next release so far.
Of course! I'm just using OpenMP as a reference; we want light-weight tasks, of course.
That's not a contradiction: the conversion to

But on the other hand, how would you replace the current

I assume that eventually, the current async interface of DART will be wrapped in a proper task interface, so the C++ layer could prefer tasks to abstract async operations. Still, we must be able to support these "async-specialized implementation" cases like

We need both, and I think / hope the proposed interface could do.
Not at all, it does not make sense to implement
Absolutely, I just wanted to make sure we are on the same page here. I will present my current status in Garching, and based on that we can discuss the details of the tasking abstraction in its full beauty :)
Ah, I got a good feeling about this. Ehhmm, when exactly is the SPPEXA name-dancing in Garching again?
Ah, here we go: http://www.sppexa.de/sppexa-activities/annual-plenary-meeting/2017.html

That's a good time frame. Did you happen to review HPX and Kokkos (also: Alpaka) yet?
I had a look at Kokkos (see my comments in #193) and will attend a workshop on HPX here at HLRS next week. I haven't looked at Alpaka yet. Could that be relevant as well?
Yes, Alpaka is relevant as well. I'm just here at the Alpaka kickoff meeting, will report back tomorrow.
@fuerlinger Gotcha. If you can provide us with some material, I'll be happy to look at it before the Garching meeting.
Can we get this into 0.3.0? If not, I would start working on a quick fix for
@devreal I can get this into 0.3.0 this week.
@fuchsto May I hijack this PR to add the few lines needed for Alpaka's
Addresses issues #272 #104 #216