Collecting related thoughts from #210.
Especially with PyTorch: it certainly feels like there should be an integration, since PyTorch already ships torch.distributed and torch.multiprocessing.
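As a very rough sketch (not an existing integration), a Dask cluster could be used to bootstrap a `torch.distributed` process group, one rank per Dask worker. The scheduler address, port, backend choice, and the toy all-reduce below are all assumptions, just to show the shape of the idea:

```python
# Hedged sketch: use Dask to launch a torch.distributed process group,
# one rank per Dask worker. Addresses/ports are illustrative assumptions.
import torch
import torch.distributed as dist
from dask.distributed import Client


def init_torch(rank, world_size, master_addr, master_port=29500):
    # Each Dask worker joins the same process group over TCP.
    dist.init_process_group(
        backend="gloo",
        init_method=f"tcp://{master_addr}:{master_port}",
        rank=rank,
        world_size=world_size,
    )
    # Toy all-reduce to confirm the group works; real training would go here.
    t = torch.ones(1)
    dist.all_reduce(t, op=dist.ReduceOp.SUM)
    return t.item()


if __name__ == "__main__":
    client = Client("tcp://scheduler-address:8786")  # assumed scheduler address
    workers = list(client.scheduler_info()["workers"])
    world_size = len(workers)
    # Rank 0 hosts the rendezvous; extract its hostname from the worker address.
    master_addr = workers[0].split("//")[1].split(":")[0]
    futures = [
        client.submit(init_torch, rank, world_size, master_addr,
                      workers=[w], pure=False)
        for rank, w in enumerate(workers)
    ]
    print(client.gather(futures))  # each rank should report world_size
```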
Optimization:
- I see these algorithms (e.g., Hogwild) as exploiting Dask's distributed architecture
- These will require a parameter server. Can we make this general and integrate with (for example) CuPy/Chainer and PyTorch? (See the sketch after this list.)
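A hedged sketch of what a framework-agnostic parameter server on Dask might look like: built on Dask actors, with parameters passed around as NumPy arrays so that PyTorch, Chainer, or CuPy workers could all participate. The `ParameterServer` class, method names, and the least-squares gradient are made up for illustration:

```python
# Sketch of a framework-agnostic parameter server using Dask actors.
# Parameters travel as NumPy arrays; any framework can do the gradient math.
import numpy as np
from dask.distributed import Client


class ParameterServer:
    """Holds the shared parameters; workers push (possibly stale) gradients."""

    def __init__(self, dim, lr=0.1):
        self.params = np.zeros(dim)
        self.lr = lr

    def pull(self):
        return self.params

    def push(self, grad):
        # Hogwild-style asynchrony: apply updates as they arrive, no ordering.
        self.params -= self.lr * grad
        return self.params


def worker_step(ps, data, target):
    # Plain NumPy least-squares gradient stands in for any framework's autograd.
    w = ps.pull().result()
    grad = 2 * data.T @ (data @ w - target) / len(target)
    ps.push(grad).result()


if __name__ == "__main__":
    client = Client()  # local cluster for illustration
    ps = client.submit(ParameterServer, 5, actor=True).result()

    rng = np.random.default_rng(0)
    X, y = rng.normal(size=(100, 5)), rng.normal(size=100)
    futures = [client.submit(worker_step, ps, X, y, pure=False) for _ in range(20)]
    client.gather(futures)
    print(ps.pull().result())
```

Because the actor only ever sees array-like gradients, swapping the NumPy worker for a PyTorch or CuPy one would mostly mean converting tensors at the push/pull boundary.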
@TomAugspurger in #210 (comment)
Dask-TensorFlow:
- Review the new datasets API; is there anything we should do there? (A sketch of feeding Dask chunks into it follows below.)
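Purely illustrative: one possible way to hand Dask-array chunks to the tf.data Dataset API is via `Dataset.from_generator`. The chunk sizes, dtype, and use of `output_signature` below are assumptions, not a proposal for how dask-tensorflow should do it:

```python
# Sketch: stream materialized Dask array blocks into a tf.data pipeline.
import dask.array as da
import tensorflow as tf


def dask_chunks(arr):
    # Yield each materialized block of the Dask array in order.
    for block in arr.to_delayed().ravel():
        yield block.compute()


x = da.random.random((1_000, 10), chunks=(100, 10))

ds = tf.data.Dataset.from_generator(
    lambda: dask_chunks(x),
    output_signature=tf.TensorSpec(shape=(None, 10), dtype=tf.float64),
)

for batch in ds.take(2):
    print(batch.shape)  # each element is one 100x10 chunk
```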