1. 30 Jun, 2020 1 commit
    • Chi Song's avatar
      Reuse OpenPAI jobs to run multiple trials (#2521) · 0b9d6ce6
      Chi Song authored
      Designed a new interface to support reusable training services; it currently applies only to OpenPAI and is disabled by default.
      
      Replace trial_keeper.py with trial_runner.py. A trial runner holds an environment, receives commands from the nni manager to run or stop a trial, and returns events to the nni manager.
      Add a trial dispatcher, which inherits from the original training service interface. It shares as much code as possible across all training services while staying isolated from each of them.
      Add an EnvironmentService interface to manage environments, including starting/stopping an environment and refreshing environment status.
      Add a command channel on both the nni manager and trial runner sides; it supports different ways to pass messages between them. Currently supported channels are file and web sockets. Supported commands from the nni manager are: start, kill trial, and send new parameters. Supported events from the runner are: initialized (because some channels don't know which runner connected), trial end, stdout (new type, carrying metrics as before), version check (new type), and gpu info (new type).
      Add a storage service that wraps a storage backend (NFS, Azure storage, and so on) behind standard file operations.
      Partially support running multiple trials in parallel on the runner side; this is not yet supported on the trial dispatcher side.
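The command set above can be sketched as a typed message protocol. This is an illustrative sketch, not NNI's actual wire format: the type names, fields, and JSON envelope are assumptions; only the command and event names come from the commit message.

```typescript
// Commands the nni manager sends to a trial runner (names from the commit
// message; the field shapes are assumptions for illustration).
type ManagerCommand =
    | { type: "start"; trialId: string; parameters: string }
    | { type: "kill_trial"; trialId: string }
    | { type: "send_parameters"; trialId: string; parameters: string };

// Events a trial runner sends back to the nni manager.
type RunnerEvent =
    | { type: "initialized"; runnerId: string }   // some channels don't know which runner connected
    | { type: "trial_end"; trialId: string; exitCode: number }
    | { type: "stdout"; trialId: string; line: string }   // may carry metrics, as before
    | { type: "version_check"; version: string }
    | { type: "gpu_info"; memTotal: number; memFree: number; memUsed: number; gpuType: string };

// A channel implementation (file polling or web socket) only has to move these
// serialized envelopes between the two sides.
function encode(msg: ManagerCommand | RunnerEvent): string {
    return JSON.stringify(msg);
}

function decodeEvent(raw: string): RunnerEvent {
    return JSON.parse(raw) as RunnerEvent;
}
```

Keeping the transport behind `encode`/`decodeEvent` is what lets file and web-socket channels be swapped without touching the command handling.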
      Other minor changes:
      
      Add log_level to the training service unit tests, so that they can show debug-level logs.
      Expose the platform in the start info.
      Add RouterTrainingService to keep the original OpenPAI training service and support dynamic IoC binding.
      Add more GPU info for future use, including total/free/used GPU memory and the GPU type.
      Make some license information consistent.
      Fix async/await problems with Array.forEach; that method doesn't actually await async callbacks.
      Fix integration test errors on data download, caused by #2484.
      Accelerate some run-loop patterns by reducing sleep intervals.
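The Array.forEach fix above addresses a classic pitfall: forEach discards the promises returned by an async callback, so awaits inside it never block the caller. A minimal illustration (the function names are hypothetical) of the bug and one possible fix:

```typescript
// Buggy pattern: forEach ignores the promise each async callback returns,
// so the function returns before any callback resumes after its await.
async function brokenSum(items: number[]): Promise<number> {
    let total = 0;
    items.forEach(async (n) => {
        await Promise.resolve();   // stands in for any real async work
        total += n;
    });
    return total;                  // still 0: callbacks haven't run past the await
}

// Fixed pattern: for...of awaits each step in sequence.
// (Promise.all over .map is the concurrent alternative.)
async function fixedSum(items: number[]): Promise<number> {
    let total = 0;
    for (const n of items) {
        await Promise.resolve();
        total += n;
    }
    return total;
}
```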
      0b9d6ce6
  2. 22 Jun, 2020 1 commit
  3. 25 Nov, 2019 1 commit
  4. 22 Apr, 2019 1 commit
  5. 19 Apr, 2019 1 commit
  6. 15 Mar, 2019 1 commit
    • SparkSnail's avatar
      Support version check of nni (#807) · d0b22fc7
      SparkSnail authored
      Check the nni version in trial_keeper, to make sure the version of trial_keeper is consistent with the training service.
      Add a debug mode in the config file.
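The consistency check could look roughly like the sketch below. The helper name and the "compare major.minor only" policy are assumptions for illustration, not necessarily NNI's exact rule:

```typescript
// Hypothetical helper: treat two versions as compatible when their
// major and minor components match, ignoring the patch component.
function versionsCompatible(managerVersion: string, keeperVersion: string): boolean {
    const [mMajor, mMinor] = managerVersion.split(".");
    const [kMajor, kMinor] = keeperVersion.split(".");
    return mMajor === kMajor && mMinor === kMinor;
}
```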
      d0b22fc7
  7. 25 Feb, 2019 1 commit
    • SparkSnail's avatar
      Support webhdfs path in python hdfs client (#722) · 8c4c0ef2
      SparkSnail authored
      trial_keeper used port 50070 to connect to the WebHDFS server, and PAI used a mapping to route port 50070 to port 5070 to reach the RESTful server. This approach is risky because PAI may not support this kind of mapping in a later release. The WebHDFS client in trial_keeper now uses the Pylon path (/webhdfs/api/v1) instead of port 50070; the path is transmitted via the training service.
      This PR makes the following changes:
      
      1. Use the WebHDFS path instead of port 50070 in the HDFS client.
      2. Use the new HDFS package "PythonWebHDFS", which I built to support Pylon. You can test the new functionality with the "sparksnail/nni:dev-pai" image against the PAI training service.
      3. Update some variables' names according to review comments.
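The URL change can be sketched as below. The host names, the example query parameter, and the function names are illustrative; only the Pylon path (/webhdfs/api/v1) and port 50070 come from the description above:

```typescript
// Old scheme: talk to WebHDFS through port 50070 directly
// (risky: it relies on PAI's 50070 -> 5070 port mapping).
function webhdfsUrlViaPort(namenodeHost: string, hdfsPath: string): string {
    return `http://${namenodeHost}:50070/webhdfs/v1${hdfsPath}?op=LISTSTATUS`;
}

// New scheme: go through the Pylon reverse-proxy path instead of a raw port.
function webhdfsUrlViaPylon(pylonHost: string, hdfsPath: string): string {
    return `http://${pylonHost}/webhdfs/api/v1${hdfsPath}?op=LISTSTATUS`;
}
```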
      8c4c0ef2
  8. 26 Dec, 2018 1 commit
  9. 17 Dec, 2018 1 commit
  10. 05 Dec, 2018 1 commit
  11. 09 Nov, 2018 1 commit
  12. 05 Nov, 2018 1 commit
  13. 31 Oct, 2018 3 commits
  14. 26 Oct, 2018 1 commit
  15. 27 Sep, 2018 1 commit
    • fishyds's avatar
      PAI Training Service implementation (#128) · d3506e34
      fishyds authored
      * PAI Training Service implementation
      1. Implement PAITrainingService.
      2. Add the trial-keeper Python module, and modify setup.py to install it.
      3. Add a PAITrainingService REST server to collect metrics from the PAI container.
      d3506e34
  16. 14 Sep, 2018 1 commit
  17. 23 Aug, 2018 1 commit
  18. 20 Aug, 2018 1 commit