1. 30 Jul, 2020 1 commit
  2. 30 Jun, 2020 1 commit
    • Reuse OpenPAI jobs to run multiple trials (#2521) · 0b9d6ce6
      Chi Song authored
      Designed a new interface to support reusable training services; it currently applies only to OpenPAI and is disabled by default.
      
      Replace trial_keeper.py with trial_runner.py. The trial runner holds an environment, receives commands from nni manager to run or stop a trial, and returns events to nni manager.
      Add a trial dispatcher, which inherits from the original training service interface. It is used to share as much code as possible across training services while staying isolated from any specific training service.
      Add an EnvironmentService interface to manage environments, including starting/stopping an environment and refreshing environment status.
      Add a command channel on both the nni manager and trial runner sides; it supports different ways to pass messages between them. Currently supported channels are file and web sockets. Supported commands from nni manager are: start, kill trial, and send new parameters. Supported commands from the runner are: initialized (for channels that don't know which runner connected), trial end, stdout (new type, including metrics as before), version check (new type), and gpu info (new type).
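The command channel described above might be sketched roughly as follows; the `Command`, `CommandChannel`, and `FileCommandChannel` names and shapes are illustrative assumptions for this sketch, not NNI's actual API:

```typescript
// Hypothetical sketch of a command channel between nni manager and trial runner.
interface Command {
    type: string;        // e.g. "NEW_TRIAL", "KILL_TRIAL", "METRIC"
    payload: string;     // serialized command body
}

abstract class CommandChannel {
    // Send a command to the other side.
    abstract send(command: Command): Promise<void>;
    // Receive the next pending command, or undefined if none is queued.
    abstract receive(): Promise<Command | undefined>;
}

// File-flavored channel: a real implementation would append commands to a
// shared file (e.g. on NFS); this sketch keeps them in an in-memory queue.
class FileCommandChannel extends CommandChannel {
    private queue: Command[] = [];

    async send(command: Command): Promise<void> {
        this.queue.push(command);
    }

    async receive(): Promise<Command | undefined> {
        return this.queue.shift();
    }
}
```

A web-socket channel would implement the same `send`/`receive` pair over a socket, which is what lets the manager and runner swap transports without changing the command handling.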
      Add a storage service that wraps a storage backend (e.g. NFS, Azure storage) behind standard file operations.
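A storage service of this shape could look like the following sketch; the `StorageService` interface and the in-memory stand-in are hypothetical illustrations, not NNI's actual classes:

```typescript
// Hypothetical sketch: one interface of plain file operations, with backends
// (NFS, Azure storage, ...) supplying the implementations.
interface StorageService {
    copyFile(localPath: string, remotePath: string): Promise<void>;
    readFileContent(remotePath: string): Promise<string>;
    remove(remotePath: string): Promise<void>;
}

// In-memory implementation, useful as a stand-in for tests.
class MemoryStorageService implements StorageService {
    private files = new Map<string, string>();

    async copyFile(localPath: string, remotePath: string): Promise<void> {
        // A real backend would upload the local file's bytes.
        this.files.set(remotePath, `content of ${localPath}`);
    }

    async readFileContent(remotePath: string): Promise<string> {
        return this.files.get(remotePath) ?? '';
    }

    async remove(remotePath: string): Promise<void> {
        this.files.delete(remotePath);
    }
}
```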
      Partially support running multiple trials in parallel on the runner side; not yet supported on the trial dispatcher side.
      Other minor changes:
      
      Add log_level to the training service unit tests, so they can show debug-level logs.
      Expose platform in start info.
      Add RouterTrainingService to keep the original OpenPAI training service and support dynamic IoC binding.
      Add more GPU info for future use, including GPU memory total/free/used and GPU type.
      Make some license information consistent.
      Fix async/await problems with Array.forEach; this method does not actually await async callbacks.
      Fix integration test errors on downloading data, which were caused by my #2484.
      Accelerate some run-loop patterns by reducing sleep intervals.
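The Array.forEach fix above addresses a common JavaScript/TypeScript pitfall: forEach discards the promises returned by an async callback, so nothing inside the loop is awaited. A minimal illustration (the `sleep`, `broken`, and `fixed` helpers are invented for this sketch):

```typescript
function sleep(ms: number): Promise<void> {
    return new Promise<void>(resolve => setTimeout(resolve, ms));
}

// forEach fires all callbacks and returns immediately; the pushes happen
// after broken() has already resolved, so the array is empty when returned.
async function broken(items: number[]): Promise<number[]> {
    const results: number[] = [];
    items.forEach(async (item) => {
        await sleep(1);
        results.push(item);
    });
    return results;
}

// for...of awaits each iteration in order; Promise.all over map() works
// when the iterations may run concurrently.
async function fixed(items: number[]): Promise<number[]> {
    const results: number[] = [];
    for (const item of items) {
        await sleep(1);
        results.push(item);
    }
    return results;
}
```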
  3. 07 May, 2020 1 commit
  4. 25 Dec, 2019 1 commit
  5. 23 Dec, 2019 1 commit
  6. 10 Dec, 2019 1 commit
  7. 25 Nov, 2019 1 commit
  8. 06 Nov, 2019 1 commit
  9. 26 Aug, 2019 1 commit
  10. 01 Aug, 2019 1 commit
  11. 30 Jul, 2019 1 commit
  12. 20 Jun, 2019 1 commit
  13. 19 Jun, 2019 1 commit
  14. 20 Mar, 2019 1 commit
  15. 28 Nov, 2018 1 commit
  16. 23 Nov, 2018 1 commit
    • Add nniManagerIp in nnictl and trainingService (#393) · c2a4ce6c
      SparkSnail authored
      Add nniManagerIp in nnictl, the PAI TrainingService, and the Kubeflow TrainingService.
      If users set nniManagerIp, PAI and Kubeflow will use this IP instead of the getIPV4() function.
      The Web UI will also use this nniManagerIp.
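That fallback behavior could be sketched as below; `resolveNniManagerIp` and `detectIPV4` are illustrative stand-ins for the logic described, not NNI's actual functions:

```typescript
import * as os from 'os';

// Fall back to a local non-internal IPv4 address, in the spirit of the
// getIPV4() function mentioned above.
function detectIPV4(): string {
    for (const addrs of Object.values(os.networkInterfaces())) {
        for (const addr of addrs ?? []) {
            if (addr.family === 'IPv4' && !addr.internal) {
                return addr.address;
            }
        }
    }
    return '127.0.0.1';
}

// Prefer the user-configured nniManagerIp; otherwise auto-detect.
function resolveNniManagerIp(configuredIp?: string): string {
    return configuredIp !== undefined ? configuredIp : detectIPV4();
}
```

Letting the user override the detected address matters on multi-homed or containerized hosts, where the auto-detected IPv4 address may not be reachable from the PAI or Kubeflow cluster.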
  17. 27 Sep, 2018 1 commit
    • PAI Training Service implementation (#128) · d3506e34
      fishyds authored
      * PAI Training Service implementation
        1. Implement PAITrainingService
        2. Add the trial-keeper Python module, and modify setup.py to install it
        3. Add the PAITrainingService REST server to collect metrics from PAI containers