    [python] Bug fix for first_metric_only on early stopping. (#2209) · 84754399
    kenmatsu4 authored
    * Bug fix for first_metric_only if the first metric is the train metric.
    
    * Update bug fix for feval issue.
    
    * Disable feval for first_metric_only.
    
    * Additional test items.
    
    * Fix wrong assertEqual settings & formatting.
    
    * Change dataset of test.
    
    * Fix random seed for test.
    
    * Modify assumed test result due to different sklearn version between CI and local.
    
    * Remove f-string
    
    * Applying a variable assumed test result for the test.
    
    * Fix flake8 error.
    
    * Modifying in accordance with review comments.
    
    * Modifying for pylint.
    
    * simplified tests
    
    * Deleting the error criterion `if eval_metric is None`.
    
    * Delete test items of classification.
    
    * Simplifying if condition.
    
    * Applying first_metric_only for sklearn wrapper.
    
    * Modifying test_sklearn for conforming to Python 2.x.
    
    * Fix flake8 error.
    
    * Additional fix for sklearn and add tests.
    
    * Bug fix and add test cases.
    
    * some refactor
    
    * fixed lint
    
    * fixed lint
    
    * Fix duplicated metrics scores to pass the test.
    
    * Fix the case first_metric_only not in params.
    
    * Converting metrics aliases.
    
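    The alias-conversion commit above can be sketched as follows. This is an illustrative sketch, not LightGBM's actual alias table: the idea is that spellings such as `mean_squared_error` are folded to one canonical name, so that first_metric_only compares like with like regardless of how the user spelled the metric (the mapping and helper name here are assumptions).

    ```python
    # Hypothetical alias normalization; the table below is illustrative,
    # not LightGBM's actual one.
    _ALIASES = {
        "mean_squared_error": "l2",
        "mse": "l2",
        "mean_absolute_error": "l1",
        "mae": "l1",
    }

    def canonical_metric(name):
        # Unknown names pass through unchanged.
        return _ALIASES.get(name, name)
    ```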
    * Add comment.
    
    * Modify comment for pylint.
    
    * Modify comment for pydocstyle.
    
    * Using split test set for two eval_set.
    
    * added test case for metric aliases and length checks
    
    * minor style fixes
    
    * fixed rmse name and alias position
    
    * Fix the case metric=[]
    
    * Fix using env.model._train_data_name
    
    * Fix wrong test condition.
    
    * Move initial process to _init() func.
    
    * Modify test setting for test_sklearn & training data matching on callback.py
    
    * test_sklearn.py
    -> A test case for training was wrong, so it was fixed.
    
    * callback.py
    -> The if-statement condition for detecting the test dataset was wrong, so it was fixed.
    
    * Support composite name metrics.
    
    * Remove metric check process & reduce redundant test cases.
    
    Since #2273 fixed the order of metrics on the cpp side, the metric check process in callback.py is removed.
    
    * Revised according to the points raised in review.
    
    * increased code readability
    
    * Fix the issue of order of validation set.
    
    * Changing to OrderedDict from defaultdict for the score result.
    
    * added missed check in cv function for first_metric_only and feval co-occurrence
    
    * keep order only for metrics but not for datasets in best_score
    
    * move OrderedDict initialization to init phase
    
    * fixed minor printing issues
    
    * move first metric detection to init phase and split can be performed without checks
    
    * split only once during callback
    
    * removed excess code
    
    * fixed typo in variable name and squashed ifs
    
    * use setdefault
    
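    The OrderedDict and setdefault commits above can be sketched together. This is an illustrative sketch (not LightGBM's actual code): storing scores in an `OrderedDict` preserves the order in which metrics are reported, so the *first* metric can be identified reliably for first_metric_only, and `setdefault` records a value only the first time a key is seen.

    ```python
    from collections import OrderedDict

    # Illustrative: keep metric scores in reporting order so the first
    # metric is well defined; setdefault ignores repeated keys.
    best_score = OrderedDict()
    for name, value in [("l2", 0.25), ("l1", 0.40), ("l2", 0.30)]:
        best_score.setdefault(name, value)

    first_metric = next(iter(best_score))
    ```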
    * hotfix
    
    * fixed failing test
    
    * refined tests
    
    * refined sklearn test
    
    * Making "feval" effective on early stopping.
    
    * allow feval and first_metric_only for cv
    
    * removed unused code
    
    * added tests for feval
    
    * fixed printing
    
    * add note about whitespaces in feval name
    
    * Modifying the final iteration process in case the valid set is the training data.