LightGBM can use categorical features as input directly; there is no need to convert them to one-hot encoding, and it is much faster than one-hot encoding (about an 8x speed-up).
**Note: you should convert your categorical features to `int` type before you construct the `Dataset`.**
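For example, a minimal sketch, assuming columns 0 and 2 of `data` hold integer-encoded categories (the column indices are illustrative):
```python
# mark columns 0 and 2 as categorical; LightGBM handles them natively,
# so no one-hot encoding is required
train_data = lgb.Dataset(data, label=label, categorical_feature=[0, 2])
```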
#### Weights can be set when needed:
```python
# one weight per data row; LightGBM expects a 1-D array
w = np.random.rand(500)
train_data = lgb.Dataset(data, label=label, weight=w)
```
or
```python
train_data = lgb.Dataset(data, label=label)
w = np.random.rand(500)
train_data.set_weight(w)
```
You can also use `Dataset.set_init_score()` to set the initial score, and `Dataset.set_group()` to set group/query data for ranking tasks.
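For a ranking task, a minimal sketch (the group sizes are illustrative and must sum to the number of rows, 500 here):
```python
# 5 queries; each entry is the number of consecutive rows
# that belong to one query
train_data.set_group([100, 200, 100, 50, 50])
```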
#### Memory-efficient usage
The `Dataset` object in LightGBM is very memory-efficient, because it only needs to save the discrete bins.
However, NumPy/array/pandas objects can be costly in memory. If you are concerned about your memory consumption, you can save memory in the following ways (see the sketch after this list):
1. Set `free_raw_data=True` (default is `True`) when constructing the `Dataset`
2. Explicitly set `raw_data=None` after the `Dataset` has been constructed
3. Call `gc.collect()`
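A minimal sketch combining these steps, assuming `data` and `label` are the arrays used above:
```python
import gc

# step 1: free_raw_data=True lets LightGBM discard its copy of the
# raw data once the binned representation has been built
train_data = lgb.Dataset(data, label=label, free_raw_data=True)
# step 2: drop the Dataset's reference to the raw data
train_data.raw_data = None
# step 3: drop your own reference and trigger garbage collection
del data
gc.collect()
```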
Setting Parameters
------------------
LightGBM can use either a list of pairs or a dictionary to set [parameters](https://github.com/Microsoft/LightGBM/blob/master/docs/Parameters.md). For instance:
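```python
# booster parameters as a dictionary
param = {'num_leaves': 31, 'objective': 'binary'}
param['metric'] = 'auc'
```
You can also specify multiple evaluation metrics:
```python
param['metric'] = ['auc', 'binary_logloss']
```
Training
--------
Training a model requires a parameter dictionary and a training `Dataset`. A minimal sketch, assuming a validation `Dataset` named `validation_data` (the name and the `num_round` value are illustrative):
```python
num_round = 10
bst = lgb.train(param, train_data, num_round, valid_sets=[validation_data])
```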
The model will train until the validation score stops improving. The validation score needs to improve at least once every `early_stopping_rounds` rounds for training to continue.
If early stopping occurs, the model will have an additional field: `bst.best_iteration`. Note that `train()` will return a model from the last iteration, not the best one; you can set `num_iteration=bst.best_iteration` when saving the model.
This works with both metrics to minimize (L2, log loss, etc.) and metrics to maximize (NDCG, AUC). Note that if you specify more than one evaluation metric, all of them will be used for early stopping.
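A minimal sketch of training with early stopping, reusing the hypothetical `validation_data` from above (recent LightGBM versions replace the `early_stopping_rounds` argument with the `lgb.early_stopping()` callback):
```python
bst = lgb.train(param, train_data, num_round,
                valid_sets=[validation_data],
                early_stopping_rounds=10)
# save the model at its best iteration rather than the last one
bst.save_model('model.txt', num_iteration=bst.best_iteration)
```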
Prediction
----------
A model that has been trained or loaded can perform predictions on data sets.
```python
# 7 entities, each containing 10 features
data = np.random.rand(7, 10)
ypred = bst.predict(data)
```
If early stopping is enabled during training, you can get predictions from the best iteration with `bst.best_iteration`:
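```python
# predict using only the trees up to the best iteration
ypred = bst.predict(data, num_iteration=bst.best_iteration)
```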