{
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "Bp8t2AI8i7uP"
      },
      "source": [
        "##### Copyright 2022 The TensorFlow Authors."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "cellView": "form",
        "id": "rxPj2Lsni9O4"
      },
      "outputs": [],
      "source": [
        "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n",
        "# you may not use this file except in compliance with the License.\n",
        "# You may obtain a copy of the License at\n",
        "#\n",
        "# https://www.apache.org/licenses/LICENSE-2.0\n",
        "#\n",
        "# Unless required by applicable law or agreed to in writing, software\n",
        "# distributed under the License is distributed on an \"AS IS\" BASIS,\n",
        "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
        "# See the License for the specific language governing permissions and\n",
        "# limitations under the License."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "6xS-9i5DrRvO"
      },
      "source": [
        "# Customizing a Transformer Encoder"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "Mwb9uw1cDXsa"
      },
      "source": [
        "<table class=\"tfo-notebook-buttons\" align=\"left\">\n",
        "  <td>\n",
        "    <a target=\"_blank\" href=\"https://www.tensorflow.org/tfmodels/nlp/customize_encoder\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" />View on TensorFlow.org</a>\n",
        "  </td>\n",
        "  <td>\n",
        "    <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/models/blob/master/docs/nlp/customize_encoder.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n",
        "  </td>\n",
        "  <td>\n",
        "    <a target=\"_blank\" href=\"https://github.com/tensorflow/models/blob/master/docs/nlp/customize_encoder.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View source on GitHub</a>\n",
        "  </td>\n",
        "  <td>\n",
        "    <a href=\"https://storage.googleapis.com/tensorflow_docs/models/docs/nlp/customize_encoder.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\" />Download notebook</a>\n",
        "  </td>\n",
        "</table>"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "iLrcV4IyrcGX"
      },
      "source": [
        "## Learning objectives\n",
        "\n",
        "The [TensorFlow Models NLP library](https://github.com/tensorflow/models/tree/master/official/nlp/modeling) is a collection of tools for building and training modern high performance natural language models.\n",
        "\n",
        "The `tfm.nlp.networks.EncoderScaffold` is the core of this library, and lots of new network architectures are proposed to improve the encoder. In this Colab notebook, we will learn how to customize the encoder to employ new network architectures."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "YYxdyoWgsl8t"
      },
      "source": [
        "## Install and import"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "fEJSFutUsn_h"
      },
      "source": [
        "### Install the TensorFlow Model Garden pip package\n",
        "\n",
        "*  `tf-models-official` is the stable Model Garden package. Note that it may not include the latest changes in the `tensorflow_models` github repo. To include latest changes, you may install `tf-models-nightly`,\n",
        "which is the nightly Model Garden package created daily automatically.\n",
        "*  `pip` will install all models and dependencies automatically."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "mfHI5JyuJ1y9"
      },
      "outputs": [],
      "source": [
        "!pip install -q opencv-python"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "thsKZDjhswhR"
      },
      "outputs": [],
      "source": [
        "!pip install -q tf-models-official"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "hpf7JPCVsqtv"
      },
      "source": [
        "### Import Tensorflow and other libraries"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "my4dp-RMssQe"
      },
      "outputs": [],
      "source": [
        "import numpy as np\n",
        "import tensorflow as tf\n",
        "\n",
        "import tensorflow_models as tfm\n",
        "nlp = tfm.nlp"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "vjDmVsFfs85n"
      },
      "source": [
        "## Canonical BERT encoder\n",
        "\n",
        "Before learning how to customize the encoder, let's firstly create a canonical BERT enoder and use it to instantiate a `bert_classifier.BertClassifier` for classification task."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "Oav8sbgstWc-"
      },
      "outputs": [],
      "source": [
        "cfg = {\n",
        "    \"vocab_size\": 100,\n",
        "    \"hidden_size\": 32,\n",
        "    \"num_layers\": 3,\n",
        "    \"num_attention_heads\": 4,\n",
        "    \"intermediate_size\": 64,\n",
        "    \"activation\": tfm.utils.activations.gelu,\n",
        "    \"dropout_rate\": 0.1,\n",
        "    \"attention_dropout_rate\": 0.1,\n",
        "    \"max_sequence_length\": 16,\n",
        "    \"type_vocab_size\": 2,\n",
        "    \"initializer\": tf.keras.initializers.TruncatedNormal(stddev=0.02),\n",
        "}\n",
        "bert_encoder = nlp.networks.BertEncoder(**cfg)\n",
        "\n",
        "def build_classifier(bert_encoder):\n",
        "  return nlp.models.BertClassifier(bert_encoder, num_classes=2)\n",
        "\n",
        "canonical_classifier_model = build_classifier(bert_encoder)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "Qe2UWI6_tsHo"
      },
      "source": [
        "`canonical_classifier_model` can be trained using the training data. For details about how to train the model, please see the [Fine tuning bert](https://www.tensorflow.org/text/tutorials/fine_tune_bert) notebook. We skip the code that trains the model here.\n",
        "\n",
        "After training, we can apply the model to do prediction.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "csED2d-Yt5h6"
      },
      "outputs": [],
      "source": [
        "def predict(model):\n",
        "  batch_size = 3\n",
        "  np.random.seed(0)\n",
        "  word_ids = np.random.randint(\n",
        "      cfg[\"vocab_size\"], size=(batch_size, cfg[\"max_sequence_length\"]))\n",
        "  mask = np.random.randint(2, size=(batch_size, cfg[\"max_sequence_length\"]))\n",
        "  type_ids = np.random.randint(\n",
        "      cfg[\"type_vocab_size\"], size=(batch_size, cfg[\"max_sequence_length\"]))\n",
        "  print(model([word_ids, mask, type_ids], training=False))\n",
        "\n",
        "predict(canonical_classifier_model)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "PzKStEK9t_Pb"
      },
      "source": [
        "## Customize BERT encoder\n",
        "\n",
        "One BERT encoder consists of an embedding network and multiple transformer blocks, and each transformer block contains an attention layer and a feedforward layer."
      ]
    },
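    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "To see this structure, you can plot the canonical `bert_encoder` built above; the embedding layers and the stacked `Transformer` layers should both appear in the plot. (This illustrative cell assumes `pydot` and `graphviz` are installed, which `tf.keras.utils.plot_model` needs.)"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Visualize the canonical BERT encoder built earlier: the embedding network\n",
        "# feeds into the stacked Transformer layers.\n",
        "tf.keras.utils.plot_model(bert_encoder, show_shapes=True, dpi=48)"
      ]
    },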
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "rmwQfhj6fmKz"
      },
      "source": [
        "We provide easy ways to customize each of those components via (1)\n",
        "[EncoderScaffold](https://github.com/tensorflow/models/blob/master/official/nlp/modeling/networks/encoder_scaffold.py) and (2) [TransformerScaffold](https://github.com/tensorflow/models/blob/master/official/nlp/modeling/layers/transformer_scaffold.py)."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "xsMgEVHAui11"
      },
      "source": [
        "### Use EncoderScaffold\n",
        "\n",
        "`networks.EncoderScaffold` allows users to provide a custom embedding subnetwork\n",
        "  (which will replace the standard embedding logic) and/or a custom hidden layer class (which will replace the `Transformer` instantiation in the encoder)."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "-JBabpa2AOz8"
      },
      "source": [
        "#### Without Customization\n",
        "\n",
        "Without any customization, `networks.EncoderScaffold` behaves the same the canonical `networks.BertEncoder`.\n",
        "\n",
        "As shown in the following example, `networks.EncoderScaffold` can load `networks.BertEncoder`'s weights and output the same values:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "ktNzKuVByZQf"
      },
      "outputs": [],
      "source": [
        "default_hidden_cfg = dict(\n",
        "    num_attention_heads=cfg[\"num_attention_heads\"],\n",
        "    intermediate_size=cfg[\"intermediate_size\"],\n",
        "    intermediate_activation=cfg[\"activation\"],\n",
        "    dropout_rate=cfg[\"dropout_rate\"],\n",
        "    attention_dropout_rate=cfg[\"attention_dropout_rate\"],\n",
        "    kernel_initializer=cfg[\"initializer\"],\n",
        ")\n",
        "default_embedding_cfg = dict(\n",
        "    vocab_size=cfg[\"vocab_size\"],\n",
        "    type_vocab_size=cfg[\"type_vocab_size\"],\n",
        "    hidden_size=cfg[\"hidden_size\"],\n",
        "    initializer=cfg[\"initializer\"],\n",
        "    dropout_rate=cfg[\"dropout_rate\"],\n",
        "    max_seq_length=cfg[\"max_sequence_length\"]\n",
        ")\n",
        "default_kwargs = dict(\n",
        "    hidden_cfg=default_hidden_cfg,\n",
        "    embedding_cfg=default_embedding_cfg,\n",
        "    num_hidden_instances=cfg[\"num_layers\"],\n",
        "    pooled_output_dim=cfg[\"hidden_size\"],\n",
        "    return_all_layer_outputs=True,\n",
        "    pooler_layer_initializer=cfg[\"initializer\"],\n",
        ")\n",
        "\n",
        "encoder_scaffold = nlp.networks.EncoderScaffold(**default_kwargs)\n",
        "classifier_model_from_encoder_scaffold = build_classifier(encoder_scaffold)\n",
        "classifier_model_from_encoder_scaffold.set_weights(\n",
        "    canonical_classifier_model.get_weights())\n",
        "predict(classifier_model_from_encoder_scaffold)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "sMaUmLyIuwcs"
      },
      "source": [
        "#### Customize Embedding\n",
        "\n",
        "Next, we show how to use a customized embedding network.\n",
        "\n",
        "We firstly build an embedding network that will replace the default network. This one will have 2 inputs (`mask` and `word_ids`) instead of 3, and won't use positional embeddings."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "LTinnaG6vcsw"
      },
      "outputs": [],
      "source": [
        "word_ids = tf.keras.layers.Input(\n",
        "    shape=(cfg['max_sequence_length'],), dtype=tf.int32, name=\"input_word_ids\")\n",
        "mask = tf.keras.layers.Input(\n",
        "    shape=(cfg['max_sequence_length'],), dtype=tf.int32, name=\"input_mask\")\n",
        "embedding_layer = nlp.layers.OnDeviceEmbedding(\n",
        "    vocab_size=cfg['vocab_size'],\n",
        "    embedding_width=cfg['hidden_size'],\n",
        "    initializer=cfg[\"initializer\"],\n",
        "    name=\"word_embeddings\")\n",
        "word_embeddings = embedding_layer(word_ids)\n",
        "attention_mask = nlp.layers.SelfAttentionMask()([word_embeddings, mask])\n",
        "new_embedding_network = tf.keras.Model([word_ids, mask],\n",
        "                                       [word_embeddings, attention_mask])"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "HN7_yu-6O3qI"
      },
      "source": [
        "Inspecting `new_embedding_network`, we can see it takes two inputs:\n",
        "`input_word_ids` and `input_mask`."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "fO9zKFE4OpHp"
      },
      "outputs": [],
      "source": [
        "tf.keras.utils.plot_model(new_embedding_network, show_shapes=True, dpi=48)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "9cOaGQHLv12W"
      },
      "source": [
        "We then can build a new encoder using the above `new_embedding_network`."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "mtFDMNf2vIl9"
      },
      "outputs": [],
      "source": [
        "kwargs = dict(default_kwargs)\n",
        "\n",
        "# Use new embedding network.\n",
        "kwargs['embedding_cls'] = new_embedding_network\n",
        "kwargs['embedding_data'] = embedding_layer.embeddings\n",
        "\n",
        "encoder_with_customized_embedding = nlp.networks.EncoderScaffold(**kwargs)\n",
        "classifier_model = build_classifier(encoder_with_customized_embedding)\n",
        "# ... Train the model ...\n",
        "print(classifier_model.inputs)\n",
        "\n",
        "# Assert that there are only two inputs.\n",
        "assert len(classifier_model.inputs) == 2"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "Z73ZQDtmwg9K"
      },
      "source": [
        "#### Customized Transformer\n",
        "\n",
        "User can also override the `hidden_cls` argument in `networks.EncoderScaffold`'s constructor to employ a customized Transformer layer.\n",
        "\n",
        "See [the source of `nlp.layers.ReZeroTransformer`](https://github.com/tensorflow/models/blob/master/official/nlp/modeling/layers/rezero_transformer.py) for how to implement a customized Transformer layer.\n",
        "\n",
        "Following is an example of using `nlp.layers.ReZeroTransformer`:\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "uAIarLZgw6pA"
      },
      "outputs": [],
      "source": [
        "kwargs = dict(default_kwargs)\n",
        "\n",
        "# Use ReZeroTransformer.\n",
        "kwargs['hidden_cls'] = nlp.layers.ReZeroTransformer\n",
        "\n",
        "encoder_with_rezero_transformer = nlp.networks.EncoderScaffold(**kwargs)\n",
        "classifier_model = build_classifier(encoder_with_rezero_transformer)\n",
        "# ... Train the model ...\n",
        "predict(classifier_model)\n",
        "\n",
        "# Assert that the variable `rezero_alpha` from ReZeroTransformer exists.\n",
        "assert 'rezero_alpha' in ''.join([x.name for x in classifier_model.trainable_weights])"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "6PMHFdvnxvR0"
      },
      "source": [
        "### Use `nlp.layers.TransformerScaffold`\n",
        "\n",
        "The above method of customizing the model requires rewriting the whole `nlp.layers.Transformer` layer, while sometimes you may only want to customize either attention layer or feedforward block. In this case, `nlp.layers.TransformerScaffold` can be used.\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "D6FejlgwyAy_"
      },
      "source": [
        "#### Customize Attention Layer\n",
        "\n",
        "User can also override the `attention_cls` argument in `layers.TransformerScaffold`'s constructor to employ a customized Attention layer.\n",
        "\n",
        "See [the source of `nlp.layers.TalkingHeadsAttention`](https://github.com/tensorflow/models/blob/master/official/nlp/modeling/layers/talking_heads_attention.py) for how to implement a customized `Attention` layer.\n",
        "\n",
        "Following is an example of using `nlp.layers.TalkingHeadsAttention`:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "nFrSMrZuyNeQ"
      },
      "outputs": [],
      "source": [
        "# Use TalkingHeadsAttention\n",
        "hidden_cfg = dict(default_hidden_cfg)\n",
        "hidden_cfg['attention_cls'] = nlp.layers.TalkingHeadsAttention\n",
        "\n",
        "kwargs = dict(default_kwargs)\n",
        "kwargs['hidden_cls'] = nlp.layers.TransformerScaffold\n",
        "kwargs['hidden_cfg'] = hidden_cfg\n",
        "\n",
        "encoder = nlp.networks.EncoderScaffold(**kwargs)\n",
        "classifier_model = build_classifier(encoder)\n",
        "# ... Train the model ...\n",
        "predict(classifier_model)\n",
        "\n",
        "# Assert that the variable `pre_softmax_weight` from TalkingHeadsAttention exists.\n",
        "assert 'pre_softmax_weight' in ''.join([x.name for x in classifier_model.trainable_weights])"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "tKkZ8spzYmpc"
      },
      "outputs": [],
      "source": [
        "tf.keras.utils.plot_model(encoder_with_rezero_transformer, show_shapes=True, dpi=48)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "kuEJcTyByVvI"
      },
      "source": [
        "#### Customize Feedforward Layer\n",
        "\n",
        "Similiarly, one could also customize the feedforward layer.\n",
        "\n",
        "See [the source of `nlp.layers.GatedFeedforward`](https://github.com/tensorflow/models/blob/master/official/nlp/modeling/layers/gated_feedforward.py) for how to implement a customized feedforward layer.\n",
        "\n",
        "Following is an example of using `nlp.layers.GatedFeedforward`:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "XAbKy_l4y_-i"
      },
      "outputs": [],
      "source": [
        "# Use GatedFeedforward\n",
        "hidden_cfg = dict(default_hidden_cfg)\n",
        "hidden_cfg['feedforward_cls'] = nlp.layers.GatedFeedforward\n",
        "\n",
        "kwargs = dict(default_kwargs)\n",
        "kwargs['hidden_cls'] = nlp.layers.TransformerScaffold\n",
        "kwargs['hidden_cfg'] = hidden_cfg\n",
        "\n",
        "encoder_with_gated_feedforward = nlp.networks.EncoderScaffold(**kwargs)\n",
        "classifier_model = build_classifier(encoder_with_gated_feedforward)\n",
        "# ... Train the model ...\n",
        "predict(classifier_model)\n",
        "\n",
        "# Assert that the variable `gate` from GatedFeedforward exists.\n",
        "assert 'gate' in ''.join([x.name for x in classifier_model.trainable_weights])"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "a_8NWUhkzeAq"
      },
      "source": [
        "### Build a new Encoder\n",
        "\n",
        "Finally, you could also build a new encoder using building blocks in the modeling library.\n",
        "\n",
        "See [the source for `nlp.networks.AlbertEncoder`](https://github.com/tensorflow/models/blob/master/official/nlp/modeling/networks/albert_encoder.py) as an example of how to do this. \n",
        "\n",
        "Here is an example using `nlp.networks.AlbertEncoder`:\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "xsiA3RzUzmUM"
      },
      "outputs": [],
      "source": [
        "albert_encoder = nlp.networks.AlbertEncoder(**cfg)\n",
        "classifier_model = build_classifier(albert_encoder)\n",
        "# ... Train the model ...\n",
        "predict(classifier_model)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "MeidDfhlHKSO"
      },
      "source": [
        "Inspecting the `albert_encoder`, we see it stacks the same `Transformer` layer multiple times (note the loop-back on the \"Transformer\" block below.."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "Uv_juT22HERW"
      },
      "outputs": [],
      "source": [
        "tf.keras.utils.plot_model(albert_encoder, show_shapes=True, dpi=48)"
      ]
    }
  ],
  "metadata": {
    "colab": {
      "collapsed_sections": [],
      "name": "customize_encoder.ipynb",
      "provenance": [],
      "toc_visible": true
    },
    "kernelspec": {
      "display_name": "Python 3",
      "name": "python3"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 0
}