{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "51a5122c-1e2a-4626-9fe1-d2aad9186007",
   "metadata": {
    "id": "51a5122c-1e2a-4626-9fe1-d2aad9186007"
   },
   "source": [
    "# Building a Video Generation Pipeline with Llama3\n",
    "\n",
    "<!--\n",
    "## Video Walkthrough\n",
    "You can follow along this notebook on our [video walkthrough](https://youtu.be/UO8QZ2qBonw).\n",
    "\n",
    "[![Workshop Walkthrough Still](http://img.youtube.com/vi/UO8QZ2qBonw/0.jpg)](http://www.youtube.com/watch?v=UO8QZ2qBonw \"Workshop Walkthrough\") -->\n",
    "\n",
    "## Overview\n",
    "In this notebook you'll learn how to build a powerful media generation pipeline in a few simple steps. More specifically, this pipeline will generate a ~1min long food recipe video entirely from just the name of a dish.\n",
    "\n",
    "This demo in particular showcases the ability for Llama3 to produce creative recipes while following JSON formatting guidelines very well.\n",
    "\n",
    "[Example Video Output for \"dorritos consomme\"](https://drive.google.com/file/d/1AP3VUlAmOUU6rcZp1wQ4v4Fyf5-0tky_/view?usp=drive_link)\n",
    "\n",
    "![overview](https://raw.githubusercontent.com/tmoreau89/image-assets/main/llama3_hackathon/mediagen_llama3.png)\n",
    "\n",
    "Let's take a look at the high level steps needed to go from the name of a dish, e.g. \"baked alaska\" to a fully fledged recipe video:\n",
    "1. We use a Llama3-70b-instruct LLM to generate a recipe from the name of a dish. The recipe is formatted in JSON which breaks down the recipe into the following fields: recipe title, prep time, cooking time, difficulty, ingredients list and instruction steps.\n",
    "2. We use SDXL to generate a frame for the finished dish, each one of the ingredients, and each of the recipe steps.\n",
    "3. We use Stable Video Diffusion 1.1 to animate each frame into a short 4 second video.\n",
    "4. Finally we stitch all of the videos together using MoviePy, add subtitles and a soundtrack.\n",
    "\n",
    "## Pre-requisites\n",
    "\n",
    "### OctoAI\n",
    "We'll use [OctoAI](https://octo.ai/) to power all of the GenAI needs of this notebook: LLMs, image gen, image animation.\n",
    "* To use OctoAI, you'll need to go to https://octoai.cloud/ and sign in using your Google or GitHub account.\n",
    "* Next you'll need to generate an OctoAI API token by following these [instructions](https://octo.ai/docs/getting-started/how-to-create-an-octoai-access-token). Keep the API token in hand, we'll need it further down in this notebook.\n",
    "\n",
    "In this example we will use the Llama 3 70b instruct model. You can find more on Llama models on the [OctoAI text generation solution page](https://octoai.cloud/text).\n",
    "\n",
    "At the time of writing this notebook the following Llama models are available on OctoAI:\n",
    "* meta-llama-3-8b-instruct\n",
    "* meta-llama-3-70b-instruct\n",
    "* codellama-7b-instruct\n",
    "* codellama-13b-instruct\n",
    "* codellama-34b-instruct\n",
    "* llama-2-13b-chat\n",
    "* llama-2-70b-chat\n",
    "* llamaguard-7b\n",
    "\n",
    "### Local Python Notebook\n",
    "We highly recommend launching this notebook from a fresh python environment, for instance you can run the following:\n",
    "```\n",
    "python3 -m venv .venv         \n",
    "source .venv/bin/activate\n",
    "```\n",
    "All you need to run this notebook is to install jupyter notebook with `python3 -m pip install notebook` then run `jupyter notebook` ([link](https://jupyter.org/install)) in the same directory as this `.ipynb` file.\n",
    "You don't need to install additional pip packages ahead of running the notebook, since those will be installed right at the beginning. You will need to ensure your system has `imagemagick` installed by following the [instructions](https://imagemagick.org/script/download.php)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b38d7fe4-789d-4a2f-9f3b-a185d35fb005",
   "metadata": {
    "id": "b38d7fe4-789d-4a2f-9f3b-a185d35fb005"
   },
   "outputs": [],
   "source": [
    "# This can take a few minutes on Colab, please be patient!\n",
    "# Note: in colab you may have to restart the runtime to get all of the\n",
    "# dependencies set up properly (a message will instruct you to do so)\n",
    "import platform\n",
    "if platform.system() == \"Linux\":\n",
    "    # Tested on colab - requires a few steps to get imagemagick installed correctly\n",
    "    # https://github.com/Zulko/moviepy/issues/693#issuecomment-622997875\n",
    "    ! apt install imagemagick &> /dev/null\n",
    "    ! apt install ffmpeg &> /dev/null\n",
    "    ! pip install moviepy[optional] &> /dev/null\n",
    "    ! sed -i '/<policy domain=\"path\" rights=\"none\" pattern=\"@\\*\"/d' /etc/ImageMagick-6/policy.xml\n",
    "elif platform.system() == \"Darwin\":\n",
    "    # Tested on a macbook on macOS Sonoma\n",
    "    ! brew install imagemagick\n",
    "    ! brew reinstall ffmpeg\n",
    "    ! pip install moviepy\n",
    "else:\n",
    "    print(\"Please install imagemagick on your system by following the instructions above\")\n",
    "# Let's proceed by installing the necessary pip packages\n",
    "! pip install langchain==0.1.19 octoai===1.0.2 openai pillow ffmpeg devtools"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "71e94246-b65c-45c4-8b64-40654207bd34",
   "metadata": {
    "id": "71e94246-b65c-45c4-8b64-40654207bd34"
   },
   "outputs": [],
   "source": [
    "# Next let's use the getpass library to enter the OctoAI API token you just\n",
    "# obtained in the pre-requisite step\n",
    "from getpass import getpass\n",
    "import os\n",
    "\n",
    "OCTOAI_API_TOKEN = getpass()\n",
    "os.environ[\"OCTOAI_API_TOKEN\"] = OCTOAI_API_TOKEN"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "58b66a05-d7ff-4987-9444-15bc0892f9d0",
   "metadata": {
    "id": "58b66a05-d7ff-4987-9444-15bc0892f9d0"
   },
   "source": [
    "# 1. Recipe Generation with Langchain using a Llama3-70b-instruct hosted on OctoAI\n",
    "\n",
    "In this first section, we're going to show how you can use Llama3-70b-instruct LLM hosted on OctoAI. Here we're using Langchain, a popular Python based library to build LLM-powered application.\n",
    "\n",
    "[Llama 3](https://llama.meta.com/llama3/) is Meta AI's latest open source model in the Llama family.\n",
    "\n",
    "The key here is to rely on the OctoAIEndpoint LLM by adding the following line to your python script:\n",
    "```python\n",
    "from langchain.llms.octoai_endpoint import OctoAIEndpoint\n",
    "```\n",
    "\n",
    "Then you can instantiate your `OctoAIEndpoint` LLM by passing in under the `model_kwargs` dictionary what model you wish to use (there is a rather wide selection you can consult [here](https://octo.ai/docs/text-gen-solution/getting-started#self-service-models)), and what the maximum number of tokens should be set to.\n",
    "\n",
    "Next you need to define your prompt template. The key here is to provide enough rules to guide the LLM into generating a recipe with just the right amount of information and detail. This will make the text generated by the LLM usable in the next generation steps (image generation, image animation etc.).\n",
    "\n",
 ⚠️ Note">
    "> ⚠️ Note that we're intentionally generating a short recipe with the prompt template below - this ensures we can go through this notebook fairly quickly the first time. If you want to generate a full recipe, delete the following line from the prompt template.\n",
    "```\n",
    "Use only two ingredients, and two instruction steps.\n",
    "```\n",
    "\n",
    "Finally we create an LLM chain by passing in the LLM and the prompt template we just instantiated.\n",
    "\n",
    "This chain is now ready to be invoked by passing in the user input, namely: the name of the dish to generate a  recipe for. Let's invoke the chain and see what recipe our LLM just thought about."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "27befbc0-845f-48b2-84d6-1601e3c8f9e6",
   "metadata": {
    "id": "27befbc0-845f-48b2-84d6-1601e3c8f9e6"
   },
   "outputs": [],
   "source": [
    "import json\n",
    "from langchain.llms.octoai_endpoint import OctoAIEndpoint\n",
    "from langchain.output_parsers import PydanticOutputParser\n",
    "from langchain import PromptTemplate, LLMChain\n",
    "from pydantic import BaseModel, Field\n",
    "from typing import List\n",
    "\n",
    "# OctoAI LLM endpoint\n",
    "llm = OctoAIEndpoint(\n",
    "    model = \"meta-llama-3-70b-instruct\",\n",
    "    max_tokens = 1024,\n",
    "    temperature = 0.01\n",
    ")\n",
    "\n",
    "# Define a JSON format for our recipe using Pydantic to declare our data model\n",
    "class Ingredient(BaseModel):\n",
    "    \"\"\"The object representing an ingredient\"\"\"\n",
    "    item: str = Field(description=\"Ingredient\")\n",
    "    illustration: str = Field(description=\"Text-based detailed visual description of the ingredient for a photograph or illustrator\")\n",
    "\n",
    "class RecipeStep(BaseModel):\n",
    "    \"\"\"The object representing a recipe steps\"\"\"\n",
    "    item: str = Field(description=\"Recipe step/instruction\")\n",
    "    illustration: str = Field(description=\"Text-based detailed visual description of the instruction for a photograph or illustrator\")\n",
    "\n",
    "class Recipe(BaseModel):\n",
    "    \"\"\"The format of the recipe answer.\"\"\"\n",
    "    dish_name: str = Field(description=\"Name of the dish\")\n",
    "    ingredient_list: List[Ingredient] = Field(description=\"List of the ingredients\")\n",
    "    recipe_steps: List[RecipeStep] = Field(description=\"List of the recipe steps\")\n",
    "    prep_time: int = Field(description=\"Recipe prep time in minutes\")\n",
    "    cook_time: int = Field(description=\"Recipe cooking time in minutes\")\n",
    "    difficulty: str = Field(description=\"Rating in difficulty, can be easy, medium, hard\")\n",
    "\n",
    "# Pydantic output parser\n",
    "parser = PydanticOutputParser(pydantic_object=Recipe)\n",
    "\n",
    "# Define a recipe template\n",
    "template = \"\"\"\n",
    "You are a food recipe generator. \n",
    "\n",
    "Given the name of a dish, generate a recipe that's easy to follow and leads to a delicious and creative dish.\n",
    "\n",
    "Use only two ingredients, and two instruction steps.\n",
    "\n",
    "Here are some rules to follow at all costs:\n",
    "0. Respond back only as only JSON!!!\n",
    "1. Provide a list of ingredients needed for the recipe.\n",
    "2. Provide a list of instructions to follow the recipe.\n",
    "3. Each instruction should be concise (1 sentence max) yet informative. It's preferred to provide more instruction steps with shorter instructions than fewer steps with longer instructions.\n",
    "4. For the whole recipe, provide the amount of prep and cooking time, with a classification of the recipe difficulty from easy to hard.\n",
    "\n",
    "{format_instructions}\n",
    "\n",
    "Human: Generate a recipe for a dish called {human_input}\n",
    "AI: \"\"\"\n",
    "\n",
    "prompt = PromptTemplate(\n",
    "    template=template,\n",
    "    input_variables=[\"human_input\"],\n",
    "    partial_variables={\"format_instructions\": parser.get_format_instructions()}\n",
    ")\n",
    "\n",
    "# Set up the language model chain\n",
    "llm_chain = prompt | llm | parser"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "994bfa88-53c8-4083-bee2-fd9a600bddd3",
   "metadata": {
    "id": "994bfa88-53c8-4083-bee2-fd9a600bddd3"
   },
   "outputs": [],
   "source": [
    "# Let's request user input for the recipe name\n",
    "print(\"Provide a recipe name, e.g. llama 3 spice omelette\")\n",
    "recipe_title = input()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "20bcc3d5-7ca6-4bbc-be09-06d442d907e1",
   "metadata": {},
   "outputs": [],
   "source": [
    "from devtools import pprint\n",
    "\n",
    "# Invoke the LLM chain, extract the JSON and print the response\n",
    "recipe_dict = llm_chain.invoke({\"human_input\": recipe_title})\n",
    "recipe_dict = json.loads(recipe_dict.json())"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "a89efaa2-dfa9-4693-a49f-7e08bc94104d",
   "metadata": {
    "id": "a89efaa2-dfa9-4693-a49f-7e08bc94104d"
   },
   "source": [
    "# 2. Generate images that narrate the recipe with SDXL hosted on OctoAI\n",
    "\n",
    "In this section we'll rely on OctoAI's SDK to invoke the image generation endpoint powered by Stable Diffusion XL. Now that we have our recipe stored in JSON object we'll generate the following images:\n",
    "* A set of images for every ingredient used in the recipe, stored in `ingredient_images`\n",
    "* A set of images for every step in the recipe, stored in `step_images`\n",
    "* An image of the final dish, stored under `final_dish_still`\n",
    "\n",
    "We rely on the OctoAI Python SDK to generate those images with SDXL. You just need to instantiate the OctoAI ImageGenerator with your OctoAI API token, then invoke the `generate` method for each set of images you want to produce. You'll need to pass in the following arguments:\n",
    "* `engine` which selects what model to use - we use SDXL here\n",
    "* `prompt` which describes the image we want to generate\n",
    "* `negative_prompt` which provides image attributes/keywords that we absolutely don't want to have in our final image\n",
    "* `width`, `height` which helps us specify a resolution and aspect ratio of the final image\n",
    "* `sampler` which is what's used in every denoising step, you can read more about them [here](https://stable-diffusion-art.com/samplers/)\n",
    "* `steps` which specifies the number of denoising steps to obtain the final image\n",
    "* `cfg_scale` which specifies the configuration scale, which defines how closely to adhere to the original prompt\n",
    "* `num_images` which specifies the number of images to generate at once\n",
    "* `use_refiner` which when turned on lets us use the SDXL refiner model which enhances the quality of the image\n",
    "* `high_noise_frac` which specifies the ratio of steps to perform with the base SDXL model vs. refiner model\n",
    "* `style_preset` which specifies a stype preset to apply to the negative and positive prompts, you can read more about them [here](https://stable-diffusion-art.com/sdxl-styles/)\n",
    "\n",
    "To read more about the API and what options are supported in OctoAI, head over to this [link](https://octoai.cloud/media/image-gen?mode=api).\n",
    "\n",
    "**Note:** Looking to use a specific SDXL checkpoint, LoRA or controlnet for your image generation needs? You can manage and upload your own collection of stable diffusion assets via the [OctoAI CLI](https://octo.ai/docs/media-gen-solution/uploading-a-custom-asset-to-the-octoai-asset-library), or via the [web UI](https://octoai.cloud/assets?isPublic=false). You can then invoke your own [checkpoint](https://octo.ai/docs/media-gen-solution/customizations/checkpoints), [LoRA](https://octo.ai/docs/media-gen-solution/customizations/loras), [textual inversion](https://octo.ai/docs/media-gen-solution/customizations/textual-inversions), or [controlnet](https://octo.ai/docs/media-gen-solution/customizations/controlnets) via the `ImageGenerator` API."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "c2efe30c-05ee-4c79-9242-71f964c5ba10",
   "metadata": {
    "id": "c2efe30c-05ee-4c79-9242-71f964c5ba10"
   },
   "outputs": [],
   "source": [
    "from PIL import Image\n",
    "from io import BytesIO\n",
    "from base64 import b64encode, b64decode\n",
    "from octoai import client as octo_client\n",
    "\n",
    "# Instantiate the OctoAI SDK image generator\n",
    "octo_client = octo_client.OctoAI(api_key=OCTOAI_API_TOKEN)\n",
    "\n",
    "# Ingredients stills dictionary (Ingredient -> Image)\n",
    "ingredient_images = {}\n",
    "# Recipe steps stills dictionary (Step -> Image)\n",
    "step_images = {}\n",
    "\n",
    "# Iterate through ingredients and recipe steps\n",
    "for recipe_list, dst in zip([\"ingredient_list\", \"recipe_steps\"], [ingredient_images, step_images]):\n",
    "  for element in recipe_dict[recipe_list]:\n",
    "      # We do some simple prompt engineering to achieve a consistent style\n",
    "      prompt = \"RAW photo, Fujifilm XT, clean bright modern kitchen photograph, ({})\".format(element[\"illustration\"])\n",
    "      # The parameters below can be tweaked as needed, the resolution is intentionally set to portrait mode\n",
    "      image_resp = octo_client.image_gen.generate_sdxl(\n",
    "          prompt=prompt,\n",
    "          negative_prompt=\"Blurry photo, distortion, low-res, poor quality, watermark\",\n",
    "          width=768,\n",
    "          height=1344,\n",
    "          num_images=1,\n",
    "          sampler=\"DPM_PLUS_PLUS_2M_KARRAS\",\n",
    "          steps=30,\n",
    "          cfg_scale=12,\n",
    "          use_refiner=True,\n",
    "          high_noise_frac=0.8,\n",
    "          style_preset=\"Food Photography\",\n",
    "      )\n",
    "      image_str = image_resp.images[0].image_b64\n",
    "      image = Image.open(BytesIO(b64decode(image_str)))\n",
    "      dst[element[\"item\"]] = image\n",
    "      display(dst[element[\"item\"]])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "d8fa0772-4578-42bd-bc8c-34fe1026f975",
   "metadata": {
    "id": "d8fa0772-4578-42bd-bc8c-34fe1026f975"
   },
   "outputs": [],
   "source": [
    "# Final dish in all of its glory\n",
    "prompt = \"RAW photo, Fujifilm XT, clean bright modern kitchen photograph, professionally presented ({})\".format(recipe_dict[\"dish_name\"])\n",
    "image_resp = octo_client.image_gen.generate_sdxl(\n",
    "    prompt=prompt,\n",
    "    negative_prompt=\"Blurry photo, distortion, low-res, poor quality\",\n",
    "    width=768,\n",
    "    height=1344,\n",
    "    num_images=1,\n",
    "    sampler=\"DPM_PLUS_PLUS_2M_KARRAS\",\n",
    "    steps=30,\n",
    "    cfg_scale=12,\n",
    "    use_refiner=True,\n",
    "    high_noise_frac=0.8,\n",
    "    style_preset=\"Food Photography\",\n",
    ")\n",
    "image_str = image_resp.images[0].image_b64\n",
    "final_dish_still = Image.open(BytesIO(b64decode(image_str)))\n",
    "display(final_dish_still)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "fd988959-366b-4434-ab3f-04de45b80d2a",
   "metadata": {
    "id": "fd988959-366b-4434-ab3f-04de45b80d2a"
   },
   "source": [
    "# 3. Animate the images with Stable Video Diffusion 1.1 hosted on OctoAI\n",
    "\n",
    "In this section we'll rely once again on OctoAI's SDK to invoke the image animation endpoint powered by Stable Video Diffusion 1.1. In the last section we generated a handful of images which we're now going to animate:\n",
    "* A set of videos for every ingredient used in the recipe, stored in `ingredient_videos`\n",
    "* A set of videos for every step in the recipe, stored in `steps_videos`\n",
    "* An videos of the final dish, stored under `final_dish_video`\n",
    "\n",
    "From these we'll be generating 25-frame videos using the image animation API in OctoAI's Python SDK. You just need to instantiate the OctoAI VideoGenerator with yout OctoAI API token, then invoke the `generate` method for each animation you want to produce. You'll need to pass in the following arguments:\n",
    "* `engine` which selects what model to use - we use SVD here\n",
    "* `image` which encodes the input image we want to animate as a base64 string\n",
    "* `steps` which specifies the number of denoising steps to obtain each frame in the video\n",
    "* `cfg_scale` which specifies the configuration scale, which defines how closely to adhere to the image description\n",
    "* `fps` which specifies the numbers of frames per second\n",
    "* `motion scale` which indicates how much motion should be in the generated animation\n",
    "* `noise_aug_strength` which specifies how much noise to add to the initial images - a higher value encourages more creative videos\n",
    "* `num_video` which represents how many output animations to generate\n",
    "\n",
    "To read more about the API and what options are supported in OctoAI, head over to this [link](https://octoai.cloud/media/animate?mode=api).\n",
    "\n",
    "**Note:** this step will take a few minutes, as each video takes about 30s to generate and that we're generating each video sequentially. For faster execution time all of these video generation calls can be done asynchronously, or in multiple threads."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "4ee69ac0-8cd1-4c5b-8090-e2b1aba9bbd9",
   "metadata": {
    "id": "4ee69ac0-8cd1-4c5b-8090-e2b1aba9bbd9"
   },
   "outputs": [],
   "source": [
    "# We'll need this helper to convert PIL images into a base64 encoded string\n",
    "def image_to_base64(image: Image) -> str:\n",
    "  buffered = BytesIO()\n",
    "  image.save(buffered, format=\"JPEG\")\n",
    "  img_b64 = b64encode(buffered.getvalue()).decode(\"utf-8\")\n",
    "  return img_b64"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "1d1872d4-4029-41a2-a1dc-7c922075fac1",
   "metadata": {
    "id": "1d1872d4-4029-41a2-a1dc-7c922075fac1"
   },
   "outputs": [],
   "source": [
    "# Generate a video for the final dish presentation (it'll be used in the intro and at the end)\n",
    "video_resp = octo_client.image_gen.generate_svd(\n",
    "    image=image_to_base64(final_dish_still),\n",
    "    steps=25,\n",
    "    cfg_scale=3,\n",
    "    fps=6,\n",
    "    motion_scale=0.5,\n",
    "    noise_aug_strength=0.02,\n",
    "    num_videos=1,\n",
    ")\n",
    "final_dish_video = video_resp.videos[0]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "0b7bc445-1710-4a7c-b13d-55a62b9fd24b",
   "metadata": {},
   "outputs": [],
   "source": [
    "from IPython.display import HTML\n",
    "from moviepy.editor import *\n",
    "\n",
    "# This is a helper function that gets the video dumped locally\n",
    "def getVideoFileClip(video, fn):\n",
    "    with open(fn, 'wb') as wfile:\n",
    "        wfile.write(b64decode(video.video))\n",
    "    vfc = VideoFileClip(fn)\n",
    "    return vfc\n",
    "\n",
    "# View the video to confirm\n",
    "getVideoFileClip(final_dish_video, \"final_dish.mp4\")\n",
    "mp4 = open('final_dish.mp4','rb').read()\n",
    "data_url = \"data:video/mp4;base64,\" + b64encode(mp4).decode()\n",
    "HTML(\"\"\"\n",
    "<video width=400 controls>\n",
    "      <source src=\"%s\" type=\"video/mp4\">\n",
    "</video>\n",
    "\"\"\" % data_url)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "1ad00f53-1176-419e-b383-c313105e248a",
   "metadata": {
    "id": "1ad00f53-1176-419e-b383-c313105e248a"
   },
   "outputs": [],
   "source": [
    "# Generate the ingredients videos (doing synchronously, so it's gonna be slow)\n",
    "# TODO: parallelize to make this run faster!\n",
    "\n",
    "# Dictionary that stores the videos for ingredients (ingredient -> video)\n",
    "ingredient_videos = {}\n",
    "# Dictionary that stores the videos for recipe steps (step -> video)\n",
    "steps_videos = {}\n",
    "\n",
    "# Iterate through ingredients and recipe steps\n",
    "for recipe_list, src, dst in zip([\"ingredient_list\", \"recipe_steps\"], [ingredient_images, step_images], [ingredient_videos, steps_videos]):\n",
    "    # Iterate through each ingredient / step\n",
    "    for item in recipe_dict[recipe_list]:\n",
    "        key = item[\"item\"]\n",
    "        # Retrieve the image from the ingredient_images dict\n",
    "        still = src[key]\n",
    "        # Generate a video with the OctoAI video generator\n",
    "        video_resp = octo_client.image_gen.generate_svd(\n",
    "            image=image_to_base64(still),\n",
    "            steps=25,\n",
    "            cfg_scale=3,\n",
    "            fps=6,\n",
    "            motion_scale=0.5,\n",
    "            noise_aug_strength=0.02,\n",
    "            num_videos=1,\n",
    "        )\n",
    "        dst[key] = video_resp.videos[0]"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "6a9613be-1433-43d9-84c9-4870cba4585a",
   "metadata": {
    "id": "6a9613be-1433-43d9-84c9-4870cba4585a"
   },
   "source": [
    "# 4. Create a video montage with MoviePy\n",
    "\n",
    "In this section we're going to rely on the MoviePy library to create a montage of the videos.\n",
    "\n",
    "For each short animation (dish, ingredients, steps), we also have corresponding text that goes with it from the original `recipe_dict` JSON object. This allows us to generate a montage captions.\n",
    "\n",
    "Each video having 25 frames and being a 6FPS video, they will last 4.167s each. Because the ingredients list can be rather long, we crop each video to a duration of 2s to keep the flow of the video going. For the steps video, we play 4s of each clip given that we need to give the viewer time to read the instructions.\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "2438a285-a913-4f82-80fb-e993809f5a09",
   "metadata": {
    "id": "2438a285-a913-4f82-80fb-e993809f5a09"
   },
   "outputs": [],
   "source": [
    "from IPython.display import Video\n",
    "from moviepy.video.tools.subtitles import SubtitlesClip\n",
    "import textwrap\n",
    "\n",
    "# Video collage\n",
    "collage = []\n",
    "\n",
    "# To prepare the closed caption of the video, we define\n",
    "# two durations: short duration (2.0s) and long duration (4.0s)\n",
    "short_duration = 2\n",
    "long_duration = 4\n",
    "# We keep track of the time ellapsed\n",
    "t = 0\n",
    "# This sub list will contain tuples in the following form:\n",
    "# ((t_start, t_end), \"caption\")\n",
    "subs = []\n",
    "\n",
    "# Let's create the intro clip presenting the final dish\n",
    "vfc = getVideoFileClip(final_dish_video, \"final_dish.mp4\")\n",
    "collage.append(vfc.subclip(0, long_duration))\n",
    "# Add the subtitle which provides the name of the dish, along with prep time, cook time and difficulty\n",
    "subs.append(((t, t+long_duration), \"{} Recipe\\nPrep: {}min\\nCook: {}min\\nDifficulty: {}\".format(\n",
    "    recipe_dict[\"dish_name\"].title(), recipe_dict[\"prep_time\"], recipe_dict[\"cook_time\"], recipe_dict[\"difficulty\"]))\n",
    ")\n",
    "t += long_duration\n",
    "\n",
    "# Go through the ingredients list to stich together the ingredients clip\n",
    "for idx, ingredient in enumerate(recipe_dict[\"ingredient_list\"]):\n",
    "    # Write the video to disk and load it as a VideoFileClip\n",
    "    key = ingredient[\"item\"]\n",
    "    vfc = getVideoFileClip(ingredient_videos[key], 'clip_ingredient_{}.mp4'.format(idx))\n",
    "    collage.append(vfc.subclip(0, short_duration))\n",
    "    # Add the subtitle which just provides each ingredient\n",
    "    subs.append(((t, t+short_duration), \"Ingredients:\\n{}\".format(textwrap.fill(key, 35))))\n",
    "    t += short_duration\n",
    "\n",
    "# Go through the recipe steps to stitch together each step of the recipe video\n",
    "for idx, step in enumerate(recipe_dict[\"recipe_steps\"]):\n",
    "    # Write the video to disk and load it as a VideoFileClip\n",
    "    key = step[\"item\"]\n",
    "    vfc = getVideoFileClip(steps_videos[key], 'clip_step_{}.mp4'.format(idx))\n",
    "    collage.append(vfc.subclip(0, long_duration))\n",
    "    # Add the subtitle which just provides each recipe step\n",
    "    subs.append(((t, t+long_duration), \"Step {}:\\n{}\".format(idx, textwrap.fill(key, 35))))\n",
    "    t += long_duration\n",
    "\n",
    "# Add the outtro clip\n",
    "vfc = VideoFileClip('final_dish.mp4'.format(idx))\n",
    "collage.append(vfc.subclip(0, long_duration))\n",
    "# Add the subtitle: Enjoy your {dish_name}\n",
    "subs.append(((t, t+long_duration), \"Enjoy your {}!\".format(recipe_title.title())))\n",
    "t += long_duration"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "aa83e564-ad4a-46ce-afd0-9bc617bacb36",
   "metadata": {
    "id": "aa83e564-ad4a-46ce-afd0-9bc617bacb36"
   },
   "outputs": [],
   "source": [
    "# Concatenate the clips into one initial collage\n",
    "final_clip = concatenate_videoclips(collage)\n",
    "final_clip.to_videofile(\"collage.mp4\", fps=vfc.fps)\n",
    "\n",
    "# Preview the video\n",
    "mp4 = open('collage.mp4','rb').read()\n",
    "data_url = \"data:video/mp4;base64,\" + b64encode(mp4).decode()\n",
    "HTML(\"\"\"\n",
    "<video width=400 controls>\n",
    "      <source src=\"%s\" type=\"video/mp4\">\n",
    "</video>\n",
    "\"\"\" % data_url)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "3ed20491-2c83-43e1-8804-883581efdb9e",
   "metadata": {
    "id": "3ed20491-2c83-43e1-8804-883581efdb9e"
   },
   "outputs": [],
   "source": [
    "# Add subtitles to the collage\n",
    "generator = lambda txt: TextClip(\n",
    "    txt,\n",
    "    font='Century-Schoolbook-Roman',\n",
    "    fontsize=30,\n",
    "    color='white',\n",
    "    stroke_color='black',\n",
    "    stroke_width=1.5,\n",
    "    method='label',\n",
    "    transparent=True\n",
    ")\n",
    "subtitles = SubtitlesClip(subs, generator)\n",
    "result = CompositeVideoClip([final_clip, subtitles.margin(bottom=70, opacity=0).set_pos(('center','bottom'))])\n",
    "result.write_videofile(\"collage_sub.mp4\", fps=vfc.fps)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "db3f64fb-a97f-4228-bfef-646e672b13a5",
   "metadata": {
    "id": "db3f64fb-a97f-4228-bfef-646e672b13a5"
   },
   "outputs": [],
   "source": [
    "# Now add a soundtrack: you can browse https://pixabay.com for a track you like\n",
    "# I'm downloading a track called \"once in paris\" by artist pumpupthemind\n",
    "import subprocess\n",
    "\n",
    "subprocess.run([\"wget\", \"-O\", \"audio_track.mp3\", \"http://cdn.pixabay.com/download/audio/2023/09/29/audio_0eaceb1002.mp3\"])\n",
    "\n",
    "# Add the soundtrack to the video\n",
    "videoclip = VideoFileClip(\"collage_sub.mp4\")\n",
    "audioclip = AudioFileClip(\"audio_track.mp3\").subclip(0, videoclip.duration)\n",
    "video = videoclip.set_audio(audioclip)\n",
    "video.write_videofile(\"collage_sub_sound.mp4\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "38ab24e5-064b-4f7a-a23a-2e194993f9a4",
   "metadata": {
    "id": "38ab24e5-064b-4f7a-a23a-2e194993f9a4"
   },
   "outputs": [],
   "source": [
    "# Enjoy your video!\n",
    "mp4 = open('collage_sub_sound.mp4','rb').read()\n",
    "data_url = \"data:video/mp4;base64,\" + b64encode(mp4).decode()\n",
    "HTML(\"\"\"\n",
    "<video width=400 controls>\n",
    "      <source src=\"%s\" type=\"video/mp4\">\n",
    "</video>\n",
    "\"\"\" % data_url)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "7wZvX3h_njVw",
   "metadata": {
    "id": "7wZvX3h_njVw"
   },
   "source": [
    "**Authors**\n",
    "- Thierry Moreau, OctoAI - tmoreau@octo.ai\n",
    "- Pedro Toruella, OctoAI - ptoruella@octo.ai\n",
    "\n",
    "Join [OctoAI Discord](https://discord.com/invite/rXTPeRBcG7)"
   ]
  }
 ],
 "metadata": {
  "accelerator": "GPU",
  "colab": {
   "gpuType": "T4",
   "provenance": []
  },
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.12.3"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}