---
comments: true
description: Discover the enhanced features of Ultralytics HUB Pro Plan including 200GB storage, cloud training, and more. Learn how to upgrade and manage your account balance.
keywords: Ultralytics HUB, Pro Plan, upgrade guide, cloud training, storage, inference API, team collaboration, account balance
---
# Ultralytics HUB Pro
[Ultralytics HUB](https://www.ultralytics.com/hub) offers the Pro Plan as a monthly or annual subscription.
The Pro Plan provides early access to upcoming features and includes enhanced benefits:
- 200GB of storage, compared to the standard 20GB.
- Access to our [Cloud Training](./cloud-training.md).
- Access to our [Dedicated Inference API](./inference-api.md#dedicated-inference-api).
- Increased rate limits for our [Shared Inference API](./inference-api.md#shared-inference-api) (a usage sketch follows this list).
- Collaboration features for [teams](./teams.md).
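For orientation, here is a minimal sketch of calling the Shared Inference API from Python. The endpoint, headers, and parameters are assumptions based on the [Inference API](./inference-api.md) guide, and the API key, model URL, and image path are placeholders:
!!! example "Shared Inference API sketch"
=== "Python"
```python
import requests

# Placeholder values -- replace with your own API key and model URL
url = "https://predict.ultralytics.com"  # assumed endpoint, see the Inference API guide
headers = {"x-api-key": "YOUR_API_KEY"}
data = {"model": "https://hub.ultralytics.com/models/MODEL_ID", "imgsz": 640, "conf": 0.25}

# Send an image for inference and print the JSON response
with open("path/to/image.jpg", "rb") as f:
    response = requests.post(url, headers=headers, data=data, files={"file": f})
response.raise_for_status()
print(response.json())
```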
## Upgrade
You can upgrade to the Pro Plan from the [Billing & License](https://hub.ultralytics.com/settings?tab=billing) tab on the [Settings](https://hub.ultralytics.com/settings) page by clicking on the **Upgrade** button.
![Ultralytics HUB screenshot of the Settings page Billing & License tab with an arrow pointing to the Upgrade button](https://github.com/ultralytics/docs/releases/download/0/ultralytics-hub-settings-upgrade-button.avif)
Next, select the Pro Plan.
![Ultralytics HUB screenshot of the Upgrade dialog with an arrow pointing to the Select Plan button](https://github.com/ultralytics/docs/releases/download/0/hub-pro-upgrade-select-plan.avif)
!!! tip
You can save 20% if you choose the annual Pro Plan.
![Ultralytics HUB screenshot of the Upgrade dialog with an arrow pointing to the Save 20% toggle and one to the Select Plan button](https://github.com/ultralytics/docs/releases/download/0/hub-pro-upgrade-save-20-toggle.avif)
Fill in your details during checkout.
![Ultralytics HUB screenshot of the Checkout with an arrow pointing to the checkbox for saving the payment information for future purchases](https://github.com/ultralytics/docs/releases/download/0/hub-pro-upgrade-save-payment-info.avif)
!!! tip
We recommend ticking the checkbox to save your payment information for future purchases, facilitating easier top-ups to your account balance.
That's it!
![Ultralytics HUB screenshot of the Payment Successful dialog](https://github.com/ultralytics/docs/releases/download/0/payment-successful-dialog.avif)
## Account Balance
The account balance is used to pay for [Ultralytics Cloud Training](./cloud-training.md) resources.
To top up your account balance, click on the **Top-Up** button.
![Ultralytics HUB screenshot of the Settings page Billing & License tab with an arrow pointing to the Top-Up button](https://github.com/ultralytics/docs/releases/download/0/hub-pro-account-balance-top-up-button.avif)
Next, set the amount you want to top up.
![Ultralytics HUB screenshot of the Checkout with an arrow pointing to the Change amount button](https://github.com/ultralytics/docs/releases/download/0/hub-pro-account-balance-change-amount.avif)
That's it!
![Ultralytics HUB screenshot of the Payment Successful dialog](https://github.com/ultralytics/docs/releases/download/0/payment-successful-dialog-1.avif)
---
comments: true
description: Optimize your model management with Ultralytics HUB Projects. Easily create, share, edit, and compare models for efficient development.
keywords: Ultralytics HUB, model management, create project, share project, edit project, delete project, compare models, reorder models, transfer models
---
# Ultralytics HUB Projects
[Ultralytics HUB](https://www.ultralytics.com/hub) projects provide an effective solution for consolidating and managing your models. If you are working with several models that perform similar tasks or have related purposes, [Ultralytics HUB](https://www.ultralytics.com/hub) projects allow you to group these models together.
This creates a unified and organized workspace that facilitates easier model management, comparison, and development. Grouping similar models or successive iterations together also enables rapid benchmarking, as you can compare their effectiveness. This can lead to faster, more insightful iterative development and refinement of your models.
<p align="center">
<iframe loading="lazy" width="720" height="405" src="https://www.youtube.com/embed/Gc6K5eKrTNQ"
title="YouTube video player" frameborder="0"
allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share"
allowfullscreen>
</iframe>
<br>
<strong>Watch:</strong> Train YOLOv8 Pose Model on Tiger-Pose Dataset Using Ultralytics HUB
</p>
## Create Project
Navigate to the [Projects](https://hub.ultralytics.com/projects) page by clicking on the **Projects** button in the sidebar and click on the **Create Project** button on the top right of the page.
![Ultralytics HUB screenshot of the Projects page with an arrow pointing to the Projects button in the sidebar and one to the Create Project button](https://github.com/ultralytics/docs/releases/download/0/ultralytics-hub-create-project-page.avif)
??? tip
You can create a project directly from the [Home](https://hub.ultralytics.com/home) page.
![Ultralytics HUB screenshot of the Home page with an arrow pointing to the Create Project card](https://github.com/ultralytics/docs/releases/download/0/hub-create-project-card.avif)
This action will trigger the **Create Project** dialog, opening up a suite of options for tailoring your project to your needs.
Type the name of your project in the _Project name_ field or keep the default name and finalize the project creation with a single click.
You have the additional option to enrich your project with a description and a unique image, enhancing its recognizability on the [Projects](https://hub.ultralytics.com/projects) page.
When you're happy with your project configuration, click **Create**.
![Ultralytics HUB screenshot of the Create Project dialog with an arrow pointing to the Create button](https://github.com/ultralytics/docs/releases/download/0/hub-create-project-dialog.avif)
After your project is created, you will be able to access it from the [Projects](https://hub.ultralytics.com/projects) page.
![Ultralytics HUB screenshot of the Projects page with an arrow pointing to one of the projects](https://github.com/ultralytics/docs/releases/download/0/ultralytics-hub-projects-page.avif)
Next, [train a model](./models.md#train-model) inside your project.
![Ultralytics HUB screenshot of the Project page with an arrow pointing to the Train Model button](https://github.com/ultralytics/docs/releases/download/0/hub-train-model-button.avif)
## Share Project
!!! info
[Ultralytics HUB](https://www.ultralytics.com/hub)'s sharing functionality provides a convenient way to share projects with others. This feature is designed to accommodate both existing [Ultralytics HUB](https://www.ultralytics.com/hub) users and those who have yet to create an account.
??? note
You have control over the general access of your projects.
You can choose to set the general access to "Private", in which case, only you will have access to it. Alternatively, you can set the general access to "Unlisted" which grants viewing access to anyone who has the direct link to the project, regardless of whether they have an [Ultralytics HUB](https://www.ultralytics.com/hub) account or not.
Navigate to the Project page of the project you want to share, open the project actions dropdown and click on the **Share** option. This action will trigger the **Share Project** dialog.
![Ultralytics HUB screenshot of the Project page with an arrow pointing to the Share option](https://github.com/ultralytics/docs/releases/download/0/ultralytics-hub-share-project-dialog.avif)
??? tip
You can share a project directly from the [Projects](https://hub.ultralytics.com/projects) page.
![Ultralytics HUB screenshot of the Projects page with an arrow pointing to the Share option of one of the projects](https://github.com/ultralytics/docs/releases/download/0/ultralytics-hub-share-project-option.avif)
Set the general access to "Unlisted" and click **Save**.
![Ultralytics HUB screenshot of the Share Project dialog with an arrow pointing to the dropdown and one to the Save button](https://github.com/ultralytics/docs/releases/download/0/hub-share-project-dialog.avif)
!!! warning
When changing the general access of a project, the general access of the models inside the project will be changed as well.
Now, anyone who has the direct link to your project can view it.
??? tip
You can easily click on the project's link shown in the **Share Project** dialog to copy it.
![Ultralytics HUB screenshot of the Share Project dialog with an arrow pointing to the project's link](https://github.com/ultralytics/docs/releases/download/0/hub-share-project-dialog-arrow.avif)
## Edit Project
Navigate to the Project page of the project you want to edit, open the project actions dropdown and click on the **Edit** option. This action will trigger the **Update Project** dialog.
![Ultralytics HUB screenshot of the Project page with an arrow pointing to the Edit option](https://github.com/ultralytics/docs/releases/download/0/hub-edit-project-1.avif)
??? tip
You can edit a project directly from the [Projects](https://hub.ultralytics.com/projects) page.
![Ultralytics HUB screenshot of the Projects page with an arrow pointing to the Edit option of one of the projects](https://github.com/ultralytics/docs/releases/download/0/hub-edit-project-2.avif)
Apply the desired modifications to your project and then confirm the changes by clicking **Save**.
![Ultralytics HUB screenshot of the Update Project dialog with an arrow pointing to the Save button](https://github.com/ultralytics/docs/releases/download/0/hub-edit-project-save-button.avif)
## Delete Project
Navigate to the Project page of the project you want to delete, open the project actions dropdown and click on the **Delete** option. This action will delete the project.
![Ultralytics HUB screenshot of the Project page with an arrow pointing to the Delete option](https://github.com/ultralytics/docs/releases/download/0/hub-delete-project-option.avif)
??? tip
You can delete a project directly from the [Projects](https://hub.ultralytics.com/projects) page.
![Ultralytics HUB screenshot of the Projects page with an arrow pointing to the Delete option of one of the projects](https://github.com/ultralytics/docs/releases/download/0/hub-delete-project-option-1.avif)
!!! warning
When deleting a project, the models inside the project will be deleted as well.
!!! note
If you change your mind, you can restore the project from the [Trash](https://hub.ultralytics.com/trash) page.
![Ultralytics HUB screenshot of the Trash page with an arrow pointing to Trash button in the sidebar and one to the Restore option of one of the projects](https://github.com/ultralytics/docs/releases/download/0/hub-delete-project-restore-option.avif)
## Compare Models
Navigate to the Project page of the project where the models you want to compare are located. To use the model comparison feature, click on the **Charts** tab.
![Ultralytics HUB screenshot of the Project page with an arrow pointing to the Charts tab](https://github.com/ultralytics/docs/releases/download/0/hub-compare-models-1.avif)
This will display all the relevant charts. Each chart corresponds to a different metric and contains the performance of each model for that metric. The models are represented by different colors, and you can hover over each data point to get more information.
![Ultralytics HUB screenshot of the Charts tab inside the Project page](https://github.com/ultralytics/docs/releases/download/0/ultralytics-hub-charts-tab.avif)
??? tip
Each chart can be enlarged for better visualization.
![Ultralytics HUB screenshot of the Charts tab inside the Project page with an arrow pointing to the expand icon](https://github.com/ultralytics/docs/releases/download/0/hub-compare-models-expand-icon.avif)
![Ultralytics HUB screenshot of the Charts tab inside the Project page with one of the charts expanded](https://github.com/ultralytics/docs/releases/download/0/hub-compare-models-expanded-chart.avif)
Furthermore, to analyze the data more closely, you can use the zoom feature.
![Ultralytics HUB screenshot of the Charts tab inside the Project page with one of the charts expanded and zoomed](https://github.com/ultralytics/docs/releases/download/0/hub-charts-tab-expanded-zoomed.avif)
??? tip
You have the flexibility to customize your view by selectively hiding certain models. This feature allows you to concentrate on the models of interest.
![Ultralytics HUB screenshot of the Charts tab inside the Project page with an arrow pointing to the hide/unhide icon of one of the model](https://github.com/ultralytics/docs/releases/download/0/hub-compare-models-hide-icon.avif)
## Reorder Models
??? note
Ultralytics HUB's reordering functionality works only inside projects you own.
Navigate to the Project page of the project where the models you want to reorder are located. Click on the designated reorder icon of the model you want to move and drag it to the desired location.
![Ultralytics HUB screenshot of the Project page with an arrow pointing to the reorder icon](https://github.com/ultralytics/docs/releases/download/0/ultralytics-hub-reorder-models.avif)
## Transfer Models
Navigate to the Project page of the project where the model you want to move is located, open the project actions dropdown and click on the **Transfer** option. This action will trigger the **Transfer Model** dialog.
![Ultralytics HUB screenshot of the Project page with an arrow pointing to the Transfer option of one of the models](https://github.com/ultralytics/docs/releases/download/0/hub-transfer-models-1.avif)
??? tip
You can also transfer a model directly from the [Models](https://hub.ultralytics.com/models) page.
![Ultralytics HUB screenshot of the Models page with an arrow pointing to the Transfer option of one of the models](https://github.com/ultralytics/docs/releases/download/0/hub-transfer-models-2.avif)
Select the project you want to transfer the model to and click **Save**.
![Ultralytics HUB screenshot of the Transfer Model dialog with an arrow pointing to the dropdown and one to the Save button](https://github.com/ultralytics/docs/releases/download/0/hub-transfer-models-dialog.avif)
---
comments: true
description: Get started with Ultralytics HUB! Learn to upload datasets, train YOLO models, and manage projects easily with our user-friendly platform.
keywords: Ultralytics HUB, Quickstart, YOLO models, dataset upload, project management, train models, machine learning
---
# Ultralytics HUB Quickstart
[Ultralytics HUB](https://www.ultralytics.com/hub) is designed to be user-friendly and intuitive, allowing users to quickly upload their datasets and train new YOLO models. It also offers a range of pre-trained models to choose from, making it extremely easy for users to get started. Once a model is trained, it can be effortlessly previewed in the [Ultralytics HUB App](app/index.md) before being deployed for real-time classification, [object detection](https://www.ultralytics.com/glossary/object-detection), and [instance segmentation](https://www.ultralytics.com/glossary/instance-segmentation) tasks.
<p align="center">
<iframe loading="lazy" width="720" height="405" src="https://www.youtube.com/embed/lveF9iCMIzc?si=_Q4WB5kMB5qNe7q6"
title="YouTube video player" frameborder="0"
allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share"
allowfullscreen>
</iframe>
<br>
<strong>Watch:</strong> Train Your Custom YOLO Models In A Few Clicks with Ultralytics HUB
</p>
## Get Started
[Ultralytics HUB](https://www.ultralytics.com/hub) offers a variety of easy signup options. You can register and log in using your Google, Apple, or GitHub accounts, or simply with your email address.
![Ultralytics HUB screenshot of the Signup page](https://github.com/ultralytics/docs/releases/download/0/ultralytics-hub-signup-page.avif)
During the signup, you will be asked to complete your profile.
![Ultralytics HUB screenshot of the Signup page profile form](https://github.com/ultralytics/docs/releases/download/0/ultralytics-hub-signup-profile-form.avif)
??? tip
You can update your profile from the [Account](https://hub.ultralytics.com/settings?tab=account) tab on the [Settings](https://hub.ultralytics.com/settings) page.
![Ultralytics HUB screenshot of the Settings page Account tab with an arrow pointing to the Profile card](https://github.com/ultralytics/docs/releases/download/0/hub-settings-account-profile.avif)
## Home
After signing in, you will be directed to the [Home](https://hub.ultralytics.com/home) page of [Ultralytics HUB](https://www.ultralytics.com/hub), which provides a comprehensive overview, quick links, and updates.
The sidebar conveniently offers links to important modules of the platform, such as [Datasets](https://hub.ultralytics.com/datasets), [Projects](https://hub.ultralytics.com/projects), and [Models](https://hub.ultralytics.com/models).
![Ultralytics HUB screenshot of the Home page](https://github.com/ultralytics/docs/releases/download/0/hub-home.avif)
### Recent
You can easily search globally or directly access your last updated [Datasets](https://hub.ultralytics.com/datasets), [Projects](https://hub.ultralytics.com/projects), or [Models](https://hub.ultralytics.com/models) using the Recent card on the [Home](https://hub.ultralytics.com/home) page.
![Ultralytics HUB screenshot of the Home page with an arrow pointing to the Recent card](https://github.com/ultralytics/docs/releases/download/0/hub-recent-card.avif)
### Upload Dataset
You can upload a dataset directly from the [Home](https://hub.ultralytics.com/home) page.
![Ultralytics HUB screenshot of the Home page with an arrow pointing to the Upload Dataset card](https://github.com/ultralytics/docs/releases/download/0/ultralytics-hub-upload-dataset-card.avif)
Read more about [datasets](https://docs.ultralytics.com/hub/datasets/).
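Before uploading, you can optionally validate your dataset ZIP locally. Below is a minimal sketch, assuming the `check_dataset` helper from the `ultralytics` package; the ZIP path is a placeholder:
!!! example "Dataset check sketch"
=== "Python"
```python
from ultralytics.hub import check_dataset

# Validate a dataset ZIP locally before uploading to Ultralytics HUB
# (placeholder path; task can be "detect", "segment", "pose", or "classify")
check_dataset("path/to/dataset.zip", task="detect")
```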
### Create Project
You can create a project directly from the [Home](https://hub.ultralytics.com/home) page.
![Ultralytics HUB screenshot of the Home page with an arrow pointing to the Create Project card](https://github.com/ultralytics/docs/releases/download/0/hub-create-project-card.avif)
Read more about [projects](https://docs.ultralytics.com/hub/projects/).
### Train Model
You can train a model directly from the [Home](https://hub.ultralytics.com/home) page.
![Ultralytics HUB screenshot of the Home page with an arrow pointing to the Train Model card](https://github.com/ultralytics/docs/releases/download/0/ultralytics-hub-train-model-card.avif)
Read more about [models](https://docs.ultralytics.com/hub/models/).
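If you prefer to kick off training from your own Python environment instead of the web UI, the sketch below follows the pattern used for HUB-connected training; the API key and model ID are placeholders:
!!! example "HUB training sketch"
=== "Python"
```python
from ultralytics import YOLO, hub

# Authenticate with your HUB API key (placeholder)
hub.login("YOUR_API_KEY")

# Load a model created in HUB by its URL (placeholder model ID) and train it
model = YOLO("https://hub.ultralytics.com/models/MODEL_ID")
results = model.train()
```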
## Feedback
We value your feedback! Feel free to leave a review at any time.
![Ultralytics HUB screenshot of the Home page with an arrow pointing to the Feedback button](https://github.com/ultralytics/docs/releases/download/0/hub-feedback-button.avif)
![Ultralytics HUB screenshot of the Feedback dialog](https://github.com/ultralytics/docs/releases/download/0/ultralytics-hub-feedback-dialog.avif)
??? info
Only our team will see your feedback, and we will use it to improve our platform.
## Need Help?
If you encounter any issues or have questions, we're here to assist you.
You can report a bug, request a feature, or ask a question on <a href="https://github.com/ultralytics/hub/issues/new/choose">GitHub</a>.
!!! note
When reporting a bug, please include your Environment Details from the [Support](https://hub.ultralytics.com/support) page.
![Ultralytics HUB screenshot of the Support page with an arrow pointing to Support button in the sidebar and one to the Copy Environment Details button](https://github.com/ultralytics/docs/releases/download/0/hub-support-page.avif)
??? tip
You can join our <a href="https://discord.com/invite/ultralytics">Discord</a> community for questions and discussions!
---
comments: true
description: Discover how to manage and collaborate with team members using Ultralytics HUB Teams. Learn to create, edit, and share resources efficiently.
keywords: Ultralytics HUB, Teams, collaboration, team management, AI projects, resource sharing, Pro Plan, data sharing, project management
---
# Ultralytics HUB Teams
We're excited to introduce you to the new Teams feature within [Ultralytics HUB](https://www.ultralytics.com/hub) for our [Pro](./pro.md) users!
Here, you'll learn how to manage team members, share resources seamlessly, and collaborate efficiently on various projects.
!!! note
As this is a new feature, we're still in the process of developing and refining it to ensure it meets your needs.
## Create Team
!!! note
You need to [upgrade](./pro.md#upgrade) to the [Pro Plan](./pro.md) in order to create a team.
![Ultralytics HUB screenshot of the Settings page Teams tab with an arrow pointing to the Upgrade button](https://github.com/ultralytics/docs/releases/download/0/hub-create-team-settings-page.avif)
Navigate to the [Teams](https://hub.ultralytics.com/settings?tab=teams) page by clicking on the **Teams** tab on the [Settings](https://hub.ultralytics.com/settings) page and click on the **Create Team** button.
![Ultralytics HUB screenshot of the Teams page with an arrow pointing to the Create Team button](https://github.com/ultralytics/docs/releases/download/0/ultralytics-hub-create-team-button.avif)
This action will trigger the **Create Team** dialog.
Type the name of your team in the _Team name_ field or keep the default name and finalize the team creation with a single click.
You have the additional option to enrich your team with a description and a unique image, enhancing its recognizability on the [Teams](https://hub.ultralytics.com/settings?tab=teams) page.
When you're happy with your team configuration, click **Create**.
![Ultralytics HUB screenshot of the Create Team dialog with an arrow pointing to the Create button](https://github.com/ultralytics/docs/releases/download/0/hub-create-team-dialog.avif)
After your team is created, you will be able to access it from the [Teams](https://hub.ultralytics.com/settings?tab=teams) page.
![Ultralytics HUB screenshot of the Teams page with an arrow pointing to one of the teams](https://github.com/ultralytics/docs/releases/download/0/hub-teams-page-arrow-pointing-to-team.avif)
## Edit Team
Navigate to the [Teams](https://hub.ultralytics.com/settings?tab=teams) page, open the team actions dropdown of the team you want to edit and click on the **Edit** option. This action will trigger the **Update Team** dialog.
![Ultralytics HUB screenshot of the Teams page with an arrow pointing to the Edit option of one of the teams](https://github.com/ultralytics/docs/releases/download/0/hub-edit-team-1.avif)
Apply the desired modifications to your team and then confirm the changes by clicking **Save**.
![Ultralytics HUB screenshot of the Update Team dialog with an arrow pointing to the Save button](https://github.com/ultralytics/docs/releases/download/0/hub-update-team-save-button.avif)
## Delete Team
Navigate to the [Teams](https://hub.ultralytics.com/settings?tab=teams) page, open the team actions dropdown of the team you want to delete and click on the **Delete** option.
![Ultralytics HUB screenshot of the Teams page with an arrow pointing to the Delete option of one of the teams](https://github.com/ultralytics/docs/releases/download/0/hub-delete-team-option.avif)
!!! warning
When deleting a team, the team can't be restored.
## Invite Member
Navigate to the Team page of the team to which you want to add a new member and click on the **Invite Member** button. This action will trigger the **Invite Member** dialog.
![Ultralytics HUB screenshot of the Team page with an arrow pointing to the Invite Member button](https://github.com/ultralytics/docs/releases/download/0/hub-invite-member-button.avif)
Type the email address of the new member, select their role, and click **Invite**.
![Ultralytics HUB screenshot of the Invite Member dialog with an arrow pointing to the Invite button](https://github.com/ultralytics/docs/releases/download/0/hub-invite-member-dialog.avif)
![Ultralytics HUB screenshot of the Team page with a new member invited](https://github.com/ultralytics/docs/releases/download/0/hub-invite-member-3.avif)
??? tip
You can cancel the invite before the new member accepts it.
![Ultralytics HUB screenshot of the Team page with an arrow pointing to the Cancel Invite option of one of the members](https://github.com/ultralytics/docs/releases/download/0/hub-invite-member-cancel-invite.avif)
The **Pending** status disappears after the new member accepts the invite.
![Ultralytics HUB screenshot of the Team page with two members](https://github.com/ultralytics/docs/releases/download/0/team-page-two-members.avif)
??? tip
You can update a member's role at any time.
The **Admin** role allows inviting and removing members, as well as removing shared datasets or projects.
![Ultralytics HUB screenshot of the Team page with an arrow pointing to the Change Role option of one of the members](https://github.com/ultralytics/docs/releases/download/0/hub-invite-member-change-role.avif)
### Seats
The [Pro Plan](./pro.md) offers one free seat _(yours)_.
When a new unique member joins one of your teams, the number of seats increases, and you will be charged **$20 per month** for each seat, or **$200 per year** if you choose the annual plan.
Each unique member counts as one seat, regardless of how many teams they are in. For example, if John Doe is a member of 5 of your teams, he is using one seat.
When you remove a unique member from the last team they are a member of, the number of seats decreases. The remaining charge is prorated, and the resulting credit can be applied to adding other unique members, paying for the [Pro Plan](./pro.md), or topping up your [account balance](./pro.md#account-balance).
You can see the number of seats on the [Teams](https://hub.ultralytics.com/settings?tab=teams) page.
![Ultralytics HUB screenshot of the Teams page with an arrow pointing to the number of seats](https://github.com/ultralytics/docs/releases/download/0/ultralytics-hub-teams-number-of-seats.avif)
## Remove Member
Navigate to the Team page of the team from which you want to remove a member, open the member actions dropdown, and click on the **Remove** option.
![Ultralytics HUB screenshot of the Team page with an arrow pointing to the Remove option of one of the members](https://github.com/ultralytics/docs/releases/download/0/hub-remove-member.avif)
## Join Team
When you are invited to a team, you receive an in-app notification.
You can view your notifications by clicking on the **View** button on the **Notifications** card on the [Home](https://hub.ultralytics.com/home) page.
![Ultralytics HUB screenshot of the Home page with an arrow pointing to the View button on the Notifications card](https://github.com/ultralytics/docs/releases/download/0/hub-join-team-1.avif)
Alternatively, you can view your notifications by accessing the [Notifications](https://hub.ultralytics.com/notifications) page directly.
![Ultralytics HUB screenshot of the Notifications page with an arrow pointing to one of the notifications](https://github.com/ultralytics/docs/releases/download/0/notifications-page-arrow.avif)
You can decide whether to join the team on the Team page of the team to which you were invited.
If you want to join the team, click on the **Join Team** button.
![Ultralytics HUB screenshot of the Team page with an arrow pointing to the Join Team button](https://github.com/ultralytics/docs/releases/download/0/hub-join-team-button.avif)
If you don't want to join the team, click on the **Reject Invitation** button.
![Ultralytics HUB screenshot of the Team page with an arrow pointing to the Reject Invitation button](https://github.com/ultralytics/docs/releases/download/0/hub-join-team-reject-invitation.avif)
??? tip
You can join the team directly from the [Teams](https://hub.ultralytics.com/settings?tab=teams) page.
![Ultralytics HUB screenshot of the Teams page with an arrow pointing to the Join Team button of one of the teams](https://github.com/ultralytics/docs/releases/download/0/hub-join-team-button-1.avif)
## Leave Team
Navigate to the Team page of the team you want to leave and click on the **Leave Team** button.
![Ultralytics HUB screenshot of the Team page with an arrow pointing to the Leave Team button](https://github.com/ultralytics/docs/releases/download/0/hub-leave-team-1.avif)
## Share Dataset
Navigate to the Team page of the team you want to share your dataset with and click on the **Add Dataset** button.
![Ultralytics HUB screenshot of the Team page with an arrow pointing to the Add Dataset button](https://github.com/ultralytics/docs/releases/download/0/hub-share-dataset-button.avif)
Select the dataset you want to share with your team and click on the **Add** button.
![Ultralytics HUB screenshot of the Add Dataset to Team dialog with an arrow pointing to the Add button](https://github.com/ultralytics/docs/releases/download/0/hub-share-dataset-add-button.avif)
That's it! Your team now has access to your dataset.
![Ultralytics HUB screenshot of the Team page with a dataset shared](https://github.com/ultralytics/docs/releases/download/0/hub-share-dataset-team-page.avif)
??? tip
As a team owner or team admin, you can remove a shared dataset.
![Ultralytics HUB screenshot of the Team page with an arrow pointing to the Remove option of one of the datasets shared](https://github.com/ultralytics/docs/releases/download/0/hub-share-dataset-remove-option.avif)
## Share Project
Navigate to the Team page of the team you want to share your project with and click on the **Add Project** button.
![Ultralytics HUB screenshot of the Team page with an arrow pointing to the Add Project button](https://github.com/ultralytics/docs/releases/download/0/hub-share-project-button.avif)
Select the project you want to share with your team and click on the **Add** button.
![Ultralytics HUB screenshot of the Add Project to Team dialog with an arrow pointing to the Add button](https://github.com/ultralytics/docs/releases/download/0/hub-share-project-add-button.avif)
That's it! Your team now has access to your project.
![Ultralytics HUB screenshot of the Team page with a project shared](https://github.com/ultralytics/docs/releases/download/0/team-page-project-shared.avif)
??? tip
As a team owner or team admin, you can remove a shared project.
![Ultralytics HUB screenshot of the Team page with an arrow pointing to the Remove option of one of the projects shared](https://github.com/ultralytics/docs/releases/download/0/hub-share-project-remove-option.avif)
!!! note
When you share a project with your team, all models inside the project are shared as well.
![Ultralytics HUB screenshot of the Team page with a model shared](https://github.com/ultralytics/docs/releases/download/0/hub-share-project-team-model.avif)
---
comments: true
description: Discover Ultralytics YOLO - the latest in real-time object detection and image segmentation. Learn its features and maximize its potential in your projects.
keywords: Ultralytics, YOLO, YOLO11, object detection, image segmentation, deep learning, computer vision, AI, machine learning, documentation, tutorial
---
<div align="center">
<a href="https://www.ultralytics.com/events/yolovision" target="_blank"><img width="1024%" src="https://github.com/ultralytics/docs/releases/download/0/ultralytics-yolov8-banner.avif" alt="Ultralytics YOLO banner"></a>
<a href="https://docs.ultralytics.com/zh">中文</a> |
<a href="https://docs.ultralytics.com/ko">한국어</a> |
<a href="https://docs.ultralytics.com/ja">日本語</a> |
<a href="https://docs.ultralytics.com/ru">Русский</a> |
<a href="https://docs.ultralytics.com/de">Deutsch</a> |
<a href="https://docs.ultralytics.com/fr">Français</a> |
<a href="https://docs.ultralytics.com/es/">Español</a> |
<a href="https://docs.ultralytics.com/pt">Português</a> |
<a href="https://docs.ultralytics.com/tr">Türkçe</a> |
<a href="https://docs.ultralytics.com/vi">Tiếng Việt</a> |
<a href="https://docs.ultralytics.com/ar">العربية</a>
<br>
<br>
<a href="https://github.com/ultralytics/ultralytics/actions/workflows/ci.yaml"><img src="https://github.com/ultralytics/ultralytics/actions/workflows/ci.yaml/badge.svg" alt="Ultralytics CI"></a>
<a href="https://pepy.tech/projects/ultralytics"><img src="https://static.pepy.tech/badge/ultralytics" alt="Ultralytics Downloads"></a>
<a href="https://zenodo.org/badge/latestdoi/264818686"><img src="https://zenodo.org/badge/264818686.svg" alt="Ultralytics YOLO Citation"></a>
<a href="https://discord.com/invite/ultralytics"><img alt="Ultralytics Discord" src="https://img.shields.io/discord/1089800235347353640?logo=discord&logoColor=white&label=Discord&color=blue"></a>
<a href="https://community.ultralytics.com/"><img alt="Ultralytics Forums" src="https://img.shields.io/discourse/users?server=https%3A%2F%2Fcommunity.ultralytics.com&logo=discourse&label=Forums&color=blue"></a>
<a href="https://reddit.com/r/ultralytics"><img alt="Ultralytics Reddit" src="https://img.shields.io/reddit/subreddit-subscribers/ultralytics?style=flat&logo=reddit&logoColor=white&label=Reddit&color=blue"></a>
<br>
<a href="https://console.paperspace.com/github/ultralytics/ultralytics"><img src="https://assets.paperspace.io/img/gradient-badge.svg" alt="Run Ultralytics on Gradient"></a>
<a href="https://colab.research.google.com/github/ultralytics/ultralytics/blob/main/examples/tutorial.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open Ultralytics In Colab"></a>
<a href="https://www.kaggle.com/models/ultralytics/yolo11"><img src="https://kaggle.com/static/images/open-in-kaggle.svg" alt="Open Ultralytics In Kaggle"></a>
<a href="https://mybinder.org/v2/gh/ultralytics/ultralytics/HEAD?labpath=examples%2Ftutorial.ipynb"><img src="https://mybinder.org/badge_logo.svg" alt="Open Ultralytics In Binder"></a>
</div>
Introducing [Ultralytics](https://www.ultralytics.com/) [YOLO11](https://github.com/ultralytics/ultralytics), the latest version of the acclaimed real-time object detection and image segmentation model. YOLO11 is built on cutting-edge advancements in [deep learning](https://www.ultralytics.com/glossary/deep-learning-dl) and [computer vision](https://www.ultralytics.com/glossary/computer-vision-cv), offering unparalleled performance in terms of speed and [accuracy](https://www.ultralytics.com/glossary/accuracy). Its streamlined design makes it suitable for various applications and easily adaptable to different hardware platforms, from edge devices to cloud APIs.
Explore the Ultralytics Docs, a comprehensive resource designed to help you understand and utilize its features and capabilities. Whether you are a seasoned [machine learning](https://www.ultralytics.com/glossary/machine-learning-ml) practitioner or new to the field, this hub aims to maximize YOLO's potential in your projects.
<div align="center">
<br>
<a href="https://github.com/ultralytics"><img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-github.png" width="3%" alt="Ultralytics GitHub"></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="3%" alt="space">
<a href="https://www.linkedin.com/company/ultralytics/"><img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-linkedin.png" width="3%" alt="Ultralytics LinkedIn"></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="3%" alt="space">
<a href="https://twitter.com/ultralytics"><img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-twitter.png" width="3%" alt="Ultralytics Twitter"></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="3%" alt="space">
<a href="https://youtube.com/ultralytics?sub_confirmation=1"><img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-youtube.png" width="3%" alt="Ultralytics YouTube"></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="3%" alt="space">
<a href="https://www.tiktok.com/@ultralytics"><img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-tiktok.png" width="3%" alt="Ultralytics TikTok"></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="3%" alt="space">
<a href="https://ultralytics.com/bilibili"><img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-bilibili.png" width="3%" alt="Ultralytics BiliBili"></a>
<img src="https://github.com/ultralytics/assets/raw/main/social/logo-transparent.png" width="3%" alt="space">
<a href="https://discord.com/invite/ultralytics"><img src="https://github.com/ultralytics/assets/raw/main/social/logo-social-discord.png" width="3%" alt="Ultralytics Discord"></a>
</div>
## Where to Start
<div class="grid cards" markdown>
- :material-clock-fast:{ .lg .middle } &nbsp; **Getting Started**
***
Install `ultralytics` with pip and get up and running in minutes to train a YOLO model
***
[:octicons-arrow-right-24: Quickstart](quickstart.md)
- :material-image:{ .lg .middle } &nbsp; **Predict**
***
Predict on new images, videos and streams with YOLO <br /> &nbsp;
***
[:octicons-arrow-right-24: Learn more](modes/predict.md)
- :fontawesome-solid-brain:{ .lg .middle } &nbsp; **Train a Model**
***
Train a new YOLO model on your own custom dataset from scratch or load and train on a pretrained model
***
[:octicons-arrow-right-24: Learn more](modes/train.md)
- :material-magnify-expand:{ .lg .middle } &nbsp; **Explore Tasks**
***
Discover YOLO tasks like detect, segment, classify, pose, OBB and track <br /> &nbsp;
***
[:octicons-arrow-right-24: Explore Tasks](tasks/index.md)
- :rocket:{ .lg .middle } &nbsp; **Explore YOLO11 NEW**
***
Discover Ultralytics' latest state-of-the-art YOLO11 models and their capabilities <br /> &nbsp;
***
[:octicons-arrow-right-24: YOLO11 Models 🚀 NEW](models/yolo11.md)
- :material-scale-balance:{ .lg .middle } &nbsp; **Open Source, AGPL-3.0**
***
Ultralytics offers two licensing options for YOLO: AGPL-3.0 License and Enterprise License. Ultralytics is available on [GitHub](https://github.com/ultralytics/ultralytics)
***
[:octicons-arrow-right-24: License](https://www.ultralytics.com/license)
</div>
<p align="center">
<br>
<iframe loading="lazy" width="720" height="405" src="https://www.youtube.com/embed/LNwODJXcvt4?si=7n1UvGRLSd9p5wKs"
title="YouTube video player" frameborder="0"
allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share"
allowfullscreen>
</iframe>
<br>
<strong>Watch:</strong> How to Train a YOLO model on Your Custom Dataset in <a href="https://colab.research.google.com/github/ultralytics/ultralytics/blob/main/examples/tutorial.ipynb" target="_blank">Google Colab</a>.
</p>
## YOLO: A Brief History
[YOLO](https://arxiv.org/abs/1506.02640) (You Only Look Once), a popular [object detection](https://www.ultralytics.com/glossary/object-detection) and [image segmentation](https://www.ultralytics.com/glossary/image-segmentation) model, was developed by Joseph Redmon and Ali Farhadi at the University of Washington. Launched in 2015, YOLO quickly gained popularity for its high speed and accuracy.
- [YOLOv2](https://arxiv.org/abs/1612.08242), released in 2016, improved the original model by incorporating batch normalization, anchor boxes, and dimension clusters.
- [YOLOv3](https://pjreddie.com/media/files/papers/YOLOv3.pdf), launched in 2018, further enhanced the model's performance using a more efficient backbone network, multiple anchors and spatial pyramid pooling.
- [YOLOv4](https://arxiv.org/abs/2004.10934) was released in 2020, introducing innovations like Mosaic [data augmentation](https://www.ultralytics.com/glossary/data-augmentation), a new anchor-free detection head, and a new [loss function](https://www.ultralytics.com/glossary/loss-function).
- [YOLOv5](https://github.com/ultralytics/yolov5) further improved the model's performance and added new features such as hyperparameter optimization, integrated experiment tracking and automatic export to popular export formats.
- [YOLOv6](https://github.com/meituan/YOLOv6) was open-sourced by [Meituan](https://about.meituan.com/) in 2022 and is in use in many of the company's autonomous delivery robots.
- [YOLOv7](https://github.com/WongKinYiu/yolov7) added additional tasks such as pose estimation on the COCO keypoints dataset.
- [YOLOv8](https://github.com/ultralytics/ultralytics) was released in 2023 by Ultralytics. It introduced new features and improvements for enhanced performance, flexibility, and efficiency, supporting a full range of vision AI tasks.
- [YOLOv9](models/yolov9.md) introduces innovative methods like Programmable Gradient Information (PGI) and the Generalized Efficient Layer Aggregation Network (GELAN).
- [YOLOv10](models/yolov10.md) was created by researchers from [Tsinghua University](https://www.tsinghua.edu.cn/en/) using the [Ultralytics](https://www.ultralytics.com/) [Python package](https://pypi.org/project/ultralytics/). This version provides real-time [object detection](tasks/detect.md) advancements by introducing an End-to-End head that eliminates Non-Maximum Suppression (NMS) requirements.
- **[YOLO11](models/yolo11.md) 🚀 NEW**: Ultralytics' latest YOLO models deliver state-of-the-art (SOTA) performance across multiple tasks, including [detection](tasks/detect.md), [segmentation](tasks/segment.md), [pose estimation](tasks/pose.md), [tracking](modes/track.md), and [classification](tasks/classify.md), leveraging capabilities across diverse AI applications and domains.
## YOLO Licenses: How is Ultralytics YOLO licensed?
Ultralytics offers two licensing options to accommodate diverse use cases:
- **AGPL-3.0 License**: This [OSI-approved](https://opensource.org/license) open-source license is ideal for students and enthusiasts, promoting open collaboration and knowledge sharing. See the [LICENSE](https://github.com/ultralytics/ultralytics/blob/main/LICENSE) file for more details.
- **Enterprise License**: Designed for commercial use, this license permits seamless integration of Ultralytics software and AI models into commercial goods and services, bypassing the open-source requirements of AGPL-3.0. If your scenario involves embedding our solutions into a commercial offering, reach out through [Ultralytics Licensing](https://www.ultralytics.com/license).
Our licensing strategy is designed to ensure that any improvements to our open-source projects are returned to the community. We hold the principles of open source close to our hearts ❤️, and our mission is to guarantee that our contributions can be utilized and expanded upon in ways that are beneficial to all.
## FAQ
### What is Ultralytics YOLO and how does it improve object detection?
Ultralytics YOLO is the latest advancement in the acclaimed YOLO (You Only Look Once) series for real-time object detection and image segmentation. It builds on previous versions by introducing new features and improvements for enhanced performance, flexibility, and efficiency. YOLO supports various [vision AI tasks](tasks/index.md) such as detection, segmentation, pose estimation, tracking, and classification. Its state-of-the-art architecture ensures superior speed and accuracy, making it suitable for diverse applications, including edge devices and cloud APIs.
### How can I get started with YOLO installation and setup?
Getting started with YOLO is quick and straightforward. You can install the Ultralytics package using [pip](https://pypi.org/project/ultralytics/) and get up and running in minutes. Here's a basic installation command:
!!! example "Installation using pip"
=== "CLI"
```bash
pip install ultralytics
```
For a comprehensive step-by-step guide, visit our [quickstart guide](quickstart.md). This resource will help you with installation instructions, initial setup, and running your first model.
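After installation, you can optionally verify your environment from Python using the package's built-in diagnostics; a quick sketch:
!!! example "Environment check"
=== "Python"
```python
import ultralytics

# Print software and hardware diagnostics (Python, PyTorch, CUDA, memory)
ultralytics.checks()
```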
### How can I train a custom YOLO model on my dataset?
Training a custom YOLO model on your dataset involves a few detailed steps:
1. Prepare your annotated dataset.
2. Configure the dataset and training parameters in a YAML file (a minimal dataset YAML sketch follows the examples below).
3. Use the `yolo TASK train` command to start training (each `TASK`, such as `detect` or `segment`, has its own arguments).
Here's example code for the Object Detection Task:
!!! example "Train Example for Object Detection Task"
=== "Python"
```python
from ultralytics import YOLO
# Load a pre-trained YOLO model (you can choose n, s, m, l, or x versions)
model = YOLO("yolo11n.pt")
# Start training on your custom dataset
model.train(data="path/to/dataset.yaml", epochs=100, imgsz=640)
```
=== "CLI"
```bash
# Train a YOLO model from the command line
yolo detect train data=path/to/dataset.yaml epochs=100 imgsz=640
```
For a detailed walkthrough, check out our [Train a Model](modes/train.md) guide, which includes examples and tips for optimizing your training process.
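As a companion to step 2 above, here is a minimal sketch that writes a dataset YAML programmatically and trains on it; the paths and class names are hypothetical placeholders:
!!! example "Dataset YAML sketch"
=== "Python"
```python
from pathlib import Path

from ultralytics import YOLO

# A minimal dataset YAML (hypothetical paths and classes -- adjust to your data)
yaml_text = """
path: /datasets/my_dataset  # dataset root directory
train: images/train         # training images, relative to 'path'
val: images/val             # validation images, relative to 'path'
names:
  0: person
  1: car
"""
Path("my_dataset.yaml").write_text(yaml_text)

# Train on the custom dataset
model = YOLO("yolo11n.pt")
model.train(data="my_dataset.yaml", epochs=100, imgsz=640)
```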
### What are the licensing options available for Ultralytics YOLO?
Ultralytics offers two licensing options for YOLO:
- **AGPL-3.0 License**: This open-source license is ideal for educational and non-commercial use, promoting open collaboration.
- **Enterprise License**: This is designed for commercial applications, allowing seamless integration of Ultralytics software into commercial products without the restrictions of the AGPL-3.0 license.
For more details, visit our [Licensing](https://www.ultralytics.com/license) page.
### How can Ultralytics YOLO be used for real-time object tracking?
Ultralytics YOLO supports efficient and customizable multi-object tracking. To utilize tracking capabilities, you can use the `yolo track` command as shown below:
!!! example "Example for Object Tracking on a Video"
=== "Python"
```python
from ultralytics import YOLO
# Load a pre-trained YOLO model
model = YOLO("yolo11n.pt")
# Start tracking objects in a video
# You can also use live video streams or webcam input
model.track(source="path/to/video.mp4")
```
=== "CLI"
```bash
# Perform object tracking on a video from the command line
# You can specify different sources like webcam (0) or RTSP streams
yolo track source=path/to/video.mp4
```
For a detailed guide on setting up and running object tracking, check our [tracking mode](modes/track.md) documentation, which explains the configuration and practical applications in real-time scenarios.
---
comments: true
description: Learn how to use Albumentations with YOLO11 to enhance data augmentation, improve model performance, and streamline your computer vision projects.
keywords: Albumentations, YOLO11, data augmentation, Ultralytics, computer vision, object detection, model training, image transformations, machine learning
---
# Enhance Your Dataset to Train YOLO11 Using Albumentations
When you are building [computer vision models](../models/index.md), the quality and variety of your [training data](../datasets/index.md) can play a big role in how well your model performs. Albumentations offers a fast, flexible, and efficient way to apply a wide range of image transformations that can improve your model's ability to adapt to real-world scenarios. It easily integrates with [Ultralytics YOLO11](https://github.com/ultralytics/ultralytics) and can help you create robust datasets for [object detection](../tasks/detect.md), [segmentation](../tasks/segment.md), and [classification](../tasks/classify.md) tasks.
By using Albumentations, you can boost your YOLO11 training data with techniques like geometric transformations and color adjustments. In this article, we'll see how Albumentations can improve your [data augmentation](../guides/preprocessing_annotated_data.md) process and make your [YOLO11 projects](../solutions/index.md) even more impactful. Let's get started!
## Albumentations for Image Augmentation
[Albumentations](https://albumentations.ai/) is an open-source image augmentation library created in [June 2018](https://arxiv.org/pdf/1809.06839). It is designed to simplify and accelerate the image augmentation process in [computer vision](https://www.ultralytics.com/blog/exploring-image-processing-computer-vision-and-machine-vision). Created with [performance](https://www.ultralytics.com/blog/measuring-ai-performance-to-weigh-the-impact-of-your-innovations) and flexibility in mind, it supports many diverse augmentation techniques, ranging from simple transformations like rotations and flips to more complex adjustments like brightness and contrast changes. Albumentations helps developers generate rich, varied datasets for tasks like [image classification](https://www.youtube.com/watch?v=5BO0Il_YYAg), [object detection](https://www.youtube.com/watch?v=5ku7npMrW40&t=1s), and [segmentation](https://www.youtube.com/watch?v=o4Zd-IeMlSY).
You can use Albumentations to easily apply augmentations to images, [segmentation masks](https://www.ultralytics.com/glossary/image-segmentation), [bounding boxes](https://www.ultralytics.com/glossary/bounding-box), and [key points](../datasets/pose/index.md), and make sure that all elements of your dataset are transformed together. It works seamlessly with popular deep learning frameworks like [PyTorch](../integrations/torchscript.md) and [TensorFlow](../integrations/tensorboard.md), making it accessible for a wide range of projects.
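To illustrate, here is a minimal standalone sketch that transforms an image and its YOLO-format bounding boxes together; the image path, boxes, and labels are hypothetical:
!!! example "Albumentations pipeline sketch"
=== "Python"
```python
import albumentations as A
import cv2

# Define a pipeline that transforms images and YOLO-format boxes together
transform = A.Compose(
    [A.HorizontalFlip(p=0.5), A.RandomBrightnessContrast(p=0.2)],
    bbox_params=A.BboxParams(format="yolo", label_fields=["class_labels"]),
)

image = cv2.imread("path/to/image.jpg")  # hypothetical image path
bboxes = [(0.5, 0.5, 0.2, 0.3)]  # one box in YOLO (cx, cy, w, h) format
class_labels = ["person"]  # hypothetical class label

augmented = transform(image=image, bboxes=bboxes, class_labels=class_labels)
aug_image, aug_bboxes = augmented["image"], augmented["bboxes"]
```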
Also, Albumentations is a great option for augmentation whether you're handling small datasets or large-scale [computer vision tasks](../tasks/index.md). It ensures fast and efficient processing, cutting down the time spent on data preparation. At the same time, it helps improve [model performance](../guides/yolo-performance-metrics.md), making your models more effective in real-world applications.
## Key Features of Albumentations
Albumentations offers many useful features that simplify complex image augmentations for a wide range of [computer vision applications](https://www.ultralytics.com/blog/exploring-how-the-applications-of-computer-vision-work). Here are some of the key features:
- **Wide Range of Transformations**: Albumentations offers over [70 different transformations](https://github.com/albumentations-team/albumentations?tab=readme-ov-file#list-of-augmentations), including geometric changes (e.g., rotation, flipping), color adjustments (e.g., brightness, contrast), and noise addition (e.g., Gaussian noise). Having multiple options enables the creation of highly diverse and robust training datasets.
<p align="center">
<img width="100%" src="https://github.com/ultralytics/docs/releases/download/0/albumentations-augmentation.avif" alt="Example of Image Augmentations">
</p>
- **High Performance Optimization**: Built on OpenCV and NumPy, Albumentations uses advanced optimization techniques like SIMD (Single Instruction, Multiple Data), which processes multiple data points simultaneously to speed up processing. It handles large datasets quickly, making it one of the fastest options available for image augmentation.
- **Three Levels of Augmentation**: Albumentations supports three levels of augmentation: pixel-level, spatial-level, and mixing-level transformations. Pixel-level transformations affect only the input images, without altering masks, bounding boxes, or key points. Spatial-level transformations transform both the image and its elements, such as masks and bounding boxes. Mixing-level transformations offer a unique way to augment data by combining multiple images into one.
![Overview of the Different Levels of Augmentations](https://github.com/ultralytics/docs/releases/download/0/levels-of-augmentation.avif)
- **[Benchmarking Results](https://albumentations.ai/docs/benchmarking_results/)**: When it comes to benchmarking, Albumentations consistently outperforms other libraries, especially with large datasets.
## Why Should You Use Albumentations for Your Vision AI Projects?
With respect to image augmentation, Albumentations stands out as a reliable tool for computer vision tasks. Here are a few key reasons why you should consider using it for your Vision AI projects:
- **Easy-to-Use API**: Albumentations provides a single, straightforward API for applying a wide range of augmentations to images, masks, bounding boxes, and keypoints. It's designed to adapt easily to different datasets, making [data preparation](../guides/data-collection-and-annotation.md) simpler and more efficient.
- **Rigorous Bug Testing**: Bugs in the augmentation pipeline can silently corrupt input data, often going unnoticed but ultimately degrading model performance. Albumentations addresses this with a thorough test suite that helps catch bugs early in development.
- **Extensibility**: Albumentations makes it easy to define new augmentations and use them in computer vision pipelines through the same single interface as its built-in transformations.
## How to Use Albumentations to Augment Data for YOLO11 Training
Now that we've covered what Albumentations is and what it can do, let's look at how to use it to augment your data for YOLO11 model training. It's easy to set up because it integrates directly into [Ultralytics' training mode](../modes/train.md) and applies automatically if you have the Albumentations package installed.
### Installation
To use Albumentations with YOLO11, start by making sure you have the necessary packages installed. If Albumentations isn't installed, the augmentations won't be applied during training. Once set up, you'll be ready to create an augmented dataset for training, with Albumentations integrated to enhance your model automatically.
!!! tip "Installation"
=== "CLI"
```bash
# Install the required packages
pip install albumentations ultralytics
```
For detailed instructions and best practices related to the installation process, check our [Ultralytics Installation guide](../quickstart.md). While installing the required packages for YOLO11, if you encounter any difficulties, consult our [Common Issues guide](../guides/yolo-common-issues.md) for solutions and tips.
### Usage
After installing the necessary packages, you're ready to start using Albumentations with YOLO11. When you train YOLO11, a set of augmentations is automatically applied through its integration with Albumentations, making it easy to enhance your model's performance.
!!! example "Usage"
=== "Python"
```python
from ultralytics import YOLO
# Load a pre-trained model
model = YOLO("yolo11n.pt")
# Train the model
results = model.train(data="coco8.yaml", epochs=100, imgsz=640)
```
Next, let's take a closer look at the specific augmentations that are applied during training.
### Blur
The Blur transformation in Albumentations applies a simple blur effect to the image by averaging pixel values within a small square area, or kernel. This is done using OpenCV's `cv2.blur` function, which helps reduce noise in the image, though it also slightly reduces image details.
Here are the parameters and values used in this integration:
- **blur_limit**: This controls the size range of the blur effect. The default range is (3, 7), meaning the kernel size for the blur can vary between 3 and 7 pixels, with only odd numbers allowed to keep the blur centered.
- **p**: The probability of applying the blur. In the integration, p=0.01, so there's a 1% chance that this blur will be applied to each image. The low probability allows for occasional blur effects, introducing a bit of variation to help the model generalize without over-blurring the images.
<img width="776" alt="An Example of the Blur Augmentation" src="https://github.com/ultralytics/docs/releases/download/0/albumentations-blur.avif">
### Median Blur
The MedianBlur transformation in Albumentations applies a median blur effect to the image, which is particularly useful for reducing noise while preserving edges. Unlike typical blurring methods, MedianBlur uses a median filter, which is especially effective at removing salt-and-pepper noise while maintaining sharpness around the edges.
Here are the parameters and values used in this integration:
- **blur_limit**: This parameter controls the maximum size of the blurring kernel. In this integration, it defaults to a range of (3, 7), meaning the kernel size for the blur is randomly chosen between 3 and 7 pixels, with only odd values allowed to ensure proper alignment.
- **p**: Sets the probability of applying the median blur. Here, p=0.01, so the transformation has a 1% chance of being applied to each image. This low probability ensures that the median blur is used sparingly, helping the model generalize by occasionally seeing images with reduced noise and preserved edges.
The image below shows an example of this augmentation applied to an image.
<img width="764" alt="An Example of the MedianBlur Augmentation" src="https://github.com/ultralytics/docs/releases/download/0/albumentations-median-blur.avif">
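To see why the median filter behaves differently from plain averaging, you can compare the two underlying OpenCV operations directly. This is an illustrative sketch (the image path is a placeholder), not part of the integration itself:
```python
import cv2

image = cv2.imread("example.jpg")  # placeholder path

mean_blurred = cv2.blur(image, (5, 5))  # averages all pixels in a 5x5 window
median_blurred = cv2.medianBlur(image, 5)  # picks the median value; kernel size must be odd
```
On an image with salt-and-pepper noise, the median output keeps edges noticeably sharper than the mean output.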
### Grayscale
The ToGray transformation in Albumentations converts an image to grayscale, reducing it to a single-channel format and optionally replicating this channel to match a specified number of output channels. Different methods can be used to adjust how grayscale brightness is calculated, ranging from simple averaging to more advanced techniques for realistic perception of contrast and brightness.
Here are the parameters and values used in this integration:
- **num_output_channels**: Sets the number of channels in the output image. If this value is more than 1, the single grayscale channel will be replicated to create a multi-channel grayscale image. By default, it's set to 3, giving a grayscale image with three identical channels.
- **method**: Defines the grayscale conversion method. The default method, "weighted_average", applies a formula (0.299R + 0.587G + 0.114B) that closely aligns with human perception, providing a natural-looking grayscale effect. Other options, like "from_lab", "desaturation", "average", "max", and "pca", offer alternative ways to create grayscale images based on various needs for speed, brightness emphasis, or detail preservation.
- **p**: Controls how often the grayscale transformation is applied. With p=0.01, there is a 1% chance of converting each image to grayscale, making it possible for a mix of color and grayscale images to help the model generalize better.
The image below shows an example of this grayscale transformation applied.
<img width="759" alt="An Example of the ToGray Augmentation" src="https://github.com/ultralytics/docs/releases/download/0/albumentations-grayscale.avif">
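For intuition, the default "weighted_average" conversion can be reproduced in a few lines of NumPy. This is an illustration of the formula above, not the library's internal implementation:
```python
import numpy as np


def to_gray_weighted(image: np.ndarray) -> np.ndarray:
    """Convert an (H, W, 3) RGB array to 3-channel grayscale using 0.299R + 0.587G + 0.114B."""
    weights = np.array([0.299, 0.587, 0.114])
    gray = image @ weights  # (H, W) luminance values
    return np.repeat(gray[..., None], 3, axis=-1)  # replicate to num_output_channels=3
```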
### Contrast Limited Adaptive Histogram Equalization (CLAHE)
The CLAHE transformation in Albumentations applies Contrast Limited Adaptive Histogram Equalization, a technique that enhances image contrast by equalizing the histogram in localized regions (tiles) instead of across the whole image. This produces a balanced enhancement effect and avoids the overly amplified contrast that standard histogram equalization can cause, especially in areas with initially low contrast.
Here are the parameters and values used in this integration:
- **clip_limit**: Controls the contrast enhancement range. Set to a default range of (1, 4), it determines the maximum contrast allowed in each tile. Higher values allow stronger contrast but may also amplify noise.
- **tile_grid_size**: Defines the size of the grid of tiles, typically as (rows, columns). The default value is (8, 8), meaning the image is divided into an 8x8 grid. Smaller tile sizes provide more localized adjustments, while larger ones create effects closer to global equalization.
- **p**: The probability of applying CLAHE. Here, p=0.01 introduces the enhancement effect only 1% of the time, ensuring that contrast adjustments are applied sparingly for occasional variation in training images.
The image below shows an example of the CLAHE transformation applied.
<img width="760" alt="An Example of the CLAHE Augmentation" src="https://github.com/ultralytics/docs/releases/download/0/albumentations-CLAHE.avif">
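Putting the four transformations together, a standalone Albumentations pipeline using the defaults described above would look roughly like the sketch below. This is an approximation for experimentation, not the integration's internal code, and the `ToGray` keyword arguments assume a recent Albumentations release:
```python
import albumentations as A
import cv2

image = cv2.imread("example.jpg")  # placeholder path

transforms = A.Compose(
    [
        A.Blur(blur_limit=(3, 7), p=0.01),
        A.MedianBlur(blur_limit=(3, 7), p=0.01),
        A.ToGray(num_output_channels=3, method="weighted_average", p=0.01),
        A.CLAHE(clip_limit=(1, 4), tile_grid_size=(8, 8), p=0.01),
    ]
)
augmented = transforms(image=image)["image"]
```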
## Keep Learning about Albumentations
If you are interested in learning more about Albumentations, check out the following resources for more in-depth instructions and examples:
- **[Albumentations Documentation](https://albumentations.ai/docs/)**: The official documentation provides a full range of supported transformations and advanced usage techniques.
- **[Ultralytics Albumentations Guide](https://docs.ultralytics.com/reference/data/augment/?h=albumentation#ultralytics.data.augment.Albumentations)**: Get a closer look at the details of the function that facilitates this integration.
- **[Albumentations GitHub Repository](https://github.com/albumentations-team/albumentations/)**: The repository includes examples, benchmarks, and discussions to help you get started with customizing augmentations.
## Key Takeaways
In this guide, we explored the key aspects of Albumentations, a great Python library for image augmentation. We discussed its wide range of transformations, optimized performance, and how you can use it in your next YOLO11 project.
Also, if you'd like to know more about other Ultralytics YOLO11 integrations, visit our [integration guide page](../integrations/index.md). You'll find valuable resources and insights there.
## FAQ
### How can I integrate Albumentations with YOLO11 for improved data augmentation?
Albumentations integrates seamlessly with YOLO11 and applies automatically during training if you have the package installed. Here's how to get started:
```python
# Install required packages
# !pip install albumentations ultralytics
from ultralytics import YOLO
# Load and train model with automatic augmentations
model = YOLO("yolo11n.pt")
model.train(data="coco8.yaml", epochs=100)
```
The integration includes optimized augmentations like blur, median blur, grayscale conversion, and CLAHE with carefully tuned probabilities to enhance model performance.
### What are the key benefits of using Albumentations over other augmentation libraries?
Albumentations stands out for several reasons:
1. **Performance**: Built on OpenCV and NumPy with SIMD optimization for superior speed
2. **Flexibility**: Supports 70+ transformations across pixel-level, spatial-level, and mixing-level augmentations
3. **Compatibility**: Works seamlessly with popular frameworks like [PyTorch](../integrations/torchscript.md) and [TensorFlow](../integrations/tensorboard.md)
4. **Reliability**: Extensive test suite prevents silent data corruption
5. **Ease of use**: Single unified API for all augmentation types
### What types of computer vision tasks can benefit from Albumentations augmentation?
Albumentations enhances various [computer vision tasks](../tasks/index.md) including:
- [Object Detection](../tasks/detect.md): Improves model robustness to lighting, scale, and orientation variations
- [Instance Segmentation](../tasks/segment.md): Enhances mask prediction accuracy through diverse transformations
- [Classification](../tasks/classify.md): Increases model generalization with color and geometric augmentations
- [Pose Estimation](../tasks/pose.md): Helps models adapt to different viewpoints and lighting conditions
The library's diverse augmentation options make it valuable for any vision task requiring robust model performance.
---
comments: true
description: Learn step-by-step how to deploy Ultralytics' YOLO11 on Amazon SageMaker Endpoints, from setup to testing, for powerful real-time inference with AWS services.
keywords: YOLO11, Amazon SageMaker, AWS, Ultralytics, machine learning, computer vision, model deployment, AWS CloudFormation, AWS CDK, real-time inference
---
# A Guide to Deploying YOLO11 on Amazon SageMaker Endpoints
Deploying advanced [computer vision](https://www.ultralytics.com/glossary/computer-vision-cv) models like [Ultralytics' YOLO11](https://github.com/ultralytics/ultralytics) on Amazon SageMaker Endpoints opens up a wide range of possibilities for various [machine learning](https://www.ultralytics.com/glossary/machine-learning-ml) applications. The key to effectively using these models lies in understanding their setup, configuration, and deployment processes. YOLO11 becomes even more powerful when integrated seamlessly with Amazon SageMaker, a robust and scalable machine learning service by AWS.
This guide will take you through the process of deploying YOLO11 [PyTorch](https://www.ultralytics.com/glossary/pytorch) models on Amazon SageMaker Endpoints step by step. You'll learn the essentials of preparing your AWS environment, configuring the model appropriately, and using tools like AWS CloudFormation and the AWS Cloud Development Kit (CDK) for deployment.
## Amazon SageMaker
<p align="center">
<img width="640" src="https://github.com/ultralytics/docs/releases/download/0/amazon-sagemaker-overview.avif" alt="Amazon SageMaker Overview">
</p>
[Amazon SageMaker](https://aws.amazon.com/sagemaker/) is a machine learning service from Amazon Web Services (AWS) that simplifies the process of building, training, and deploying machine learning models. It provides a broad range of tools for handling various aspects of machine learning workflows. This includes automated features for tuning models, options for training models at scale, and straightforward methods for deploying models into production. SageMaker supports popular machine learning frameworks, offering the flexibility needed for diverse projects. Its features also cover data labeling, workflow management, and performance analysis.
## Deploying YOLO11 on Amazon SageMaker Endpoints
Deploying YOLO11 on Amazon SageMaker lets you use its managed environment for real-time inference and take advantage of features like autoscaling. Take a look at the AWS architecture below.
<p align="center">
<img width="640" src="https://github.com/ultralytics/docs/releases/download/0/aws-architecture.avif" alt="AWS Architecture">
</p>
### Step 1: Set Up Your AWS Environment
First, ensure you have the following prerequisites in place:
- An AWS Account: If you don't already have one, sign up for an AWS account.
- Configured IAM Roles: You'll need an IAM role with the necessary permissions for Amazon SageMaker, AWS CloudFormation, and Amazon S3. This role should have policies that allow it to access these services.
- AWS CLI: If not already installed, download and install the AWS Command Line Interface (CLI) and configure it with your account details. Follow [the AWS CLI instructions](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) for installation.
- AWS CDK: If not already installed, install the AWS Cloud Development Kit (CDK), which will be used for scripting the deployment. Follow [the AWS CDK instructions](https://docs.aws.amazon.com/cdk/v2/guide/getting_started.html#getting_started_install) for installation.
- Adequate Service Quota: Confirm that you have sufficient quotas for two separate resources in Amazon SageMaker: one for `ml.m5.4xlarge` for endpoint usage and another for `ml.m5.4xlarge` for notebook instance usage. Each of these requires a minimum of one quota value. If your current quotas are below this requirement, it's important to request an increase for each. You can request a quota increase by following the detailed instructions in the [AWS Service Quotas documentation](https://docs.aws.amazon.com/servicequotas/latest/userguide/request-quota-increase.html#quota-console-increase).
### Step 2: Clone the YOLO11 SageMaker Repository
The next step is to clone the specific AWS repository that contains the resources for deploying YOLO11 on SageMaker. This repository, hosted on GitHub, includes the necessary CDK scripts and configuration files.
- Clone the GitHub Repository: Execute the following command in your terminal to clone the `host-yolov8-on-sagemaker-endpoint` repository:
```bash
git clone https://github.com/aws-samples/host-yolov8-on-sagemaker-endpoint.git
```
- Navigate to the Cloned Directory: Change your directory to the cloned repository:
```bash
cd host-yolov8-on-sagemaker-endpoint/yolov8-pytorch-cdk
```
### Step 3: Set Up the CDK Environment
Now that you have the necessary code, set up your environment for deploying with AWS CDK.
- Create a Python Virtual Environment: This isolates your Python environment and dependencies. Run:
```bash
python3 -m venv .venv
```
- Activate the Virtual Environment:
```bash
source .venv/bin/activate
```
- Install Dependencies: Install the required Python dependencies for the project:
```bash
pip3 install -r requirements.txt
```
- Upgrade AWS CDK Library: Ensure you have the latest version of the AWS CDK library:
```bash
pip install --upgrade aws-cdk-lib
```
### Step 4: Create the AWS CloudFormation Stack
- Synthesize the CDK Application: Generate the AWS CloudFormation template from your CDK code:
```bash
cdk synth
```
- Bootstrap the CDK Application: Prepare your AWS environment for CDK deployment:
```bash
cdk bootstrap
```
- Deploy the Stack: This will create the necessary AWS resources and deploy your model:
```bash
cdk deploy
```
### Step 5: Deploy the YOLO Model
Before diving into the deployment instructions, be sure to check out the range of [YOLO11 models offered by Ultralytics](../models/index.md). This will help you choose the most appropriate model for your project requirements.
After creating the AWS CloudFormation Stack, the next step is to deploy YOLO11.
- Open the Notebook Instance: Go to the AWS Console and navigate to the Amazon SageMaker service. Select "Notebook Instances" from the dashboard, then locate the notebook instance that was created by your CDK deployment script. Open the notebook instance to access the Jupyter environment.
- Access and Modify `inference.py`: After opening the SageMaker notebook instance in Jupyter, locate the `inference.py` file. Edit the `output_fn` function in `inference.py` as shown below and save your changes, ensuring there are no syntax errors.
```python
import json
def output_fn(prediction_output):
"""Formats model outputs as JSON string, extracting attributes like boxes, masks, keypoints."""
print("Executing output_fn from inference.py ...")
infer = {}
for result in prediction_output:
if result.boxes is not None:
infer["boxes"] = result.boxes.numpy().data.tolist()
if result.masks is not None:
infer["masks"] = result.masks.numpy().data.tolist()
if result.keypoints is not None:
infer["keypoints"] = result.keypoints.numpy().data.tolist()
if result.obb is not None:
infer["obb"] = result.obb.numpy().data.tolist()
if result.probs is not None:
infer["probs"] = result.probs.numpy().data.tolist()
return json.dumps(infer)
```
- Deploy the Endpoint Using `1_DeployEndpoint.ipynb`: In the Jupyter environment, open the `1_DeployEndpoint.ipynb` notebook located in the `sm-notebook` directory. Follow the instructions in the notebook and run the cells to download the YOLO11 model, package it with the updated inference code, and upload it to an Amazon S3 bucket. The notebook will guide you through creating and deploying a SageMaker endpoint for the YOLO11 model.
### Step 6: Testing Your Deployment
Now that your YOLO11 model is deployed, it's important to test its performance and functionality.
- Open the Test Notebook: In the same Jupyter environment, locate and open the `2_TestEndpoint.ipynb` notebook, also in the `sm-notebook` directory.
- Run the Test Notebook: Follow the instructions within the notebook to test the deployed SageMaker endpoint. This includes sending an image to the endpoint and running inference. Then, you'll plot the output to visualize the model's performance and [accuracy](https://www.ultralytics.com/glossary/accuracy), as shown below.
<p align="center">
<img width="640" src="https://github.com/ultralytics/docs/releases/download/0/testing-results-yolov8.avif" alt="Testing Results YOLO11">
</p>
- Clean-Up Resources: The test notebook will also guide you through the process of cleaning up the endpoint and the hosted model. This is an important step to manage costs and resources effectively, especially if you do not plan to use the deployed model immediately.
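If you want to call the endpoint from your own code rather than the test notebook, the standard `boto3` SageMaker runtime client works as shown below. The endpoint name, image path, and content type here are placeholders; use the values created by your deployment:
```python
import boto3

runtime = boto3.client("sagemaker-runtime")

with open("bus.jpg", "rb") as f:  # placeholder image
    payload = f.read()

response = runtime.invoke_endpoint(
    EndpointName="yolo11-endpoint",  # placeholder; use your deployed endpoint's name
    ContentType="image/jpeg",
    Body=payload,
)
print(response["Body"].read().decode("utf-8"))  # JSON string produced by output_fn
```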
### Step 7: Monitoring and Management
After testing, continuous monitoring and management of your deployed model are essential.
- Monitor with Amazon CloudWatch: Regularly check the performance and health of your SageMaker endpoint using [Amazon CloudWatch](https://aws.amazon.com/cloudwatch/).
- Manage the Endpoint: Use the SageMaker console for ongoing management of the endpoint. This includes scaling, updating, or redeploying the model as required.
By completing these steps, you will have successfully deployed and tested a YOLO11 model on Amazon SageMaker Endpoints. This process not only equips you with practical experience in using AWS services for machine learning deployment but also lays the foundation for deploying other advanced models in the future.
## Summary
This guide took you step by step through deploying YOLO11 on Amazon SageMaker Endpoints using AWS CloudFormation and the AWS Cloud Development Kit (CDK). The process includes cloning the necessary GitHub repository, setting up the CDK environment, deploying the model using AWS services, and testing its performance on SageMaker.
For more technical details, refer to [this article](https://aws.amazon.com/blogs/machine-learning/hosting-yolov8-pytorch-model-on-amazon-sagemaker-endpoints/) on the AWS Machine Learning Blog. You can also check out the official [Amazon SageMaker Documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/realtime-endpoints.html) for more insights into various features and functionalities.
Are you interested in learning more about different YOLO11 integrations? Visit the [Ultralytics integrations guide page](../integrations/index.md) to discover additional tools and capabilities that can enhance your machine-learning projects.
## FAQ
### How do I deploy the Ultralytics YOLO11 model on Amazon SageMaker Endpoints?
To deploy the Ultralytics YOLO11 model on Amazon SageMaker Endpoints, follow these steps:
1. **Set Up Your AWS Environment**: Ensure you have an AWS Account, IAM roles with necessary permissions, and the AWS CLI configured. Install AWS CDK if not already done (refer to the [AWS CDK instructions](https://docs.aws.amazon.com/cdk/v2/guide/getting_started.html#getting_started_install)).
2. **Clone the YOLO11 SageMaker Repository**:
```bash
git clone https://github.com/aws-samples/host-yolov8-on-sagemaker-endpoint.git
cd host-yolov8-on-sagemaker-endpoint/yolov8-pytorch-cdk
```
3. **Set Up the CDK Environment**: Create a Python virtual environment, activate it, install dependencies, and upgrade the AWS CDK library.
```bash
python3 -m venv .venv
source .venv/bin/activate
pip3 install -r requirements.txt
pip install --upgrade aws-cdk-lib
```
4. **Deploy using AWS CDK**: Synthesize the CloudFormation template, bootstrap your AWS environment, and deploy the stack.
```bash
cdk synth
cdk bootstrap
cdk deploy
```
For further details, review the [documentation section](#step-5-deploy-the-yolo-model).
### What are the prerequisites for deploying YOLO11 on Amazon SageMaker?
To deploy YOLO11 on Amazon SageMaker, ensure you have the following prerequisites:
1. **AWS Account**: Active AWS account ([sign up here](https://aws.amazon.com/)).
2. **IAM Roles**: Configured IAM roles with permissions for SageMaker, CloudFormation, and Amazon S3.
3. **AWS CLI**: Installed and configured AWS Command Line Interface ([AWS CLI installation guide](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html)).
4. **AWS CDK**: Installed AWS Cloud Development Kit ([CDK setup guide](https://docs.aws.amazon.com/cdk/v2/guide/getting_started.html#getting_started_install)).
5. **Service Quotas**: Sufficient quotas for `ml.m5.4xlarge` instances for both endpoint and notebook usage ([request a quota increase](https://docs.aws.amazon.com/servicequotas/latest/userguide/request-quota-increase.html#quota-console-increase)).
For detailed setup, refer to [this section](#step-1-set-up-your-aws-environment).
### Why should I use Ultralytics YOLO11 on Amazon SageMaker?
Using Ultralytics YOLO11 on Amazon SageMaker offers several advantages:
1. **Scalability and Management**: SageMaker provides a managed environment with features like autoscaling, which supports real-time inference workloads.
2. **Integration with AWS Services**: Seamlessly integrate with other AWS services, such as S3 for data storage, CloudFormation for infrastructure as code, and CloudWatch for monitoring.
3. **Ease of Deployment**: Simplified setup using AWS CDK scripts and streamlined deployment processes.
4. **Performance**: Leverage Amazon SageMaker's high-performance infrastructure for running large-scale inference tasks efficiently.
Explore more about the advantages of using SageMaker in the [introduction section](#amazon-sagemaker).
### Can I customize the inference logic for YOLO11 on Amazon SageMaker?
Yes, you can customize the inference logic for YOLO11 on Amazon SageMaker:
1. **Modify `inference.py`**: Locate and customize the `output_fn` function in the `inference.py` file to tailor output formats.
```python
import json
def output_fn(prediction_output):
"""Formats model outputs as JSON string, extracting attributes like boxes, masks, keypoints."""
infer = {}
for result in prediction_output:
if result.boxes is not None:
infer["boxes"] = result.boxes.numpy().data.tolist()
# Add more processing logic if necessary
return json.dumps(infer)
```
2. **Deploy Updated Model**: Ensure you redeploy the model using Jupyter notebooks provided (`1_DeployEndpoint.ipynb`) to include these changes.
Refer to the [detailed steps](#step-5-deploy-the-yolo-model) for deploying the modified model.
### How can I test the deployed YOLO11 model on Amazon SageMaker?
To test the deployed YOLO11 model on Amazon SageMaker:
1. **Open the Test Notebook**: Locate the `2_TestEndpoint.ipynb` notebook in the SageMaker Jupyter environment.
2. **Run the Notebook**: Follow the notebook's instructions to send an image to the endpoint, perform inference, and display results.
3. **Visualize Results**: Use built-in plotting functionalities to visualize performance metrics, such as bounding boxes around detected objects.
For comprehensive testing instructions, visit the [testing section](#step-6-testing-your-deployment).
---
comments: true
description: Discover how to integrate YOLO11 with ClearML to streamline your MLOps workflow, automate experiments, and enhance model management effortlessly.
keywords: YOLO11, ClearML, MLOps, Ultralytics, machine learning, object detection, model training, automation, experiment management
---
# Training YOLO11 with ClearML: Streamlining Your MLOps Workflow
MLOps bridges the gap between creating and deploying [machine learning](https://www.ultralytics.com/glossary/machine-learning-ml) models in real-world settings. It focuses on efficient deployment, scalability, and ongoing management to ensure models perform well in practical applications.
[Ultralytics YOLO11](https://www.ultralytics.com/) effortlessly integrates with ClearML, streamlining and enhancing your [object detection](https://www.ultralytics.com/glossary/object-detection) model's training and management. This guide will walk you through the integration process, detailing how to set up ClearML, manage experiments, automate model management, and collaborate effectively.
## ClearML
<p align="center">
<img width="100%" src="https://github.com/ultralytics/docs/releases/download/0/clearml-overview.avif" alt="ClearML Overview">
</p>
[ClearML](https://clear.ml/) is an innovative open-source MLOps platform that is skillfully designed to automate, monitor, and orchestrate machine learning workflows. Its key features include automated logging of all training and inference data for full experiment reproducibility, an intuitive web UI for easy [data visualization](https://www.ultralytics.com/glossary/data-visualization) and analysis, advanced hyperparameter [optimization algorithms](https://www.ultralytics.com/glossary/optimization-algorithm), and robust model management for efficient deployment across various platforms.
## YOLO11 Training with ClearML
Integrating YOLO11 with ClearML brings automation and efficiency to your machine learning workflow by streamlining the training process.
## Installation
To install the required packages, run:
!!! tip "Installation"
=== "CLI"
```bash
# Install the required packages for YOLO11 and ClearML
pip install ultralytics clearml
```
For detailed instructions and best practices related to the installation process, be sure to check our [YOLO11 Installation guide](../quickstart.md). While installing the required packages for YOLO11, if you encounter any difficulties, consult our [Common Issues guide](../guides/yolo-common-issues.md) for solutions and tips.
## Configuring ClearML
Once you have installed the necessary packages, the next step is to initialize and configure your ClearML SDK. This involves setting up your ClearML account and obtaining the necessary credentials for a seamless connection between your development environment and the ClearML server.
Begin by initializing the ClearML SDK in your environment. The `clearml-init` command starts the setup process and prompts you for the necessary credentials.
!!! tip "Initial SDK Setup"
=== "CLI"
```bash
# Initialize your ClearML SDK setup process
clearml-init
```
After executing this command, visit the [ClearML Settings page](https://app.clear.ml/settings/workspace-configuration). Navigate to the top right corner and select "Settings." Go to the "Workspace" section and click on "Create new credentials." Use the credentials provided in the "Create Credentials" pop-up to complete the setup as instructed, depending on whether you are configuring ClearML in a Jupyter Notebook or a local Python environment.
## Usage
Before diving into the usage instructions, be sure to check out the range of [YOLO11 models offered by Ultralytics](../models/index.md). This will help you choose the most appropriate model for your project requirements.
!!! example "Usage"
=== "Python"
```python
from clearml import Task
from ultralytics import YOLO
# Step 1: Creating a ClearML Task
task = Task.init(project_name="my_project", task_name="my_yolov8_task")
# Step 2: Selecting the YOLO11 Model
model_variant = "yolo11n"
task.set_parameter("model_variant", model_variant)
# Step 3: Loading the YOLO11 Model
model = YOLO(f"{model_variant}.pt")
# Step 4: Setting Up Training Arguments
args = dict(data="coco8.yaml", epochs=16)
task.connect(args)
# Step 5: Initiating Model Training
results = model.train(**args)
```
### Understanding the Code
Let's understand the steps showcased in the usage code snippet above.
**Step 1: Creating a ClearML Task**: A new task is initialized in ClearML, specifying your project and task names. This task will track and manage your model's training.
**Step 2: Selecting the YOLO11 Model**: The `model_variant` variable is set to 'yolo11n', one of the YOLO11 models. This variant is then logged in ClearML for tracking.
**Step 3: Loading the YOLO11 Model**: The selected YOLO11 model is loaded using Ultralytics' YOLO class, preparing it for training.
**Step 4: Setting Up Training Arguments**: Key training arguments like the dataset (`coco8.yaml`) and the number of [epochs](https://www.ultralytics.com/glossary/epoch) (`16`) are organized in a dictionary and connected to the ClearML task. This allows for tracking and potential modification via the ClearML UI. For a detailed understanding of the model training process and best practices, refer to our [YOLO11 Model Training guide](../modes/train.md).
**Step 5: Initiating Model Training**: The model training is started with the specified arguments. The results of the training process are captured in the `results` variable.
### Understanding the Output
Upon running the usage code snippet above, you can expect the following output:
- A confirmation message indicating the creation of a new ClearML task, along with its unique ID.
- An informational message about the script code being stored, indicating that the code execution is being tracked by ClearML.
- A URL link to the ClearML results page where you can monitor the training progress and view detailed logs.
- Download progress for the YOLO11 model and the specified dataset, followed by a summary of the model architecture and training configuration.
- Initialization messages for various training components like TensorBoard, Automatic [Mixed Precision](https://www.ultralytics.com/glossary/mixed-precision) (AMP), and dataset preparation.
- Finally, the training process starts, with progress updates as the model trains on the specified dataset. For an in-depth understanding of the performance metrics used during training, read [our guide on performance metrics](../guides/yolo-performance-metrics.md).
### Viewing the ClearML Results Page
By clicking on the URL link to the ClearML results page in the output of the usage code snippet, you can access a comprehensive view of your model's training process.
#### Key Features of the ClearML Results Page
- **Real-Time Metrics Tracking**
- Track critical metrics like loss, [accuracy](https://www.ultralytics.com/glossary/accuracy), and validation scores as they occur.
- Provides immediate feedback for timely model performance adjustments.
- **Experiment Comparison**
- Compare different training runs side-by-side.
- Essential for [hyperparameter tuning](https://www.ultralytics.com/glossary/hyperparameter-tuning) and identifying the most effective models.
- **Detailed Logs and Outputs**
- Access comprehensive logs, graphical representations of metrics, and console outputs.
- Gain a deeper understanding of model behavior and issue resolution.
- **Resource Utilization Monitoring**
- Monitor the utilization of computational resources, including CPU, GPU, and memory.
- Key to optimizing training efficiency and costs.
- **Model Artifacts Management**
- View, download, and share model artifacts like trained models and checkpoints.
- Enhances collaboration and streamlines [model deployment](https://www.ultralytics.com/glossary/model-deployment) and sharing.
For a visual walkthrough of what the ClearML Results Page looks like, watch the video below:
<p align="center">
<br>
<iframe loading="lazy" width="720" height="405" src="https://www.youtube.com/embed/iLcC7m3bCes?si=oSEAoZbrg8inCg_2"
title="YouTube video player" frameborder="0"
allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share"
allowfullscreen>
</iframe>
<br>
<strong>Watch:</strong> YOLO11 MLOps Integration using ClearML
</p>
### Advanced Features in ClearML
ClearML offers several advanced features to enhance your MLOps experience.
#### Remote Execution
ClearML's remote execution feature facilitates the reproduction and manipulation of experiments on different machines. It logs essential details like installed packages and uncommitted changes. When a task is enqueued, the ClearML Agent pulls it, recreates the environment, and runs the experiment, reporting back with detailed results.
Deploying a ClearML Agent is straightforward and can be done on various machines using the following command:
```bash
clearml-agent daemon --queue <queues_to_listen_to> [--docker]
```
This setup is applicable to cloud VMs, local GPUs, or laptops. ClearML Autoscalers help manage cloud workloads on platforms like AWS, GCP, and Azure, automatically deploying agents and adjusting resources to match your budget.
### Cloning, Editing, and Enqueuing
ClearML's user-friendly interface allows easy cloning, editing, and enqueuing of tasks. Users can clone an existing experiment, adjust parameters or other details through the UI, and enqueue the task for execution. This streamlined process ensures that the ClearML Agent executing the task uses updated configurations, making it ideal for iterative experimentation and model fine-tuning.
<p align="center"><br>
<img width="100%" src="https://github.com/ultralytics/docs/releases/download/0/cloning-editing-enqueuing-clearml.avif" alt="Cloning, Editing, and Enqueuing with ClearML">
</p>
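The same clone-and-enqueue workflow is also available programmatically through the ClearML SDK. The sketch below uses placeholder values for the task ID, parameter path, and queue name:
```python
from clearml import Task

template = Task.get_task(task_id="abc123")  # placeholder task ID
cloned = Task.clone(source_task=template, name="yolo11-clone")
cloned.set_parameter("General/epochs", 32)  # adjust a connected training argument
Task.enqueue(cloned, queue_name="default")  # placeholder queue; a ClearML Agent picks it up
```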
## Summary
This guide has led you through the process of integrating ClearML with Ultralytics' YOLO11. Covering everything from initial setup to advanced model management, you've discovered how to leverage ClearML for efficient training, experiment tracking, and workflow optimization in your machine learning projects.
For further details on usage, visit [ClearML's official documentation](https://clear.ml/docs/latest/docs/integrations/yolov8/).
Additionally, explore more integrations and capabilities of Ultralytics by visiting the [Ultralytics integration guide page](../integrations/index.md), which is a treasure trove of resources and insights.
## FAQ
### What is the process for integrating Ultralytics YOLO11 with ClearML?
Integrating Ultralytics YOLO11 with ClearML involves a series of steps to streamline your MLOps workflow. First, install the necessary packages:
```bash
pip install ultralytics clearml
```
Next, initialize the ClearML SDK in your environment using:
```bash
clearml-init
```
You then configure ClearML with your credentials from the [ClearML Settings page](https://app.clear.ml/settings/workspace-configuration). Detailed instructions on the entire setup process, including model selection and training configurations, can be found in our [YOLO11 Model Training guide](../modes/train.md).
### Why should I use ClearML with Ultralytics YOLO11 for my machine learning projects?
Using ClearML with Ultralytics YOLO11 enhances your machine learning projects by automating experiment tracking, streamlining workflows, and enabling robust model management. ClearML offers real-time metrics tracking, resource utilization monitoring, and a user-friendly interface for comparing experiments. These features help optimize your model's performance and make the development process more efficient. Learn more about the benefits and procedures in our [MLOps Integration guide](../modes/train.md).
### How do I troubleshoot common issues during YOLO11 and ClearML integration?
If you encounter issues during the integration of YOLO11 with ClearML, consult our [Common Issues guide](../guides/yolo-common-issues.md) for solutions and tips. Typical problems might involve package installation errors, credential setup, or configuration issues. This guide provides step-by-step troubleshooting instructions to resolve these common issues efficiently.
### How do I set up the ClearML task for YOLO11 model training?
Setting up a ClearML task for YOLO11 training involves initializing a task, selecting the model variant, loading the model, setting up training arguments, and finally, starting the model training. Here's a simplified example:
```python
from clearml import Task
from ultralytics import YOLO
# Step 1: Creating a ClearML Task
task = Task.init(project_name="my_project", task_name="my_yolov8_task")
# Step 2: Selecting the YOLO11 Model
model_variant = "yolo11n"
task.set_parameter("model_variant", model_variant)
# Step 3: Loading the YOLO11 Model
model = YOLO(f"{model_variant}.pt")
# Step 4: Setting Up Training Arguments
args = dict(data="coco8.yaml", epochs=16)
task.connect(args)
# Step 5: Initiating Model Training
results = model.train(**args)
```
Refer to our [Usage guide](#usage) for a detailed breakdown of these steps.
### Where can I view the results of my YOLO11 training in ClearML?
After running your YOLO11 training script with ClearML, you can view the results on the ClearML results page. The output will include a URL link to the ClearML dashboard, where you can track metrics, compare experiments, and monitor resource usage. For more details on how to view and interpret the results, check our section on [Viewing the ClearML Results Page](#viewing-the-clearml-results-page).
---
comments: true
description: Learn to simplify the logging of YOLO11 training with Comet ML. This guide covers installation, setup, real-time insights, and custom logging.
keywords: YOLO11, Comet ML, logging, machine learning, training, model checkpoints, metrics, installation, configuration, real-time insights, custom logging
---
# Elevating YOLO11 Training: Simplify Your Logging Process with Comet ML
Logging key training details such as parameters, metrics, image predictions, and model checkpoints is essential in [machine learning](https://www.ultralytics.com/glossary/machine-learning-ml)—it keeps your project transparent, your progress measurable, and your results repeatable.
[Ultralytics YOLO11](https://www.ultralytics.com/) seamlessly integrates with Comet ML, efficiently capturing and optimizing every aspect of your YOLO11 [object detection](https://www.ultralytics.com/glossary/object-detection) model's training process. In this guide, we'll cover the installation process, Comet ML setup, real-time insights, custom logging, and offline usage, ensuring that your YOLO11 training is thoroughly documented and fine-tuned for outstanding results.
## Comet ML
<p align="center">
<img width="640" src="https://www.comet.com/docs/v2/img/landing/home-hero.svg" alt="Comet ML Overview">
</p>
[Comet ML](https://www.comet.com/site/) is a platform for tracking, comparing, explaining, and optimizing machine learning models and experiments. It allows you to log metrics, parameters, media, and more during your model training and monitor your experiments through an aesthetically pleasing web interface. Comet ML helps data scientists iterate more rapidly, enhances transparency and reproducibility, and aids in the development of production models.
## Harnessing the Power of YOLO11 and Comet ML
By combining Ultralytics YOLO11 with Comet ML, you unlock a range of benefits. These include simplified experiment management, real-time insights for quick adjustments, flexible and tailored logging options, and the ability to log experiments offline when internet access is limited. This integration empowers you to make data-driven decisions, analyze performance metrics, and achieve exceptional results.
## Installation
To install the required packages, run:
!!! tip "Installation"
=== "CLI"
```bash
# Install the required packages for YOLO11 and Comet ML
pip install ultralytics comet_ml torch torchvision
```
## Configuring Comet ML
After installing the required packages, you'll need to sign up, get a [Comet API Key](https://www.comet.com/signup), and configure it.
!!! tip "Configuring Comet ML"
=== "CLI"
```bash
# Set your Comet API key
export COMET_API_KEY=<Your API Key>
```
Then, you can initialize your Comet project. Comet will automatically detect the API key and proceed with the setup.
!!! example "Initialize Comet project"
=== "Python"
```python
import comet_ml
comet_ml.login(project_name="comet-example-yolo11-coco128")
```
If you are using a Google Colab notebook, the code above will prompt you to enter your API key for initialization.
## Usage
Before diving into the usage instructions, be sure to check out the range of [YOLO11 models offered by Ultralytics](../models/yolo11.md). This will help you choose the most appropriate model for your project requirements.
!!! example "Usage"
=== "Python"
```python
from ultralytics import YOLO
# Load a model
model = YOLO("yolo11n.pt")
# Train the model
results = model.train(
data="coco8.yaml",
project="comet-example-yolo11-coco128",
batch=32,
save_period=1,
save_json=True,
epochs=3,
)
```
After running the training code, Comet ML will create an experiment in your Comet workspace to track the run automatically. You will then be provided with a link to view the detailed logging of your [YOLO11 model's training](../modes/train.md) process.
Comet automatically logs the following data with no additional configuration: metrics such as mAP and loss, hyperparameters, model checkpoints, interactive confusion matrix, and image [bounding box](https://www.ultralytics.com/glossary/bounding-box) predictions.
## Understanding Your Model's Performance with Comet ML Visualizations
Let's dive into what you'll see on the Comet ML dashboard once your YOLO11 model begins training. The dashboard is where all the action happens, presenting a range of automatically logged information through visuals and statistics. Here's a quick tour:
**Experiment Panels**
The experiment panels section of the Comet ML dashboard organizes and presents the different runs and their metrics, such as segment mask loss, class loss, precision, and [mean average precision](https://www.ultralytics.com/glossary/mean-average-precision-map).
<p align="center">
<img width="640" src="https://github.com/ultralytics/docs/releases/download/0/comet-ml-dashboard-overview.avif" alt="Comet ML Overview">
</p>
**Metrics**
In the metrics section, you can also examine the metrics in a tabular format, displayed in a dedicated pane as illustrated here.
<p align="center">
<img width="640" src="https://github.com/ultralytics/docs/releases/download/0/comet-ml-metrics-tabular.avif" alt="Comet ML Overview">
</p>
**Interactive [Confusion Matrix](https://www.ultralytics.com/glossary/confusion-matrix)**
The confusion matrix, found in the Confusion Matrix tab, provides an interactive way to assess the model's classification [accuracy](https://www.ultralytics.com/glossary/accuracy). It details the correct and incorrect predictions, allowing you to understand the model's strengths and weaknesses.
<p align="center">
<img width="640" src="https://github.com/ultralytics/docs/releases/download/0/comet-ml-interactive-confusion-matrix.avif" alt="Comet ML Overview">
</p>
**System Metrics**
Comet ML logs system metrics to help identify any bottlenecks in the training process. It includes metrics such as GPU utilization, GPU memory usage, CPU utilization, and RAM usage. These are essential for monitoring the efficiency of resource usage during model training.
<p align="center">
<img width="640" src="https://github.com/ultralytics/docs/releases/download/0/comet-ml-system-metrics.avif" alt="Comet ML Overview">
</p>
## Customizing Comet ML Logging
Comet ML offers the flexibility to customize its logging behavior by setting environment variables. These configurations allow you to tailor Comet ML to your specific needs and preferences. Here are some helpful customization options:
### Logging Image Predictions
You can control the number of image predictions that Comet ML logs during your experiments. By default, Comet ML logs 100 image predictions from the validation set. However, you can change this number to better suit your requirements. For example, to log 200 image predictions, use the following code:
```python
import os
os.environ["COMET_MAX_IMAGE_PREDICTIONS"] = "200"
```
### Batch Logging Interval
Comet ML allows you to specify how often batches of image predictions are logged. The `COMET_EVAL_BATCH_LOGGING_INTERVAL` environment variable controls this frequency. The default setting is 1, which logs predictions from every validation batch. You can adjust this value to log predictions at a different interval. For instance, setting it to 4 will log predictions from every fourth batch.
```python
import os
os.environ["COMET_EVAL_BATCH_LOGGING_INTERVAL"] = "4"
```
### Disabling Confusion Matrix Logging
In some cases, you may not want to log the confusion matrix from your validation set after every [epoch](https://www.ultralytics.com/glossary/epoch). You can disable this feature by setting the `COMET_EVAL_LOG_CONFUSION_MATRIX` environment variable to "false." The confusion matrix will only be logged once, after the training is completed.
```python
import os
os.environ["COMET_EVAL_LOG_CONFUSION_MATRIX"] = "false"
```
### Offline Logging
If you find yourself in a situation where internet access is limited, Comet ML provides an offline logging option. You can set the `COMET_MODE` environment variable to "offline" to enable this feature. Your experiment data will be saved locally in a directory that you can later upload to Comet ML when internet connectivity is available.
```python
import os
os.environ["COMET_MODE"] = "offline"
```
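Once you are back online, the saved experiment archive can be sent to the Comet servers with the `comet upload` CLI command. Comet writes offline experiments as `.zip` archives to a local directory (configurable via the `COMET_OFFLINE_DIRECTORY` environment variable); the path below is illustrative:
```bash
# Upload an offline experiment archive once connectivity is restored
comet upload .cometml-runs/your_experiment.zip
```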
## Summary
This guide has walked you through integrating Comet ML with Ultralytics' YOLO11. From installation to customization, you've learned to streamline experiment management, gain real-time insights, and adapt logging to your project's needs.
Explore [Comet ML's official documentation](https://www.comet.com/docs/v2/integrations/third-party-tools/yolov8/) for more insights on integrating with YOLO11.
Furthermore, if you're looking to dive deeper into the practical applications of YOLO11, specifically for [image segmentation](https://www.ultralytics.com/glossary/image-segmentation) tasks, this detailed guide on [fine-tuning YOLO11 with Comet ML](https://www.comet.com/site/blog/fine-tuning-yolov8-for-image-segmentation-with-comet/) offers valuable insights and step-by-step instructions to enhance your model's performance.
Additionally, to explore other exciting integrations with Ultralytics, check out the [integration guide page](../integrations/index.md), which offers a wealth of resources and information.
## FAQ
### How do I integrate Comet ML with Ultralytics YOLO11 for training?
To integrate Comet ML with Ultralytics YOLO11, follow these steps:
1. **Install the required packages**:
```bash
pip install ultralytics comet_ml torch torchvision
```
2. **Set up your Comet API Key**:
```bash
export COMET_API_KEY=<Your API Key>
```
3. **Initialize your Comet project in your Python code**:
```python
import comet_ml
comet_ml.login(project_name="comet-example-yolo11-coco128")
```
4. **Train your YOLO11 model and log metrics**:
```python
from ultralytics import YOLO
model = YOLO("yolo11n.pt")
results = model.train(
data="coco8.yaml",
project="comet-example-yolo11-coco128",
batch=32,
save_period=1,
save_json=True,
epochs=3,
)
```
For more detailed instructions, refer to the [Comet ML configuration section](#configuring-comet-ml).
### What are the benefits of using Comet ML with YOLO11?
By integrating Ultralytics YOLO11 with Comet ML, you can:
- **Monitor real-time insights**: Get instant feedback on your training results, allowing for quick adjustments.
- **Log extensive metrics**: Automatically capture essential metrics such as mAP, loss, hyperparameters, and model checkpoints.
- **Track experiments offline**: Log your training runs locally when internet access is unavailable.
- **Compare different training runs**: Use the interactive Comet ML dashboard to analyze and compare multiple experiments.
By leveraging these features, you can optimize your machine learning workflows for better performance and reproducibility. For more information, visit the [Comet ML integration guide](../integrations/index.md).
### How do I customize the logging behavior of Comet ML during YOLO11 training?
Comet ML allows for extensive customization of its logging behavior using environment variables:
- **Change the number of image predictions logged**:
```python
import os
os.environ["COMET_MAX_IMAGE_PREDICTIONS"] = "200"
```
- **Adjust batch logging interval**:
```python
import os
os.environ["COMET_EVAL_BATCH_LOGGING_INTERVAL"] = "4"
```
- **Disable confusion matrix logging**:
```python
import os
os.environ["COMET_EVAL_LOG_CONFUSION_MATRIX"] = "false"
```
Refer to the [Customizing Comet ML Logging](#customizing-comet-ml-logging) section for more customization options.
### How do I view detailed metrics and visualizations of my YOLO11 training on Comet ML?
Once your YOLO11 model starts training, you can access a wide range of metrics and visualizations on the Comet ML dashboard. Key features include:
- **Experiment Panels**: View different runs and their metrics, including segment mask loss, class loss, and mean average [precision](https://www.ultralytics.com/glossary/precision).
- **Metrics**: Examine metrics in tabular format for detailed analysis.
- **Interactive Confusion Matrix**: Assess classification accuracy with an interactive confusion matrix.
- **System Metrics**: Monitor GPU and CPU utilization, memory usage, and other system metrics.
For a detailed overview of these features, visit the [Understanding Your Model's Performance with Comet ML Visualizations](#understanding-your-models-performance-with-comet-ml-visualizations) section.
### Can I use Comet ML for offline logging when training YOLO11 models?
Yes, you can enable offline logging in Comet ML by setting the `COMET_MODE` environment variable to "offline":
```python
import os
os.environ["COMET_MODE"] = "offline"
```
This feature allows you to log your experiment data locally, which can later be uploaded to Comet ML when internet connectivity is available. This is particularly useful when working in environments with limited internet access. For more details, refer to the [Offline Logging](#offline-logging) section.
---
comments: true
description: Learn how to export YOLO11 models to CoreML for optimized, on-device machine learning on iOS and macOS. Follow step-by-step instructions.
keywords: CoreML export, YOLO11 models, CoreML conversion, Ultralytics, iOS object detection, macOS machine learning, AI deployment, machine learning integration
---
# CoreML Export for YOLO11 Models
Deploying [computer vision](https://www.ultralytics.com/glossary/computer-vision-cv) models on Apple devices like iPhones and Macs requires a format that ensures seamless performance.
The CoreML export format allows you to optimize your [Ultralytics YOLO11](https://github.com/ultralytics/ultralytics) models for efficient [object detection](https://www.ultralytics.com/glossary/object-detection) in iOS and macOS applications. In this guide, we'll walk you through the steps for converting your models to the CoreML format, making it easier for your models to perform well on Apple devices.
## CoreML
<p align="center">
<img width="100%" src="https://github.com/ultralytics/docs/releases/download/0/coreml-overview.avif" alt="CoreML Overview">
</p>
[CoreML](https://developer.apple.com/documentation/coreml) is Apple's foundational machine learning framework that builds upon Accelerate, BNNS, and Metal Performance Shaders. It provides a machine-learning model format that seamlessly integrates into iOS applications and supports tasks such as image analysis, [natural language processing](https://www.ultralytics.com/glossary/natural-language-processing-nlp), audio-to-text conversion, and sound analysis.
Applications can take advantage of Core ML without a network connection or API calls because the framework performs its computations on-device. This means model inference can be performed locally on the user's device.
## Key Features of CoreML Models
Apple's CoreML framework offers robust features for on-device machine learning. Here are the key features that make CoreML a powerful tool for developers:
- **Comprehensive Model Support**: Converts and runs models from popular frameworks like TensorFlow, [PyTorch](https://www.ultralytics.com/glossary/pytorch), scikit-learn, XGBoost, and LibSVM.
<p align="center">
<img width="100%" src="https://github.com/ultralytics/docs/releases/download/0/coreml-supported-models.avif" alt="CoreML Supported Models">
</p>
- **On-device [Machine Learning](https://www.ultralytics.com/glossary/machine-learning-ml)**: Ensures data privacy and swift processing by executing models directly on the user's device, eliminating the need for network connectivity.
- **Performance and Optimization**: Uses the device's CPU, GPU, and Neural Engine for optimal performance with minimal power and memory usage. Offers tools for model compression and optimization while maintaining [accuracy](https://www.ultralytics.com/glossary/accuracy).
- **Ease of Integration**: Provides a unified format for various model types and a user-friendly API for seamless integration into apps. Supports domain-specific tasks through frameworks like Vision and Natural Language.
- **Advanced Features**: Includes on-device training capabilities for personalized experiences, asynchronous predictions for interactive ML experiences, and model inspection and validation tools.
## CoreML Deployment Options
Before we look at the code for exporting YOLO11 models to the CoreML format, let's understand where CoreML models are usually used.
CoreML offers various deployment options for machine learning models, including:
- **On-Device Deployment**: This method directly integrates CoreML models into your iOS app. It's particularly advantageous for ensuring low latency, enhanced privacy (since data remains on the device), and offline functionality. This approach, however, may be limited by the device's hardware capabilities, especially for larger and more complex models. On-device deployment can be executed in the following two ways.
- **Embedded Models**: These models are included in the app bundle and are immediately accessible. They are ideal for small models that do not require frequent updates.
- **Downloaded Models**: These models are fetched from a server as needed. This approach is suitable for larger models or those needing regular updates. It helps keep the app bundle size smaller.
- **Cloud-Based Deployment**: CoreML models are hosted on servers and accessed by the iOS app through API requests. This scalable and flexible option enables easy model updates without app revisions. It's ideal for complex models or large-scale apps requiring regular updates. However, it does require an internet connection and may pose latency and security issues.
## Exporting YOLO11 Models to CoreML
Exporting YOLO11 to CoreML enables optimized, on-device machine learning performance within Apple's ecosystem, offering benefits in terms of efficiency, security, and seamless integration with iOS, macOS, watchOS, and tvOS platforms.
### Installation
To install the required package, run:
!!! tip "Installation"
=== "CLI"
```bash
# Install the required package for YOLO11
pip install ultralytics
```
For detailed instructions and best practices related to the installation process, check our [YOLO11 Installation guide](../quickstart.md). While installing the required packages for YOLO11, if you encounter any difficulties, consult our [Common Issues guide](../guides/yolo-common-issues.md) for solutions and tips.
### Usage
Before diving into the usage instructions, be sure to check out the range of [YOLO11 models offered by Ultralytics](../models/index.md). This will help you choose the most appropriate model for your project requirements.
!!! example "Usage"
=== "Python"
```python
from ultralytics import YOLO
# Load the YOLO11 model
model = YOLO("yolo11n.pt")
# Export the model to CoreML format
model.export(format="coreml") # creates 'yolo11n.mlpackage'
# Load the exported CoreML model
coreml_model = YOLO("yolo11n.mlpackage")
# Run inference
results = coreml_model("https://ultralytics.com/images/bus.jpg")
```
=== "CLI"
```bash
# Export a YOLO11n PyTorch model to CoreML format
yolo export model=yolo11n.pt format=coreml # creates 'yolo11n.mlpackage'
# Run inference with the exported model
yolo predict model=yolo11n.mlpackage source='https://ultralytics.com/images/bus.jpg'
```
For more details about the export process, visit the [Ultralytics documentation page on exporting](../modes/export.md).
## Deploying Exported YOLO11 CoreML Models
Having successfully exported your Ultralytics YOLO11 models to CoreML, the next critical phase is deploying these models effectively. For detailed guidance on deploying CoreML models in various environments, check out these resources:
- **[CoreML Tools](https://apple.github.io/coremltools/docs-guides/)**: This guide includes instructions and examples to convert models from [TensorFlow](https://www.ultralytics.com/glossary/tensorflow), PyTorch, and other libraries to Core ML.
- **[ML and Vision](https://developer.apple.com/videos/)**: A collection of comprehensive videos that cover various aspects of using and implementing CoreML models.
- **[Integrating a Core ML Model into Your App](https://developer.apple.com/documentation/coreml/integrating-a-core-ml-model-into-your-app)**: A comprehensive guide on integrating a CoreML model into an iOS application, detailing steps from preparing the model to implementing it in the app for various functionalities.
## Summary
In this guide, we went over how to export Ultralytics YOLO11 models to CoreML format. By following the steps outlined in this guide, you can ensure maximum compatibility and performance when exporting YOLO11 models to CoreML.
For further details on usage, visit the [CoreML official documentation](https://developer.apple.com/documentation/coreml).
Also, if you'd like to know more about other Ultralytics YOLO11 integrations, visit our [integration guide page](../integrations/index.md). You'll find plenty of valuable resources and insights there.
## FAQ
### How do I export YOLO11 models to CoreML format?
To export your [Ultralytics YOLO11](https://github.com/ultralytics/ultralytics) models to CoreML format, you'll first need to ensure you have the `ultralytics` package installed. You can install it using:
!!! example "Installation"
=== "CLI"
```bash
pip install ultralytics
```
Next, you can export the model using the following Python or CLI commands:
!!! example "Usage"
=== "Python"
```python
from ultralytics import YOLO
model = YOLO("yolo11n.pt")
model.export(format="coreml")
```
=== "CLI"
```bash
yolo export model=yolo11n.pt format=coreml
```
For further details, refer to the [Exporting YOLO11 Models to CoreML](../modes/export.md) section of our documentation.
### What are the benefits of using CoreML for deploying YOLO11 models?
CoreML provides numerous advantages for deploying [Ultralytics YOLO11](https://github.com/ultralytics/ultralytics) models on Apple devices:
- **On-device Processing**: Enables local model inference on devices, ensuring [data privacy](https://www.ultralytics.com/glossary/data-privacy) and minimizing latency.
- **Performance Optimization**: Leverages the full potential of the device's CPU, GPU, and Neural Engine, optimizing both speed and efficiency.
- **Ease of Integration**: Offers a seamless integration experience with Apple's ecosystems, including iOS, macOS, watchOS, and tvOS.
- **Versatility**: Supports a wide range of machine learning tasks such as image analysis, audio processing, and natural language processing using the CoreML framework.
For more details on integrating your CoreML model into an iOS app, check out the guide on [Integrating a Core ML Model into Your App](https://developer.apple.com/documentation/coreml/integrating-a-core-ml-model-into-your-app).
### What are the deployment options for YOLO11 models exported to CoreML?
Once you export your YOLO11 model to CoreML format, you have multiple deployment options:
1. **On-Device Deployment**: Directly integrate CoreML models into your app for enhanced privacy and offline functionality. This can be done as:
- **Embedded Models**: Included in the app bundle, accessible immediately.
- **Downloaded Models**: Fetched from a server as needed, keeping the app bundle size smaller.
2. **Cloud-Based Deployment**: Host CoreML models on servers and access them via API requests. This approach supports easier updates and can handle more complex models.
For detailed guidance on deploying CoreML models, refer to [CoreML Deployment Options](#coreml-deployment-options).
### How does CoreML ensure optimized performance for YOLO11 models?
CoreML ensures optimized performance for [Ultralytics YOLO11](https://github.com/ultralytics/ultralytics) models by utilizing various optimization techniques:
- **Hardware Acceleration**: Uses the device's CPU, GPU, and Neural Engine for efficient computation.
- **Model Compression**: Provides tools for compressing models to reduce their footprint without compromising accuracy, as the sketch after this list illustrates.
- **Adaptive Inference**: Adjusts inference based on the device's capabilities to maintain a balance between speed and performance.
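You can request reduced-precision weights directly at export time. Here is a minimal sketch assuming the `half` and `int8` flags of the Ultralytics exporter, whose support can vary by version:

```python
from ultralytics import YOLO

model = YOLO("yolo11n.pt")

# Export with FP16 weights to shrink the model; int8=True offers further compression
model.export(format="coreml", half=True)
```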
For more information on performance optimization, visit the [CoreML official documentation](https://developer.apple.com/documentation/coreml).
### Can I run inference directly with the exported CoreML model?
Yes, you can run inference directly using the exported CoreML model. Below are the commands for Python and CLI:
!!! example "Running Inference"
=== "Python"
```python
from ultralytics import YOLO
coreml_model = YOLO("yolo11n.mlpackage")
results = coreml_model("https://ultralytics.com/images/bus.jpg")
```
=== "CLI"
```bash
yolo predict model=yolo11n.mlpackage source='https://ultralytics.com/images/bus.jpg'
```
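To go a step further, here is a short sketch of inspecting the detections via the Ultralytics `Results` API:

```python
from ultralytics import YOLO

coreml_model = YOLO("yolo11n.mlpackage")
results = coreml_model("https://ultralytics.com/images/bus.jpg")

# Print each detection's class name, confidence, and bounding box coordinates
for box in results[0].boxes:
    cls_id = int(box.cls)
    print(f"{results[0].names[cls_id]}: {float(box.conf):.2f} at {box.xyxy.tolist()}")
```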
For additional information, refer to the [Usage section](#usage) of the CoreML export guide.
---
comments: true
description: Unlock seamless YOLO11 tracking with DVCLive. Discover how to log, visualize, and analyze experiments for optimized ML model performance.
keywords: YOLO11, DVCLive, experiment tracking, machine learning, model training, data visualization, Git integration
---
# Advanced YOLO11 Experiment Tracking with DVCLive
Experiment tracking in [machine learning](https://www.ultralytics.com/glossary/machine-learning-ml) is critical to model development and evaluation. It involves recording and analyzing various parameters, metrics, and outcomes from numerous training runs. This process is essential for understanding model performance and making data-driven decisions to refine and optimize models.
Integrating DVCLive with [Ultralytics YOLO11](https://www.ultralytics.com/) transforms the way experiments are tracked and managed. This integration offers a seamless solution for automatically logging key experiment details, comparing results across different runs, and visualizing data for in-depth analysis. In this guide, we'll explore how DVCLive can be used to streamline the process.
## DVCLive
<p align="center">
<img width="640" src="https://github.com/ultralytics/docs/releases/download/0/dvclive-overview.avif" alt="DVCLive Overview">
</p>
[DVCLive](https://dvc.org/doc/dvclive), developed by DVC, is an innovative open-source tool for experiment tracking in machine learning. Integrating seamlessly with Git and DVC, it automates the logging of crucial experiment data like model parameters and training metrics. Designed for simplicity, DVCLive enables effortless comparison and analysis of multiple runs, enhancing the efficiency of machine learning projects with intuitive [data visualization](https://www.ultralytics.com/glossary/data-visualization) and analysis tools.
## YOLO11 Training with DVCLive
YOLO11 training sessions can be effectively monitored with DVCLive. Additionally, DVC provides integral features for visualizing these experiments, including the generation of a report that enables the comparison of metric plots across all tracked experiments, offering a comprehensive view of the training process.
## Installation
To install the required packages, run:
!!! tip "Installation"
=== "CLI"
```bash
# Install the required packages for YOLO11 and DVCLive
pip install ultralytics dvclive
```
For detailed instructions and best practices related to the installation process, be sure to check our [YOLO11 Installation guide](../quickstart.md). While installing the required packages for YOLO11, if you encounter any difficulties, consult our [Common Issues guide](../guides/yolo-common-issues.md) for solutions and tips.
## Configuring DVCLive
Once you have installed the necessary packages, the next step is to set up and configure your environment with your Git details. This setup ensures a smooth integration of DVCLive into your existing workflow.
Begin by initializing a Git repository, as Git plays a crucial role in version control for both your code and DVCLive configurations.
!!! tip "Initial Environment Setup"
=== "CLI"
```bash
# Initialize a Git repository
git init -q
# Configure Git with your details
git config --local user.email "you@example.com"
git config --local user.name "Your Name"
# Initialize DVC in your project (DVCLive builds on DVC)
dvc init -q
# Commit the DVC setup to your Git repository
git commit -m "DVC init"
```
In these commands, be sure to replace "you@example.com" with the email address associated with your Git account, and "Your Name" with your Git username.
## Usage
Before diving into the usage instructions, be sure to check out the range of [YOLO11 models offered by Ultralytics](../models/index.md). This will help you choose the most appropriate model for your project requirements.
### Training YOLO11 Models with DVCLive
Start by running your YOLO11 training sessions. You can use different model configurations and training parameters to suit your project needs. For instance:
```bash
# Example training commands for YOLO11 with varying configurations
yolo train model=yolo11n.pt data=coco8.yaml epochs=5 imgsz=512
yolo train model=yolo11n.pt data=coco8.yaml epochs=5 imgsz=640
```
Adjust the model, data, [epochs](https://www.ultralytics.com/glossary/epoch), and imgsz parameters according to your specific requirements. For a detailed understanding of the model training process and best practices, refer to our [YOLO11 Model Training guide](../modes/train.md).
### Monitoring Experiments with DVCLive
DVCLive enhances the training process by enabling the tracking and visualization of key metrics. When installed, Ultralytics YOLO11 automatically integrates with DVCLive for experiment tracking, which you can later analyze for performance insights. For a comprehensive understanding of the specific performance metrics used during training, be sure to explore [our detailed guide on performance metrics](../guides/yolo-performance-metrics.md).
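To make the automatic integration concrete, here is a minimal Python sketch mirroring the CLI runs above; with `dvclive` installed, no extra logging code is required:

```python
from ultralytics import YOLO

# With dvclive installed, parameters and metrics from this run are logged automatically
model = YOLO("yolo11n.pt")
model.train(data="coco8.yaml", epochs=5, imgsz=640)
```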
### Analyzing Results
After your YOLO11 training sessions are complete, you can leverage DVCLive's powerful visualization tools for in-depth analysis of the results. DVCLive's integration ensures that all training metrics are systematically logged, facilitating a comprehensive evaluation of your model's performance.
To start the analysis, you can extract the experiment data using DVC's API and process it with Pandas for easier handling and visualization:
```python
import dvc.api
import pandas as pd
# Define the columns of interest
columns = ["Experiment", "epochs", "imgsz", "model", "metrics.mAP50-95(B)"]
# Retrieve experiment data
df = pd.DataFrame(dvc.api.exp_show(), columns=columns)
# Clean the data
df.dropna(inplace=True)
df.reset_index(drop=True, inplace=True)
# Display the DataFrame
print(df)
```
The output of the code snippet above provides a clear tabular view of the different experiments conducted with YOLO11 models. Each row represents a different training run, detailing the experiment's name, the number of epochs, image size (imgsz), the specific model used, and the mAP50-95(B) metric. This metric is crucial for evaluating the model's [accuracy](https://www.ultralytics.com/glossary/accuracy), with higher values indicating better performance.
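For example, to surface the best-performing run from this table, you can sort the DataFrame by that metric (a minimal sketch, assuming the `df` from the previous snippet):

```python
# Sort experiments by mAP50-95(B), best first, and show the top run
best_runs = df.sort_values("metrics.mAP50-95(B)", ascending=False)
print(best_runs.head(1))
```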
#### Visualizing Results with Plotly
For a more interactive and visual analysis of your experiment results, you can use Plotly's parallel coordinates plot. This type of plot is particularly useful for understanding the relationships and trade-offs between different parameters and metrics.
```python
from plotly.express import parallel_coordinates
# Create a parallel coordinates plot
fig = parallel_coordinates(df, columns, color="metrics.mAP50-95(B)")
# Display the plot
fig.show()
```
The code snippet above generates an interactive plot that visually represents the relationships between epochs, image size, model type, and their corresponding mAP50-95(B) scores, enabling you to spot trends and patterns in your experiment data.
#### Generating Comparative Visualizations with DVC
DVC provides a useful command to generate comparative plots for your experiments. This can be especially helpful to compare the performance of different models over various training runs.
```bash
# Generate DVC comparative plots
dvc plots diff $(dvc exp list --names-only)
```
After executing this command, DVC generates plots comparing the metrics across different experiments, which are saved as HTML files. Below is an example image illustrating typical plots generated by this process. The image showcases various graphs, including those representing mAP, [recall](https://www.ultralytics.com/glossary/recall), [precision](https://www.ultralytics.com/glossary/precision), loss values, and more, providing a visual overview of key performance metrics:
<p align="center">
<img width="640" src="https://github.com/ultralytics/docs/releases/download/0/dvclive-comparative-plots.avif" alt="DVCLive Plots">
</p>
### Displaying DVC Plots
If you are using a Jupyter Notebook and you want to display the generated DVC plots, you can use the IPython display functionality.
```python
from IPython.display import HTML
# Display the DVC plots as HTML
HTML(filename="./dvc_plots/index.html")
```
This code will render the HTML file containing the DVC plots directly in your Jupyter Notebook, providing an easy and convenient way to analyze the visualized experiment data.
### Making Data-Driven Decisions
Use the insights gained from these visualizations to make informed decisions about model optimizations, [hyperparameter tuning](https://www.ultralytics.com/glossary/hyperparameter-tuning), and other modifications to enhance your model's performance.
### Iterating on Experiments
Based on your analysis, iterate on your experiments. Adjust model configurations, training parameters, or even the data inputs, and repeat the training and analysis process. This iterative approach is key to refining your model for the best possible performance.
## Summary
This guide has led you through the process of integrating DVCLive with Ultralytics' YOLO11. You have learned how to harness the power of DVCLive for detailed experiment monitoring, effective visualization, and insightful analysis in your machine learning endeavors.
For further details on usage, visit [DVCLive's official documentation](https://dvc.org/doc/dvclive/ml-frameworks/yolo).
Additionally, explore more integrations and capabilities of Ultralytics by visiting the [Ultralytics integration guide page](../integrations/index.md), which is a collection of great resources and insights.
## FAQ
### How do I integrate DVCLive with Ultralytics YOLO11 for experiment tracking?
Integrating DVCLive with Ultralytics YOLO11 is straightforward. Start by installing the necessary packages:
!!! example "Installation"
=== "CLI"
```bash
pip install ultralytics dvclive
```
Next, initialize a Git repository and configure DVCLive in your project:
!!! example "Initial Environment Setup"
=== "CLI"
```bash
git init -q
git config --local user.email "you@example.com"
git config --local user.name "Your Name"
dvc init -q
git commit -m "DVC init"
```
Follow our [YOLO11 Installation guide](../quickstart.md) for detailed setup instructions.
### Why should I use DVCLive for tracking YOLO11 experiments?
Using DVCLive with YOLO11 provides several advantages, such as:
- **Automated Logging**: DVCLive automatically records key experiment details like model parameters and metrics.
- **Easy Comparison**: Facilitates comparison of results across different runs.
- **Visualization Tools**: Leverages DVCLive's robust data visualization capabilities for in-depth analysis.
For further details, refer to our guide on [YOLO11 Model Training](../modes/train.md) and [YOLO Performance Metrics](../guides/yolo-performance-metrics.md) to maximize your experiment tracking efficiency.
### How can DVCLive improve my results analysis for YOLO11 training sessions?
After completing your YOLO11 training sessions, DVCLive helps in visualizing and analyzing the results effectively. Example code for loading and displaying experiment data:
```python
import dvc.api
import pandas as pd
# Define columns of interest
columns = ["Experiment", "epochs", "imgsz", "model", "metrics.mAP50-95(B)"]
# Retrieve experiment data
df = pd.DataFrame(dvc.api.exp_show(), columns=columns)
# Clean data
df.dropna(inplace=True)
df.reset_index(drop=True, inplace=True)
# Display DataFrame
print(df)
```
To visualize results interactively, use Plotly's parallel coordinates plot:
```python
from plotly.express import parallel_coordinates
fig = parallel_coordinates(df, columns, color="metrics.mAP50-95(B)")
fig.show()
```
Refer to our guide on [YOLO11 Training with DVCLive](#yolo11-training-with-dvclive) for more examples and best practices.
### What are the steps to configure my environment for DVCLive and YOLO11 integration?
To configure your environment for a smooth integration of DVCLive and YOLO11, follow these steps:
1. **Install Required Packages**: Use `pip install ultralytics dvclive`.
2. **Initialize Git Repository**: Run `git init -q`.
3. **Initialize DVC**: Execute `dvc init -q`.
4. **Commit to Git**: Use `git commit -m "DVC init"`.
These steps ensure proper version control and setup for experiment tracking. For in-depth configuration details, visit our [Configuration guide](../quickstart.md).
### How do I visualize YOLO11 experiment results using DVCLive?
DVCLive offers powerful tools to visualize the results of YOLO11 experiments. Here's how you can generate comparative plots:
!!! example "Generate Comparative Plots"
=== "CLI"
```bash
dvc plots diff $(dvc exp list --names-only)
```
To display these plots in a Jupyter Notebook, use:
```python
from IPython.display import HTML
# Display plots as HTML
HTML(filename="./dvc_plots/index.html")
```
These visualizations help identify trends and optimize model performance. Check our detailed guides on [YOLO11 Experiment Analysis](#analyzing-results) for comprehensive steps and examples.
---
comments: true
description: Learn how to export YOLO11 models to TFLite Edge TPU format for high-speed, low-power inferencing on mobile and embedded devices.
keywords: YOLO11, TFLite Edge TPU, TensorFlow Lite, model export, machine learning, edge computing, neural networks, Ultralytics
---
# Learn to Export to TFLite Edge TPU Format From YOLO11 Model
Deploying computer vision models on devices with limited computational power, such as mobile or embedded systems, can be tricky. Using a model format that is optimized for faster performance simplifies the process. The [TensorFlow Lite](https://ai.google.dev/edge/litert) [Edge TPU](https://coral.ai/docs/edgetpu/models-intro/) or TFLite Edge TPU model format is designed to use minimal power while delivering fast performance for neural networks.
The export to TFLite Edge TPU format feature allows you to optimize your [Ultralytics YOLO11](https://github.com/ultralytics/ultralytics) models for high-speed and low-power inferencing. In this guide, we'll walk you through converting your models to the TFLite Edge TPU format, making it easier for your models to perform well on various mobile and embedded devices.
## Why Should You Export to TFLite Edge TPU?
Exporting models to [TensorFlow](https://www.ultralytics.com/glossary/tensorflow) Edge TPU makes [machine learning](https://www.ultralytics.com/glossary/machine-learning-ml) tasks fast and efficient. This technology suits applications with limited power, computing resources, and connectivity. The Edge TPU is a hardware accelerator by Google. It speeds up TensorFlow Lite models on edge devices. The image below shows an example of the process involved.
<p align="center">
<img width="100%" src="https://github.com/ultralytics/docs/releases/download/0/tflite-edge-tpu-compile-workflow.avif" alt="TFLite Edge TPU">
</p>
The Edge TPU works with quantized models. Quantization makes models smaller and faster without losing much [accuracy](https://www.ultralytics.com/glossary/accuracy). It is ideal for the limited resources of edge computing, allowing applications to respond quickly by reducing latency and allowing for quick data processing locally, without cloud dependency. Local processing also keeps user data private and secure since it's not sent to a remote server.
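As an illustration of quantization in the Ultralytics workflow, the exporter exposes an INT8 flag for TFLite. A minimal sketch is shown below; note that the `edgetpu` export format used later in this guide applies full-integer quantization for you, so this standalone step is optional:

```python
from ultralytics import YOLO

model = YOLO("yolo11n.pt")

# Export an INT8-quantized TFLite model; quantization shrinks the model and speeds up inference
model.export(format="tflite", int8=True)
```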
## Key Features of TFLite Edge TPU
Here are the key features that make TFLite Edge TPU a great model format choice for developers:
- **Optimized Performance on Edge Devices**: The TFLite Edge TPU achieves high-speed neural networking performance through quantization, model optimization, hardware acceleration, and compiler optimization. Its minimalistic architecture contributes to its smaller size and cost-efficiency.
- **High Computational Throughput**: TFLite Edge TPU combines specialized hardware acceleration and efficient runtime execution to achieve high computational throughput. It is well-suited for deploying machine learning models with stringent performance requirements on edge devices.
- **Efficient Matrix Computations**: The TensorFlow Edge TPU is optimized for matrix operations, which are crucial for [neural network](https://www.ultralytics.com/glossary/neural-network-nn) computations. This efficiency is key in machine learning models, particularly those requiring numerous and complex matrix multiplications and transformations.
## Deployment Options with TFLite Edge TPU
Before we jump into how to export YOLO11 models to the TFLite Edge TPU format, let's understand where TFLite Edge TPU models are usually used.
TFLite Edge TPU offers various deployment options for machine learning models, including:
- **On-Device Deployment**: TensorFlow Edge TPU models can be directly deployed on mobile and embedded devices. On-device deployment allows the models to execute directly on the hardware, eliminating the need for cloud connectivity.
- **Edge Computing with Cloud TensorFlow TPUs**: In scenarios where edge devices have limited processing capabilities, TensorFlow Edge TPUs can offload inference tasks to cloud servers equipped with TPUs.
- **Hybrid Deployment**: A hybrid approach combines on-device and cloud deployment and offers a versatile and scalable solution for deploying machine learning models. Advantages include on-device processing for quick responses and [cloud computing](https://www.ultralytics.com/glossary/cloud-computing) for more complex computations.
## Exporting YOLO11 Models to TFLite Edge TPU
You can expand model compatibility and deployment flexibility by converting YOLO11 models to TensorFlow Edge TPU.
### Installation
To install the required package, run:
!!! tip "Installation"
=== "CLI"
```bash
# Install the required package for YOLO11
pip install ultralytics
```
For detailed instructions and best practices related to the installation process, check our [Ultralytics Installation guide](../quickstart.md). While installing the required packages for YOLO11, if you encounter any difficulties, consult our [Common Issues guide](../guides/yolo-common-issues.md) for solutions and tips.
### Usage
Before diving into the usage instructions, note that while all [Ultralytics YOLO11 models](../models/index.md) support exporting, you can verify that the model you select supports the export functionality [here](../modes/export.md).
!!! example "Usage"
=== "Python"
```python
from ultralytics import YOLO
# Load the YOLO11 model
model = YOLO("yolo11n.pt")
# Export the model to TFLite Edge TPU format
model.export(format="edgetpu") # creates 'yolo11n_full_integer_quant_edgetpu.tflite'
# Load the exported TFLite Edge TPU model
edgetpu_model = YOLO("yolo11n_full_integer_quant_edgetpu.tflite")
# Run inference
results = edgetpu_model("https://ultralytics.com/images/bus.jpg")
```
=== "CLI"
```bash
# Export a YOLO11n PyTorch model to TFLite Edge TPU format
yolo export model=yolo11n.pt format=edgetpu # creates 'yolo11n_full_integer_quant_edgetpu.tflite'
# Run inference with the exported model
yolo predict model=yolo11n_full_integer_quant_edgetpu.tflite source='https://ultralytics.com/images/bus.jpg'
```
For more details about supported export options, visit the [Ultralytics documentation page on deployment options](../guides/model-deployment-options.md).
## Deploying Exported YOLO11 TFLite Edge TPU Models
After successfully exporting your Ultralytics YOLO11 models to TFLite Edge TPU format, you can now deploy them. The primary and recommended first step for running a TFLite Edge TPU model is to load it with `YOLO("model_edgetpu.tflite")`, as outlined in the previous usage code snippet.
However, for in-depth instructions on deploying your TFLite Edge TPU models, take a look at the following resources:
- **[Coral Edge TPU on a Raspberry Pi with Ultralytics YOLO11](../guides/coral-edge-tpu-on-raspberry-pi.md)**: Discover how to integrate Coral Edge TPUs with Raspberry Pi for enhanced machine learning capabilities.
- **[Code Examples](https://coral.ai/docs/edgetpu/compiler/)**: Access practical TensorFlow Edge TPU deployment examples to kickstart your projects.
- **[Run Inference on the Edge TPU with Python](https://coral.ai/docs/edgetpu/tflite-python/#overview)**: Explore how to use the TensorFlow Lite Python API for Edge TPU applications, including setup and usage guidelines.
## Summary
In this guide, we've learned how to export Ultralytics YOLO11 models to TFLite Edge TPU format. By following the steps mentioned above, you can increase the speed and power of your [computer vision](https://www.ultralytics.com/glossary/computer-vision-cv) applications.
For further details on usage, visit the [Edge TPU official website](https://cloud.google.com/tpu).
Also, for more information on other Ultralytics YOLO11 integrations, please visit our [integration guide page](index.md). There, you'll discover valuable resources and insights.
## FAQ
### How do I export a YOLO11 model to TFLite Edge TPU format?
To export a YOLO11 model to TFLite Edge TPU format, you can follow these steps:
!!! example "Usage"
=== "Python"
```python
from ultralytics import YOLO
# Load the YOLO11 model
model = YOLO("yolo11n.pt")
# Export the model to TFLite Edge TPU format
model.export(format="edgetpu") # creates 'yolo11n_full_integer_quant_edgetpu.tflite'
# Load the exported TFLite Edge TPU model
edgetpu_model = YOLO("yolo11n_full_integer_quant_edgetpu.tflite")
# Run inference
results = edgetpu_model("https://ultralytics.com/images/bus.jpg")
```
=== "CLI"
```bash
# Export a YOLO11n PyTorch model to TFLite Edge TPU format
yolo export model=yolo11n.pt format=edgetpu # creates 'yolo11n_full_integer_quant_edgetpu.tflite'
# Run inference with the exported model
yolo predict model=yolo11n_full_integer_quant_edgetpu.tflite source='https://ultralytics.com/images/bus.jpg'
```
For complete details on exporting models to other formats, refer to our [export guide](../modes/export.md).
### What are the benefits of exporting YOLO11 models to TFLite Edge TPU?
Exporting YOLO11 models to TFLite Edge TPU offers several benefits:
- **Optimized Performance**: Achieve high-speed neural network performance with minimal power consumption.
- **Reduced Latency**: Quick local data processing without the need for cloud dependency.
- **Enhanced Privacy**: Local processing keeps user data private and secure.
This makes it ideal for applications in [edge computing](https://www.ultralytics.com/glossary/edge-computing), where devices have limited power and computational resources. Learn more about [why you should export](#why-should-you-export-to-tflite-edge-tpu).
### Can I deploy TFLite Edge TPU models on mobile and embedded devices?
Yes, TensorFlow Lite Edge TPU models can be deployed directly on mobile and embedded devices. This deployment approach allows models to execute directly on the hardware, offering faster and more efficient inferencing. For integration examples, check our [guide on deploying Coral Edge TPU on Raspberry Pi](../guides/coral-edge-tpu-on-raspberry-pi.md).
### What are some common use cases for TFLite Edge TPU models?
Common use cases for TFLite Edge TPU models include:
- **Smart Cameras**: Enhancing real-time image and video analysis.
- **IoT Devices**: Enabling smart home and industrial automation.
- **Healthcare**: Accelerating medical imaging and diagnostics.
- **Retail**: Improving inventory management and customer behavior analysis.
These applications benefit from the high performance and low power consumption of TFLite Edge TPU models. Discover more about [usage scenarios](#deployment-options-with-tflite-edge-tpu).
### How can I troubleshoot issues while exporting or deploying TFLite Edge TPU models?
If you encounter issues while exporting or deploying TFLite Edge TPU models, refer to our [Common Issues guide](../guides/yolo-common-issues.md) for troubleshooting tips. This guide covers common problems and solutions to help you ensure smooth operation. For additional support, visit our [Help Center](https://docs.ultralytics.com/help/).
---
comments: true
description: Learn how to efficiently train Ultralytics YOLO11 models using Google Colab's powerful cloud-based environment. Start your project with ease.
keywords: YOLO11, Google Colab, machine learning, deep learning, model training, GPU, TPU, cloud computing, Jupyter Notebook, Ultralytics
---
# Accelerating YOLO11 Projects with Google Colab
Many developers lack the powerful computing resources needed to build [deep learning](https://www.ultralytics.com/glossary/deep-learning-dl) models. Acquiring high-end hardware or renting a decent GPU can be expensive. Google Colab is a great solution to this. It's a browser-based platform that allows you to work with large datasets, develop complex models, and share your work with others without a huge cost.
You can use Google Colab to work on projects related to [Ultralytics YOLO11](https://github.com/ultralytics/ultralytics) models. Google Colab's user-friendly environment is well suited for efficient model development and experimentation. Let's learn more about Google Colab, its key features, and how you can use it to train YOLO11 models.
## Google Colaboratory
Google Colaboratory, commonly known as Google Colab, was developed by Google Research in 2017. It is a free online cloud-based Jupyter Notebook environment that allows you to train your [machine learning](https://www.ultralytics.com/glossary/machine-learning-ml) and deep learning models on CPUs, GPUs, and TPUs. The motivation behind developing Google Colab was Google's broader goals to advance AI technology and educational tools, and encourage the use of cloud services.
You can use Google Colab regardless of the specifications and configurations of your local computer. All you need is a Google account and a web browser, and you're good to go.
## Training YOLO11 Using Google Colaboratory
Training YOLO11 models on Google Colab is pretty straightforward. Thanks to the integration, you can access the [Google Colab YOLO11 Notebook](https://colab.research.google.com/github/ultralytics/ultralytics/blob/main/examples/tutorial.ipynb) and start training your model immediately. For a detailed understanding of the model training process and best practices, refer to our [YOLO11 Model Training guide](../modes/train.md).
Sign in to your Google account and run the notebook's cells to train your model.
![Training YOLO11 Using Google Colab](https://github.com/ultralytics/docs/releases/download/0/training-yolov8-using-google-colab.avif)
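For reference, the notebook's core training cells boil down to something like the following sketch (exact cell contents may differ from the current notebook version):

```python
# In a Colab cell, install Ultralytics first:
# !pip install ultralytics

from ultralytics import YOLO

# Load a pretrained model and run a short training session on the Colab runtime
model = YOLO("yolo11n.pt")
results = model.train(data="coco8.yaml", epochs=3, imgsz=640)
```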
Learn how to train a YOLO11 model with custom data on YouTube with Nicolai. Check out the video below.
<p align="center">
<br>
<iframe loading="lazy" width="720" height="405" src="https://www.youtube.com/embed/LNwODJXcvt4?si=lB9UAc4hatSSEr2a"
title="YouTube video player" frameborder="0"
allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share"
allowfullscreen>
</iframe>
<br>
<strong>Watch:</strong> How to Train Ultralytics YOLO11 models on Your Custom Dataset in Google Colab | Episode 3
</p>
### Common Questions While Working with Google Colab
When working with Google Colab, you might have a few common questions. Let's answer them.
**Q: Why does my Google Colab session timeout?**
A: Google Colab sessions can time out due to inactivity, especially for free users who have a limited session duration.
**Q: Can I increase the session duration in Google Colab?**
A: Free users face limits, but Google Colab Pro offers extended session durations.
**Q: What should I do if my session closes unexpectedly?**
A: Regularly save your work to Google Drive or GitHub to avoid losing unsaved progress.
**Q: How can I check my session status and resource usage?**
A: Colab provides 'RAM Usage' and 'Disk Usage' metrics in the interface to monitor your resources.
**Q: Can I run multiple Colab sessions simultaneously?**
A: Yes, but be cautious about resource usage to avoid performance issues.
**Q: Does Google Colab have GPU access limitations?**
A: Yes, free GPU access has limitations, but Google Colab Pro provides more substantial usage options.
## Key Features of Google Colab
Now, let's look at some of the standout features that make Google Colab a go-to platform for machine learning projects:
- **Library Support:** Google Colab includes pre-installed libraries for data analysis and machine learning and allows additional libraries to be installed as needed. It also supports various libraries for creating interactive charts and visualizations.
- **Hardware Resources:** Users can also switch between different hardware options by modifying the runtime settings as shown below. Google Colab provides access to advanced hardware like Tesla K80 GPUs and TPUs, which are specialized circuits designed specifically for machine learning tasks. You can verify the active accelerator with the sketch after this list.
![Runtime Settings](https://github.com/ultralytics/docs/releases/download/0/runtime-settings.avif)
- **Collaboration:** Google Colab makes collaborating and working with other developers easy. You can easily share your notebooks with others and perform edits in real-time.
- **Custom Environment:** Users can install dependencies, configure the system, and use shell commands directly in the notebook.
- **Educational Resources:** Google Colab offers a range of tutorials and example notebooks to help users learn and explore various functionalities.
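As mentioned in the hardware resources point above, you can confirm which accelerator your runtime is using with a quick check (a minimal sketch using PyTorch, which Colab ships preinstalled):

```python
import torch

# Report whether the Colab runtime has a CUDA GPU attached
if torch.cuda.is_available():
    print(f"GPU available: {torch.cuda.get_device_name(0)}")
else:
    print("No GPU detected; switch the runtime type to GPU for faster training")
```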
## Why Should You Use Google Colab for Your YOLO11 Projects?
There are many options for training and evaluating YOLO11 models, so what makes the integration with Google Colab unique? Let's explore the advantages of this integration:
- **Zero Setup:** Since Colab runs in the cloud, users can start training models immediately without the need for complex environment setups. Just create an account and start coding.
- **Form Support:** It allows users to create forms for parameter input, making it easier to experiment with different values.
- **Integration with Google Drive:** Colab seamlessly integrates with Google Drive to make data storage, access, and management simple. Datasets and models can be stored and retrieved directly from Google Drive, as the mounting sketch after this list shows.
- **Markdown Support:** You can use Markdown format for enhanced documentation within notebooks.
- **Scheduled Execution:** Developers can set notebooks to run automatically at specified times.
- **Extensions and Widgets:** Google Colab allows for adding functionality through third-party extensions and interactive widgets.
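For the Google Drive integration mentioned above, mounting your Drive inside a Colab notebook takes two lines (the `google.colab` module is only available inside Colab):

```python
from google.colab import drive

# Mount Google Drive so datasets and weights persist across sessions
drive.mount("/content/drive")
```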
## Keep Learning about Google Colab
If you'd like to dive deeper into Google Colab, here are a few resources to guide you.
- **[Training Custom Datasets with Ultralytics YOLO11 in Google Colab](https://www.ultralytics.com/blog/training-custom-datasets-with-ultralytics-yolov8-in-google-colab)**: Learn how to train custom datasets with Ultralytics YOLO11 on Google Colab. This comprehensive blog post will take you through the entire process, from initial setup to the training and evaluation stages.
- **[Curated Notebooks](https://colab.google/notebooks/)**: Here you can explore a series of organized and educational notebooks, each grouped by specific topic areas.
- **[Google Colab's Medium Page](https://medium.com/google-colab)**: You can find tutorials, updates, and community contributions here that can help you better understand and utilize this tool.
## Summary
We've discussed how you can easily experiment with Ultralytics YOLO11 models on Google Colab. You can use Google Colab to train and evaluate your models on GPUs and TPUs with a few clicks.
For more details, visit [Google Colab's FAQ page](https://research.google.com/colaboratory/intl/en-GB/faq.html).
Interested in more YOLO11 integrations? Visit the [Ultralytics integration guide page](index.md) to explore additional tools and capabilities that can improve your machine-learning projects.
## FAQ
### How do I start training Ultralytics YOLO11 models on Google Colab?
To start training Ultralytics YOLO11 models on Google Colab, sign in to your Google account, then access the [Google Colab YOLO11 Notebook](https://colab.research.google.com/github/ultralytics/ultralytics/blob/main/examples/tutorial.ipynb). This notebook guides you through the setup and training process. After launching the notebook, run the cells step-by-step to train your model. For a full guide, refer to the [YOLO11 Model Training guide](../modes/train.md).
### What are the advantages of using Google Colab for training YOLO11 models?
Google Colab offers several advantages for training YOLO11 models:
- **Zero Setup:** No initial environment setup is required; just log in and start coding.
- **Free GPU Access:** Use powerful GPUs or TPUs without the need for expensive hardware.
- **Integration with Google Drive:** Easily store and access datasets and models.
- **Collaboration:** Share notebooks with others and collaborate in real-time.
For more information on why you should use Google Colab, explore the [training guide](../modes/train.md) and visit the [Google Colab page](https://colab.google/notebooks/).
### How can I handle Google Colab session timeouts during YOLO11 training?
Google Colab sessions time out due to inactivity, especially for free users. To handle this:
1. **Stay Active:** Regularly interact with your Colab notebook.
2. **Save Progress:** Continuously save your work to Google Drive or GitHub.
3. **Colab Pro:** Consider upgrading to Google Colab Pro for longer session durations.
For more tips on managing your Colab session, visit the [Google Colab FAQ page](https://research.google.com/colaboratory/intl/en-GB/faq.html).
### Can I use custom datasets for training YOLO11 models in Google Colab?
Yes, you can use custom datasets to train YOLO11 models in Google Colab. Upload your dataset to Google Drive and load it directly into your Colab notebook. You can follow Nicolai's YouTube guide, [How to Train YOLO11 Models on Your Custom Dataset](https://www.youtube.com/watch?v=LNwODJXcvt4), or refer to the [Custom Dataset Training guide](https://www.ultralytics.com/blog/training-custom-datasets-with-ultralytics-yolov8-in-google-colab) for detailed steps.
### What should I do if my Google Colab training session is interrupted?
If your Google Colab training session is interrupted:
1. **Save Regularly:** Avoid losing unsaved progress by regularly saving your work to Google Drive or GitHub.
2. **Resume Training:** Restart your session and re-run the cells from where the interruption occurred.
3. **Use Checkpoints:** Incorporate checkpointing in your training script to save progress periodically, as in the sketch below.
These practices help ensure your progress is secure. Learn more about session management on [Google Colab's FAQ page](https://research.google.com/colaboratory/intl/en-GB/faq.html).
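For the checkpointing point above, Ultralytics saves a `last.pt` checkpoint during training by default, so resuming an interrupted run can be as short as the sketch below (the run directory shown is the default Ultralytics layout and may differ in your setup):

```python
from ultralytics import YOLO

# Load the last checkpoint saved before the interruption and resume training
model = YOLO("runs/detect/train/weights/last.pt")
model.train(resume=True)
```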
---
comments: true
description: Discover an interactive way to perform object detection with Ultralytics YOLO11 using Gradio. Upload images and adjust settings for real-time results.
keywords: Ultralytics, YOLO11, Gradio, object detection, interactive, real-time, image processing, AI
---
# Interactive [Object Detection](https://www.ultralytics.com/glossary/object-detection): Gradio & Ultralytics YOLO11 🚀
## Introduction to Interactive Object Detection
This Gradio interface provides an easy and interactive way to perform object detection using the [Ultralytics YOLO11](https://github.com/ultralytics/ultralytics/) model. Users can upload images and adjust parameters like confidence threshold and intersection-over-union (IoU) threshold to get real-time detection results.
<p align="center">
<br>
<iframe loading="lazy" width="720" height="405" src="https://www.youtube.com/embed/pWYiene9lYw"
title="YouTube video player" frameborder="0"
allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share"
allowfullscreen>
</iframe>
<br>
<strong>Watch:</strong> Gradio Integration with Ultralytics YOLO11
</p>
## Why Use Gradio for Object Detection?
- **User-Friendly Interface:** Gradio offers a straightforward platform for users to upload images and visualize detection results without any coding requirement.
- **Real-Time Adjustments:** Parameters such as confidence and IoU thresholds can be adjusted on the fly, allowing for immediate feedback and optimization of detection results.
- **Broad Accessibility:** The Gradio web interface can be accessed by anyone, making it an excellent tool for demonstrations, educational purposes, and quick experiments.
<p align="center">
<img width="800" alt="Gradio example screenshot" src="https://github.com/ultralytics/docs/releases/download/0/gradio-example-screenshot.avif">
</p>
## How to Install Gradio
```bash
pip install gradio
```
## How to Use the Interface
1. **Upload Image:** Click on 'Upload Image' to choose an image file for object detection.
2. **Adjust Parameters:**
- **Confidence Threshold:** Slider to set the minimum confidence level for detecting objects.
- **IoU Threshold:** Slider to set the IoU threshold for distinguishing different objects.
3. **View Results:** The processed image with detected objects and their labels will be displayed.
## Example Use Cases
- **Sample Image 1:** Bus detection with default thresholds.
- **Sample Image 2:** Detection on a sports image with default thresholds.
## Usage Example
This section provides the Python code used to create the Gradio interface with the Ultralytics YOLO11 model. It supports classification, detection, segmentation, and keypoint tasks.
```python
import gradio as gr
import PIL.Image as Image

from ultralytics import ASSETS, YOLO

model = YOLO("yolo11n.pt")


def predict_image(img, conf_threshold, iou_threshold):
    """Predicts objects in an image using a YOLO11 model with adjustable confidence and IoU thresholds."""
    results = model.predict(
        source=img,
        conf=conf_threshold,
        iou=iou_threshold,
        show_labels=True,
        show_conf=True,
        imgsz=640,
    )

    # plot() returns a BGR numpy array; reverse the channels to build an RGB PIL image
    for r in results:
        im_array = r.plot()
        im = Image.fromarray(im_array[..., ::-1])

    return im


iface = gr.Interface(
    fn=predict_image,
    inputs=[
        gr.Image(type="pil", label="Upload Image"),
        gr.Slider(minimum=0, maximum=1, value=0.25, label="Confidence threshold"),
        gr.Slider(minimum=0, maximum=1, value=0.45, label="IoU threshold"),
    ],
    outputs=gr.Image(type="pil", label="Result"),
    title="Ultralytics Gradio",
    description="Upload images for inference. The Ultralytics YOLO11n model is used by default.",
    examples=[
        [ASSETS / "bus.jpg", 0.25, 0.45],
        [ASSETS / "zidane.jpg", 0.25, 0.45],
    ],
)

if __name__ == "__main__":
    iface.launch()
```
## Parameters Explanation
| Parameter Name | Type | Description |
| ---------------- | ------- | -------------------------------------------------------- |
| `img` | `Image` | The image on which object detection will be performed. |
| `conf_threshold` | `float` | Confidence threshold for detecting objects. |
| `iou_threshold` | `float` | Intersection-over-union threshold for object separation. |
### Gradio Interface Components
| Component | Description |
| ------------ | ---------------------------------------- |
| Image Input | To upload the image for detection. |
| Sliders | To adjust confidence and IoU thresholds. |
| Image Output | To display the detection results. |
## FAQ
### How do I use Gradio with Ultralytics YOLO11 for object detection?
To use Gradio with Ultralytics YOLO11 for object detection, you can follow these steps:
1. **Install Gradio:** Use the command `pip install gradio`.
2. **Create Interface:** Write a Python script to initialize the Gradio interface. You can refer to the provided code example in the [documentation](#usage-example) for details.
3. **Upload and Adjust:** Upload your image and adjust the confidence and IoU thresholds on the Gradio interface to get real-time object detection results.
Here's a minimal code snippet for reference:
```python
import gradio as gr
import PIL.Image as Image

from ultralytics import YOLO

model = YOLO("yolo11n.pt")


def predict_image(img, conf_threshold, iou_threshold):
    """Run YOLO11 prediction with adjustable confidence and IoU thresholds."""
    results = model.predict(
        source=img,
        conf=conf_threshold,
        iou=iou_threshold,
        show_labels=True,
        show_conf=True,
    )
    # plot() returns a BGR numpy array; convert it to an RGB PIL image for Gradio
    return Image.fromarray(results[0].plot()[..., ::-1]) if results else None


iface = gr.Interface(
    fn=predict_image,
    inputs=[
        gr.Image(type="pil", label="Upload Image"),
        gr.Slider(minimum=0, maximum=1, value=0.25, label="Confidence threshold"),
        gr.Slider(minimum=0, maximum=1, value=0.45, label="IoU threshold"),
    ],
    outputs=gr.Image(type="pil", label="Result"),
    title="Ultralytics Gradio YOLO11",
    description="Upload images for YOLO11 object detection.",
)

iface.launch()
```
### What are the benefits of using Gradio for Ultralytics YOLO11 object detection?
Using Gradio for Ultralytics YOLO11 object detection offers several benefits:
- **User-Friendly Interface:** Gradio provides an intuitive interface for users to upload images and visualize detection results without any coding effort.
- **Real-Time Adjustments:** You can dynamically adjust detection parameters such as confidence and IoU thresholds and see the effects immediately.
- **Accessibility:** The web interface is accessible to anyone, making it useful for quick experiments, educational purposes, and demonstrations.
For more details, you can read this [blog post](https://www.ultralytics.com/blog/ai-and-radiology-a-new-era-of-precision-and-efficiency).
### Can I use Gradio and Ultralytics YOLO11 together for educational purposes?
Yes, Gradio and Ultralytics YOLO11 can be utilized together for educational purposes effectively. Gradio's intuitive web interface makes it easy for students and educators to interact with state-of-the-art [deep learning](https://www.ultralytics.com/glossary/deep-learning-dl) models like Ultralytics YOLO11 without needing advanced programming skills. This setup is ideal for demonstrating key concepts in object detection and [computer vision](https://www.ultralytics.com/glossary/computer-vision-cv), as Gradio provides immediate visual feedback which helps in understanding the impact of different parameters on the detection performance.
### How do I adjust the confidence and IoU thresholds in the Gradio interface for YOLO11?
In the Gradio interface for YOLO11, you can adjust the confidence and IoU thresholds using the sliders provided. These thresholds help control the prediction [accuracy](https://www.ultralytics.com/glossary/accuracy) and object separation:
- **Confidence Threshold:** Determines the minimum confidence level for detecting objects. Slide to increase or decrease the confidence required.
- **IoU Threshold:** Sets the intersection-over-union threshold for distinguishing between overlapping objects. Adjust this value to refine object separation.
For more information on these parameters, visit the [parameters explanation section](#parameters-explanation).
### What are some practical applications of using Ultralytics YOLO11 with Gradio?
Practical applications of combining Ultralytics YOLO11 with Gradio include:
- **Real-Time Object Detection Demonstrations:** Ideal for showcasing how object detection works in real-time.
- **Educational Tools:** Useful in academic settings to teach object detection and computer vision concepts.
- **Prototype Development:** Efficient for developing and testing prototype object detection applications quickly.
- **Community and Collaborations:** Making it easy to share models with the community for feedback and collaboration.
For examples of similar use cases, check out the [Ultralytics blog](https://www.ultralytics.com/blog/monitoring-animal-behavior-using-ultralytics-yolov8).
---
comments: true
description: Dive into our detailed integration guide on using IBM Watson to train a YOLO11 model. Uncover key features and step-by-step instructions on model training.
keywords: IBM Watsonx, IBM Watsonx AI, What is Watson?, IBM Watson Integration, IBM Watson Features, YOLO11, Ultralytics, Model Training, GPU, TPU, cloud computing
---
# A Step-by-Step Guide to Training YOLO11 Models with IBM Watsonx
Nowadays, scalable [computer vision solutions](../guides/steps-of-a-cv-project.md) are becoming more common and transforming the way we handle visual data. A great example is IBM Watsonx, an advanced AI and data platform that simplifies the development, deployment, and management of AI models. It offers a complete suite for the entire AI lifecycle and seamless integration with IBM Cloud services.
You can train [Ultralytics YOLO11 models](https://github.com/ultralytics/ultralytics) using IBM Watsonx. It's a good option for enterprises interested in efficient [model training](../modes/train.md), fine-tuning for specific tasks, and improving [model performance](../guides/model-evaluation-insights.md) with robust tools and a user-friendly setup. In this guide, we'll walk you through the process of training YOLO11 with IBM Watsonx, covering everything from setting up your environment to evaluating your trained models. Let's get started!
## What is IBM Watsonx?
[Watsonx](https://www.ibm.com/watsonx) is IBM's cloud-based platform designed for commercial [generative AI](https://www.ultralytics.com/glossary/generative-ai) and scientific data. IBM Watsonx's three components - watsonx.ai, watsonx.data, and watsonx.governance - come together to create an end-to-end, trustworthy AI platform that can accelerate AI projects aimed at solving business problems. It provides powerful tools for building, training, and [deploying machine learning models](../guides/model-deployment-options.md) and makes it easy to connect with various data sources.
<p align="center">
<img width="800" src="https://github.com/ultralytics/docs/releases/download/0/overview-of-ibm-watsonx.avif" alt="Overview of IBM Watsonx">
</p>
Its user-friendly interface and collaborative capabilities streamline the development process and help with efficient model management and deployment. Whether for computer vision, predictive analytics, [natural language processing](https://www.ultralytics.com/glossary/natural-language-processing-nlp), or other AI applications, IBM Watsonx provides the tools and support needed to drive innovation.
## Key Features of IBM Watsonx
IBM Watsonx is made of three main components: watsonx.ai, watsonx.data, and watsonx.governance. Each component offers features that cater to different aspects of AI and data management. Let's take a closer look at them.
### [Watsonx.ai](https://www.ibm.com/products/watsonx-ai)
Watsonx.ai provides powerful tools for AI development and offers access to IBM-supported custom models, third-party models like [Llama 3](https://www.ultralytics.com/blog/getting-to-know-metas-llama-3), and IBM's own Granite models. It includes the Prompt Lab for experimenting with AI prompts, the Tuning Studio for improving model performance with labeled data, and the Flows Engine for simplifying generative AI application development. Also, it offers comprehensive tools for automating the AI model lifecycle and connecting to various APIs and libraries.
### [Watsonx.data](https://www.ibm.com/products/watsonx-data)
Watsonx.data supports both cloud and on-premises deployments through the IBM Storage Fusion HCI integration. Its user-friendly console provides centralized access to data across environments and makes data exploration easy with common SQL. It optimizes workloads with efficient query engines like Presto and Spark, accelerates data insights with an AI-powered semantic layer, includes a vector database for AI relevance, and supports open data formats for easy sharing of analytics and AI data.
### [Watsonx.governance](https://www.ibm.com/products/watsonx-governance)
Watsonx.governance makes compliance easier by automatically identifying regulatory changes and enforcing policies. It links requirements to internal risk data and provides up-to-date AI factsheets. The platform helps manage risk with alerts and tools to detect issues such as [bias and drift](../guides/model-monitoring-and-maintenance.md). It also automates the monitoring and documentation of the AI lifecycle, organizes AI development with a model inventory, and enhances collaboration with user-friendly dashboards and reporting tools.
## How to Train YOLO11 Using IBM Watsonx
You can use IBM Watsonx to accelerate your YOLO11 model training workflow.
### Prerequisites
You need an [IBM Cloud account](https://cloud.ibm.com/registration) to create a [watsonx.ai](https://www.ibm.com/products/watsonx-ai) project, and you'll also need a [Kaggle](./kaggle.md) account to load the data set.
### Step 1: Set Up Your Environment
First, you'll need to set up an IBM account to use a Jupyter Notebook. Log in to [watsonx.ai](https://eu-de.dataplatform.cloud.ibm.com/registration/stepone?preselect_region=true) using your IBM Cloud account.
Then, create a [watsonx.ai project](https://www.ibm.com/docs/en/watsonx/saas?topic=projects-creating-project), and a [Jupyter Notebook](https://www.ibm.com/docs/en/watsonx/saas?topic=editor-creating-managing-notebooks).
Once you do so, a notebook environment will open for you to load your data set. You can use the code from this tutorial to tackle a simple object detection model training task.
### Step 2: Install and Import Relevant Libraries
Next, you can install and import the necessary Python libraries.
!!! tip "Installation"
=== "CLI"
```bash
# Install the required packages
pip install torch torchvision torchaudio
pip install opencv-contrib-python-headless
pip install ultralytics==8.0.196
```
For detailed instructions and best practices related to the installation process, check our [Ultralytics Installation guide](../quickstart.md). While installing the required packages for YOLO11, if you encounter any difficulties, consult our [Common Issues guide](../guides/yolo-common-issues.md) for solutions and tips.
Then, you can import the needed packages.
!!! example "Import Relevant Libraries"
=== "Python"
```python
# Import ultralytics and run environment checks
import ultralytics

ultralytics.checks()

# Import packages used later to load the dataset and edit its config file
import os
import shutil

import yaml
```
### Step 3: Load the Data
For this tutorial, we will use a [marine litter dataset](https://www.kaggle.com/datasets/atiqishrak/trash-dataset-icra19) available on Kaggle. With this dataset, we will custom-train a YOLO11 model to detect and classify litter and biological objects in underwater images.
We can load the dataset directly into the notebook using the Kaggle API. First, create a free Kaggle account. Once you have created an account, you'll need to generate an API key. Directions for generating your key can be found in the [Kaggle API documentation](https://github.com/Kaggle/kaggle-api/blob/main/docs/README.md) under the section "API credentials".
Copy and paste your Kaggle username and API key into the following code. Then run the code to install the API and load the dataset into Watsonx.
!!! tip "Installation"
=== "CLI"
```bash
# Install kaggle
pip install kaggle
```
After installing Kaggle, we can load the dataset into Watsonx.
!!! example "Load the Data"
=== "Python"
```python
# Replace "username" string with your username
os.environ["KAGGLE_USERNAME"] = "username"
# Replace "apiKey" string with your key
os.environ["KAGGLE_KEY"] = "apiKey"
# Load dataset
os.system("kaggle datasets download atiqishrak/trash-dataset-icra19 --unzip")
# Store working directory path as work_dir
work_dir = os.getcwd()
# Print work_dir path
print(os.getcwd())
# Print work_dir contents
print(os.listdir(f"{work_dir}"))
# Print trash_ICRA19 subdirectory contents
print(os.listdir(f"{work_dir}/trash_ICRA19"))
```
After loading the dataset, we saved and printed our working directory path. We also printed the contents of our working directory to confirm the "trash_ICRA19" data set was loaded properly.
If you see "trash_ICRA19" among the directory's contents, then it has loaded successfully. You should see three files/folders: a `config.yaml` file, a `videos_for_testing` directory, and a `dataset` directory. We will ignore the `videos_for_testing` directory, so feel free to delete it.
We will use the config.yaml file and the contents of the dataset directory to train our [object detection](https://www.ultralytics.com/glossary/object-detection) model. Here is a sample image from our marine litter data set.
<p align="center">
<img width="400" src="https://github.com/ultralytics/docs/releases/download/0/marine-litter-bounding-box.avif" alt="Marine Litter with Bounding Box">
</p>
### Step 4: Preprocess the Data
Fortunately, all labels in the marine litter data set are already formatted as YOLO .txt files. However, we need to rearrange the structure of the image and label directories in order to help our model process the image and labels. Right now, our loaded data set directory follows this structure:
<p align="center">
<img width="400" src="https://github.com/ultralytics/docs/releases/download/0/marine-litter-bounding-box-1.avif" alt="Loaded Dataset Directory">
</p>
However, YOLO models by default expect images and labels to live in separate `images` and `labels` subdirectories within each train/val/test split. We need to reorganize the directory into the following structure:
<p align="center">
<img width="400" src="https://github.com/ultralytics/docs/releases/download/0/yolo-directory-structure.avif" alt="Yolo Directory Structure">
</p>
To reorganize the data set directory, we can run the following script:
!!! example "Preprocess the Data"
=== "Python"
```python
# Function to reorganize dir
def organize_files(directory):
    for subdir in ["train", "test", "val"]:
        subdir_path = os.path.join(directory, subdir)
        if not os.path.exists(subdir_path):
            continue

        images_dir = os.path.join(subdir_path, "images")
        labels_dir = os.path.join(subdir_path, "labels")

        # Create image and label subdirs if non-existent
        os.makedirs(images_dir, exist_ok=True)
        os.makedirs(labels_dir, exist_ok=True)

        # Move images and labels to respective subdirs
        for filename in os.listdir(subdir_path):
            if filename.endswith(".txt"):
                shutil.move(os.path.join(subdir_path, filename), os.path.join(labels_dir, filename))
            elif filename.endswith(".jpg") or filename.endswith(".png") or filename.endswith(".jpeg"):
                shutil.move(os.path.join(subdir_path, filename), os.path.join(images_dir, filename))
            # Delete .xml files
            elif filename.endswith(".xml"):
                os.remove(os.path.join(subdir_path, filename))


if __name__ == "__main__":
    directory = f"{work_dir}/trash_ICRA19/dataset"
    organize_files(directory)
```
Next, we need to modify the `.yaml` file for the dataset. This is the setup we will use in our `.yaml` file. Class ID numbers start from 0:
```yaml
path: /path/to/dataset/directory # root directory for dataset
train: train/images # train images subdirectory
val: train/images # validation images subdirectory
test: test/images # test images subdirectory
# Classes
names:
0: plastic
1: bio
2: rov
```
Run the following script to delete the current contents of config.yaml and replace them with the above contents, which reflect our new dataset directory structure. Be certain to replace the `work_dir` portion of the root directory path in the `path` value with your own working directory path, which we retrieved earlier. Leave the `train`, `val`, and `test` subdirectory definitions unchanged. Also, do not change the `{work_dir}` placeholder in the `file_path` assignment.
!!! example "Edit the .yaml File"
=== "Python"
```python
import yaml


# Write the new config.yaml file contents
def update_yaml_file(file_path):
data = {
"path": "work_dir/trash_ICRA19/dataset",
"train": "train/images",
"val": "train/images",
"test": "test/images",
"names": {0: "plastic", 1: "bio", 2: "rov"},
}
# Ensures the "names" list appears after the sub/directories
names_data = data.pop("names")
with open(file_path, "w") as yaml_file:
yaml.dump(data, yaml_file)
yaml_file.write("\n")
yaml.dump({"names": names_data}, yaml_file)
if __name__ == "__main__":
file_path = f"{work_dir}/trash_ICRA19/config.yaml" # .yaml file path
update_yaml_file(file_path)
print(f"{file_path} updated successfully.")
```
### Step 5: Train the YOLO11 model
Run the following command-line code to fine-tune a pretrained YOLO11 model.
!!! example "Train the YOLO11 model"
=== "CLI"
```bash
!yolo task=detect mode=train data={work_dir}/trash_ICRA19/config.yaml model=yolo11n.pt epochs=2 batch=32 lr0=.04 plots=True
```
Here's a closer look at the parameters in the model training command:
- **task**: Specifies the [computer vision](https://www.ultralytics.com/glossary/computer-vision-cv) task for which you are using the specified YOLO model and dataset.
- **mode**: Denotes the purpose for which you are loading the specified model and data. Since we are training a model, it is set to "train." Later, when we test our model's performance, we will set it to "predict."
- **epochs**: Sets the number of complete passes YOLO11 makes through the entire dataset.
- **batch**: Sets the training [batch size](https://www.ultralytics.com/glossary/batch-size), i.e., the number of images the model processes before it updates its parameters.
- **lr0**: Specifies the model's initial [learning rate](https://www.ultralytics.com/glossary/learning-rate).
- **plots**: Directs YOLO to generate and save plots of our model's training and evaluation metrics.
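If you prefer the Python API, an equivalent training call looks roughly like this. This is a minimal sketch: it assumes `work_dir` is still defined from Step 3 and that the config file path matches the one used in the CLI command above.

```python
from ultralytics import YOLO

# Load a pretrained YOLO11 nano model
model = YOLO("yolo11n.pt")

# Fine-tune with the same settings as the CLI command above
results = model.train(
    data=f"{work_dir}/trash_ICRA19/config.yaml",  # assumes work_dir is defined as in Step 3
    epochs=2,
    batch=32,
    lr0=0.04,
    plots=True,
)
```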
For a detailed understanding of the model training process and best practices, refer to the [YOLO11 Model Training guide](../modes/train.md). This guide will help you get the most out of your experiments and ensure you're using YOLO11 effectively.
### Step 6: Test the Model
We can now run inference to test the performance of our fine-tuned model:
!!! example "Test the YOLO11 model"
=== "CLI"
```bash
!yolo task=detect mode=predict source={work_dir}/trash_ICRA19/dataset/test/images model={work_dir}/runs/detect/train/weights/best.pt conf=0.5 iou=.5 save=True save_txt=True
```
This command generates predicted labels for each image in our test set, as well as new output image files that overlay the predicted [bounding box](https://www.ultralytics.com/glossary/bounding-box) atop the original image.
Predicted .txt labels for each image are saved via the `save_txt=True` argument and the output images with bounding box overlays are generated through the `save=True` argument.
The parameter `conf=0.5` informs the model to ignore all predictions with a confidence level of less than 50%.
Lastly, `iou=.5` directs the model to ignore boxes in the same class with an overlap of 50% or greater. It helps to reduce potential duplicate boxes generated for the same object.
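For reference, the same inference run can be expressed with the Python API. This sketch makes the same path assumptions as the CLI command above, including that `work_dir` is still defined:

```python
from ultralytics import YOLO

# Load the fine-tuned weights from the training run
model = YOLO(f"{work_dir}/runs/detect/train/weights/best.pt")

# Predict on the test images with the same thresholds as the CLI command
results = model.predict(
    source=f"{work_dir}/trash_ICRA19/dataset/test/images",
    conf=0.5,
    iou=0.5,
    save=True,
    save_txt=True,
)
```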
We can load the images with predicted bounding box overlays to view how our model performs on a handful of images.
!!! example "Display Predictions"
=== "Python"
```python
import glob

from IPython.display import display
from PIL import Image

# Show the first ten images from the preceding prediction task
for pred_path in glob.glob(f"{work_dir}/runs/detect/predict/*.jpg")[:10]:
    img = Image.open(pred_path)
display(img)
```
The code above displays ten images from the test set with their predicted bounding boxes, accompanied by class name labels and confidence levels.
### Step 7: Evaluate the Model
We can produce visualizations of the model's [precision](https://www.ultralytics.com/glossary/precision) and recall for each class. These visualizations are saved in the home directory, under the train folder. The precision score is displayed in the P_curve.png:
<p align="center">
<img width="800" src="https://github.com/ultralytics/docs/releases/download/0/precision-confidence-curve.avif" alt="Precision Confidence Curve">
</p>
The graph shows precision increasing sharply as the model's confidence in its predictions increases. However, after only two [epochs](https://www.ultralytics.com/glossary/epoch), precision has not yet leveled off at any particular confidence level.
The [recall](https://www.ultralytics.com/glossary/recall) graph (R_curve.png) displays an inverse trend:
<p align="center">
<img width="800" src="https://github.com/ultralytics/docs/releases/download/0/recall-confidence-curve.avif" alt="Recall Confidence Curve">
</p>
Recall moves in the opposite direction to precision: it is higher for low-confidence predictions and lower for high-confidence ones. This is a clear example of the precision-recall trade-off in classification models.
### Step 8: Calculating [Intersection Over Union](https://www.ultralytics.com/glossary/intersection-over-union-iou)
You can measure the prediction [accuracy](https://www.ultralytics.com/glossary/accuracy) by calculating the IoU between a predicted bounding box and a ground truth bounding box for the same object. Check out [IBM's tutorial on training YOLO11](https://developer.ibm.com/tutorials/awb-train-yolo-object-detection-model-in-python/) for more details.
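As an illustration, here is a minimal sketch of an IoU calculation for two boxes in `[x1, y1, x2, y2]` (corner) format. The function name and box format are our own choices for this example, not part of the YOLO CLI:

```python
def bbox_iou(box_a, box_b):
    """Compute IoU between two boxes given as [x1, y1, x2, y2]."""
    # Coordinates of the intersection rectangle
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])

    # Intersection area is zero when the boxes do not overlap
    inter = max(0, x2 - x1) * max(0, y2 - y1)

    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])

    return inter / (area_a + area_b - inter)


print(bbox_iou([0, 0, 10, 10], [5, 5, 15, 15]))  # 0.142857...
```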
## Summary
We explored IBM Watsonx's key features and how to train a YOLO11 model using IBM Watsonx. We also saw how IBM Watsonx can enhance your AI workflows with advanced tools for model building, data management, and compliance.
For further details on usage, visit [IBM Watsonx official documentation](https://www.ibm.com/watsonx).
Also, be sure to check out the [Ultralytics integration guide page](./index.md), to learn more about different exciting integrations.
## FAQ
### How do I train a YOLO11 model using IBM Watsonx?
To train a YOLO11 model using IBM Watsonx, follow these steps:
1. **Set Up Your Environment**: Create an IBM Cloud account and set up a Watsonx.ai project. Use a Jupyter Notebook for your coding environment.
2. **Install Libraries**: Install necessary libraries like `torch`, `opencv`, and `ultralytics`.
3. **Load Data**: Use the Kaggle API to load your dataset into Watsonx.
4. **Preprocess Data**: Organize your dataset into the required directory structure and update the `.yaml` configuration file.
5. **Train the Model**: Use the YOLO command-line interface to train your model with specific parameters like `epochs`, `batch size`, and `learning rate`.
6. **Test and Evaluate**: Run inference to test the model and evaluate its performance using metrics like precision and recall.
For detailed instructions, refer to our [YOLO11 Model Training guide](../modes/train.md).
### What are the key features of IBM Watsonx for AI model training?
IBM Watsonx offers several key features for AI model training:
- **Watsonx.ai**: Provides tools for AI development, including access to IBM-supported custom models and third-party models like Llama 3. It includes the Prompt Lab, Tuning Studio, and Flows Engine for comprehensive AI lifecycle management.
- **Watsonx.data**: Supports cloud and on-premises deployments, offering centralized data access, efficient query engines like Presto and Spark, and an AI-powered semantic layer.
- **Watsonx.governance**: Automates compliance, manages risk with alerts, and provides tools for detecting issues like bias and drift. It also includes dashboards and reporting tools for collaboration.
For more information, visit the [IBM Watsonx official documentation](https://www.ibm.com/watsonx).
### Why should I use IBM Watsonx for training Ultralytics YOLO11 models?
IBM Watsonx is an excellent choice for training Ultralytics YOLO11 models due to its comprehensive suite of tools that streamline the AI lifecycle. Key benefits include:
- **Scalability**: Easily scale your model training with IBM Cloud services.
- **Integration**: Seamlessly integrate with various data sources and APIs.
- **User-Friendly Interface**: Simplifies the development process with a collaborative and intuitive interface.
- **Advanced Tools**: Access to powerful tools like the Prompt Lab, Tuning Studio, and Flows Engine for enhancing model performance.
Learn more about [Ultralytics YOLO11](https://github.com/ultralytics/ultralytics) and how to train models using IBM Watsonx in our [integration guide](./index.md).
### How can I preprocess my dataset for YOLO11 training on IBM Watsonx?
To preprocess your dataset for YOLO11 training on IBM Watsonx:
1. **Organize Directories**: Ensure your dataset follows the YOLO directory structure with separate subdirectories for images and labels within the train/val/test split.
2. **Update .yaml File**: Modify the `.yaml` configuration file to reflect the new directory structure and class names.
3. **Run Preprocessing Script**: Use a Python script to reorganize your dataset and update the `.yaml` file accordingly.
Here's a sample script to organize your dataset:
```python
import os
import shutil
def organize_files(directory):
for subdir in ["train", "test", "val"]:
subdir_path = os.path.join(directory, subdir)
if not os.path.exists(subdir_path):
continue
images_dir = os.path.join(subdir_path, "images")
labels_dir = os.path.join(subdir_path, "labels")
os.makedirs(images_dir, exist_ok=True)
os.makedirs(labels_dir, exist_ok=True)
for filename in os.listdir(subdir_path):
if filename.endswith(".txt"):
shutil.move(os.path.join(subdir_path, filename), os.path.join(labels_dir, filename))
elif filename.endswith(".jpg") or filename.endswith(".png") or filename.endswith(".jpeg"):
shutil.move(os.path.join(subdir_path, filename), os.path.join(images_dir, filename))
if __name__ == "__main__":
directory = f"{work_dir}/trash_ICRA19/dataset"
organize_files(directory)
```
For more details, refer to our [data preprocessing guide](../guides/preprocessing_annotated_data.md).
### What are the prerequisites for training a YOLO11 model on IBM Watsonx?
Before you start training a YOLO11 model on IBM Watsonx, ensure you have the following prerequisites:
- **IBM Cloud Account**: Create an account on IBM Cloud to access Watsonx.ai.
- **Kaggle Account**: For loading datasets, you'll need a Kaggle account and an API key.
- **Jupyter Notebook**: Set up a Jupyter Notebook environment within Watsonx.ai for coding and model training.
For more information on setting up your environment, visit our [Ultralytics Installation guide](../quickstart.md).
---
comments: true
description: Discover Ultralytics integrations for streamlined ML workflows, dataset management, optimized model training, and robust deployment solutions.
keywords: Ultralytics, machine learning, ML workflows, dataset management, model training, model deployment, Roboflow, ClearML, Comet ML, DVC, MLFlow, Ultralytics HUB, Neptune, Ray Tune, TensorBoard, Weights & Biases, Amazon SageMaker, Paperspace Gradient, Google Colab, Neural Magic, Gradio, TorchScript, ONNX, OpenVINO, TensorRT, CoreML, TF SavedModel, TF GraphDef, TFLite, TFLite Edge TPU, TF.js, PaddlePaddle, NCNN
---
# Ultralytics Integrations
Welcome to the Ultralytics Integrations page! This page provides an overview of our partnerships with various tools and platforms, designed to streamline your [machine learning](https://www.ultralytics.com/glossary/machine-learning-ml) workflows, enhance dataset management, simplify model training, and facilitate efficient deployment.
<img width="1024" src="https://github.com/ultralytics/docs/releases/download/0/ultralytics-yolov8-ecosystem-integrations.avif" alt="Ultralytics YOLO ecosystem and integrations">
<p align="center">
<br>
<iframe loading="lazy" width="720" height="405" src="https://www.youtube.com/embed/ZzUSXQkLbNw"
title="YouTube video player" frameborder="0"
allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share"
allowfullscreen>
</iframe>
<br>
<strong>Watch:</strong> Ultralytics YOLO11 Deployment and Integrations
</p>
## Datasets Integrations
- [Roboflow](roboflow.md): Facilitate seamless dataset management for Ultralytics models, offering robust annotation, preprocessing, and augmentation capabilities.
## Training Integrations
- [Amazon SageMaker](amazon-sagemaker.md): Leverage Amazon SageMaker to efficiently build, train, and deploy Ultralytics models, providing an all-in-one platform for the ML lifecycle.
- [ClearML](clearml.md): Automate your Ultralytics ML workflows, monitor experiments, and foster team collaboration.
- [Comet ML](comet.md): Enhance your model development with Ultralytics by tracking, comparing, and optimizing your machine learning experiments.
- [DVC](dvc.md): Implement version control for your Ultralytics machine learning projects, synchronizing data, code, and models effectively.
- [Google Colab](google-colab.md): Use Google Colab to train and evaluate Ultralytics models in a cloud-based environment that supports collaboration and sharing.
- [IBM Watsonx](ibm-watsonx.md): See how IBM Watsonx simplifies the training and evaluation of Ultralytics models with its cutting-edge AI tools, effortless integration, and advanced model management system.
- [JupyterLab](jupyterlab.md): Find out how to use JupyterLab's interactive and customizable environment to train and evaluate Ultralytics models with ease and efficiency.
- [Kaggle](kaggle.md): Explore how you can use Kaggle to train and evaluate Ultralytics models in a cloud-based environment with pre-installed libraries, GPU support, and a vibrant community for collaboration and sharing.
- [MLFlow](mlflow.md): Streamline the entire ML lifecycle of Ultralytics models, from experimentation and reproducibility to deployment.
- [Neptune](https://neptune.ai/): Maintain a comprehensive log of your ML experiments with Ultralytics in this metadata store designed for MLOps.
- [Paperspace Gradient](paperspace.md): Paperspace Gradient simplifies working on YOLO11 projects by providing easy-to-use cloud tools for training, testing, and deploying your models quickly.
- [Ray Tune](ray-tune.md): Optimize the hyperparameters of your Ultralytics models at any scale.
- [TensorBoard](tensorboard.md): Visualize your Ultralytics ML workflows, monitor model metrics, and foster team collaboration.
- [Ultralytics HUB](https://hub.ultralytics.com/): Access and contribute to a community of pre-trained Ultralytics models.
- [Weights & Biases (W&B)](weights-biases.md): Monitor experiments, visualize metrics, and foster reproducibility and collaboration on Ultralytics projects.
- [VS Code](vscode.md): A VS Code extension that provides code snippets to accelerate development workflows with Ultralytics, along with examples for anyone learning or getting started with Ultralytics.
- [Albumentations](albumentations.md): Enhance your Ultralytics models with powerful image augmentations to improve model robustness and generalization.
- [SONY IMX500](sony-imx500.md): Optimize and deploy [Ultralytics YOLOv8](https://docs.ultralytics.com/models/yolov8/) models on Raspberry Pi AI Cameras with the IMX500 sensor for fast, low-power performance.
## Deployment Integrations
- [CoreML](coreml.md): CoreML, developed by [Apple](https://www.apple.com/), is a framework designed for efficiently integrating machine learning models into applications across iOS, macOS, watchOS, and tvOS, using Apple's hardware for effective and secure [model deployment](https://www.ultralytics.com/glossary/model-deployment).
- [Gradio](gradio.md) 🚀 NEW: Deploy Ultralytics models with Gradio for real-time, interactive object detection demos.
- [NCNN](ncnn.md): Developed by [Tencent](http://www.tencent.com/), NCNN is an efficient [neural network](https://www.ultralytics.com/glossary/neural-network-nn) inference framework tailored for mobile devices. It enables direct deployment of AI models into apps, optimizing performance across various mobile platforms.
- [MNN](mnn.md): Developed by [Alibaba](https://www.alibabagroup.com/), MNN is a highly efficient and lightweight deep learning framework. It supports inference and training of deep learning models and has industry-leading performance for inference and training on-device.
- [Neural Magic](neural-magic.md): Leverage Quantization Aware Training (QAT) and pruning techniques to optimize Ultralytics models for superior performance and leaner size.
- [ONNX](onnx.md): An open-source format created by [Microsoft](https://www.microsoft.com/) for facilitating the transfer of AI models between various frameworks, enhancing the versatility and deployment flexibility of Ultralytics models.
- [OpenVINO](openvino.md): Intel's toolkit for optimizing and deploying [computer vision](https://www.ultralytics.com/glossary/computer-vision-cv) models efficiently across various Intel CPU and GPU platforms.
- [PaddlePaddle](paddlepaddle.md): An open-source deep learning platform by [Baidu](https://www.baidu.com/), PaddlePaddle enables the efficient deployment of AI models and focuses on the scalability of industrial applications.
- [TF GraphDef](tf-graphdef.md): Developed by [Google](https://www.google.com/), GraphDef is TensorFlow's format for representing computation graphs, enabling optimized execution of machine learning models across diverse hardware.
- [TF SavedModel](tf-savedmodel.md): Developed by [Google](https://www.google.com/), TF SavedModel is a universal serialization format for [TensorFlow](https://www.ultralytics.com/glossary/tensorflow) models, enabling easy sharing and deployment across a wide range of platforms, from servers to edge devices.
- [TF.js](tfjs.md): Developed by [Google](https://www.google.com/) to facilitate machine learning in browsers and Node.js, TF.js allows JavaScript-based deployment of ML models.
- [TFLite](tflite.md): Developed by [Google](https://www.google.com/), TFLite is a lightweight framework for deploying machine learning models on mobile and edge devices, ensuring fast, efficient inference with minimal memory footprint.
- [TFLite Edge TPU](edge-tpu.md): Developed by [Google](https://www.google.com/) for optimizing TensorFlow Lite models on Edge TPUs, this model format ensures high-speed, efficient [edge computing](https://www.ultralytics.com/glossary/edge-computing).
- [TensorRT](tensorrt.md): Developed by [NVIDIA](https://www.nvidia.com/), this high-performance [deep learning](https://www.ultralytics.com/glossary/deep-learning-dl) inference framework and model format optimizes AI models for accelerated speed and efficiency on NVIDIA GPUs, ensuring streamlined deployment.
- [TorchScript](torchscript.md): Developed as part of the [PyTorch](https://pytorch.org/) framework, TorchScript enables efficient execution and deployment of machine learning models in various production environments without the need for Python dependencies.
### Export Formats
We also support a variety of model export formats for deployment in different environments. Here are the available formats:
{% include "macros/export-table.md" %}
Explore the links to learn more about each integration and how to get the most out of them with Ultralytics. See full `export` details in the [Export](../modes/export.md) page.
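As a quick illustration, exporting a trained model from the Python API is a one-liner; ONNX is chosen here purely as an example format:

```python
from ultralytics import YOLO

# Load a pretrained model and export it to ONNX format
model = YOLO("yolo11n.pt")
model.export(format="onnx")  # creates yolo11n.onnx alongside the weights
```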
## Contribute to Our Integrations
We're always excited to see how the community integrates Ultralytics YOLO with other technologies, tools, and platforms! If you have successfully integrated YOLO with a new system or have valuable insights to share, consider contributing to our Integrations Docs.
By writing a guide or tutorial, you can help expand our documentation and provide real-world examples that benefit the community. It's an excellent way to contribute to the growing ecosystem around Ultralytics YOLO.
To contribute, please check out our [Contributing Guide](../help/contributing.md) for instructions on how to submit a Pull Request (PR) 🛠️. We eagerly await your contributions!
Let's collaborate to make the Ultralytics YOLO ecosystem more expansive and feature-rich 🙏!
## FAQ
### What is Ultralytics HUB, and how does it streamline the ML workflow?
Ultralytics HUB is a cloud-based platform designed to make machine learning (ML) workflows for Ultralytics models seamless and efficient. By using this tool, you can easily upload datasets, train models, perform real-time tracking, and deploy YOLO11 models without needing extensive coding skills. You can explore the key features on the [Ultralytics HUB](https://hub.ultralytics.com/) page and get started quickly with our [Quickstart](https://docs.ultralytics.com/hub/quickstart/) guide.
### How do I integrate Ultralytics YOLO models with Roboflow for dataset management?
Integrating Ultralytics YOLO models with Roboflow enhances dataset management by providing robust tools for annotation, preprocessing, and augmentation. To get started, follow the steps on the [Roboflow](roboflow.md) integration page. This partnership ensures efficient dataset handling, which is crucial for developing accurate and robust YOLO models.
### Can I track the performance of my Ultralytics models using MLFlow?
Yes, you can. Integrating MLFlow with Ultralytics models allows you to track experiments, improve reproducibility, and streamline the entire ML lifecycle. Detailed instructions for setting up this integration can be found on the [MLFlow](mlflow.md) integration page. This integration is particularly useful for monitoring model metrics and managing the ML workflow efficiently.
### What are the benefits of using Neural Magic for YOLO11 model optimization?
Neural Magic optimizes YOLO11 models by leveraging techniques like Quantization Aware Training (QAT) and pruning, resulting in highly efficient, smaller models that perform better on resource-limited hardware. Check out the [Neural Magic](neural-magic.md) integration page to learn how to implement these optimizations for superior performance and leaner models. This is especially beneficial for deployment on edge devices.
### How do I deploy Ultralytics YOLO models with Gradio for interactive demos?
To deploy Ultralytics YOLO models with Gradio for interactive [object detection](https://www.ultralytics.com/glossary/object-detection) demos, you can follow the steps outlined on the [Gradio](gradio.md) integration page. Gradio allows you to create easy-to-use web interfaces for real-time model inference, making it an excellent tool for showcasing your YOLO model's capabilities in a user-friendly format suitable for both developers and end-users.
---
comments: true
description: Explore our integration guide that explains how you can use JupyterLab to train a YOLO11 model. We'll also cover key features and tips for common issues.
keywords: JupyterLab, What is JupyterLab, How to Use JupyterLab, JupyterLab How to Use, YOLO11, Ultralytics, Model Training, GPU, TPU, cloud computing
---
# A Guide on How to Use JupyterLab to Train Your YOLO11 Models
Building [deep learning](https://www.ultralytics.com/glossary/deep-learning-dl) models can be tough, especially when you don't have the right tools or environment to work with. If you are facing this issue, JupyterLab might be the right solution for you. JupyterLab is a user-friendly, web-based platform that makes coding more flexible and interactive. You can use it to handle big datasets, create complex models, and even collaborate with others, all in one place.
You can use JupyterLab to [work on projects](../guides/steps-of-a-cv-project.md) related to [Ultralytics YOLO11 models](https://github.com/ultralytics/ultralytics). JupyterLab is a great option for efficient model development and experimentation. It makes it easy to start experimenting with and [training YOLO11 models](../modes/train.md) right from your computer. Let's dive deeper into JupyterLab, its key features, and how you can use it to train YOLO11 models.
## What is JupyterLab?
JupyterLab is an open-source web-based platform designed for working with Jupyter notebooks, code, and data. It's an upgrade from the traditional Jupyter Notebook interface that provides a more versatile and powerful user experience.
JupyterLab allows you to work with notebooks, text editors, terminals, and other tools all in one place. Its flexible design lets you organize your workspace to fit your needs and makes it easier to perform tasks like data analysis, visualization, and [machine learning](https://www.ultralytics.com/glossary/machine-learning-ml). JupyterLab also supports real-time collaboration, making it ideal for team projects in research and data science.
## Key Features of JupyterLab
Here are some of the key features that make JupyterLab a great option for model development and experimentation:
- **All-in-One Workspace**: JupyterLab is a one-stop shop for all your data science needs. Unlike the classic Jupyter Notebook, which had separate interfaces for text editing, terminal access, and notebooks, JupyterLab integrates all these features into a single, cohesive environment. You can view and edit various file formats, including JPEG, PDF, and CSV, directly within JupyterLab. An all-in-one workspace lets you access everything you need at your fingertips, streamlining your workflow and saving you time.
- **Flexible Layouts**: One of JupyterLab's standout features is its flexible layout. You can drag, drop, and resize tabs to create a personalized layout that helps you work more efficiently. The collapsible left sidebar keeps essential tabs like the file browser, running kernels, and command palette within easy reach. You can have multiple windows open at once, allowing you to multitask and manage your projects more effectively.
- **Interactive Code Consoles**: Code consoles in JupyterLab provide an interactive space to test out snippets of code or functions. They also serve as a log of computations made within a notebook. Creating a new console for a notebook and viewing all kernel activity is straightforward. This feature is especially useful when you're experimenting with new ideas or troubleshooting issues in your code.
- **Markdown Preview**: Working with Markdown files is more efficient in JupyterLab, thanks to its simultaneous preview feature. As you write or edit your Markdown file, you can see the formatted output in real-time. It makes it easier to double-check that your documentation looks perfect, saving you from having to switch back and forth between editing and preview modes.
- **Run Code from Text Files**: If you're sharing a text file with code, JupyterLab makes it easy to run it directly within the platform. You can highlight the code and press Shift + Enter to execute it. It is great for verifying code snippets quickly and helps guarantee that the code you share is functional and error-free.
## Why Should You Use JupyterLab for Your YOLO11 Projects?
There are multiple platforms for developing and evaluating machine learning models, so what makes JupyterLab stand out? Let's explore some of the unique aspects that JupyterLab offers for your machine-learning projects:
- **Easy Cell Management**: Managing cells in JupyterLab is a breeze. Instead of the cumbersome cut-and-paste method, you can simply drag and drop cells to rearrange them.
- **Cross-Notebook Cell Copying**: JupyterLab makes it simple to copy cells between different notebooks. You can drag and drop cells from one notebook to another.
- **Easy Switch to Classic Notebook View**: For those who miss the classic Jupyter Notebook interface, JupyterLab offers an easy switch back. Simply replace `/lab` in the URL with `/tree` to return to the familiar notebook view.
- **Multiple Views**: JupyterLab supports multiple views of the same notebook, which is particularly useful for long notebooks. You can open different sections side-by-side for comparison or exploration, and any changes made in one view are reflected in the other.
- **Customizable Themes**: JupyterLab includes a built-in Dark theme for the notebook, which is perfect for late-night coding sessions. There are also themes available for the text editor and terminal, allowing you to customize the appearance of your entire workspace.
## Common Issues While Working with JupyterLab
When working with JupyterLab, you might come across some common issues. Here are some tips to help you navigate the platform smoothly:
- **Managing Kernels**: Kernels are crucial because they manage the connection between the code you write in JupyterLab and the environment where it runs. They can also access and share data between notebooks. When you close a Jupyter Notebook, the kernel might still be running because other notebooks could be using it. If you want to completely shut down a kernel, you can select it, right-click, and choose "Shut Down Kernel" from the pop-up menu.
- **Installing Python Packages**: Sometimes, you might need additional Python packages that aren't pre-installed on the server. You can easily install these packages in your home directory or a virtual environment by using the command `python -m pip install package-name`. To see all installed packages, use `python -m pip list`.
- **Deploying Flask/FastAPI API to Posit Connect**: You can deploy your Flask and FastAPI APIs to Posit Connect using the [rsconnect-python](https://docs.posit.co/rsconnect-python/) package from the terminal. Doing so makes it easier to integrate your web applications with JupyterLab and share them with others.
- **Installing JupyterLab Extensions**: JupyterLab supports various extensions to enhance functionality. You can install and customize these extensions to suit your needs. For detailed instructions, refer to the [JupyterLab Extensions Guide](https://jupyterlab.readthedocs.io/en/latest/user/extensions.html).
- **Using Multiple Versions of Python**: If you need to work with different versions of Python, you can use Jupyter kernels configured with different Python versions.
## How to Use JupyterLab to Try Out YOLO11
JupyterLab makes it easy to experiment with YOLO11. To get started, follow these simple steps.
### Step 1: Install JupyterLab
First, you need to install JupyterLab. Open your terminal and run the command:
!!! tip "Installation"
=== "CLI"
```bash
# Install the required package for JupyterLab
pip install jupyterlab
```
### Step 2: Download the YOLO11 Tutorial Notebook
Next, download the [tutorial.ipynb](https://github.com/ultralytics/ultralytics/blob/main/examples/tutorial.ipynb) file from the Ultralytics GitHub repository. Save this file to any directory on your local machine.
### Step 3: Launch JupyterLab
Navigate to the directory where you saved the notebook file using your terminal. Then, run the following command to launch JupyterLab:
!!! example "Usage"
=== "CLI"
```bash
jupyter lab
```
Once you've run this command, JupyterLab will open in your default web browser, as shown below.
![Image Showing How JupyterLab Opens On the Browser](https://github.com/ultralytics/docs/releases/download/0/jupyterlab-browser-launch.avif)
### Step 4: Start Experimenting
In JupyterLab, open the tutorial.ipynb notebook. You can now start running the cells to explore and experiment with YOLO11.
![Image Showing Opened YOLO11 Notebook in JupyterLab](https://github.com/ultralytics/docs/releases/download/0/opened-yolov8-notebook-jupyterlab.avif)
JupyterLab's interactive environment allows you to modify code, visualize outputs, and document your findings all in one place. You can try out different configurations and understand how YOLO11 works.
For a detailed understanding of the model training process and best practices, refer to the [YOLO11 Model Training guide](../modes/train.md). This guide will help you get the most out of your experiments and ensure you're using YOLO11 effectively.
## Keep Learning about JupyterLab
If you're excited to learn more about JupyterLab, here are some great resources to get you started:
- [**JupyterLab Documentation**](https://jupyterlab.readthedocs.io/en/stable/getting_started/starting.html): Dive into the official JupyterLab Documentation to explore its features and capabilities. It's a great way to understand how to use this powerful tool to its fullest potential.
- [**Try It With Binder**](https://mybinder.org/v2/gh/jupyterlab/jupyterlab-demo/HEAD?urlpath=lab/tree/demo): Experiment with JupyterLab without installing anything by using Binder, which lets you launch a live JupyterLab instance directly in your browser. It's a great way to start experimenting immediately.
- [**Installation Guide**](https://jupyterlab.readthedocs.io/en/stable/getting_started/installation.html): For a step-by-step guide on installing JupyterLab on your local machine, check out the installation guide.
## Summary
We've explored how JupyterLab can be a powerful tool for experimenting with Ultralytics YOLO11 models. Using its flexible and interactive environment, you can easily set up JupyterLab on your local machine and start working with YOLO11. JupyterLab makes it simple to [train](../guides/model-training-tips.md) and [evaluate](../guides/model-testing.md) your models, visualize outputs, and [document your findings](../guides/model-monitoring-and-maintenance.md) all in one place.
For more details, visit the [JupyterLab FAQ Page](https://jupyterlab.readthedocs.io/en/stable/getting_started/faq.html).
Interested in more YOLO11 integrations? Check out the [Ultralytics integration guide](./index.md) to explore additional tools and capabilities for your machine learning projects.
## FAQ
### How do I use JupyterLab to train a YOLO11 model?
To train a YOLO11 model using JupyterLab:
1. Install JupyterLab and the Ultralytics package:
```bash
pip install jupyterlab ultralytics
```
2. Launch JupyterLab and open a new notebook.
3. Import the YOLO model and load a pretrained model:
```python
from ultralytics import YOLO
model = YOLO("yolo11n.pt")
```
4. Train the model on your custom dataset:
```python
results = model.train(data="path/to/your/data.yaml", epochs=100, imgsz=640)
```
5. Visualize training results using JupyterLab's built-in plotting capabilities:
```ipython
%matplotlib inline
from ultralytics.utils.plotting import plot_results

# plot_results reads the results.csv saved during training (path may differ per run)
plot_results("runs/detect/train/results.csv")
```
JupyterLab's interactive environment allows you to easily modify parameters, visualize results, and iterate on your model training process.
### What are the key features of JupyterLab that make it suitable for YOLO11 projects?
JupyterLab offers several features that make it ideal for YOLO11 projects:
1. Interactive code execution: Test and debug YOLO11 code snippets in real-time.
2. Integrated file browser: Easily manage datasets, model weights, and configuration files.
3. Flexible layout: Arrange multiple notebooks, terminals, and output windows side-by-side for efficient workflow.
4. Rich output display: Visualize YOLO11 detection results, training curves, and model performance metrics inline.
5. Markdown support: Document your YOLO11 experiments and findings with rich text and images.
6. Extension ecosystem: Enhance functionality with extensions for version control, [remote computing](google-colab.md), and more.
These features allow for a seamless development experience when working with YOLO11 models, from data preparation to [model deployment](https://www.ultralytics.com/glossary/model-deployment).
### How can I optimize YOLO11 model performance using JupyterLab?
To optimize YOLO11 model performance in JupyterLab:
1. Use the autobatch feature to determine the optimal batch size:
```python
from ultralytics.utils.autobatch import autobatch
optimal_batch_size = autobatch(model)
```
2. Implement [hyperparameter tuning](../guides/hyperparameter-tuning.md) using libraries like Ray Tune:
```python
from ultralytics.utils.tuner import run_ray_tune
best_results = run_ray_tune(model, data="path/to/data.yaml")
```
3. Visualize and analyze model metrics using JupyterLab's plotting capabilities:
```python
from ultralytics.utils.plotting import plot_results

# plot_results reads the results.csv saved during training (path may differ per run)
plot_results("runs/detect/train/results.csv")
```
4. Experiment with different model architectures and [export formats](../modes/export.md) to find the best balance of speed and [accuracy](https://www.ultralytics.com/glossary/accuracy) for your specific use case.
JupyterLab's interactive environment allows for quick iterations and real-time feedback, making it easier to optimize your YOLO11 models efficiently.
### How do I handle common issues when working with JupyterLab and YOLO11?
When working with JupyterLab and YOLO11, you might encounter some common issues. Here's how to handle them:
1. GPU memory issues:
- Use `torch.cuda.empty_cache()` to clear GPU memory between runs.
- Adjust [batch size](https://www.ultralytics.com/glossary/batch-size) or image size to fit your GPU memory.
2. Package conflicts:
- Create a separate conda environment for your YOLO11 projects to avoid conflicts.
- Use `!pip install package_name` in a notebook cell to install missing packages.
3. Kernel crashes:
- Restart the kernel and run cells one by one to identify the problematic code.
---
comments: true
description: Dive into our guide on YOLO11's integration with Kaggle. Find out what Kaggle is, its key features, and how to train a YOLO11 model using the integration.
keywords: What is Kaggle, What is Kaggle Used For, YOLO11, Kaggle Machine Learning, Model Training, GPU, TPU, cloud computing
---
# A Guide on Using Kaggle to Train Your YOLO11 Models
If you are learning about AI and working on [small projects](../solutions/index.md), you might not have access to powerful computing resources yet, and high-end hardware can be pretty expensive. Fortunately, Kaggle, a platform owned by Google, offers a great solution. Kaggle provides a free, cloud-based environment where you can access GPU resources, handle large datasets, and collaborate with a diverse community of data scientists and [machine learning](https://www.ultralytics.com/glossary/machine-learning-ml) enthusiasts.
Kaggle is a great choice for [training](../guides/model-training-tips.md) and experimenting with [Ultralytics YOLO11](https://github.com/ultralytics/ultralytics?tab=readme-ov-file) models. Kaggle Notebooks make using popular machine-learning libraries and frameworks in your projects easy. Let's explore Kaggle's main features and learn how you can train YOLO11 models on this platform!
## What is Kaggle?
Kaggle is a platform that brings together data scientists from around the world to collaborate, learn, and compete in solving real-world data science problems. Launched in 2010 by Anthony Goldbloom and Jeremy Howard, and acquired by Google in 2017, Kaggle enables users to connect, discover and share datasets, use GPU-powered notebooks, and participate in data science competitions. The platform is designed to help both seasoned professionals and eager learners achieve their goals by offering robust tools and resources.
With more than [10 million users](https://www.kaggle.com/discussions/general/332147) as of 2022, Kaggle provides a rich environment for developing and experimenting with machine learning models. You don't need to worry about your local machine's specs or setup; you can dive right in with just a Kaggle account and a web browser.
## Training YOLO11 Using Kaggle
Training YOLO11 models on Kaggle is simple and efficient, thanks to the platform's access to powerful GPUs.
To get started, access the [Kaggle YOLO11 Notebook](https://www.kaggle.com/code/glennjocherultralytics/yolo11). Kaggle's environment comes with pre-installed libraries like [TensorFlow](https://www.ultralytics.com/glossary/tensorflow) and [PyTorch](https://www.ultralytics.com/glossary/pytorch), making the setup process hassle-free.
![What is the kaggle integration with respect to YOLO11?](https://github.com/ultralytics/docs/releases/download/0/kaggle-integration-yolov8.avif)
Once you sign in to your Kaggle account, you can click on the option to copy and edit the code, select a GPU under the accelerator settings, and run the notebook's cells to begin training your model. For a detailed understanding of the model training process and best practices, refer to our [YOLO11 Model Training guide](../modes/train.md).
![Using kaggle for machine learning model training with a GPU](https://github.com/ultralytics/docs/releases/download/0/using-kaggle-for-machine-learning-model-training-with-a-gpu.avif)
On the [official YOLO11 Kaggle notebook page](https://www.kaggle.com/code/glennjocherultralytics/yolo11), if you click on the three dots in the upper right-hand corner, you'll notice more options will pop up.
![Overview of Options From the Official YOLO11 Kaggle Notebook Page](https://github.com/ultralytics/docs/releases/download/0/overview-options-yolov8-kaggle-notebook.avif)
These options include:
- **View Versions**: Browse through different versions of the notebook to see changes over time and revert to previous versions if needed.
- **Copy API Command**: Get an API command to programmatically interact with the notebook, which is useful for automation and integration into workflows.
- **Open in Google Notebooks**: Open the notebook in Google's hosted notebook environment.
- **Open in Colab**: Launch the notebook in [Google Colab](./google-colab.md) for further editing and execution.
- **Follow Comments**: Subscribe to the comments section to get updates and engage with the community.
- **Download Code**: Download the entire notebook as a Jupyter (.ipynb) file for offline use or version control in your local environment.
- **Add to Collection**: Save the notebook to a collection within your Kaggle account for easy access and organization.
- **Bookmark**: Bookmark the notebook for quick access in the future.
- **Embed Notebook**: Get an embed link to include the notebook in blogs, websites, or documentation.
### Common Issues While Working with Kaggle
When working with Kaggle, you might come across some common issues. Here are some points to help you navigate the platform smoothly:
- **Access to GPUs**: In your Kaggle notebooks, you can activate a GPU at any time, with usage allowed for up to 30 hours per week. Kaggle provides the NVIDIA Tesla P100 GPU with 16GB of memory and also offers the option of an NVIDIA T4 x2 GPU. Powerful hardware accelerates your machine learning tasks, making model training and inference much faster.
- **Kaggle Kernels**: Kaggle Kernels are free Jupyter notebook servers that can integrate GPUs, allowing you to perform machine learning operations on cloud computers. You don't have to rely on your own computer's CPU, avoiding overload and freeing up your local resources.
- **Kaggle Datasets**: Kaggle datasets are free to download. However, it's important to check the license for each dataset to understand any usage restrictions. Some datasets may have limitations on academic publications or commercial use. You can download datasets directly to your Kaggle notebook or anywhere else via the Kaggle API.
- **Saving and Committing Notebooks**: To save and commit a notebook on Kaggle, click "Save Version." This saves the current state of your notebook. Once the background kernel finishes generating the output files, you can access them from the Output tab on the main notebook page.
- **Collaboration**: Kaggle supports collaboration, but multiple users cannot edit a notebook simultaneously. Collaboration on Kaggle is asynchronous, meaning users can share and work on the same notebook at different times.
- **Reverting to a Previous Version**: If you need to revert to a previous version of your notebook, open the notebook and click on the three vertical dots in the top right corner to select "View Versions." Find the version you want to revert to, click on the "..." menu next to it, and select "Revert to Version." After the notebook reverts, click "Save Version" to commit the changes.
## Key Features of Kaggle
Next, let's understand the features Kaggle offers that make it an excellent platform for data science and machine learning enthusiasts. Here are some of the key highlights:
- **Datasets**: Kaggle hosts a massive collection of datasets on various topics. You can easily search and use these datasets in your projects, which is particularly handy for training and testing your YOLO11 models.
- **Competitions**: Known for its exciting competitions, Kaggle allows data scientists and machine learning enthusiasts to solve real-world problems. Competing helps you improve your skills, learn new techniques, and gain recognition in the community.
- **Free Access to TPUs**: Kaggle provides free access to powerful TPUs, which are essential for training complex machine learning models. This means you can speed up processing and boost the performance of your YOLO11 projects without incurring extra costs.
- **Integration with Github**: Kaggle allows you to easily connect your GitHub repository to upload notebooks and save your work. This integration makes it convenient to manage and access your files.
- **Community and Discussions**: Kaggle boasts a strong community of data scientists and machine learning practitioners. The discussion forums and shared notebooks are fantastic resources for learning and troubleshooting. You can easily find help, share your knowledge, and collaborate with others.
## Why Should You Use Kaggle for Your YOLO11 Projects?
There are multiple platforms for training and evaluating machine learning models, so what makes Kaggle stand out? Let's dive into the benefits of using Kaggle for your machine-learning projects:
- **Public Notebooks**: You can make your Kaggle notebooks public, allowing other users to view, vote, fork, and discuss your work. Kaggle promotes collaboration, feedback, and the sharing of ideas, helping you improve your YOLO11 models.
- **Comprehensive History of Notebook Commits**: Kaggle creates a detailed history of your notebook commits. This allows you to review and track changes over time, making it easier to understand the evolution of your project and revert to previous versions if needed.
- **Console Access**: Kaggle provides a console, giving you more control over your environment. This feature allows you to perform various tasks directly from the command line, enhancing your workflow and productivity.
- **Resource Availability**: Each notebook editing session on Kaggle is provided with significant resources: 12 hours of execution time for CPU and GPU sessions, 9 hours of execution time for TPU sessions, and 20 gigabytes of auto-saved disk space.
- **Notebook Scheduling**: Kaggle allows you to schedule your notebooks to run at specific times. You can automate repetitive tasks without manual intervention, such as training your model at regular intervals.
## Keep Learning about Kaggle
If you want to learn more about Kaggle, here are some helpful resources to guide you:
- [**Kaggle Learn**](https://www.kaggle.com/learn): Discover a variety of free, interactive tutorials on Kaggle Learn. These courses cover essential data science topics and provide hands-on experience to help you master new skills.
- [**Getting Started with Kaggle**](https://www.kaggle.com/code/alexisbcook/getting-started-with-kaggle): This comprehensive guide walks you through the basics of using Kaggle, from joining competitions to creating your first notebook. It's a great starting point for newcomers.
- [**Kaggle Medium Page**](https://medium.com/@kaggleteam): Explore tutorials, updates, and community contributions on Kaggle's Medium page. It's an excellent source for staying up-to-date with the latest trends and gaining deeper insights into data science.
## Summary
We've seen how Kaggle can boost your YOLO11 projects by providing free access to powerful GPUs, making model training and evaluation efficient. Kaggle's platform is user-friendly, with pre-installed libraries for quick setup.
For more details, visit [Kaggle's documentation](https://www.kaggle.com/docs).
Interested in more YOLO11 integrations? Check out the [Ultralytics integration guide](https://docs.ultralytics.com/integrations/) to explore additional tools and capabilities for your machine learning projects.
## FAQ
### How do I train a YOLO11 model on Kaggle?
Training a YOLO11 model on Kaggle is straightforward. First, access the [Kaggle YOLO11 Notebook](https://www.kaggle.com/code/glennjocherultralytics/yolo11). Sign in to your Kaggle account, copy and edit the notebook, and select a GPU under the accelerator settings. Run the notebook cells to start training. For more detailed steps, refer to our [YOLO11 Model Training guide](../modes/train.md).
### What are the benefits of using Kaggle for YOLO11 model training?
Kaggle offers several advantages for training YOLO11 models:
- **Free GPU Access**: Utilize powerful GPUs like NVIDIA Tesla P100 or T4 x2 for up to 30 hours per week.
- **Pre-installed Libraries**: Libraries like TensorFlow and PyTorch are pre-installed, simplifying the setup.
- **Community Collaboration**: Engage with a vast community of data scientists and machine learning enthusiasts.
- **Version Control**: Easily manage different versions of your notebooks and revert to previous versions if needed.
For more details, visit our [Ultralytics integration guide](https://docs.ultralytics.com/integrations/).
### What common issues might I encounter when using Kaggle for YOLO11, and how can I resolve them?
Common issues include:
- **Access to GPUs**: Ensure you activate a GPU in your notebook settings. Kaggle allows up to 30 hours of GPU usage per week.
- **Dataset Licenses**: Check the license of each dataset to understand usage restrictions.
- **Saving and Committing Notebooks**: Click "Save Version" to save your notebook's state and access output files from the Output tab.
- **Collaboration**: Kaggle supports asynchronous collaboration; multiple users cannot edit a notebook simultaneously.
For more troubleshooting tips, see our [Common Issues guide](../guides/yolo-common-issues.md).
### Why should I choose Kaggle over other platforms like Google Colab for training YOLO11 models?
Kaggle offers unique features that make it an excellent choice:
- **Public Notebooks**: Share your work with the community for feedback and collaboration.
- **Free Access to TPUs**: Speed up training with powerful TPUs without extra costs.
- **Comprehensive History**: Track changes over time with a detailed history of notebook commits.
- **Resource Availability**: Significant resources are provided for each notebook session, including 12 hours of execution time for CPU and GPU sessions.
For a comparison with Google Colab, refer to our [Google Colab guide](./google-colab.md).
### How can I revert to a previous version of my Kaggle notebook?
To revert to a previous version:
1. Open the notebook and click on the three vertical dots in the top right corner.
2. Select "View Versions."
3. Find the version you want to revert to, click on the "..." menu next to it, and select "Revert to Version."
4. Click "Save Version" to commit the changes.
---
comments: true
description: Learn how to set up and use MLflow logging with Ultralytics YOLO for enhanced experiment tracking, model reproducibility, and performance improvements.
keywords: MLflow, Ultralytics YOLO, machine learning, experiment tracking, metrics logging, parameter logging, artifact logging
---
# MLflow Integration for Ultralytics YOLO
<img width="1024" src="https://github.com/ultralytics/docs/releases/download/0/mlflow-integration-ultralytics-yolo.avif" alt="MLflow ecosystem">
## Introduction
Experiment logging is a crucial aspect of [machine learning](https://www.ultralytics.com/glossary/machine-learning-ml) workflows that enables tracking of various metrics, parameters, and artifacts. It helps to enhance model reproducibility, debug issues, and improve model performance. [Ultralytics](https://www.ultralytics.com/) YOLO, known for its real-time [object detection](https://www.ultralytics.com/glossary/object-detection) capabilities, now offers integration with [MLflow](https://mlflow.org/), an open-source platform for complete machine learning lifecycle management.
This documentation page is a comprehensive guide to setting up and utilizing the MLflow logging capabilities for your Ultralytics YOLO project.
## What is MLflow?
[MLflow](https://mlflow.org/) is an open-source platform developed by [Databricks](https://www.databricks.com/) for managing the end-to-end machine learning lifecycle. It includes tools for tracking experiments, packaging code into reproducible runs, and sharing and deploying models. MLflow is designed to work with any machine learning library and programming language.
## Features
- **Metrics Logging**: Logs metrics at the end of each epoch and at the end of the training.
- **Parameter Logging**: Logs all the parameters used in the training.
- **Artifacts Logging**: Logs model artifacts, including weights and configuration files, at the end of the training.
## Setup and Prerequisites
Ensure MLflow is installed. If not, install it using pip:
```bash
pip install mlflow
```
Make sure that MLflow logging is enabled in Ultralytics settings. Usually, this is controlled by the `mlflow` settings key. See the [settings](../quickstart.md#ultralytics-settings) page for more info.
!!! example "Update Ultralytics MLflow Settings"
=== "Python"
Within the Python environment, call the `update` method on the `settings` object to change your settings:
```python
from ultralytics import settings
# Update a setting
settings.update({"mlflow": True})
# Reset settings to default values
settings.reset()
```
=== "CLI"
If you prefer using the command-line interface, the following commands will allow you to modify your settings:
```bash
# Enable MLflow logging
yolo settings mlflow=True
# Reset settings to default values
yolo settings reset
```
## How to Use
### Commands
1. **Set a Project Name**: You can set the project name via an environment variable:
```bash
export MLFLOW_EXPERIMENT_NAME=<your_experiment_name>
```
Or use the `project=<project>` argument when training a YOLO model, i.e. `yolo train project=my_project`.
2. **Set a Run Name**: Similar to setting a project name, you can set the run name via an environment variable:
```bash
export MLFLOW_RUN=<your_run_name>
```
Or use the `name=<name>` argument when training a YOLO model, i.e. `yolo train project=my_project name=my_name`.
3. **Start Local MLflow Server**: To start tracking, use:
```bash
mlflow server --backend-store-uri runs/mlflow
```
This will start a local server at http://127.0.0.1:5000 by default and save all mlflow logs to the 'runs/mlflow' directory. To specify a different URI, set the `MLFLOW_TRACKING_URI` environment variable.
4. **Kill MLflow Server Instances**: To stop all running MLflow instances, run:
```bash
ps aux | grep 'mlflow' | grep -v 'grep' | awk '{print $2}' | xargs kill -9
```
### Logging
The logging is taken care of by the `on_pretrain_routine_end`, `on_fit_epoch_end`, and `on_train_end` callback functions. These functions are automatically called during the respective stages of the training process, and they handle the logging of parameters, metrics, and artifacts.
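Putting these pieces together, a minimal Python session might look like the following sketch. The experiment name, run name, and the small `coco8.yaml` dataset are illustrative choices, not requirements:

```python
import os

from ultralytics import YOLO, settings

# Name the MLflow experiment and run before training starts (example values)
os.environ["MLFLOW_EXPERIMENT_NAME"] = "my_experiment"
os.environ["MLFLOW_RUN"] = "my_run"

# Ensure MLflow logging is enabled in Ultralytics settings
settings.update({"mlflow": True})

# Train; parameters, metrics, and artifacts are logged by the MLflow callbacks
model = YOLO("yolo11n.pt")
model.train(data="coco8.yaml", epochs=3, imgsz=640)
```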
## Examples
1. **Logging Custom Metrics**: You can add custom metrics to be logged by modifying the `trainer.metrics` dictionary before `on_fit_epoch_end` is called; a sketch follows this list.
2. **View Experiment**: To view your logs, navigate to your MLflow server (usually http://127.0.0.1:5000) and select your experiment and run. <img width="1024" src="https://github.com/ultralytics/docs/releases/download/0/yolo-mlflow-experiment.avif" alt="YOLO MLflow Experiment">
3. **View Run**: Runs are individual models inside an experiment. Click on a Run and see the Run details, including uploaded artifacts and model weights. <img width="1024" src="https://github.com/ultralytics/docs/releases/download/0/yolo-mlflow-run.avif" alt="YOLO MLflow Run">
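As an illustration of the first point, a custom metric can be injected with a user callback. This is a minimal sketch under stated assumptions: the helper `add_custom_metric` and the key `custom/value` are hypothetical, and it assumes callbacks registered via `add_callback` fire before the MLflow logger reads `trainer.metrics`:

```python
from ultralytics import YOLO


def add_custom_metric(trainer):
    """Inject a hypothetical custom metric into the dict logged by the MLflow callback."""
    trainer.metrics["custom/value"] = 42.0  # placeholder value


model = YOLO("yolo11n.pt")
model.add_callback("on_fit_epoch_end", add_custom_metric)
model.train(data="coco8.yaml", epochs=3)
```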
## Disabling MLflow
To turn off MLflow logging:
```bash
yolo settings mlflow=False
```
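You can also disable it from Python using the `settings` object:

```python
from ultralytics import settings

# Disable MLflow logging
settings.update({"mlflow": False})
```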
## Conclusion
MLflow logging integration with Ultralytics YOLO offers a streamlined way to keep track of your machine learning experiments. It empowers you to monitor performance metrics and manage artifacts effectively, aiding robust model development and deployment. For further details, please visit the MLflow [official documentation](https://mlflow.org/docs/latest/index.html).
## FAQ
### How do I set up MLflow logging with Ultralytics YOLO?
To set up MLflow logging with Ultralytics YOLO, you first need to ensure MLflow is installed. You can install it using pip:
```bash
pip install mlflow
```
Next, enable MLflow logging in Ultralytics settings. This can be controlled using the `mlflow` key. For more information, see the [settings guide](../quickstart.md#ultralytics-settings).
!!! example "Update Ultralytics MLflow Settings"
=== "Python"
```python
from ultralytics import settings
# Update a setting
settings.update({"mlflow": True})
# Reset settings to default values
settings.reset()
```
=== "CLI"
```bash
# Update a setting
yolo settings mlflow=True
# Reset settings to default values
yolo settings reset
```
Finally, start a local MLflow server for tracking:
```bash
mlflow server --backend-store-uri runs/mlflow
```
### What metrics and parameters can I log using MLflow with Ultralytics YOLO?
Ultralytics YOLO with MLflow supports logging various metrics, parameters, and artifacts throughout the training process:
- **Metrics Logging**: Tracks metrics at the end of each [epoch](https://www.ultralytics.com/glossary/epoch) and upon training completion.
- **Parameter Logging**: Logs all parameters used in the training process.
- **Artifacts Logging**: Saves model artifacts like weights and configuration files after training.
For more detailed information, visit the [Ultralytics YOLO tracking documentation](#features).
### Can I disable MLflow logging once it is enabled?
Yes, you can disable MLflow logging for Ultralytics YOLO by updating the settings. Here's how you can do it using the CLI:
```bash
yolo settings mlflow=False
```
For further customization and resetting settings, refer to the [settings guide](../quickstart.md#ultralytics-settings).
### How can I start and stop an MLflow server for Ultralytics YOLO tracking?
To start an MLflow server for tracking your experiments in Ultralytics YOLO, use the following command:
```bash
mlflow server --backend-store-uri runs/mlflow
```
This command starts a local server at http://127.0.0.1:5000 by default. If you need to stop running MLflow server instances, use the following bash command:
```bash
ps aux | grep 'mlflow' | grep -v 'grep' | awk '{print $2}' | xargs kill -9
```
Refer to the [commands section](#commands) for more command options.
### What are the benefits of integrating MLflow with Ultralytics YOLO for experiment tracking?
Integrating MLflow with Ultralytics YOLO offers several benefits for managing your machine learning experiments:
- **Enhanced Experiment Tracking**: Easily track and compare different runs and their outcomes.
- **Improved Model Reproducibility**: Ensure that your experiments are reproducible by logging all parameters and artifacts.
- **Performance Monitoring**: Visualize performance metrics over time to make data-driven decisions for model improvements.
For an in-depth look at setting up and leveraging MLflow with Ultralytics YOLO, explore the [MLflow Integration for Ultralytics YOLO](#introduction) documentation.
---
comments: true
description: Optimize YOLO11 models for mobile and embedded devices by exporting to MNN format.
keywords: Ultralytics, YOLO11, MNN, model export, machine learning, deployment, mobile, embedded systems, deep learning, AI models
---
# MNN Export for YOLO11 Models and Deploy
## MNN
<p align="center">
<img width="100%" src="https://mnn-docs.readthedocs.io/en/latest/_images/architecture.png" alt="MNN architecture">
</p>
[MNN](https://github.com/alibaba/MNN) is a highly efficient and lightweight deep learning framework. It supports inference and training of deep learning models and has industry-leading performance for on-device inference and training. At present, MNN has been integrated into more than 30 apps at Alibaba Inc., such as Taobao, Tmall, Youku, DingTalk, and Xianyu, covering more than 70 usage scenarios such as live broadcast, short-video capture, search recommendation, product search by image, interactive marketing, equity distribution, and security risk control. MNN is also used on embedded and IoT devices.
## Export to MNN: Converting Your YOLO11 Model
You can expand model compatibility and deployment flexibility by converting YOLO11 models to MNN format.
### Installation
To install the required packages, run:
!!! tip "Installation"
=== "CLI"
```bash
# Install the required package for YOLO11 and MNN
pip install ultralytics
pip install MNN
```
### Usage
Before diving into the usage instructions, note that while all [Ultralytics YOLO11 models](../models/index.md) are available for exporting, you can check whether the model you select supports export functionality [here](../modes/export.md).
!!! example "Usage"
=== "Python"
```python
from ultralytics import YOLO
# Load the YOLO11 model
model = YOLO("yolo11n.pt")
# Export the model to MNN format
model.export(format="mnn") # creates 'yolo11n.mnn'
# Load the exported MNN model
mnn_model = YOLO("yolo11n.mnn")
# Run inference
results = mnn_model("https://ultralytics.com/images/bus.jpg")
```
=== "CLI"
```bash
# Export a YOLO11n PyTorch model to MNN format
yolo export model=yolo11n.pt format=mnn # creates 'yolo11n.mnn'
# Run inference with the exported model
yolo predict model='yolo11n.mnn' source='https://ultralytics.com/images/bus.jpg'
```
For more details about supported export options, visit the [Ultralytics documentation page on deployment options](../guides/model-deployment-options.md).
### MNN-Only Inference
The examples below implement YOLO11 preprocessing and inference using only MNN, in both Python and C++ versions, for easy deployment in any scenario.
!!! example "MNN"
=== "Python"
```python
import argparse

import MNN
import MNN.cv as cv2
import MNN.numpy as np


def inference(model, img, precision, backend, thread):
    config = {}
    config["precision"] = precision
    config["backend"] = backend
    config["numThread"] = thread
    rt = MNN.nn.create_runtime_manager((config,))
    # net = MNN.nn.load_module_from_file(model, ['images'], ['output0'], runtime_manager=rt)
    net = MNN.nn.load_module_from_file(model, [], [], runtime_manager=rt)
    original_image = cv2.imread(img)
    ih, iw, _ = original_image.shape
    length = max((ih, iw))
    scale = length / 640
    # pad to a square, then resize to 640x640 and normalize to [0, 1]
    image = np.pad(original_image, [[0, length - ih], [0, length - iw], [0, 0]], "constant")
    image = cv2.resize(
        image, (640, 640), 0.0, 0.0, cv2.INTER_LINEAR, -1, [0.0, 0.0, 0.0], [1.0 / 255.0, 1.0 / 255.0, 1.0 / 255.0]
    )
    input_var = np.expand_dims(image, 0)
    input_var = MNN.expr.convert(input_var, MNN.expr.NC4HW4)
    output_var = net.forward(input_var)
    output_var = MNN.expr.convert(output_var, MNN.expr.NCHW)
    output_var = output_var.squeeze()
    # output_var shape: [84, 8400]; 84 means: [cx, cy, w, h, prob * 80]
    cx = output_var[0]
    cy = output_var[1]
    w = output_var[2]
    h = output_var[3]
    probs = output_var[4:]
    # [cx, cy, w, h] -> [x0, y0, x1, y1]
    x0 = cx - w * 0.5
    y0 = cy - h * 0.5
    x1 = cx + w * 0.5
    y1 = cy + h * 0.5
    boxes = np.stack([x0, y0, x1, y1], axis=1)
    # get max prob and its class index
    scores = np.max(probs, 0)
    class_ids = np.argmax(probs, 0)
    result_ids = MNN.expr.nms(boxes, scores, 100, 0.45, 0.25)
    print(result_ids.shape)
    # gather the NMS-selected boxes, scores, and class ids
    result_boxes = boxes[result_ids]
    result_scores = scores[result_ids]
    result_class_ids = class_ids[result_ids]
    for i in range(len(result_boxes)):
        x0, y0, x1, y1 = result_boxes[i].read_as_tuple()
        y0 = int(y0 * scale)
        y1 = int(y1 * scale)
        x0 = int(x0 * scale)
        x1 = int(x1 * scale)
        print(result_class_ids[i])
        cv2.rectangle(original_image, (x0, y0), (x1, y1), (0, 0, 255), 2)
    cv2.imwrite("res.jpg", original_image)


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--model", type=str, required=True, help="the yolo11 model path")
    parser.add_argument("--img", type=str, required=True, help="the input image path")
    parser.add_argument("--precision", type=str, default="normal", help="inference precision: normal, low, high, lowBF")
    parser.add_argument(
        "--backend",
        type=str,
        default="CPU",
        help="inference backend: CPU, OPENCL, OPENGL, NN, VULKAN, METAL, TRT, CUDA, HIAI",
    )
    parser.add_argument("--thread", type=int, default=4, help="inference using thread: int")
    args = parser.parse_args()
    inference(args.model, args.img, args.precision, args.backend, args.thread)
```
=== "CPP"
```cpp
#include <stdio.h>
#include <MNN/ImageProcess.hpp>
#include <MNN/expr/Module.hpp>
#include <MNN/expr/Executor.hpp>
#include <MNN/expr/ExprCreator.hpp>
#include <cv/cv.hpp>

using namespace MNN;
using namespace MNN::Express;
using namespace MNN::CV;

int main(int argc, const char* argv[]) {
    if (argc < 3) {
        MNN_PRINT("Usage: ./yolo11_demo.out model.mnn input.jpg [forwardType] [precision] [thread]\n");
        return 0;
    }
    int thread = 4;
    int precision = 0;
    int forwardType = MNN_FORWARD_CPU;
    if (argc >= 4) {
        forwardType = atoi(argv[3]);
    }
    if (argc >= 5) {
        precision = atoi(argv[4]);
    }
    if (argc >= 6) {
        thread = atoi(argv[5]);
    }
    MNN::ScheduleConfig sConfig;
    sConfig.type = static_cast<MNNForwardType>(forwardType);
    sConfig.numThread = thread;
    BackendConfig bConfig;
    bConfig.precision = static_cast<BackendConfig::PrecisionMode>(precision);
    sConfig.backendConfig = &bConfig;
    std::shared_ptr<Executor::RuntimeManager> rtmgr(Executor::RuntimeManager::createRuntimeManager(sConfig));
    if (rtmgr == nullptr) {
        MNN_ERROR("Empty RuntimeManager\n");
        return 0;
    }
    rtmgr->setCache(".cachefile");
    std::shared_ptr<Module> net(Module::load(std::vector<std::string>{}, std::vector<std::string>{}, argv[1], rtmgr));
    auto original_image = imread(argv[2]);
    auto dims = original_image->getInfo()->dim;
    int ih = dims[0];
    int iw = dims[1];
    int len = ih > iw ? ih : iw;
    float scale = len / 640.0;
    // pad to a square, then resize to 640x640 and normalize to [0, 1]
    std::vector<int> padvals{0, len - ih, 0, len - iw, 0, 0};
    auto pads = _Const(static_cast<void*>(padvals.data()), {3, 2}, NCHW, halide_type_of<int>());
    auto image = _Pad(original_image, pads, CONSTANT);
    image = resize(image, Size(640, 640), 0, 0, INTER_LINEAR, -1, {0., 0., 0.}, {1. / 255., 1. / 255., 1. / 255.});
    auto input = _Unsqueeze(image, {0});
    input = _Convert(input, NC4HW4);
    auto outputs = net->onForward({input});
    auto output = _Convert(outputs[0], NCHW);
    output = _Squeeze(output);
    // output shape: [84, 8400]; 84 means: [cx, cy, w, h, prob * 80]
    auto cx = _Gather(output, _Scalar<int>(0));
    auto cy = _Gather(output, _Scalar<int>(1));
    auto w = _Gather(output, _Scalar<int>(2));
    auto h = _Gather(output, _Scalar<int>(3));
    std::vector<int> startvals{4, 0};
    auto start = _Const(static_cast<void*>(startvals.data()), {2}, NCHW, halide_type_of<int>());
    std::vector<int> sizevals{-1, -1};
    auto size = _Const(static_cast<void*>(sizevals.data()), {2}, NCHW, halide_type_of<int>());
    auto probs = _Slice(output, start, size);
    // [cx, cy, w, h] -> [x0, y0, x1, y1]
    auto x0 = cx - w * _Const(0.5);
    auto y0 = cy - h * _Const(0.5);
    auto x1 = cx + w * _Const(0.5);
    auto y1 = cy + h * _Const(0.5);
    auto boxes = _Stack({x0, y0, x1, y1}, 1);
    auto scores = _ReduceMax(probs, {0});
    auto ids = _ArgMax(probs, 0);
    auto result_ids = _Nms(boxes, scores, 100, 0.45, 0.25);
    auto result_ptr = result_ids->readMap<int>();
    auto box_ptr = boxes->readMap<float>();
    auto ids_ptr = ids->readMap<int>();
    auto score_ptr = scores->readMap<float>();
    for (int i = 0; i < 100; i++) {
        auto idx = result_ptr[i];
        if (idx < 0) break;
        // rescale the selected box back to the original image size
        auto x0 = box_ptr[idx * 4 + 0] * scale;
        auto y0 = box_ptr[idx * 4 + 1] * scale;
        auto x1 = box_ptr[idx * 4 + 2] * scale;
        auto y1 = box_ptr[idx * 4 + 3] * scale;
        auto class_idx = ids_ptr[idx];
        auto score = score_ptr[idx];
        rectangle(original_image, {x0, y0}, {x1, y1}, {0, 0, 255}, 2);
    }
    if (imwrite("res.jpg", original_image)) {
        MNN_PRINT("result image write to `res.jpg`.\n");
    }
    rtmgr->updateCache();
    return 0;
}
```
## Summary
In this guide, we introduced how to export Ultralytics YOLO11 models to MNN format and run inference with MNN.
For more usage, please refer to the [MNN documentation](https://mnn-docs.readthedocs.io/en/latest).
## FAQ
### How do I export Ultralytics YOLO11 models to MNN format?
To export your Ultralytics YOLO11 model to MNN format, follow these steps:
!!! example "Export"
=== "Python"
```python
from ultralytics import YOLO
# Load the YOLO11 model
model = YOLO("yolo11n.pt")
# Export to MNN format
model.export(format="mnn")  # creates 'yolo11n.mnn' with FP32 weights
model.export(format="mnn", half=True)  # creates 'yolo11n.mnn' with FP16 weights
model.export(format="mnn", int8=True)  # creates 'yolo11n.mnn' with INT8 weights
```
=== "CLI"
```bash
yolo export model=yolo11n.pt format=mnn  # creates 'yolo11n.mnn' with FP32 weights
yolo export model=yolo11n.pt format=mnn half=True  # creates 'yolo11n.mnn' with FP16 weights
yolo export model=yolo11n.pt format=mnn int8=True  # creates 'yolo11n.mnn' with INT8 weights
```
For detailed export options, check the [Export](../modes/export.md) page in the documentation.
### How do I predict with an exported YOLO11 MNN model?
To predict with an exported YOLO11 MNN model, load it with the `YOLO` class and call it like any other model.
!!! example "Predict"
=== "Python"
```python
from ultralytics import YOLO
# Load the exported YOLO11 MNN model
mnn_model = YOLO("yolo11n.mnn")

# Run inference
results = mnn_model("https://ultralytics.com/images/bus.jpg")  # predict with `fp32`
results = mnn_model("https://ultralytics.com/images/bus.jpg", half=True)  # predict with `fp16` if the device supports it
for result in results:
result.show() # display to screen
result.save(filename="result.jpg") # save to disk
```
=== "CLI"
```bash
yolo predict model='yolo11n.mnn' source='https://ultralytics.com/images/bus.jpg'  # predict with `fp32`
yolo predict model='yolo11n.mnn' source='https://ultralytics.com/images/bus.jpg' half=True  # predict with `fp16` if the device supports it
```
### What platforms are supported for MNN?
MNN is versatile and supports various platforms:
- **Mobile**: Android, iOS, Harmony.
- **Embedded Systems and IoT Devices**: Devices like Raspberry Pi and NVIDIA Jetson.
- **Desktop and Servers**: Linux, Windows, and macOS.
### How can I deploy Ultralytics YOLO11 MNN models on Mobile Devices?
To deploy your YOLO11 models on Mobile devices:
1. **Build for Android**: Follow the [MNN Android guide](https://github.com/alibaba/MNN/tree/master/project/android).
2. **Build for iOS**: Follow the [MNN iOS guide](https://github.com/alibaba/MNN/tree/master/project/ios).
3. **Build for Harmony**: Follow the [MNN Harmony guide](https://github.com/alibaba/MNN/tree/master/project/harmony).