Datasets:
CHANNEL_NAME
stringclasses 1
value | URL
stringclasses 2
values | TITLE
stringclasses 2
values | DESCRIPTION
stringclasses 2
values | TRANSCRIPTION
stringclasses 2
values | SEGMENTS
stringclasses 2
values |
---|---|---|---|---|---|
Azure | https://www.youtube.com/watch?v=MlWyhdrgBwg | Introducing Hugging Face Endpoints on Azure | Transformers like BERT changed machine learning starting with natural language processing, but these large and complex models are challenging for any machine learning engineer to deploy into production.
With Hugging Face Endpoints on Azure, it鈥檚 easy for developers to deploy any Hugging Face model into a dedicated endpoint with secure, enterprise-grade infrastructure.
The new service supports powerful yet simple auto-scaling, secure connections to VNET via Azure PrivateLink. Just pick the region, instance type and select your Hugging Face model to start serving predictions at scale.
Blog post: https://huggingface.co/blog/hugging-face-endpoints-on-azure
Go to your Azure portal, search for Hugging Face, and get started. 馃聽
We can also discuss your questions in the forum https://discuss.huggingface.co/c/endpoints-azure/61 | Hello, my name is Philip Schmidt and I am technical lead at HygneFace. I'm excited to be able to show you and demo you the new Escher HygneFace endpoints. Okay, let's get started. In the demo, we are going to create a new Escher HygneFace endpoint for text classification. HygneFace endpoints are managed Escher application powered by Escher machine learning. We are currently in the UI where we can create our endpoint. First, we need to select our subscription, then our resource group. Next we are going to select our region, in this case we are going to use East US and define an endpoint name. Let's go with distl, distl, bird. As next we are going to select our model from the HygneFace hub for this we can go to filter, text classification and then select one of the available models. We are going to use the distl, bird model which also fits our endpoint name. If you demodel ID and paste it into our model ID input field, then we are going to select our text classification to cast. Okay, as next we are going to define an application name. Let's go with text distl and then we need to provide a managed resource group name. You can go with HygneFace distl. Our application managed resource group holds all resources that are required to create our HygneFace endpoint. After we have done that, we can select our compute. In this case we are going to use a CPU instance but you can also go with a GPU instance. The nice part about HygneFace endpoints is that they are going to deliver auto scaling out of the box. Meaning we can select our min instance card and our max instance card. We are going to do this for max with two. So we are going to be able to scale from one to two instances and then we need to provide our threshold. We are going to use requests per minute as our threshold target and we are going to select 1000. This means that our endpoint will receive requests and if we have an average over 1000 requests over a period for five minutes, the endpoint will automatically scale out. And if the load decreases, the endpoint will also automatically scale in again. Okay, now let's review and create. All our validation has passed. We can scroll down and then see again our provided will use we have our model ID with distal bird, our task, text classification and the resource name we have provided. We can also see our instance type and the auto scaling configuration. Okay, let's create. Our endpoint will now be created. This will take a few minutes. I will be back when the endpoint all the deployment is complete. Our deployment is complete and we can test our endpoint. Therefore we go to the resource to outputs where we can see our inference UI, our API of authentication token and the playground URL we can use to test our endpoint. Here we have a widget where we can immediately use to run inference and code snippet. Let's test it. The new hugging face and points are easy to use. And we can see our model predicted the correct sentiment for it. We can also copy the Python snippet and execute it for example in a Trubiton notebook. Here you can also see that our model has predicted our label correctly. I am Philip and this was a quick look at the new Azure hugging face and points. Thank you. Thank you. | [{"start": 0.0, "end": 10.68, "text": " Hello, my name is Philip Schmidt and I am technical lead at HygneFace."}, {"start": 10.68, "end": 16.8, "text": " I'm excited to be able to show you and demo you the new Escher HygneFace endpoints."}, {"start": 16.8, "end": 19.56, "text": " Okay, let's get started."}, {"start": 19.56, "end": 31.28, "text": " In the demo, we are going to create a new Escher HygneFace endpoint for text classification."}, {"start": 31.28, "end": 36.12, "text": " HygneFace endpoints are managed Escher application powered by Escher machine learning."}, {"start": 36.12, "end": 39.72, "text": " We are currently in the UI where we can create our endpoint."}, {"start": 39.72, "end": 45.519999999999996, "text": " First, we need to select our subscription, then our resource group."}, {"start": 45.52, "end": 50.400000000000006, "text": " Next we are going to select our region, in this case we are going to use East US and"}, {"start": 50.400000000000006, "end": 52.24, "text": " define an endpoint name."}, {"start": 52.24, "end": 57.120000000000005, "text": " Let's go with distl, distl, bird."}, {"start": 57.120000000000005, "end": 63.400000000000006, "text": " As next we are going to select our model from the HygneFace hub for this we can go to filter,"}, {"start": 63.400000000000006, "end": 67.48, "text": " text classification and then select one of the available models."}, {"start": 67.48, "end": 72.48, "text": " We are going to use the distl, bird model which also fits our endpoint name."}, {"start": 72.48, "end": 77.92, "text": " If you demodel ID and paste it into our model ID input field, then we are going to select"}, {"start": 77.92, "end": 81.4, "text": " our text classification to cast."}, {"start": 81.4, "end": 85.68, "text": " Okay, as next we are going to define an application name."}, {"start": 85.68, "end": 91.24000000000001, "text": " Let's go with text distl and then we need to provide a managed resource group name."}, {"start": 91.24000000000001, "end": 95.56, "text": " You can go with HygneFace distl."}, {"start": 95.56, "end": 100.4, "text": " Our application managed resource group holds all resources that are required to create"}, {"start": 100.4, "end": 102.84, "text": " our HygneFace endpoint."}, {"start": 102.84, "end": 106.12, "text": " After we have done that, we can select our compute."}, {"start": 106.12, "end": 111.88000000000001, "text": " In this case we are going to use a CPU instance but you can also go with a GPU instance."}, {"start": 111.88000000000001, "end": 116.24000000000001, "text": " The nice part about HygneFace endpoints is that they are going to deliver auto scaling"}, {"start": 116.24000000000001, "end": 117.32000000000001, "text": " out of the box."}, {"start": 117.32000000000001, "end": 121.80000000000001, "text": " Meaning we can select our min instance card and our max instance card."}, {"start": 121.80000000000001, "end": 124.60000000000001, "text": " We are going to do this for max with two."}, {"start": 124.60000000000001, "end": 129.20000000000002, "text": " So we are going to be able to scale from one to two instances and then we need to provide"}, {"start": 129.2, "end": 130.64, "text": " our threshold."}, {"start": 130.64, "end": 135.11999999999998, "text": " We are going to use requests per minute as our threshold target and we are going to"}, {"start": 135.11999999999998, "end": 136.76, "text": " select 1000."}, {"start": 136.76, "end": 143.11999999999998, "text": " This means that our endpoint will receive requests and if we have an average over 1000 requests"}, {"start": 143.11999999999998, "end": 147.72, "text": " over a period for five minutes, the endpoint will automatically scale out."}, {"start": 147.72, "end": 152.2, "text": " And if the load decreases, the endpoint will also automatically scale in again."}, {"start": 152.2, "end": 157.32, "text": " Okay, now let's review and create."}, {"start": 157.32, "end": 168.28, "text": " All our validation has passed."}, {"start": 168.28, "end": 173.07999999999998, "text": " We can scroll down and then see again our provided will use we have our model ID with"}, {"start": 173.07999999999998, "end": 178.2, "text": " distal bird, our task, text classification and the resource name we have provided."}, {"start": 178.2, "end": 182.56, "text": " We can also see our instance type and the auto scaling configuration."}, {"start": 182.56, "end": 185.12, "text": " Okay, let's create."}, {"start": 185.12, "end": 194.12, "text": " Our endpoint will now be created."}, {"start": 194.12, "end": 195.6, "text": " This will take a few minutes."}, {"start": 195.6, "end": 202.08, "text": " I will be back when the endpoint all the deployment is complete."}, {"start": 202.08, "end": 205.24, "text": " Our deployment is complete and we can test our endpoint."}, {"start": 205.24, "end": 213.04000000000002, "text": " Therefore we go to the resource to outputs where we can see our inference UI, our API of"}, {"start": 213.04, "end": 219.07999999999998, "text": " authentication token and the playground URL we can use to test our endpoint."}, {"start": 219.07999999999998, "end": 225.28, "text": " Here we have a widget where we can immediately use to run inference and code snippet."}, {"start": 225.28, "end": 227.23999999999998, "text": " Let's test it."}, {"start": 227.23999999999998, "end": 235.76, "text": " The new hugging face and points are easy to use."}, {"start": 235.76, "end": 239.79999999999998, "text": " And we can see our model predicted the correct sentiment for it."}, {"start": 239.8, "end": 245.96, "text": " We can also copy the Python snippet and execute it for example in a Trubiton notebook."}, {"start": 245.96, "end": 258.6, "text": " Here you can also see that our model has predicted our label correctly."}, {"start": 258.6, "end": 263.76, "text": " I am Philip and this was a quick look at the new Azure hugging face and points."}, {"start": 263.76, "end": 264.76, "text": " Thank you."}, {"start": 264.76, "end": 271.76, "text": " Thank you."}] |
Azure | https://www.youtube.com/watch?v=zwC3FsVj3jw | Introducing Hugging Face AzureML Endpoints | Transformers like BERT changed machine learning starting with natural language processing, but these large and complex models are challenging for any machine learning engineer to deploy into production.
With Hugging Face Endpoints on Azure, it鈥檚 easy for developers to deploy any Hugging Face model into a dedicated endpoint with secure, enterprise-grade infrastructure.
The new service supports powerful yet simple auto-scaling, secure connections to VNET via Azure PrivateLink. Just pick the region, instance type and select your Hugging Face model to start serving predictions at scale.
Blog post: https://huggingface.co/blog/hugging-face-endpoints-on-azure
Go to your Azure portal, search for Hugging Face, and get started. 馃
We can also discuss your questions in the forum https://discuss.huggingface.co/c/endpoints-azure/61 | Hi everyone, my name is Philip and I'm a technically a Kying phase and today I'm going to show you how you can deploy Hiking phase transformers to Azure using the Hiking phase Azure MLN points. We start at the Azure partner and the first thing we need to do is to search for our Hiking phase Azure MLN points. You can do this by searching for Hiking and then at the Marketplace section we should see the Hiking phase Azure MLN points. This will bring us to the Marketplace offering we can click on Create which will then allow us to create our Hiking phase managed application on Azure. As a resource group I'm going to select the test resource group here you can select the resource group where you want your managed app to be deployed region we are going to go with the East US region the name will be Hiking phase video and the application it's also going to be Hiking phase and the managed resource group is not important it's basically where later all of our Azure resources are stored you can change it if you want but you can also leave it as it is. The next step is text we don't need any text right now and then we can go to Review and Create which will validate our inputs and then we have to access the conditions this basically means that you as a user allowing us Hiking phase to later deploy or create resources for you and now we can click Create. What's now going to happen is that our managed application with some base Azure resources are going to be created this takes around one minute after the deployment is done I'll be back. The deployment of our managed application is not complete and we can now start creating our Azure MLN points therefore we go to the resource which will bring us to our Hiking phase managed application and then on the left side we can find under Resources Preview Hiking phase endpoints which is our central place for all our Hiking phase endpoints later meaning we will have an overview of all our created endpoints we can add more we can delete them or edit them. Obviously since we created our managed application we don't have any endpoints available yet therefore we will add one and to create a new Hiking phase endpoint we only have to provide a Hiking phase model ID and need to select an instance time. For our model we go to the Hiking phase hub select a model in this case we will use the Distrabert based on Cased fine tune on SSD2 which is a Distrabert model fine tune for text classification we go back to our Azure resource paste our Hiking phase model ID and then we can select our compute instance time here you need to make sure that you have available Azure welcomed you for your instance so the drop down is showing all available Azure ML instances but it's not validating in the UI if you have capacity or not so in our case if we would like to deploy a T4 you have to make sure that you have available capacity if not the creation will fail but for our case we don't need a GPU we will go with the F2 series which is the CPU optimized instance for our model then we can hit review and submit make sure that we have the correct model ID yes in our instance time and then we can click submit and now our resources created on the top right side we can see is creating resources meaning that our backend is now validating all of the information we provided meaning that it checks if the model ID we provided is a valid model on the Hiking phase hub it also makes sure that we have enough capacity available if for example we wouldn't have enough capacity for our F2 instance series we would see a failure here and then we can go to the details and check what is the arrow and then we should see out of capacity a few more seconds and then we should see our green checkbox and also should see our first endpoint with endpoint creation state also something pretty cool about the Hiking phase as you know LN points is that we are automatically deriving the task for the model so you have seen that we only provided the model ID and didn't provide which task we want to use the task is automatically derived from the model information so we didn't need to specify that our model is a text classification model this was done automatically and while I was talking our validation or custom resource was successfully created and the dashboard is currently loading so yes perfect here we see our new endpoint with our status which is currently in creating and within 5 to 10 minutes our endpoint should be created and our deployment was also created once this is done I'll come back the deployment of our endpoint succeeded and we can now see that the managed application derived the task we can also see our endpoint URI which we could directly copy and paste into our application we can see our instance time as well as the as actual resource link which we are going to use to test our endpoint we can copy the actual resource link paste into our process which will now like jump into Azure Machine Learning Studio to our endpoint which has been created in here we can go to test and can test our endpoint the endpoint is using the same API schema as the high-emphase inference API meaning that we need to provide a JSON with the inputs key and then a sentence this model runs on Azure I am happy and now I can hit test and we'll get back our positive from our model also if you want to integrate it into your application and as you know provides code snippets for Python C sharp and R which you can directly copy and paste into your application as well as you can regenerate your keys in case of any security guidelines okay and then we can go back to our overview of our managed application and could either create more endpoints into our high-emphase application or we can again delete our endpoint yeah if you have any questions about high-end phase Azure ML endpoints feel free to reach out to us via email and when you deploy your managed application there should be your support email you can contact or we also have a dedicated section in our high-end phase forum since the purpose of this video was to demo you how you can create it we also make sure that we delete our endpoint to save resources which will now deleting our custom resource should reload in a second and then we'll clean up all of the infrastructure behind the scenes thank you and bye | [{"start": 0.0, "end": 8.96, "text": " Hi everyone, my name is Philip and I'm a technically a Kying phase and today I'm going to show"}, {"start": 8.96, "end": 16.64, "text": " you how you can deploy Hiking phase transformers to Azure using the Hiking phase Azure MLN points."}, {"start": 16.64, "end": 22.6, "text": " We start at the Azure partner and the first thing we need to do is to search for our Hiking"}, {"start": 22.6, "end": 24.64, "text": " phase Azure MLN points."}, {"start": 24.64, "end": 29.68, "text": " You can do this by searching for Hiking and then at the Marketplace section we should"}, {"start": 29.68, "end": 34.92, "text": " see the Hiking phase Azure MLN points."}, {"start": 34.92, "end": 41.32, "text": " This will bring us to the Marketplace offering we can click on Create which will then allow"}, {"start": 41.32, "end": 47.480000000000004, "text": " us to create our Hiking phase managed application on Azure."}, {"start": 47.480000000000004, "end": 52.120000000000005, "text": " As a resource group I'm going to select the test resource group here you can select the"}, {"start": 52.12, "end": 56.8, "text": " resource group where you want your managed app to be deployed region we are going to go"}, {"start": 56.8, "end": 64.84, "text": " with the East US region the name will be Hiking phase video and the application it's also"}, {"start": 64.84, "end": 70.08, "text": " going to be Hiking phase and the managed resource group is not important it's basically"}, {"start": 70.08, "end": 75.75999999999999, "text": " where later all of our Azure resources are stored you can change it if you want but you"}, {"start": 75.75999999999999, "end": 78.2, "text": " can also leave it as it is."}, {"start": 78.2, "end": 83.84, "text": " The next step is text we don't need any text right now and then we can go to Review and"}, {"start": 83.84, "end": 93.76, "text": " Create which will validate our inputs and then we have to access the conditions this basically"}, {"start": 93.76, "end": 102.48, "text": " means that you as a user allowing us Hiking phase to later deploy or create resources for"}, {"start": 102.48, "end": 106.08, "text": " you and now we can click Create."}, {"start": 106.08, "end": 112.72, "text": " What's now going to happen is that our managed application with some base Azure resources"}, {"start": 112.72, "end": 119.36, "text": " are going to be created this takes around one minute after the deployment is done I'll"}, {"start": 119.36, "end": 123.24, "text": " be back."}, {"start": 123.24, "end": 128.16, "text": " The deployment of our managed application is not complete and we can now start creating"}, {"start": 128.16, "end": 133.68, "text": " our Azure MLN points therefore we go to the resource which will bring us to our Hiking"}, {"start": 133.68, "end": 139.08, "text": " phase managed application and then on the left side we can find under Resources Preview"}, {"start": 139.08, "end": 145.36, "text": " Hiking phase endpoints which is our central place for all our Hiking phase endpoints later"}, {"start": 145.36, "end": 150.44, "text": " meaning we will have an overview of all our created endpoints we can add more we can delete"}, {"start": 150.44, "end": 152.76000000000002, "text": " them or edit them."}, {"start": 152.76000000000002, "end": 157.88, "text": " Obviously since we created our managed application we don't have any endpoints available yet"}, {"start": 157.88, "end": 162.60000000000002, "text": " therefore we will add one and to create a new Hiking phase endpoint we only have to"}, {"start": 162.6, "end": 167.12, "text": " provide a Hiking phase model ID and need to select an instance time."}, {"start": 167.12, "end": 172.51999999999998, "text": " For our model we go to the Hiking phase hub select a model in this case we will use the"}, {"start": 172.51999999999998, "end": 178.79999999999998, "text": " Distrabert based on Cased fine tune on SSD2 which is a Distrabert model fine tune for text"}, {"start": 178.79999999999998, "end": 185.28, "text": " classification we go back to our Azure resource paste our Hiking phase model ID and then we"}, {"start": 185.28, "end": 190.04, "text": " can select our compute instance time here you need to make sure that you have available"}, {"start": 190.04, "end": 197.48, "text": " Azure welcomed you for your instance so the drop down is showing all available Azure"}, {"start": 197.48, "end": 203.51999999999998, "text": " ML instances but it's not validating in the UI if you have capacity or not so in our"}, {"start": 203.51999999999998, "end": 208.44, "text": " case if we would like to deploy a T4 you have to make sure that you have available capacity"}, {"start": 208.44, "end": 215.32, "text": " if not the creation will fail but for our case we don't need a GPU we will go with the"}, {"start": 215.32, "end": 224.79999999999998, "text": " F2 series which is the CPU optimized instance for our model then we can hit review and submit"}, {"start": 224.79999999999998, "end": 230.35999999999999, "text": " make sure that we have the correct model ID yes in our instance time and then we can"}, {"start": 230.35999999999999, "end": 236.35999999999999, "text": " click submit and now our resources created on the top right side we can see is creating"}, {"start": 236.35999999999999, "end": 244.07999999999998, "text": " resources meaning that our backend is now validating all of the information we provided meaning"}, {"start": 244.08, "end": 249.52, "text": " that it checks if the model ID we provided is a valid model on the Hiking phase hub it"}, {"start": 249.52, "end": 254.56, "text": " also makes sure that we have enough capacity available if for example we wouldn't have"}, {"start": 254.56, "end": 260.84000000000003, "text": " enough capacity for our F2 instance series we would see a failure here and then we can"}, {"start": 260.84000000000003, "end": 267.64, "text": " go to the details and check what is the arrow and then we should see out of capacity a few"}, {"start": 267.64, "end": 273.36, "text": " more seconds and then we should see our green checkbox and also should see our first"}, {"start": 273.36, "end": 283.52000000000004, "text": " endpoint with endpoint creation state also something pretty cool about the Hiking phase"}, {"start": 283.52000000000004, "end": 290.84000000000003, "text": " as you know LN points is that we are automatically deriving the task for the model so you have"}, {"start": 290.84000000000003, "end": 295.6, "text": " seen that we only provided the model ID and didn't provide which task we want to use"}, {"start": 295.6, "end": 301.40000000000003, "text": " the task is automatically derived from the model information so we didn't need to specify"}, {"start": 301.4, "end": 306.56, "text": " that our model is a text classification model this was done automatically and while"}, {"start": 306.56, "end": 314.2, "text": " I was talking our validation or custom resource was successfully created and the dashboard"}, {"start": 314.2, "end": 319.67999999999995, "text": " is currently loading so yes perfect here we see our new endpoint with our status which"}, {"start": 319.67999999999995, "end": 328.03999999999996, "text": " is currently in creating and within 5 to 10 minutes our endpoint should be created and"}, {"start": 328.04, "end": 337.12, "text": " our deployment was also created once this is done I'll come back the deployment of our"}, {"start": 337.12, "end": 345.68, "text": " endpoint succeeded and we can now see that the managed application derived the task we"}, {"start": 345.68, "end": 351.84000000000003, "text": " can also see our endpoint URI which we could directly copy and paste into our application"}, {"start": 351.84, "end": 357.79999999999995, "text": " we can see our instance time as well as the as actual resource link which we are going to"}, {"start": 357.79999999999995, "end": 365.71999999999997, "text": " use to test our endpoint we can copy the actual resource link paste into our process which"}, {"start": 365.71999999999997, "end": 373.47999999999996, "text": " will now like jump into Azure Machine Learning Studio to our endpoint which has been created"}, {"start": 373.48, "end": 382.36, "text": " in here we can go to test and can test our endpoint the endpoint is using the same API schema"}, {"start": 382.36, "end": 389.16, "text": " as the high-emphase inference API meaning that we need to provide a JSON with the inputs key"}, {"start": 389.16, "end": 400.92, "text": " and then a sentence this model runs on Azure I am happy and now I can hit test and we'll"}, {"start": 400.92, "end": 408.16, "text": " get back our positive from our model also if you want to integrate it into your application"}, {"start": 408.16, "end": 415.56, "text": " and as you know provides code snippets for Python C sharp and R which you can directly copy"}, {"start": 415.56, "end": 425.16, "text": " and paste into your application as well as you can regenerate your keys in case of any security"}, {"start": 425.16, "end": 442.8, "text": " guidelines okay and then we can go back to our overview of our managed application and could"}, {"start": 442.8, "end": 450.36, "text": " either create more endpoints into our high-emphase application or we can again delete our endpoint"}, {"start": 450.36, "end": 459.28000000000003, "text": " yeah if you have any questions about high-end phase Azure ML endpoints feel free to reach out to"}, {"start": 459.28000000000003, "end": 464.12, "text": " us via email and when you deploy your managed application there should be your support email"}, {"start": 464.12, "end": 470.6, "text": " you can contact or we also have a dedicated section in our high-end phase forum since the"}, {"start": 470.6, "end": 475.6, "text": " purpose of this video was to demo you how you can create it we also make sure that we delete"}, {"start": 475.6, "end": 482.32000000000005, "text": " our endpoint to save resources which will now deleting our custom resource should reload"}, {"start": 482.32, "end": 508.56, "text": " in a second and then we'll clean up all of the infrastructure behind the scenes thank you and bye"}] |
- Downloads last month
- 43