Intel® OpenVINO™ Toolkit and AWS* Greengrass

https://software.intel.com/en-us/articles/get-started-with-the-openvino-toolkit-and-aws-greengrass

Hardware-Accelerated Function-as-a-Service Using AWS Greengrass

Hardware-accelerated Function-as-a-Service (FaaS) enables cloud developers to deploy inference functionality on Intel® IoT edge devices with accelerators such as Intel® Processor Graphics, Intel® FPGA, and the Intel® Movidius™ Neural Compute Stick. These functions provide a great developer experience and seamless migration of visual analytics from cloud to edge in a secure manner using a containerized environment. Hardware-accelerated FaaS provides best-in-class performance by accessing optimized deep learning libraries on Intel IoT edge devices with accelerators.

This section describes the implementation of FaaS inference samples (based on Python* 2.7) using Amazon Web Services (AWS) Greengrass* and AWS Lambda* software. AWS Lambda functions (Lambdas) can be created, modified, or updated in the cloud and deployed from cloud to edge using AWS Greengrass. This document covers:

  • Description of the sample
  • Supported platforms
  • Pre-requisites for Intel® IoT edge devices
  • Configuration of an AWS Greengrass group
  • Creation and packaging of Lambda functions
  • Deployment of Lambdas
  • Various options to consume the inference output

Description

Sample File: greengrass_object_detection_sample_ssd.py
This AWS Greengrass sample detects objects in a video stream and classifies them using single-shot multi-box detection (SSD) networks such as SSD SqueezeNet, SSD MobileNet, and SSD300. This sample publishes detection outputs such as class label, class confidence, and bounding box coordinates on AWS IoT* Cloud every second.
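The publishing pattern the sample follows can be sketched as below. This is a minimal illustration rather than the full sample: the topic openvino/ssd and the payload fields mirror the outputs described above, and the detection values are placeholders.

    # Minimal sketch of how a Greengrass Lambda can publish detection results
    # to AWS IoT Cloud. The payload fields (label, confidence, bounding box)
    # mirror the outputs described above; the values passed in are placeholders.
    import json
    import greengrasssdk

    client = greengrasssdk.client('iot-data')

    def publish_detection(label, confidence, xmin, ymin, xmax, ymax):
        message = {
            'class_label': label,
            'class_confidence': confidence,
            'bounding_box': {'xmin': xmin, 'ymin': ymin, 'xmax': xmax, 'ymax': ymax},
        }
        # openvino/ssd is the topic used by greengrass_object_detection_sample_ssd.py
        client.publish(topic='openvino/ssd', payload=json.dumps(message))

    def function_handler(event, context):
        # Long-lived Greengrass Lambdas typically do their work outside the handler.
        return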

Supported Platforms

Pre-requisites

  • Download and install the Intel® Distribution of OpenVINO™ toolkit. The toolkit installs in /opt/intel/computer_vision_sdk/ by default. If you installed the toolkit in a different location, use that path for <INSTALL_DIR>.
  • Install both Python* 2.7 and Python* 3.0.

Note: Python* 2.7 with opencv-python, numpy, and boto3 is required for use with AWS Greengrass. Use sudo pip2 install so that the packages land in locations accessible by AWS Greengrass. Python* 3.0+ is required for use with the Intel® Distribution of OpenVINO™ toolkit model optimizer.

  • Create an AWS account.
  • To run the samples, the Intel® Distribution of OpenVINO™ toolkit provides pre-compiled libcpu_extension libraries in the directory:
     /opt/intel/computer_vision_sdk/deployment_tools/inference_engine/lib/Ubuntu_16.04/intel64/

      - libcpu_extension_sse4.so – for use with Intel Atom® processors
      - libcpu_extension_avx2.so – for use with Intel® Core™ and Intel® Xeon® processors

  • To run the samples on other devices, rebuild the libraries for a specific target to see a performance gain. For build instructions, refer to the Inference Engine Developer Guide.

Pre-Configuration

  1. Download the Intel® edge-optimized models available on GitHub. Any custom pre-trained classification or SSD models may be used.

     git clone https://github.com/intel/Edge-optimized-models.git
     cd Edge-optimized-models/

  2. For this demo, use the SqueezeNet 5-Class detection model. Copy the .caffemodel and .prototxt files into the Intel® Distribution of OpenVINO™ toolkit model optimizer directory:

     cd SqueezeNet\ 5-Class\ detection/
     sudo cp SqueezeNetSSD-5Class.* <INSTALL_DIR>/deployment_tools/model_optimizer/

  3. Switch to the model optimizer directory and run these commands to optimize the model (Python* version 3.5 is used in this demo):

     Note: For CPU, models must use data type FP32 for best performance. For GPU and FPGA, models must use data type FP16 for best performance. For more information on how to use the Model Optimizer, follow the instructions at Intel® Distribution of OpenVINO™ toolkit Model Optimizer.

     cd <INSTALL_DIR>/deployment_tools/model_optimizer/
     sudo python3 mo.py --input_model SqueezeNetSSD-5Class.caffemodel --input_proto SqueezeNetSSD-5Class.prototxt --data_type FP16

     This command creates the IR (Intermediate Representation) files with .xml and .bin file extensions. If it fails, install the prerequisites for the model optimizer and run it again:

     cd install_prerequisites/
     ./install_prerequisites.sh

  4. Create a new folder in your home directory to store the IR files, which will be accessed by Greengrass later. Copy the .xml and .bin files to this directory:

     mkdir ~/greengrass-input-files
     cp SqueezeNetSSD-5Class.xml SqueezeNetSSD-5Class.bin ~/greengrass-input-files
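Before wiring the model into Greengrass, the generated IR can be sanity-checked locally with the Inference Engine Python API. The sketch below assumes the legacy openvino.inference_engine interface (IENetwork/IEPlugin) that shipped with the 2018-era toolkit releases used in this demo; names differ in later releases, and the file paths follow the folder created above.

    # Optional sanity check: load the IR produced by the Model Optimizer.
    # Assumes the legacy 2018-era Inference Engine Python API (IENetwork/IEPlugin);
    # some releases expose IENetwork.from_ir(...) instead of the constructor.
    import os
    from openvino.inference_engine import IENetwork, IEPlugin

    model_xml = os.path.expanduser('~/greengrass-input-files/SqueezeNetSSD-5Class.xml')
    model_bin = os.path.expanduser('~/greengrass-input-files/SqueezeNetSSD-5Class.bin')

    net = IENetwork(model=model_xml, weights=model_bin)
    plugin = IEPlugin(device='CPU')  # or 'GPU' / 'HETERO:FPGA,CPU', matching PARAM_DEVICE later
    # Some SSD layers need the CPU extension library listed in the prerequisites;
    # pick the library that matches your processor.
    plugin.add_cpu_extension('/opt/intel/computer_vision_sdk/deployment_tools/'
                             'inference_engine/lib/Ubuntu_16.04/intel64/libcpu_extension_sse4.so')
    exec_net = plugin.load(network=net)
    print('Network loaded. Inputs:', list(net.inputs))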

Add Input Video to the Folder

Add an input video to the ~/greengrass-input-files folder. Upload a custom video or select one of the sample videos. This demo uses the SqueezeNet 5-Class model, which detects the Bicycle, Bus, Car, Motorbike, and Person classes. Make sure to choose an appropriate video so that the model can make valid inferences.

Configure an AWS Greengrass group

For each Intel® edge platform, create a new AWS Greengrass group and install the AWS Greengrass Core software to establish the connection between cloud and edge. Follow the instructions in the AWS Greengrass Developer Guide: to create an AWS Greengrass group, see Configure AWS Greengrass on AWS IoT; to install and configure the AWS Greengrass core on the edge platform, see the core setup instructions for your device.

After configuring Greengrass on the edge device, set group and user permissions to start the daemon. Run the following commands:

sudo adduser --system ggc_user
sudo addgroup --system ggc_group

Start the daemon by typing the following:

cd /greengrass/ggc/core/
sudo ./greengrassd start

Package Lambda Functions

This section describes how to:

  • Make a project directory
  • Download AWS Greengrass SDKs
  • Package Lambda files.
  1. Create a project folder to store files for the Lambda function:

    mkdir ~/greengrass_project
  2. To download the AWS Greengrass Core SDK for Python* 2.7, follow the AWS Greengrass Developer Guide, Create and Package a Lambda Function, steps 1-3.
  3. Next, extract the contents of the tar package:

     sudo tar -xvf <Download_Location>/greengrass-core-python-sdk-1.2.0.tar.gz
     cd aws_greengrass_core_sdk/examples/HelloWorld
     sudo unzip greengrassHelloWorld.zip
     cd greengrassHelloWorld

    This step creates the directory greengrasssdk. The SDK is needed to deploy a Lambda on your edge device.

  4. Copy the greengrasssdk directory into the project folder:

     cp -r greengrasssdk/ ~/greengrass_project
  5. Use the greengrass_object_detection_sample_ssd.py sample from the Intel® Distribution of OpenVINO™ toolkit. Copy this file into the project folder:

     cp <INSTALL_DIR>/deployment_tools/inference_engine/samples/python_samples/greengrass_samples/greengrass_object_detection_sample_ssd.py ~/greengrass_project
  6. Finally, copy the greengrass_common and greengrass_ipc_python_sdk directories into the project folder:
     cd /greengrass/ggc/packages/1.6.0/runtime/python2.7/
     cp -r greengrass_common/ ~/greengrass_project
     cp -r greengrass_ipc_python_sdk/ ~/greengrass_project
  7. Change directory (cd) to ~/greengrass_project to see the following contents:

    • greengrass_common
    • greengrass_ipc_python_sdk
    • greengrasssdk
    • greengrass_object_detection_sample_ssd.py
  8. Zip these files for upload to AWS Lambda:

     zip -r greengrass_sample_python_lambda.zip greengrass_common greengrass_ipc_python_sdk greengrasssdk greengrass_object_detection_sample_ssd.py

Create Lambda Functions with AWS CLI

This demo creates the Lambda function using the AWS CLI. The CLI makes it easy to update an alias that points to the Lambda code, which is useful for users who make frequent changes to the code.

  1. To set up the AWS CLI, follow the AWS Greengrass Developer Guide, Set Up the AWS Command Line Interface, steps 1-3.
  2. Once the AWS CLI is configured properly, create the Lambda function:

     aws lambda create-function \
         --region region \
         --function-name greengrass_object_detection \
         --zip-file fileb://~/greengrass_project/greengrass_sample_python_lambda.zip \
         --role role-arn \
         --handler greengrass_object_detection_sample_ssd.function_handler \
         --runtime python2.7 \
         --profile default

     Note: For this demo, --region is set to us-east-1 and --role is set to the ARN of the IAM role to apply. You may have to create an IAM role for Lambda first. Make sure --handler is in the format <mainfile_name>.function_handler and that the region is the same as your Greengrass group's region.

  3. Publish the first version:

     aws lambda publish-version \
         --region region \
         --function-name greengrass_object_detection \
         --profile default

  4. Create an alias for this Lambda:

     aws lambda create-alias \
         --region region \
         --function-name greengrass_object_detection \
         --description "Alias for Greengrass" \
         --function-version 1 \
         --name GG_Alias \
         --profile default

     If you experience issues creating the Lambda function, see the AWS Greengrass Developer Guide, Tutorial: Using AWS Lambda Aliases.

  5. Log in to AWS and navigate to the Lambda Console. Find greengrass_object_detection under Functions and click the link to see the contents of and information about the Lambda.
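If you prefer to script these calls from Python instead of the shell, the same operations are available through boto3, which is already installed as a prerequisite. The sketch below is only an illustrative alternative to the CLI commands above; the region and the IAM role ARN are placeholders you must substitute with your own values.

    # Optional boto3 alternative to the CLI steps above (boto3 is a listed prerequisite).
    # The region and the IAM role ARN are placeholders; substitute your own values.
    import os
    import boto3

    client = boto3.client('lambda', region_name='us-east-1')

    zip_path = os.path.expanduser('~/greengrass_project/greengrass_sample_python_lambda.zip')
    with open(zip_path, 'rb') as f:
        zip_bytes = f.read()

    client.create_function(
        FunctionName='greengrass_object_detection',
        Runtime='python2.7',
        Role='arn:aws:iam::<account-id>:role/<lambda-role>',  # placeholder IAM role ARN
        Handler='greengrass_object_detection_sample_ssd.function_handler',
        Code={'ZipFile': zip_bytes},
    )
    version = client.publish_version(FunctionName='greengrass_object_detection')
    client.create_alias(
        FunctionName='greengrass_object_detection',
        Name='GG_Alias',
        FunctionVersion=version['Version'],
        Description='Alias for Greengrass',
    )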

Deployment of Lambdas

Configure the Lambda function

After creating the AWS Greengrass group and the Lambda function, configure the Lambda function for AWS Greengrass. Follow the instructions in the AWS Greengrass Developer Guide, Configure the Lambda Function for AWS Greengrass, Steps 1-8.

Use the name of the Lambda and Alias in the instructions you followed previously. Additionally, in step 8, change the memory limit to 2048MB to accommodate large input video streams.

Add the environment variables in Table 1 as key-value pairs when editing the Lambda configuration and click on Update. See Table 2 for key-value pairs used in the demo.

Table 1. Environment Variables: Key-value Pairs


LD_LIBRARY_PATH
    <INSTALL_DIR>/opencv/share/OpenCV/3rdparty/lib:
    <INSTALL_DIR>/opencv/lib:/opt/intel/opencl:
    <INSTALL_DIR>/deployment_tools/inference_engine/external/cldnn/lib:
    <INSTALL_DIR>/deployment_tools/inference_engine/external/mkltiny_lnx/lib:
    <INSTALL_DIR>/deployment_tools/inference_engine/lib/ubuntu_16.04/intel64:
    <INSTALL_DIR>/deployment_tools/model_optimizer/model_optimizer_caffe/bin:
    <INSTALL_DIR>/openvx/lib

PYTHONPATH
    <INSTALL_DIR>/deployment_tools/inference_engine/python_api/Ubuntu_1604/python2

PARAM_MODEL_XML
    <MODEL_DIR>/<IR.xml>, where <MODEL_DIR> is user specified and contains IR.xml, the Intermediate Representation file from the Intel® Model Optimizer

PARAM_INPUT_SOURCE
    <DATA_DIR>/input.mp4, to be specified by the user. Holds both input and output data.

PARAM_DEVICE
    For CPU, specify `CPU`. For GPU, specify `GPU`. For FPGA, specify `HETERO:FPGA,CPU`.

PARAM_CPU_EXTENSION_PATH
    <INSTALL_DIR>/deployment_tools/inference_engine/lib/Ubuntu_16.04/intel64/<CPU_EXTENSION_LIB>, where CPU_EXTENSION_LIB is libcpu_extension_sse4.so for Intel Atom® processors and libcpu_extension_avx2.so for Intel® Core™ and Intel® Xeon® processors.

PARAM_OUTPUT_DIRECTORY
    <DATA_DIR>, to be specified by the user. Holds both input and output data.

PARAM_NUM_TOP_RESULTS
    User specified for the classification sample (e.g., 1 for top-1 result, 5 for top-5 results)

Note: Table 1 lists the general paths for environment variables accessed during Greengrass deployment. Environment variable paths depend on the version of Intel® Distribution of OpenVINO™ toolkit installed. When running an Intel® Distribution of OpenVINO™ toolkit application without AWS Greengrass, the <INSTALL_DIR>/bin/setupvars.sh script is sourced first. With Greengrass deployment, however, the environment variables are sourced through the Lambda configuration.
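Inside the Lambda, these key-value pairs surface as ordinary environment variables. A minimal sketch of how sample-style code reads them follows; PARAM_INPUT_SOURCE is read this way in greengrass_object_detection_sample_ssd.py, and the remaining names follow Table 1.

    # Sketch of reading the Lambda configuration inside the function.
    # PARAM_INPUT_SOURCE is read this way in greengrass_object_detection_sample_ssd.py;
    # the remaining keys follow Table 1.
    import os

    PARAM_MODEL_XML = os.environ.get("PARAM_MODEL_XML")
    PARAM_INPUT_SOURCE = os.environ.get("PARAM_INPUT_SOURCE")
    PARAM_DEVICE = os.environ.get("PARAM_DEVICE")
    PARAM_CPU_EXTENSION_PATH = os.environ.get("PARAM_CPU_EXTENSION_PATH")
    PARAM_OUTPUT_DIRECTORY = os.environ.get("PARAM_OUTPUT_DIRECTORY")
    PARAM_NUM_TOP_RESULTS = int(os.environ.get("PARAM_NUM_TOP_RESULTS", "1"))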

This demo uses Intel® Distribution of OpenVINO™ toolkit R3 on the Up Squared* platform. Table 2 lists the environment variables for the Lambda configuration.

Table 2. Environment Variables: Key-value Pairs for Demo


LD_LIBRARY_PATH
    /opt/intel/computer_vision_sdk_2018.3.343/opencv/share/OpenCV/3rdparty/lib:/opt/intel/computer_vision_sdk_2018.3.343/opencv/lib:/opt/intel/opencl:/opt/intel/computer_vision_sdk_2018.3.343/deployment_tools/inference_engine/external/cldnn/lib:/opt/intel/computer_vision_sdk_2018.3.343/deployment_tools/inference_engine/external/gna/lib:/opt/intel/computer_vision_sdk_2018.3.343/deployment_tools/inference_engine/external/mkltiny_lnx/lib:/opt/intel/computer_vision_sdk_2018.3.343/deployment_tools/inference_engine/lib/ubuntu_16.04/intel64:/opt/intel/computer_vision_sdk_2018.3.343/deployment_tools/model_optimizer/model_optimizer_caffe/bin:/opt/intel/computer_vision_sdk_2018.3.343/openvx/lib:

PYTHONPATH
    /opt/intel/computer_vision_sdk_2018.3.343/python/python2.7:/opt/intel/computer_vision_sdk_2018.3.343/python/python2.7/ubuntu16:/opt/intel/computer_vision_sdk_2018.3.343/deployment_tools/model_optimizer:

PARAM_MODEL_XML
    /home/upsquared/greengrass-input-files/SqueezeNetSSD-5Class.xml

PARAM_INPUT_SOURCE
    /home/upsquared/greengrass-input-files/sample-videos/inputvideo.mp4

PARAM_DEVICE
    GPU

PARAM_CPU_EXTENSION_PATH
    /opt/intel/computer_vision_sdk_2018.2.319/deployment_tools/inference_engine/lib/ubuntu_16.04/intel64/libcpu_extension_sse4.so

PARAM_OUTPUT_DIRECTORY
    /home/upsquared/greengrass-output

PARAM_NUM_TOP_RESULTS
    3

Table 3 lists the LD_LIBRARY_PATH and additional environment variables for the Intel® Arria® 10 GX FPGA Development Kit.

Table 3. Environment Variables: Additional Key-value Pairs for the Intel® Arria® 10 GX FPGA Development Kit

LD_LIBRARY_PATH
    /opt/altera/aocl-pro-rte/aclrte-linux64/board/a10_ref/linux64/lib:
    /opt/altera/aocl-pro-rte/aclrte-linux64/host/linux64/lib:
    <INSTALL_DIR>/opencv/share/OpenCV/3rdparty/lib:
    <INSTALL_DIR>/opencv/lib:/opt/intel/opencl:
    <INSTALL_DIR>/deployment_tools/inference_engine/external/cldnn/lib:
    <INSTALL_DIR>/deployment_tools/inference_engine/external/mkltiny_lnx/lib:
    <INSTALL_DIR>/deployment_tools/inference_engine/lib/ubuntu_16.04/intel64:
    <INSTALL_DIR>/deployment_tools/model_optimizer/model_optimizer_caffe/bin:
    <INSTALL_DIR>/openvx/lib

DLA_AOCX
    <INSTALL_DIR>/a10_devkit_bitstreams/0-8-1_a10dk_fp16_8x48_arch06.aocx

CL_CONTEXT_COMPILER_MODE_INTELFPGA
    3

Figure 1. Environment Variable Example

Add Subscription

To subscribe to or publish messages from an AWS Greengrass Lambda function, follow the AWS Greengrass Developer Guide, Configure the Lambda Function for AWS Greengrass, steps 10-14.

The topic entered in the Optional topic filter field should match the topic used inside the Lambda function. For example, openvino/ssd is the topic used in greengrass_object_detection_sample_ssd.py.

Configure Local Resources

To grant Greengrass access to the hardware resources and the environment variable paths, follow the local resource access instructions in the AWS Greengrass Developer Guide and add the resources listed in Table 4.

Table 4. Resource Access

Name Resource Type Local Path Access
InputDir Volume /home/<username>/greengrass-input-files Read-Only
Webcam Device /dev/video0 Read-Only
OutputDir Volume /home/<username>/greengrass-output Read and Write
OpenVINOPath Volume <INSTALL_DIR> (OpenVINO install location) Read-Only

Note: If you are using a webcam rather than a pre-recorded video, modify the code in greengrass_object_detection_sample_ssd.py.
Change the PARAM_INPUT_SOURCE line:
From: PARAM_INPUT_SOURCE = os.environ.get("PARAM_INPUT_SOURCE")
To:   PARAM_INPUT_SOURCE = 0

The value 0 corresponds to the suffix of the video device node in /dev (that is, /dev/video0).
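A brief sketch of how that change plays out with OpenCV (the sample relies on opencv-python, and cv2.VideoCapture accepts either a file path or a device index):

    # Sketch: cv2.VideoCapture accepts either a video file path or a device index.
    # Setting PARAM_INPUT_SOURCE = 0 selects /dev/video0 (the Webcam resource above).
    import cv2

    PARAM_INPUT_SOURCE = 0  # or the PARAM_INPUT_SOURCE environment variable for a video file

    cap = cv2.VideoCapture(PARAM_INPUT_SOURCE)
    if not cap.isOpened():
        raise RuntimeError("Could not open input source: %s" % PARAM_INPUT_SOURCE)
    ret, frame = cap.read()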

Table 5. Resource Access for GPU

Name Resource Type Local Path Access
GPU Device /dev/dri/renderD128 Read and Write

Table 6. Resource Access for FPGA

Name Resource Type Local Path Access
FPGA Device /dev/acla10_ref0 Read and Write
FPGA_DIR1 Volume /opt/Intel/OpenCL/Boards Read and Write
FPGA_DIR2 Volume /etc/OpenCL/vendors Read and Write

Figure 2. Resource Access Example

Add Role

Lastly, add a role to the Greengrass group.

1. Go to the Greengrass Console > Groups. Select your group name.
2. Choose Settings, and then choose Add Role in the Group Role section.

Note: You may have to create a Greengrass IAM role prior to following the Add Role instructions. Adding a role is required to upload images to S3 and access other AWS resources.

Deploy

To deploy the Lambda function to AWS Greengrass core device, select Deployments on group page and follow the instructions in Deploy Cloud Configurations to AWS Greengrass Core Device.

Upon first deployment, an error may occur.

Figure 3. First Deployment Error

  1. To fix the error, give ggc_user:ggc_group permission to access the Intel® Distribution of OpenVINO™ toolkit install location. On the command line of the core device, type:

     chown ggc_user:ggc_group /opt/intel/computer_vision_sdk
  2. Repeat the same chown process on other directories if the error occurs again. Additionally, make sure to remove the Webcam resource if it is not plugged in, or an error will occur when deploying the Lambda.

Update/Change Lambda Code

This section describes how to deploy a new version of the Lambda to AWS Greengrass after changing the code inside the Lambda Console. For example, modifying greengrass_object_detection_sample_ssd.py requires deploying a new version.

  1. Inside of Lambda Console, choose Actions > Publish New Version.
  2. From the terminal, use the AWS CLI to update the alias to point to the new version:
     aws lambda update-alias \
         --region region \
         --function-name greengrass_object_detection \
         --function-version 2 \
         --name GG_Alias \
         --profile default

     For --function-version, specify the function version that you published in the Lambda Console.
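Equivalently, the alias can be repointed from Python with boto3, which is already installed as a prerequisite; the region and version number below are placeholders.

    # Optional boto3 equivalent of the update-alias CLI call above.
    import boto3

    client = boto3.client('lambda', region_name='us-east-1')  # use your Greengrass group's region
    client.update_alias(
        FunctionName='greengrass_object_detection',
        Name='GG_Alias',
        FunctionVersion='2',  # the version you published in the Lambda Console
    )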

Output Consumption

There are four options available for output consumption:

  • AWS IoT Cloud Output
  • AWS Kinesis Streaming*
  • Cloud Storage using AWS S3 Bucket*
  • Local Storage

These options are used to report, stream, upload and store inference output at an interval defined by the variable reporting_interval in the AWS Greengrass samples.

Descriptions

AWS IoT Cloud Output

The AWS Greengrass samples enable AWS IoT Cloud output by default through the enable_iot_cloud_output variable. The option is used to verify that the Lambda is running on the edge device and to publish messages to AWS IoT Cloud using the subscription topic specified in the Lambda. For classification, the top class label is published to AWS IoT Cloud using the subscription topic openvino/classification. For SSD object detection, the samples use the subscription topic openvino/ssd and publish the bounding box coordinates of detected objects, the class label, and the class confidence.

To view the output on AWS IoT cloud, follow the AWS Greengrass Developer Guide, Verify the Lambda Function is Running on the Device.

AWS Kinesis Streaming

The AWS Kinesis Streaming option enables inference output to be streamed from the edge device to cloud using AWS Kinesis streams when enable_kinesis_output is set to True. The edge devices act as data producers and continually push processed data to the cloud. Users set up and specify AWS Kinesis stream name, AWS Kinesis shard, and AWS region in the AWS Greengrass samples.
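A minimal sketch of that producer pattern with boto3 follows; the stream name, partition key, and region are user-chosen placeholders, as noted above.

    # Sketch of pushing inference output to an AWS Kinesis stream with boto3.
    # Stream name, partition key, and region are user-specified placeholders.
    import json
    import boto3

    kinesis = boto3.client('kinesis', region_name='us-east-1')

    def push_to_kinesis(detection):
        kinesis.put_record(
            StreamName='openvino-inference-stream',  # placeholder stream name
            Data=json.dumps(detection),
            PartitionKey='shard-1',                  # placeholder partition key
        )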

Cloud Storage using AWS S3* Bucket

The Cloud Storage Using AWS S3 Bucket option enables uploading and storing processed frames (in JPEG format) in an AWS S3 bucket when the enable_s3_jpeg_output variable is set to True. Users need to set up and specify the AWS S3 bucket name in the AWS Greengrass samples to store the JPEG images. The images are named using the timestamp and uploaded to AWS S3.
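A sketch of the upload step is shown below; the bucket name is a placeholder, and the timestamp-based key follows the naming described above.

    # Sketch of uploading a processed frame (JPEG) to an AWS S3 bucket with boto3.
    # The bucket name is a placeholder; keys are timestamp-based as described above.
    import time
    import boto3
    import cv2

    s3 = boto3.client('s3')

    def upload_frame(frame, bucket='my-openvino-output-bucket'):
        ok, jpeg = cv2.imencode('.jpg', frame)
        if ok:
            key = '%s.jpeg' % time.strftime('%Y-%m-%d_%H-%M-%S')
            s3.put_object(Bucket=bucket, Key=key, Body=jpeg.tobytes())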

Local Storage

The Local Storage option enables storing processed frames (in JPEG format) on the edge device when the enable_local_jpeg_output variable is set to True. The images are named using the timestamp and stored in a directory specified by PARAM_OUTPUT_DIRECTORY.
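A sketch of the local variant, writing into the directory given by PARAM_OUTPUT_DIRECTORY:

    # Sketch of storing a processed frame locally in PARAM_OUTPUT_DIRECTORY.
    import os
    import time
    import cv2

    def save_frame(frame):
        out_dir = os.environ.get("PARAM_OUTPUT_DIRECTORY", ".")
        filename = '%s.jpeg' % time.strftime('%Y-%m-%d_%H-%M-%S')
        cv2.imwrite(os.path.join(out_dir, filename), frame)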
