Jetson Nano: Building the Project from Source

https://github.com/dusty-nv/jetson-inference/blob/master/docs/building-repo-2.md

Provided with the repo is a library of TensorRT-accelerated deep learning networks for image recognition, object detection with localization (i.e. bounding boxes), and semantic segmentation. This inferencing library (libjetson-inference) is intended to be built & run on the Jetson, and includes support for both C++ and Python.

Various pre-trained DNN models are automatically downloaded to get you up and running quickly. It's also set up to accept customized models that you may have trained yourself, including support for Caffe, TensorFlow UFF, and ONNX.

The latest source can be obtained from GitHub and compiled onboard Jetson Nano, Jetson TX1/TX2, and Jetson AGX Xavier once they have been flashed with JetPack or set up with the pre-populated SD card image for Jetson Nano.

Quick Reference

Here's a condensed form of the commands to download, build, and install the project:

$ sudo apt-get update
$ sudo apt-get install git cmake libpython3-dev python3-numpy
$ git clone --recursive https://github.com/dusty-nv/jetson-inference
$ cd jetson-inference
$ mkdir build
$ cd build
$ cmake ../
$ make
$ sudo make install
$ sudo ldconfig

Below we will go through each step and discuss various build options along the way.

Cloning the Repo

To download the code, navigate to a folder of your choosing on the Jetson. First, make sure git and cmake are installed:

$ sudo apt-get update
$ sudo apt-get install git cmake

Then clone the jetson-inference project:

$ git clone https://github.com/dusty-nv/jetson-inference
$ cd jetson-inference
$ git submodule update --init

Remember to run the git submodule update --init step (or clone with the --recursive flag).

Python Development Packages

The Python functionality of this project is implemented through Python extension modules that provide bindings to the native C++ code using the Python C API. While configuring the project, the repo searches for versions of Python that have development packages installed on the system, and will then build the bindings for each version of Python that's present (e.g. Python 2.7, 3.6, and 3.7). It will also build numpy bindings for versions of numpy that are installed.
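As a rough illustration of what the configure step looks for, here is a small sketch (not part of the repo) that reports whether the current interpreter's C API headers are present — on Ubuntu these come from the libpython3-dev package:

```python
import os
import sysconfig

# Path where this interpreter's C API headers (Python.h) should live;
# the libpython3-dev package is what provides them on Ubuntu.
include_dir = sysconfig.get_paths()["include"]
have_headers = os.path.exists(os.path.join(include_dir, "Python.h"))

print("Python.h expected under:", include_dir)
print("development headers installed:", have_headers)
```

If the second line prints False, the build will silently skip bindings for that Python version.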

By default, Ubuntu comes with the libpython-dev and python-numpy packages pre-installed (which are for Python 2.7). Although the Python 3.6 interpreter is pre-installed by Ubuntu, the Python 3.6 development packages (libpython3-dev) and python3-numpy are not. These development packages are required for the bindings to build using the Python C API.

So if you want the project to create bindings for Python 3.6, install these packages before proceeding:

$ sudo apt-get install libpython3-dev python3-numpy

Installing these additional packages will enable the repo to build the extension bindings for Python 3.6, in addition to Python 2.7 (which is already pre-installed). After the build process, the jetson.inference and jetson.utils packages will be available to use within your Python environments.
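A quick sanity check (hypothetical helper, not part of the repo) can confirm both modules import cleanly after the build and install; on a machine without the project installed it simply reports them as missing:

```python
import importlib

# Try importing both binding modules; record the result either way,
# so this is safe to run on any machine.
status = {}
for name in ("jetson.inference", "jetson.utils"):
    try:
        importlib.import_module(name)
        status[name] = "OK"
    except ImportError:
        status[name] = "not installed"
    print(name, "--", status[name])
```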

Configuring with CMake

Next, create a build directory within the project and run cmake to configure the build. When cmake is run, a script is launched (CMakePreBuild.sh) that will install any required dependencies and download DNN models for you.

$ cd jetson-inference    # omit if working directory is already jetson-inference/ from above
$ mkdir build
$ cd build
$ cmake ../

note: this command will launch the CMakePreBuild.sh script which asks for sudo privileges while installing some prerequisite packages on the Jetson. The script also downloads pre-trained networks from web services.

Downloading Models

The project comes with many pre-trained networks that you can choose to have downloaded and installed through the Model Downloader tool (download-models.sh). By default, not all of the models are initially selected for download, to save disk space. You can select the models you want now, or run the tool again later to download additional models.

When initially configuring the project, cmake will automatically run the downloader tool for you.

note: for users that are unable to connect to Box.com to download the models, a mirror is provided here:
             https://github.com/dusty-nv/jetson-inference/releases

To run the Model Downloader tool again later, you can use the following commands:

$ cd jetson-inference/tools
$ ./download-models.sh

Installing PyTorch

If you are using JetPack 4.2 or newer, another tool will now run that can optionally install PyTorch on your Jetson, in case you want to re-train networks with transfer learning later in the tutorial. This step is optional; if you don't wish to do the transfer learning steps, you can skip installing PyTorch.

If desired, select the PyTorch package versions for Python 2.7 and/or Python 3.6 that you want installed and hit Enter to continue. Otherwise, leave the options unselected, and the tool will skip the installation of PyTorch.

note: the automated PyTorch installation tool requires JetPack 4.2 (or newer)
          for other versions, see http://eLinux.org/Jetson_Zoo to build from source.

You can also run this tool again later if you decide that you want to install PyTorch at another time:

$ cd jetson-inference/build
$ ./install-pytorch.sh

Running these commands will prompt you with the same dialog as seen above.
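To confirm the install took effect, a short check like the following can be run (a sketch, not part of the tutorial; the message prints either way, so it is safe on any machine):

```python
# Confirm whether PyTorch is importable and whether it can see the GPU.
try:
    import torch
    msg = "PyTorch %s | CUDA available: %s" % (torch.__version__, torch.cuda.is_available())
except ImportError:
    msg = "PyTorch is not installed"
print(msg)
```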

Compiling the Project

Make sure you are still in the jetson-inference/build directory created above.

Then run make followed by sudo make install to build the libraries, Python extension bindings, and code samples:

$ cd jetson-inference/build          # omit if working directory is already build/ from above
$ make
$ sudo make install
$ sudo ldconfig

The project will be built to jetson-inference/build/aarch64, with the following directory structure:

|-build
   \aarch64
      \bin             where the sample binaries are built to
         \networks     where the network models are stored
         \images       where the test images are stored
      \include         where the headers reside
      \lib             where the libraries are built to

In the build tree, you can find the binaries residing in build/aarch64/bin/, headers in build/aarch64/include/, and libraries in build/aarch64/lib/. These also get installed under /usr/local/ during the sudo make install step.

The Python bindings for the jetson.inference and jetson.utils modules also get installed during the sudo make install step under /usr/lib/python*/dist-packages/. If you update the code, remember to run sudo make install again.
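To see which copy of the bindings a given interpreter would actually load (useful after re-running sudo make install, or when multiple Python versions are installed), you can query the module spec — a sketch, not part of the repo:

```python
import importlib.util

# find_spec locates the package without importing it; origin is the
# file path the interpreter would load it from.
spec = importlib.util.find_spec("jetson")
if spec is not None:
    print("jetson package found at:", spec.origin)
else:
    print("jetson package not found by this interpreter")
```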

Digging Into the Code

See the API Reference documentation for the vision primitives available in libjetson-inference, including imageNet for image recognition, detectNet for object localization, and segNet for semantic segmentation. Familiarize yourself with the C++ or Python versions of these objects, depending on which language you prefer to use.

C++

Below is a partial listing of the imageNet C++ class that we'll use in upcoming steps of the tutorial:

class imageNet : public tensorNet
{
public:
	/**
	 * Network choice enumeration.
	 */
	enum NetworkType
	{
		CUSTOM,        /**< Custom model provided by the user */
		ALEXNET,       /**< AlexNet trained on 1000-class ILSVRC12 */
		GOOGLENET,	/**< GoogleNet trained on 1000-class ILSVRC12 */
		GOOGLENET_12,	/**< GoogleNet trained on 12-class subset of ImageNet ILSVRC12 from the tutorial */
		RESNET_18,	/**< ResNet-18 trained on 1000-class ILSVRC15 */
		RESNET_50,	/**< ResNet-50 trained on 1000-class ILSVRC15 */
		RESNET_101,	/**< ResNet-101 trained on 1000-class ILSVRC15 */
		RESNET_152,	/**< ResNet-152 trained on 1000-class ILSVRC15 */
		VGG_16,		/**< VGG-16 trained on 1000-class ILSVRC14 */
		VGG_19,		/**< VGG-19 trained on 1000-class ILSVRC14 */
		INCEPTION_V4,	/**< Inception-v4 trained on 1000-class ILSVRC12 */
	};

	/**
	 * Load a new network instance
	 */
	static imageNet* Create( NetworkType networkType=GOOGLENET, uint32_t maxBatchSize=DEFAULT_MAX_BATCH_SIZE,
                              precisionType precision=TYPE_FASTEST,
                              deviceType device=DEVICE_GPU, bool allowGPUFallback=true );

	/**
	 * Load a new network instance
	 * @param prototxt_path File path to the deployable network prototxt
	 * @param model_path File path to the caffemodel
	 * @param mean_binary File path to the mean value binary proto (can be NULL)
	 * @param class_labels File path to list of class name labels
	 * @param input Name of the input layer blob.
	 * @param output Name of the output layer blob.
	 * @param maxBatchSize The maximum batch size that the network will support and be optimized for.
	 */
	static imageNet* Create( const char* prototxt_path, const char* model_path,
                              const char* mean_binary, const char* class_labels,
                              const char* input=IMAGENET_DEFAULT_INPUT,
                              const char* output=IMAGENET_DEFAULT_OUTPUT,
                              uint32_t maxBatchSize=DEFAULT_MAX_BATCH_SIZE,
                              precisionType precision=TYPE_FASTEST,
                              deviceType device=DEVICE_GPU, bool allowGPUFallback=true );

	/**
	 * Determine the maximum likelihood image class.
	 * This function applies pre-processing to the image (mean-value subtraction and conversion to NCHW format), @see PreProcess()
	 * @param rgba float4 input image in CUDA device memory.
	 * @param width width of the input image in pixels.
	 * @param height height of the input image in pixels.
	 * @param confidence optional pointer to float filled with confidence value.
	 * @returns Index of the maximum class, or -1 on error.
	 */
	int Classify( float* rgba, uint32_t width, uint32_t height, float* confidence=NULL );

	/**
	 * Retrieve the number of image recognition classes (typically 1000)
	 */
	inline uint32_t GetNumClasses() const                            { return mOutputClasses; }

	/**
	 * Retrieve the description of a particular class.
	 */
	inline const char* GetClassDesc( uint32_t index ) const          { return mClassDesc[index].c_str(); }
};

All of the DNN objects in the repo inherit from the shared tensorNet object, which contains the common TensorRT code.

Python

Below is the abbreviated pydoc output of the Python imageNet object from the jetson.inference package:

jetson.inference.imageNet = class imageNet(tensorNet)
 |  Image Recognition DNN - classifies an image
 |
 |  __init__(...)
 |       Loads an image recognition model.
 |
 |       Parameters:
 |         network (string) -- name of a built-in network to use
 |                             values can be:  'alexnet', 'googlenet', 'googlenet-12', 'resnet-18', etc.
 |                             the default is 'googlenet'
 |
 |         argv (strings) -- command line arguments passed to imageNet,
 |                           for loading a custom model or custom settings
 |
 |  Classify(...)
 |      Classify an RGBA image and return the object's class and confidence.
 |
 |      Parameters:
 |        image  (capsule) -- CUDA memory capsule
 |        width  (int) -- width of the image (in pixels)
 |        height (int) -- height of the image (in pixels)
 |
 |      Returns:
 |        (int, float) -- tuple containing the object's class index and confidence
 |
 |  GetClassDesc(...)
 |      Return the class description for the given object class.
 |
 |      Parameters:
 |        (int) -- index of the class, between [0, GetNumClasses()]
 |
 |      Returns:
 |        (string) -- the text description of the object class
 |
 |  GetNumClasses(...)
 |      Return the number of object classes that this network model is able to classify.
 |
 |      Parameters:  (none)
 |
 |      Returns:
 |        (int) -- number of object classes that the model supports
----------------------------------------------------------------------

Next, we'll use the imageNet object to perform image recognition in Python or C++.
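Putting the pydoc output above together, a minimal classification script might look like the following sketch. The "my_image.jpg" path is a placeholder, jetson.utils.loadImageRGBA is the image loader from the same era of the library, and the import guard is an addition so the snippet degrades gracefully on a machine without the project installed:

```python
# Hedged sketch of the Python API documented above; classification
# itself only runs on a Jetson with jetson-inference installed.
try:
    import jetson.inference
    import jetson.utils
    HAVE_JETSON = True
except ImportError:
    HAVE_JETSON = False
    print("jetson-inference bindings not installed; run `sudo make install` first")

if HAVE_JETSON:
    # load an RGBA image into CUDA memory ("my_image.jpg" is a placeholder)
    img, width, height = jetson.utils.loadImageRGBA("my_image.jpg")

    # load the built-in GoogleNet model and classify the image
    net = jetson.inference.imageNet("googlenet")
    class_idx, confidence = net.Classify(img, width, height)

    print("class:", net.GetClassDesc(class_idx), "-- confidence:", confidence)
```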

Original article: https://www.cnblogs.com/cloudrivers/p/12121895.html
