【Study Preparation】Anaconda is an open-source Python distribution that bundles conda, Python, and more than 180 scientific packages together with their dependencies. Because it includes so many packages, the Anaconda installer is fairly large (about 531 MB). If you only need a few packages, or want to save bandwidth or disk space, you can use Miniconda instead, a smaller distribution that contains only conda and Python.
Conda is an open-source package and environment manager. It can install different versions of packages and their dependencies on the same machine, and switch between isolated environments.
Anaconda includes Conda, Python, and a large set of pre-installed packages such as numpy and pandas.
Miniconda includes only Conda and Python.
After installing Python 3 and importing openvino in that environment, I installed Anaconda. From then on, running python launched Anaconda's bundled interpreter: the previously configured Python could no longer be reached directly from the system unless the environment variables were changed back. To register the original Python 3.7.4 as an environment inside Anaconda, see this (very detailed) tutorial: https://blog.csdn.net/qq_43529415/article/details/100847887. Following it, I completed the configuration successfully.
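When it is unclear which interpreter `python` actually resolves to after a PATH change like the one above, a quick check from inside Python settles it. This is a minimal diagnostic sketch, not part of the demo:

```python
import os
import sys

# Absolute path of the interpreter binary running this script
# (shows whether it is the Anaconda python or the original install).
print(sys.executable)

# Version tuple, e.g. (3, 7, 4) for the original Python 3.7.4.
print(sys.version_info[:3])

# conda sets CONDA_PREFIX inside an activated environment,
# so this is True when running under conda.
print("CONDA_PREFIX" in os.environ)
```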
【Action Recognition Python* Demo】
See the official documentation: https://docs.openvinotoolkit.org/latest/_demos_python_demos_action_recognition_README.html
To run the demo, you can use public or pre-trained models. To download the pre-trained models, use the OpenVINO Model Downloader or visit https://download.01.org/opencv/.
Note: before running the demo with a trained model, make sure the model has been converted to Inference Engine format (*.xml + *.bin) with the Model Optimizer tool.
【In-Cabin Driver Monitoring】
1. In cmd.exe, change to the demo directory: C:\Program Files (x86)\IntelSWTools\openvino_2020.2.117\deployment_tools\open_model_zoo\demos\python_demos\action_recognition>
2. Run the demo: python action_recognition.py -m_en C:\Users\zlzx\Documents\Intel\intel\driver-action-recognition-adas-0002-encoder\FP16-INT8\driver-action-recognition-adas-0002-encoder.xml -m_de C:\Users\zlzx\Documents\Intel\intel\driver-action-recognition-adas-0002-decoder\FP16-INT8\driver-action-recognition-adas-0002-decoder.xml -i 0 -lb driver_actions.txt
3. Parameter reference: usage: action_recognition.py [-h] -i INPUT -m_en M_ENCODER [-m_de M_DECODER]
[-l CPU_EXTENSION] [-d DEVICE] [--fps FPS]
[-lb LABELS] [--no_show] [-s LABEL_SMOOTHING]
[--seq DECODER_SEQ_SIZE]
[-u UTILIZATION_MONITORS]
Options:
-h, --help Show this help message and exit.
-i INPUT, --input INPUT
Required. Id of the video capturing device to open (to
open default camera just pass 0), path to a video or a
.txt file with a list of ids or video files (one
object per line)
-m_en M_ENCODER, --m_encoder M_ENCODER
Required. Path to encoder model
-m_de M_DECODER, --m_decoder M_DECODER
Optional. Path to decoder model. If not specified,
simple averaging of encoder's outputs over a time
window is applied
-l CPU_EXTENSION, --cpu_extension CPU_EXTENSION
Optional. For CPU custom layers, if any. Absolute path
to a shared library with the kernels implementation.
-d DEVICE, --device DEVICE
Optional. Specify a target device to infer on. CPU,
GPU, FPGA, HDDL or MYRIAD is acceptable. The demo will
look for a suitable plugin for the device specified.
Default value is CPU
--fps FPS Optional. FPS for renderer
-lb LABELS, --labels LABELS
Optional. Path to file with label names
--no_show Optional. Don't show output
-s LABEL_SMOOTHING, --smooth LABEL_SMOOTHING
Optional. Number of frames used for output label
smoothing
--seq DECODER_SEQ_SIZE
Optional. Length of sequence that decoder takes as
input
-u UTILIZATION_MONITORS, --utilization-monitors UTILIZATION_MONITORS
Optional. List of monitors to show initially.
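Two of the options above correspond to simple post-processing steps: when no decoder is given (-m_de omitted), the encoder's outputs are averaged over a time window, and -s/--smooth averages class scores over recent frames. The sketch below illustrates those two ideas with plain numpy; the array shapes, function names, and default window sizes are illustrative assumptions, not the demo's actual code:

```python
import numpy as np

def average_encoder_outputs(embeddings, seq_size=16):
    """Average the most recent `seq_size` per-frame encoder embeddings
    into a single clip descriptor (the decoder-less fallback).

    embeddings: (num_frames, embedding_dim) array.
    """
    window = embeddings[-seq_size:]      # most recent frames
    return window.mean(axis=0)           # shape: (embedding_dim,)

def smooth_labels(per_frame_probs, smooth=30):
    """Average class probabilities over the last `smooth` frames,
    mirroring the -s/--smooth idea, then pick the top class index."""
    window = per_frame_probs[-smooth:]
    mean_probs = window.mean(axis=0)
    return int(np.argmax(mean_probs))

# Illustrative usage with random data:
emb = np.random.rand(40, 512)            # 40 frames of 512-dim embeddings
clip_descriptor = average_encoder_outputs(emb)
probs = np.random.rand(40, 9)            # e.g. 9 driver-action classes
label = smooth_labels(probs)
```

Averaging over a window trades a little latency for much steadier predictions, which is why the demo exposes the window sizes as command-line options.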
【Demo Video】