C# USB Video Face Detection

This program was developed on top of the ArcSoft (虹软) face recognition SDK.
SDK download: https://ai.arcsoft.com.cn/ucenter/user/reg?utm_source=csdn1&utm_medium=referral

**Prerequisites**
Download the ArcFace engine development package from the ArcSoft website, together with its activation codes (App_id, SDK_key).

Import the downloaded development package into your application.

The App_id and SDK_key are required when the engines are initialized.
**Basic Types**

All basic types are defined in the platform library. The naming rule is to take the ANSI C basic type, prefix it with the letter "M", and capitalize the first letter of the type name. For example, "long" is defined as "MLong".
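When these types cross into C# through P/Invoke (as in the sample code later in this article), they have to be mapped onto managed types. Below is a minimal sketch of that mapping, assuming a Windows build of the SDK; the exact widths are assumptions and should be verified against the SDK headers:

```
// Assumed C# aliases for the SDK's M-prefixed ANSI C types (verify against the SDK headers).
using MInt32  = System.Int32;   // 32-bit signed integer
using MLong   = System.Int32;   // C "long" is assumed 32-bit on Windows
using MFloat  = System.Single;  // single-precision float
using MHandle = System.IntPtr;  // opaque engine handle
using MPChar  = System.IntPtr;  // pointer to an ANSI string (read with Marshal.PtrToStringAnsi)
```

In practice the sample code below simply uses int, float, and IntPtr directly in its DllImport signatures, which amounts to the same mapping.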
**Data Structures and Enums**

AFR_FSDK_FACEINPUT

Description: face information

Definition

typedef struct{
MRECT	rcFace;
AFR_FSDK_OrientCode	lOrient;
} AFR_FSDK_FACEINPUT, *LPAFR_FSDK_FACEINPUT;

Member description

rcFace: face bounding rectangle
lOrient: face rotation angle

AFR_FSDK_FACEMODEL

Description: face feature information

Definition

typedef struct{
MByte	*pbFeature;
MInt32	lFeatureSize;
} AFR_FSDK_FACEMODEL, *LPAFR_FSDK_FACEMODEL;

Member description

pbFeature: the extracted face feature
lFeatureSize: length of the feature data

AFR_FSDK_VERSION

Description: engine version information

Definition

typedef struct{
MInt32	lCodebase;
MInt32	lMajor;
MInt32	lMinor;
MInt32	lBuild;
MInt32	lFeatureLevel;
MPChar	Version;
MPChar	BuildDate;
MPChar	CopyRight;
} AFR_FSDK_VERSION, *LPAFR_FSDK_VERSION;

Member description

lCodebase: code base version number
lMajor: major version number
lMinor: minor version number
lBuild: build number, incremented with each build
lFeatureLevel: feature library version number
Version: version number as a string
BuildDate: build date
CopyRight: copyright notice
Enums

AFR_FSDK_ORIENTCODE

Description: face orientation, counter-clockwise

Definition (enumerator values omitted here; see the SDK header for the exact values)

typedef enum{
AFR_FSDK_FOC_0,
AFR_FSDK_FOC_90,
AFR_FSDK_FOC_270,
AFR_FSDK_FOC_180,
AFR_FSDK_FOC_30,
AFR_FSDK_FOC_60,
AFR_FSDK_FOC_120,
AFR_FSDK_FOC_150,
AFR_FSDK_FOC_210,
AFR_FSDK_FOC_240,
AFR_FSDK_FOC_300,
AFR_FSDK_FOC_330
} AFR_FSDK_OrientCode;

Member description

AFR_FSDK_FOC_0: 0 degrees
AFR_FSDK_FOC_90: 90 degrees
AFR_FSDK_FOC_270: 270 degrees
AFR_FSDK_FOC_180: 180 degrees
AFR_FSDK_FOC_30: 30 degrees
AFR_FSDK_FOC_60: 60 degrees
AFR_FSDK_FOC_120: 120 degrees
AFR_FSDK_FOC_150: 150 degrees
AFR_FSDK_FOC_210: 210 degrees
AFR_FSDK_FOC_240: 240 degrees
AFR_FSDK_FOC_300: 300 degrees
AFR_FSDK_FOC_330: 330 degrees
Supported color formats

Description: color formats and their layout rules

Definition

ASVL_PAF_I420: 8-bit Y plane followed by 8-bit 2x2 subsampled U and V planes
ASVL_PAF_YUYV: packed Y0, U0, Y1, V0
ASVL_PAF_RGB24_B8G8R8: BGR24, B8G8R8
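The sample code later in this article passes the numeric literal 513 for u32PixelArrayFormat when it submits a BGR24 frame. Below is a small sketch of these format constants on the C# side, assuming the values commonly found in the ASVL header (asvloffscreen.h); treat them as assumptions and verify against your SDK package:

```
namespace ArcsoftFace
{
    // Assumed ASVL pixel-format constants (verify against asvloffscreen.h in the SDK package).
    public static class AsvlPixelFormat
    {
        public const int ASVL_PAF_I420         = 0x601;  // planar Y + 2x2 subsampled U, V
        public const int ASVL_PAF_YUYV         = 0x501;  // packed Y0 U0 Y1 V0
        public const int ASVL_PAF_RGB24_B8G8R8 = 0x201;  // 24-bit BGR; 0x201 == 513, the value used below
    }
}
```

Using a named constant instead of the bare 513 makes the intent of the ASVLOFFSCREEN setup code easier to follow.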

**API Reference**

AFR_FSDK_InitialEngine

Description: initialize the engine

Prototype

MRESULT AFR_FSDK_InitialEngine(
MPChar	AppId,
MPChar	SDKKey,
MByte	*pMem,
MInt32	lMemSize,
MHandle	*phEngine
);

  

Parameters

AppId [in] The App Id obtained when applying for the SDK
SDKKey [in] The SDK Key obtained when applying for the SDK
pMem [in] Memory buffer allocated for the engine
lMemSize [in] Size of the memory buffer allocated for the engine
phEngine [out] Engine handle

Return value: returns MOK on success, otherwise a failure code. Possible failure codes:
MERR_INVALID_PARAM: invalid input parameter
MERR_NO_MEMORY: out of memory
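Putting the engine lifecycle together, here is a minimal sketch of initializing and releasing the recognition engine from C#. It reuses the AmFaceVerify P/Invoke wrapper shown later in this article; the AFR_FSDK_UninitialEngine declaration is added here as an assumption because the sample wrapper does not declare it, and MOK is assumed to be 0:

```
using System;
using System.Runtime.InteropServices;

namespace ArcsoftFace
{
    public static class EngineLifecycleDemo
    {
        // Assumed declaration; the AmFaceVerify class below does not expose this export.
        [DllImport("libarcsoft_fsdk_face_recognition.dll", CallingConvention = CallingConvention.Cdecl)]
        public static extern int AFR_FSDK_UninitialEngine(IntPtr pEngine);

        public static void Run(string appId, string sdkKey)
        {
            int memSize = 40 * 1024 * 1024;               // working buffer handed to the engine
            IntPtr pMem = Marshal.AllocHGlobal(memSize);
            IntPtr engine = IntPtr.Zero;
            try
            {
                int ret = AmFaceVerify.AFR_FSDK_InitialEngine(appId, sdkKey, pMem, memSize, ref engine);
                if (ret != 0)                             // MOK is assumed to be 0
                {
                    Console.WriteLine("AFR_FSDK_InitialEngine failed: 0x{0:X}", ret);
                    return;
                }
                // ... detect faces, extract features, and compare them here ...
            }
            finally
            {
                if (engine != IntPtr.Zero)
                    AFR_FSDK_UninitialEngine(engine);     // destroy the engine, release its resources
                Marshal.FreeHGlobal(pMem);                // release the working buffer
            }
        }
    }
}
```

The Form2_Load handler at the end of this article follows the same pattern for both the detection and the recognition engine but never checks the return code; checking it as above makes licensing or memory problems visible immediately.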
AFR_FSDK_ExtractFRFeature

Description: extract face feature data

Prototype

MRESULT AFR_FSDK_ExtractFRFeature(
MHandle	hEngine,
LPASVLOFFSCREEN	pInputImage,
LPAFR_FSDK_FACEINPUT	pFaceRes,
LPAFR_FSDK_FACEMODEL	pFaceModels
);

  

Parameters

hEngine [in] Engine handle
pInputImage [in] Input image data
pFaceRes [in] Information of a detected face
pFaceModels [out] Extracted face feature data

Return value: returns MOK on success, otherwise a failure code. Possible failure codes:
MERR_INVALID_PARAM: invalid input parameter
MERR_NO_MEMORY: out of memory
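The full sample at the end of this article stops at face detection, so for completeness here is a hedged sketch of how the extraction step can be wired up once a face rectangle is available. It reuses the AFR_FSDK_FACEINPUT and AFR_FSDK_FACEMODEL structs and the AmFaceVerify wrapper defined later; the orientation value and the buffer handling are assumptions, not the article's original code:

```
using System;
using System.Runtime.InteropServices;

namespace ArcsoftFace
{
    public static class FeatureExtractionDemo
    {
        // offInputPtr: pointer to a filled ASVLOFFSCREEN (see detectAndExtractFeature below)
        // rect/orient: rectangle and orientation read back from AFD_FSDK_FACERES
        public static byte[] ExtractFeature(IntPtr recognizeEngine, IntPtr offInputPtr, MRECT rect, int orient)
        {
            var faceInput = new AFR_FSDK_FACEINPUT { rcFace = rect, lfaceOrient = orient };
            IntPtr faceInputPtr = Marshal.AllocHGlobal(Marshal.SizeOf(faceInput));
            IntPtr faceModelPtr = Marshal.AllocHGlobal(Marshal.SizeOf(typeof(AFR_FSDK_FACEMODEL)));
            try
            {
                Marshal.StructureToPtr(faceInput, faceInputPtr, false);
                int ret = AmFaceVerify.AFR_FSDK_ExtractFRFeature(recognizeEngine, offInputPtr, faceInputPtr, faceModelPtr);
                if (ret != 0) return null;              // MOK assumed to be 0

                var model = (AFR_FSDK_FACEMODEL)Marshal.PtrToStructure(faceModelPtr, typeof(AFR_FSDK_FACEMODEL));
                byte[] feature = new byte[model.lFeatureSize];
                Marshal.Copy(model.pbFeature, feature, 0, model.lFeatureSize);  // copy the feature out of native memory
                return feature;
            }
            finally
            {
                Marshal.FreeHGlobal(faceInputPtr);
                Marshal.FreeHGlobal(faceModelPtr);
            }
        }
    }
}
```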
AFR_FSDK_FacePairMatching

Description: compare two face features

Prototype

MRESULT AFR_FSDK_FacePairMatching(
MHandle	hEngine,
AFR_FSDK_FACEMODEL	*reffeature,
AFR_FSDK_FACEMODEL	*probefeature,
MFloat	*pfSimilScore
);

  

Parameters

hEngine [in] Engine handle
reffeature [in] The reference face feature
probefeature [in] The face feature compared against the reference
pfSimilScore [out] Similarity score of the two face features

Return value: returns MOK on success, otherwise a failure code. Possible failure codes:
MERR_INVALID_PARAM: invalid input parameter
MERR_NO_MEMORY: out of memory
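Continuing the sketch above (same usings and namespace), here is a hedged example of the comparison step: the two feature blobs are repacked into AFR_FSDK_FACEMODEL structs before the call. The helper and its error handling are assumptions layered on top of the wrapper defined later, not part of the original sample:

```
// Minimal sketch: compare two previously extracted feature blobs.
public static float CompareFeatures(IntPtr recognizeEngine, byte[] feature1, byte[] feature2)
{
    IntPtr p1 = Marshal.AllocHGlobal(feature1.Length);
    IntPtr p2 = Marshal.AllocHGlobal(feature2.Length);
    IntPtr m1 = Marshal.AllocHGlobal(Marshal.SizeOf(typeof(AFR_FSDK_FACEMODEL)));
    IntPtr m2 = Marshal.AllocHGlobal(Marshal.SizeOf(typeof(AFR_FSDK_FACEMODEL)));
    try
    {
        Marshal.Copy(feature1, 0, p1, feature1.Length);
        Marshal.Copy(feature2, 0, p2, feature2.Length);
        Marshal.StructureToPtr(new AFR_FSDK_FACEMODEL { pbFeature = p1, lFeatureSize = feature1.Length }, m1, false);
        Marshal.StructureToPtr(new AFR_FSDK_FACEMODEL { pbFeature = p2, lFeatureSize = feature2.Length }, m2, false);

        float score = 0f;
        int ret = AmFaceVerify.AFR_FSDK_FacePairMatching(recognizeEngine, m1, m2, ref score);
        return ret == 0 ? score : -1f;   // MOK assumed to be 0; -1 signals failure
    }
    finally
    {
        Marshal.FreeHGlobal(p1);
        Marshal.FreeHGlobal(p2);
        Marshal.FreeHGlobal(m1);
        Marshal.FreeHGlobal(m2);
    }
}
```

The score returned by the SDK is a similarity value; the threshold at which two faces are treated as the same person is application specific.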
AFR_FSDK_UninitialEngine

Description: destroy the engine and release the associated resources

Prototype

MRESULT AFR_FSDK_UninitialEngine(
MHandle hEngine
);

Parameters

hEngine [in] Engine handle

Return value: returns MOK on success, otherwise a failure code. Possible failure codes:
MERR_INVALID_PARAM: invalid input parameter
AFR_FSDK_GetVersion

Description: get the engine version information

Prototype

const AFR_FSDK_VERSION * AFR_FSDK_GetVersion(
MHandle hEngine
);

Sample code

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

namespace ArcsoftFace
{

public struct AFD_FSDK_FACERES
{
public int nFace; // number of faces detected

public IntPtr rcFace; // The bounding box of face

public IntPtr lfaceOrient; // the angle of each face
}

}

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

namespace ArcsoftFace
{
public struct AFR_FSDK_FACEINPUT
{
public MRECT rcFace;	// The bounding box of face

public int lfaceOrient; // The orientation of face
}
}

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

namespace ArcsoftFace
{
public struct AFR_FSDK_FACEMODEL
{
public IntPtr pbFeature;	// The extracted features

public int lFeatureSize;	// The size of pbFeature
}
}

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

namespace ArcsoftFace
{
public struct AFR_FSDK_Version
{
public int lCodebase;
public int lMajor;
public int lMinor;
public int lBuild;
public int lFeatureLevel;
public IntPtr Version;
public IntPtr BuildDate;
public IntPtr CopyRight;
}
}

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Runtime.InteropServices;

namespace ArcsoftFace
{

public class AmFaceVerify
{
/**
* Initialize the face detection engine
* @return result code of the initialization call
*/
[DllImport("libarcsoft_fsdk_face_detection.dll", CallingConvention = CallingConvention.Cdecl)]
public static extern int AFD_FSDK_InitialFaceEngine(string appId, string sdkKey, IntPtr pMem, int lMemSize, ref IntPtr pEngine, int iOrientPriority, int nScale, int nMaxFaceNum);

/**
* Get the face detection SDK version information
* @return pointer to the face detection SDK version information
*/
[DllImport("libarcsoft_fsdk_face_detection.dll", CallingConvention = CallingConvention.Cdecl)]
public static extern IntPtr AFD_FSDK_GetVersion(IntPtr pEngine);

/**
* Detect face positions from the input image; typically used on still images
* @return face positions
*/
[DllImport("libarcsoft_fsdk_face_detection.dll", CallingConvention = CallingConvention.Cdecl)]
public static extern int AFD_FSDK_StillImageFaceDetection(IntPtr pEngine, IntPtr offline, ref IntPtr faceRes);

/**
* Initialize the face recognition engine
* @return result code of the initialization call
*/
[DllImport("libarcsoft_fsdk_face_recognition.dll", CallingConvention = CallingConvention.Cdecl)]
public static extern int AFR_FSDK_InitialEngine(string appId, string sdkKey, IntPtr pMem, int lMemSize, ref IntPtr pEngine);

/**
* Get the face recognition SDK version information
* @return pointer to the face recognition SDK version information
*/
[DllImport("libarcsoft_fsdk_face_recognition.dll", CallingConvention = CallingConvention.Cdecl)]
public static extern IntPtr AFR_FSDK_GetVersion(IntPtr pEngine);

/**
* Extract the face feature
* @return result code of the extraction call
*/
[DllImport("libarcsoft_fsdk_face_recognition.dll", CallingConvention = CallingConvention.Cdecl)]
public static extern int AFR_FSDK_ExtractFRFeature(IntPtr pEngine, IntPtr offline, IntPtr faceResult, IntPtr localFaceModels);

/**
* Get the similarity score
* @return result code of the matching call
*/
[DllImport("libarcsoft_fsdk_face_recognition.dll", CallingConvention = CallingConvention.Cdecl)]
public static extern int AFR_FSDK_FacePairMatching(IntPtr pEngine, IntPtr faceModels1, IntPtr faceModels2, ref float fSimilScore);

#region delete
///**
// * Create the face detection engine
// * @param [in] model_path  path of the model folder
// * @param [out] engine     the created face detection engine
// * @return =0 on success, <0 is an error code.
// */
//[DllImport("AmFaceDet.dll", CallingConvention = CallingConvention.Cdecl)]
//public static extern int AmCreateFaceDetectEngine(string modelPath, ref IntPtr faceDetectEngine);

///**
// * Create the face recognition engine
// * @param [in] model_path  path of the model folder
// * @param [out] engine     the created face recognition engine
// * @return =0 on success, <0 is an error code.
// */
//[DllImport("AmFaceRec.dll", CallingConvention = CallingConvention.Cdecl)]
//public static extern int AmCreateFaceRecogniseEngine(string modelPath, ref IntPtr facRecogniseeEngine);

///**
// * Create the face comparison engine
// * @param [in] model_path  path of the model folder
// * @param [out] engine     the created face comparison engine
// * @return =0 on success, <0 is an error code.
// */
//[DllImport("AmFaceCompare.dll", CallingConvention = CallingConvention.Cdecl)]
//public static extern int AmCreateFaceCompareEngine(ref IntPtr facCompareEngine);

///**
// * Set the face engine parameters
// * @param [in] engine face engine
// * @param [in] param  face parameters
// */
//[DllImport("AmFaceDet.dll", CallingConvention = CallingConvention.Cdecl)]
//public static extern void AmSetParam(IntPtr faceDetectEngine, [MarshalAs(UnmanagedType.LPArray)] [In] TFaceParams[] setFaceParams);

///**
// * Face detection
// * @param [in] engine      face engine
// * @param [in] bgr         image data in BGR format
// * @param [in] width       image width
// * @param [in] height      image height
// * @param [in] pitch       bytes per image row
// * @param [in,out] faces   array of face structs; its length should equal the expected face count
// * @param [in] face_count  expected number of faces to detect
// * @return >=0 is the number of faces actually detected, <0 is an error code.
// */
//[DllImport("AmFaceDet.dll", CallingConvention = CallingConvention.Cdecl)]
//public static extern int AmDetectFaces(IntPtr faceDetectEngine, [MarshalAs(UnmanagedType.LPArray)] [In] byte[] image, int width, int height, int pitch, [MarshalAs(UnmanagedType.LPArray)] [In][Out] TAmFace[] faces, int face_count);

///**
// * Extract the face feature
// * @param [in] engine    face engine
// * @param [in] bgr       image data in BGR format
// * @param [in] width     image width
// * @param [in] height    image height
// * @param [in] pitch     bytes per image row
// * @param [in] face      face struct
// * @param [out] feature  extracted face feature
// * @return =0 on success, <0 is an error code.
// */
//[DllImport("AmFaceRec.dll", CallingConvention = CallingConvention.Cdecl)]
////public static extern int AmExtractFeature(IntPtr faceEngine, [MarshalAs(UnmanagedType.LPArray)] [In] byte[] image, int width, int height, int pitch, [MarshalAs(UnmanagedType.LPArray)] [In] TAmFace[] faces, ref byte[] feature);
//public static extern int AmExtractFeature(IntPtr facRecogniseeEngine, [MarshalAs(UnmanagedType.LPArray)] [In] byte[] image, int width, int height, int pitch, [MarshalAs(UnmanagedType.LPArray)] [In] TAmFace[] faces, [MarshalAs(UnmanagedType.LPArray)] [Out] byte[] feature);

///**
// * Compare the similarity of two face features
// * @param [in] engine    face engine
// * @param [in] feature1  face feature 1
// * @param [in] feature2  face feature 2
// * @return face similarity score
// */
//[DllImport("AmFaceCompare.dll", CallingConvention = CallingConvention.Cdecl)]
//public static extern float AmCompare(IntPtr facCompareEngine, byte[] feature1, byte[] feature2);
#endregion
}
}

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Runtime.InteropServices;

namespace ArcsoftFace
{
public struct ASVLOFFSCREEN
{
public int u32PixelArrayFormat;

public int i32Width;

public int i32Height;

[MarshalAs(UnmanagedType.ByValArray, SizeConst = 4)]
public IntPtr[] ppu8Plane;

[MarshalAs(UnmanagedType.ByValArray, SizeConst = 4)]
public int[] pi32Pitch;
}
}

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

namespace ArcsoftFace
{
public struct MRECT
{
public int left;
public int top;
public int right;
public int bottom;
}
}

using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Text;
using System.Windows.Forms;
using Emgu.CV.CvEnum;
using Emgu.CV; // PS: this project calls into the Emgu CV dll
using Emgu.CV.Structure;
using Emgu.Util;
using Emgu.CV.UI;
using Emgu.CV.OCR;
using System.Threading;
using ArcsoftFace;
using System.Timers;
using Emgu.CV.Util;
using System.Linq;
using System.Runtime.InteropServices;
using System.Drawing.Imaging;
using System.Diagnostics;
using System.Drawing.Drawing2D;

namespace ArcsoftFace
{
public partial class Form2 : Form
{

private Capture capture = new Capture(0);
private bool captureinprocess; // whether the camera capture loop is currently running

byte[] firstFeature;

byte[] secondFeature;

// face detection engine
IntPtr detectEngine = IntPtr.Zero;

// face recognition engine
IntPtr regcognizeEngine = IntPtr.Zero;

// drag thread

private string haarXmlPath = "haarcascade_frontalface_alt_tree.xml";
double scale = 1.5;
// web camera
private System.Timers.Timer capture_tick;
private bool capture_flag = true;

Image<Gray, Byte> gray = null;
Image<Bgr, Byte> smallframe = null;
Mat frame = new Mat();
private int sb = 0;
Rectangle f = new Rectangle();
public Form2()
{
InitializeComponent();

capture_tick = new System.Timers.Timer();
capture_tick.Interval = 50;
capture_tick.Enabled = Enabled;
capture_tick.Stop();
capture_tick.Elapsed += new ElapsedEventHandler(processfram);
}

private byte[] getBGR(Bitmap image, ref int width, ref int height, ref int pitch)
{
//Bitmap image = new Bitmap(imgPath);

const PixelFormat PixelFormat = PixelFormat.Format24bppRgb;

BitmapData data = image.LockBits(new Rectangle(0, 0, image.Width, image.Height), ImageLockMode.ReadOnly, PixelFormat);

IntPtr ptr = data.Scan0;

int ptr_len = data.Height * Math.Abs(data.Stride);

byte[] ptr_bgr = new byte[ptr_len];

Marshal.Copy(ptr, ptr_bgr, 0, ptr_len);

width = data.Width;

height = data.Height;

pitch = Math.Abs(data.Stride);

int line = width * 3;

int bgr_len = line * height;

byte[] bgr = new byte[bgr_len];

for (int i = 0; i < height; ++i)
{
Array.Copy(ptr_bgr, i * pitch, bgr, i * line, line);
}

pitch = line;

image.UnlockBits(data);

return bgr;
}

private void button1_Click(object sender, EventArgs e)
{
if (capture != null) // the camera has already been created
{

if (captureinprocess)
{
imageBox1.Enabled = false;
// Application.Idle -= new EventHandler(processfram);
capture_tick.Stop();
button1.Text = "Start"; // capture paused; the button now offers to start it again

}
else
{
// Application.Idle += new EventHandler(processfram);
imageBox1.Enabled = true;
capture_tick.Start();
button1.Text = "Stop"; // capture running; the button now offers to stop it
}
captureinprocess = !captureinprocess;
}
else // the camera is null, so create it through the Capture() constructor
{
try
{
capture = new Capture(0);
}
catch (NullReferenceException excpt)
{
MessageBox.Show(excpt.Message);
}
}

}

private void processfram(object sender, EventArgs arg)
{
capture_tick.Enabled = false;
try
{
if (frame != null)
{
frame = capture.QueryFrame();
Emgu.CV.Image<Bgr, Byte> image = frame.ToImage<Bgr, Byte>();
Image<Bgr, Byte> currentFrame = image;

if (sb == 0)
{
sb += 1;
MRECT rect = detectAndExtractFeature(image.ToBitmap(), 1);

if (Math.Abs(rect.left - f.Left) > 30 || Math.Abs(rect.top - f.Top) > 30)
{
f = new Rectangle(rect.left, rect.top, rect.right - rect.left, rect.bottom - rect.top);
}

}
else if (sb >= 4)
{
sb = 0;
}
else
{
sb += 1;
}

// currentFrame.Draw(rect, new Bgr(Color.Red));
// image.Draw(detectAndExtractFeature(image.ToBitmap(),1),new Bgr(Color.Red),3);

currentFrame.Draw(f, new Bgr(Color.Red), 3);

imageBox1.Image = currentFrame.ToBitmap();
PointF pf = new PointF(50, 50);
using (Graphics g = imageBox1.CreateGraphics())
{
Font font = new Font("Arial", 12);
g.DrawString("left:" + f.Left + " top:" + f.Top, font, Brushes.Green, pf);

}
currentFrame.Dispose();
image.Dispose();
}
capture_tick.Enabled = true;
}
catch (Exception e)
{
Console.WriteLine(e.Message);
capture_tick.Enabled = true;
}
}

public static void BoundingBox(Image<Gray, byte> src, Image<Bgr, byte> draw)
{
using (VectorOfVectorOfPoint contours = new VectorOfVectorOfPoint())
{
CvInvoke.FindContours(src, contours, null, RetrType.External,
ChainApproxMethod.ChainApproxSimple);

int count = contours.Size;
for (int i = 0; i < count; i++)
{
using (VectorOfPoint contour = contours[i])
{
Rectangle BoundingBox = CvInvoke.BoundingRectangle(contour);
CvInvoke.Rectangle(draw, BoundingBox, new MCvScalar(255, 0, 255, 255), 3);
}
}
}
}

public void CaptureProcess(object sender, EventArgs arg)
{
Mat frame1 = new Mat();
frame1 = capture.QueryFrame();

if (frame1 != null)
{

//face detection

//frame = frame.Flip(Emgu.CV.CvEnum.FLIP.HORIZONTAL);
// smallframe = frame.Resize(1 / scale, Emgu.CV.CvEnum.INTER.CV_INTER_LINEAR); // downscale the large frame captured by the camera
gray = smallframe.Convert<Gray, Byte>(); // convert to grayscale (note: smallframe is only assigned by the commented-out line above)
gray._EqualizeHist(); // histogram equalization

CascadeClassifier ccr = new CascadeClassifier(haarXmlPath);
Rectangle[] rects = ccr.DetectMultiScale(gray, 1.3, 3, new Size(20, 20), Size.Empty);
foreach (Rectangle r in rects)
{
//This will focus in on the face from the haar results; it's not perfect but it will remove a majority
//of the background noise
Rectangle facesDetected = r;
facesDetected.X += (int)(facesDetected.Height * 0.6);
facesDetected.Y += (int)(facesDetected.Width * 0.8);
facesDetected.Height += (int)(facesDetected.Height * 0.1);
facesDetected.Width += (int)(facesDetected.Width * 0.2);

// frame.Draw(facesDetected, new Bgr(Color.Red), 3); // draw the detection rectangle
}

// imageBox_capture.Image = frame;
}
}

private MRECT detectAndExtractFeature(Image imageParam, int firstSecondFlg)
{

byte[] feature = null;
MRECT rect = new MRECT();
Bitmap bitmap = new Bitmap(imageParam);
byte[] imageData = null;
IntPtr imageDataPtr = IntPtr.Zero;
ASVLOFFSCREEN offInput = new ASVLOFFSCREEN();
AFD_FSDK_FACERES faceRes = new AFD_FSDK_FACERES();

IntPtr faceResPtr = IntPtr.Zero;
try
{

int width = 0;

int height = 0;

int pitch = 0;

imageData = getBGR(bitmap, ref width, ref height, ref pitch);

//GCHandle hObject = GCHandle.Alloc(imageData, GCHandleType.Pinned);

//IntPtr imageDataPtr = hObject.AddrOfPinnedObject();

imageDataPtr = Marshal.AllocHGlobal(imageData.Length);

Marshal.Copy(imageData, 0, imageDataPtr, imageData.Length);

offInput.u32PixelArrayFormat = 513; // 513 == 0x201, i.e. ASVL_PAF_RGB24_B8G8R8 (24-bit BGR data produced by getBGR)

offInput.ppu8Plane = new IntPtr[4];

offInput.ppu8Plane[0] = imageDataPtr;

offInput.i32Width = width;

offInput.i32Height = height;

offInput.pi32Pitch = new int[4];

offInput.pi32Pitch[0] = pitch;

IntPtr offInputPtr = Marshal.AllocHGlobal(Marshal.SizeOf(offInput));

Marshal.StructureToPtr(offInput, offInputPtr, false);

faceResPtr = Marshal.AllocHGlobal(Marshal.SizeOf(faceRes)); // note: the detection call below passes this by ref and overwrites it with the engine's own result pointer

//Marshal.StructureToPtr(faceRes, faceResPtr, false);

// face detection
int detectResult = AmFaceVerify.AFD_FSDK_StillImageFaceDetection(detectEngine, offInputPtr, ref faceResPtr);

object obj = Marshal.PtrToStructure(faceResPtr, typeof(AFD_FSDK_FACERES));

faceRes = (AFD_FSDK_FACERES)obj;

for (int i = 0; i < faceRes.nFace; i++)
{
rect = (MRECT)Marshal.PtrToStructure(faceRes.rcFace + Marshal.SizeOf(typeof(MRECT)) * i, typeof(MRECT));
int orient = (int)Marshal.PtrToStructure(faceRes.lfaceOrient + Marshal.SizeOf(typeof(int)) * i, typeof(int));

if (i == 0)
{
Image image = CutFace(bitmap, rect.left, rect.top, rect.right - rect.left, rect.bottom - rect.top);

if (firstSecondFlg == 1)
{
this.pictureBox3.Image = image;
}
else if (firstSecondFlg == 2)
{
this.pictureBox4.Image = image;
}
}

}

}
catch (Exception e)
{
LogHelper.WriteErrorLog("detect", e.Message + "\n" + e.StackTrace);
}
finally
{
bitmap.Dispose();

imageData = null;

Marshal.FreeHGlobal(imageDataPtr);

offInput = new ASVLOFFSCREEN();

faceRes = new AFD_FSDK_FACERES();

}
return rect;
}

private Image DrawRectangleInPicture(Image bmp, Point p0, Point p1, Color RectColor, int LineWidth, DashStyle ds)
{

if (bmp == null) return null;

Graphics g = Graphics.FromImage(bmp);

Brush brush = new SolidBrush(RectColor);

Pen pen = new Pen(brush, LineWidth);

pen.DashStyle = ds;

g.DrawRectangle(pen, new Rectangle(p0.X, p0.Y, Math.Abs(p0.X - p1.X), Math.Abs(p0.Y - p1.Y)));

g.Dispose();

return bmp;

}

public static Bitmap CutFace(Bitmap srcImage, int StartX, int StartY, int iWidth, int iHeight)
{
if (srcImage == null)
{
return null;
}

int w = srcImage.Width;

int h = srcImage.Height;

if (StartX >= w || StartY >= h)
{
return null;
}
if (StartX + iWidth > w)
{
iWidth = w - StartX;
}
if (StartY + iHeight > h)
{
iHeight = h - StartY;
}
try
{
Bitmap bmpOut = new Bitmap(iWidth, iHeight, PixelFormat.Format24bppRgb);

Graphics g = Graphics.FromImage(bmpOut);

g.DrawImage(srcImage, new Rectangle(0, 0, iWidth, iHeight), new Rectangle(StartX, StartY, iWidth, iHeight), GraphicsUnit.Pixel);

g.Dispose();

return bmpOut;
}
catch
{
return null;
}
}

private void Form2_Load(object sender, System.EventArgs e)
{
#region Initialize the face detection engine

int detectSize = 40 * 1024 * 1024;

IntPtr pMem = Marshal.AllocHGlobal(detectSize);

//1-1
//string appId = "4tnYSJ68e8wztSo4Cf7WvbyMZduHwpqtThAEM3obMWbE";

//1-1
//string sdkKey = "Cgbaq34izc8PA2Px26x8qqWTQn2P5vxijaWKdUrdCwYT";

//1-n
string appId = "8b4R2gvcoFQXKbC4wGtnYcqsa9Bd3FLiN3VWDFtJqcnB";

//1-n
string sdkKey = "A5Km3QjZKGuakWRmC2pSWTuNzbNbaSCnj5fFtjBBcdxm";

// initialize the face detection engine

// IntPtr aaa= AFD_FSDKLibrary.AFD_FSDK_InitialFaceEngine(appId, sdkKey, pMem, detectSize, ref detectEngine, 5, 50, 1);
int retCode = AmFaceVerify.AFD_FSDK_InitialFaceEngine(appId, sdkKey, pMem, detectSize, ref detectEngine, 5, 50, 1);
// get the face detection engine version
IntPtr versionPtr = AmFaceVerify.AFD_FSDK_GetVersion(detectEngine);

AFR_FSDK_Version version = (AFR_FSDK_Version)Marshal.PtrToStructure(versionPtr, typeof(AFR_FSDK_Version));

Console.WriteLine("lCodebase:{0} lMajor:{1} lMinor:{2} lBuild:{3} Version:{4} BuildDate:{5} CopyRight:{6}", version.lCodebase, version.lMajor, version.lMinor, version.lBuild, Marshal.PtrToStringAnsi(version.Version), Marshal.PtrToStringAnsi(version.BuildDate), Marshal.PtrToStringAnsi(version.CopyRight));

//Marshal.FreeHGlobal(versionPtr);

#endregion

#region Initialize the face recognition engine

int recognizeSize = 40 * 1024 * 1024;

IntPtr pMemDetect = Marshal.AllocHGlobal(recognizeSize);

//1-1
//string appIdDetect = "4tnYSJ68e8wztSo4Cf7WvbyMZduHwpqtThAEM3obMWbE";

//1-1
//string sdkKeyDetect = "Cgbaq34izc8PA2Px26x8qqWaaBHbPD7wWMcTU6xe8VRo";

//1-n
string appIdDetect = "8b4R2gvcoFQXKbC4wGtnYcqsa9Bd3FLiN3VWDFtJqcnB";

//1-n
string sdkKeyDetect = "A5Km3QjZKGuakWRmC2pSWTuW9zdndn5EkVDo4LceRxLU";

// initialize the face recognition engine
retCode = AmFaceVerify.AFR_FSDK_InitialEngine(appIdDetect, sdkKeyDetect, pMemDetect, recognizeSize, ref regcognizeEngine);

// get the face recognition engine version
IntPtr versionPtrDetect = AmFaceVerify.AFR_FSDK_GetVersion(regcognizeEngine);

AFR_FSDK_Version versionDetect = (AFR_FSDK_Version)Marshal.PtrToStructure(versionPtrDetect, typeof(AFR_FSDK_Version));

Console.WriteLine("lCodebase:{0} lMajor:{1} lMinor:{2} lBuild:{3} lFeatureLevel:{4} Version:{5} BuildDate:{6} CopyRight:{7}", versionDetect.lCodebase, versionDetect.lMajor, versionDetect.lMinor, versionDetect.lBuild, versionDetect.lFeatureLevel, Marshal.PtrToStringAnsi(versionDetect.Version), Marshal.PtrToStringAnsi(versionDetect.BuildDate), Marshal.PtrToStringAnsi(versionDetect.CopyRight));

#endregion
}

}
}

  

Source code download for the USB video with dynamic face framing:
https://download.csdn.net/download/zhang1244/10368237

Demo of the running result:
https://download.csdn.net/download/zhang1244/10368222

Key-point extraction and similarity comparison on ordinary face photos:

https://download.csdn.net/download/zhang1244/10368197

Demo of the running result:

https://download.csdn.net/download/zhang1244/10368181

Feel free to get in touch for technical discussion; a follow-up that compares live faces against ID-card photos for real-name verification may be developed later. Stay tuned.

Original article: https://www.cnblogs.com/Zzz-/p/10912536.html
