@dragonfive
2015-10-29T15:58:20.000000Z
Computer Vision
Sharing features: efficient boosting procedures for multiclass object detection
Sharing features across classes reduces both computation and sample complexity.
Features selected by independently trained classifiers tend to be specific to one class, while jointly trained classifiers select more generic features, such as lines and edges.
The dominant approach today slides a window over the image and applies a binary classifier to each window. The classifier separates object from background and is trained with standard machine-learning methods such as boosting or support vector machines. But this approach does not scale to a very large number of object classes, because each classifier is trained and run independently.
This paper proposes a new structure that shares features across multiple object classes. The basic idea is an extension of boosting: the classifiers for the different classes are trained jointly, so that with fewer features, and faster, we reach the same accuracy as independently trained classifiers.
(Note to self: I don't fully understand this part, the vector of shared features.)
There are many possible ways to decompose the multiclass classifier into shared components; we want the decomposition with the lowest computational cost, i.e. the one that needs the smallest number of functions.
In each training round we select the subset of classes for which a shared feature achieves the lowest overall error, add the best weak learner to the strong learners of all classes in that subset, and update their weight distributions.
The algorithm itself works by changing the data distribution: based on whether each sample in the training set was classified correctly, and on the overall accuracy of the previous round, it assigns each sample a new weight. The re-weighted data are passed to the next-level classifier for training, and the classifiers obtained from all the rounds are finally fused into the decision classifier.
(These formulas are really hard to read.)
AdaBoost is an iterative algorithm: on the same training set it trains a sequence of different weak classifiers and then combines them into one strong classifier.
It does this by changing the distribution of the data: after each round, the weight of every sample is updated according to whether it was classified correctly.
Misclassified samples have their weights increased, so later rounds concentrate on them; conversely, a weak classifier with a higher weighted error receives a smaller coefficient, and therefore carries less weight in the final strong classifier.
Further reading: "Principles and derivation of the AdaBoost algorithm"
"AdaBoost: from theory to implementation"
"Haar-feature-based AdaBoost face detection"
The core idea: under the initial weight distribution, train a weak (two-class) classifier; then, judging by its accuracy, increase the weights of the misclassified samples (those whose label is 1 but whose prediction came out 0, or the reverse) and decrease the weights of the correctly classified ones. The misclassified samples are thereby emphasized, so the next round of training pays more attention to them, which yields a new sample distribution (all sample weights updated). Under the new distribution, train another weak classifier, and repeat until N weak detectors of modest ability are obtained. Each of them is only slightly better than random guessing, that is, only slightly above 50% on a two-class problem; but by fusing these weak classifiers with an appropriate algorithm, a strong classifier with high accuracy is obtained. AdaBoost can reduce the weight of unimportant training data, or even drop it, and concentrate training on the key samples. It can be proven that, as long as each simple classifier is better than random guessing, the error of the strong classifier tends to zero as the number of simple classifiers tends to infinity.
The base classifier is a decision-tree classifier with at least two leaf nodes. Haar features are the input of the base classifier; each particular feature is defined by its shape, its position within the region of interest, and a scale factor.
CvHaarFeature: the Haar feature structure, a struct of three weighted rectangles; if the last rectangle's weight is 0, the feature consists of only two rectangles.
CvHaarClassifier: a classifier containing a decision tree; if the feature value is below the node threshold, the left branch is taken, otherwise the right branch.
CvHaarStageClassifier: a stage classifier.
CvHidHaarClassifierCascade: the cascade classifier (the internal, optimized representation).
The hierarchical structure of the cascade classifier.
cvHaarDetectObjects first converts the image to grayscale, decides from the passed-in flags whether to apply Canny edge pruning (off by default), and then runs the matching. The matched windows are then collected and the noise filtered out: the number of neighbors in each group is counted, and a group is emitted as a result only if that count reaches the threshold (the min_neighbors parameter); otherwise it is discarded.
The matching loop: at each iteration the classifier window is enlarged by scale_factor (a parameter) while the image is shrunk by the same factor, and matching is performed, until the classifier window exceeds the image, at which point the results are returned. Matching is done by calling cvRunHaarClassifierCascade; all hits are stored in a CvSeq* (a dynamically growing element sequence) and handed back to cvHaarDetectObjects.
cvRunHaarClassifierCascade, as a whole, matches the given cascade against the given image position. Depending on the cascade type passed in (tree, stump, i.e. a degenerate one-split tree, or other), it takes a different matching path.
CV_IMPL CvSeq*
cvHaarDetectObjects( const CvArr* _img,                // input image
                     CvHaarClassifierCascade* cascade, // the classifier cascade
                     CvMemStorage* storage,
                     double scale_factor,              // window scaling ratio
                     int min_neighbors,                // groups with at least this many neighbors are kept
                     int flags, CvSize min_size )
{
    int split_stage = 2;
    CvMat stub, *img = (CvMat*)_img;   // CvMat matrix header; img aliases the input
    CvMat *temp = 0, *sum = 0, *tilted = 0, *sqsum = 0,
          *norm_img = 0, *sumcanny = 0, *img_small = 0;
    CvSeq* seq = 0;
    CvSeq* seq2 = 0;                   // CvSeq: dynamically growing element sequence
    CvSeq* idx_seq = 0;
    CvSeq* result_seq = 0;
    CvMemStorage* temp_storage = 0;
    CvAvgComp* comps = 0;
    int i;

#ifdef _OPENMP
    CvSeq* seq_thread[CV_MAX_THREADS] = {0};
    int max_threads = 0;
#endif

    CV_FUNCNAME( "cvHaarDetectObjects" );

    __BEGIN__;

    double factor;
    int npass = 2, coi;
    int do_canny_pruning = flags & CV_HAAR_DO_CANNY_PRUNING;  // if set, prune with Canny edges

    // validate the arguments
    if( !CV_IS_HAAR_CLASSIFIER(cascade) )
        CV_ERROR( !cascade ? CV_StsNullPtr : CV_StsBadArg, "Invalid classifier cascade" );
    if( !storage )
        CV_ERROR( CV_StsNullPtr, "Null storage pointer" );
    CV_CALL( img = cvGetMat( img, &stub, &coi ));
    if( coi )
        CV_ERROR( CV_BadCOI, "COI is not supported" );
    if( CV_MAT_DEPTH(img->type) != CV_8U )
        CV_ERROR( CV_StsUnsupportedFormat, "Only 8-bit images are supported" );

    CV_CALL( temp = cvCreateMat( img->rows, img->cols, CV_8UC1 ));
    CV_CALL( sum = cvCreateMat( img->rows + 1, img->cols + 1, CV_32SC1 ));   // integral image: (rows+1)x(cols+1), 32-bit signed, 1 channel
    CV_CALL( sqsum = cvCreateMat( img->rows + 1, img->cols + 1, CV_64FC1 )); // squared integral image, 64-bit float, 1 channel
    CV_CALL( temp_storage = cvCreateChildMemStorage( storage ));

#ifdef _OPENMP
    max_threads = cvGetNumThreads();
    for( i = 0; i < max_threads; i++ )
    {
        CvMemStorage* temp_storage_thread;
        CV_CALL( temp_storage_thread = cvCreateMemStorage(0));  // CV_CALL runs the call and reports failure
        CV_CALL( seq_thread[i] = cvCreateSeq( 0, sizeof(CvSeq),
            sizeof(CvRect), temp_storage_thread ));             // per-thread result sequence
    }
#endif

    if( !cascade->hid_cascade )
        CV_CALL( icvCreateHidHaarClassifierCascade(cascade) );

    if( cascade->hid_cascade->has_tilted_features )
        tilted = cvCreateMat( img->rows + 1, img->cols + 1, CV_32SC1 );  // tilted integral image

    seq  = cvCreateSeq( 0, sizeof(CvSeq), sizeof(CvRect), temp_storage );      // raw hit rectangles
    seq2 = cvCreateSeq( 0, sizeof(CvSeq), sizeof(CvAvgComp), temp_storage );   // rectangle + neighbor count
    result_seq = cvCreateSeq( 0, sizeof(CvSeq), sizeof(CvAvgComp), storage );  // final results

    if( min_neighbors == 0 )
        seq = result_seq;

    if( CV_MAT_CN(img->type) > 1 )
    {
        cvCvtColor( img, temp, CV_BGR2GRAY );  // convert img to grayscale
        img = temp;
    }

    if( flags & CV_HAAR_SCALE_IMAGE )  // scale the image instead of the classifier window
    {
        CvSize win_size0 = cascade->orig_window_size;  // original classifier window size
        int use_ipp = cascade->hid_cascade->ipp_stages != 0 &&
                      icvApplyHaarClassifier_32s32f_C1R_p != 0;  // IPP acceleration available?

        if( use_ipp )
            CV_CALL( norm_img = cvCreateMat( img->rows, img->cols, CV_32FC1 ));
        CV_CALL( img_small = cvCreateMat( img->rows + 1, img->cols + 1, CV_8UC1 ));  // buffer for the downscaled image

        for( factor = 1; ; factor *= scale_factor )  // scale up by scale_factor each iteration
        {
            int positive = 0;
            int x, y;
            CvSize win_size = { cvRound(win_size0.width*factor),
                                cvRound(win_size0.height*factor) };   // classifier window at this scale
            CvSize sz = { cvRound( img->cols/factor ), cvRound( img->rows/factor ) };  // image shrunk by factor
            CvSize sz1 = { sz.width - win_size0.width, sz.height - win_size0.height }; // valid search area
            CvRect rect1 = { icv_object_win_border, icv_object_win_border,
                             win_size0.width - icv_object_win_border*2,   // icv_object_win_border == 1
                             win_size0.height - icv_object_win_border*2 };
            CvMat img1, sum1, sqsum1, norm1, tilted1, mask1;
            CvMat* _tilted = 0;

            if( sz1.width <= 0 || sz1.height <= 0 )  // image smaller than the classifier window -> stop
                break;
            if( win_size.width < min_size.width || win_size.height < min_size.height )  // window still below min_size -> next scale
                continue;

            img1   = cvMat( sz.height, sz.width, CV_8UC1, img_small->data.ptr );     // shrunk image, 1 channel
            sum1   = cvMat( sz.height+1, sz.width+1, CV_32SC1, sum->data.ptr );      // its integral image
            sqsum1 = cvMat( sz.height+1, sz.width+1, CV_64FC1, sqsum->data.ptr );    // its squared integral image
            if( tilted )
            {
                tilted1 = cvMat( sz.height+1, sz.width+1, CV_32SC1, tilted->data.ptr );  // tilted integral image
                _tilted = &tilted1;
            }
            norm1 = cvMat( sz1.height, sz1.width, CV_32FC1, norm_img ? norm_img->data.ptr : 0 );
            mask1 = cvMat( sz1.height, sz1.width, CV_8UC1, temp->data.ptr );

            cvResize( img, &img1, CV_INTER_LINEAR );       // bilinear downscale into img1
            cvIntegral( &img1, &sum1, &sqsum1, _tilted );  // compute the integral images

            if( use_ipp && icvRectStdDev_32s32f_C1R_p( sum1.data.i, sum1.step,
                sqsum1.data.db, sqsum1.step, norm1.data.fl, norm1.step, sz1, rect1 ) < 0 )
                use_ipp = 0;

            if( use_ipp )  // IPP path (Intel performance primitives)
            {
                positive = mask1.cols*mask1.rows;
                cvSet( &mask1, cvScalarAll(255) );
                for( i = 0; i < cascade->count; i++ )
                {
                    if( icvApplyHaarClassifier_32s32f_C1R_p(sum1.data.i, sum1.step,
                        norm1.data.fl, norm1.step, mask1.data.ptr, mask1.step,
                        sz1, &positive, cascade->hid_cascade->stage_classifier[i].threshold,
                        cascade->hid_cascade->ipp_stages[i]) < 0 )
                    {
                        use_ipp = 0;
                        break;
                    }
                    if( positive <= 0 )
                        break;
                }
            }

            if( !use_ipp )  // plain C path
            {
                cvSetImagesForHaarClassifierCascade( cascade, &sum1, &sqsum1, 0, 1. );
                for( y = 0, positive = 0; y < sz1.height; y++ )
                    for( x = 0; x < sz1.width; x++ )
                    {
                        mask1.data.ptr[mask1.step*y + x] =
                            cvRunHaarClassifierCascade( cascade, cvPoint(x,y), 0 ) > 0;  // run the cascade at (x,y)
                        positive += mask1.data.ptr[mask1.step*y + x];
                    }
            }

            if( positive > 0 )
            {
                for( y = 0; y < sz1.height; y++ )
                    for( x = 0; x < sz1.width; x++ )
                        if( mask1.data.ptr[mask1.step*y + x] != 0 )
                        {
                            CvRect obj_rect = { cvRound(x*factor), cvRound(y*factor),  // CvRect is {x, y, w, h}
                                                win_size.width, win_size.height };
                            cvSeqPush( seq, &obj_rect );  // store the hit in seq
                        }
            }
        }
    }
    else  // scale the classifier window instead of the image
    {
        cvIntegral( img, sum, sqsum, tilted );

        if( do_canny_pruning )
        {
            sumcanny = cvCreateMat( img->rows + 1, img->cols + 1, CV_32SC1 );  // integral of the Canny edge map
            cvCanny( img, temp, 0, 50, 3 );
            cvIntegral( temp, sumcanny );
        }

        if( (unsigned)split_stage >= (unsigned)cascade->count ||
            cascade->hid_cascade->is_tree )
        {
            split_stage = cascade->count;
            npass = 1;
        }

        for( factor = 1; factor*cascade->orig_window_size.width < img->cols - 10 &&
                         factor*cascade->orig_window_size.height < img->rows - 10;
             factor *= scale_factor )
        {
            const double ystep = MAX( 2, factor );
            CvSize win_size = { cvRound( cascade->orig_window_size.width * factor ),
                                cvRound( cascade->orig_window_size.height * factor )};
            CvRect equ_rect = { 0, 0, 0, 0 };
            int *p0 = 0, *p1 = 0, *p2 = 0, *p3 = 0;
            int *pq0 = 0, *pq1 = 0, *pq2 = 0, *pq3 = 0;
            int pass, stage_offset = 0;
            int stop_height = cvRound((img->rows - win_size.height) / ystep);

            if( win_size.width < min_size.width || win_size.height < min_size.height )  // window below min_size -> next scale
                continue;

            cvSetImagesForHaarClassifierCascade( cascade, sum, sqsum, tilted, factor );
            cvZero( temp );  // clear the mask

            if( do_canny_pruning )  // precompute the four-corner pointers for Canny pruning
            {
                equ_rect.x = cvRound(win_size.width*0.15);
                equ_rect.y = cvRound(win_size.height*0.15);
                equ_rect.width = cvRound(win_size.width*0.7);
                equ_rect.height = cvRound(win_size.height*0.7);

                p0 = (int*)(sumcanny->data.ptr + equ_rect.y*sumcanny->step) + equ_rect.x;
                p1 = (int*)(sumcanny->data.ptr + equ_rect.y*sumcanny->step)
                            + equ_rect.x + equ_rect.width;
                p2 = (int*)(sumcanny->data.ptr + (equ_rect.y + equ_rect.height)*sumcanny->step) + equ_rect.x;
                p3 = (int*)(sumcanny->data.ptr + (equ_rect.y + equ_rect.height)*sumcanny->step)
                            + equ_rect.x + equ_rect.width;

                pq0 = (int*)(sum->data.ptr + equ_rect.y*sum->step) + equ_rect.x;
                pq1 = (int*)(sum->data.ptr + equ_rect.y*sum->step)
                            + equ_rect.x + equ_rect.width;
                pq2 = (int*)(sum->data.ptr + (equ_rect.y + equ_rect.height)*sum->step) + equ_rect.x;
                pq3 = (int*)(sum->data.ptr + (equ_rect.y + equ_rect.height)*sum->step)
                            + equ_rect.x + equ_rect.width;
            }

            cascade->hid_cascade->count = split_stage;  // pass 0 runs only the first split_stage stages

            for( pass = 0; pass < npass; pass++ )
            {
#ifdef _OPENMP
    #pragma omp parallel for num_threads(max_threads), schedule(dynamic)
#endif
                for( int _iy = 0; _iy < stop_height; _iy++ )
                {
                    int iy = cvRound(_iy*ystep);
                    int _ix, _xstep = 1;
                    int stop_width = cvRound((img->cols - win_size.width) / ystep);
                    uchar* mask_row = temp->data.ptr + temp->step * iy;

                    for( _ix = 0; _ix < stop_width; _ix += _xstep )
                    {
                        int ix = cvRound(_ix*ystep); // it really should be ystep
                        if( pass == 0 )  // first pass
                        {
                            int result;
                            _xstep = 2;
                            if( do_canny_pruning )  // skip windows with too little edge energy
                            {
                                int offset;
                                int s, sq;
                                offset = iy*(sum->step/sizeof(p0[0])) + ix;
                                s = p0[offset] - p1[offset] - p2[offset] + p3[offset];
                                sq = pq0[offset] - pq1[offset] - pq2[offset] + pq3[offset];
                                if( s < 100 || sq < 20 )
                                    continue;
                            }
                            result = cvRunHaarClassifierCascade( cascade, cvPoint(ix,iy), 0 );  // run the first stages here
                            if( result > 0 )
                            {
                                if( pass < npass - 1 )
                                    mask_row[ix] = 1;
                                else
                                {
                                    CvRect rect = cvRect(ix,iy,win_size.width,win_size.height);
#ifndef _OPENMP  // single-threaded: push directly into seq
                                    cvSeqPush( seq, &rect );
#else            // OpenMP: push into the per-thread sequence
                                    cvSeqPush( seq_thread[omp_get_thread_num()], &rect );
#endif
                                }
                            }
                            if( result < 0 )
                                _xstep = 1;
                        }
                        else if( mask_row[ix] )  // later passes: only where pass 0 succeeded
                        {
                            int result = cvRunHaarClassifierCascade( cascade, cvPoint(ix,iy),
                                                                     stage_offset );  // run the remaining stages
                            if( result > 0 )
                            {
                                if( pass == npass - 1 )  // final pass
                                {
                                    CvRect rect = cvRect(ix,iy,win_size.width,win_size.height);
#ifndef _OPENMP
                                    cvSeqPush( seq, &rect );
#else
                                    cvSeqPush( seq_thread[omp_get_thread_num()], &rect );
#endif
                                }
                            }
                            else
                                mask_row[ix] = 0;
                        }
                    }
                }
                stage_offset = cascade->hid_cascade->count;
                cascade->hid_cascade->count = cascade->count;  // run all stages in the next pass
            }
        }
    }

#ifdef _OPENMP
    // gather the per-thread results
    for( i = 0; i < max_threads; i++ )
    {
        CvSeq* s = seq_thread[i];
        int j, total = s->total;
        CvSeqBlock* b = s->first;
        for( j = 0; j < total; j += b->count, b = b->next )
            cvSeqPushMulti( seq, b->data, b->count );  // append into seq
    }
#endif

    if( min_neighbors != 0 )
    {
        // group retrieved rectangles in order to filter out noise
        int ncomp = cvSeqPartition( seq, 0, &idx_seq, is_equal, 0 );
        CV_CALL( comps = (CvAvgComp*)cvAlloc( (ncomp+1)*sizeof(comps[0])));
        memset( comps, 0, (ncomp+1)*sizeof(comps[0]));

        // count the number of neighbors in each group
        for( i = 0; i < seq->total; i++ )
        {
            CvRect r1 = *(CvRect*)cvGetSeqElem( seq, i );
            int idx = *(int*)cvGetSeqElem( idx_seq, i );
            assert( (unsigned)idx < (unsigned)ncomp );
            comps[idx].neighbors++;
            comps[idx].rect.x += r1.x;
            comps[idx].rect.y += r1.y;
            comps[idx].rect.width += r1.width;
            comps[idx].rect.height += r1.height;
        }

        // calculate the average bounding box of each group
        for( i = 0; i < ncomp; i++ )
        {
            int n = comps[i].neighbors;
            if( n >= min_neighbors )
            {
                CvAvgComp comp;
                comp.rect.x = (comps[i].rect.x*2 + n)/(2*n);  // rounded integer average
                comp.rect.y = (comps[i].rect.y*2 + n)/(2*n);
                comp.rect.width = (comps[i].rect.width*2 + n)/(2*n);
                comp.rect.height = (comps[i].rect.height*2 + n)/(2*n);
                comp.neighbors = comps[i].neighbors;
                cvSeqPush( seq2, &comp );  // result goes into seq2
            }
        }

        // filter out small object rectangles inside large object rectangles
        for( i = 0; i < seq2->total; i++ )
        {
            CvAvgComp r1 = *(CvAvgComp*)cvGetSeqElem( seq2, i );
            int j, flag = 1;
            for( j = 0; j < seq2->total; j++ )
            {
                CvAvgComp r2 = *(CvAvgComp*)cvGetSeqElem( seq2, j );
                int distance = cvRound( r2.rect.width * 0.2 );
                if( i != j &&
                    r1.rect.x >= r2.rect.x - distance &&
                    r1.rect.y >= r2.rect.y - distance &&
                    r1.rect.x + r1.rect.width <= r2.rect.x + r2.rect.width + distance &&
                    r1.rect.y + r1.rect.height <= r2.rect.y + r2.rect.height + distance &&
                    (r2.neighbors > MAX( 3, r1.neighbors ) || r1.neighbors < 3) )
                {
                    flag = 0;
                    break;
                }
            }
            if( flag )
                cvSeqPush( result_seq, &r1 );  // keep r1 as a final result
        }
    }

    __END__;
(To be continued.)