
OpenCV 2.4.2 calcOpticalFlowPyrLK doesn't find any points

I am using OpenCV 2.4.2 on Linux, writing in C++. I want to track simple objects (e.g. a black rectangle on a white background). First I use goodFeaturesToTrack and then calcOpticalFlowPyrLK to find those points in a second image. The problem is that calcOpticalFlowPyrLK doesn't find them.

I have found code that does it in C, which does not work in my case: http://dasl.mem.drexel.edu/~noahKuntz/openCVTut9.html

I have converted it into C++:

int main(int, char**) {
    Mat imgAgray = imread("ImageA.png", CV_LOAD_IMAGE_GRAYSCALE);
    Mat imgBgray = imread("ImageB.png", CV_LOAD_IMAGE_GRAYSCALE);
    Mat imgC = imread("ImageC.png", CV_LOAD_IMAGE_UNCHANGED);

    // drawPixel is a small helper of mine that marks a point with a colored dot
    Scalar blue(255, 0, 0), green(0, 255, 0), red(0, 0, 255);

    vector<Point2f> cornersA;

    goodFeaturesToTrack(imgAgray, cornersA, 30, 0.01, 30);

    for (unsigned int i = 0; i < cornersA.size(); i++) {
        drawPixel(cornersA[i], &imgC, 2, blue);
    }

    // I have no idea what this does
//    cornerSubPix(imgAgray, cornersA, Size(15, 15), Size(-1, -1),
//            TermCriteria(TermCriteria::COUNT + TermCriteria::EPS, 20, 0.03));

    vector<Point2f> cornersB;
    vector<uchar> status;
    vector<float> error;

    // winsize has to be 11 or 13, otherwise nothing is found
    int winsize = 11;
    int maxlvl = 5;

    calcOpticalFlowPyrLK(imgAgray, imgBgray, cornersA, cornersB, status, error,
            Size(winsize, winsize), maxlvl);

    for (unsigned int i = 0; i < cornersB.size(); i++) {
        if (status[i] == 0 || error[i] > 0) {
            drawPixel(cornersB[i], &imgC, 2, red);
            continue;
        }
        drawPixel(cornersB[i], &imgC, 2, green);
        line(imgC, cornersA[i], cornersB[i], Scalar(255, 0, 0));
    }

    namedWindow("window", 1);
    moveWindow("window", 50, 50);
    imshow("window", imgC);

    waitKey(0);

    return 0;
}

ImageA: http://oi50.tinypic.com/14kv05v.jpg

ImageB: http://oi46.tinypic.com/4l3xom.jpg

ImageC: http://oi47.tinypic.com/35n3uox.jpg

I have found out that it works only for winsize = 11. I also tried it on a moving rectangle to check how far the tracked points end up from the originals. It hardly ever detects all four corners.

int main(int, char**) {
    std::cout << "Compiled at " << __TIME__ << std::endl;

    Scalar white = Scalar(255, 255, 255);
    Scalar black = Scalar(0, 0, 0);
    Scalar red = Scalar(0, 0, 255);
    Rect rect = Rect(50, 100, 100, 150);

    Mat org = Mat(Size(640, 480), CV_8UC1, white);
    rectangle(org, rect, black, -1, 0, 0);

    vector<Point2f> features;
    goodFeaturesToTrack(org, features, 30, 0.01, 30);
    std::cout << "POINTS FOUND:" << std::endl;
    for (unsigned int i = 0; i < features.size(); i++) {
        std::cout << "Point found: " << features[i].x;
        std::cout << " " << features[i].y << std::endl;
    }

    bool goRight = true;

    while (1) {

        if (goRight) {
            rect.x += 30;
            rect.y += 30;
            if (rect.x >= 250) {
                goRight = false;
            }
        } else {
            rect.x -= 30;
            rect.y -= 30;
            if (rect.x <= 50) {
                goRight = true;
            }
        }

        Mat frame = Mat(Size(640, 480), CV_8UC1, white);
        rectangle(frame, rect, black, -1, 0, 0);

        vector<Point2f> found;
        vector<uchar> status;
        vector<float> error;
        calcOpticalFlowPyrLK(org, frame, features, found, status, error,
                    Size(11, 11), 5);

        Mat display;
        cvtColor(frame, display, CV_GRAY2BGR);

        for (unsigned int i = 0; i < found.size(); i++) {
            if (status[i] == 0 || error[i] > 0) {
                continue;
            } else {
                line(display, features[i], found[i], red);
            }
        }

        namedWindow("window", 1);
        moveWindow("window", 50, 50);
        imshow("window", display);

        if (waitKey(300) > 0) {
            break;
        }
    }

    return 0;
}

The OpenCV implementation of Lucas-Kanade seems unable to track a rectangle in a binary image. Am I doing something wrong, or does this function just not work?

Answers

The Lucas-Kanade method estimates the motion of a region by using the gradients in that region; it is in essence a gradient descent method. So if you don't have gradients in both the x AND y directions, the method will fail. The second important note is that the Lucas-Kanade energy

E = sum_{winsize} (Ix * u + Iy * v + It)²

is a first-order Taylor approximation of the intensity constancy constraint

I(x,y,t) = I(x+u,y+v,t+1)

so a restriction of the method without levels (image pyramids) is that the image needs to be approximately a linear function. In practice this means only small motions can be estimated, depending on the winsize you choose. That is why you use the levels, which make the remaining motion small enough for the linearization to hold. A level of 5 is a bit too high; 3 should be enough. The top-level image in your case has a size of 640x480 / 2^5 = 20 x 15.

Finally, the problem in your code is this line:

 if (status[i]  == 0 || error[i] > 0) {

the error you get back from the Lucas-Kanade method is the resulting SSD, that is:

error = sum_{winSize} (I(x,y,0) - I(x+u,y+v,1))² / (winSize * winSize)

It is very unlikely that this error is exactly 0, so your condition ends up skipping all features. I have had good experience simply ignoring the error, which is just a confidence measure. There are very good alternative confidence measures, such as the forward/backward confidence. You could also experiment with ignoring the status flag if too many features are discarded.

KLT does point tracking by finding a transformation between two sets of points with respect to a certain window. The window size is the area over which each point is searched in order to match it in the other frame.

It is another gradient-based algorithm that finds good features to track.

Normally KLT uses a pyramidal approach in order to maintain tracking even with big movements: the search with the window size you specified is run at up to "maxLevel" pyramid levels.

I have never tried KLT on binary images. The problem might be that the KLT implementation begins the search in a wrong direction and then simply loses the points. When you change the window size, the search behavior changes as well. In your picture you have at most 4 interest points, each covering only 1 pixel.

These are parameters you're interested in :

winSize – size of the search window at each pyramid level
maxLevel – 0-based maximal pyramid level number; if 0, pyramids are not used (single level), if 1, two levels are used, etc.
criteria – specifies the termination criteria of the iterative search algorithm (stop after criteria.maxCount iterations or when the search window moves by less than criteria.epsilon)

Suggestion :

  • Did you try with natural pictures (two photos, for instance)? You would have many more features to track; 4 or fewer is quite hard to keep. I would try that first.

