
Limited framerate on the picamera v2

Question

A question about framerates on the picamera v2: according to the picamera documentation, the following framerates are feasible for this hardware:

#   Resolution   Aspect Ratio   Framerates   Video   Image   FoV       Binning
1   1920x1080    16:9           0.1-30fps    x       -       Partial   None
2   3280x2464    4:3            0.1-15fps    x       x       Full      None
3   3280x2464    4:3            0.1-15fps    x       x       Full      None
4   1640x1232    4:3            0.1-40fps    x       -       Full      2x2
5   1640x922     16:9           0.1-40fps    x       -       Full      2x2
6   1280x720     16:9           40-90fps     x       -       Partial   2x2
7   640x480      4:3            40-90fps     x       -       Partial   2x2
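
Each row of this table corresponds to one of the camera's sensor modes, and newer picamera releases let you request a mode explicitly through the sensor_mode constructor argument. Below is only a minimal sketch of pinning mode 6 (the 1280x720 / 40-90fps row) for reference, not part of my actual script:

import picamera

# Sketch only: pin the camera to sensor mode 6 (1280x720, 40-90fps in the
# table above) rather than letting the firmware pick a mode from the
# requested resolution. Assumes a picamera version that accepts sensor_mode.
with picamera.PiCamera(sensor_mode=6, resolution=(1280, 720), framerate=60) as camera:
    print(camera.resolution, camera.framerate)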

However, when gathering images with the capture_sequence method (which the documentation describes as the fastest method), I do not get anywhere near these numbers.

At 1280x720 it tops out at about 25 fps, and at 640x480 it gets close to 60 fps.

The computations I am performing are not the issue: commenting them out makes no difference (they are fast enough not to be the cause of the problem). If anyone can spot a flaw in what I am trying to do and knows how to raise the framerate, I would be grateful. My code is below:

import io
import time
import picamera
#import multiprocessing
from multiprocessing.pool import ThreadPool
#import threading
import cv2
#from PIL import Image
from referenceimage import ReferenceImage
from detectobject_stream import detectobject_stream
from collections import deque
from datawriter import DataWriter


backgroundimage=ReferenceImage()
threadn = cv2.getNumberOfCPUs()
pool = ThreadPool(processes = threadn)
pending = deque()
Numberofimages=500

starttime=time.time()

#datawrite=DataWriter()
#datawrite.start()

def outputs():
    # Generator consumed by capture_sequence: it yields the same in-memory
    # stream for every frame, then hands each captured frame to the thread pool.
    stream = io.BytesIO()
    Start=True
    global backgroundimage
    for i in range(Numberofimages):
        yield stream

        #print time.time()-starttime
        #start = time.time()
        # Drain any detection tasks that have already finished.
        while len(pending) > 0 and pending[0].ready():
            timestamps = pending.popleft().get()
            #print timestamps

        # Only submit a new task when a worker thread is free.
        if len(pending) < threadn:
            stream.seek(0)
            task = pool.apply_async(detectobject_stream, (stream.getvalue(), backgroundimage, Start, 0))
            pending.append(task)

        Start=False

        stoptime = time.time()

        # Elapsed time since capture_sequence started (the global `start`
        # set just before the capture call below).
        print stoptime-start

        stream.seek(0)
        stream.truncate()
        #print i

with picamera.PiCamera() as camera:
    #camera.resolution = (640, 480)
    camera.resolution = (1280, 720)
    camera.framerate = 60
    camera.start_preview()

    time.sleep(2)
    start = time.time()
    camera.capture_sequence(outputs(),format='bgr',use_video_port=True)
    finish = time.time()
    print('Captured images at %.2ffps' % (Numberofimages / (finish - start)))
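
To rule out my processing pipeline entirely, a stripped-down benchmark like the one below could be timed against the script above. This is only a sketch (the frame count and the choice of the 'yuv' format are assumptions, not something I have measured), but it isolates raw capture_sequence throughput from the cost of the bgr conversion and the thread pool:

import io
import time
import picamera

FRAMES = 300  # arbitrary sample size for the timing run

def null_outputs(stream):
    # Reuse one in-memory stream and discard every frame immediately,
    # so the loop body costs as little as possible.
    for _ in range(FRAMES):
        yield stream
        stream.seek(0)
        stream.truncate()

with picamera.PiCamera() as camera:
    camera.resolution = (1280, 720)
    camera.framerate = 60
    time.sleep(2)  # let exposure/gain settle

    stream = io.BytesIO()
    start = time.time()
    # 'yuv' skips the extra GPU colour-space conversion that 'bgr' needs;
    # rerunning with format='bgr' shows how much that conversion costs.
    camera.capture_sequence(null_outputs(stream), format='yuv', use_video_port=True)
    elapsed = time.time() - start
    print('Captured %d frames at %.2f fps' % (FRAMES, FRAMES / elapsed))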

Thanks in advance.
