
core-image

Adaptive Threshold CIKernel/CIFilter iOS

I have searched all over for a kernel that performs adaptive thresholding on iOS. Unfortunately I do not understand the kernel language or the logic behind it. Below is a routine I found that performs thresholding (https://gist.github.com/xhruso00/a3f8a9c8ae7e33b8b23d): static NSString * const kKernelSource = @"kernel vec4 thresholdKernel(sampler image)\n" "{\n" " float inputThreshold = 0.05;\n" " float pass = 1.0;\n" " float fail = 0.0;\n" " const vec4 vec_Y = vec4( 0.299, 0.587, 0.114, 0.0 );\n" " vec4 src = unpremultiply( sample(image, samplerCoord(image)) );\n" " float Y =
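The gist excerpt cuts off mid-line. A plausible completion of that kernel, sketched here in the old Core Image Kernel Language, computes Rec. 601 luminance with the `vec_Y` weights already shown and compares it against the fixed threshold. Note this is a *global* threshold; an adaptive version would instead compare `Y` against a local mean, e.g. by passing a blurred copy of the image as a second sampler. The body below the `float Y =` line is my reconstruction, not confirmed against the gist.

```objectivec
// Sketch: plausible completion of the truncated threshold kernel.
// compare(x, fail, pass) returns fail when x < 0, pass otherwise.
static NSString * const kKernelSource =
    @"kernel vec4 thresholdKernel(sampler image)\n"
     "{\n"
     "    float inputThreshold = 0.05;\n"
     "    float pass = 1.0;\n"
     "    float fail = 0.0;\n"
     "    const vec4 vec_Y = vec4( 0.299, 0.587, 0.114, 0.0 );\n"
     "    vec4 src = unpremultiply( sample(image, samplerCoord(image)) );\n"
     "    float Y = dot( src, vec_Y );\n"                 // luminance
     "    src.rgb = vec3( compare( Y - inputThreshold, fail, pass ) );\n"
     "    return premultiply(src);\n"
     "}\n";
```

For true adaptive thresholding, the same structure works with a second `sampler blurred` argument and `compare(Y - sample(blurred, samplerCoord(blurred)).r, fail, pass)`.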

2021-11-28 08:38:35    Category: Q&A    ios   core-image   cifilter   adaptive-threshold   cikernel

Is there a way to create a CGPath matching outline of a SKSpriteNode?

Question: My goal is to create a CGPath that matches the outline of an SKSpriteNode. This would be useful for creating glows/outlines of SKSpriteNodes, as well as paths for physics. One thought I have had, but I have not really worked with CIImage, so I don't know whether there is a way to access/modify an image at the pixel level. Then maybe I could port something like this to Objective-C: http://www.sakri.net/blog/2009/05/28/detecting-edge-pixels-with-marching-squares-algorithm/ Also very open to other approaches that automate this process, as opposed to me creating shape paths by hand for every sprite I make for physics or outline/glow effects. Answer 1: What you are looking for is called a contour tracing algorithm. Moore neighbor tracing is popular and works on both images and tile maps. But be sure to look at the alternatives, as they may suit your purpose better. AFAIK marching squares and contour tracing are closely related, if not the same (category of) algorithm. Kobold Kit includes an implementation for tile maps (creating physics shapes from tiles). The main body of the algorithm is in the traceContours method of KKTilemapLayerContourTracer.m. It looks more complex than it actually is; on the other hand, it takes a while to understand because it is a "walking" algorithm, meaning decisions in the current step use the results of previous steps. KK

2021-11-27 07:31:30    Category: Tech Share    ios   core-image   sprite-kit

Converting CIImage Into NSImage

Question: I'm playing with the Core Image framework. As I understand it, if I have an image (NSImage), it first needs to be converted into a CIImage. I can do that. NSImage *img1 = [[NSImage alloc] initWithContentsOfFile:imagepath]; NSRect rect1; rect1.size.width = img1.size.width; rect1.size.height = img1.size.height; CGImageRef imageRef1 = [img1 CGImageForProposedRect:&rect1 context:[NSGraphicsContext currentContext] hints:nil]; CIImage *ciimage = [CIImage imageWithCGImage:imageRef1]; I have a function that applies a Core Image filter to a CIImage, which I want to test. I want to add the output image to the window as a subview, so I need an NSImage. How can I convert this Core Image back into an NSImage? Googling doesn't turn up good results. Thanks for your help. Answer 1: I haven't tested it, but I think this should do it: CIImage *ciImage = ...; NSCIImageRep *rep =
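The answer excerpt is cut off mid-statement. A plausible completion, based on the NSCIImageRep API it starts to use, wraps the CIImage in an image rep and attaches it to a fresh NSImage:

```objectivec
// Sketch: completing the truncated answer with NSCIImageRep.
CIImage *ciImage = /* ... output of the filter chain ... */;
NSCIImageRep *rep = [NSCIImageRep imageRepWithCIImage:ciImage];
NSImage *nsImage = [[NSImage alloc] initWithSize:rep.size];
[nsImage addRepresentation:rep];
// nsImage can now be assigned to an NSImageView or drawn directly.
```

The CIImage is rendered lazily when the NSImage is actually drawn, which is usually what you want for display.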

2021-11-27 00:15:37    Category: Tech Share    objective-c   macos   cocoa   core-image   nsimage

iOS: Core image and multi threaded apps

Question: I am trying to run some Core Image filters in the most efficient way possible, trying to avoid the memory warnings and crashes I get when rendering large images. I am looking at Apple's Core Image Programming Guide. Regarding multithreading it says: "each thread must create its own CIFilter objects. Otherwise, your app could behave unexpectedly." What does this mean? I am in fact attempting to run my filters on a background thread so I can run a HUD on the main thread (see below). Does this make sense in the context of Core Image? I gather that Core Image inherently uses GCD. //start HUD code here, on main thread // Get a concurrent queue from the system dispatch_queue_t concurrentQueue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0); dispatch_async(concurrentQueue, ^{ //Effect image using Core Image filter chain on a background thread dispatch_async(dispatch_get_main_queue(), ^{ //dismiss HUD and add fitered

2021-11-26 00:51:43    Category: Tech Share    ios   multithreading   grand-central-dispatch   core-image

Apply visual effect to images pixel by pixel in Swift

I have a university assignment to create visual effects and apply them to video frames captured through the device's camera. I can currently get the image and display it, but I can't change the pixel color values. I transform the sample buffer into the imageRef variable, and if I transform it into a UIImage everything is alright. But now I want to take that imageRef and change its color values pixel by pixel, in this example changing to negative colors (I have to do more complicated stuff, so I can't use CIFilters), but when I execute the commented part it crashes due to bad access. import UIKit import
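The "bad access" crash usually means the code read pixel memory it didn't own. A safer pattern, sketched below, is to draw the CGImage into a bitmap context backed by a buffer you allocate yourself, then mutate that buffer. The function name and the RGBA8888 layout are illustrative assumptions, not from the question:

```swift
import UIKit

// Sketch: invert an image's colors by drawing it into an owned
// RGBA8888 buffer, so pixel access is always within bounds.
func negativeImage(from imageRef: CGImage) -> UIImage? {
    let width = imageRef.width, height = imageRef.height
    let bytesPerRow = width * 4
    var pixels = [UInt8](repeating: 0, count: bytesPerRow * height)
    guard let context = CGContext(
        data: &pixels, width: width, height: height,
        bitsPerComponent: 8, bytesPerRow: bytesPerRow,
        space: CGColorSpaceCreateDeviceRGB(),
        bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
    else { return nil }
    context.draw(imageRef, in: CGRect(x: 0, y: 0, width: width, height: height))
    for i in stride(from: 0, to: pixels.count, by: 4) {
        pixels[i]     = 255 - pixels[i]       // R
        pixels[i + 1] = 255 - pixels[i + 1]   // G
        pixels[i + 2] = 255 - pixels[i + 2]   // B (alpha left untouched)
    }
    guard let output = context.makeImage() else { return nil }
    return UIImage(cgImage: output)
}
```

Any other per-pixel effect can replace the three subtraction lines; the surrounding buffer management stays the same.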

2021-11-25 06:03:57    Category: Q&A    ios   iphone   camera   swift   core-image

AVCaptureVideoPreviewLayer add overlays and capture photo in iOS

The initial idea was to start a camera stream via AVCaptureSession, find faces in the raw CMSampleBuffer, add some images as layers on the AVCaptureVideoPreviewLayer, and then take a screenshot. After completing that, I found out that UIGraphicsGetImageFromCurrentImageContext won't work with AVCaptureVideoPreviewLayer, so taking a screenshot would not solve my purpose here. So I used Metal and an MTKView instead to perform live rendering, and the results are good with the combination of Core Image filters and Metal. I already know how to detect faces and alter that part of the face
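The Metal route the question settles on can be sketched as a CIContext rendering the filtered frame straight into the MTKView's drawable texture. The class name and the `ciImage` property are illustrative; the view must have `framebufferOnly = false` so Core Image can write to its texture:

```swift
import MetalKit
import CoreImage

// Sketch: rendering a filtered CIImage into an MTKView drawable,
// the Metal alternative to snapshotting AVCaptureVideoPreviewLayer.
final class FilteredPreviewDelegate: NSObject, MTKViewDelegate {
    let device = MTLCreateSystemDefaultDevice()!
    lazy var commandQueue = device.makeCommandQueue()!
    lazy var ciContext = CIContext(mtlDevice: device)
    var ciImage: CIImage?   // latest filtered frame from the capture callback

    func draw(in view: MTKView) {
        guard let image = ciImage,
              let drawable = view.currentDrawable,
              let buffer = commandQueue.makeCommandBuffer() else { return }
        ciContext.render(image,
                         to: drawable.texture,
                         commandBuffer: buffer,
                         bounds: image.extent,
                         colorSpace: CGColorSpaceCreateDeviceRGB())
        buffer.present(drawable)
        buffer.commit()
    }

    func mtkView(_ view: MTKView, drawableSizeWillChange size: CGSize) {}
}
```

Because the drawable texture holds the final composited frame, "taking a screenshot" becomes reading back that texture rather than snapshotting a preview layer.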

2021-11-24 14:16:29    分类:问答    ios   swift   avfoundation   metal   core-image

iOS: Core image and multi threaded apps

I am trying to run some core image filters in the most efficient way possible, trying to avoid the memory warnings and crashes I get when rendering large images. I am looking at Apple's Core Image Programming Guide. Regarding multi-threading it says: "each thread must create its own CIFilter objects. Otherwise, your app could behave unexpectedly." What does this mean? I am in fact attempting to run my filters on a background thread, so I can run a HUD on the main thread (see below). Does this make sense in the context of Core Image? I gather that Core Image inherently uses GCD. /
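The guide's warning is about CIFilter being mutable and therefore not safe to share across threads, while CIContext and CIImage are immutable and can be shared. A minimal sketch of the pattern the question is reaching for (the `sharedContext` and `inputImage` variables are assumptions, and `CISepiaTone` is just a stand-in filter):

```objectivec
// Sketch: the CIFilter is created inside the block, so it never
// crosses a thread boundary; the CIContext is safe to share.
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    CIFilter *filter = [CIFilter filterWithName:@"CISepiaTone"];
    [filter setValue:inputImage forKey:kCIInputImageKey];
    [filter setValue:@0.8 forKey:kCIInputIntensityKey];
    CIImage *result = filter.outputImage;
    // Force the actual render off the main thread.
    CGImageRef cgImage = [sharedContext createCGImage:result
                                             fromRect:result.extent];
    dispatch_async(dispatch_get_main_queue(), ^{
        // dismiss the HUD and display [UIImage imageWithCGImage:cgImage]
        CGImageRelease(cgImage);
    });
});
```

Reusing one CIContext also matters for the memory problems mentioned: creating a context per render is expensive and can contribute to the warnings when processing large images.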

2021-11-22 01:26:34    Category: Q&A    ios   multithreading   grand-central-dispatch   core-image

Converting CIImage Into NSImage

I'm playing with the Core Image framework. As I understand it, if I have an image (NSImage), it needs to be converted into a CIImage first. I can do that. NSImage *img1 = [[NSImage alloc] initWithContentsOfFile:imagepath]; NSRect rect1; rect1.size.width = img1.size.width; rect1.size.height = img1.size.height; CGImageRef imageRef1 = [img1 CGImageForProposedRect:&rect1 context:[NSGraphicsContext currentContext] hints:nil]; CIImage *ciimage = [CIImage imageWithCGImage:imageRef1]; I have a function that applies a Core Image filter to a core image (CIImage), which I want to test. And I want to add output

2021-11-21 12:39:50    Category: Q&A    objective-c   macos   cocoa   core-image   nsimage

Remove background from Image & take only Image part for save in iOS

This is what I need to achieve: take an image from the camera or gallery, remove the background from the image & save it; the background can be anything black or white; I also need to remove the shadow along with the background. Result example: Original Image / Result Image. This is what I have tried: CGFloat colorMasking[6]={222,255,222,255,222,255}; CGImageRef imageRef = CGImageCreateWithMaskingColors([IMG CGImage], colorMasking); UIImage *resultThumbImage = [UIImage imageWithCGImage:imageRef scale:ThumbImage.scale orientation:IMG.imageOrientation]; It only works on a white background, and it's not very effective. I need to
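CGImageCreateWithMaskingColors only removes exact channel ranges, which is why shadows survive. A more flexible Core Image approach is a CIColorCube that keys out pixels by brightness; the sketch below makes near-white pixels transparent. The 0.87 cutoff and cube size are illustrative, and shadows still need a looser cutoff or a real segmentation pass (e.g. OpenCV GrabCut, which the question's tags hint at):

```objectivec
// Sketch: brightness-keyed CIColorCube that zeroes out near-white pixels.
const unsigned int size = 64;
size_t cubeBytes = size * size * size * 4 * sizeof(float);
float *cube = malloc(cubeBytes);
size_t offset = 0;
for (unsigned int b = 0; b < size; b++) {
    for (unsigned int g = 0; g < size; g++) {
        for (unsigned int r = 0; r < size; r++) {
            float rf = r / (float)(size - 1);
            float gf = g / (float)(size - 1);
            float bf = b / (float)(size - 1);
            // Key out anything brighter than ~87% in every channel.
            float alpha = (rf > 0.87f && gf > 0.87f && bf > 0.87f) ? 0.0f : 1.0f;
            cube[offset++] = rf * alpha;   // premultiplied RGB
            cube[offset++] = gf * alpha;
            cube[offset++] = bf * alpha;
            cube[offset++] = alpha;
        }
    }
}
CIFilter *colorCube = [CIFilter filterWithName:@"CIColorCube"];
[colorCube setValue:@(size) forKey:@"inputCubeDimension"];
[colorCube setValue:[NSData dataWithBytesNoCopy:cube length:cubeBytes freeWhenDone:YES]
             forKey:@"inputCubeData"];
[colorCube setValue:[CIImage imageWithCGImage:IMG.CGImage] forKey:kCIInputImageKey];
CIImage *keyed = colorCube.outputImage;
```

A black background would use the mirrored test (`< 0.13` in every channel); the cube itself is the same mechanism Apple's chroma-key sample code uses for green screens.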

2021-11-21 07:13:55    Category: Q&A    ios   opencv   uiimage   core-image   chromakey

Is there a way to create a CGPath matching outline of a SKSpriteNode?

My goal is to create a CGPath that matches the outline of an SKSpriteNode. This would be useful in creating glows/outlines of SKSpriteNodes as well as a path for physics. One thought I have had is this, but I have not really worked much at all with CIImage, so I don't know if there is a way to access/modify images on a pixel level. Then maybe I would be able to port something like this to Objective-C: http://www.sakri.net/blog/2009/05/28/detecting-edge-pixels-with-marching-squares-algorithm/ Also very open to other approaches that make this process automated, as opposed to me creating shape paths for
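The contour-tracing idea from the linked article can be sketched independently of CIImage: read the texture's alpha channel into a boolean grid (e.g. via the CGContext pixel-buffer technique), then walk the boundary with Moore-neighbor tracing. Everything below is an illustrative sketch, not the article's code; the stopping rule is simplified (Jacob's criterion is more robust for shapes that touch their own start pixel):

```swift
// Sketch: Moore-neighbor contour tracing over an alpha mask.
// solid[y][x] == true means the pixel is opaque.
func traceContour(_ solid: [[Bool]]) -> [(x: Int, y: Int)] {
    let h = solid.count, w = solid.first?.count ?? 0
    // Clockwise 8-neighborhood (y grows downward), starting at west.
    let dirs = [(-1,0), (-1,-1), (0,-1), (1,-1), (1,0), (1,1), (0,1), (-1,1)]
    // First solid pixel in scan order is a guaranteed boundary pixel.
    guard let start: (x: Int, y: Int) = {
        for y in 0..<h { for x in 0..<w where solid[y][x] { return (x, y) } }
        return nil
    }() else { return [] }
    var contour = [start]
    var current = start
    var prevDir = 4   // pretend we arrived moving east: backtrack is west
    repeat {
        var found = false
        // Resume the clockwise scan just past the backtrack pixel.
        for i in 0..<8 {
            let d = (prevDir + 5 + i) % 8
            let nx = current.x + dirs[d].0, ny = current.y + dirs[d].1
            if nx >= 0, nx < w, ny >= 0, ny < h, solid[ny][nx] {
                current = (nx, ny)
                prevDir = d
                found = true
                break
            }
        }
        if !found { break }   // isolated single pixel
        contour.append(current)
    } while current != start
    return contour
}
```

The resulting point list maps directly onto a CGMutablePath (move to the first point, add lines to the rest, close), and a simplification pass keeps the vertex count sane for SpriteKit physics bodies.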

2021-11-20 04:55:51    Category: Q&A    ios   core-image   sprite-kit