Pixel color matching estimate

For image scanning purposes, I'd like a pixel (which I can get from a UIImage) to match, to within a certain percentage, a pre-set color.

Say pink. When I scan the image for pink pixels, I want a function to return a percentage of how closely the pixel's RGB value matches my pre-set RGB value. This way I'd like all (well, most) pink pixels to become 'visible' to me, not just exact matches.
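For illustration, here is a rough sketch of what I have in mind; the function name colorMatchPercentage and the 0–255 component range are assumptions for the example, not an existing API:

#include <math.h>

float colorMatchPercentage(float red1, float green1, float blue1,
                           float red2, float green2, float blue2) {
    // Euclidean distance between the two RGB vectors (0 means identical colors).
    float distance = sqrtf(powf(red1 - red2, 2) + powf(green1 - green2, 2) + powf(blue1 - blue2, 2));
    // Largest possible distance for 0-255 components (black vs. white), about 441.7.
    float maxDistance = sqrtf(3.0f * 255.0f * 255.0f);
    return (1.0f - distance / maxDistance) * 100.0f;   // 100 = exact match
}

A pixel would then count as "pink enough" whenever the returned percentage exceeds some chosen cutoff.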

Is anyone familiar with such an approach? How would you do something like this?

Thanks in advance.

UPDATE: thank you all for your answers so far. I accepted the answer from Damien Pollet because it helped me further, and I came to the conclusion that calculating the vector difference between two RGB colors does the job perfectly for me (at this moment). It might need some tweaking over time, but for now I use the following (in Objective-C):

float difference = sqrtf(powf(red1 - red2, 2) + powf(green1 - green2, 2) + powf(blue1 - blue2, 2)); // Euclidean distance between the two RGB vectors

If this difference is below 85, I accept the color as my target color. Since my algorithm needs no precision, I'm ok with this solution :)
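For context, a sketch like the following could apply that check to every pixel of a raw RGBA8888 buffer obtained from the UIImage beforehand; the function name, parameter names, and buffer layout here are illustrative assumptions:

#import <Foundation/Foundation.h>
#include <math.h>

NSUInteger countMatchingPixels(const unsigned char *pixels, NSUInteger pixelCount,
                               float targetRed, float targetGreen, float targetBlue) {
    NSUInteger matches = 0;
    for (NSUInteger i = 0; i < pixelCount; i++) {
        // Each pixel occupies 4 bytes in the buffer: R, G, B, A.
        float red   = pixels[i * 4 + 0];
        float green = pixels[i * 4 + 1];
        float blue  = pixels[i * 4 + 2];
        float difference = sqrtf(powf(red - targetRed, 2) +
                                 powf(green - targetGreen, 2) +
                                 powf(blue - targetBlue, 2));
        if (difference < 85.0f) {   // same threshold as above
            matches++;
        }
    }
    return matches;
}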

UPDATE 2: while searching further I found the following URL, which might be quite useful (an understatement) if you are looking for something similar.

http://www.sunsetlakesoftware.com/2010/10/22/gpu-accelerated-video-processing-mac-and-ios

Accepted answer

I would say just compute the vector difference to your target color, and check that its norm is less than some threshold. I suspect some color spaces are better than others at this, maybe HSL or L*a*b*, since they separate brightness from the color hue itself, and so might represent a small perceptual difference by a smaller color vector...
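For example, that idea could look like the following in Objective-C, using UIColor's built-in HSB conversion (HSB is close to HSL); the hue weighting and the function name are illustrative choices, not part of this answer:

#import <UIKit/UIKit.h>
#include <math.h>

CGFloat hsbDistance(UIColor *colorA, UIColor *colorB) {
    CGFloat h1, s1, v1, a1, h2, s2, v2, a2;
    [colorA getHue:&h1 saturation:&s1 brightness:&v1 alpha:&a1];
    [colorB getHue:&h2 saturation:&s2 brightness:&v2 alpha:&a2];
    // Hue is circular (0.0 and 1.0 are the same color), so take the shorter arc.
    CGFloat dh = fabs(h1 - h2);
    if (dh > 0.5) dh = 1.0 - dh;
    // Weight hue more heavily than saturation/brightness, since it carries most
    // of the perceptual color difference.
    return sqrt(pow(dh * 2.0, 2) + pow(s1 - s2, 2) + pow(v1 - v2, 2));
}

A smaller return value means the colors are closer; the threshold can then be tuned empirically, just like the value of 85 in the question.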

Also, see this related question
