Image edge detection
Edge detection is a fundamental tool in image processing, machine vision and computer vision.
In this post we're going to take a look at a very basic edge detection algorithm, which takes into account a predefined tolerance value that can be adjusted to detect arbitrarily fine details in a given image.
# Algorithm
The algorithm is extremely simple: it iterates over each pixel in the image and, for each pixel, checks its eight neighbors.
+-------+-------+-------+
| | | |
| A | B | C |
| | | |
+-------+-------+-------+
| | | |
| D | | E |
| | | |
+-------+-------+-------+
| | | |
| F | G | H |
| | | |
+-------+-------+-------+
If any two opposite neighbors differ in intensity by more than a predefined tolerance, then it marks the current pixel as part of an edge.
The comparison is done by taking the absolute difference of each two opposite neighbors and mapping it to a percentage, which is then compared to the value of the tolerance:
abs(D - E) / 255 * 100 > t
abs(B - G) / 255 * 100 > t
abs(A - H) / 255 * 100 > t
abs(C - F) / 255 * 100 > t
In code, this can be written as:
abs(img[y][x-1] - img[y][x+1]) / 255 * 100 > t or
abs(img[y-1][x] - img[y+1][x]) / 255 * 100 > t or
abs(img[y-1][x-1] - img[y+1][x+1]) / 255 * 100 > t or
abs(img[y-1][x+1] - img[y+1][x-1]) / 255 * 100 > t
where "x" and "y" are the positions of the current pixel, and "t" is the tolerance value (ranging from 0 to 100).
If either one of the above statements is true, then an edge will be set at the current position:
edge[y][x] = true
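The per-pixel comparison above can be sketched in Python. This is a minimal sketch, not the post's own implementation (which is in Perl); the function name and the representation of `img` as a 2D list of grayscale values in 0–255 are assumptions:

```python
def detect_edges(img, t):
    """Mark a pixel as an edge when any pair of opposite neighbors
    differs in intensity by more than t percent (t in 0..100)."""
    h, w = len(img), len(img[0])
    edge = [[False] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            pairs = [
                (img[y][x-1],   img[y][x+1]),    # D and E (horizontal)
                (img[y-1][x],   img[y+1][x]),    # B and G (vertical)
                (img[y-1][x-1], img[y+1][x+1]),  # A and H (diagonal)
                (img[y-1][x+1], img[y+1][x-1]),  # C and F (anti-diagonal)
            ]
            edge[y][x] = any(abs(a - b) / 255 * 100 > t for a, b in pairs)
    return edge
```

A pixel on a sharp vertical boundary, for instance, is marked because its left and right neighbors differ by well over the tolerance.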
Optionally, after this step, a noise removal algorithm can be used to remove unwanted edges, such as isolated pixels that are not connected to anything:
if (edge[y][x] == true) {
    if (!edge[y][x-1] and !edge[y][x+1] and
        !edge[y-1][x-1] and !edge[y-1][x] and !edge[y-1][x+1] and
        !edge[y+1][x-1] and !edge[y+1][x] and !edge[y+1][x+1]) {
        edge[y][x] = false
    }
}
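This noise-removal pass can also be sketched in Python (again a hedged sketch rather than the post's Perl implementation; `edge` is assumed to be a 2D list of booleans produced by the detection step):

```python
def remove_isolated(edge):
    """Clear edge pixels whose eight neighbors are all non-edges."""
    h, w = len(edge), len(edge[0])
    cleaned = [row[:] for row in edge]  # copy so checks see the original map
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if edge[y][x] and not any(
                edge[y + dy][x + dx]
                for dy in (-1, 0, 1)
                for dx in (-1, 0, 1)
                if (dy, dx) != (0, 0)
            ):
                cleaned[y][x] = False
    return cleaned
```

Checking against a copy of the map (rather than clearing in place) keeps the pass order-independent: a pixel is only removed based on the original neighborhood, not on pixels already cleared earlier in the scan.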
# Conclusion
The described algorithm is simple, beautiful, and easy to implement. Because it makes decisions based on a given tolerance, it is easy to adjust to a particular need, such as when a large amount of detail is required.
# Implementation
An implementation of the described algorithm can be found below:
- Perl (using GD): edge_detector.pl
Bonus, not quite an edge detector, but something similar, using a different method:
- Perl: diff_negative.pl