
Computer Vision and Issue Detection

In the realm of structural maintenance, most moisture detection systems rely on water (or wet material) physically coming into contact with a sensor. This is an intuitive method that simply indicates the presence or absence of water. Such systems, while functional, offer limited information and can be slow to alert operators to potential issues, especially when only small amounts of water are present. Wet spots on walls and trickles of water can erode the concrete in an elevator pit before enough water reaches these traditional sensors to trigger an alarm. Because of these shortcomings, Tenera is leveraging computer vision to improve water and damage detection far beyond the capabilities of traditional systems. By analyzing changes in a scene over time, libraries like OpenCV enable us to detect signs of water damage and other issues long before they come within reach of a physical detector. In this blog post, we'll explore how we use OpenCV as a first line of detection in hard-to-reach spaces.

Computer vision and Issue Detection

OpenCV (the Open Source Computer Vision Library) is a library designed for real-time image processing: recognizing objects and patterns within images. We can use it to detect subtler signs of water, such as concrete discoloration, small puddles, and trickles, as soon as they would be visible to the naked eye. This presents significant advantages over traditional moisture detection systems, since smaller amounts of water might never touch a contact sensor and trigger it. Such conventional systems may fail to identify gradual water damage, which commonly presents as visible wet patches that slowly expand across degrading concrete, potentially leading to severe leaks before any water ever reaches a contact-based detector.

 

Leveraging computer vision for issue detection, however, introduces its own set of challenges. To accurately identify changes indicative of moisture or water damage, one needs to account for variables that could affect image analysis. This includes adjustments for lighting variations that might obscure or exaggerate the appearance of moisture, recognizing and discounting compression artifacts that could be misinterpreted as damage, and distinguishing between normal, expected changes in a scene and those that signify potential problems. By addressing these factors, computer vision technology can provide a more nuanced and proactive approach to identifying water damage, marking a significant leap forward from the binary, contact-based systems traditionally used.



Measuring changes in two images

Let's demonstrate with a concrete example. Using OpenCV and Python, we will show how to recognize and highlight differences between two image frames, and work around some common pitfalls mentioned above. If you want to follow along, this section assumes a level of familiarity with Python, such as running scripts and installing libraries using pip.
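If you don't already have the two libraries we'll use installed, both are available from pip (assuming the standard opencv-python and numpy package names):

```shell
pip install opencv-python numpy
```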

 

To begin with, let's find two similar frames to use as a test case. For best results, use a stationary camera on a tripod mount:



We'll save these as image1.jpg and image2.jpg respectively in our work folder.

 

Next, let's start our script by importing the libraries we'll need: OpenCV (cv2) and NumPy for calculations:

 

import cv2

import numpy as np

 

Then, we load our images using OpenCV:

 

image1 = cv2.imread('image1.jpg')

image2 = cv2.imread('image2.jpg')

 

Next, we can calculate the raw difference between these images using cv2.absdiff, which simply compares the two images pixel by pixel:

 

difference = cv2.absdiff(image1, image2)

 

Visualizing this raw difference directly, we'd see the following:



You can see the obvious missing coffee cup, but there's a whole lot of other noise being marked as "changes" even though it stems only from lighting and focus shifts. First, let's address the lighting issue by normalizing the histograms of each picture. Histogram equalization takes a grayscale image and normalizes its brightness and contrast, reducing some of the ambient changes and noise between the two scenes:

 

gray1 = cv2.cvtColor(image1, cv2.COLOR_BGR2GRAY)

gray2 = cv2.cvtColor(image2, cv2.COLOR_BGR2GRAY)

equalized1 = cv2.equalizeHist(gray1)

equalized2 = cv2.equalizeHist(gray2)

 

The result is a significant noise reduction.



As we can see, much of the noise on the table and walls has been cleared up by this process. However, some remains. We can reduce small differences between the two images by applying cv2.GaussianBlur to the equalized images, smoothing away fine-grained noise and artifacts:

 

blurred1 = cv2.GaussianBlur(equalized1, (11, 11), 0)

blurred2 = cv2.GaussianBlur(equalized2, (11, 11), 0)




Again, we're seeing a good reduction in noise! To further improve on this, we can use cv2.threshold to define how different an individual pixel must be from one frame to the next to be considered "different". This requires some careful benchmarking: setting the threshold too low may cause noise to be interpreted as real differences, while setting it too high may cause subtle differences (think wet spots!) to be ignored. Arbitrarily, let's start with a threshold value of 20 (out of a maximum of 255):

 

difference = cv2.absdiff(blurred1, blurred2)

_, thresh = cv2.threshold(difference, 20, 255, cv2.THRESH_BINARY)


The result is underwhelming:



(Black sections are seen as "same" while colored sections are seen as "different" by the machine.)

 

So, with a small threshold of 20, we detect the coffee cup, but we also detect smaller changes that don't warrant notifying an operator, such as subtle lighting differences on the walls and table. While the histogram normalization reduced noise, the final absolute difference is still non-zero! Let's try again with a higher threshold, say 80:



Now the background changes in the scene have been largely ignored, but so have large portions of the cup (after all, the cup's color isn't far off from the gray wall behind it!). That said, we can goldilocks this result a bit, say with threshold = 50:



The threshold to be used for your scene will likely be different, depending on the camera, lighting, and subject of interest. Don't be afraid to adjust the blur and threshold values to suit your use case! Below is the full script we've used including the visualizations of the output:

import cv2

import numpy as np

# Load the images

image1 = cv2.imread('image1.jpg')

image2 = cv2.imread('image2.jpg')

# Convert to grayscale

gray1 = cv2.cvtColor(image1, cv2.COLOR_BGR2GRAY)

gray2 = cv2.cvtColor(image2, cv2.COLOR_BGR2GRAY)

# Apply Gaussian blur to both images

blurred1 = cv2.GaussianBlur(gray1, (21, 21), 0)  # Kernel size 21x21, sigmaX=0 (auto-calculated)

blurred2 = cv2.GaussianBlur(gray2, (21, 21), 0)  # Adjust kernel size and sigmaX as needed

# Equalize histograms of the blurred images

equalized1 = cv2.equalizeHist(blurred1)

equalized2 = cv2.equalizeHist(blurred2)

# Compute the absolute difference

difference = cv2.absdiff(equalized1, equalized2)

# Apply a threshold

_, thresh = cv2.threshold(difference, 50, 255, cv2.THRESH_BINARY)

# Find contours in the thresholded image

contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

# Overlay contours on the original image

overlay_image = image1.copy()

cv2.drawContours(overlay_image, contours, -1, (0, 0, 255), 2) # Draw in red

# Optionally, create a mask for the differences

mask = np.zeros_like(image1)

cv2.drawContours(mask, contours, -1, (255, 255, 255), -1) # Fill the contours

# Create an image that shows the differences in a specific color (e.g., red)

diff_color = cv2.bitwise_and(image1, mask)

# Show the images

cv2.imshow('Before', image1)

cv2.imshow('After', image2)

cv2.imshow('Differences', diff_color)

cv2.imshow('Overlay', overlay_image)

cv2.waitKey(0)

cv2.destroyAllWindows()

 

 

In action, this technology can be used for the case originally mentioned: detecting discoloration due to wet spots and water damage:



Conclusion

This blog post demonstrates a simple case of using computer vision to detect changes, such as moisture or damage in concrete structures. It's an area ripe with potential, already showing promising results. However, there are pitfalls to avoid, and it's likely not a catch-all solution. We expect this approach to work alongside and enhance, not replace, existing moisture detection systems.


Tenera is excited about the potential of this technology as AI continues to evolve. We're looking at a future where we can not only detect issues but also predict, categorize, and identify the severity of them. Stay tuned on our blog for future iterations, and if you're keen to get involved with innovative projects like this, keep an eye on Tenera's job listings.


