Hello, I’m K10-K10. This time I’d like to summarize the program for detecting red marks, and the 7-segment LED used to indicate the robot’s status.
Program
The function for detecting red.
import time

import cv2
import numpy as np

def detect_red_marks(orig_image, blackline_image):
    image = orig_image.copy()
    global red_marks, red_black_detected
    # Convert to HSV and mask the two red hue ranges
    hsv = cv2.cvtColor(image, cv2.COLOR_RGB2HSV)
    lower_red1 = np.array([0, 40, 0])
    upper_red1 = np.array([30, 255, 255])
    lower_red2 = np.array([100, 40, 0])
    upper_red2 = np.array([180, 255, 255])
    red_mask1 = cv2.inRange(hsv, lower_red1, upper_red1)
    red_mask2 = cv2.inRange(hsv, lower_red2, upper_red2)
    red_mask = cv2.bitwise_or(red_mask1, red_mask2)
    # Remove speckle noise (erode), then restore the surviving regions (dilate)
    kernel = np.ones((3, 3), np.uint8)
    red_mask = cv2.erode(red_mask, kernel, iterations=2)
    red_mask = cv2.dilate(red_mask, kernel, iterations=2)
    if DEBUG_MODE:
        time_str = str(time.time())
        cv2.imwrite(f"bin/{time_str}_red_mask1.jpg", red_mask1)
        cv2.imwrite(f"bin/{time_str}_red_mask2.jpg", red_mask2)
        cv2.imwrite(f"bin/{time_str}_red_mask.jpg", red_mask)
    contours, _ = cv2.findContours(red_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    red_marks = []
    red_black_detected = []
    for contour in contours:
        if cv2.contourArea(contour) > min_red_area:
            x, y, w, h = cv2.boundingRect(contour)
            center_x = x + w // 2
            center_y = y + h // 2
            red_marks.append((center_x, center_y, w, h))
            if DEBUG_MODE:
                # Mark the detection with an X and a center dot
                cv2.line(image, (x, y), (x + w, y + h), (0, 0, 255), 2)
                cv2.line(image, (x + w, y), (x, y + h), (0, 0, 255), 2)
                cv2.circle(image, (center_x, center_y), 5, (0, 0, 255), -1)
The noise removal and the markings drawn on the image are the same as for green in the previous article.
Explanation
lower_red1 = np.array([0, 40, 0])
upper_red1 = np.array([30, 255, 255])
lower_red2 = np.array([100, 40, 0])
upper_red2 = np.array([180, 255, 255])
In HSV, the red hue spans roughly 0–60 degrees and 300–360 degrees. OpenCV stores hue as 0–180 (degrees halved), so those ranges become 0–30 and 150–180. The values in the code differ from this, but we will tune them from here on.
red_mask1 = cv2.inRange(hsv, lower_red1, upper_red1)
red_mask2 = cv2.inRange(hsv, lower_red2, upper_red2)
red_mask = cv2.bitwise_or(red_mask1, red_mask2)
We filter with the hue ranges just described: red_mask1 covers the lower hue range and red_mask2 covers the higher one. In each mask, red pixels become white and everything else black. Applying bitwise_or to the two masks gives the final red_mask. bitwise_or applies the logical operator or to the images pixel by pixel:
Image 1 | Image 2 | Output |
---|---|---|
black | black | black |
black | white | white |
white | black | white |
white | white | white |
This way, we can apply mask processing even to red, whose hue spans two separate ranges.
7-segment LED
I want to be able to judge the robot’s operating state from an LED, so I added a 7-segment LED. I couldn’t tell which pin corresponds to which segment (the pinouts that came up in searches differed from the part I was using), so I’m leaving a note here.
Pin
1—— | ——5 |
2—— | ——6 |
GND- | -GND |
3—— | ——7 |
4—— | ——8 |
LED
 ---5---
|       |
2       6
|       |
 ---1---
|       |
3       7
|       |
 ---4---
          8 (dot)
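As an illustration of how this numbering could be used, here is a sketch of a digit-to-segment lookup table. The GPIO wiring is omitted, and the digit shapes assume the segment numbering in the diagram above:

```python
# Segment numbers follow the diagram: 5=top, 2=upper-left, 6=upper-right,
# 1=middle, 3=lower-left, 7=lower-right, 4=bottom, 8=decimal point.
DIGIT_SEGMENTS = {
    0: {5, 2, 6, 3, 7, 4},
    1: {6, 7},
    2: {5, 6, 1, 3, 4},
    3: {5, 6, 1, 7, 4},
    4: {2, 1, 6, 7},
    5: {5, 2, 1, 7, 4},
    6: {5, 2, 1, 3, 7, 4},
    7: {5, 6, 7},
    8: {1, 2, 3, 4, 5, 6, 7},
    9: {5, 2, 6, 1, 7, 4},
}

def segments_for(digit):
    """Return the set of segment numbers to light for a digit 0-9."""
    return DIGIT_SEGMENTS[digit]

print(sorted(segments_for(1)))  # [6, 7] -> only the right-hand segments
```

Driving the actual pins would then just mean setting each listed segment's GPIO high (or low, for a common-anode part).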
Finally
I wanted to show an image of a successful detection, but I forgot to capture one. Also, the camera’s field of view for line tracing is too narrow on the current robot, so we need to fix that. An error in the base was also found, so I’m worried about whether everything will be fixed in time.
Also, when connecting a stabilized power supply, please be sure to check the decimal point position. Please don’t feed it 33 V thinking it’s 3.3 V.
Thank you for reading~