Preface
The wave of artificial intelligence is sweeping the globe. The concept has been discussed for decades, yet in recent years the underlying technology has been advancing faster than ever. Terms such as machine learning, deep learning and computer vision have gradually entered everyday life, and all of them belong to the field of artificial intelligence.
Computer vision is a branch of artificial intelligence. It is in fact an interdisciplinary subject spanning computer science, mathematics, engineering, physics, biology and psychology. Many scientists believe that computer vision has paved the way for the development of artificial intelligence.
Put simply, computer vision gives a computer a pair of eyes to observe the world, and then uses the computer's powerful "brain" to compute quickly and serve people.
Today we will take an accessible, hands-on look at gesture recognition in Python computer vision, recognizing the number gestures (one, two, three, four, five) and an approving thumbs-up.
Preparation
In this article we will use Python's OpenCV module together with the hand-model module mediapipe.
opencv-python is a widely used module for image processing and recognition.
mediapipe is a framework for applying multimedia machine-learning models, developed and open-sourced by Google.
Both can be installed with pip:
pip install opencv-python
pip install mediapipe
If Anaconda is installed on your computer, it is recommended to install these modules from the command line of an Anaconda environment, so that you keep a dedicated machine-learning environment.
Once the OpenCV and mediapipe modules are installed, you can write the following in a Python script:
import cv2
import mediapipe as mp
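Optionally, you can also print the installed versions; a quick sketch, assuming both packages expose a __version__ attribute (current releases of opencv-python and mediapipe do):
import cv2
import mediapipe as mp

# Print the installed versions to confirm the environment is set up.
print("OpenCV version:", cv2.__version__)
print("mediapipe version:", mp.__version__)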
If this runs without errors, both modules are installed correctly, and we can now get into today's main topic!
Detecting the hand model
To recognize gestures, we first need to extract the hand information from the incoming image. Here we use mediapipe's hand model to locate the hand, wrap the detection logic into its own module and give it a name, so that it can be imported as a module in the gesture-recognition code that follows.
HandTrackingModule.py
# -*- coding:utf-8 -*-
"""
CODE >>> SINCE IN CAIXYPROMISE.
MOTTO >>> STRIVE FOR EXCELLENT.
CONSTANTLY STRIVING FOR SELF-IMPROVEMENT.
@ By: CaixyPromise
@ Date: 2021-10-17
"""
import cv2
import mediapipe as mp


class HandDetector:
    """
    Finds hands using the mediapipe library and exports the landmarks in pixel format.
    Adds extra functionality, such as counting how many fingers are up or measuring the
    distance between two fingers, and also provides the bounding-box info of the hand found.
    """
    def __init__(self, mode=False, maxHands=2, detectionCon=0.5, minTrackCon=0.5):
        """
        :param mode: in static mode, detection is run on every image
        :param maxHands: maximum number of hands to detect
        :param detectionCon: minimum detection confidence
        :param minTrackCon: minimum tracking confidence
        """
        self.mode = mode
        self.maxHands = maxHands
        self.detectionCon = detectionCon
        self.minTrackCon = minTrackCon
        self.mpHands = mp.solutions.hands
        # Keyword arguments are used so the call keeps working across mediapipe versions.
        self.hands = self.mpHands.Hands(static_image_mode=self.mode,
                                        max_num_hands=self.maxHands,
                                        min_detection_confidence=self.detectionCon,
                                        min_tracking_confidence=self.minTrackCon)
        self.mpDraw = mp.solutions.drawing_utils
        self.tipIds = [4, 8, 12, 16, 20]  # landmark ids of the five fingertips
        self.fingers = []
        self.lmList = []

    def findHands(self, img, draw=True):
        """
        Finds hands in a BGR image.
        :param img: image to search for hands in
        :param draw: flag to draw the output on the image
        :return: the image, with or without the drawings
        """
        imgRGB = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)  # convert OpenCV's BGR image to the RGB format mediapipe expects
        self.results = self.hands.process(imgRGB)
        if self.results.multi_hand_landmarks:
            for handLms in self.results.multi_hand_landmarks:
                if draw:
                    self.mpDraw.draw_landmarks(img, handLms,
                                               self.mpHands.HAND_CONNECTIONS)
        return img

    def findPosition(self, img, handNo=0, draw=True):
        """
        Finds the landmarks of a single hand and puts them into a list in pixel format.
        Also returns the bounding box around the hand.
        :param img: main image to search in
        :param handNo: hand id if several hands are detected
        :param draw: flag to draw the output on the image (a rectangle by default)
        :return: list of hand joint positions in pixel format; hand bounding-box info
        """
        xList = []
        yList = []
        bbox = []
        bboxInfo = []
        self.lmList = []
        if self.results.multi_hand_landmarks:
            myHand = self.results.multi_hand_landmarks[handNo]
            for id, lm in enumerate(myHand.landmark):
                h, w, c = img.shape
                px, py = int(lm.x * w), int(lm.y * h)
                xList.append(px)
                yList.append(py)
                self.lmList.append([px, py])
                if draw:
                    cv2.circle(img, (px, py), 5, (255, 0, 255), cv2.FILLED)
            xmin, xmax = min(xList), max(xList)
            ymin, ymax = min(yList), max(yList)
            boxW, boxH = xmax - xmin, ymax - ymin
            bbox = xmin, ymin, boxW, boxH
            cx, cy = bbox[0] + (bbox[2] // 2), \
                     bbox[1] + (bbox[3] // 2)
            bboxInfo = {"id": id, "bbox": bbox, "center": (cx, cy)}
            if draw:
                cv2.rectangle(img, (bbox[0] - 20, bbox[1] - 20),
                              (bbox[0] + bbox[2] + 20, bbox[1] + bbox[3] + 20),
                              (0, 255, 0), 2)
        return self.lmList, bboxInfo

    def fingersUp(self):
        """
        Counts which fingers are up, handling left and right hands separately.
        :return: a list of length 5, counted from the thumb,
                 where 1 marks a raised finger and 0 a lowered one.
        """
        if self.results.multi_hand_landmarks:
            myHandType = self.handType()
            fingers = []
            # Thumb: compare the x coordinates of the tip and the joint next to it
            if myHandType == "Right":
                if self.lmList[self.tipIds[0]][0] > self.lmList[self.tipIds[0] - 1][0]:
                    fingers.append(1)
                else:
                    fingers.append(0)
            else:
                if self.lmList[self.tipIds[0]][0] < self.lmList[self.tipIds[0] - 1][0]:
                    fingers.append(1)
                else:
                    fingers.append(0)
            # 4 fingers: a finger counts as up if its tip is above its middle joint
            for id in range(1, 5):
                if self.lmList[self.tipIds[id]][1] < self.lmList[self.tipIds[id] - 2][1]:
                    fingers.append(1)
                else:
                    fingers.append(0)
            return fingers

    def handType(self):
        """
        Checks whether the detected hand is a left or a right hand.
        :return: "Right" or "Left"
        """
        if self.results.multi_hand_landmarks:
            if self.lmList[17][0] < self.lmList[5][0]:
                return "Right"
            else:
                return "Left"
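Before wiring the detector into a video loop, you can sanity-check the module on a single still image. A minimal sketch (test_hand.jpg is just a placeholder path for any photo that contains a hand):
import cv2
from HandTrackingModule import HandDetector

detector = HandDetector(mode=True, maxHands=1)  # static-image mode for a single photo
img = cv2.imread("test_hand.jpg")               # placeholder path: any image with a hand in it
img = detector.findHands(img)                   # draw the 21 landmarks and their connections
lmList, bboxInfo = detector.findPosition(img)   # pixel coordinates and bounding-box info
if lmList:
    print("Fingers up:", detector.fingersUp())
cv2.imshow("HandDetector test", img)
cv2.waitKey(0)
cv2.destroyAllWindows()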
Video input for recognition
With the hand model detection in place, we now need to feed images into the program so that it can detect the hand and recognize gestures. Here we use OpenCV to handle the input stream, opening the computer's camera to capture frames, and use the HandTrackingModule we just wrote as the hand-detection module.
Main.py
# -*- coding:utf-8 -*-
"""
CODE >>> SINCE IN CAIXYPROMISE.
MOTTO >>> STRIVE FOR EXCELLENT.
CONSTANTLY STRIVING FOR SELF-IMPROVEMENT.
@ By: CaixyPromise
@ Date: 2021-10-17
"""
import cv2
from HandTrackingModule import HandDetector


class Main:
    def __init__(self):
        self.camera = cv2.VideoCapture(0, cv2.CAP_DSHOW)  # open the default camera as a video stream (CAP_DSHOW is a Windows capture backend)
        self.camera.set(3, 1280)  # set the capture resolution to 1280x720
        self.camera.set(4, 720)

    def Gesture_recognition(self):
        while True:
            self.detector = HandDetector()
            frame, img = self.camera.read()  # the first return value is the success flag
            img = self.detector.findHands(img)  # find your hand
            lmList, bbox = self.detector.findPosition(img)  # get the position of your hand
            cv2.imshow("camera", img)
            # exit the program via the window's close button
            if cv2.getWindowProperty('camera', cv2.WND_PROP_VISIBLE) < 1:
                break
            cv2.waitKey(1)
            # if cv2.waitKey(1) & 0xFF == ord("q"):
            #     break  # press q to exit
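To run this stage on its own, add the usual entry point at the bottom of Main.py (the same lines appear again in the complete code at the end of this article):
if __name__ == '__main__':
    Solution = Main()
    Solution.Gesture_recognition()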
Now, when you run the program, it opens your computer's default camera. As soon as your hand appears in view, the output frame draws a box around it and marks the main joint points of your hand.
The main joint points are already numbered: the hand is split into 21 landmarks, and the fingertips are indices 4, 8, 12, 16 and 20. The specific joints are laid out as follows:
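If you want the full index-to-joint mapping, you can print mediapipe's own landmark enumeration; a short sketch, relying on the mp.solutions.hands.HandLandmark enum that recent mediapipe releases provide:
import mediapipe as mp

# 0 is the wrist; 4, 8, 12, 16 and 20 are the five fingertips.
for landmark in mp.solutions.hands.HandLandmark:
    print(landmark.value, landmark.name)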
Gesture recognition method
Through the previous sections we have completed hand detection and the video input; now we can write the gesture-recognition method itself. Here we use the fingersUp() method from the detection module.
Go back to the Main.py file we just wrote (the video input method). Once the hand has been found and drawn, findPosition() returns its exact position: lmList holds the joint positions (type: list) and bbox holds the bounding-box info (type: dict); both are empty when no hand is detected. So we only need to run the finger check when the list actually contains data (is non-empty), which we can write as:
# -*- coding:utf-8 -*-
"""
CODE >>> SINCE IN CAIXYPROMISE.
MOTTO >>> STRIVE FOR EXCELLENT.
CONSTANTLY STRIVING FOR SELF-IMPROVEMENT.
@ By: CaixyPromise
@ Date: 2021-10-17
"""
def Gesture_recognition(self):
    while True:
        self.detector = HandDetector()
        frame, img = self.camera.read()
        img = self.detector.findHands(img)
        lmList, bbox = self.detector.findPosition(img)
        if lmList:
            x1, x2, x3, x4, x5 = self.detector.fingersUp()
As mentioned above, fingersUp() returns a list of length 5, counted from the thumb, in which a raised finger is marked 1 and a lowered finger 0.
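For example (a hypothetical return value, assuming a hand showing "two" with only the index and middle fingers raised), the five values unpack in thumb-to-pinky order:
fingers = self.detector.fingersUp()  # hypothetical result: [0, 1, 1, 0, 0]
x1, x2, x3, x4, x5 = fingers         # x1 = thumb, x2 = index, x3 = middle, x4 = ring, x5 = pinky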
Our goal this time is to recognize the number gestures we use in everyday life plus an approving thumbs-up. Based on how those gestures look in practice, the recognition can be written as:
# -*- coding:utf-8 -*-
"""
CODE >>> SINCE IN CAIXYPROMISE.
MOTTO >>> STRIVE FOR EXCELLENT.
CONSTANTLY STRIVING FOR SELF-IMPROVEMENT.
@ By: CaixyPromise
@ Date: 2021-10-17
"""
def Gesture_recognition(self):
    while True:
        self.detector = HandDetector()
        frame, img = self.camera.read()
        img = self.detector.findHands(img)
        lmList, bbox = self.detector.findPosition(img)
        if lmList:
            x1, x2, x3, x4, x5 = self.detector.fingersUp()
            if (x2 == 1 and x3 == 1) and (x4 == 0 and x5 == 0 and x1 == 0):
                pass  # TWO
            elif (x2 == 1 and x3 == 1 and x4 == 1) and (x1 == 0 and x5 == 0):
                pass  # THREE
            elif (x2 == 1 and x3 == 1 and x4 == 1 and x5 == 1) and (x1 == 0):
                pass  # FOUR
            elif x1 == 1 and x2 == 1 and x3 == 1 and x4 == 1 and x5 == 1:
                pass  # FIVE
            elif x2 == 1 and x1 == 0 and x3 == 0 and x4 == 0 and x5 == 0:
                pass  # ONE
            elif x1 == 1 and x2 == 0 and x3 == 0 and x4 == 0 and x5 == 0:
                pass  # NICE_GOOD (thumbs up)
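As a side note, the same branching can also be written as a table lookup instead of an if/elif chain. A minimal equivalent sketch (the GESTURES dictionary and the label variable are my own names, not part of the original code):
# Map each (thumb, index, middle, ring, pinky) pattern to a gesture label.
GESTURES = {
    (0, 1, 0, 0, 0): "ONE",
    (0, 1, 1, 0, 0): "TWO",
    (0, 1, 1, 1, 0): "THREE",
    (0, 1, 1, 1, 1): "FOUR",
    (1, 1, 1, 1, 1): "FIVE",
    (1, 0, 0, 0, 0): "NICE_GOOD",
}
label = GESTURES.get((x1, x2, x3, x4, x5))  # None when the pattern matches no known gesture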
Once the basic recognition is in place, we need to display the result. Here we take the hand bounding box returned in bbox and use OpenCV's putText method to draw the recognition result on the image.
# -*- coding:utf-8 -*-
"""
CODE >>> SINCE IN CAIXYPROMISE.
MOTTO >>> STRIVE FOR EXCELLENT.
CONSTANTLY STRIVING FOR SELF-IMPROVEMENT.
@ By: CaixyPromise
@ Date: 2021-10-17
"""
def Gesture_recognition(self):
    while True:
        self.detector = HandDetector()
        frame, img = self.camera.read()
        img = self.detector.findHands(img)
        lmList, bbox = self.detector.findPosition(img)
        if lmList:
            x_1, y_1 = bbox["bbox"][0], bbox["bbox"][1]
            x1, x2, x3, x4, x5 = self.detector.fingersUp()
            if (x2 == 1 and x3 == 1) and (x4 == 0 and x5 == 0 and x1 == 0):
                cv2.putText(img, "2_TWO", (x_1, y_1), cv2.FONT_HERSHEY_PLAIN, 3,
                            (0, 0, 255), 3)
            elif (x2 == 1 and x3 == 1 and x4 == 1) and (x1 == 0 and x5 == 0):
                cv2.putText(img, "3_THREE", (x_1, y_1), cv2.FONT_HERSHEY_PLAIN, 3,
                            (0, 0, 255), 3)
            elif (x2 == 1 and x3 == 1 and x4 == 1 and x5 == 1) and (x1 == 0):
                cv2.putText(img, "4_FOUR", (x_1, y_1), cv2.FONT_HERSHEY_PLAIN, 3,
                            (0, 0, 255), 3)
            elif x1 == 1 and x2 == 1 and x3 == 1 and x4 == 1 and x5 == 1:
                cv2.putText(img, "5_FIVE", (x_1, y_1), cv2.FONT_HERSHEY_PLAIN, 3,
                            (0, 0, 255), 3)
            elif x2 == 1 and x1 == 0 and x3 == 0 and x4 == 0 and x5 == 0:
                cv2.putText(img, "1_ONE", (x_1, y_1), cv2.FONT_HERSHEY_PLAIN, 3,
                            (0, 0, 255), 3)
            elif x1 == 1 and x2 == 0 and x3 == 0 and x4 == 0 and x5 == 0:
                cv2.putText(img, "GOOD!", (x_1, y_1), cv2.FONT_HERSHEY_PLAIN, 3,
                            (0, 0, 255), 3)
        cv2.imshow("camera", img)
        if cv2.getWindowProperty('camera', cv2.WND_PROP_VISIBLE) < 1:
            break
        cv2.waitKey(1)
We have now completed both the gesture recognition and the output of the result. Run the complete code to see the effect for yourself.
Complete code
The complete code is as follows:
# -*- coding:utf-8 -*-
"""
CODE >>> SINCE IN CAIXYPROMISE.
STRIVE FOR EXCELLENT.
CONSTANTLY STRIVING FOR SELF-IMPROVEMENT.
@ by: caixy
@ date: 2021-10-1
"""
import cv2
from HandTrackingModule import HandDetector


class Main:
    def __init__(self):
        self.camera = cv2.VideoCapture(0, cv2.CAP_DSHOW)
        self.camera.set(3, 1280)
        self.camera.set(4, 720)

    def Gesture_recognition(self):
        while True:
            self.detector = HandDetector()
            frame, img = self.camera.read()
            img = self.detector.findHands(img)
            lmList, bbox = self.detector.findPosition(img)
            if lmList:
                x_1, y_1 = bbox["bbox"][0], bbox["bbox"][1]
                x1, x2, x3, x4, x5 = self.detector.fingersUp()
                if (x2 == 1 and x3 == 1) and (x4 == 0 and x5 == 0 and x1 == 0):
                    cv2.putText(img, "2_TWO", (x_1, y_1), cv2.FONT_HERSHEY_PLAIN, 3,
                                (0, 0, 255), 3)
                elif (x2 == 1 and x3 == 1 and x4 == 1) and (x1 == 0 and x5 == 0):
                    cv2.putText(img, "3_THREE", (x_1, y_1), cv2.FONT_HERSHEY_PLAIN, 3,
                                (0, 0, 255), 3)
                elif (x2 == 1 and x3 == 1 and x4 == 1 and x5 == 1) and (x1 == 0):
                    cv2.putText(img, "4_FOUR", (x_1, y_1), cv2.FONT_HERSHEY_PLAIN, 3,
                                (0, 0, 255), 3)
                elif x1 == 1 and x2 == 1 and x3 == 1 and x4 == 1 and x5 == 1:
                    cv2.putText(img, "5_FIVE", (x_1, y_1), cv2.FONT_HERSHEY_PLAIN, 3,
                                (0, 0, 255), 3)
                elif x2 == 1 and x1 == 0 and x3 == 0 and x4 == 0 and x5 == 0:
                    cv2.putText(img, "1_ONE", (x_1, y_1), cv2.FONT_HERSHEY_PLAIN, 3,
                                (0, 0, 255), 3)
                elif x1 == 1 and x2 == 0 and x3 == 0 and x4 == 0 and x5 == 0:
                    cv2.putText(img, "GOOD!", (x_1, y_1), cv2.FONT_HERSHEY_PLAIN, 3,
                                (0, 0, 255), 3)
            cv2.imshow("camera", img)
            if cv2.getWindowProperty('camera', cv2.WND_PROP_VISIBLE) < 1:
                break
            cv2.waitKey(1)
            # if cv2.waitKey(1) & 0xFF == ord("q"):
            #     break


if __name__ == '__main__':
    Solution = Main()
    Solution.Gesture_recognition()
The effect is plain to see: the computer recognizes your gesture and displays the result on screen. Go and give it a try!