About a year ago, I created a Wall-E robot that does object and face recognition. It uses an Arduino as the controller and needs to communicate with a computer that runs the face detection program to track the target. Raspberry Pi face recognition has become very popular recently.
With the powerful processor on the Raspberry Pi, I can connect it to the Arduino on the robot using I2C and run the object recognition program on-board. It could become a truly independent, intelligent Wall-E robot!
However, building such a robot will be a project for the near future. In this article, I will show you how to do basic object recognition on the Raspberry Pi using Python and OpenCV. I will also show you a simple open-loop face tracking application that uses pan-tilt servos to turn the camera around.
Note: Please be careful about the indentation in the Python code; sometimes my blog decides to mess it up randomly.
I wrote an article on how to use SSH and VNC to control and monitor the Raspberry Pi, and that’s what I will be using in this project.
Installing OpenCV For Python
To install OpenCV for Python, all you have to do is install it with apt-get:
sudo apt-get install python-opencv
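Before running the test script below, it is worth checking that the Python bindings import cleanly. Several readers report "ImportError: No module named cv"; this usually means the install failed or you are running Python 3 (the old cv module only exists for Python 2):
python -c "import cv"
If this command prints nothing, the bindings are installed; an ImportError means something is wrong.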
To test the OpenCV installation, run this Python script; if everything is working it will switch on your camera and stream video from it.
[sourcecode language="python"]
import cv

cv.NamedWindow("w1", cv.CV_WINDOW_AUTOSIZE)
camera_index = 0
capture = cv.CaptureFromCAM(camera_index)

def repeat():
    global capture        # declare as globals since we are assigning to them now
    global camera_index
    frame = cv.QueryFrame(capture)
    cv.ShowImage("w1", frame)
    c = cv.WaitKey(10)
    if c == ord("n"):      # if the "n" key is pressed while the popup window is in focus
        camera_index += 1  # try the next camera index
        capture = cv.CaptureFromCAM(camera_index)
        if not capture:    # if the next camera index didn't work, reset to 0
            camera_index = 0
            capture = cv.CaptureFromCAM(camera_index)

while True:
    repeat()
[/sourcecode]
Simple Example of Raspberry Pi Face Recognition
This example is a demonstration of Raspberry Pi face recognition using Haar-like features. It finds faces in the camera image and puts a red square around them. I am surprised at how fast the detection is, given the limited processing power of the Raspberry Pi (about 3 to 4 fps). Although it's still much slower than a laptop, it would still be useful in some robotics applications.
You will need to download this trained face file:
http://stevenhickson-code.googlecode.com/svn/trunk/AUI/Imaging/face.xml
[sourcecode language="python"]
#!/usr/bin/python
# The program finds faces in a camera image or video stream and displays a red box around them.

import sys
import cv2.cv as cv
from optparse import OptionParser

min_size = (20, 20)
image_scale = 2
haar_scale = 1.2
min_neighbors = 2
haar_flags = 0

def detect_and_draw(img, cascade):
    # allocate temporary images
    gray = cv.CreateImage((img.width, img.height), 8, 1)
    small_img = cv.CreateImage((cv.Round(img.width / image_scale),
                                cv.Round(img.height / image_scale)), 8, 1)

    # convert color input image to grayscale
    cv.CvtColor(img, gray, cv.CV_BGR2GRAY)

    # scale input image for faster processing
    cv.Resize(gray, small_img, cv.CV_INTER_LINEAR)
    cv.EqualizeHist(small_img, small_img)

    if cascade:
        t = cv.GetTickCount()
        faces = cv.HaarDetectObjects(small_img, cascade, cv.CreateMemStorage(0),
                                     haar_scale, min_neighbors, haar_flags, min_size)
        t = cv.GetTickCount() - t
        print "time taken for detection = %gms" % (t / (cv.GetTickFrequency() * 1000.))
        if faces:
            for ((x, y, w, h), n) in faces:
                # the input to cv.HaarDetectObjects was resized, so scale the
                # bounding box of each face and convert it to two CvPoints
                pt1 = (int(x * image_scale), int(y * image_scale))
                pt2 = (int((x + w) * image_scale), int((y + h) * image_scale))
                cv.Rectangle(img, pt1, pt2, cv.RGB(255, 0, 0), 3, 8, 0)

    cv.ShowImage("video", img)

if __name__ == '__main__':
    parser = OptionParser(usage="usage: %prog [options] [filename|camera_index]")
    parser.add_option("-c", "--cascade", action="store", dest="cascade", type="str",
                      help="Haar cascade file, default %default",
                      default="../data/haarcascades/haarcascade_frontalface_alt.xml")
    (options, args) = parser.parse_args()

    cascade = cv.Load(options.cascade)

    if len(args) != 1:
        parser.print_help()
        sys.exit(1)

    input_name = args[0]
    if input_name.isdigit():
        capture = cv.CreateCameraCapture(int(input_name))
    else:
        capture = None

    cv.NamedWindow("video", 1)

    # size of the video
    width = 160
    height = 120

    if width is None:
        width = int(cv.GetCaptureProperty(capture, cv.CV_CAP_PROP_FRAME_WIDTH))
    else:
        cv.SetCaptureProperty(capture, cv.CV_CAP_PROP_FRAME_WIDTH, width)

    if height is None:
        height = int(cv.GetCaptureProperty(capture, cv.CV_CAP_PROP_FRAME_HEIGHT))
    else:
        cv.SetCaptureProperty(capture, cv.CV_CAP_PROP_FRAME_HEIGHT, height)

    if capture:
        frame_copy = None
        while True:
            frame = cv.QueryFrame(capture)
            if not frame:
                cv.WaitKey(0)
                break
            if not frame_copy:
                frame_copy = cv.CreateImage((frame.width, frame.height),
                                            cv.IPL_DEPTH_8U, frame.nChannels)
            if frame.origin == cv.IPL_ORIGIN_TL:
                cv.Copy(frame, frame_copy)
            else:
                cv.Flip(frame, frame_copy, 0)

            detect_and_draw(frame_copy, cascade)

            if cv.WaitKey(10) >= 0:
                break
    else:
        image = cv.LoadImage(input_name, 1)
        detect_and_draw(image, cascade)
        cv.WaitKey(0)

    cv.DestroyWindow("video")
[/sourcecode]
To run this program, type in this command in your VNC Viewer’s terminal:
python facedetect.py --cascade=face.xml 0
The number at the end is the index of your video device (0 for the first camera).
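If you are not sure which device number to use, listing the video devices that Linux has detected usually tells you (this assumes a USB webcam that registers as a /dev/video device):
ls /dev/video*
A single webcam will normally show up as /dev/video0, so the number to pass is 0.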
Face Tracking in Raspberry Pi with pan-tilt Servos
In this example I will be using the Wall-E Robot's camera and pan-tilt servo head.
The idea is simple: the Raspberry Pi detects the position of the face and sends a command to the Arduino. The Arduino converts the command into a servo position and turns the camera. I am using I2C to connect the Raspberry Pi and the Arduino.
Note: I am still trying to optimize the code for this example, so the result is not great yet, but it gives you an idea of how it works. I will come back and update this post as soon as I am happy with the result.
Arduino Source Code
Here is the Arduino code. Note that it uses a very crude, basic open-loop control method; I only use it because of its simplicity. For a better control method, please see Color Tracking Using PID.
In this example it basically waits for commands from the Raspberry Pi and turns the head. The commands are expected to be the integers 1, 2, 3 or 4, each representing a direction in which the camera should be turned. While it's turning, the variable 'state' is set to zero, so the Raspberry Pi stops detecting and sending any more commands; this avoids turning the camera too far because of the delays.
Like I mentioned at the beginning, this is an open-loop control system and still has a lot of room for improvement. I kept it this simple so that it is easier to pick up.
[sourcecode language="cpp"]
#include <Wire.h>
#define SLAVE_ADDRESS 0x04

byte command;
byte state;

// Servo code
#include <Servo.h>

Servo servoNeckX;
Servo servoNeckY;

const byte servoNeckX_pin = 3;
const byte servoNeckY_pin = 4;

const int lrServoMax = 2300; // looking right
const int lrServoMin = 700;
const int udServoMax = 2100; // looking down
const int udServoMin = 750;  // looking up

int posX = 1500;
int posY = 1300;
// End of Servo code

void setup() {
  servoNeckX.attach(servoNeckX_pin);
  servoNeckY.attach(servoNeckY_pin);

  servoNeckX.writeMicroseconds(posX);
  delay(100);
  servoNeckY.writeMicroseconds(posY);
  delay(100);

  // initialize i2c as slave
  Wire.begin(SLAVE_ADDRESS);

  // define callbacks for i2c communication
  Wire.onReceive(receiveData);
  Wire.onRequest(sendData);

  Serial.begin(9600); // start serial for output
  Serial.println("Ready!");

  state = 1;
}

void loop() {
  delay(20);
}

// callback for received data
void receiveData(int byteCount) {
  while (Wire.available()) {
    state = 0; // moving servos
    command = Wire.read();
    Serial.print("command received: ");
    Serial.println(command);

    switch (command) {
      case 1:
        // lift head (-Y)
        posY = constrain(posY - 20, udServoMin, udServoMax);
        servoNeckY.writeMicroseconds(posY);
        break;
      case 2:
        // lower head (+Y)
        posY = constrain(posY + 20, udServoMin, udServoMax);
        servoNeckY.writeMicroseconds(posY);
        break;
      case 3:
        // turn head left (+X)
        posX = constrain(posX + 20, lrServoMin, lrServoMax);
        servoNeckX.writeMicroseconds(posX);
        break;
      case 4:
        // turn head right (-X)
        posX = constrain(posX - 20, lrServoMin, lrServoMax);
        servoNeckX.writeMicroseconds(posX);
        break;
    }

    state = 1; // finished moving servos
  }
}

// callback for sending data
void sendData() {
  Wire.write(state);
}
[/sourcecode]
The Python code is similar to the first example. I added the code necessary for I2C communication and a few lines after a face is detected, so that it sends commands to the Arduino.
[sourcecode language="python"]
#!/usr/bin/python
import sys
import cv2.cv as cv
from optparse import OptionParser

min_size = (20, 20)
image_scale = 2
haar_scale = 1.2
min_neighbors = 2
haar_flags = 0

def detect_and_draw(img, cascade):
    # allocate temporary images
    gray = cv.CreateImage((img.width, img.height), 8, 1)
    small_img = cv.CreateImage((cv.Round(img.width / image_scale),
                                cv.Round(img.height / image_scale)), 8, 1)

    # convert color input image to grayscale
    cv.CvtColor(img, gray, cv.CV_BGR2GRAY)

    # scale input image for faster processing
    cv.Resize(gray, small_img, cv.CV_INTER_LINEAR)
    cv.EqualizeHist(small_img, small_img)

    if cascade:
        t = cv.GetTickCount()
        faces = cv.HaarDetectObjects(small_img, cascade, cv.CreateMemStorage(0),
                                     haar_scale, min_neighbors, haar_flags, min_size)
        t = cv.GetTickCount() - t
        print "time taken for detection = %gms" % (t / (cv.GetTickFrequency() * 1000.))
        if faces:
            for ((x, y, w, h), n) in faces:
                # the input to cv.HaarDetectObjects was resized, so scale the
                # bounding box of each face and convert it to two CvPoints
                pt1 = (int(x * image_scale), int(y * image_scale))
                pt2 = (int((x + w) * image_scale), int((y + h) * image_scale))
                cv.Rectangle(img, pt1, pt2, cv.RGB(255, 0, 0), 3, 8, 0)

    cv.ShowImage("video", img)

if __name__ == '__main__':
    parser = OptionParser(usage="usage: %prog [options] [filename|camera_index]")
    parser.add_option("-c", "--cascade", action="store", dest="cascade", type="str",
                      help="Haar cascade file, default %default",
                      default="../data/haarcascades/haarcascade_frontalface_alt.xml")
    (options, args) = parser.parse_args()

    cascade = cv.Load(options.cascade)

    if len(args) != 1:
        parser.print_help()
        sys.exit(1)

    input_name = args[0]
    if input_name.isdigit():
        capture = cv.CreateCameraCapture(int(input_name))
    else:
        capture = None

    cv.NamedWindow("video", 1)

    # size of the video
    width = 160
    height = 120

    if width is None:
        width = int(cv.GetCaptureProperty(capture, cv.CV_CAP_PROP_FRAME_WIDTH))
    else:
        cv.SetCaptureProperty(capture, cv.CV_CAP_PROP_FRAME_WIDTH, width)

    if height is None:
        height = int(cv.GetCaptureProperty(capture, cv.CV_CAP_PROP_FRAME_HEIGHT))
    else:
        cv.SetCaptureProperty(capture, cv.CV_CAP_PROP_FRAME_HEIGHT, height)

    if capture:
        frame_copy = None
        while True:
            frame = cv.QueryFrame(capture)
            if not frame:
                cv.WaitKey(0)
                break
            if not frame_copy:
                frame_copy = cv.CreateImage((frame.width, frame.height),
                                            cv.IPL_DEPTH_8U, frame.nChannels)
            if frame.origin == cv.IPL_ORIGIN_TL:
                cv.Copy(frame, frame_copy)
            else:
                cv.Flip(frame, frame_copy, 0)

            detect_and_draw(frame_copy, cascade)

            if cv.WaitKey(10) >= 0:
                break
    else:
        image = cv.LoadImage(input_name, 1)
        detect_and_draw(image, cascade)
        cv.WaitKey(0)

    cv.DestroyWindow("video")
[/sourcecode]
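As a few readers point out in the comments below, the listing above does not actually include the I2C part (I seem to have posted the wrong version). Until I recover the original code, here is a minimal sketch of how the missing piece could look. It assumes the python-smbus package is installed (sudo apt-get install python-smbus) and that the Arduino sketch above is listening as an I2C slave at address 0x04. The names send_command(), arduino_ready(), track_face() and DEADBAND are my own, purely for illustration; you would call track_face(pt1, pt2, img.width, img.height) inside the face loop of detect_and_draw().
[sourcecode language="python"]
import smbus

# I2C bus 1 on newer Pi boards, bus 0 on the original rev.1 board
bus = smbus.SMBus(1)
SLAVE_ADDRESS = 0x04   # must match the #define in the Arduino sketch
DEADBAND = 20          # pixels around the frame centre that we ignore

def send_command(cmd):
    # send a single direction byte (1-4) to the Arduino
    bus.write_byte(SLAVE_ADDRESS, cmd)

def arduino_ready():
    # the Arduino's sendData() callback reports state = 1 when the servos are idle
    return bus.read_byte(SLAVE_ADDRESS) == 1

def track_face(pt1, pt2, frame_width, frame_height):
    # centre of the detected face in the full-size frame
    face_x = (pt1[0] + pt2[0]) / 2
    face_y = (pt1[1] + pt2[1]) / 2

    if not arduino_ready():
        return  # servos are still moving, skip this frame

    # the command values 1-4 must match the switch/case in the Arduino code;
    # swap 1/2 or 3/4 if your camera is mounted the other way round
    if face_y < frame_height / 2 - DEADBAND:
        send_command(1)   # face is high in the frame -> lift head
    elif face_y > frame_height / 2 + DEADBAND:
        send_command(2)   # face is low -> lower head

    if face_x < frame_width / 2 - DEADBAND:
        send_command(3)   # face is to one side -> pan one way
    elif face_x > frame_width / 2 + DEADBAND:
        send_command(4)   # face is to the other side -> pan the other way
[/sourcecode]
Whether command 3 or 4 corresponds to your physical "left" depends on how the camera and servos are mounted, so be prepared to swap the values after a quick test.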
Possible Raspberry Pi Face Recognition Improvement
For face recognition on an embedded system, I think LBP is a better choice, because it does all the calculations in integers. Haar uses floats, which is a killer for embedded/mobile. LBP is a few times faster, but about 10-20% less accurate than Haar.
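I have not ported the example above to LBP yet, but if you want to experiment, the newer cv2 API makes it easy to try. The following is only a rough sketch, assuming your OpenCV installation ships an LBP frontal-face cascade (on Raspbian it usually lives under /usr/share/opencv/lbpcascades/lbpcascade_frontalface.xml; check the exact path on your system):
[sourcecode language="python"]
import cv2

# path to the LBP cascade, adjust to wherever it lives on your system
cascade = cv2.CascadeClassifier("/usr/share/opencv/lbpcascades/lbpcascade_frontalface.xml")

capture = cv2.VideoCapture(0)
capture.set(cv2.cv.CV_CAP_PROP_FRAME_WIDTH, 160)
capture.set(cv2.cv.CV_CAP_PROP_FRAME_HEIGHT, 120)

while True:
    ok, frame = capture.read()
    if not ok:
        break

    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.equalizeHist(gray)

    # same detection parameters as the Haar example above
    faces = cascade.detectMultiScale(gray, scaleFactor=1.2,
                                     minNeighbors=2, minSize=(20, 20))
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 255), 3)

    cv2.imshow("video", frame)
    if cv2.waitKey(10) >= 0:
        break
[/sourcecode]
Swapping between Haar and LBP is then just a matter of pointing CascadeClassifier at a different XML file, which makes it easy to compare the detection time of the two on the same Pi.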
65 comments
Does anyone know how to run a c++ program using OpenCV and a RaspiCam? After cmake . and make, I type ./folderName and nothing happens. I am new to this, so please help.
Hi Oscar, I am not able to get the XML file downloaded, and I also get an error while running the code to turn on the camera:
File "cam_on.py", line 3
SyntaxError: Non-ASCII character '\xe2' in file cam_on.py on line 3, but no encoding declared; see python.org/dev/peps/pep-0263/ for details
This is the error I get. Kindly let me know where to download the XML file.
Hi Oscar,
I keep getting a syntax error when running python facedetect.py --cascade=face.xml 0: on line 83, 'break outside loop'.
Can you let me know how to fix this error? Thank you.
rodrigo
I am trying to implement face recognition using Python and OpenCV. I have successfully implemented face detection in Python by following a few of the tutorials available, and it's working fine.
Now what I am trying to do is face recognition; I have followed a few tutorials but none of them is working for me.
I have followed your tutorial, which was clear enough, but the code here is giving a syntax error.
I tried to run this code:
import cv
cv.NamedWindow(“w1”, cv.CV_WINDOW_AUTOSIZE)
camera_index = 0
capture = cv.CaptureFromCAM(camera_index)
def repeat():
global capture #declare as globals since we are assigning to them now
global camera_index
frame = cv.QueryFrame(capture)
cv.ShowImage(“w1″, frame)
c = cv.WaitKey(10)
if(c==”n”): #in “n” key is pressed while the popup window is in focus
camera_index += 1 #try the next camera index
capture = cv.CaptureFromCAM(camera_index)
if not capture: #if the next camera index didn’t work, reset to 0.
camera_index = 0
capture = cv.CaptureFromCAM(camera_index)
while True:
repeat()
but I am getting the following error on line number 6:
There's an error in your program: expected an indented block.
I tried my best to solve it but nothing worked.
As I am a newbie to Raspberry Pi and Python, any help will be appreciated. Thanks in advance.
gray = cv.CreateImage((img.width,img.height), 8, 1)
^
IndentationError: expected an indented block
While running the code I am getting this error, what should I do?
Can you give me code which can detect a face in real-time video?
Hi Oscar,
Instead of a red box, is it possible to have a red circle or an oval?
If so, how do I accomplish this?
github.com/cymplecy/teachertrack/blob/master/dev/facedetect.py
Don't waste your time formatting it :)
Just tried this on a Raspberry Pi 2 running Ubuntu 14.04 LTS and am getting a 30 ms detection time. That's about 30 frames per second, impressive!!
Because of the missing I2C part in your code, how on earth do I get the facedetect.py script to parse the coordinates of the detected face??? You suggest looking at your "I2C to Arduino" page, but I haven't figured it out yet.
Thanks for your great work
Figured out some more myself, please correct me if I’m wrong:
After line 36 in facedetect.py add this line:
pt3 = (int((x + w/2) * image_scale), int((y + h/2) * image_scale))
The CvPoint "pt3" is the origin of the bounding box plus half the width and half the height, i.e. the centre of the box.
The x, y integers of pt3 then need to be scaled to the range of the pan-tilt servos, put into a string and sent over I2C to the Arduino.
Hey,
Could you help me convert it to coordinates? I could only figure out how to send it as a string:
output = "X{0:d}Y{1:d}Z".format(xx, yy)
print "output = '" + output + "'"
serialConnection.write(output)
Hello, nice article but I have a problem running it.
This line: frame = cv.QueryFrame(capture)
always returns None. How do I solve it? Do I need some drivers for the Raspberry Pi camera?
For a Raspberry Pi Camera you need to install picamera, run this command:
sudo apt-get install python-pip && sudo pip install picamera
I am not sure if the Pi camera works like a regular webcam for OpenCV; you might want to read up on that…
In the Python code you need to import picamera:
import picamera
Hi Oscar,
I did see you are joining raspberry Pi and Arduino. We do run a nice little project with an addon board for the raspberry Pi. We are also testing an amazing replacement board for raspberry Pi which has a 1.5GHz quad processor.
Are you interested in a CoPiino review?
facebook.com/CoPiino.Electronics
CoPiino.cc
Best
– tswaehn
Hi tswaehn
Just had a quick look at your website, very interesting idea :)
About reviewing the board, I can post something brief about the board, and what it can do etc.
It might take me some time until I can actually do something with it, as I am just in the middle of several things.
let me know if that’s okay with you.
thanks
Oscar
Can I know the minimum and maximum size of the buffered image in OpenCV 2?
Good write up buddy, however this is face DETECTION not RECOGNITION as you have put in the title.
Just saves people getting the wrong thing from their google search.
Cheers
Hello..
Do you have any suggestion of other trained files? (instead of face, is there any file for objects??)
Sorry, I am very new to object detection.
hi Oscar
I want to add eye detection to your code. Would you please help me add this to the code? Thanks a lot.
Hi Oscar, I'm Jonny. Can you give me a reference for saving the detected face as a .jpg to a directory, and then sending it to Twitter, WhatsApp or another mobile messenger? Thank you :)
Hey, I am doing a project for the University of Central Florida and would like to use a picture you have in your article. Is it OK if I use the picture for my report? Thank you for your time.
Mishal Patel
Sure that’s fine.
Great work!
I would like to start the face recognition directly after login, without using the OS GUI.
When I try to start the program by issuing the command "python facedetect.py --cascade=face.xml 0"
I get the following error:
(result:2752): Gtk-WARNING **: cannot open display:
If I run "startx" and start the program in LXTerminal it works OK.
What’s the problem?
Any suggestion?
Thank You
Antonio
Thanks for the great post. I have a weird problem I'm hoping you know how to solve. When I run the simple script I just get a grey box with a w1 in the heading. I've tried to press n to change video devices but that does nothing. I am using the Pi camera. Google shows me that there are issues with some programs and the Pi camera, because I cannot find it under the /dev folder. raspistill works perfectly, so I am assuming the camera is working. Any ideas?
Thanks
Can't see the sendData function in the face recognition Python code.
Search for “sendData()” on the webpage.
i don’t see it, can you give the link please thanks
There is no link, and I copied and pasted all the code on this page.
Search for “sendData()” on the webpage.
hey can you send me this code with the i2c communication thanks
[email protected]
see my last reply.
Can anyone help me please? I got this error when I try to run the code on my Pi:
Xlib: extension "RANDR" missing on display ":1.0"
The code is running but the display window appears to be grey.
Hi, sorry I am not sure what the error is about as I have never seen it myself. Hope google helps you!
Sorry, the link for face.xml does not work. Can you fix it? Thanks a lot.
How is the servo mounted? Is there a special position? And is it mounted to the Raspberry Pi, or are you using an Arduino?
Can anyone please help me understand where I am supposed to type the command
python facedetect.py --cascade=face.xml 0
What do you mean by a VNC terminal? I don't see it when my RPi is connected. I only have a problem with facedetect.py; videodetect and the other programs are working fine.
Hello there,
Any idea how to stream the video over the internet?
Your tutorial about face recognition with OpenCV and the Raspberry Pi helped me and others get started with the Pi. And because I want to help many more hobbyists start building robots with the Pi, I shared this tutorial in my post. link-removed Thank you!
Hey Oscar, haven't bugged you in a while ;) I finally got the speech recognition working very well, and the facial recognition working, but the facial recognition program will not open through my SSH session; I encounter "Gtk error: cannot open display". Any thoughts?
Hi Oscar,
Thanks for your work! I tried it and it works OK on my Ubuntu machine.
I have an Aisoy robot and it runs on a Raspberry Pi (Raspbian OS). I can stream the Aisoy's camera, but how do I analyse the Aisoy's video stream? I do not have "startx"…
Do you have an idea?
You might not need startx to do face recognition, but I still haven't found a way to do this under the command line only. I will keep this updated as soon as I know how.
Does it require modifying the example code?
If I try running it from the command line, it crashes with the "open display" error.
Sorry, I spoke too soon; I thought it could be done, but it doesn't seem to work without startx. Well, at least I still haven't found a way to make it work under the command line only.
I will delete my previous comment before it misleads more people.
Okay, so I now have the face detection program running. I was running it in the newer version of Python, but once I switched the command to sudo python2 facedetect.py --cascade=face.xml it seems to be working. Now to tackle the I2C connections, and we should be almost there. Question: can you tell me what type of webcam you are using with your RPi Siri-like speech module? I am using a Logitech C905; the webcam works very well, but it doesn't seem to be picking up my mic.
When I try to run facedetect.py I receive the following error:
"HIGHGUI ERROR: libv4l unable to ioctl VIDIOCSPICT"
When I try to run the OpenCV test script, all I get is a little grey box.
Any suggestions would be greatly appreciated. I am trying to combine this with your Siri-like voice application with Wolfram Alpha to make an educational/social robot for kids.
I have resolved my HIGHGUI error; I forgot that I had previously installed Motion, which was running in the background. Now, however, I am having a new issue:
"(video:3589): Gtk-WARNING **: cannot open display:"
OpenCV is functioning perfectly with the test script; I only receive the error when I run facedetect.
python facedetect.py --cascade=face.xml 0
You stated the number at the end represents the number of the video device; is 0 perhaps not the correct number for my device? How can I check?
Haha, okay, so I have it working if I call up the program directly from the RPi, but when I use VNC to call facedetect.py I am still receiving the "(video:3589): Gtk-WARNING **: cannot open display:" error. Also I am not getting the red box in the webcam display… so close…
Hey Oscar, it's great man, but I get an error:
File “/home/newbieilmu/Documents/Programming/Python/Dev/facerecog/facedetect.py”, line 49, in
cascade = cv.Load(options.cascade)
TypeError: OpenCV returned NULL
Execution Successful!
can you help me?
I think you need to change the name and path of the XML file with trained faces in one of the parameters of that command. Something like "face.xml".
Hi Oscar, thanks for your great write-ups. I managed to set up my Pi for remote access and ran the python-opencv install successfully, as suggested:
sudo apt-get install python-opencv
When I try to run the Python test script I get the following error:
ImportError: no module named cv
I am fairly new to the Pi and any help will be greatly appreciated!
Found the reason: I was trying to run this on python3.x. Works fine on python2.x. Rookie mistake!
You present an exciting project.
I have just started on the Raspberry Pi and have done a few image tracking programs in Python, but I have discovered some problems.
On my Pi, all the Python code examples, and my own code, using CreateCameraCapture() etc. always display a large active camera image, and only tiny 64×64 images if I use cv.ShowImage.
I have tried both cv.Resize() and cv.SetCaptureProperty() with CV_CAP_PROP_FRAME_WIDTH / HEIGHT, but these always cause "not compatible image" errors.
I made sure that I installed all the RasPi, Python and camera updates, but I still have the same results.
The V4L driver seems to be the only solution I have, but it only presents a large active camera image, and tiny 64×64 images if I use cv.ShowImage. I am also frustrated that I cannot disable the large active image.
Why could this be?
Hi, Oscar. Thanks for the example. Actually, I got the first example done, but couldn't get the trained-face code working.
In the trained face.xml code, is '/usr/bin/python' the location of this file? Actually /usr/bin/python is not a directory.
How can I link OpenCV with face.xml?
Thanks.
Did you download the face.xml file? There is a link in the post. You should put it in the same directory as your source code (facedetect.py).
You can link face.xml with facedetect.py using this command:
python facedetect.py --cascade=face.xml 0
Thank you for the examples; they should prove very useful once I get my RPi. However, I was wondering if it would also be possible to zoom in on the face while tracking it?
I think so, but that would require more advanced OpenCV programming.
Hi Oscar,
I cannot find how you do the I2C connection in the Raspberry Pi Python code. The two Python code examples look identical but the second one should be sending coordinates to the Arduino through I2C.
Lots of great information here and in your other tutorials, keep up the good work!
Yes, I'm noticing that too. Any chance you could post the code you used, please, Oscar?
Hi Oscar,
Great info for a newbie like me! Could you please share the code necessary for I2C communication in the Python script after detecting a face? Thanks for sharing!
Sorry, for some reason I posted the wrong code. I don't know if I can still find the original code for that; when I do I will update it ASAP.
For now, take a look at an I2C example for the RPi and Arduino: https://oscarliang.com/raspberry-pi-arduino-connected-i2c/
I have followed your steps, but when I run the sample code to test OpenCV, I immediately get "ImportError: No module named cv". Any ideas?
hi oscar,
Great write-up, I managed to get a Python script running on my RPi thanks to your sample script. I was wondering, though, if you managed to switch to using LBP rather than Haar, whether there is a significant increase in speed, and if you would be so kind as to show us… :)
Why is it that your code has almost no comments?
Thanks for your comment. Although it's useful to have comments in the code, the focus of this project is to demonstrate the possibility and performance of doing face recognition on the Pi, not to teach people how face recognition works. With comments, the post would probably be double its current length.
So no, I will try to keep it short.
python face8.py
Traceback (most recent call last):
File “face8.py”, line 46, in
cascade = cv.Load(options.cascade)
TypeError: OpenCV returned NULL
What have I done? I had it working at one point. Can you assist?