Raspberry Pi Face Recognition Using OpenCV

by Oscar

About a year ago, I created a Wall-E robot that does object and face recognition. It uses an Arduino as the controller and needs to communicate with a computer that runs the face detection program to track the target. Face recognition on the Raspberry Pi has become very popular recently.


With the Raspberry Pi's powerful processor, I can connect it to the Arduino on the robot using i2c and run the object recognition program on-board. It could become a truly independent, intelligent Wall-E robot!

However, building such a robot will be a project for the near future. In this article, I will show you how to do basic face detection on the Raspberry Pi using Python and OpenCV. I will also show you a simple open-loop face tracking application that uses pan-tilt servos to turn the camera.

Note: Please be careful about the indentation in the Python codes, sometimes my blog decides to mess this up randomly.

I wrote an article on how to use SSH and VNC to control and monitor the Raspberry Pi, and that’s what I will be using in this project.

Installing OpenCV For Python

To install OpenCV for Python, all you have to do is use apt-get, as shown below:

sudo apt-get install python-opencv
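If the install succeeded, the module should import cleanly from Python 2 (this quick check should print nothing):

python -c "import cv"

If you get "ImportError: No module named cv" instead, make sure you are running Python 2 rather than Python 3, as the python-opencv package only provides the Python 2 bindings.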

To test the OpenCV installation, run this Python script; if everything is working, it will switch on your camera and stream video.

[sourcecode language="python"]
import cv

cv.NamedWindow("w1", cv.CV_WINDOW_AUTOSIZE)
camera_index = 0
capture = cv.CaptureFromCAM(camera_index)

def repeat():
    global capture      # declare as globals since we are assigning to them now
    global camera_index
    frame = cv.QueryFrame(capture)
    cv.ShowImage("w1", frame)
    c = cv.WaitKey(10)
    if c == ord("n"):   # if the "n" key is pressed while the popup window is in focus
        camera_index += 1   # try the next camera index
        capture = cv.CaptureFromCAM(camera_index)
        if not capture:     # if the next camera index didn't work, reset to 0.
            camera_index = 0
            capture = cv.CaptureFromCAM(camera_index)

while True:
    repeat()
[/sourcecode]
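If you save the script as, say, camtest.py (any filename will do), you can start it from a terminal on the Pi's desktop with:

python camtest.py

A window called "w1" should appear showing the live camera image; pressing the "n" key while the window has focus switches to the next camera index.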

Simple Example of Raspberry Pi Face Recognition

This example demonstrates face detection on the Raspberry Pi using Haar-like features. It finds faces in the camera image and draws a red box around them. I am surprised how fast the detection is given the limited processing power of the Raspberry Pi (about 3 to 4 fps). Although it's still much slower than a laptop, it would still be useful in some robotics applications.

[Image: face detection running on the Raspberry Pi, with a red box drawn around the detected face]

You will need to download this trained face file:

http://stevenhickson-code.googlecode.com/svn/trunk/AUI/Imaging/face.xml
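For example, you can download it directly on the Pi with wget; save it in the same directory as the Python script below, since the run command later assumes face.xml sits next to facedetect.py:

wget http://stevenhickson-code.googlecode.com/svn/trunk/AUI/Imaging/face.xml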

[sourcecode language="python"]
#!/usr/bin/python

# The program finds faces in a camera image or video stream and displays a red box around them.

import sys
import cv2.cv as cv
from optparse import OptionParser

min_size = (20, 20)
image_scale = 2
haar_scale = 1.2
min_neighbors = 2
haar_flags = 0

def detect_and_draw(img, cascade):
    # allocate temporary images
    gray = cv.CreateImage((img.width, img.height), 8, 1)
    small_img = cv.CreateImage((cv.Round(img.width / image_scale),
                                cv.Round(img.height / image_scale)), 8, 1)

    # convert color input image to grayscale
    cv.CvtColor(img, gray, cv.CV_BGR2GRAY)

    # scale input image for faster processing
    cv.Resize(gray, small_img, cv.CV_INTER_LINEAR)
    cv.EqualizeHist(small_img, small_img)

    if cascade:
        t = cv.GetTickCount()
        faces = cv.HaarDetectObjects(small_img, cascade, cv.CreateMemStorage(0),
                                     haar_scale, min_neighbors, haar_flags, min_size)
        t = cv.GetTickCount() - t
        print "time taken for detection = %gms" % (t / (cv.GetTickFrequency() * 1000.))
        if faces:
            for ((x, y, w, h), n) in faces:
                # the input to cv.HaarDetectObjects was resized, so scale the
                # bounding box of each face and convert it to two CvPoints
                pt1 = (int(x * image_scale), int(y * image_scale))
                pt2 = (int((x + w) * image_scale), int((y + h) * image_scale))
                cv.Rectangle(img, pt1, pt2, cv.RGB(255, 0, 0), 3, 8, 0)

    cv.ShowImage("video", img)

if __name__ == '__main__':

    parser = OptionParser(usage="usage: %prog [options] [filename|camera_index]")
    parser.add_option("-c", "--cascade", action="store", dest="cascade", type="str",
                      help="Haar cascade file, default %default",
                      default="../data/haarcascades/haarcascade_frontalface_alt.xml")
    (options, args) = parser.parse_args()

    cascade = cv.Load(options.cascade)

    if len(args) != 1:
        parser.print_help()
        sys.exit(1)

    input_name = args[0]
    if input_name.isdigit():
        capture = cv.CreateCameraCapture(int(input_name))
    else:
        capture = None

    cv.NamedWindow("video", 1)

    # size of the video
    width = 160
    height = 120

    if width is None:
        width = int(cv.GetCaptureProperty(capture, cv.CV_CAP_PROP_FRAME_WIDTH))
    else:
        cv.SetCaptureProperty(capture, cv.CV_CAP_PROP_FRAME_WIDTH, width)

    if height is None:
        height = int(cv.GetCaptureProperty(capture, cv.CV_CAP_PROP_FRAME_HEIGHT))
    else:
        cv.SetCaptureProperty(capture, cv.CV_CAP_PROP_FRAME_HEIGHT, height)

    if capture:
        frame_copy = None
        while True:

            frame = cv.QueryFrame(capture)
            if not frame:
                cv.WaitKey(0)
                break
            if not frame_copy:
                frame_copy = cv.CreateImage((frame.width, frame.height),
                                            cv.IPL_DEPTH_8U, frame.nChannels)

            if frame.origin == cv.IPL_ORIGIN_TL:
                cv.Copy(frame, frame_copy)
            else:
                cv.Flip(frame, frame_copy, 0)

            detect_and_draw(frame_copy, cascade)

            if cv.WaitKey(10) >= 0:
                break
    else:
        image = cv.LoadImage(input_name, 1)
        detect_and_draw(image, cascade)
        cv.WaitKey(0)

    cv.DestroyWindow("video")
[/sourcecode]

To run this program, type this command in a terminal inside your VNC session:

python facedetect.py --cascade=face.xml 0

The number at the end is the index of your video device (0 is usually the first camera).
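If you are not sure which index to use, list the video devices on the Pi; a single USB webcam normally shows up as /dev/video0, which corresponds to index 0:

ls /dev/video*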

Face Tracking in Raspberry Pi with pan-tilt Servos

In this example I will be using the Wall-E Robot's camera and pan-tilt servo head.

The idea is simple: the Raspberry Pi detects the position of the face and sends a command to the Arduino, which converts the command into servo positions and turns the camera. The two boards are connected over i2c.

Note: I am still trying to optimize the code for this example, so the result is not great yet, but it gives you an idea of how it works. I will come back and update this post as soon as I am happy with the result.

[Images: the Wall-E robot's pan-tilt camera head used for face tracking]

Arduino Source Code

Here is the Arduino code. Note that it uses a very crude, basic open-loop control method; I only use it because of its simplicity. For a better control method, please see Color Tracking Using PID.

In this example the Arduino basically waits for commands from the Raspberry Pi and turns the head. The commands are expected to be the integers 1, 2, 3 or 4, each representing a direction to turn the camera. While it is turning, the variable 'state' is set to zero so the Raspberry Pi stops detecting and sending further commands; this avoids turning the camera too far because of the delays.

As I mentioned at the beginning, this is an open-loop control system and still has a lot of room for improvement. I kept it this simple so it is easier to pick up.

[sourcecode language="cpp"]
#include <Wire.h>
#define SLAVE_ADDRESS 0x04

byte command;
byte state;

// Servo code
#include <Servo.h>

Servo servoNeckX;
Servo servoNeckY;

const byte servoNeckX_pin = 3;
const byte servoNeckY_pin = 4;

const int lrServoMax = 2300; // looking right
const int lrServoMin = 700;
const int udServoMax = 2100; // looking down
const int udServoMin = 750;  // looking up

int posX = 1500;
int posY = 1300;

// End of Servo code

void setup() {

  servoNeckX.attach(servoNeckX_pin);
  servoNeckY.attach(servoNeckY_pin);

  servoNeckX.writeMicroseconds(posX);
  delay(100);
  servoNeckY.writeMicroseconds(posY);
  delay(100);

  // initialize i2c as slave
  Wire.begin(SLAVE_ADDRESS);

  // define callbacks for i2c communication
  Wire.onReceive(receiveData);
  Wire.onRequest(sendData);

  Serial.begin(9600); // start serial for output
  Serial.println("Ready!");

  state = 1;
}

void loop() {
  delay(20);
}

// callback for received data
void receiveData(int byteCount) {

  while (Wire.available()) {
    state = 0; // moving servos
    command = Wire.read();
    Serial.print("command received: ");
    Serial.println(command);

    switch (command) {

      case 1:
        // lift head (-Y)
        posY = constrain(posY - 20, udServoMin, udServoMax);
        servoNeckY.writeMicroseconds(posY);
        break;

      case 2:
        // lower head (+Y)
        posY = constrain(posY + 20, udServoMin, udServoMax);
        servoNeckY.writeMicroseconds(posY);
        break;

      case 3:
        // turn head left (+X)
        posX = constrain(posX + 20, lrServoMin, lrServoMax);
        servoNeckX.writeMicroseconds(posX);
        break;

      case 4:
        // turn head right (-X)
        posX = constrain(posX - 20, lrServoMin, lrServoMax);
        servoNeckX.writeMicroseconds(posX);
        break;
    }
    state = 1; // finished moving servos
  }
}

// callback for sending data
void sendData() {
  Wire.write(state);
}
[/sourcecode]
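Once the Arduino is programmed and wired to the Pi, a quick sanity check (assuming the i2c-tools package is installed on the Pi) is to scan the bus and confirm the Arduino answers at address 0x04:

sudo i2cdetect -y 1

On an original revision 1 board the user i2c bus is bus 0, so use -y 0 there instead.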

The Python code is similar to the first example. On the robot I added the code necessary for i2c communication, plus a few lines after a face is detected, so that it sends commands to the Arduino (a rough sketch of the i2c part is shown after the listing below).

[sourcecode language="python"]
#!/usr/bin/python

import sys
import cv2.cv as cv
from optparse import OptionParser

min_size = (20, 20)
image_scale = 2
haar_scale = 1.2
min_neighbors = 2
haar_flags = 0

def detect_and_draw(img, cascade):
    # allocate temporary images
    gray = cv.CreateImage((img.width, img.height), 8, 1)
    small_img = cv.CreateImage((cv.Round(img.width / image_scale),
                                cv.Round(img.height / image_scale)), 8, 1)

    # convert color input image to grayscale
    cv.CvtColor(img, gray, cv.CV_BGR2GRAY)

    # scale input image for faster processing
    cv.Resize(gray, small_img, cv.CV_INTER_LINEAR)
    cv.EqualizeHist(small_img, small_img)

    if cascade:
        t = cv.GetTickCount()
        faces = cv.HaarDetectObjects(small_img, cascade, cv.CreateMemStorage(0),
                                     haar_scale, min_neighbors, haar_flags, min_size)
        t = cv.GetTickCount() - t
        print "time taken for detection = %gms" % (t / (cv.GetTickFrequency() * 1000.))
        if faces:
            for ((x, y, w, h), n) in faces:
                # the input to cv.HaarDetectObjects was resized, so scale the
                # bounding box of each face and convert it to two CvPoints
                pt1 = (int(x * image_scale), int(y * image_scale))
                pt2 = (int((x + w) * image_scale), int((y + h) * image_scale))
                cv.Rectangle(img, pt1, pt2, cv.RGB(255, 0, 0), 3, 8, 0)

    cv.ShowImage("video", img)

if __name__ == '__main__':

    parser = OptionParser(usage="usage: %prog [options] [filename|camera_index]")
    parser.add_option("-c", "--cascade", action="store", dest="cascade", type="str",
                      help="Haar cascade file, default %default",
                      default="../data/haarcascades/haarcascade_frontalface_alt.xml")
    (options, args) = parser.parse_args()

    cascade = cv.Load(options.cascade)

    if len(args) != 1:
        parser.print_help()
        sys.exit(1)

    input_name = args[0]
    if input_name.isdigit():
        capture = cv.CreateCameraCapture(int(input_name))
    else:
        capture = None

    cv.NamedWindow("video", 1)

    # size of the video
    width = 160
    height = 120

    if width is None:
        width = int(cv.GetCaptureProperty(capture, cv.CV_CAP_PROP_FRAME_WIDTH))
    else:
        cv.SetCaptureProperty(capture, cv.CV_CAP_PROP_FRAME_WIDTH, width)

    if height is None:
        height = int(cv.GetCaptureProperty(capture, cv.CV_CAP_PROP_FRAME_HEIGHT))
    else:
        cv.SetCaptureProperty(capture, cv.CV_CAP_PROP_FRAME_HEIGHT, height)

    if capture:
        frame_copy = None

        while True:

            frame = cv.QueryFrame(capture)
            if not frame:
                cv.WaitKey(0)
                break
            if not frame_copy:
                frame_copy = cv.CreateImage((frame.width, frame.height),
                                            cv.IPL_DEPTH_8U, frame.nChannels)

            if frame.origin == cv.IPL_ORIGIN_TL:
                cv.Copy(frame, frame_copy)
            else:
                cv.Flip(frame, frame_copy, 0)

            detect_and_draw(frame_copy, cascade)

            if cv.WaitKey(10) >= 0:
                break
    else:
        image = cv.LoadImage(input_name, 1)
        detect_and_draw(image, cascade)
        cv.WaitKey(0)

    cv.DestroyWindow("video")
[/sourcecode]
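To give you an idea of what the i2c sending side can look like, here is a minimal sketch (not the exact code from my robot) that assumes the python-smbus package is installed (sudo apt-get install python-smbus) and uses the slave address 0x04 from the Arduino sketch above. It compares the centre of the detected face with the centre of the frame and sends one of the direction commands 1 to 4, but only when the Arduino reports state = 1, i.e. when it has finished moving the servos:

[sourcecode language="python"]
# Minimal sketch of the i2c sending side (assumes python-smbus and the
# Arduino slave address 0x04 from the sketch above).
import smbus

I2C_BUS = 1              # use 0 on a revision 1 Raspberry Pi
ARDUINO_ADDRESS = 0x04
bus = smbus.SMBus(I2C_BUS)

# direction commands understood by the Arduino sketch
CMD_UP, CMD_DOWN, CMD_LEFT, CMD_RIGHT = 1, 2, 3, 4

def send_tracking_command(face_centre, frame_centre, deadband=20):
    """Send one direction command if the face is off-centre and the
    Arduino is not busy moving the servos (state == 1)."""
    try:
        if bus.read_byte(ARDUINO_ADDRESS) != 1:   # Arduino still moving
            return
        dx = face_centre[0] - frame_centre[0]
        dy = face_centre[1] - frame_centre[1]
        # the left/right sense depends on how the camera is mounted;
        # swap CMD_LEFT and CMD_RIGHT if the head turns the wrong way
        if dy < -deadband:
            bus.write_byte(ARDUINO_ADDRESS, CMD_UP)
        elif dy > deadband:
            bus.write_byte(ARDUINO_ADDRESS, CMD_DOWN)
        elif dx > deadband:
            bus.write_byte(ARDUINO_ADDRESS, CMD_LEFT)
        elif dx < -deadband:
            bus.write_byte(ARDUINO_ADDRESS, CMD_RIGHT)
    except IOError:
        pass   # ignore occasional i2c bus errors
[/sourcecode]

A natural place to call send_tracking_command() is inside detect_and_draw(), right after pt1 and pt2 are computed, using ((pt1[0] + pt2[0]) / 2, (pt1[1] + pt2[1]) / 2) as the face centre and (img.width / 2, img.height / 2) as the frame centre.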

Possible Raspberry Pi Face Recognition Improvement

For face recognition on an embedded system, I think LBP is a better choice, because it does all the calculations in integers. Haar uses floats, which is a killer for embedded/mobile processors. LBP is a few times faster than Haar, but about 10-20% less accurate.
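If you want to experiment with this, the newer cv2 API can load an LBP cascade directly. Below is a minimal sketch (not part of the original project) that assumes the lbpcascade_frontalface.xml file, which ships in OpenCV's data directory, has been copied next to the script:

[sourcecode language="python"]
# Minimal LBP face detection sketch using the newer cv2 API.
# Assumes lbpcascade_frontalface.xml (from OpenCV's data/lbpcascades
# directory) has been copied next to this script.
import cv2

lbp_cascade = cv2.CascadeClassifier("lbpcascade_frontalface.xml")
capture = cv2.VideoCapture(0)

while True:
    ok, frame = capture.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.equalizeHist(gray)
    faces = lbp_cascade.detectMultiScale(gray, scaleFactor=1.2, minNeighbors=2,
                                         minSize=(20, 20))
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 255), 3)  # red box (BGR)
    cv2.imshow("video", frame)
    if cv2.waitKey(10) >= 0:
        break

capture.release()
cv2.destroyAllWindows()
[/sourcecode]

The detection loop is otherwise the same idea as the Haar example above, so timing the detectMultiScale() call gives a direct speed comparison on the Pi.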


65 comments

Sophie 23rd February 2017 - 3:17 am

Does anyone know how to run a c++ program using OpenCV and a RaspiCam? After cmake . and make, I type ./folderName and nothing happens. I am new to this, so please help.

Reply
priya 15th September 2016 - 10:50 am

Hi Oscar, I am not able to get the XML file downloaded, and I also get an error while running the code to turn on the camera:

File “cam_on.py”, line 3
SyntaxError: Non-ASCII character ‘\xe2’ in file cam_on.py on line 3, but no encoding declared; see python.org/dev/peps/pep-0263/ for details

This is the error I get. Kindly let me know where to download the XML file.

Reply
rodrigo calvo 24th June 2016 - 7:18 am

hi oscar
I keep getting a syntax error when running python facedetect.py --cascade=face.xml 0, on line 83: 'break outside loop'.
Can you let me know how to fix this error? Thank you,
rodrigo

Reply
Hardeep Sharma 23rd May 2016 - 12:58 pm

I am trying to implement face recognition using python and Open Cv. I have successfully implemented face detection using python by following few tutorials available and its working fine.

Now what i am trying to do is to do face recognition i have followed few tutorials but none of them is working for me.

I have followed your tutorial, which was clear enough, but the code here is giving a syntax error.

i tried to run this code

import cv
cv.NamedWindow(“w1”, cv.CV_WINDOW_AUTOSIZE)
camera_index = 0
capture = cv.CaptureFromCAM(camera_index)
def repeat():
global capture #declare as globals since we are assigning to them now
global camera_index
frame = cv.QueryFrame(capture)
cv.ShowImage(“w1″, frame)
c = cv.WaitKey(10)
if(c==”n”): #in “n” key is pressed while the popup window is in focus
camera_index += 1 #try the next camera index
capture = cv.CaptureFromCAM(camera_index)
if not capture: #if the next camera index didn’t work, reset to 0.
camera_index = 0
capture = cv.CaptureFromCAM(camera_index)

while True:
repeat()
but I am getting the following error on line number 6:

There's an error in your program: expected an indented block.
I tried my best to solve it, but nothing worked.

As i am a newbie to raspberry pi and python any help will be appreciated.Thanks in advance.

Reply
kush gupta 7th March 2016 - 6:39 pm

gray = cv.CreateImage((img.width,img.height), 8, 1)
^
IndentationError: expected an indented block

While running the code I am getting this error, what should I do?
Can you give me code which can detect a face in real-time video?

Reply
Jerome GIdeon 27th February 2016 - 10:17 am

Hi Oscar,

Instead of a red box, Is it possible to have a red circle or an oval?
If so, how do I accomplish this??

Reply
Jaso 21st November 2015 - 10:48 pm

github.com/cymplecy/teachertrack/blob/master/dev/facedetect.py

Don’t waste your time to format it :)

Reply
Tore Lund 10th March 2015 - 8:58 pm

Just tried this on a Raspberry Pi 2 running Ubuntu 14.04 LTS and getting a 30 ms detection time. That's about 30 frames per second, impressive!!

Because of the missing i2c part in your code, how the f€ck do I get the facedetect.py script to parse the coordinates of the detected face??? You suggest looking at your “I2c to Arduino” page, but I haven’t figured it out yet.

Thanks for your great work

Reply
Tore Lund 10th March 2015 - 10:02 pm

Figured out some more myself, please correct me if I’m wrong:

After line 36 in facedetect.py add this line:
pt3 = (int((x + w/2) * image_scale), int((y + h/2) * image_scale))

The cvpoint “pt3” is the origin of the bounding box + the half height and the half width, i.e. the center of the box.
The x,y integers of pt3 then needs to be scaled to the range of the pan tilt servos, put into a string and sent over I2c to the Arduino.

Reply
Robert 30th March 2015 - 8:17 am

Hey,

Could you help me in converting it to co-ordinates? I could only figure out on sending it as string,

output = “X{0:d}Y{1:d}Z”.format(xx, yy)
print “output = ‘” + output + “‘”
serialConnection.write(output)

Reply
Alessandro 22nd February 2015 - 11:27 am

Hello, nice article, but I have a problem running it.
This line: frame = cv.QueryFrame(capture)

always returns None. How do I solve it? Do I need some drivers for the Raspberry Pi camera?

Reply
Kyle Pfromer 10th March 2015 - 5:47 pm

For a Raspberry Pi Camera you need to install picamera, run this command:
sudo apt-get install pip && sudo pip install picamera
I am not sure if the picamera works like a regular webcam for opencv, you might want to read up about that…
For the python code you need to import picamera by:
import picamera

Reply
Tswaehn 31st January 2015 - 8:37 pm

Hi Oscar,
I did see you are joining raspberry Pi and Arduino. We do run a nice little project with an addon board for the raspberry Pi. We are also testing an amazing replacement board for raspberry Pi which has a 1.5GHz quad processor.
Are you interested in a CoPiino review?

facebook.com/CoPiino.Electronics
CoPiino.cc

Best
– tswaehn

Reply
Oscar 1st February 2015 - 5:54 pm

Hi tswaehn
Just had a quick look at your website, very interesting idea :)
About reviewing the board, I can post something brief about the board and what it can do, etc.
It might take me some time until I can actually do something with it, as I am in the middle of several things.
Let me know if that's okay with you.

thanks
Oscar

Reply
sai priya 16th January 2015 - 3:40 pm

Can I know the minimum and maximum size of the buffered image in OpenCV 2?

Reply
Aphire 7th January 2015 - 4:09 pm

Good write up buddy, however this is face DETECTION not RECOGNITION as you have put in the title.
Just saves people getting the wrong thing from their google search.

Cheers

Reply
Vanessa 15th September 2014 - 7:16 pm

Hello..
Do you have any suggestion of other trained files? (instead of face, is there any file for objects??)
Sorry, I am very new to object detection.

Reply
monzavi 13th September 2014 - 10:13 am

hi Oscar
I want to add eye detection in your code . would you please help to add this item to this code . thanks alot

Reply
jonny 28th August 2014 - 7:04 pm

Hi Oscar, I'm Jonny. Can you give me a reference for saving the detected face as a .jpg to a directory, so I can send it to Twitter or WhatsApp or any mobile messenger? Thank you :)

Reply
Mishal Patel 30th July 2014 - 6:27 am

Hey, I am doing a project for the University of Central Florida and would like to use a picture you have in your article. Is it OK if I use the picture for my report? Thank you for your time,
Mishal Patel

Reply
Oscar 30th July 2014 - 1:54 pm

Sure that’s fine.

Reply
Antonio 16th June 2014 - 2:38 pm

Great work!
I would like to start the face recognition directly after login, without using the OS GUI.
When I try to start the program by issuing the command "python facedetect.py --cascade=face.xml 0"
I get the following error:
(result:2752): Gtk-WARNING **: cannot open display:
if I run “startx” and I start the program in the LXTerminal it works ok.
What’s the problem?
Any suggestion?
Thank You
Antonio

Reply
Marc 11th June 2014 - 2:16 pm

Thanks for the great post. I have a weird problem I'm hoping you know how to solve. When I run the simple script I just get a grey box with a w1 in the heading. I've tried to press n to change video displays but that does nothing. I am using the Pi camera. Google shows me that there are issues with some programs and the Pi camera, because I cannot find it under the /dev folder. raspistill works perfectly, so I am assuming the camera is working. Any ideas?

Thanks

Reply
Alyssa 9th June 2014 - 3:07 pm

My brother recommended I may like this blog.
He was once entirely right. This submit
actually made my day. You can not believe simply how much time I had spent for this
information! Thanks!

Reply
jav 31st May 2014 - 6:39 pm

I can't see the sendData function in the face recognition Python code

Reply
Oscar 1st June 2014 - 12:20 pm

Search for “sendData()” on the webpage.

Reply
DEv 30th May 2014 - 11:16 pm

i don’t see it, can you give the link please thanks

Reply
Oscar 1st June 2014 - 12:19 pm

There is no link; I copied and pasted all the code on this page.
Search for “sendData()” on the webpage.

Reply
dev 30th May 2014 - 8:31 pm

hey can you send me this code with the i2c communication thanks
[email protected]

Reply
Oscar 30th May 2014 - 10:19 pm

see my last reply.

Reply
Abaya 11th May 2014 - 1:16 pm

Can anyone help me please. I got this error when I try to run the code in my PI
xlib: extension “RANDR” missing on display “:1.0”
The code is running but the display window appears to be gray.

Reply
Oscar 12th May 2014 - 10:08 am

Hi, sorry I am not sure what the error is about as I have never seen it myself. Hope google helps you!

Reply
zchky 20th October 2016 - 9:33 am

Sorry, the link for face.xml is not working, can you fix it? Thanks a lot.

Reply
Amaury 21st April 2014 - 9:56 pm

How is the servo mounted? is there a special position? and is it mounted to the raspberry or are you using an arduino

Reply
Anokhi 15th March 2014 - 1:19 pm

Can anyone please help me understand where I am supposed to type the command
python facedetect.py --cascade=face.xml 0

What do you mean by a VNC terminal? I don’t see it when my RPi is connected. I find a problem only with facedetect.py, videodetect and other programs are working fine.

Reply
Paulo 24th February 2014 - 3:14 am

Hello there,
Any idea how to stream the video over the internet?

Reply
Dragos 24th January 2014 - 7:13 am

Your tutorial about face recognition with OpenCV and Raspberry Pi helped me and others get started with the Pi. And because I want to help many more hobbyists start building robots with the Pi, I shared this tutorial in my post. link-removed Thank you!

Reply
Shana 20th December 2013 - 3:43 pm

Wow, that’s what I was looking for, what a data! existing here at this webpage,
thanks admin of this website.

Reply
Tyler Swain 12th December 2013 - 4:41 pm

Hey Oscar, haven't bugged you in a while ;) I finally got the speech recognition working very well, and the facial recognition working, but the facial recognition program will not open through my SSH session; I get a Gtk error "cannot open display". Any thoughts?

Reply
Gigasi 27th November 2013 - 4:58 am

Hi Oscar,

Thanks for your work! I tried it and it works OK on my Ubuntu machine.

I have an Aisoy robot running on a Raspberry Pi (Raspbian OS). I can stream the Aisoy's camera, but how do I analyse its video stream? I don't have "startx".

Do you have any idea?

Reply
Oscar 27th November 2013 - 4:27 pm

You might not need startx to do face recognition, but I still haven't found a way to do this from the command line only. I will keep this post updated as soon as I know how.

Reply
Evgeni 10th January 2014 - 1:53 am

Does it require modifying example code ?
If I try running it from command line, it crashes with the “open display” error

Reply
Oscar 10th January 2014 - 9:59 am

Sorry, I was being arbitrary; I thought it could be done, but it doesn't seem to work without startx. Well, at least I still haven't found a way to make it work from the command line only.
I will delete my previous comment before it misleads more people.

Reply
Tyler Swain 19th November 2013 - 10:30 pm

Okay, so I now have the face detection program running. I was running it in the newer version of Python, but once I switched the call-up command to sudo python2 facedetect.py --cascade=face.xml it seems to be working. Now to tackle the I2C connections, and we should be almost there. Question: can you tell me what type of webcam you are using with your RPi Siri-like speech module? I am using a Logitech C905; the webcam works very well, but it doesn't seem to be picking up my mic.

Reply
Tyler Swain 16th November 2013 - 10:49 pm

When I try to run facedetect.py I receive the following error:
"HIGHGUI ERROR: libv4l unable to ioctl VIDIOCSPICT"
When I try to run the OpenCV test script, all I get is a little grey box.
Any suggestions would be greatly appreciated. I am trying to combine this with your Siri-like voice application with Wolfram Alpha to make an educational/social robot for kids.

Reply
Tyler Swain 17th November 2013 - 10:34 pm

I have resolved my HIGHGUI error; I forgot that I had previously installed Motion, which was running in the background. Now, however, I am having a new issue:
"(video:3589): Gtk-WARNING **: cannot open display:"
OpenCV is functioning perfectly with the test script, I only receive the error when I run facedetect:
python facedetect.py --cascade=face.xml 0
You stated the number at the end represents the number of the video device; is 0 perhaps not the correct number for my device? How can I check?

Reply
Tyler Swain 18th November 2013 - 5:57 pm

Haha, okay, so I have it working if I call up the program directly from the RPi, but when I use VNC to call facedetect.py I am still receiving the "(video:3589): Gtk-WARNING **: cannot open display:" error. Also I am not getting the red box in the webcam display… so close….

Reply
anugrahbsoe 25th September 2013 - 7:46 am

Hey Oscar, it's great man, but I get an error:

File “/home/newbieilmu/Documents/Programming/Python/Dev/facerecog/facedetect.py”, line 49, in
cascade = cv.Load(options.cascade)
TypeError: OpenCV returned NULL

Execution Successful!

can you help me?

Reply
Evgeni 21st December 2013 - 5:55 pm

I think you need to change the name and path to XML file with trained faces in one of the parameters to that command. Something like “face.xml”

Reply
stan 25th September 2013 - 6:40 am

Hi Oscar, thanks for your great write-ups. I managed to set up my Pi for remote access and ran the python-opencv install successfully, as suggested:

sudo apt-get install python-opencv

When I try to run the Python test script, I get the following error:
ImportError: No module named cv

I am fairly new to the Pi and any help will be greatly appreciated!

Reply
Stan 25th September 2013 - 9:17 pm

Found the reason: I was trying to run this on python3.x. Works fine on python2.x. Rookie mistake!

Reply
jlc 17th September 2013 - 12:45 pm

You present an exciting Project.
I have just started on the Raspberry Pi and have done a few Image Tracking programs in Python.
But, I have discovered some problems.

On my PI, All Python Code examples, and my own codes; using CreateCameraCapture() etc
But they always display a large active Camera Image, and tiny Images ONLY 64×64 if I use cv.ShowImage.
I have tried both cv.Resize() and cv.SetCaptureProperty() CV_CAP_PROP_FRAME_WIDTH / HEIGHT.
But these always cause Not Compatible Image Errors.

I made sure that I installed all RasPI, Python, Camera updates.
But, I still have the same results :

LV4 Driver seems the only solution that I have. But, it ONLY presents large active Camera Image, and tiny Images 64×64 if I use cv.ShowImage. Also frustrated that I can Not Disable the Large Active Image.

Why could this be ?

Reply
John-Seo 17th September 2013 - 12:09 am

Hi, Oscar. Thanks for the example. Actually I done first example, but couldn’t do with trained face code.
In a trained face.xml code, ‘ /usr/bin/python ‘ is it a location of this file ? Actually usr/bin/python is not a directory.
How could i link opencv with face.xml ?
Thanks.

Reply
Oscar 17th September 2013 - 8:25 am

Did you download the face.xml file? There is a link in the post. You should put it in the same directory as your source code (facedetect.py).

You can link the face.xml with facedetect.py using this command:
python facedetect.py --cascade=face.xml 0

Reply
John-Paul 22nd August 2013 - 2:25 pm

Thank-you for the examples, it should prove very useful once I get my RPi. However, I was wondering if it would also be possible to zoom into the face while tracking it?

Reply
Oscar 24th August 2013 - 11:47 am

i think so. but that would require more advanced OpenCV programming.

Reply
geoffhall 15th August 2013 - 2:18 pm

Hi Oscar,
I cannot find how you do the I2C connection in the Raspberry Pi Python code. The two Python code examples look identical but the second one should be sending coordinates to the Arduino through I2C.

Lots of great information here and in your other tutorials, keep up the good work!

Reply
spen 21st January 2014 - 7:29 pm

yes I’m noticing that too- any chance you could post the code you used please Oscar?

Reply
nightbug 26th January 2014 - 4:41 pm

Hi Oscar,

Great info for a newbie like me! Could you please share the code necessary for the i2c communication in the Python script after detecting a face? Thanks for sharing!

Reply
Oscar 29th January 2014 - 1:23 pm

Sorry, for some reason I posted the wrong code. I don't know if I can still find the original code for that; when I do I will update it asap.
For now, take a look at an i2c example for the RPi and Arduino: https://oscarliang.com/raspberry-pi-arduino-connected-i2c/

Reply
Kane 20th July 2013 - 1:23 am

I have followed your steps, but when I go to run the sample code to test OpenCV, I immediately get "ImportError: No module named cv". Any ideas?

Reply
nwbie 9th July 2013 - 8:22 am

Hi Oscar,

Great write-up, I managed to get a Python script running on my RPi thanks to your sample script. I was wondering though if you managed to switch to using LBP rather than Haar, whether there is a significant increase in speed, and if you would be so kind as to show us… :)

Reply
robert 5th July 2013 - 8:35 am

Why is it that your code has almost no comments?

Reply
Oscar 5th July 2013 - 9:03 am

Thanks for your comment. Although it's useful to have comments in the code, the focus of this project is to demonstrate the possibility and performance of doing face recognition on the Pi, not to teach people how face recognition works. With full comments, the post would probably be double its current length.

So no, I will try to keep it short.

Reply
festrada007 26th September 2013 - 3:40 pm

python face8.py
Traceback (most recent call last):
File “face8.py”, line 46, in
cascade = cv.Load(options.cascade)
TypeError: OpenCV returned NULL

What have I done, I had it working at one point. Can you assist?

Reply