OpenDAS / dlib / Commits / f72305b2

Commit f72305b2, authored Feb 23, 2014 by Davis King
Added python object detection examples
Parent: 4a9be7bb
Showing 2 changed files, with 124 additions and 2 deletions:

python_examples/face_detector.py          +23  -2
python_examples/train_object_detector.py  +101 -0
python_examples/face_detector.py
#!/usr/bin/python
# The contents of this file are in the public domain. See LICENSE_FOR_EXAMPLE_PROGRAMS.txt
#
# This example program shows how to find frontal human faces in an image. In
# particular, this program shows how you can take a list of images from the
# command line and display each on the screen with red boxes overlaid on each
# human face.
#
# The examples/faces folder contains some jpg images of people. You can run
# this program on them and see the detections by executing the following command:
#     ./face_detector.py ../examples/faces/*.jpg
#
# This face detector is made using the now classic Histogram of Oriented
# Gradients (HOG) feature combined with a linear classifier, an image
# pyramid, and sliding window detection scheme. This type of object detector
# is fairly general and capable of detecting many types of semi-rigid objects
# in addition to human faces. Therefore, if you are interested in making
# your own object detectors then read the train_object_detector.py example
# program.
#
#
# COMPILING THE DLIB PYTHON INTERFACE
# Dlib comes with a compiled python interface for python 2.7 on MS Windows. If
# you are using another python version or operating system then you need to
...

@@ -19,13 +37,16 @@ win = dlib.image_window()

for f in sys.argv[1:]:
    print "processing file: ", f
    img = io.imread(f)
    # The 1 in the second argument indicates that we should upsample the image
    # 1 time. This will make everything bigger and allow us to detect more
    # faces.
    dets = detector(img, 1)
    print "number of faces detected: ", len(dets)
    for d in dets:
        print " detection position left,top,right,bottom:", d.left(), d.top(), d.right(), d.bottom()

    win.clear_overlay()
    win.set_image(img)
    win.add_overlay(dets)
    raw_input("Hit enter to continue")
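The 1 in detector(img, 1) is the number of times the image is upsampled before the sliding-window scan runs; per the comment above, upsampling makes everything bigger and lets the detector find more (smaller) faces, at the cost of extra time and memory. As a quick illustration, not part of this commit, here is a minimal sketch that compares a few upsampling counts. It assumes dlib's bundled frontal face detector (dlib.get_frontal_face_detector()) and scikit-image are available, and "some_image.jpg" is only a placeholder path:

#!/usr/bin/python
# Sketch only (not from the commit): compare different upsampling counts.
# Assumes dlib's python bindings and scikit-image are installed;
# "some_image.jpg" is a placeholder path.
import dlib
from skimage import io

detector = dlib.get_frontal_face_detector()
img = io.imread("some_image.jpg")

for upsample in (0, 1, 2):
    # The second argument is how many times to upsample the image before scanning.
    dets = detector(img, upsample)
    print "upsampled", upsample, "times -> faces detected:", len(dets)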
python_examples/train_object_detector.py (new file, mode 0 → 100755)
#!/usr/bin/python
# The contents of this file are in the public domain. See LICENSE_FOR_EXAMPLE_PROGRAMS.txt
#
# This example program shows how you can use dlib to make an object detector
# for things like faces, pedestrians, and any other semi-rigid object. In
# particular, we go through the steps to train the kind of sliding window
# object detector first published by Dalal and Triggs in 2005 in the paper
# Histograms of Oriented Gradients for Human Detection.
#
#
# COMPILING THE DLIB PYTHON INTERFACE
# Dlib comes with a compiled python interface for python 2.7 on MS Windows. If
# you are using another python version or operating system then you need to
# compile the dlib python interface before you can use this file. To do this,
# run compile_dlib_python_module.bat. This should work on any operating system
# so long as you have CMake and boost-python installed. On Ubuntu, this can be
# done easily by running the command: sudo apt-get install libboost-python-dev cmake

import dlib, sys, glob
from skimage import io

# In this example we are going to train a face detector based on the small
# faces dataset in the examples/faces directory. This means you need to supply
# the path to this faces folder as a command line argument so we will know
# where it is.
if (len(sys.argv) != 2):
    print "Give the path to the examples/faces directory as the argument to this"
    print "program. For example, if you are in the python_examples folder then "
    print "execute this program by running:"
    print "    ./train_object_detector.py ../examples/faces"
    exit(1)
faces_folder = sys.argv[1]


# Now let's do the training. The train_simple_object_detector() function has a
# bunch of options, all of which come with reasonable default values. The next
# few lines go over some of these options.
options = dlib.simple_object_detector_training_options()
# Since faces are left/right symmetric we can tell the trainer to train a
# symmetric detector. This helps it get the most value out of the training
# data.
options.add_left_right_image_flips = True
# The trainer is a kind of support vector machine and therefore has the usual
# SVM C parameter. In general, a bigger C encourages it to fit the training
# data better but might lead to overfitting. You must find the best C value
# empirically by checking how well the trained detector works on a test set of
# images you haven't trained on. Don't just leave the value set at 1. Try a
# few different C values and see what works best for your data.
options.C = 1
# Tell the code how many CPU cores your computer has for the fastest training.
options.num_threads = 4
options.be_verbose = True

# This function does the actual training. It will save the final detector to
# detector.svm. The input is an XML file that lists the images in the training
# dataset and also contains the positions of the face boxes. To create your
# own XML files you can use the imglab tool which can be found in the
# tools/imglab folder. It is a simple graphical tool for labeling objects in
# images with boxes. To see how to use it read the tools/imglab/README.txt
# file. But for this example, we just use the training.xml file included with
# dlib.
dlib.train_simple_object_detector(faces_folder + "/training.xml", "detector.svm", options)

# Now that we have a face detector we can test it. The first statement tests
# it on the training data. It will print the precision, recall, and then
# average precision.
print "\ntraining accuracy:", dlib.test_simple_object_detector(faces_folder + "/training.xml", "detector.svm")
# However, to get an idea if it really worked without overfitting we need to
# run it on images it wasn't trained on. The next line does this. Happily, we
# see that the object detector works perfectly on the testing images.
print "testing accuracy: ", dlib.test_simple_object_detector(faces_folder + "/testing.xml", "detector.svm")


# Now let's use the detector as you would in a normal application. First we
# will load it from disk.
detector = dlib.simple_object_detector("detector.svm")

# We can look at the HOG filter we learned. It should look like a face. Neat!
win_det = dlib.image_window()
win_det.set_image(detector)

# Now let's run the detector over the images in the faces folder and display the
# results.
print "\nShowing detections on the images in the faces folder..."
win = dlib.image_window()
for f in glob.glob(faces_folder + "/*.jpg"):
    print "processing file:", f
    img = io.imread(f)
    dets = detector(img)
    print "number of faces detected:", len(dets)
    for d in dets:
        print " detection position left,top,right,bottom:", d.left(), d.top(), d.right(), d.bottom()

    win.clear_overlay()
    win.set_image(img)
    win.add_overlay(dets)
    raw_input("Hit enter to continue")
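The comment about the SVM C parameter above advises not to leave it at 1 and to compare a few values by checking accuracy on images the detector wasn't trained on. A rough sketch of such a sweep, not part of this commit, is below; it only reuses train_simple_object_detector() and test_simple_object_detector() from the example, assumes the same faces_folder layout with training.xml and testing.xml, and the per-value detector_C<N>.svm filenames are made up for illustration:

#!/usr/bin/python
# Sketch only (not from the commit): train with several C values and compare
# accuracy on the held-out testing.xml images. Assumes the same faces_folder
# layout as train_object_detector.py above.
import dlib, sys

faces_folder = sys.argv[1]

for C in (1, 5, 10, 50):
    options = dlib.simple_object_detector_training_options()
    options.add_left_right_image_flips = True
    options.C = C
    options.num_threads = 4

    svm_file = "detector_C%d.svm" % C   # hypothetical per-C output file
    dlib.train_simple_object_detector(faces_folder + "/training.xml", svm_file, options)
    # Accuracy on images the detector was NOT trained on is what matters when picking C.
    print "C =", C, ":", dlib.test_simple_object_detector(faces_folder + "/testing.xml", svm_file)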