The lecture on faces describes the algorithm due to Viola and Jones for finding faces, an algorithm that is used in pretty much every modern camera and phone. There is an implementation of the algorithm in OpenCV, and it is fairly straightforward to use.
By default, the OpenCV implementation of Viola-Jones works on grey-scale images; if you provide it with a colour image, it is converted internally to grey-scale before processing. Inas Al-Taie and your friendly neighbourhood lecturer have found that if you present the algorithm with the V-channel of an image converted to HSV, it tends to make a better job of finding faces than with the RGB image. However, we don't want you just to take that statement at face value, we want you to see if there is evidence that supports it, and the best way to do that is to process a load of faces with and without the use of the V-channel and find if there are any differences in performance. We shall do that here using 200 colour images from a database collected here at Essex by Libor Spacek (now retired) during 1994–6.
Your job is to:
Write a program that applies OpenCV's Viola-Jones face detector to an image whose filename is provided on the command line and prints out yes if a single face was detected and no otherwise. You may use code taken from books or websites but you need to acknowledge the source; failure to do so will be regarded as plagiarism.
You will find when you program this up that you will need an XML file of Haar cascades for the OpenCV implementation of Viola-Jones to use. There is one as a standard part of OpenCV and some alternatives on the Web; you should just find one that works and submit it along with your code.
To work with the test harness described below, your program must be called face-detect (no extension) and must be executable: if you're programming in Python, you do that by typing the command
chmod +x face-detect

The demonstrators will help if you have trouble doing this. (For a Python program, the first line of the file must also be a shebang such as #!/usr/bin/env python3 so that the shell knows how to run it.) It is also essential that your program outputs nothing but "yes" or "no" to work with the test harness. In other words, a typical way to run it must be

$ face-detect face-001.jpg
yes
$

where $ represents your shell's prompt.
Run your program under the test harness as described below, saving the output as vj.res.
Modify your program to convert the image whose name is given on the command line to HSV, passing the V-channel of it into OpenCV's Viola-Jones face detector. As above, it should print out yes if one face was detected and no otherwise.
Run this modified program under the test harness again, this time saving the output as vjv.res.
Use the test harness to ascertain whether there is any difference in performance between the two versions of the program, inserting the evidence and your interpretation of it into the comments at the top of your program as described below.
You may implement your program in any of C, C++ or Python, and it must use the OpenCV library. You are free to use any OpenCV routines in your program.
If you are using C or C++, you should supply a Makefile that compiles your source code, generating an executable program called face-detect — it must have that name to work with the test harness.
If you are using Python, your program must be called face-detect (i.e., with no .py extension).
Your software must build and run under Linux on the machines in Computer Lab 1.
The principles of evaluating a single algorithm and comparing the performances of several algorithms are described in the lecture notes. It is not especially difficult to perform this kind of evaluation by hand — but it is very tedious. For this reason, evaluations are usually performed using programs known as test harnesses. For this assignment, you will use a test harness and some image files, running face-detect on them and comparing the V-channel and 'vanilla' performances.
The particular test harness you will use is FACT ("Framework for Algorithm Comparison and Testing"). You need to download FACT and the relevant data files. FACT separates the stages of executing a program on a series of tests and analyzing the results. This is because the execution stage is normally much slower than the analysis one and, as we shall see, there are several analyses that one might like to perform. The execution stage produces a transcript file, and all the analysis stages use transcript files as input.
The file ass2.fact contains the tests that are to be executed; it is human-readable and you are welcome to look at it. You should be able to run the tests on face-detect using the command
./fact execute ass2

It is essential that your program, the data files and fact itself all reside in the same directory.
The execute keyword tells FACT to process the test script ass2.fact and output a transcript; you can use run rather than execute if you prefer. (Note that FACT uses the file interface.py as an interface to the program being tested.) When you execute the above command, FACT will write output to your terminal window. The first line contains some identification information, used for checking in the analysis stages, followed by a single line per test. These lines together form the transcript.
To create a transcript file, you simply use command-line redirection to make these lines go to a file
./fact execute ass2 > vj.res

and twiddle your thumbs while it runs.
Analysing the transcript file is both quick and easy:
./fact analyse vj.res

Rather than analyse, you can write analyze or anal. If the name of the file that you wish to analyse ends in .res, you can omit the extension. The results of the analysis are written to your terminal window; you can use re-direction to save them in a file. You will see that the output contains two distinct tables, one summarising error rates etc. and the other a confusion matrix, which shows how false positives and false negatives occur.
FACT can generate HTML rather than plain text:
./fact --format=html --detail=2 -H -T analyse vj.res > vj.html

which you may find easier to read.
As described above, you should generate transcript files for vanilla Viola-Jones and the V-channel version of it in separate runs of FACT, storing them in files vj.res and vjv.res respectively. Your next step is to compare them, which you do using the command
./fact compare vjv.res vj.res

Again, you can get more detail by appending --detail=2 to the command.
You should insert this FACT output into the comments at the top of your program and explain what it means. If there are improvements that could be made to the evaluation process, you should say what these are too. This interpretation of the numerical results is an important part of the assignment.
You should present your program in accordance with the assessment criteria. Note that marks are awarded for commenting and presenting your code in a clear way as well as a good choice of algorithms, elegant implementation, and correct interpretation of the results.
Submission deadline: Thu 7th December at 11:59:59 (noon)

What to submit:
- the source code of your program
- any Makefile etc. needed for compilation
- the XML file defining your Haar cascade
- vj.res and vjv.res
- but not the images!

Marks returned: three weeks

Assessment criteria: see the detailed description of the criteria
Remember to identify your work with your registration number only. The coursework system allows you to upload your work as often as you like, so do keep uploading your files as you develop them.
Last updated on 2017-10-12 09:22:47. Web pages maintained by Adrian F. Clark.