The recent advancement of social media has provided a platform for social engagement and interaction at a global scale. With billions of images uploaded to social media platforms each day, there is growing interest in inferring the emotion and mood displayed by a group of people in an image. The capability to recognise group affect has wide applications in retrieval, advertisement, content recommendation and security. In this project, we aim to improve the state-of-the-art emotion recognition accuracy for groups of people in images. Existing approaches combine local features from individuals' faces with global descriptors of the scene context [1, 2]. In this study, we will take a leap from well-engineered features tailored to this specific problem to features discovered automatically by recent advances in convolutional neural networks, such as deep convolutional activation features [3].
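To make the feature-fusion idea concrete, here is a minimal NumPy sketch (an illustration only, not the project's actual pipeline): per-face CNN activation vectors are average-pooled into a single local descriptor and concatenated with a global scene descriptor. The function name `pool_group_features` and the 4-dimensional toy vectors are assumptions for the example; in practice the vectors would come from a pretrained network.

```python
import numpy as np

def pool_group_features(face_features, scene_feature):
    """Fuse per-face descriptors with a global scene descriptor.

    face_features: list of 1-D arrays, one activation vector per detected face
    scene_feature: 1-D array, activation vector for the whole image
    """
    # Average-pool the local (face-level) features so the group descriptor
    # is invariant to the number of faces in the image.
    local = np.mean(np.stack(face_features), axis=0)
    # Concatenate local and global cues into one group-level descriptor.
    return np.concatenate([local, scene_feature])

# Toy example: three faces and one scene, each a 4-D activation vector.
rng = np.random.default_rng(0)
faces = [rng.standard_normal(4) for _ in range(3)]
scene = rng.standard_normal(4)
descriptor = pool_group_features(faces, scene)
print(descriptor.shape)  # (8,)
```

Average pooling is one simple choice; weighted pooling schemes that account for face size or position, as explored in [1, 2], could be substituted in its place.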
• Design and implementation of a supervised training procedure for group affect recognition using a deep convolutional network.
• Performance reporting on benchmark datasets such as HAPPEI [1] and AFEW [4], in comparison with recent state-of-the-art group affect recognition algorithms.
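As a rough sketch of what a supervised training procedure on top of fixed group-level descriptors might look like, the example below trains a plain softmax-regression classifier with batch gradient descent. The synthetic clustered data, dimensions, and learning rate are all assumptions for illustration; the actual project would train a deep convolutional network end to end or on extracted activations.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical training set: 150 group-level descriptors (e.g. pooled CNN
# activations) drawn from 3 well-separated clusters, one per affect class.
n, d, k = 150, 8, 3
means = rng.standard_normal((k, d)) * 3.0
y = rng.integers(0, k, size=n)
X = means[y] + rng.standard_normal((n, d))

# Softmax regression trained by batch gradient descent on cross-entropy.
T = np.eye(k)[y]                        # one-hot targets
W = np.zeros((d, k))
for _ in range(300):
    logits = X @ W
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)   # softmax probabilities
    W -= 0.5 * X.T @ (p - T) / n        # gradient step on mean cross-entropy

train_acc = ((X @ W).argmax(axis=1) == y).mean()
```

Reporting on HAPPEI and AFEW would of course use the datasets' own train/validation splits rather than training accuracy on synthetic data.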
This project is suitable for candidates with a strong undergraduate background in mathematics, computer science and software/computer engineering. The following skills are essential for a successful undertaking of the proposed project:
• Familiarity with computer vision, pattern recognition and image processing, especially facial expression recognition.
• Good software design and programming skills (especially in C++ and MATLAB).
• A strong foundation in mathematics (linear algebra, calculus and optimisation).
1. A. Dhall, R. Goecke and T. Gedeon. Automatic Group Happiness Intensity Analysis. IEEE Transactions on Affective Computing, 6(1):13–26, 2015.
2. A. Dhall, J. Joshi, K. Sikka, R. Goecke and N. Sebe. The More the Merrier: Analysing the Affect of a Group of People in Images. In Proceedings of the IEEE International Conference on Automatic Face and Gesture Recognition (FG 2015), 2015.
3. J. Donahue, Y. Jia, O. Vinyals, J. Hoffman, N. Zhang, E. Tzeng and T. Darrell. DeCAF: A Deep Convolutional Activation Feature for Generic Visual Recognition. In Proceedings of ICML, 2013.
4. A. Dhall, R. Goecke, S. Lucey and T. Gedeon. Collecting Large, Richly Annotated Facial-Expression Databases from Movies. IEEE Multimedia, 19(3):34, 2012.
The student will gain practical experience working on real-world research problems with experienced researchers in computer vision and pattern recognition. They will receive technical support in the form of hardware and software for data collection and processing, and will develop working knowledge applicable to the applications above and to other related real-world problems.