[1] M. Mansoorizadeh and N. Moghaddam Charkari, “Multimodal Information Fusion Application to Human Emotion Recognition from Face and Speech,” Multimedia Tools and Applications, Springer Science+Business Media, 2009.
[2] N. Ambady and R. Rosenthal, “Thin Slices of Expressive Behavior as Predictors of Interpersonal Consequences: A Meta-Analysis,” Psychological Bull., vol. 111, no. 2, pp. 256-274, 1992.
[3] P. Ekman and E.L. Rosenberg, “What the Face Reveals: Basic and Applied Studies of Spontaneous Expression Using the Facial Action Coding System,” second ed., Oxford Univ. Press, 2005.
[4] R. W. Picard, “Affective Computing,” MIT Press, 1997.
[5] Z. Zeng, M. Pantic, G.I. Roisman, and T.S. Huang, “A Survey of Affect Recognition Methods: Audio, Visual, and Spontaneous Expressions,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 31, pp. 39-58, 2009.
[6] H. Sadoghi Yazdi, M. Amintoosi, and M. Fathy, “Facial Expression Recognition with QIM and ITMI Spatio-Temporal Database,” Proc. 4th Iranian Conf. Machine Vision and Image Processing, Mashhad, Iran, Feb. 2007 (in Persian).
[7] A.F. Bobick and J.W. Davis, “The Recognition of Human Movement Using Temporal Templates,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 23, no. 3, pp. 257-267, 2001.
[8] J.F. Cohn, “Foundations of Human Computing: Facial Expression and Emotion,” Proc. Eighth ACM Int’l Conf. Multimodal Interfaces (ICMI ’06), pp. 233-238, 2006.
[9] M. Pantic and L.J.M. Rothkrantz, “Toward an Affect-Sensitive Multimodal Human-Computer Interaction,” Proceedings of the IEEE, vol. 91, no. 9, pp. 1370-1390, Sept. 2003.
[10] P. Ekman, “Facial Expression and Emotion,” American Psychologist, vol. 48, pp. 384-392, 1993.
[11] B. Fasel and J. Luettin, “Automatic Facial Expression Analysis: A Survey,” Pattern Recognition, vol. 36, no. 1, pp. 259-275, 2003.
[12] Y. Yacoob and L. Davis, “Computing Spatio-Temporal Representations of Human Faces,” Proc. IEEE Conf. Computer Vision and Pattern Recognition (CVPR '94), pp. 70-75, June 1994.
[13] J.-J.J. Lien, T. Kanade, J.F. Cohn, C.C. Li, and A.J. Zlochower, “Subtly Different Facial Expression Recognition and Expression Intensity Estimation,” Proc. CVPR '98, pp. 853-859, 1998.
[14] P.S. Aleksic and A.K. Katsaggelos, “Automatic Facial Expression Recognition Using Facial Animation Parameters and Multi-Stream HMMs,” IEEE Trans. Information Forensics and Security, pp. 3-11, 2006.
[15] M.F. Valstar, I. Patras, and M. Pantic, “Facial Action Unit Recognition Using Temporal Templates,” Proc. IEEE Int'l Workshop on Robot and Human Interactive Communication (RO-MAN '04), Sept. 2004.
[16] M. Osadchy and D. Keren, “A Rejection-Based Method for Event Detection in Video,” IEEE Trans. Circuits and Systems for Video Technology, vol. 14, no. 4, pp. 534-541, Apr. 2004.
[17] N. Li, S. Dettmer, and M. Shah, “Visually Recognizing Speech Using Eigensequences,” in Motion-Based Recognition, Boston, MA: Kluwer, 1997, pp. 345-371.
[18] R.V. Babu and K.R. Ramakrishnan, “Recognition of Human Actions Using Motion History Information Extracted from the Compressed Video,” Image and Vision Computing, vol. 22, pp. 597-607, 2004.
[19] Intel, “OpenCV: Open Source Computer Vision Library,” http://www.intel.com/research/mrl/research/opencv/.
[20] O. Martin, J. Adel, A. Huerta, I. Kotsia, A. Savran, and R. Sebbe, “Multimodal Caricatural Mirror,” Proc. eINTERFACE '05, pp. 13-20, 2005.
[21] O. Martin, I. Kotsia, B. Macq, and I. Pitas, “The eNTERFACE'05 Audio-Visual Emotion Database,” Proc. 22nd Int'l Conf. Data Engineering Workshops (ICDEW '06), 2006.
[22] T. Kanade, J. Cohn, and Y. Tian, “Comprehensive Database for Facial Expression Analysis,” Proc. IEEE Int'l Conf. Face and Gesture Recognition, pp. 46-53, 2000.
[23] M. Paleari, R. Benmokhtar, and B. Huet, “Evidence Theory-Based Multimodal Emotion Recognition,” Proc. Int'l Conf. Multimedia Modeling (MMM '09), pp. 435-446, Berlin, 2009.
[24] M. Mansoorizadeh and N. Moghaddam Charkari, “Hybrid Feature and Decision Level Fusion of Face and Speech Information for Bimodal Emotion Recognition,” Proc. 14th Int'l CSI Computer Conf. (CSICC '09), 2009.