Information for the Special Issue
Special Issue Call for Papers:
Recent advances in technology have boosted the development and release of active assisted living devices based on wearable and/or non-obtrusive visual and multi-modal signals for e-health and welfare support. These solutions are seamlessly integrated into the environment, such as sensor-based systems installed in elderly people’s homes for ambient monitoring and intelligent visual warning.
In addition, research on ubiquitous computing has favored the implementation of more user-centered applications such as virtual tutoring, coaching agents, physical rehabilitation, and psychological therapy systems. For these systems to provide features that satisfy the user’s requirements, expectations, and acceptance, the trend is now shifting towards empathic solutions tailored to personalized user needs. The new assistive systems must be able to understand the user’s behaviors, mood, and intentions and react to them accordingly in real time, as well as detect changes in behavior and health state in a timely manner. Furthermore, such systems are expected to infer the user’s traits, attitudes, and psychological profile to deliver a more personalized user-machine interaction. These solutions require advanced computer vision and machine learning techniques, such as facial expression analysis, gaze and pose estimation, and gesture recognition, in addition to behavioral and psychological theories for modeling individuals’ profiles. While these tasks currently achieve outstanding performance in controlled and prototypical environments (e.g., detection of facial expressions of emotion on static faces), the challenge lies in their integration and application in naturalistic scenarios, where extensive sources of variability (pose, age, behavior, mood, illumination conditions, and dynamic, speaking, emotional faces, among others) affect the processing of the detected signals.
The Computer Vision and Machine Learning for Healthcare Applications special issue aims to collect the latest approaches and findings, as well as to discuss the current challenges of machine learning and computer vision based e-health and welfare applications. The focus is on the use of single- or multi-modal face, gesture, and pose analysis. We expect this special issue to increase the visibility and importance of this area and to contribute, in the short term, to pushing the state of the art in the automatic analysis of human behavior for health and wellbeing applications.
Topics of interest include, but are not limited to:
- Multi-modal integration
- Psychological profiling from (audio)-visual and/or multi-modal data
- Approaches based on psychology behavioral models
- Mobile-based and human-computer interaction applications
- User-understanding in human-computer interaction
- Human behavior analysis for health and well-being support
- Assistive technologies for supporting vulnerable people
- Virtual avatars and coaching
- Physical and psychological therapy systems
- Assistive care
- User acceptance of empathic assistive systems
- Real-time applications
Important dates (tentative)
Paper submission deadline: extended to 1st August 2020.
Acceptance notification: 30th September 2020.
Tentative publication date: 1st December 2020.
Sergio Escalera, Computer Vision Center (UAB) and Universitat de Barcelona, firstname.lastname@example.org
Cristina Palmero, Universitat de Barcelona and Computer Vision Center (UAB), email@example.com
Maria Inés Torres, University of the Basque Country (UPV/EHU) and SPIN RG, firstname.lastname@example.org
Anna Esposito, Behaving Cognitive System Lab, Department of Psychology, Università della Campania “Luigi Vanvitelli” and International Institute for Advanced Scientific Studies (IIASS), Italy, email@example.com
Alexa Moseguí Saladié, Universitat de Barcelona and Computer Vision Center (UAB), firstname.lastname@example.org
Bjorn W. Schuller, http://www.schuller.one/, email@example.com
Jeffrey Cohn, http://www.jeffcohn.net/, firstname.lastname@example.org
Guest Editor Bios:
Sergio Escalera (http://www.sergioescalera.com) obtained his PhD on multi-class visual categorization systems at the Computer Vision Center, UAB, and received the 2008 Best Thesis Award in Computer Science at Universitat Autònoma de Barcelona. He leads the Human Pose Recovery and Behavior Analysis Group at UB, CVC, and the Barcelona Graduate School of Mathematics. He is an associate professor at the Department of Mathematics and Informatics, Universitat de Barcelona, and an adjunct professor at Universitat Oberta de Catalunya, Aalborg University, and Dalhousie University. He has been a visiting professor at TU Delft and Aalborg University. He is a member of the Visual and Computational Learning consolidated research group of Catalonia and of the Computer Vision Center at UAB. He is series editor of The Springer Series on Challenges in Machine Learning, Editor-in-Chief of the American Journal of Intelligent Systems, and an editorial board member of more than five international journals. He is vice-president of ChaLearn Challenges in Machine Learning, leading ChaLearn Looking at People events, co-creator of the CodaLab open-source platform for challenge organization, and co-founder of the PhysicalTech and Care Respite companies. He is also a member of AERFAI (the Spanish Association on Pattern Recognition), ACIA (the Catalan Association of Artificial Intelligence), and INNS, and chair of IAPR TC-12: Multimedia and Visual Information Systems. He holds several patents and registered models. He has published more than 250 research papers and participated in the organization of scientific events, including CCIA04, ICCV11, CCIA14, AMDO16, FG17, NIPS17, NIPS18, FG19, and workshops at ICCV, ICMI, ECCV, CVPR, ICPR, and NIPS. He has been a guest editor at JMLR, TPAMI, IJCV, TAC, PR, MVA, JIVP, Expert Systems, and Neural Computing and Applications.
He has been area chair at WACV16, NIPS16, AVSS17, FG17, ICCV17, WACV18, FG18, BMVC18, NIPS18, and FG19, and competition and demo chair at FG17, NIPS17, NIPS18, ECMLPKDD19, and FG19. His research interests include statistical pattern recognition, affective computing, and human pose recovery and behavior understanding, including multi-modal data analysis, with special interest in characterizing people: personality and psychological profile computing.
Cristina Palmero (https://crisie.github.io/) received her Bachelor’s degree in Audiovisual Telecommunication Systems Engineering in 2011, receiving the 2010 Best Bachelor Thesis Award for her work on accessibility and labor integration of people with disabilities. In 2014, she received an M.S. degree in Artificial Intelligence from the Polytechnic University of Catalonia and a partial M.S. degree in Computer Vision from the Autonomous University of Barcelona, Spain. That same year, she was awarded a Marie Curie Early Stage Researcher (ESR) Fellowship for a 3-year program within the ITN iCARE (improving Children’s Auditory REhabilitation). As a computer vision and machine learning researcher, she has worked on several projects devoted to physical rehabilitation, active aging and independent living, as well as face-to-face group and dyadic interaction. She is currently a PhD student at Universitat de Barcelona, Spain, focusing on automatic gaze estimation with remote cameras, and a member of the Human Pose Recovery and Behavior Analysis (HuPBA) group. She is also a member of the H2020 EMPATHIC project, which focuses on building empathic and personalized virtual coaches for the elderly. Her main research interests include multi-modal human behavior analysis and social signal processing.
Maria Inés Torres received her PhD in Physics from the University of the Basque Country in 1990, including an internship at the Centre National d’Études des Télécommunications in Lannion (France) in 1988. She was also a visiting researcher at the Polytechnic University of Valencia (Spain) during 1991 and 1992. She was a member of the board of the Spanish Association of Pattern Recognition and Image Analysis, a member society of the International Association for Pattern Recognition (IAPR), from 1995 to 2008. She is currently a Full Professor of Computer Science at the University of the Basque Country, where she has held several academic management positions. She founded the Pattern Recognition and Speech Technology research group in 1990 and has led it since then. She was visiting faculty at the Language Technologies Institute at Carnegie Mellon University in 2012 and a visitor again in 2013, and was also a visiting professor at the University of California under a Fulbright grant. She has published numerous papers in journals and international conferences and has edited three books. She has led many research projects, most of them funded by the National Science Agency in Spain, and has also led research under contract with technical centers and companies. Additionally, she has extensive experience in evaluating research and academic activity at local, national, and international levels. She has conducted research related to speech technologies: automatic speech recognition and understanding, language identification, machine translation, and specific processing of the Basque language, as well as the acquisition and generation of language resources. Her current research interests focus on statistical approaches to spoken dialog systems, with particular interest in learning from human interaction to generate artificial interaction. She also works on identifying emotions in speech and detecting emotional language in social networks.
She is now the coordinator of the H2020 EMPATHIC project (http://www.empathic-project.eu) and participates in the H2020-MSCA- RISE MENHIR action.
Anna Esposito received her “Laurea Degree” summa cum laude in Information Technology and Computer Science from the Università di Salerno in 1989 with a thesis on: The Behavior and Learning of a Deterministic Neural Net (Complex System, 6(6), 507-517, 1992), and a PhD degree in Applied Mathematics and Computer Science from the Università di Napoli “Federico II” in 1995. Her PhD thesis on: Vowel Height and Consonantal Voicing Effects: Data from Italian (Phonetica, 59(4), 197-231, 2002) was developed at the Massachusetts Institute of Technology (MIT), Research Laboratory of Electronics, under the supervision of Professor Kenneth N. Stevens. Anna is currently Associate Professor at the Department of Psychology, Università della Campania “Luigi Vanvitelli”. Her teaching/research interests include cognitive and algorithmic issues of multimodal communication, human-machine emotional interaction, emotions, cognitive economy, and decision making. She has authored 190+ peer-reviewed publications in international journals, books, and conference proceedings and has edited/co-edited 28+ books/conference proceedings in collaboration with Italian, EU, and overseas colleagues. She has guest edited several journal special issues, among them the International Journal on Artificial Intelligence Tools (IJAIT, 2017), Journal on Multimodal User Interfaces (2015), The Information Society Journal (2015), Intelligent Decision Technologies (2014), and Cognitive Computation (2014, 2012). Anna has been general chair and/or co-chair of 50+ international events organized as conferences and conference special sessions, among them IEEE-ICTAI 2015. She has recently been doctoral chair for FG2017 and FG2019, co-chair of an Interspeech 2019 Special Session, and challenge chair for ICMI 2019.
Alexa Moseguí Saladié obtained her Bachelor’s degree in Audiovisual Telecommunication Systems Engineering in 2016 with a thesis on: Research and development of a tool for the visualization and analysis of moving objects, completed during an internship in the Image Processing and Computer Vision group at Universitat Pompeu Fabra (UPF). In 2016, she was awarded an EU-funded scholarship to pursue her M.Sc. degree in Color and Spectral Imaging (COSI) as part of an international Erasmus Mundus Joint Master Degree (EMJMD) programme, studying at three universities across Europe: Université Jean Monnet Saint-Etienne (France), Universidad de Granada (Spain), and Norges teknisk-naturvitenskapelige universitet (Norway). Her master’s thesis was conducted at the Fraunhofer Institute for Computer Graphics Research (Darmstadt, Germany) in the biometrics department, on: Creating face morphing attacks with Generative Adversarial Networks. For this research she received the Best Master Thesis award of Fraunhofer IGD and the Visual Computing groups of TU Darmstadt. In 2018, she obtained her COSI M.Sc. degree and a partial M.Sc. degree in Applied Computer Science from NTNU. She is currently working in the Human Pose Recovery and Behavior Analysis (HuPBA) group at Universitat de Barcelona on the European EMPATHIC project, which aims to create an empathic and personalized virtual coach to help elderly people live independently. She also does research on dyadic interaction and affective computing.
Advisory Editor Bios:
Bjorn W. Schuller, http://www.schuller.one/, email@example.com
- Full Professor & Head of the Chair of Embedded Intelligence for Health Care and Wellbeing, University of Augsburg, Germany
- Professor of Artificial Intelligence & Head of GLAM – Group on Language, Audio & Music, Imperial College London, London/U.K.
- Chief Scientific Officer (CSO) and Co-Founding CEO, audEERING GmbH, Gilching/Germany
- Visiting Professor, School of Computer Science and Technology, Harbin Institute of Technology, Harbin/P.R. China.
- Editor-in-Chief of IEEE Transactions on Affective Computing
Jeffrey Cohn, http://www.jeffcohn.net/, firstname.lastname@example.org
Jeffrey Cohn is Professor of Psychology, Psychiatry, and Intelligent Systems at the University of Pittsburgh and Adjunct Professor of Computer Science at the Robotics Institute at Carnegie Mellon University. He leads interdisciplinary and inter-institutional efforts to develop advanced methods for the automatic analysis and synthesis of facial expression and prosody, and applies those tools to research in human emotion, social development, nonverbal communication, psychopathology, and biomedicine (Google Scholar profile). His research has been supported by grants from the U.S. National Institutes of Health, National Science Foundation, Autism Foundation, Office of Naval Research, and Defense Advanced Research Projects Agency, among other sponsors. He is Chair of the Steering Committee of the IEEE International Conference on Automatic Face and Gesture Recognition (FG), a Fellow of the Association for the Advancement of Affective Computing, and a scientific advisor for RealEyes, and has served as General Chair of FG2020, FG2017, FG2015, FG2008, the International Conference on Affective Computing and Intelligent Interaction (ACII 2009), and the ACM International Conference on Multimodal Interfaces (ICMI 2014). He is a past co-editor of IEEE Transactions on Affective Computing (TAC) and has co-edited special issues on affective computing for IEEE Transactions on Pattern Analysis and Machine Intelligence, Journal of Image and Vision Computing, Pattern Recognition Letters, Computer Vision and Image Understanding, and ACM Transactions on Interactive Intelligent Systems.