Recent years have seen mounting calls for the preservation of privacy when processing personal data. Speech falls within that scope because it encapsulates a wealth of personal information that can be revealed by listening or by automatic speech analysis and recognition systems. This includes age, gender, ethnic origin, geographical background, health or emotional state, political orientation, and religious beliefs, among others. In addition, speaker recognition systems can reveal the speaker’s identity. It is thus no surprise that efforts to develop privacy preservation solutions for speech technology are starting to emerge.
A few studies have tackled the formal definition of privacy preservation, the provision of suitable datasets, and the design of evaluation protocols and metrics based on user and attacker models. Other studies have addressed the development of privacy preservation methods which maximize utility for users while defeating attackers. Current methods fall into four categories: deletion, encryption, anonymization, and distributed learning. Deletion methods aim to delete or obfuscate speech, e.g., via speech enhancement or privacy-preserving feature extraction, for ambient sound analysis purposes. Encryption methods such as fully homomorphic encryption and secure multiparty computation can be used to implement all computations in the encrypted domain. Anonymization methods aim to suppress personal information while retaining other information by means of noise addition, speech transformation, voice conversion, speech synthesis, or adversarial learning. Decentralized or federated learning methods aim to learn models (e.g., for keyword spotting) from distributed data without accessing individual data points or leaking information about them in the models.
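As a purely illustrative sketch of the noise-addition approach to anonymization mentioned above (a toy example, not taken from any system in this call; the feature values, `epsilon`, and `sensitivity` parameters are assumptions for illustration), calibrated Laplace noise can be added to extracted speech features to limit what an attacker can infer from them:

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Draw one sample from a zero-mean Laplace distribution via inverse CDF."""
    u = random.random() - 0.5
    sign = 1.0 if u >= 0 else -1.0
    return -scale * sign * math.log(1.0 - 2.0 * abs(u))

def anonymize_features(features, epsilon=1.0, sensitivity=1.0):
    """Perturb a speech feature vector with Laplace noise.

    Smaller epsilon means more noise and stronger obfuscation,
    at the cost of reduced utility for downstream tasks.
    """
    scale = sensitivity / epsilon
    return [x + laplace_noise(scale) for x in features]
```

In practice the noise scale would be calibrated to the sensitivity of the chosen feature representation; this sketch only conveys the utility/privacy trade-off that the methods above navigate with far more sophisticated machinery.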
This special issue solicits papers describing advances in privacy protection for speech processing systems, including theoretical developments, algorithms, and systems.
Examples of topics relevant to the special issue include (but are not limited to):
formal models of speech privacy preservation,
privacy-preserving speech feature extraction,
privacy-driven speech deletion or obfuscation,
privacy-driven voice conversion,
privacy-driven speech synthesis and transformation,
privacy-preserving decentralized learning of speech models,
speech processing in the encrypted domain,
open resources, e.g., datasets, software or hardware implementations, evaluation recipes, objective and subjective metrics.
Manuscript submissions shall be made through: https://www.editorialmanager.com/YCSLA/.
The submission system will open in early October. When submitting your manuscript, please select the article type “VSI: Voice Privacy”. Please submit your manuscript before the submission deadline.
All submissions deemed suitable to be sent for peer review will be reviewed by at least two independent reviewers. Once your manuscript is accepted, it will go into production, and will be simultaneously published in the current regular issue and pulled into the online Special Issue. Articles from this Special Issue will appear in different regular issues of the journal, though they will be clearly marked and branded as Special Issue articles.
Please see an example here: https://www.sciencedirect.com/journal/science-of-the-total-environment/special-issue/10SWS2W7VVV
Please ensure you read the Guide for Authors before writing your manuscript. The Guide for Authors and the link to submit your manuscript are available on the Journal’s homepage https://www.elsevier.com/locate/csl.
January 8, 2021: Paper submission
May 7, 2021: First review
July 9, 2021: Revised submission
September 10, 2021: Final decision
October 8, 2021: Camera-ready submission
Emmanuel Vincent, Inria
Natalia Tomashenko, Avignon Université
Junichi Yamagishi, National Institute of Informatics and University of Edinburgh
Nicholas Evans, EURECOM
Paris Smaragdis, University of Illinois at Urbana-Champaign
Jean-François Bonastre, Avignon Université