Publications
Please see the full list of her publications here.
Selected Publications
Book Chapters and Guest Editorials
Z. Ren, B. W. Schuller, B. M. Eskofier, T. N. Nguyen, and W. Nejdl, “Guest editorial: Trustworthy and collaborative AI for personalised healthcare through edge-of-things,” IEEE Journal of Biomedical and Health Informatics, pp. 5213–5215, Nov. 2023
N. Cummins, Z. Ren, A. Mallol-Ragolta, and B. Schuller, Artificial Intelligence in Precision Health, ch. 5 – Machine learning in digital health, recent trends, and ongoing challenges, pp. 121–148. Elsevier, 2020
Journal Articles
Y. Chang, Z. Ren, Z. Zhang, X. Jing, K. Qian, X. Shao, B. Hu, T. Schultz, and B. W. Schuller, “STAA-Net: A sparse and transferable adversarial attack for speech emotion recognition,” IEEE Transactions on Affective Computing, 2025
T. T. Nguyen, T. T. Huynh, Z. Ren, P. L. Nguyen, A. W.-C. Liew, H. Yin, and Q. V. H. Nguyen, “A survey of machine unlearning,” ACM Transactions on Intelligent Systems and Technology, vol. 16, no. 5, 2025
Z. Ren, Y. Chang, T. T. Nguyen, Y. Tan, K. Qian, and B. W. Schuller, “A comprehensive survey on heart sound analysis in the deep learning era,” IEEE Computational Intelligence Magazine, vol. 19, no. 3, pp. 42–57, 2024
Z. Ren, Y. Chang, W. Nejdl, and B. Schuller, “Learning complementary representations via attention-based ensemble learning for cough-based COVID-19 recognition,” Acta Acustica, 2022
Z. Ren, Y. Chang, K. D. Bartl-Pokorny, F. B. Pokorny, and B. Schuller, “The acoustic dissection of cough: Diving into machine listening-based COVID-19 analysis and detection,” Journal of Voice, 2022
Y. Chang, X. Jing, Z. Ren, and B. Schuller, “CovNet: A transfer learning framework for automatic COVID-19 detection from crowd-sourced cough sounds,” Frontiers in Digital Health, vol. 3, pp. 1–11, Jan. 2022 [code]
Z. Ren, Q. Kong, J. Han, M. Plumbley, and B. Schuller, “CAA-Net: Conditional atrous CNNs with attention for explainable device-robust acoustic scene classification,” IEEE Transactions on Multimedia, Nov. 2020 [code]
F. Dong, K. Qian, Z. Ren, A. Baird, X. Li, Z. Dai, B. Dong, F. Metze, Y. Yamamoto, and B. Schuller, “Machine listening for heart status monitoring: Introducing and benchmarking HSS – the heart sounds Shenzhen corpus,” IEEE Journal of Biomedical and Health Informatics, vol. 24, pp. 2082–2092, Nov. 2019
Z. Ren, K. Qian, Z. Zhang, V. Pandit, A. Baird, and B. Schuller, “Deep scalogram representations for acoustic scene classification,” IEEE/CAA Journal of Automatica Sinica, vol. 5, pp. 662–669, May 2018
Conference Papers
Y. Chang, Z. Ren, Z. Zhao, T. T. Nguyen, K. Qian, T. Schultz, and B. W. Schuller, “Breaking resource barriers in speech emotion recognition via data distillation,” in Proc. INTERSPEECH, (Rotterdam, The Netherlands), pp. 141–145, 2025
K. Scheck, T. Dombeck, Z. Ren, P. Wu, M. Wand, and T. Schultz, “DiffMV-ETS: Diffusion-based multi-voice electromyography-to-speech conversion using speaker-independent speech training targets,” in Proc. INTERSPEECH, (Rotterdam, The Netherlands), pp. 5573–5577, 2025
Z. Ren, K. Qian, T. Schultz, and B. W. Schuller, “An overview of the ICASSP special session on AI security and privacy in speech and audio processing,” in Proc. ACM Multimedia Asia Workshop, (Tainan, Taiwan), 2023
Z. Ren, T. T. N. Nguyen, M. M. Zahed, and W. Nejdl, “Self-explaining neural networks for respiratory sound classification with scale-free interpretability,” in Proc. IJCNN, (Gold Coast, Australia), 2023
Z. Ren, T. T. Nguyen, Y. Chang, and B. W. Schuller, “Fast yet effective speech emotion recognition with self-distillation,” in Proc. ICASSP, (Rhodes, Greece), 2023
Y. Chang, Z. Ren, T. T. Nguyen, K. Qian, and B. W. Schuller, “Knowledge transfer for on-device speech emotion recognition with neural structured learning,” in Proc. ICASSP, (Rhodes, Greece), 2023
Y. Chang, Z. Ren, T. T. Nguyen, W. Nejdl, and B. Schuller, “Example-based explanations with adversarial attacks for respiratory sound analysis,” in Proc. INTERSPEECH, (Incheon, Korea), 2022 [code]
Z. Ren, T. T. Nguyen, and W. Nejdl, “Prototype learning for interpretable respiratory sound analysis,” in Proc. ICASSP, (Singapore), pp. 9087–9091, 2022 [code]
Z. Ren, J. Han, N. Cummins, and B. Schuller, “Enhancing transferability of black-box adversarial attacks via lifelong learning for speech emotion recognition models,” in Proc. INTERSPEECH, (Shanghai, China), pp. 496–500, 2020
Z. Ren, A. Baird, J. Han, Z. Zhang, and B. Schuller, “Generating and protecting against adversarial attacks for deep speech-based emotion recognition models,” in Proc. ICASSP, (Barcelona, Spain), pp. 7184–7188, 2020 [code]
F. Ringeval, B. Schuller, M. Valstar, N. Cummins, R. Cowie, L. Tavabi, M. Schmitt, S. Alisamir, S. Amiriparian, E.-M. Messner, S. Song, S. Liu, Z. Zhao, A. Mallol-Ragolta, Z. Ren, M. Soleymani, and M. Pantic, “AVEC 2019 workshop and challenge: State-of-mind, detecting depression with AI, and cross-cultural affect recognition,” in Proc. AVEC, (Nice, France), pp. 3–12, 2019
Z. Ren, Q. Kong, J. Han, M. Plumbley, and B. Schuller, “Attention-based atrous convolutional neural networks: Visualisation and understanding perspectives of acoustic scenes,” in Proc. ICASSP, (Brighton, UK), pp. 56–60, 2019 [code]
B. Schuller, S. Steidl, A. Batliner, P. Marschik, H. Baumeister, F. Dong, S. Hantke, F. Pokorny, E.-M. Rathner, K. Bartl-Pokorny, C. Einspieler, D. Zhang, A. Baird, S. Amiriparian, K. Qian, Z. Ren, M. Schmitt, P. Tzirakis, and S. Zafeiriou, “The INTERSPEECH 2018 computational paralinguistics challenge: Atypical & self-assessed affect, crying & heart beats,” in Proc. INTERSPEECH, (Hyderabad, India), pp. 122–126, 2018
