{"id":96,"date":"2024-03-06T18:03:52","date_gmt":"2024-03-06T18:03:52","guid":{"rendered":"https:\/\/iberspeech.tech\/?page_id=96"},"modified":"2026-04-09T09:19:52","modified_gmt":"2026-04-09T09:19:52","slug":"call-for-papers","status":"publish","type":"page","link":"https:\/\/iberspeech.tech\/2026\/call-for-papers\/","title":{"rendered":"CALL FOR PAPERS"},"content":{"rendered":"<p>[et_pb_section fb_built=&#8221;1&#8243; custom_padding_last_edited=&#8221;off|desktop&#8221; admin_label=&#8221;Hero&#8221; _builder_version=&#8221;4.24.2&#8243; background_enable_color=&#8221;off&#8221; custom_margin=&#8221;|||&#8221; custom_padding=&#8221;50px||0px|||&#8221; custom_padding_tablet=&#8221;130px||130px|&#8221; global_colors_info=&#8221;{}&#8221;][et_pb_row column_structure=&#8221;1_2,1_2&#8243; _builder_version=&#8221;4.27.4&#8243; _module_preset=&#8221;default&#8221; width=&#8221;100%&#8221; max_width=&#8221;1280px&#8221; custom_padding=&#8221;|15px||15px|false|false&#8221; global_colors_info=&#8221;{}&#8221;][et_pb_column type=&#8221;1_2&#8243; _builder_version=&#8221;4.24.2&#8243; _module_preset=&#8221;default&#8221; global_colors_info=&#8221;{}&#8221;][et_pb_text _builder_version=&#8221;4.24.2&#8243; _module_preset=&#8221;default&#8221; header_font=&#8221;|||on|||||&#8221; custom_margin=&#8221;||0px|||&#8221; global_colors_info=&#8221;{}&#8221;]<\/p>\n<h1><strong><span style=\"color: #0c71c3;\">Call for papers<\/span><\/strong><\/h1>\n<p>[\/et_pb_text][et_pb_divider divider_weight=&#8221;3px&#8221; _builder_version=&#8221;4.24.2&#8243; _module_preset=&#8221;default&#8221; width=&#8221;25%&#8221; module_alignment=&#8221;left&#8221; global_colors_info=&#8221;{}&#8221;][\/et_pb_divider][et_pb_text _builder_version=&#8221;4.24.2&#8243; _module_preset=&#8221;default&#8221; text_font=&#8221;||||||||&#8221; header_font=&#8221;|||on|||||&#8221; global_colors_info=&#8221;{}&#8221;]<\/p>\n<p style=\"text-align: justify;\"><strong>IberSPEECH\u20192026<\/strong>\u00a0<strong>will be held in Madrid (Spain), from 18 to 20 November 2026<\/strong>. The IberSPEECH event \u2013the eighth of its kind using this name\u2013 brings together the XIV Jornadas en Tecnolog\u00edas del Habla and the X Iberian SLTech Workshop events.<\/p>\n<p style=\"text-align: justify;\">Following with the tradition of previous editions, IberSPEECH\u20192026 will be a three-day event, planned to promote interaction and discussion. 
There will be a wide variety of activities: technical paper presentations, keynote lectures, project presentations, laboratory activities, recent PhD theses, entrepreneurship and discussion panels, and awards for the best theses and papers.

GOLD SPONSOR

[Gold sponsor logo]

Scientific Areas and Topics

We welcome contributions across a broad range of speech, language, and communication research, including (but not limited to) the following areas and topics:

Speech Technology and Applications
- Spoken language generation and synthesis
- Speech and speaker recognition
- Speaker diarization
- Speech enhancement
- Speech processing and acoustic event detection
- Spoken language understanding
- Spoken language interfaces and dialogue systems
- Systems for information retrieval and information extraction from speech
- Systems for speech translation
- Applications for aged and handicapped persons
- Applications for learning and education
- Emotion recognition and synthesis
- Language and dialect identification
- Speech, voice, and hearing disorders
- Speech technology and applications: other topics

Human Speech Production, Perception and Communication
- Linguistic, mathematical, and psychological models of language
- Phonetics, phonology, and morphology
- Pragmatics, discourse, semantics, syntax, and lexicon
- Paralinguistic and non-linguistic cues (e.g. emotion and expression)
- Human speech production, perception, and communication: other topics

Natural Language Processing (NLP) and Applications
- Natural language generation and understanding
- Retrieval and categorization of natural language documents
- Mono- and multi-document summarization
- Extraction and annotation of entities, relations, and properties
- Creation and processing of ontologies and vocabularies
- Machine learning for natural language processing
- Shallow and deep semantic analysis: textual entailment, anaphora resolution, paraphrasing
- Multilingual processing for information retrieval and extraction
- Natural language processing for information retrieval and extraction
- Natural language processing (NLP) and applications: other topics

Speech, Language, and Multimodality
- Multimodal interaction
- Sign language
- Handwriting recognition
- Audiovisual language processing
- Speech, language, and multimodality: other topics

Resources, Standardization, and Evaluation
- Spoken language resources, annotation, and tools
- Spoken language evaluation and standardization
- NLP resources, annotation, and tools
- NLP evaluation and standardization
- Multimodal resources, annotation, and tools
- Multimodal evaluation and standardization
- Resources, standardization, and evaluation: other topics

Paper Submission

Regular papers must be written in English and submitted online. Papers must be submitted in PDF following the Interspeech 2026 format (https://drive.google.com/file/d/1Nq1j_1AfOtadLkBx71-vLVLxZOaiAnzs/view?usp=drive_link). Papers may have a maximum of 5 pages, with the 5th page reserved exclusively for references and acknowledgments. There is no minimum length requirement for papers in the special sessions (project reviews and demos).
In line with Interspeech’s adoption of double-blind review, IberSPEECH submissions must be anonymized.

Upon acceptance, at least one author per paper will be required to register (full and early registration) and present the paper at the conference.

IMPORTANT DATES
- June 15, 2026: Initial submission (title, authors, abstract)
- June 22, 2026: Full paper submission
- September 11, 2026: Acceptance notification
- September 20, 2026: Camera-ready papers due

CONTACT
- General enquiries: iberspeech2026-general-chairs@iberspeech.tech
- Technical Programme questions: iberspeech2026-tpc-chairs@iberspeech.tech
- ALBAYZIN 2026 Evaluation Challenges: iberspeech2026-albayzin@iberspeech.tech

Acknowledgement

The Microsoft CMT service was used for managing the peer-reviewing process for this conference. This service was provided for free by Microsoft, which bore all expenses, including costs for Azure cloud services as well as for software development and support.
reading time\" \/>\n\t<meta name=\"twitter:data1\" content=\"10 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"WebPage\",\"@id\":\"https:\/\/iberspeech.tech\/2026\/call-for-papers\/\",\"url\":\"https:\/\/iberspeech.tech\/2026\/call-for-papers\/\",\"name\":\"CALL FOR PAPERS - Iberspeech\",\"isPartOf\":{\"@id\":\"https:\/\/iberspeech.tech\/2026\/#website\"},\"datePublished\":\"2024-03-06T18:03:52+00:00\",\"dateModified\":\"2026-04-09T09:19:52+00:00\",\"breadcrumb\":{\"@id\":\"https:\/\/iberspeech.tech\/2026\/call-for-papers\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/iberspeech.tech\/2026\/call-for-papers\/\"]}]},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/iberspeech.tech\/2026\/call-for-papers\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/iberspeech.tech\/2026\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"CALL FOR PAPERS\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/iberspeech.tech\/2026\/#website\",\"url\":\"https:\/\/iberspeech.tech\/2026\/\",\"name\":\"Iberspeech\",\"description\":\"\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/iberspeech.tech\/2026\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"CALL FOR PAPERS - Iberspeech","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/iberspeech.tech\/2026\/call-for-papers\/","og_locale":"en_US","og_type":"article","og_title":"CALL FOR PAPERS - Iberspeech","og_description":"Call for papersIberSPEECH\u20192026\u00a0will be held in Madrid (Spain), from 18 to 20 November 2026. The IberSPEECH event \u2013the eighth of its kind using this name\u2013 brings together the XIV Jornadas en Tecnolog\u00edas del Habla and the X Iberian SLTech Workshop events. Following with the tradition of previous editions, IberSPEECH\u20192026 will be a three-day event, planned [&hellip;]","og_url":"https:\/\/iberspeech.tech\/2026\/call-for-papers\/","og_site_name":"Iberspeech","article_modified_time":"2026-04-09T09:19:52+00:00","twitter_card":"summary_large_image","twitter_misc":{"Est. 
reading time":"10 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"WebPage","@id":"https:\/\/iberspeech.tech\/2026\/call-for-papers\/","url":"https:\/\/iberspeech.tech\/2026\/call-for-papers\/","name":"CALL FOR PAPERS - Iberspeech","isPartOf":{"@id":"https:\/\/iberspeech.tech\/2026\/#website"},"datePublished":"2024-03-06T18:03:52+00:00","dateModified":"2026-04-09T09:19:52+00:00","breadcrumb":{"@id":"https:\/\/iberspeech.tech\/2026\/call-for-papers\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/iberspeech.tech\/2026\/call-for-papers\/"]}]},{"@type":"BreadcrumbList","@id":"https:\/\/iberspeech.tech\/2026\/call-for-papers\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/iberspeech.tech\/2026\/"},{"@type":"ListItem","position":2,"name":"CALL FOR PAPERS"}]},{"@type":"WebSite","@id":"https:\/\/iberspeech.tech\/2026\/#website","url":"https:\/\/iberspeech.tech\/2026\/","name":"Iberspeech","description":"","potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/iberspeech.tech\/2026\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"}]}},"_links":{"self":[{"href":"https:\/\/iberspeech.tech\/2026\/wp-json\/wp\/v2\/pages\/96","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/iberspeech.tech\/2026\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/iberspeech.tech\/2026\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/iberspeech.tech\/2026\/wp-json\/wp\/v2\/users\/4"}],"replies":[{"embeddable":true,"href":"https:\/\/iberspeech.tech\/2026\/wp-json\/wp\/v2\/comments?post=96"}],"version-history":[{"count":36,"href":"https:\/\/iberspeech.tech\/2026\/wp-json\/wp\/v2\/pages\/96\/revisions"}],"predecessor-version":[{"id":1675,"href":"https:\/\/iberspeech.tech\/2026\/wp-json\/wp\/v2\/pages\/96\/revisions\/1675"}],"wp:attachment":[{"href":"https:\/\/iberspeech.tech\/2026\/wp-json\/wp\/v2\/media?parent=96"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}