Dr Julie Vonwiller conceived the project in 2014 and began engaging an extensive global network to identify the leading capabilities needed to develop a prototype real-time, speech-to-speech translation system. Once the concept proved feasible and a number of key technical experts had expressed interest, the project was established in 2015 to develop a proof-of-concept prototype.
This initiative will develop a system that can be adapted for any region and language to provide rapid, real-time translation, offering vital assistance to aid agencies and first responders in disaster situations. It will complete the development of proof-of-concept software on a single small, hand-held, easy-to-use device that provides two-way speech-to-speech translation for deployment in emergencies where the responder does not speak the language of the disaster region.
The Humanitarian Babel Fish project offers an immense opportunity to radically improve the response capabilities of disaster and humanitarian efforts. The potential impact is huge: the result could be a widely adopted tool used by many disaster response agencies. Though focused on the Pacific, one of the most disaster-prone areas on the planet, the project is leveraging global expertise to benefit our region.
A real-time, reliable, speech-to-speech translation system would transform the vital first step of ‘assessment’ in every disaster response and, if expanded, could prove a useful tool in every humanitarian response. From disaster relief to long-term Millennium Development Goals, this could be a bold and exciting solution.
Discussions with front-line agencies such as RedR Australia and Habitat for Humanity have demonstrated that there is a real and urgent need for translation support of the type being proposed. At the same time, it is clear that the key elements of state-of-the-art technology are now available to support the development.
What is needed next is a working prototype proof-of-concept system to demonstrate to all stakeholders concerned with disaster relief. This should help activate global interest from commercial systems providers and funding agencies such as the United Nations to facilitate widespread rollout of field-ready systems.
Governments and NGOs are actively responding to disasters, and a vital element of effective disaster relief is human communication. Agencies that respond to earthquakes, tsunamis, cyclones, floods and typhoons to assist the affected people frequently enter situations where they do not speak the local language. This inability of relief workers to communicate in the local language is compounded by the environment in which the worker and the local find themselves: the situation may be without infrastructure, noisy and emotionally charged, and it involves spontaneous speech rather than written language.
Interpreters for the local languages are often in limited supply, and those who are available are thinly spread. Effective and immediate communication is vital to prevent the after-effects of a disaster from worsening when medical and civil problems go unaddressed. Our project addresses this challenge.
As the prototype stage has continued, broader humanitarian applications have become apparent; the project intent has always been to develop a system easily accessible by all humanitarian relief organisations. The project's focus on the ‘needs assessment’ phase of disaster relief could also benefit relief agencies working on long-term programs, such as those aligned with specific Millennium Development Goals (MDGs).
The project uses existing capabilities to develop a two-way hand-held speech-to-speech translation device in languages used in known disaster areas. Emergency response personnel will be deployed with these devices.
The devices will be lightweight and trained specifically to recognise the input language in domain specific scenarios such as medical and civil emergencies. The software will operate independently of cloud-based network services, recognising that such facilities are often disabled in natural disasters.
The device will (a) take the local language as input; (b) convert the spoken words into local-language text through Automatic Speech Recognition (ASR); (c) translate the resulting local-language text into English text via machine translation (MT); and (d) pass the resulting English text to a speech synthesiser (TTS) to output spoken English. This structure is reversible, converting English speech to the local language.
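The pipeline above can be sketched in a few lines. This is a minimal illustration of the ASR → MT → TTS chain and its reversibility, not the project's actual implementation: the `asr`, `mt` and `tts` functions below are hypothetical stand-ins (a real device would run trained offline models for each stage), and the toy Cebuano–English lexicon is invented for illustration only.

```python
def asr(audio: str) -> str:
    """Automatic Speech Recognition stand-in: spoken input -> source-language text.
    Audio is simulated here as a plain string."""
    return audio.strip().lower()

# Hypothetical Cebuano -> English word mappings, for illustration only.
TOY_LEXICON = {
    "tubig": "water",
    "tabang": "help",
}

def mt(text: str, lexicon: dict) -> str:
    """Machine Translation stand-in: naive word-by-word lookup.
    A real MT module would use a trained translation model."""
    return " ".join(lexicon.get(word, word) for word in text.split())

def tts(text: str) -> str:
    """Text-to-Speech stand-in: in the field this would synthesise audio;
    here we simply tag the text as spoken output."""
    return f"<spoken>{text}</spoken>"

def translate(audio: str, lexicon: dict) -> str:
    """Chain the three stages: ASR -> MT -> TTS."""
    return tts(mt(asr(audio), lexicon))

# Reversing the direction (English -> Cebuano) reuses the same structure
# with the lexicon inverted.
REVERSE_LEXICON = {eng: ceb for ceb, eng in TOY_LEXICON.items()}
```

Running `translate("Tabang tubig", TOY_LEXICON)` yields `<spoken>help water</spoken>`, while `translate("help", REVERSE_LEXICON)` yields `<spoken>tabang</spoken>`, showing how the same pipeline serves both directions.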
Small and portable translation devices customised for use in the field by first-line emergency personnel can potentially save valuable time, valuable funds and valuable lives. The ultimate beneficiaries of this project initiative will be the populations in areas where disasters have the most devastating effects, and aid workers. The IMF has calculated that more than 450 million people were affected by natural disasters worldwide in the two years prior to October 2012.
The development and use of this device will facilitate
Dr Julie Vonwiller has provided the project management, disaster scenarios, and linguistic consultancy; Professor Alan Black and Dr James Nealand advised on the technology development and consulted throughout; RedR Australia have provided access to their staff on the ground in the Philippines to facilitate the recording of data for model training, and made a place available for Dr Vonwiller on a Needs Assessment training program to learn first-hand about the difficulties of carrying out assessments in the field. Appen, our commercial partner, have provided their capability in collecting and transcribing the materials required to underpin the speech recognition, machine translation, and speech synthesis systems.
The pilot involves five phases. Phases 1 & 2 are complete. Phase 3 is current and progressing as planned. Phases 4 & 5 have not yet commenced.
Phase 1: define the priority disaster scenarios, the likely physical situations, and the key subject domains (medical and health, water and sanitation, infrastructure, etc.).
Phase 2: fieldwork in the Philippines to collect a speech and text database for a pilot in the project languages (English and Cebuano) necessary for training the speech recognisers and translation systems.
Phase 3: the rapid development of a working prototype system to demonstrate proof of concept. This involves integrating the speech recognition, machine translation and speech synthesis modules into a single system.
Phase 4: the prototype is demonstrated and field-tested among Cebuano and English speakers to gain initial user experience.
Phase 5: the development of a high level roadmap for future development and rollout of commercial systems in potential real-life disaster situations in developing countries.