No launch date in sight for govt's Nadia virtual assistant

NDIA still unsure of how to use bot.

The government's Nadia virtual assistant, unveiled to much fanfare in February on the back of a partnership with Cate Blanchett, has no set release date and needs "a lot more testing" before it can be released to the public.

The Department of Human Services is building the Nadia bot, which combines a human face and voice with cognitive intelligence, for the National Disability Insurance Agency (NDIA) to help handle the 8000 calls it receives to its hotline each week.

The virtual assistant will, once complete, be able to speak, write and chat online with people wanting to know more about the national disability insurance scheme (NDIS).

When the partnership with Cate Blanchett was publicised in February, the NDIA said Nadia would operate in a trial environment for 12 months before it became fully operational.

But it revealed in senate estimates last week that it had no go-live date set for Nadia, and still had to work out how the virtual assistant would actually be used.
 
NDIA COO Grant Tidswell said the agency needed to properly "think through" where Nadia would sit within its various means of communications.
 
"Our plan is to develop a channel strategy to say where this capability fits with all the other things that we are doing—our website, our portal improvements, our website improvements, social media platforms and the like," he said.
 
"So it has to be seen as part of that total picture, rather than just a thing sitting to the side."
 
He said the agency had not set a date for when Nadia would become available to the public.
 
"There was never an original go-live date. There was a 'get started' introducer as a trainee. That was about setting up the technology so that she could also be learning. The position of the agency always was that that would continue to be assessed before we went live."
 
Response lag
 
DHS CIO Gary Sterrenberg revealed one of the early issues the agency had faced in its development of Nadia was latency between when a person had finished talking and Nadia started to respond.
 
The department has been toying with IBM's Watson cognitive computing platform as one of the 12 platforms making up Nadia.
 
"You can imagine a human-like interaction whereby if I start speaking to you the first version of Nadia had to wait until I stopped talking. It would then take that data and change it from voice to text and then send it to the Watson platform. So there was a latency there of about 30 seconds," Sterrenberg said.
 
"You can imagine what a poor performance that would be. At the time IBM was working on a streaming technology that allowed it to send the voice as you were talking. As we moved on there is a number of platforms the department is having a look at. IBM is just one of them. As the technology matures we will be making the appropriate decisions about it."
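The difference between the two approaches Sterrenberg describes can be sketched in a few lines. This is an illustrative model only (the real-time factor and chunk size are assumptions, not figures from DHS): a batch pipeline transcribes the whole utterance only after the caller stops speaking, while a streaming pipeline transcribes audio chunk-by-chunk as it arrives, leaving only the final chunk to process at the end.

```python
# Illustrative latency model for batch vs streaming speech-to-text.
# Both constants are hypothetical, chosen only to show the shape of the problem.
REAL_TIME_FACTOR = 1.0   # assumed: transcription takes roughly as long as the audio
CHUNK_SECONDS = 1.0      # assumed streaming chunk size

def batch_response_latency(utterance_seconds: float) -> float:
    # Transcription begins only after the caller stops talking,
    # so the entire utterance is processed during the silence.
    return utterance_seconds * REAL_TIME_FACTOR

def streaming_response_latency(utterance_seconds: float) -> float:
    # Audio is transcribed while the caller is still speaking;
    # when they stop, only the final chunk remains to be processed.
    return min(utterance_seconds, CHUNK_SECONDS) * REAL_TIME_FACTOR

# A 30-second utterance: the batch pipeline leaves the caller waiting
# roughly as long as they spoke, matching the ~30-second lag described.
print(batch_response_latency(30))      # 30.0
print(streaming_response_latency(30))  # 1.0
```

Under these assumptions the batch delay grows with utterance length, which is why a streaming transcription path removes most of the perceived lag.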
 
He did not detail other solutions the department had trialled but said it was looking for a solution that had no latency.
 
The early phases of the Nadia development have involved working out the 3500 basic question-and-answer sets likely to be asked by someone seeking more information about the NDIS.

"It took a long time to put those base pairs together. Since then we have been working on the technology," Sterrenberg said.

"It is not just one technology. There are about 12 technologies linked together in this. As soon as that is ready we will be ready to take it to the next level. The technology is not yet at the level at which it can be used. There is still a lot more testing."
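At its simplest, the question-and-answer sets Sterrenberg describes amount to matching an incoming question against a bank of canned pairs. The sketch below is purely illustrative (the FAQ entries and word-overlap scoring are assumptions, not how Watson or Nadia actually work) but shows the basic lookup such a bank enables.

```python
# Toy FAQ matcher: picks the canned answer whose stored question shares
# the most words with the user's question. A crude stand-in for the
# NLP matching a platform like Watson would perform.
def best_answer(user_question: str, qa_pairs: list) -> str:
    def tokens(text: str) -> set:
        return set(text.lower().split())
    q_tokens = tokens(user_question)
    # Choose the pair with the largest word overlap.
    best = max(qa_pairs, key=lambda qa: len(q_tokens & tokens(qa[0])))
    return best[1]

# Hypothetical entries, not taken from the NDIA's actual question bank.
faq = [
    ("how do I apply for the ndis", "You can apply online or by phone."),
    ("what supports does the ndis fund", "The NDIS funds reasonable and necessary supports."),
]

print(best_answer("how can I apply", faq))  # You can apply online or by phone.
```

In practice, building 3500 such pairs is the slow, human part of the job; the matching technology sits on top of it.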

Sterrenberg said the technology that underpins Nadia had "a lot of promise" and could make a significant contribution to the NDIA, but cautioned that it was at a "very early" stage of development.

"I think a lot of people think of this technology as cognitive, and yet it has four or five things that are really important for us and particularly for the disability sector. One of the five is a natural language interface, so you can imagine the improved accessibility in using technology when you can speak to the technology like Siri rather than type, particularly if you have a disability that does not allow you to use your arms in that way," Sterrenberg said.

"The second one is around machine learning, where the system is able to learn from the multiple interactions and provide better advice. The other thing it does which is quite unique is that it has multilanguage capability. It is able to speak in 32 different languages.

"So the technology itself is not just the visual of the virtual avatar; it has significant capabilities to improve accessibility for those who are more vulnerable than us. The promise is huge in terms of access, choice and control in how you would want to interact with a government agency."

One very early potential use case for Nadia could be in a service centre, similar to a kiosk, Sterrenberg suggested.

But such decisions would be up to the NDIA, and there would need to be "a lot more testing" before the virtual assistant can be released to the public, he said.

"As a department, our advice has always been to test it on our own to make sure this technology supports our own staff first. At some later stage it will be ready to be used externally."

Copyright © iTnews.com.au . All rights reserved.