Government agencies have shielded their high-risk IT projects from poor ratings in Federal reviews and have failed to implement review recommendations, an audit has revealed.
The Australian National Audit Office this week reported on the Government's Gateway review process, used to assess the delivery of 46 high-risk, high-value projects, including 34 ICT projects, since 2006.
Federal projects with a total cost of $30 million or more and programs valued over $50 million undergo "short, sharp and confidential reviews" at six key stages of the project lifecycle, known as Gates.
At each gate, the project's progress is labelled green, amber or red, with green indicating good progress, amber suggesting there are issues that need to be addressed, and red indicating significant issues.
Although the Audit Office reported that the Gateway review process had been effectively implemented, it suggested that agencies were able to avoid red and amber ratings by moving the goal posts and deferring reviews.
Until mid-2009, reviewers gave a project a "red" light if it was “critical to the overall success of the project that the issues raised in this review are addressed before the project proceeds”.
The "red" rating was later redefined as a looser determination that "effective and timely delivery" of the project outcomes was “in doubt” and that there were "major issues" requiring "urgent action".
Under the original criteria, about one in every five reviews resulted in a red rating.
No Gateway reviews have raised a red light in the past two years.
The Audit Office noted that participation in the Gateway review process was no guarantee of success in meeting specified project objectives.
“At least three of the nine projects that completed the full suite of Gateway reviews up to 30 June 2011 were not completed on-time and on-budget and/or did not deliver the outcomes expected when funding was approved,” it reported.
Deferrals and recommendations
The audit report also revealed that agencies often deferred Gateway reviews of their projects, with about a third of all planned reviews being rescheduled.
Some reviews were scheduled and rescheduled up to six times, with long delays between reviews.
Nine of the 29 projects still ongoing as at 30 June 2011 had not been the subject of a review for at least 12 months, including one project last reviewed more than three years earlier.
The Audit Office also noted that the duration between reviews was likely to increase as a project progressed through the sequence of Gates.
The audit report did not identify the projects that it placed under scrutiny.
Although over 90 per cent of senior officers responsible for projects said Gateway recommendations would improve their project’s outcomes, few fully implemented those recommendations in a timely manner.
Of the 106 instances in which an earlier Gateway review had been conducted, the previous recommendations had been fully implemented in just 28 per cent of cases, the Audit Office found.
In the remaining 72 per cent of cases, subsequent reviews found that recommendations had been only partially implemented.
Additionally, the Audit Office noted that several Gateway review reports conveyed the misleading impression that previous recommendations had been implemented; the appendices of those reports showed that some recommendations still required further work.
Few agencies could provide the Audit Office with documentation demonstrating regular and systematic follow-up of Gateway recommendations.
Some ‘Actions Taken’ reports appeared to have been prepared only after the agency received the Audit Office's request for documentation.
As of 30 June 2011, a total of 150 Gateway reviews had been conducted across 46 ICT, infrastructure and procurement projects. The reviewed projects were managed by 23 agencies and were together worth more than $17 billion.