Ok, never thought I'd be doing this...
A thread on testing software (which is my job).
There should be a Product Requirements Document. This should be agreed between all the relevant parties.
This would detail all the areas to be covered by the software.
This document should be the basis for creating "Stories" and things like a Test Strategy Document. There would also be agreed Acceptance Criteria, meaning the Department should have tested the finished product against those criteria before accepting it.
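To make "acceptance criteria" concrete: a criterion can often be written down as an automated check. A minimal sketch, assuming a hypothetical grade_for() function (the H1-H8 cutoffs are the standard Leaving Cert higher-level bands; everything else here is invented for illustration):

```python
# Minimal sketch of an automated acceptance check (pytest style).
# grade_for() is hypothetical; the cutoffs follow the standard
# Leaving Cert higher-level bands (H1 = 90-100, H2 = 80-89, etc.).

def grade_for(percentage: float) -> str:
    """Map a percentage mark to a grade band."""
    bands = [(90, "H1"), (80, "H2"), (70, "H3"), (60, "H4"),
             (50, "H5"), (40, "H6"), (30, "H7")]
    for cutoff, grade in bands:
        if percentage >= cutoff:
            return grade
    return "H8"

def test_boundary_marks_fall_into_the_higher_band():
    # Acceptance criterion: a mark exactly on a boundary gets the
    # higher grade, e.g. exactly 90 is an H1, not an H2.
    assert grade_for(90) == "H1"
    assert grade_for(89.9) == "H2"
    assert grade_for(29.0) == "H8"
```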
A "Story" means breaking down the programming to smaller manageable pieces.
Each "Story" would have details and associated test cases. They should be peer reviewed.
Each Story should then be tested and demonstrated as working.
(assuming that the Dept sent the correct data set)
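For illustration, a story-level test case might look like this (pytest style; the function and the data are hypothetical, chosen only to show the shape of such a test):

```python
# A hedged example of a story-level test case. The function and
# figures are made up purely to show what such a test looks like.

def strongest_subjects(results: dict[str, float], n: int = 2) -> list[str]:
    """Return the n subjects with the highest marks."""
    return sorted(results, key=results.get, reverse=True)[:n]

def test_strongest_subjects_picks_highest_marks():
    results = {"Maths": 55.0, "English": 72.0, "History": 81.0}
    # A sort in the wrong direction would silently return the
    # weakest subjects instead; this test would catch that.
    assert strongest_subjects(results) == ["History", "English"]
```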
Everything should then be put together and integrated, followed by "end to end" test scenarios. There should be an agreed set of tests to be run at this stage.
Again: was all the required data sent? Was a test matrix of scenarios to be tested agreed and supplied?
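A test matrix can be as simple as a table of scenarios driven through one end-to-end check. A rough sketch (run_pipeline() and every figure here are invented stand-ins for the real system):

```python
# Rough sketch of a scenario matrix driving an end-to-end check.
# run_pipeline() is a hypothetical stand-in for the whole
# data-in, grade-out flow.
import pytest

def run_pipeline(student: dict) -> str:
    """Placeholder for the full end-to-end pipeline."""
    return "H1" if student["mark"] >= 90 else "H2"

SCENARIOS = [
    # (description, input record, expected output)
    ("typical student",     {"mark": 95.0}, "H1"),
    ("boundary mark",       {"mark": 90.0}, "H1"),
    ("just below boundary", {"mark": 89.9}, "H2"),
]

@pytest.mark.parametrize("desc,student,expected", SCENARIOS)
def test_end_to_end_scenarios(desc, student, expected):
    assert run_pipeline(student) == expected
```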
Once the vendor and the Department agree that the software is up to scratch, it should be sent to a pre-production environment. There it should be tested again...
Which leads me to wonder: were many in the Department actually working on this software?
Was it verified independently by the Department, or did the Dept just take the supplier's word for it?
Anyway, the point of all this is that the issue with the Calculated Grades software is something that shouldn't have been missed in testing, and that there were various stages of testing where it should have been caught.
Ultimately, it's the Department's fault.