What Caused the Outage?
On 03 July 2010, IBM received an alert notification of unstable communication between DBS's storage system and mainframe. Upon approval granted by DBS, IBM despatched its field engineer to the DBS data centre at 11.06am (DBS, 2010). Two more alert notifications were generated on 05 July 2010 at 2.50am, 40 hours after the first. Given that such errors could threaten data integrity and banking services, DBS should have tracked the issue closely or requested a root cause analysis from its vendor, IBM.
But DBS Group chief executive Piyush Gupta appeared to lay the blame for the outage on the vendor, IBM. He said that the IBM service crew had relied on outdated procedures to replace a defective storage component within the disk storage sub-system connected to the bank's mainframe (Winston, 2010). Because of this oversight, a routine replacement escalated into a complete system outage.
How Could DBS Have Reacted?
DBS has admitted that its internal escalation process could have been more immediate, as the error was allowed to recur three times within 40 hours. The bank should also have questioned IBM for a more detailed overview of the replacement of the defective storage component before approving the change. This reveals a significant oversight on DBS's part and its heavy dependency on IBM.
In terms of communication, DBS and IBM technicians were despatched on 05 July at 3.40am to restore services, but DBS only started informing its staff about the issue at 6.30am, by which time complaints had already been received (Chee, 2010). DBS failed to disseminate information proactively and missed the opportunity to build trust with its customers.
Key Points for DBS
Gartner (Winston, 2010) recommends that DBS identify the key service levels in its outsourcing contracts that have an impact on the bank's business performance. And against the small number of...