
Visa’s cockup put down to ‘very rare’ data centre crash

Wed, 20th Jun 2018

Friday afternoon and evening on 1 June saw tens of millions of Visa payments across Europe fail to be processed.

51.2 million to be exact, undoubtedly causing carnage among the profusion of after-work drinks across the continent.

Visa Europe CEO Charlotte Hogg explained in an 11-page letter to the chair of the House of Commons Treasury Select Committee, Nicky Morgan, that the glitch came down to a malfunctioning switch.

The company maintains two redundant data centres in the UK, either one of which can handle all of Visa's transactions in Europe. In normal operation the two sites run in sync, so that one can take over from the other seamlessly in the event of a technical problem.

"Each centre has built into it multiple forms of backup in equipment and controls. Specifically relevant to this incident, each data center includes two core switches (a piece of hardware that directs transactions for processing) – a primary switch and a secondary switch," says Hogg.

"If the primary switch fails, in normal operation the backup switch would take over. In this instance, a component within a switch in our primary data center suffered a very rare partial failure which prevented the backup switch from activating.

The result was a lengthy period in which Visa had to isolate the malfunctioning system at the primary data centre, which was stubbornly attempting to synchronise messages with the backup site. This left a backlog of messages at the second facility that hampered its ability to process transactions.
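Stripped of the specifics, what Hogg describes is a health-check-and-failover arrangement that depends on the primary switch correctly reporting its own failure. The short Python sketch below is purely illustrative – the class, field and function names are assumptions, not anything Visa or its vendor has published – but it shows how a "partial" failure can defeat that logic: the faulty switch keeps reporting itself healthy, so the backup never takes over.

```python
# Hypothetical sketch of the failover logic described above: two core
# switches, with the secondary meant to take over when the primary fails a
# health check. All names and checks here are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class CoreSwitch:
    name: str
    healthy: bool = True
    reporting_healthy: bool = True  # a partial failure can leave this True


def select_active_switch(primary: CoreSwitch, secondary: CoreSwitch) -> CoreSwitch:
    """Route transactions through the primary unless it reports a failure."""
    # The rub in the 1 June incident: a partial failure meant the primary
    # kept *reporting* healthy, so a check like this never triggered failover.
    if primary.reporting_healthy:
        return primary
    return secondary


# Partial failure: the switch is broken but still claims to be healthy,
# so the backup is never activated and traffic keeps hitting the bad path.
primary = CoreSwitch("primary", healthy=False, reporting_healthy=True)
secondary = CoreSwitch("secondary")
print(select_active_switch(primary, secondary).name)  # -> "primary"
```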

"Due to this complexity and the very rare partial failure of the switch, a number of key steps were taken throughout the afternoon – including turning off software applications at the primary site and cleaning up message backlogs at the secondary site by both manual and automatic means," Hogg says.

The whole incident, as confirmed by Visa, lasted from 14:35 BST on 1 June until 00:45 on 2 June, although the bulk of the glitch was said to have been resolved by 20:15 on 1 June.

The global giant confirms it is now working to prevent similar problems in the future, including removing the components of the switch that malfunctioned and replacing them with new parts supplied by the manufacturer.

The two companies are now conducting a forensic analysis of the switch to determine exactly what went wrong, and from the initial findings Visa can assert (many times) that it was a 'very rare' failure.

"The manufacturer has provided us with recommendations on software for automating the monitoring and shutdown of the switch in the event of a similar type of malfunction," Hogg says.

"When operational, the programme will continuously review key components in the switch to track their availability. If the same errors are detected, the programme will automatically take the component or switch out of operation.

Visa has asked accounting firm EY to undertake a review of the incident and is offering compensation to all those affected.
