The new servers didn’t always work. They usually worked – but at unpredictable times they stopped. And when they stopped, they couldn’t be restarted remotely: a human being had to physically push the reset button. The customer had a ‘lights-out’ datacenter, i.e. no onsite staff, so when a server did fail, someone had to drive to the datacenter. The customer was not happy.
“They are being unreasonable,” our support team said.
Normally I would have agreed: they are usually unreasonable. But this time: “They just want what they bought to work,” I replied. “I think that is very reasonable.”
“The problem is intermittent and hard to replicate.”
“I know. That’s why you’re fixing it and not me.”
It became clear that getting a fix would be neither straightforward nor rapid. Because the old servers didn’t have this problem, we agreed to temporarily replace the new servers with old ones. It wasn’t easy to locate replacements: the old servers had gone EOL (end-of-life) many months earlier.
After an ‘inordinate delay’ (customer evaluation), we located 170 servers on the after-market. After we refurbished and certified them, we shipped them, at our expense, to about 13 different countries. There were two small glitches:
- The problem with the new servers was fixed just after the old servers were shipped.
- It was impossible for us to pay for the final leg of shipping in Mumbai. We agreed to reimburse the customer.
Unaware of the impending doom, I basked in the knowledge that we had solved the problem.
But: To Be Continued.
The photo is from India.