Monday, July 15, 2013

A common support anti-pattern: the stale issue that comes back to haunt you

So, here's a scenario you'll recognize if you support users of (your) software or systems. An urgent issue is reported, and you get to work on addressing it. After a while, a workaround is discovered and, for now, the problem has gone away. Or, as also happens frequently, the problem goes away by itself.

As a diligent supporting organization, you might 'ping' the user once to figure out if they are happy, and perhaps you still have some outstanding questions for them (log files, packet traces, versions installed etc). But otherwise, both user and vendor move on to more pressing issues, and nobody gets to the bottom of it. It is not in most organizations' nature to focus on things that are not broken.

Time passes, and maybe a few months later, the customer is fuming, the issue is back, "and we reported this MONTHS ago, we have a 2 hour SLA, and it STILL isn't solved!" The blame is put squarely on the vendor, because the individual corporate employee most certainly isn't going to blame himself. It is just not done, and this is to be understood.

Meanwhile, you or your people dig out the old email exchange and note that "well yeah, but you didn't get back to us on X!", or the weaker variant "the workaround worked, and you went silent on it".

Escalation ensues, and it is noted that a more professional support organization would've kept nagging about the open question, or kept working on (what appeared to be) the low-priority remaining issue.

By now everybody is seriously pissed off at each other.

This anti-pattern is well known, and occurs everywhere. A common first-order approach to prevent it is for supporting organizations to attempt to proactively close issues that aren't progressing.

This sometimes works, but most often it makes the customer feel that their vendor is trying to artificially "solve" the issue, and not actually help.

Additionally, it doesn't feel good for people to have to agree to unsolved issues being 'closed', or even marked 'solved'. In corporate environments, such things might come back to haunt the employee ('why did you sign off on that?!').

So, often a low-level stalemate develops where the customer is unwilling to spend time with the vendor to get to the bottom of the issue, but is also unwilling to agree to close it. And a few months down the road, BOOM: "this problem STILL isn't solved, and we've been at it for MONTHS!".

Neither side wants this, but it keeps on happening, and it keeps on pissing people off. It is human nature and corporate realities working against us.

So - what is the solution? Clearly we need some status that is acceptable to all sides, but saves a lot of shouting later on. One suggested way to achieve this is to add another status flag to an issue: 'Paused'. This does not in any way imply the issue is solved, or unimportant, or that anyone has agreed the fault is on their side.

It means what it says - this issue is paused. And if later on the problem becomes urgent again, it can be unpaused. Of course, the people that now shoulder more of the blame won't be too happy about it, but at least there is a reflection of the fact that *nobody* was working on it.

Supporting organizations meanwhile should remind supported users to respond to outstanding questions, and note that it is perfectly fine to agree to 'pause' the issue. This might even happen automatically after a few reminders.
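
To make this concrete, here is a minimal Python sketch of what such a tracker state could look like. The class, status names and auto-pause threshold are my own invention for illustration, not a reference to any existing ticketing system:

    from dataclasses import dataclass, field
    from datetime import datetime
    from enum import Enum, auto

    class Status(Enum):
        OPEN = auto()
        WORKAROUND = auto()  # mitigated, but the root cause is still unknown
        PAUSED = auto()      # by mutual agreement, nobody is working on this
        CLOSED = auto()

    @dataclass
    class Issue:
        title: str
        status: Status = Status.OPEN
        reminders_sent: int = 0
        history: list = field(default_factory=list)

        def _log(self, event: str) -> None:
            self.history.append((datetime.now(), event))

        def remind(self, auto_pause_after: int = 3) -> None:
            """Nudge the user about outstanding questions; after a few
            unanswered reminders, pause automatically rather than close."""
            self.reminders_sent += 1
            self._log(f"reminder #{self.reminders_sent} sent")
            if self.reminders_sent >= auto_pause_after:
                self.pause("no response to repeated reminders")

        def pause(self, reason: str) -> None:
            """Paused is not solved: no admission of fault, nothing signed off."""
            self.status = Status.PAUSED
            self._log(f"paused: {reason}")

        def unpause(self) -> None:
            """The problem is back; resume with the full history intact."""
            self.status = Status.OPEN
            self._log("unpaused")

The point of the sketch is the unpause() path: nothing is lost, nothing was signed off, and the history shows exactly when everybody stopped working on it.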

So, summarizing: instead of angering people by closing issues they haven't agreed are solved, we add a 'Paused' status. When the problem resurfaces, we can all get to work faster, because we skip the mutual screaming about issues being left unresolved for months 'while we have a 2 hour SLA with you!'.

PS: And yes, if you really think this post is about you.. it might well be ;-)

Sunday, July 14, 2013

A "null result" bonus to improve science & science reporting

Every week we get at least one, but usually more, hype-filled press release or news item about how certain foods, medicines or lifestyle choices will either kill or save you. The vast majority of these weekly claims don't turn out to hold water.

As examples lifted from this actual week, I offer:

If you actually spend time on the press releases and underlying papers (if they even exist!), you often discover that:
  • there is no actual (new) research to back up the claims, or
  • that the claims bear scant relation to what is in the paper, or
  • that the data has been massaged heavily until some correlation popped out (and a massaged, weak correlation is pretty far from being proof of causation).
These days, the discerning internet user can find sites that take the time to debunk over-hyped claims, but the brave souls dissecting the research behind the headlines will always be 'late', and their corrections don't make Fox News or the New York Post.

So, the average person worried or interested in her health is bombarded by multiple confusing and conflicting headlines per week. This does nothing to improve our actual health, and in all likelihood worsens it ("forget that, the story changes every month").

What is behind this avalanche of weak or even bogus results in the news? It goes like this. Scientists perform expensive research, and very often, nothing spectacular comes out. Healthy people are healthier, people that exercise have lower blood pressure, folks that do things in moderation do lots better etc. 

Scientists are people too, and they have to justify their work, so they start the first round of trawling the data. And if you've measured enough variables, some interesting correlation always pops up! To counter this, a Bonferroni correction should be applied to the statistics, but not doing so is a common, and 'helpful', oversight. I mean, the research was expensive enough, something should come out!
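
To see why trawling without correction is so 'helpful', here is a minimal Python sketch (my own illustration, not from any actual paper) that tests 100 hypotheses on pure noise. Under a true null hypothesis, p-values are uniformly distributed on [0, 1], so a plain random number stands in for each honest-but-null test:

    import random

    random.seed(42)

    TESTS = 100   # hypotheses trawled from one expensive dataset
    ALPHA = 0.05

    # Under a true null hypothesis, a p-value is uniformly distributed
    # on [0, 1], so random.random() simulates one honest-but-null test.
    p_values = [random.random() for _ in range(TESTS)]

    naive_hits = [p for p in p_values if p < ALPHA]
    # Bonferroni: demand p < ALPHA / TESTS, keeping the family-wise
    # false-positive rate at ALPHA across all TESTS comparisons.
    corrected_hits = [p for p in p_values if p < ALPHA / TESTS]

    print(f"{len(naive_hits)} of {TESTS} pure-noise tests hit p < {ALPHA}")
    print(f"{len(corrected_hits)} survive the Bonferroni threshold of {ALPHA / TESTS}")

On a typical run, a handful of these hundred noise-only tests dip below 0.05, while (usually) none survive the corrected threshold of 0.0005.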

So we have a claim, for example: 'Overweight post-menopausal women with pre-diabetes who eat fifth quintile amounts of avocados have lower insulin resistance'. This is typically what you'll find in a research paper, and such a claim (had it survived Bonferroni correction, which it likely would not have) might actually be worth reporting. Meanwhile, the claim is flagged with 'p < 0.05', which means the result is statistically significant; in actual effect, the impact can still be clinically insignificant (and often is).
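
The gap between statistical and clinical significance is easy to demonstrate. The sketch below (again my own, with made-up numbers) computes the expected z statistic and two-sided p-value for a two-group comparison:

    import math

    def expected_z(effect_sd: float, n_per_group: int) -> float:
        """Expected z statistic for a two-sample comparison where the
        true effect is `effect_sd` standard deviations (unit variance)."""
        standard_error = math.sqrt(2.0 / n_per_group)
        return effect_sd / standard_error

    def two_sided_p(z: float) -> float:
        """Two-sided p-value for a z statistic, via the normal CDF."""
        return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))

    # A 0.01 standard deviation drop in insulin resistance is clinically
    # meaningless, yet with 100,000 people per arm it clears p < 0.05:
    z = expected_z(0.01, 100_000)
    print(f"z = {z:.2f}, p = {two_sided_p(z):.3f}")  # z = 2.24, p = 0.025

With a sample that large, almost any non-zero difference becomes 'statistically significant'; the p-value says nothing about whether the effect is big enough to matter.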

Next, the research institute also wants to look good, so its PR department takes the paper, speaks with the scientists and writes a press release: "Benefit of eating avocados on insulin resistance, preventing diabetes". Note that they lopped off all the qualifications, plus extrapolated the claim into preventing disease.

Finally, journalists fed this press release are eager for clicks on their articles, so they liven up the press release with some further human interest quotes and headline the piece: 'Scientists say: Eat avocados to ward off diabetes'. 

And there we go - from an investigation with no really significant results, we end up with a pretty stonking headline with incorrect advice. 

So what do we do?

Here's an odd idea. Zappos, an online shoe store, has a 'quit now' bonus for new hires. If after training you decide to leave, the company pays you $3000. The net effect of this is that people have an incentive to leave if they feel Zappos is not going to be a great place for them. 

And, although I don't know how it works in practice, in theory this should be a big win - anyone who would have stuck around against their will but is thus enticed to leave will 1) not be a drag on Zappos 2) be able to move on to better pastures all the quicker.

The relevance to our scientists feeling pressured to publish should be obvious. Launch a fund, perhaps at department or institute level, or make it a national prize, for researchers honest enough to claim 'no significant results' from their research if there were none.

Compare the (at best misleading) headline 'Eat avocados to ward off diabetes' with 'Different levels of fruit consumption did not meaningfully change levels of diabetes among 3500 randomly selected staff of healthcare institutes'. 

The latter headline would admittedly not make the evening news. But it would allow investigators to move on to new research, and not further confuse the public. And very importantly, it would also make sure that even negative or null results make it to (the academic) press. 

As Ben Goldacre of www.alltrials.net often points out, not reporting unwelcome results leads to a statistical excess of positive results, thus "proving" that ineffective treatments actually work!
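
This file-drawer effect can also be simulated. In the sketch below (numbers invented for illustration), 200 trials of a treatment with zero true effect each produce a noisy measurement; if only the flattering ones get written up, the 'published literature' shows a solid positive effect:

    import random
    import statistics

    random.seed(7)

    N_TRIALS = 200
    NOISE = 0.1  # standard error of each trial's measured effect size

    def observed_effect() -> float:
        """One trial of a treatment whose TRUE effect is zero: the
        measurement is pure sampling noise around 0."""
        return random.gauss(0.0, NOISE)

    effects = [observed_effect() for _ in range(N_TRIALS)]
    # Only flattering results reach a journal; the rest stay in the
    # file drawer.
    published = [e for e in effects if e > NOISE]

    print(f"mean effect over all {N_TRIALS} trials: {statistics.mean(effects):+.3f}")
    print(f"mean effect over the {len(published)} published trials: "
          f"{statistics.mean(published):+.3f}")

On a typical run, the full set of trials averages out near zero, while the published subset reports a healthy positive effect - exactly the statistical excess described above.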

Now, I admit that working out the details of this 'Zappos prize' would be daunting, and it would also require a significant fund to have any impact. It would need prestige too - scientists (who, as noted above, are people too) are less swayed by money than most.

But something has to change. Today, mediocre research grabs the headlines while researchers honest with themselves struggle to get their voices heard!

Your thoughts are more than welcome.