IT Support: What happens when turning it off and on doesn't work?

Exploring ways to bridge the gap between frustrated users and the IT helpdesk

By Dave White | May 11, 2018

In the early days of technology adoption, inexperienced users would routinely seek help from an IT Helpdesk for commonplace errors. That has changed. Today, the consumerization of IT has led to increased expectations in the workplace for always-on, high-performing IT services, and business users have grown increasingly frustrated at responses from IT which suggest a lack of knowledge, empathy or both. So how do we bridge this difficult divide?

The catchphrase of a stereotypically weary IT helpdesk technician in the UK sitcom “The IT Crowd” resonates with many – “Have you tried turning it off and back on again?”. Incident Management within ITIL, a widely adopted framework for IT Service Management, is focussed on restoring normal service as quickly as possible rather than on root cause analysis or service improvement. So the humble workstation reboot is often the quickest and easiest way, and sometimes the only way, to restore normal service.

But what happens when an issue persists after a workstation reboot? Let’s look at a typical incident management scenario around an issue that affects a single user, from both the end user and IT perspective, and then see if we can find a way to improve the engagement.

Typical incident

Once an incident has been identified and reported, the IT helpdesk will gather and document additional information regarding symptoms of the issue and check some variant of a Known Error Database (KEDB) to see if similar issues and associated solutions have previously been encountered and documented. If known errors are found which seem to match the user’s description, the helpdesk may suggest trying to apply known resolutions.
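
As a rough illustration of this lookup step, the sketch below matches reported symptom keywords against a small, hypothetical KEDB. The entry structure, field names and matching logic are assumptions made for illustration, not the API of any particular ITSM tool.

```python
# Illustrative sketch of a known-error lookup; the KEDB structure,
# field names and keyword matching are hypothetical examples only.
from dataclasses import dataclass


@dataclass
class KnownError:
    id: str
    symptoms: set       # keywords describing the documented symptoms
    workaround: str     # documented resolution or workaround


def match_known_errors(reported_symptoms, kedb, min_overlap=2):
    """Return KEDB entries whose symptom keywords overlap the user's report."""
    report = {s.lower() for s in reported_symptoms}
    matches = []
    for entry in kedb:
        overlap = len(report & {s.lower() for s in entry.symptoms})
        if overlap >= min_overlap:
            matches.append((overlap, entry))
    # Strongest matches first, so the helpdesk tries the most likely fix
    return [entry for _, entry in sorted(matches, key=lambda m: -m[0])]


kedb = [
    KnownError("KE-101", {"outlook", "crash", "startup"},
               "Start Outlook in safe mode and disable the faulty add-in."),
    KnownError("KE-204", {"vpn", "disconnect", "wifi"},
               "Update the VPN client and renew the DHCP lease."),
]

for entry in match_known_errors(["Outlook", "crash", "on", "startup"], kedb):
    print(entry.id, "->", entry.workaround)
```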

From the perspective of IT, this follows a process focussed on restoring normal service as quickly as possible. From the perspective of the end user, it may feel like a hit-or-miss guessing game, or a lack of knowledge on the part of the helpdesk technician involved, especially if the suggested solutions prove not to work.

If the suggested resolutions don’t work, the next step is escalation to a level 2 team with the expertise required to resolve the issue. If the problem has not been encountered previously, or cannot be identified in the KEDB, the level 2 engineer must gather additional evidence and investigate, collaborating with other technical teams as necessary.

From the perspective of IT, it is necessary to understand the problem so that a fix can be applied as quickly as possible. From a user’s perspective, this often involves significant disruption as IT attempts to reproduce the issue, gather logs, alter debug levels and conduct ancillary tests. I’ve come across end users who have either not reported issues or regretted reporting them because they would rather suffer the inconvenience of the original issue than the inconvenience of the resolution!

Once the problem has been identified, progressing to incident closure will sometimes involve applying a “quick fix” or workaround to restore normal service to the user, with further disruption required later to apply a permanent solution. The user’s service has been restored to normal, and IT will update the KEDB so that any future occurrences can be resolved more quickly by the helpdesk team without requiring escalation. And there ends one example of a typical incident management cycle.

Perception problems

Now let’s consider how an organisation can get visibility into its incident management processes and make improvements. There are many data-driven helpdesk metrics, such as initial response time and mean time to resolution, which can be used to form a Service Level Agreement (SLA) between IT and the business. IT management can track progress against these metrics and report back to the business on SLA compliance.
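
As a minimal sketch of how such metrics might be calculated, the snippet below derives initial response time and mean time to resolution from ticket timestamps and checks them against assumed SLA targets. The ticket fields and thresholds are hypothetical examples, not a prescribed schema.

```python
# Illustrative only: ticket fields and SLA thresholds are assumed examples.
from datetime import datetime, timedelta

tickets = [
    {"opened": datetime(2018, 5, 1, 9, 0),
     "first_response": datetime(2018, 5, 1, 9, 20),
     "resolved": datetime(2018, 5, 1, 13, 0)},
    {"opened": datetime(2018, 5, 2, 10, 0),
     "first_response": datetime(2018, 5, 2, 11, 30),
     "resolved": datetime(2018, 5, 3, 10, 0)},
]

RESPONSE_SLA = timedelta(minutes=30)   # assumed target for initial response
RESOLUTION_SLA = timedelta(hours=8)    # assumed target for resolution

response_times = [t["first_response"] - t["opened"] for t in tickets]
resolution_times = [t["resolved"] - t["opened"] for t in tickets]

# Mean time to resolution across all tickets
mttr = sum(resolution_times, timedelta()) / len(resolution_times)

# Count tickets that met both targets
within_sla = sum(1 for r, x in zip(response_times, resolution_times)
                 if r <= RESPONSE_SLA and x <= RESOLUTION_SLA)

print("Mean time to resolution:", mttr)
print(f"SLA compliance: {within_sla}/{len(tickets)} tickets")
```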

While this will certainly help, it’s still the people, processes and technology involved that determine the user experience. Many IT teams therefore add a user experience rating system, typically a satisfaction survey, to gauge how users perceive the service being delivered. Yet even this can be misleading if not implemented carefully, because it measures an individual’s perception of the service delivered, which is inherently subjective.

Two individuals who’ve had the same experience may provide completely different ratings. This usually boils down to an individual’s expectations and how the service met those expectations rather than how the service delivered against the SLA. Human nature also comes into play. A user who has had a negative experience may be more likely to complete a survey to voice their frustration than a user who has had an average to good experience. Similarly, a user who was very impressed with the service may take the time to complete the survey as a “thank you” to the IT staff involved, so outliers are commonplace.
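
To illustrate how a few vocal outliers can skew a headline satisfaction score, the short sketch below compares the mean, median and distribution of a hypothetical set of survey ratings.

```python
# Hypothetical survey ratings on a 1-5 scale, illustrative only;
# frustrated and delighted users are over-represented among respondents.
from statistics import mean, median
from collections import Counter

ratings = [4, 4, 5, 4, 1, 1, 5, 4, 4, 1]

print("Mean rating:  ", round(mean(ratings), 2))
print("Median rating:", median(ratings))
print("Distribution: ", dict(sorted(Counter(ratings).items())))
```

When responses cluster at the extremes, a median or a full distribution is usually a more robust summary than a simple average.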

Improving the process

Incident management processes, and users’ perception of them, can vary significantly, leading to frustration in both camps, with each side feeling that the other is not holding up its end of the bargain.
Here are five ways around the problem:

  1. Agree acceptable levels of service with the business and help manage expectations by ensuring these are clearly communicated and understood by the user community.
  2. Implement monitoring solutions that help track adherence to SLAs and help easily diagnose the causes of missed SLAs.
  3. Select user experience metrics that are appropriate for your business and ask targeted, simple questions to gain insight. Too many questions, or over-complicated ones, may dissuade users from completing the survey accurately. Open-ended questions can often provide valuable insight by allowing users to give feedback in their own words.
  4. Consider offering incentives for survey responses to get a more accurate representation of user experience across the company.
  5. Streamline the data gathering and diagnostic process to minimise user disruption and improve user perception of the service delivered.

At Corvil, we work with many operations teams to help drive both SLA compliance and improved user perception across a wide variety of IT services. We do this by tapping into the network as a rich, yet underutilised, source of data. Our solution takes a non-intrusive, proactive approach to monitoring IT services and streamlines evidence gathering and analytics with minimal user disruption, helping both to drive down mean time to resolution and to improve user perception.

To find out more, contact us.


Dave White, Sales Engineer, London, Corvil
Corvil is the leader in performance monitoring and analytics for electronic financial markets. The world’s financial markets companies turn to Corvil analytics for the unique visibility and intelligence we provide to assure the speed, transparency, and compliance of their businesses globally. Corvil watches over and assures the outcome of electronic transactions with a value in excess of $1 trillion, every day.
@corvilinc
