The Consortium for the Establishment of Information Technology Performance Standards (CEITPS) was founded to develop standards for measuring the performance of Information Technology (IT) organizations and the value and quality of IT services and products within higher education. But the question is...
Where to start?
Standards for performance measurement are not a new concept. They have been developed and used in many places, for many industries and organizations. When we judge the quality of an airline, one accepted standard for measuring its performance is on-time departures. But for that measure to be useful to others, there has to be an agreed-upon definition of what "on-time departure" means. Is it when the doors are closed? When the plane moves away from the gate? When it enters the queue for take-off? When it actually takes off?
Even for the simple measure of "on-time departure," we need an agreed-upon definition – a "standard" for the measure. What would be the first measure for Information Technology you'd like to see defined?
Why do we need standards?
A common language.
Without standards, we waste valuable time and effort on confusion generated by differing interpretations of words. When I say "Usage" or "Availability" or "Speed" or "Accuracy" – does the definition in my mind match that of the person I am speaking to? If not, we cannot work or collaborate effectively. If I speak French and you don't, we'll have a hard time collaborating; that much is intuitive, and in the case of a foreign language we look for other ways to communicate. In the case of a shared base language (English, for argument's sake), we usually fail to realize that we're not communicating. We say, "Did you wash your hands?" and our three-year-old says "yes." As weary parents, we quickly learn to be more direct – "With soap? In the last five minutes? Before or after you played in the mud?" But in adult-to-adult conversations we usually forget to be specific, and we assume (all too often incorrectly) that what we mean by a term is what the other adult in the conversation understands it to mean.

The late Gilda Radner's routine as Emily Litella, the hard-of-hearing elderly lady who invariably misinterpreted what was said, is apropos of our problem. She would go off on a tirade arguing against what she "heard" until the other person in the conversation clarified what he "said." Her "never mind" was the punch line. Unfortunately, most of our misunderstandings are not so quickly (nor so humorously) uncovered. Most of the time, the differing interpretation of language is not noticed until valuable time and resources have been expended in the wrong direction. Bosses become frustrated that workers fail to do what they've been asked to do, and workers can't figure out why their bosses never seem to "listen" to them.
This failure to communicate is rooted in differing interpretations of words we all believe we know – words spoken in a common language. The same thing happens in performance measurement. Foundational terms like metrics, measures, information, and data can be construed by different people to mean different things. The next level of language – terms like effectiveness and efficiency – suffers from the same lack of common understanding. When we go deeper and start talking about the metric categories of effectiveness (availability, speed, accuracy, usage, customer satisfaction, and security), we run the same, and exponentially greater, risk of miscommunication.
Standards, first and foremost, must provide the practitioners a common language with terms clearly defined so that collaboration can occur. We cannot discuss where performance measurement in Information Technology should (or could) go unless we have a common vocabulary.
A means of comparison.
One of the greatest benefits of standardized definitions is that they make comparisons possible. The proverbial error of comparing apples to oranges is avoided when we all use the same definition. Although ours is a volatile industry, our leadership and customers rightly require a means of determining how well we are performing – and the easiest means to that end is to show how we compare to our peers. Even within a single organization, comparisons are impossible without standards. The most basic performance measure may be simply comparing how well we do against how well others do. This is common in life.
One of the best analogies for well-performing organizations is sports teams. Teams know quickly whether they are high performing (compared to others) because they can see their standings in their district, their conference, and often the nation. Granted, their criterion for placement is simple – wins and losses. Life would be so much easier if we could measure the performance of our IT teams by a scoreboard. Even so, the analogy is useful. We know how good our favorite team, or our child's high school team, is by comparing it to the other teams. We can compare the individual performance of players the same way. What are the stats for your favorite player? How does he or she stack up against the others on the team or in the nation? Again, the statistics used are easier to collect and easier to use, but the need for standards is still paramount. If you track assists (in basketball) differently than another statistician does, the comparisons will be invalid. Sports have their own struggles with standardization – but they have made great strides.
For IT organizations, this standardization is woefully deficient. The good news is that the sports analogy lets us see the power of standards.
A means of benchmarking.
Many times we are asked to provide a benchmark for a given measure – not only a comparison (how does organization X do?) but a barometer of quality. What is good? What is great? Where is the demarcation of quality for a given measure? In CEITPS, we've chosen to address the need for benchmarks in a later phase of our efforts. In an ideal world, once we've clearly defined the IT performance measures, our target institutions (IT organizations supporting academia) will adopt those standards. If adopted, the standards will allow for comparisons. If the institutions embrace the concept of continuous improvement and freely share their data, comparisons should follow, and rankings on a local, regional, and national level become possible. Rankings are not only good leadership fodder – they also help us identify the "best," which gives us models to emulate.
But benchmarks aren't just best in class – those are easy to identify (once comparisons are possible). Benchmarks should also identify the minimum required to achieve "good." Many times, based on resources and priorities, "good" is the target, and we need to know what that minimum is. We need to know whether we are falling below the minimum expectation for performance, so that we know when focus is required. Benchmarking is the third step in a logical, progressive use of standards.