I am very excited about the new Analytics tool that ARL is beta testing right now. It's far from complete, but hopefully it's an indication of the direction LibQUAL is heading. I think that offering a GUI for some advanced analysis is great. The real power behind Analytics, though, is the ability to work with participant data. The PDF versions from other institutions have always been available; now we can actually do something with the data.
There are two tools:
Institution Explorer – allows you to look at participant data back to 2004 and to filter by user groups (faculty, grad, undergrad) as well as by discipline.
Here is an example. These charts show the difference between our engineering and social science faculty on the Information Control dimension:
Two very different perspectives; however, the social science faculty sample is only four people, so I suspect the difference is exaggerated.
Longitudinal Analysis – allows you to look at an institution's data over time.
Here is an example of how our undergrads feel about the Library as Place dimension. Perception has risen, but so has the minimum expectation. When you give people nice stuff, they want more.
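To make that point concrete: in LibQUAL terms, the adequacy gap is the perceived score minus the minimum-expectation score, so the gap can hold steady (or shrink) even while perception climbs. Here's a minimal sketch of that calculation, assuming a hypothetical respondent-level export with one row per undergrad per survey year; the numbers and column names are illustrative, not real GT data:

```python
import pandas as pd

# Hypothetical export: one row per undergrad respondent per survey year,
# with the three LibQUAL scores for the Library as Place dimension.
df = pd.DataFrame({
    "year":      [2004, 2004, 2007, 2007, 2010, 2010],
    "minimum":   [5.1, 5.3, 5.8, 5.6, 6.0, 6.2],
    "perceived": [5.9, 6.1, 6.4, 6.6, 6.9, 7.0],
    "desired":   [7.2, 7.5, 7.6, 7.8, 8.0, 8.1],
})

# Adequacy gap = perceived - minimum; it stays flat if expectations
# rise as fast as perceptions do.
df["adequacy_gap"] = df["perceived"] - df["minimum"]

print(df.groupby("year")[["minimum", "perceived", "adequacy_gap"]].mean())
```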
And that's how I feel about Analytics: it's a good start, but only the tip of the proverbial iceberg given the depth of data we need.
Here are some things I'd like to be able to do:
• I hope that ARL takes it beyond the dimension level and gives us question-by-question access and comparability. For example, I want to take the three collections questions and see how the engineering faculty at GT match up with engineering faculty at other schools, as well as against a composite average of all engineering faculty participants. That would let me say: here is where we stand among our peers, and here is where we stand nationally. (A rough sketch of what this might look like appears after this list.)
• I'd like a true benchmarking tool. Take group study space, for example. Our rating is pretty strong, though I feel we have a long way to go. I'd like to see the 10 or 20 highest perception and adequacy means among all participants. If USC or Wake or whoever consistently rates highly, then maybe we want to see what they are doing and what works for them.
• It's great that they started to allow custom disciplines, but I want to be able to sort by other factors too, such as gender, class (freshmen vs. seniors), and frequency of visits. Do the patrons who visit the library daily or weekly have different opinions than those who visit monthly? I have looked around at the raw data of a few schools, and unscientifically I can claim that the more female freshmen you have, the better your satisfaction rating will be (but don't quote me on that). It's also very interesting to look at the change that occurs between sophomore and junior year.
• I wish there were more control over building dynamic charts and tables, and the ability to create personalized templates that I could apply to other schools or groups of schools.
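Here is that sketch of what the first two wishes might look like if ARL ever exposed raw, question-level data. Everything here is an assumption: the pooled file layout, the column names, and the "IC-1" question ID are all hypothetical placeholders, not Analytics' actual data model, and the scores are made up for illustration:

```python
import pandas as pd

# Hypothetical pooled raw file: one row per respondent per question.
# None of this reflects what Analytics currently exposes.
raw = pd.DataFrame({
    "institution": ["GT", "GT", "USC", "USC", "Wake", "Wake"],
    "discipline":  ["Engineering"] * 6,
    "question":    ["IC-1"] * 6,
    "perceived":   [6.8, 7.0, 7.4, 7.2, 7.6, 7.5],
})

eng = raw[raw["discipline"] == "Engineering"]

# Wish #1: per-institution means on a single question, plus a
# national composite of all engineering faculty participants.
by_school = eng.groupby("institution")["perceived"].mean()
composite = eng["perceived"].mean()
print(by_school)
print(f"All-engineering composite: {composite:.2f}")

# Wish #2: a simple benchmarking view -- the top N institutions on
# this question, i.e., the schools worth a closer look.
print(by_school.nlargest(2))
```

With real respondent-level files, the same two groupby lines would answer the peer and national questions for any question ID, which is exactly why question-level access matters.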
There is a lot more, but this is for starters. Right now we can use Excel and SPSS to generate these types of things, but we do not have access to the raw data files of other schools, so we cannot dig into the discipline level or really do anything very advanced. That's the power of Analytics: it opens the door, but right now it only lets us place one foot inside.
The only negative thing I can say is that there seem to be some scripting problems with some schools. You get frustrating error screens like this more often than you'd think:
But overall it's a great step toward the future of library service assessment. Thanks, ARL.