OTC Goes Bold With Redesign

I want to extend sincere congratulations to the folks at Ozarks Technical Community College on their redesign. It is one of the bravest things I’ve seen a college do with its homepage in quite some time, for better or worse. And that’s good, because that’s how everyone learns. Someone has to take a chance once in a while. What especially caught my attention, though, is that they did something I never really thought would be possible. Back in 2009, I wrote a bit on the principles of IA in large sites like a university’s. Several conversations were spun off that article, one of which involved the idea of driving a university site’s navigation entirely through search.

Back then, it was little more than a pipe dream though. Random musings about a “what if” scenario. There’s so much to consider for it – and I’m not even talking about things like the political side of university sites – that as neat as the idea seemed, I never thought it could be done. And while I applaud OTC’s attempt, I still think the approach is not really ready – though it could be with just a little more work. Here’s why.

Majors Search Results

Probably the most important thing is SEO. If you are going to lean so heavily on search, your site – all of it – needs pristine SEO so that everything can be found and ranked properly. We’re talking metadata, keyword density, link text, the whole shebang. OTC is using a Google appliance of some kind, which can afford you a lot of power (sadly, Google discontinued the Mini this year, leaving only the more expensive GSA in the product line). You can see some of that power in action if you do a search for “programs.” Note that at the top you get the keymatch they manually entered to make sure a search for “programs” always puts the right page first. That’s good. Now do a search for “majors.” No keymatch this time referring the visitor to the programs page. In fact, the top matches aren’t relevant at all. That’s not to say the results are consistently bad, but with this approach there’s just so little room for error.

PSU’s unified search

Another pain point for me is the use of the stock results page. It’s bland, uninteresting, and doesn’t invite the user to explore the results. They have added additional search options above the box, but they aren’t integrated at all – each is a different landing page that isn’t necessarily search related. Lastly, they don’t seem to be taking advantage of collections, which are what make a GSA or Mini so powerful at getting users into the right “bucket” of information. Collections are a way of filtering content into logical categories of some kind. For instance, you could have a “News” collection that keeps all the press releases searchable and separate from the normal search. PSU’s search is an example of both unified search and collections (seen to the right). Things like “athletics” and “classes” are collections, while “people directory” is actually a separate system. But it all works through the single interface (though the people results do go to a different results page, so it’s not entirely unified).
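
If you’re curious what wiring a search box to collections actually looks like, here’s a minimal sketch of a unified search form pointed at a Mini or GSA. The hostname, collection names, and frontend name are invented for illustration; the parameter names (q, site, client, output, proxystylesheet) are the standard appliance search parameters, with site being the one that selects a collection.

```html
<!-- Minimal sketch of a unified search form for a Mini/GSA.
     The hostname, collection values, and frontend name are made up; swap in your own. -->
<form action="http://search.youruniversity.edu/search" method="get">
  <input type="text" name="q" placeholder="Search the university" />

  <!-- "site" selects the collection to search against -->
  <select name="site">
    <option value="default_collection">Everything</option>
    <option value="news">News</option>
    <option value="academics">Academics</option>
    <option value="directory">People Directory</option>
  </select>

  <!-- "client" and "proxystylesheet" pick the frontend that styles the results;
       "output" is the format the appliance renders from -->
  <input type="hidden" name="client" value="default_frontend" />
  <input type="hidden" name="proxystylesheet" value="default_frontend" />
  <input type="hidden" name="output" value="xml_no_dtd" />

  <input type="submit" value="Search" />
</form>
```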

Something else, and this is still specific to the GSAs, is that they don’t appear to be using OneBox modules either. That’s the perfect way to pull in some of those external searches from the results page header, like departmental and contact searching. For instance, do a search for “NUR 230,” a nursing program course. Using a OneBox module, they could instantly provide course information, schedules, associated books, teachers, etc. If you want more examples, the OneBox is what gives you instant results in Google when you do things like typing in a FedEx tracking number, looking for movie times, or checking the weather. That’s the trick here. If you’re going to go all in, blow it out of the water. Universities have TONS of structured data that could be presented this way, to fantastic results. Won’t someone think of the user’s clicky finger?
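
For what it’s worth, a OneBox module fires based on a trigger (a keyword or, if memory serves, a regular expression) before it ever calls your data source, so a lot of the work is just deciding what patterns to watch for. Something like the pattern below, which is purely illustrative, is all it would take to recognize a course-code search and hand it off to a course-information provider.

```javascript
// Illustrative only: a pattern a course-lookup OneBox could key on.
// Matches queries like "NUR 230" or "BIO101"; adjust the ranges to your catalog.
var courseCode = /^[A-Z]{2,4}\s?\d{3}$/i;

courseCode.test("NUR 230");       // true  -> show course info, schedule, books
courseCode.test("financial aid"); // false -> fall through to normal results
```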

From an article at the Community College Times:

OTC Chancellor Hal Higdon said a review of the college’s website using Google Analytics showed that more than 80 percent of site visitors find what they look for through an outside search tool or OTC’s Google search server. Often, visitors skip the front page and go directly to the search box to quickly find the information they need.

Google Analytics Site Search Usage report

Admittedly, I know nothing about just what went into this research (and if anyone at OTC reads this, I live in Pittsburg, KS, about an hour and a half from you – let’s talk), but I would caution any school interested in this that analytics alone will absolutely not give you the full picture here. It can give you a lot of information, to be sure, but context and intent are critically important to this particular endeavor. For instance, people may search a lot on a site because the navigation or IA sucks – something analytics alone won’t tell you. So it would seem reasonable that going all search would avoid that problem, since search is designed to do an end-around on such things (this is, of course, assuming you aren’t baking those same nav and IA problems into your search logic). But maybe they search simply because your content sucks, and they’re trying to find something more informative. That’s a content problem. My point is, know your problems and know your goals. Have a plan for each, isolate your success metrics, and have a maintenance and measurement scheme ready.

And there certainly may be something to catering to users who search. A quick look deeper into the Google Analytics report sampled above (you do look at your search reports, right?) revealed some extremely interesting metrics. For instance, the average user spent 4:33 on the site, as opposed to 11:48 for users who searched. Users who didn’t search viewed 2.73 pages on average, compared to 8.28 for searching users. What the analytics don’t tell me is why. But if your numbers are similar, you’d want to know the answer to see if there’s something valuable there to be leveraged.

There’s something else that bugs me, though. While I don’t want to nitpick, I feel the need to point some of this out.

“Start Here” navigation

In trying to mimic Google, they also used a “services header” on the homepage. That’s fine, go for it. But I gotta admit the logo really bugs me. It just looks stuck on and clip-arty. More than that, though, I am really bugged by the “Start Here” link. First off, “Start Here” isn’t at all descriptive of what to expect when I click on it. And once I did, I was confused to find myself on a page with a careers-based URI while the content seemed to be about academic programs. That’s just a labeling thing, but it’s a pretty major one, since it’s first chair in what little navigation they have. They also added a “more” link. While I know this is in line with mimicking Google, it smells too much like rebranded quick links. As a user, if the goal is to have me search, why would I click the “more” link rather than just type in the keyword for what I want? From the very start, you’re already inviting me to break with your intended navigation scheme, and that’s a dangerous game.

At the end of the day, I still think there’s something to this. Every university struggles desperately with IA and navigation. Awesome, global search just seems natural. The barriers that will most commonly prevent success are technology that can’t deliver, and the politics of university web maintenance. If you’re considering it, keep this stuff in mind:

  • Hire a full time SEO person. Period. Don’t be cheap here.
  • Don’t abandon navigation altogether. Consider your “services” that require fast access. This requires a shift in thinking: making your homepage that of a “service provider,” rather than whatever you are now.
  • Spend six months on taxonomy. Card sorting. User research. Whatever people call something, make sure those keywords are mapped and accounted for.
  • Make use of autocomplete and dynamic results (again, both things Google does). Save your users as much time as you can, and help eliminate mistakes.
  • Utilize tools like OneBox or similar systems to provide enhanced result data for commonly accessed, structured data.
  • Make sure you have a reporting system on pages: a “Was this what you were looking for?” flag people can click that will report the page and the search that sent them there (there’s a rough sketch of this after the list).
  • Accept the fact that you may have to take away a lot of editing rights from people to prevent pollution of your results. Two words – Quality. Control.
  • You might consider splitting the site into a sort of “gated” and “ungated” area, where the gated area is vetted, approved, specific info. The ungated section is everything else that no student ever cares about.
  • Respect the results page and how important it is.
  • Unify your search platforms.
  • Measure and track everything. Can you tell me the most viewed, but unclicked autocomplete keywords? Most common misspellings? Keywords most likely to result in an application? Bounce rate after a search result? And these are just some of the easy ones.
  • Your search needs to be smarter than your users. It should know what they want, regardless of how they ask for it. It needs to deliver, accurately, without question. It needs to adapt incessantly.
  • Hire a full time SEO person. Period. Don’t be cheap here.
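
As promised above, here’s a rough sketch of what that “Was this what you were looking for?” flag could be. It assumes jQuery is on the page, that the results page passes the query along in the URL (?q=...), and that /search-feedback is a hypothetical endpoint you create to log the reports.

```javascript
// Rough sketch of a "Was this what you were looking for?" reporter.
// Assumes jQuery, a link like <a href="#" id="not-what-i-wanted">No</a>,
// and a hypothetical /search-feedback endpoint that logs the report.
$(function () {
  var match = /[?&]q=([^&]*)/.exec(document.referrer); // the search that sent them here

  $("#not-what-i-wanted").on("click", function (e) {
    e.preventDefault();
    $.post("/search-feedback", {
      page: window.location.href,
      search: match ? decodeURIComponent(match[1].replace(/\+/g, " ")) : "",
      helpful: false
    });
    $(this).replaceWith("Thanks, we'll take a look.");
  });
});
```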

Edinboro’s keyword autocomplete

Oh, there’s one more important thing here. I don’t care whether your homepage is a Google knockoff or not – you should care about search. Almost all of my bullet points above hold true no matter what your web strategy entails. Edinboro University is one I credit with putting a ton of work into mapping keywords for things on their site to an autocomplete feature for their search. Their keyword system is a completely secondary system, too – it’s not in a GSA or anything like that – but they unified it properly so the user’s experience is seamless. All the user knows is that they’re getting good recommendations that can save them keystrokes. Otherwise, Edinboro’s search is implemented like any normal search; nothing else special about it. But the details, the little things – that’s what can matter the most.
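
If you want a sense of how little front-end code something like that takes, here’s a minimal sketch using jQuery UI’s autocomplete widget. The /search/suggest endpoint is an assumption, standing in for whatever keyword service you build; it just needs to accept the term being typed and return a JSON array of suggestions.

```javascript
// Minimal sketch of keyword autocomplete on a search box (requires jQuery UI).
// "/search/suggest" is a stand-in for your own keyword service; jQuery UI will
// call it as /search/suggest?term=<what the user typed> and expects a JSON array back.
$(function () {
  $("#search-box").autocomplete({
    source: "/search/suggest",
    minLength: 2,  // don't fire on a single keystroke
    select: function (event, ui) {
      // Submit as soon as a suggestion is picked, saving the user a step.
      $(this).val(ui.item.value).closest("form").submit();
    }
  });
});
```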

Good search is like a life preserver. It can save a visit. It doesn’t matter if it’s just a tool or your entire navigation. Bad search frustrates users and drives them away, and I don’t know anyone in that business. At the end of the day, I have no doubt OTC will continue to improve, and I have a ton of respect for the effort this community college has put forth here. I’m damn interested to see how it evolves.

Rethinking the UX of the Program Listing

Take a moment and think about your listing of majors and minors. Really think about it. Is it good? Does it reflect how great your offerings are? Is it even accurate? Is it just a stupid, boring, damned list (if you’re interested in something a bit off the beaten path, check out RIT’s Pathfinder system or look at what the University of Arizona is doing)? If the answer is yes, I want to kick you an idea. Filtrify.

On its face, Filtrify is just another jQuery plugin that gives you fine-grained control over a collection of DOM elements. Which is cool enough, I suppose. But check out this example on their demo site. Now, instead of movies, imagine it’s student action photos from different programs, or some other visual representation of each program. Instead of genres and actors and directors as filters, you have schools and interests and jobs. It would leave you with an interactive program listing that invites a user in to play and explore. In this particular case, Filtrify is serving as an extension of the live filter design pattern – letting a user see all the available options, then selectively removing what isn’t relevant to them. People like toys, and they are inherently curious. Create an environment that promises an opportunity for exploration, and you’ll net some explorers.
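
To be clear, the sketch below is not Filtrify’s actual API; it’s just the live filter pattern in a few lines of jQuery, assuming each program is a block tagged with a data attribute for its school. Filtrify layers nicer tag-style controls on top of this same basic idea.

```javascript
// Bare-bones live filter over a program listing (not Filtrify's API).
// Assumes markup like:
//   <select id="school-filter"><option value="">All schools</option> ... </select>
//   <div class="program" data-school="health-sciences"> ... </div>
$(function () {
  $("#school-filter").on("change", function () {
    var school = $(this).val();

    $(".program").each(function () {
      var matches = !school || $(this).data("school") === school;
      $(this).toggle(matches);  // hide programs outside the chosen school
    });
  });
});
```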

But wait, it doesn’t have to be Filtrify per se, either – that’s just one idea. Something like filtering blocks would work just as well. As would something you come up with entirely on your own. The trick is, you need to start rethinking the UX of the program listing (and probably a lot of other stuff on your sites, too), and really consider how your tools may be impacting prospective students’ ability to see you as the right institution for them. Jakob Nielsen pointed out how bad lists could be nearly a decade ago (see #7), yet schools seem to be married to them for lack of the desire to construct a better way. People don’t find long, unfilterable lists to be user-friendly at all. We already know that 17% of students will drop a school from their list if they can’t find what they want on your site. Even more will mark a school down if they have a bad experience. What is that risk worth?

The underlying issue here is that schools need to start putting more effort into the next step of their web design processes and start looking at the user experience of what they are making. It’s easy and fast to slap stuff together and move on, but there is enormous value in usability testing. It’s the part of the overall process that is most often skipped, since a published webpage is frequently seen as “good enough.” While the old-fashioned linked list may be functionally adequate for the data being displayed, it’s a terrible way to encourage interaction and leave a good impression on your visitor.

Even if you didn’t want to use a library like Filtrify, you can still come at the problem of filtering content in a user-friendly way by falling back on some basic principles like LATCH (Location, Alphabet, Time, Category, Hierarchy). LATCH is a way of organizing content that most users are, consciously or not, readily able to adapt to. That makes it a great place to start when trying to help people find what they need in any large archive of structured information.

So how could we apply LATCH to a set of link filters for our program listings? Here’s one example (and there are plenty of others):

  • Location: This could be a physical campus location, online programs, or a more meta concept like a college or school.
  • Alphabetical: This pretty much goes without saying. But keep in mind that your taxonomy might not be the same as your visitors’. Don’t be afraid to overload topics and point them to the same overall detail page.
  • Time: This one can be harder, but could be length of the overall program, number of credit hours, or number of total semesters.
  • Category: Think generalized subject or job areas here. For instance, “teaching” will likely return a number of different specializations.
  • Hierarchy: You could use this to break down by schools and departments, or by requirements, or to set up graduate tracks.
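
One way to make those axes concrete is to treat each one as a field on the program record, so that every filter becomes a simple comparison. The records and values below are invented for illustration; the point is only that once the data carries the LATCH facets, the interface can slice it however the visitor wants.

```javascript
// Hypothetical program records carrying a field per LATCH axis.
var programs = [
  { name: "Nursing, A.A.S.", location: "Main campus", category: "Health",
    hierarchy: "Allied Health", creditHours: 72 },
  { name: "Elementary Education, A.A.", location: "Online", category: "Teaching",
    hierarchy: "Education", creditHours: 61 }
];

// Alphabet: sort by name.
programs.sort(function (a, b) { return a.name.localeCompare(b.name); });

// Time: programs a student could finish within a given number of credit hours.
function within(maxHours) {
  return programs.filter(function (p) { return p.creditHours <= maxHours; });
}

within(64); // -> just the Elementary Education record
```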

The insane part about all this is that in many cases it would only take a little work to make fairly significant usability improvements over the current lists of programs. Something as basic as a live search filter would give users at least a little empowerment over the current model at many schools. Empowered users will be engaged users, and it’s much easier to get an engaged user to fill out an application. On the other hand, if the technology you’re employing on your website doesn’t instill faith in you as modern and student-centric, they’ll move on.

Majors, minors, and programs are just one example of the many lists that could benefit from a little of this kind of TLC. I made them the focus of this post mainly because they tend to sit really high in the funnel. But how about:

  • Student organizations
  • Offices and departments
  • Faculty listings
  • Events
  • Courses

How many things could you improve with just a few hours’ work and a little focus on the overall UX of the content you are trying to present? Which do you think your visitors would get better use out of? Are you particularly proud of your program listing page? Share it in the comments below for others to see. And if anyone actually does build a site based on Filtrify, let me know – I’d love to see how it turns out!

Facebook Hates Your Brand

Have you ever heard that you can have too much of a good thing? That’s sort of how I feel about Facebook’s move to add Community Pages. I understand it. It’s not that it doesn’t make some sense. But it feels very much like a case of execution before consideration. They thought it was a good idea, so they just ran with it. I’m a huge proponent of the idea that companies don’t own their brand; the consumers do. Brand is ephemeral. It’s an idea, a perception, an attitude, an emotion – all held by your customers. You can try to shape it, to mold it, but ultimately the only things you actually own are your trademarks.

But in terms of user-generated content, that doesn’t mean you give away the keys to the kingdom. People are still adapting to their role in the social brand development space. They need a garden hose, yet Facebook is handing over a fire hose. Hannah Feldman of Clark College recently contacted us via our Ask the Gurus section out of concern for this new change in Facebook’s system, and I thought it was more than important enough to open up the discourse to everyone. The issue she noted was that some folks had listed “Clark College” as an interest (or maybe as their employer, or the college they attended – it can get sucked in from anywhere), and as a result, Facebook made a community page for them (actually, it appears they have made several). Like many of us, they have put a lot of time and effort into setting up their own page for people to become fans of – er, like. When you are dealing with communities, especially small ones, it is enormously important to cultivate and concentrate them. When people have to hunt around and think about which page to like, it ultimately hurts engagement, and people miss out on the real dance (remember students joining the fake Class of 2013 pages last year?).

Facebook Quick Results FAIL for Clark College

One of the most immediate effects is that a quick search for Clark College doesn’t even turn up their main page in the top eight. That’s some fine SEO work there, Lou. I mean really, Facebook – you value an algorithmically generated page with 9 likers (?) over their established organizational page with almost 900 fans (screw it, fans is easier to say, so I’m going to)? At least Lewis & Clark can sleep well tonight. Actually, most big players have nothing to fear in this instance, for whatever mysterious reason. It’s the smaller players – the ones that need the exposure the most – that seem to have been overlooked.

And sure, maybe that could be overlooked, except for one thing: the community pages are worthless. Let’s just call a duck a duck here. Big pages just scrape and display information from Wikipedia, and small pages just run a regex match against people’s status updates and aggregate the hits. That has an obvious effect: the community page for Clark College is showing people mentioning Lewis & Clark College. And you know how I mentioned that big pages seem to weather the quick results storm okay? Don’t worry – Facebook has reserved the right to co-opt your pages and turn their development over to the community, and there’s pretty much no information on what rules or standards apply to that policy, besides “If it becomes very popular (attracting thousands of fans)…”. This is what I meant when I said too much of a good thing. You can’t just open the floodgates and start creating pages algorithmically from the interests, et cetera, that people are putting in. Something like this must be balanced, tempered. For now, fans of Clark College are forced to do a hard search for their school if they want to actually engage with it.

What’s to be done? At the moment, I can find very little, besides getting loud and angry. There appears to be no “flag this page as erroneous” feature, or anything like it. Write Facebook and explain your displeasure, and see if we can motivate the development of better checks and balances. I wouldn’t suggest signing up to be notified when they are ready for help building the pages yet – first, because there’s no information on when that will be or what it will enable, and second, because I’d hate to do anything that would appear to support this process before there’s a clear path to how it’ll help us fix it. Social networking is a dance. You throw a party, help people get out on the floor, and get out of the damn way. People have a good time, they talk about that time you threw an awesome party, and you take credit for that couple that met there and got hitched later. Instead, Facebook has turned into that loud, obnoxious girl who spills her drinks on people and trips you when you’re trying to make your move on the cute girl with the Jimi Hendrix obsession. We just want you to throw an awesome party and stay out of the way, Facebook.

In today’s web, brand dilution is a huge challenge to deal with when marketing your school (or ANYTHING), and Facebook just walked up and dumped a bucket of gas on the fire (no more metaphors or similes, I swear – especially because a bucket of water is probably the more fitting comparison in this instance…). AllFacebook.com takes a good look at the brand impacts of this as well, using an excellent example with Coca-Cola. I’m waiting to see how they answer this issue, but my worry is that some of the damage is done. My initial thought is just how much actual, human interaction it will take to clean up this mess – and I don’t think they have it. In this case, you can’t simply rely on the “community” while claiming, “hey, that’s how Web 2.0 works, baby.”

I wish I could say there was a silver lining. I wish I had a bulletproof suggestion for how to make sure your brand management isn’t tossed to the wind. I don’t. Community pages are designed to be “dedicated to a topic or experience that is owned collectively by the community connected to it,” which is fine for pages about Pogs and brewing your own beer. It’s trouble for companies though. I doubt this story is anywhere near over.

Is Hosted Search Really Ready for Prime Time?

In the years I’ve now spent in higher education, one universal truth I have found is that nothing quite moves a project along like someone much more important and much less web savvy than you deeming an issue worth addressing. Such was the case only a couple months after I had started at the university, when the Director of Marketing noticed that new information she had put up on the site wasn’t coming up in search results, and the results that were hitting weren’t particularly relevant to the topic in the first place. Thus, a mission was born: find a way to make our search better, and do it NOW. That’s the other thing about people higher up than you – when they say jump, generally you jump.

At the time (approximately three years ago), we had been using the pretty straightforward Google search for websites. It amounted to putting a box on your page that submitted to Google, restricting results to your domain. You couldn’t really do anything else with it then besides add a banner to the top. So began the odyssey. Most of the major players offered a basic site search back then, all of which were about equally crippled. The Google Search Appliance was (and still is) crazy expensive and totally overkill for our site. The IBM/Yahoo product OmniFind was still a few months from launch (nor did we have hardware to run it on at the time). The Thunderstone Parametric Search Appliance just looked a little… well, no one I know had ever heard of them, and their website wasn’t (and still isn’t) something that inspires my confidence. The Mini, on the other hand, was cheap, more than adequate for our site size, and was getting good reviews. Not to mention that the money to get it was ready, willing, and able. All that made the choice pretty easy for us, so we dove in.

Now, fast forward a couple years. We are still using our Mini. In fact, I just upgraded to 5.0.4 on Monday. I’ve never had a lick of trouble with it and became a pretty quick fan. This year at eduWeb I had the good fortune to share my experience with a couple people, and the conversation generally drifted towards: “Why is that better than Google Site Search?” Originally, the Mini offered a ton of unique features, such as custom collections, theming, and the ability to export results as XML. The past year has seen a growth in the availability and features of free, hosted search solutions. Yahoo BOSS looks to be an API that wants to take a serious swing at the hosted search crown. Google’s Custom Search Business Edition (CSBE), AKA Google Site Search, is also offering businesses and schools search with many of the features of the Mini, like the ability to remove branding and ads and to retrieve results as XML (note: Google Site Search is free for universities).

With all these new options, is the Mini even a worthwhile investment now? We’re coming up on the end of our support term, so I figured this was a prime time to evaluate the field. My short answer is: yes, it still is. My long answer also happens to be yes. See, search is important. Search is doubly important for universities because we have so much crap out there, and so many different topics to address (many of which also happen to be crap, but you can’t tell that to the people putting it out there). A Mini now costs $3,000 with two years of support, equal to six years of equivalent CSBE service (assuming you had to pay), which prices out at $500 a year for 50,000 pages. Obviously Google isn’t trying to mothball its own products, so where does the Mini make up that cost?

First, I think there’s huge value in crawling. Remember our original problem? Content was not making it into the search results fast enough. With the Mini I can schedule crawls, or just set it on continuous mode and let it go nuts. Using nightly scheduled crawls, I ensure that any content added to the website shows up in search within 24 hours, and usually faster than that (unless some crazy person is up adding content to the site at 12:01 AM). Going through Webmaster Tools, I can only tell Google to crawl our site at a Normal or Slower rate. We don’t even rate high enough to get the Faster crawl rate option. So users of Site Search are pretty well cornered on the matter. Once I crawl our site with the Mini, I can have it output a sitemap that I feed to Google’s spider to help with their indexing as well, so the benefit becomes twofold.

Next up, raise your hand if you have an intranet, or otherwise secured information not available to the public.  All of you can pretty well scratch CSBE/SiteSearch off your short list if you’re looking for a way to dig through it.  If you want to index any kind of protected content, you’ll have to go with an actual hardware solution, as both the Mini and GSA support mechanisms to crawl and serve content that is behind a security layer.  This is a great option if you buy a Mini, use up the initial two years of support, then buy a second one: use one for internet and the other for intranet.

You’re also going to find that you can pull more valuable metrics out of the Mini than what you get with CSBE/Site Search. Granted, the standard “what are people searching for” question is easily enough answered. But what about “what are people searching for that isn’t returning results?” That can be equally valuable in a lot of cases. And while Site Search allows for search numbers by month and day, the Mini can go down to the hour, as well as show you your current queries per minute. It’ll even keep tabs on how many pages it’s currently crawling and how many errors it found, and email you about it all. All the reports can be saved out as XML, naturally, so you can mix and match datasets as you need for custom reports.

And I have one word for you: OneBox. The Mini has it, thanks to a trickle-down effect from the GSA – hosted Google options do not. The OneBox essentially allows you to add custom search results based on query syntax, and tailor the styling of those results. You see this all the time at Google, for instance when you type in a phone number or a FedEx tracking number. As you can see, these results need not come from your Google Mini search index. They can come from other collections, or from other sources entirely. In the screenshot to the right, you can see a mock-up of a OneBox result that matches a name format and returns contact information along with the standard search results. Uses for this are many, and can span anything you might store in databases, such as course listings, book ISBNs, names, weather (if you have campuses in different cities), room information, etc. Anything that you can define some kind of search pattern for.
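
For the curious, a OneBox provider is basically just a URL the appliance calls with the matched query, and the provider answers with a small XML payload that gets folded into the results page. The sketch below is from my memory of the OneBox developer’s guide, so treat the element names as approximate and check the current documentation before building on it; the data itself is obviously made up.

```xml
<!-- Approximate shape of a OneBox provider response for a directory lookup.
     Element names are from memory of the OneBox developer's guide; verify them
     against the documentation. The data is made up. -->
<OneBoxResults>
  <resultCode>success</resultCode>
  <provider>Campus Directory</provider>
  <title>
    <urlText>People Directory</urlText>
    <urlLink>http://www.youruniversity.edu/directory/</urlLink>
  </title>
  <MODULE_RESULT>
    <U>http://www.youruniversity.edu/directory/jsmith</U>
    <Title>Jane Smith, Department of Nursing</Title>
    <Field name="phone">(555) 555-1234</Field>
    <Field name="office">Health Sciences 210</Field>
  </MODULE_RESULT>
</OneBoxResults>
```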

On a quasi-similar note, you can also link certain searches (or parts of searches) to keymatches. These are akin to the ads on Google that appear at the very top of search results (usually highlighted light yellow with the “Sponsored Link” caption), but you can use them to highlight a link that goes right to the automotive department when someone searches for something containing the word “auto.” This is another feature unique to the Mini and GSA, and one more way to make sure searchers are presented with relevant links. It’s very useful in cases where a department’s poorly optimized site doesn’t show up first in a search for its own name.

Ultimately, it’s a judgment call whether or not these features are worth the money to you. At $3,000, you’re basically paying $1,000 for the unit itself and $1,000 for each of the two years of support. You can’t buy the unit without support, though, so that notwithstanding, you’re getting a full-featured search box with support for about twice the cost of a good PC. If you have more than 50,000 pages to index, though, you’ll find that price goes up. At the same time, if you do have over 50,000 pages, there are a lot of other reasons not to go hosted, such as control over results, index freshness, result relevance, etc. All of these are always important, but they become even more so the bigger your site is. Consider: if you have half a million pages on your site, and you need to make sure people find the needle they need in that haystack, would you rather have some control over that, or cross your fingers and hope Google gets it right?

My end impression is that Google’s Site Search is a great little tool for small businesses that are dealing in a few thousand pages, who can’t afford a server, or who don’t have the resources to maintain one. Keeping up the server isn’t an involved job at all, but it does require someone capable of checking in on it monthly or so, at least. But as universities, we generally have the resources for such a tool, both financially and manpower-wise. We’re also large enough to justify a dedicated box for such an important task.

If you’re still researching what’s right for you in hosted search, it might well be worth keeping an eye on Yahoo BOSS, though – it’s making some pretty cool claims on functionality. OmniFind is also great free software if you already have the resources to run it in place (like a VMWare cluster or other virtualized environment) and can function within its limitations (only having up to five collections being the big one). Just remember: search is possibly the single biggest tool on your website behind maybe your portal, and it deserves the treatment and attention your users expect and deserve.