How Home Theater Plays The Ratings Game

A lot of consumer electronics editors and reviewers have a love-hate relationship with product ratings. The love side comes from knowing they make readers happy. Assuming the ratings structure is well thought out (that is, simple and easy to understand) and the ratings are applied with fairness and accuracy, they wrap the whole product up in a nice little ball and tell you, at a quick glance, whether it's a winner, loser, or in-betweener. Perhaps most important, a good rating, or a good rating coupled with a seal of approval like our Top Picks designation, is validation that the product is worthy of the money you plan to spend on it. Given the sea of black boxes, identically thin TVs, and similar speaker systems out there, we recognize that giving you this validation is really the essence of our job at Home Theater.

But editors also hate ratings because, in practice, they really are difficult to administer with accuracy and consistency across a full range of products and reviewers. And they tend to whitewash the critical nuance contained in a well-crafted review, or cause readers to bypass the review entirely and base their buying decisions only on the ratings. We think that's a mistake.

Our reviews are essentially subjective and supplemented by lab measurements to confirm the manufacturer's specifications and achievement of a basic level of technical performance. When I receive a raw manuscript to start it on its journey to print and the Web, there's a place on top where the reviewer has indicated how the product should be rated. Except in rare instances, that reviewer is the only person on our staff who will have fully auditioned the product. To avoid coloring the subjective results, we don't usually allow reviewers to see our lab measurements until after their copy is submitted. We do take those results into account before committing the final ratings to the magazine or Website, but barring the product coming up seriously short in the lab, I'd always prefer not to mess with a reviewer's designated ratings.

So in the end, it's an honor system. We first have to trust that our reviewers have the technical skills to properly set up and fairly assess a product, as well as the broad experience to understand how a product stands among others in its category and price range. Then, when it comes time to establish the ratings, we have to trust further that, armed with this knowledge, the reviewer can put fully out of his or her mind any friendship with the public relations or marketing person at the company that loaned us the product, ignore that this might be a product from a regular advertiser (if they're even aware of such things; I like to think most aren't), and rate the product honestly, based on what is hopefully an established set of criteria that defines what each ratings level means.

This is actually harder than it sounds, not just for us but for any reviewer in our industry. I know with confidence that each of our writers understands the honor and serious responsibility to the reader that goes with being a Home Theater reviewer. None of us would knowingly lie about the performance of a product, consciously avoid pointing out a significant flaw, or take a shortcut that shortchanges the final result. And though we each bring our established friendships with industry colleagues up to the door of our reviewing rooms—in my case some go back well over 25 years—we do leave them in the anteroom when it comes time to sit down and write.

That said, the phenomenon of "ratings creep," in which too many products get a top or near-top performance rating, is a real danger when you test as much stuff as we do. This happens because we often select products for review that have some buzz in the field or which otherwise stand a good chance of reviewing well, and they're often all closely competitive in their performance. If you're working from that kind of pool, any absence of well-established definitions for what separates a merely great performer from an amazing one will create a tendency to err on the high side. Next thing you know, everything is wonderful, and the best you can do to delineate one product from the next is to read the reviews and hope you pick up from the copy which one the reviewer was more positive about and why.

With this in mind, I've established some fresh criteria for our reviewers to apply to Home Theater's ratings, as well as for adding products to our Home Theater Top Picks list of recommended products. They are largely subjective rather than hard and fast, and zero in on how the reviewer feels about various aspects of the product based on his or her expertise and vast experience. Products with the best ratings should really come across that way on the page, not just in the nuts and bolts of what they did right, but in the energy and enthusiasm of our reviewer. Just as you count on your favorite movie or restaurant critic in your local newspaper to share your specific tastes and set you on the right path, you will hopefully find those among our staff whose tastes and own brand of observation and integrity you can personally come to trust.

The Ratings Explained
Home Theater has a five-star/10-point rating system that's applied across most product categories using the criteria you see in the ratings box at the top of this article. The primary areas are Performance, Features, Ergonomics, and Value. For the speaker category, we skip Features and Ergonomics, but add Build Quality. By far the most critical rating for any product is Performance, which directly addresses its audio or video quality. For HDTVs and projectors, we recently introduced separate Performance ratings for 2D and 3D, knowing that some superb 2D displays do a poor job with 3D, and that 3D performance is secondary or unimportant to many users. Similarly, for audio/video receivers, we have just begun separating out Audio Performance and Video Performance to reflect that many great-sounding receivers botch aspects of the video processing, and that this may or may not be a deal breaker for you.

Products may be given whole-star or half-star ratings to allow them to straddle the middle ground between ratings categories when the reviewer feels it's warranted. Ratings are applied independently within each of the price segments we use to separate products—Entry, Midrange, and High End—so a product is generally rated against its similarly priced competition, although there can still sometimes be significant differences in price within each range.
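For readers who like to see the structure laid out explicitly, the system described above (per-category criteria, half-star increments, independent price segments) can be sketched as a small data model. This is purely illustrative; every name and the validation logic here are hypothetical, not part of any actual Home Theater editorial tooling.

```python
# A purely illustrative sketch of the ratings structure described above.
# All names here are hypothetical, not from any real Home Theater system.

VALID_STARS = [x / 2 for x in range(2, 11)]  # 1.0 through 5.0 in half-star steps

PRICE_SEGMENTS = ("Entry", "Midrange", "High End")

# Criteria vary by category: speakers trade Features/Ergonomics for Build
# Quality; displays and AVRs carry split Performance ratings.
CRITERIA = {
    "speaker": ("Performance", "Value", "Build Quality"),
    "display": ("2D Performance", "3D Performance", "Features", "Ergonomics", "Value"),
    "avr": ("Audio Performance", "Video Performance", "Features", "Ergonomics", "Value"),
}

def validate_rating(category: str, segment: str, scores: dict) -> bool:
    """Return True if a set of scores fits the system outlined in the article."""
    if segment not in PRICE_SEGMENTS:
        return False
    expected = CRITERIA.get(category)
    if expected is None or set(scores) != set(expected):
        return False
    return all(s in VALID_STARS for s in scores.values())

# Example: a straight-5 speaker rating in every criterion.
print(validate_rating("speaker", "High End",
                      {"Performance": 5.0, "Value": 5.0, "Build Quality": 5.0}))
# → True
```

The point of the sketch is simply that a rating is only meaningful as a complete tuple: category-appropriate criteria, a legal half-star value for each, and a price segment it was judged within.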

Following are the designations for each star rating, with some general guidelines/benchmarks for how these would be applied.

1 Star=POOR
Poor ratings generally indicate an extreme situation. For example, this might be applied to a Performance rating when multiple samples of a product have failed under normal conditions, or when a product has genuinely offensive audio or video performance which can't be traced to an isolated parts failure. A Poor rating for Features could be found on a product that is stripped down to the point of near nakedness vis a vis its direct competition, especially one that offers no other redeeming value such as high performance or ease of use. Poor Ergonomics could apply to a product that is so annoying or difficult to use as to be intolerable or incomprehensible, even to our experienced reviewer. A product that is so vastly overpriced as to suggest a blatant rip-off irrespective of its performance could rightly be considered a Poor Value. Poor Build Quality in a speaker suggests a bad cabinet design that feels flimsy or overtly resonant, uses obviously inferior drivers or parts, or cheesy fit-and-finish. Except for the cosmetics, any of these could also contribute to a poor Performance rating. Since we try not to waste our time testing dog doo (it stinks up the lab), products with Poor ratings are somewhat rare.

2 Stars=FAIR
A product with a Fair Performance rating would be a notch up in its audio or video quality from the wasteland that is Poor, but it likely has one or more significant flaws suggesting poor design or execution. This is probably not a recommendable product. A Fair rating for Features suggests a product that looks noticeably stripped down next to the competition, but it might still be recommendable for its performance or ease of use. Fair in Ergonomics should tell you that this was a difficult product to use in at least one key respect that the reviewer deemed critical in regular, steady usage. A Fair Value rating is reserved for a product that, while not an obvious rip-off, is notably overpriced vis a vis the competition. Still, it may very well be worthwhile for its performance or some unique features it might have. Fair speaker Build Quality might be applied to cabinetry or fit-and-finish that doesn't contribute directly to poor performance but generally isn't executed at even the average level for the price segment.

3 Stars=GOOD
Good is good. That is, not Excellent, but Good. These are fine, workaday products that represent the middle of the pack. A product with a Good Performance rating is one the reviewer could live with, albeit with the recognition of some compromise from the very highest standards of picture or sound quality demanded by the hardcore enthusiast, and it is one he or she might certainly recommend to a friend or family member whose requirements are less stringent (if the price is right). Based on the relatively high standards we use to select and cull products for review, you can probably expect the bulk of our product reviews going forward to achieve Performance ratings in the Good to Excellent range. A Good Features package might have a notable omission or two (as reported in the review's text), but is generally well featured and competitive with like products. Good Ergonomics means the product was relatively straightforward to operate and about on par with its direct competition in this regard. For the area of Value, expect a Good rating to designate a combination of performance and features which, taken together, seem on par with what the competition offers at the same price or perhaps a little better.

4 Stars=EXCELLENT
The Excellent Performance rating is reserved for those products that really say something special to the reviewer. Excellent performers may not do everything brilliantly well within a given price range, but the overall video or audio quality separates them from the pack and suggests smart or gifted design and a commitment to high performance on the part of the manufacturer. If a product gets an Excellent Performance rating, you should expect to see it reflected in the reviewer's enthusiastic tone and comments. An Excellent Features mix is one that is both chock full at this price point and perhaps offers some unique option or innovation that's helpful to the end user. Excellent Ergonomics speak to a product in which special design considerations have clearly been made with the end goal of improving ease of use, particularly among products like AVRs that are notoriously complex. We all know an Excellent Value when we see it—it'll give the reviewer (and you) the satisfaction of getting way more than you paid for. Excellent Build Quality in a speaker imparts a feeling of fine quality and pride of ownership, with solid construction and baffle design that supports the end goal of good sonics, attention to details like the quality and positioning of binding posts, and superb fit-and-finish.

5 Stars=REFERENCE
In my view, invoking the R word is a serious matter indeed, and should not be undertaken lightly. Reference quality for any given criterion designates a standard by which all other direct competitors must be measured. Reference ratings should be reserved for those products that truly represent the current pinnacle for their category. In particular, a Reference Performance rating suggests a product whose video or audio quality is so demonstrably superior to the bulk of the competition that our reviewer recognizes it instantly as one of a very elite few. It should be obvious from the reviewer's enthusiastic comments that he or she has been not only impressed with the product but essentially thrilled by it—you won't have any trouble labeling this as a rave review. These are the products on our shortest short list when friends and family come calling for recommendations, and they are the ones we most often like to recommend to readers via our Top Picks list. It might even be one that sends the reviewer running to his or her checkbook to upgrade his or her own system.

A Reference feature package might or might not appear in a product with Reference Performance ratings, but it designates a truly packed feature mix that simply isn't found often—everything plus the kitchen sink. Reference Ergonomics may be even rarer than Reference-level Performance; it designates ultimate attention to the end-user's ease-of-use from set-up through day-to-day usage, and cutting-edge or unique details that hopefully set a new standard for how this product type can be experienced. A Reference rating for Value means what you think it does: an unbelievable, over-the-top bargain that meets the highest standard or sets a new standard for what's possible in the product category and price segment; this alone, coupled with Good-to-Excellent Performance ratings, is enough to catapult a product to our Top Picks list. Reference Build Quality should also be obvious, not only in the solidity of construction and choice of parts in a speaker, but in the beauty of its design and the high quality finish applied.

Going forward, we'll be applying these general standards to all our reviews at Home Theater, with the result that you'll likely see fewer five-star Performance ratings than in the past. We've also tightened up the criteria for placement of products on our Top Picks list. You can read more about that in my recent November issue editorial, "What's in Home Theater's Top Picks?"

notabadname's picture

Would the recently reviewed Golden Ear Triton Two speaker be "reference" quality for its price range? I am so torn between those and the Def Tech Mythos STS Supertower. They both have been reviewed as the best money could buy at their price point. Decisions, decisions.

Scott Wilkinson's picture
In the Floorstanding Speaker Top Picks list, you'll see that the Triton Two system was rated 5 in all three criteria, meaning it is a reference product at its price point all the way around. The Mythos STS system is almost $1000 more expensive, and it got 4s and a 4.5, meaning it's excellent in its price range, but not reference. Still, that doesn't mean it isn't great...I'm sure it is. And the KEF Q900 scored a bit higher in performance and value, but a bit lower in build quality, making it another worthy contender priced between the Triton Two and Mythos STS. I don't think you can go wrong with any of these; it depends mostly on your budget.
notabadname's picture

It will be such a tough choice for me. I have to find a place that will allow me to demo them side by side. I was sold on the Mythos until the Golden Ear review on this site not long ago. I am eager to replace my 20-year-old Klipsch Forte IIs. The price of the Mythos is in my budget, and if they would have been a 5 had they cost $1,000 less, I would spring for the extra investment. That can be the hard part of comparing products that cross a price-segment boundary: is a 4-star at a $4,000 price point the equal of a 5-star at a lower price point, or vice versa? The bottom end seems it may be a little better on the Mythos (36 Hz versus 44 Hz at the -3 dB point), while the highs with the ribbon tweeter are better on the Triton, which would likely affect far more material than the selections that truly dig to those low depths.

Thanks again for your response, Scott, and for the great article and clarification of the review system.

etrochez's picture

I have the STSs and I love them. The wife approval factor on the STSs is pretty high, since they have a smaller footprint than the Tritons. Also, remember that one of the reasons the system is more expensive is that the Mythos have a bigger center channel than the Tritons. One of the complaints about the Tritons is the small center channel. You can pair the STSs with the Mythos Seven instead of the Mythos Nine and save yourself $400. Truth is, you can't go wrong with either one.

Good luck

notabadname's picture

I am honestly biased toward the Mythos because they are so pretty as well as great performers. Their better low-end performance tempts me too, because I am more of a soundtrack listener/movie guy than a music guy. I too was concerned about the Golden Ear center.

I finally admit the shallow pleasure of name-dropping with Definitive Technology as compared to Golden Ear, which may get more of a "they're made by who?" response. I know that is the worst reason to choose a speaker, especially for an additional $1,000.

Thank you for sharing!

jhbchess's picture

Has the Mythos STS been surpassed by the Mythos Bipolar at this price range? Will it be replacing the STS as a top pick?

The Mythos STS package has two 4s and 4.5 and costs $4,000.

The Mythos Bipolar has two 5s and a 4.

And you can now buy the Mythos Bipolar system (Definitive Technology BP-8080ST towers at $1,499 each and the CS-8080HD center at $999) and get a pair of Definitive Technology SR-8080BP surround speakers ($698) for free, for a total of $3,997.

Scott Wilkinson's picture

HT already reviewed the BP-8080ST and CS-8080HD in an LCR system.

It is designated as a Former Top Pick only because we had so many good speaker systems, and the review was only 3 channels, not 5.1. Adding the surrounds you suggest would probably make a killer 5.1 system.

jhbchess's picture

OK, I pulled the trigger! The Definitive Technology Mythos BP-8080STs (2), CS-8080HD, and SR-8080BPs rock when put together. I paired them with a Marantz SR-7005 and a Samsung UN65D8000, and it all showed up today.

I'll have to tweak the audio settings manually, since the Mythos come with big, bold-print disclaimers saying not to use automatic speaker configuration because it won't read bipolar speakers correctly. That's a shame, because I was really looking forward to using the Audyssey system that comes with the Marantz. Should I ignore the warning and run Audyssey anyway?

Thanks for all the Top Picks and product reviews! I spent about a month reading and reading before picking (and switched from the Mythos STS to the Bipolar at the last minute)!