
How we rate

A primer on how we review products and score them

The Verge reviews: how we test and review products

Product reviews are tricky. Every reviewer has a different style and a different way of assessing a product, which is why no two reviews of the same product ever read quite the same. At The Verge, we have built a reviews program that strives to standardize our reviews without abandoning what makes each of our reviewers unique. Below is a short guide to our methods and an explanation of our rating practices.

General reviewing

Our reviews are, first and foremost, centered on real-life experience with the product. They are based on the reviewer using the product for a substantial amount of time; they are never written from a spec sheet or a fleeting hands-on. Whatever the product is (a phone, laptop, TV, app, and so on), we strive to work it into our everyday lives and give the reader a picture of what it's like to use the product in the real world.

Benchmarks and battery testing

In many cases, we pair that real-world experience with systematic (or synthetic) benchmarks, especially when evaluating performance. Most of those tests are standard across the industry; the main exception is how we evaluate battery life.

Our current battery testing method is based entirely on real-world usage. We use devices as we would in everyday life and then evaluate how a device performs compared to devices we've tested previously. For laptops, we set the screen to 200 nits of brightness (or as close to that as the laptop's brightness controls allow) and note in the review what activities we performed on the machine while evaluating its battery stamina.

This form of testing can produce different results for different users (which is why we describe how we used the device during the test), but we feel this real-world evaluation of battery life is more indicative of how a device will actually perform once you buy it than rundown tests that don't reflect actual usage.

Scoring

Every reviewed product (unless otherwise noted) is given a score. We score a product on a variety of performance, value, and subjective criteria. The score is not a rigid weighted average of those criteria; the editor reserves the right to adjust it to better reflect the overall assessment of the product, including the price and other qualities that aren't always captured by a fixed rubric. A score is best viewed as a snapshot in time, made relative to the other devices available when the review is published; a device would likely not receive the same score if it were reviewed six months later, for example.

It’s possible we may change the score in a review after it is published due to software updates or other changes. We’ve only had to do that in exceedingly rare cases, and we will always explain why a score changed if and when it happens.

We assume the 10-point scale is relatively straightforward, but below is a short guide to how we view the numbers. All review scores are whole points; we no longer use half points or decimals when scoring a product.

  1. Utter garbage and an embarrassment.
  2. A product that should be avoided at all costs.
  3. Bad — not something we’d recommend.
  4. Mediocre — has multiple outstanding issues.
  5. Just okay. This product works well in some areas but likely has significant issues in others.
  6. Good. There are issues but also redeeming qualities.
  7. Very good. A solid product with some flaws.
  8. Excellent. A superb product with minor or very few flaws.
  9. Nearly perfect.
  10. The best of the best.

Last updated August 1st, 2022, by Dan Seifert.