Metacritic is a website that aggregates review scores from critics across different media, such as movies, TV shows and video games. Its purpose is to let people quickly and easily see what the overall critical consensus on a given work is. It does this by taking each score given by a critic or publication, rescaling it to a 0-100 scale, and then calculating a weighted average of all such scores.
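To make that concrete, here is a minimal sketch of that kind of aggregation. The outlets, scales and weights below are entirely made up, since Metacritic does not publish its actual weights (more on that below); this only illustrates the "rescale, then take a weighted average" idea.

    # Minimal sketch of "rescale each score to 0-100, then take a weighted
    # average". The scores, scales and weights below are hypothetical.

    def normalize(score, scale_max):
        """Rescale a score given on a 0..scale_max scale to 0..100."""
        return score / scale_max * 100

    def weighted_average(reviews):
        """reviews: list of (score, scale_max, weight) tuples."""
        total_weight = sum(w for _, _, w in reviews)
        return sum(normalize(s, m) * w for s, m, w in reviews) / total_weight

    reviews = [
        (9, 10, 1.5),    # a big outlet scoring 9/10, given a higher (made-up) weight
        (4, 5, 1.0),     # a 4-out-of-5-stars review
        (80, 100, 0.5),  # a smaller outlet scoring 80/100
    ]
    print(round(weighted_average(reviews)))  # 85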
The problem with this, however, is that the resulting numbers are often rather meaningless, and don't necessarily reflect the actual quality of the work (either in general, or for you personally).
Giving a numeric score to a piece of art is in itself often a rather meaningless exercise, not least because the scale itself is highly subjective and depends on the publication or even the individual critic.
For example, on a scale from 0 to 10, some people might consider 5 to be "average" (i.e. not excellent, not horrible, but okay; still very watchable/playable), while others might consider 7.5 to be exactly that.
(This latter convention comes from school grading systems in some countries, where anything below 5 means you failed, 5 is the absolute minimum passing score, and roughly 7.5 is the "average" score. In some countries all failed tests are simply scored as a 4, while others use the full 0-4 range to indicate how far the test was from passing.)
What this means is that, on a scale from 0 to 10, one publication giving a game a 5 may mean exactly the same thing as another publication giving it a 7.5, depending on how each calibrates its scale. However, as far as anybody knows, Metacritic does not take this into account.
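Purely as an illustration of what taking it into account could look like (and, to be clear, nothing suggests Metacritic does anything like this), one could map each outlet's own notion of "average" to a common midpoint before aggregating. The anchor values here are invented for the example.

    # Hypothetical recalibration: map each outlet's notion of "average"
    # to 50 on a 0-100 scale, interpolating linearly on either side.
    # The anchors are made up; Metacritic does no such thing.

    def recalibrate(score, scale_max, outlet_average):
        """Map an outlet's score so that its 'average' lands on 50/100."""
        if score <= outlet_average:
            return 50 * score / outlet_average
        return 50 + 50 * (score - outlet_average) / (scale_max - outlet_average)

    print(recalibrate(5.0, 10, 5.0))   # 50.0 -- "average" at an outlet where 5 is average
    print(recalibrate(7.5, 10, 7.5))   # 50.0 -- the same verdict at an outlet where 7.5 is average
    print(recalibrate(7.5, 10, 5.0))   # 75.0 -- whereas 7.5 from the first outlet means "good"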
Then there's the problem of publications and critics giving different scores to different aspects of, say, a video game. For example, they may give a 6 to the game's graphics but a 10 to its gameplay, with an overall score of 8. A given user might value the gameplay far more than the graphics, and so be mostly interested in that aspect; but this is not reflected very well in the final score of 8, and even less in the final Metacritic score.
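As a sketch of what gets lost, here is what a user-weighted combination of per-aspect scores could look like. The aspects, scores and weights are all invented; as far as I know, no aggregator actually works this way.

    # Hypothetical example: combine a review's per-aspect scores with a
    # particular user's own priorities. All names and numbers are made up.

    aspect_scores = {"graphics": 6, "gameplay": 10, "sound": 8}   # out of 10
    user_weights  = {"graphics": 0.1, "gameplay": 0.7, "sound": 0.2}

    personalized = sum(aspect_scores[a] * user_weights[a] for a in aspect_scores)
    print(round(personalized, 1))  # 9.2 -- quite different from the publication's overall 8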
Another big problem is the scaling of the original score to Metacritic's 0-100 scale. Rather infamously, if a publication uses a scoring system such as letter grades from A to F, Metacritic will take an 'A' as 100 and an 'F' as 0. In actuality, an 'A' may well mean anything between roughly 85 and 100, but Metacritic simply equates 'A' with 100, thus inflating the score (and deflating it at the other extreme). A better mapping would assign 'A' roughly 92 and 'F' roughly 8, with everything in between interpolated linearly; that would still be far too coarse, but on average it would be more accurate.
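The two conversions can be put side by side. This assumes a plain five-step A-to-F scale with no plus/minus grades; the "endpoint" mapping is the one described above as Metacritic's, and the "midpoint" mapping is the suggested alternative.

    # Endpoint mapping: A -> 100, F -> 0. Midpoint mapping: A -> 92, F -> 8.
    # Both interpolate linearly over a simple A, B, C, D, F scale.

    GRADES = ["F", "D", "C", "B", "A"]  # worst to best

    def endpoint_mapping(grade):
        return GRADES.index(grade) / (len(GRADES) - 1) * 100

    def midpoint_mapping(grade, low=8, high=92):
        return low + GRADES.index(grade) / (len(GRADES) - 1) * (high - low)

    for g in reversed(GRADES):
        print(g, endpoint_mapping(g), midpoint_mapping(g))
    # A 100.0 92.0
    # B 75.0 71.0
    # C 50.0 50.0
    # D 25.0 29.0
    # F 0.0 8.0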
In short, Metacritic scores are almost meaningless. A movie or game with a Metacritic score of 80 could well be better than one with a score of 90. The numbers are largely arbitrary and subjective.
Well, that's just one website's take on the subject. You can interpret it as you want, and ignore it if it bothers you, right? The problem is that Metacritic has a lot more influence in the industry than it really should. Many publishers and investors look far too closely at the Metacritic scores of the works they are considering. In fact, some publishers will demand a larger cut on games with a lower Metacritic score (based on critic previews). In other words, Metacritic is actively hurting content creators.
Metacritic also has a demonstrable influence on sales (there have been studies on this): games with a higher Metacritic score sell better based on that fact alone. In other words, the exact same game will start selling better if its Metacritic score goes up.
This is great for games that happen to get a higher score, but not so great for those that don't. Metacritic may have too much influence on this, given how arbitrary and ultimately meaningless the numbers are.
Another criticism of Metacritic is that they do a weighted average of scores, and their weighting factors are kept secret. They give more weight to big, "reliable" critics while giving less weight to others. This kind of secrecy may be cause for concern because it's unknown how much bias there is in the choice of weights.