Hello, my name is nymz and, for those who don’t know, I tried more than 100 pieces of gear over the course of 2022, most of them IEMs.
With the new year upon us and a desperate need for change, I’ve decided to redo my ranking list – or, better yet, go back in time to when I first started.
The first list I ever had was only for myself and it included too much information, but it was my way to keep track of what I heard – and that still hasn’t changed. Most of the time I find myself going back into my notes or my list just to see what I thought of a piece of gear. But that overly complex list started to feel more like a side job than a guide, and when I was asked to post it online, I made it simpler.
Over the past year I’ve been rating gear from S to F across three different sections: tonality, technicalities and personal ranking. Heavily inspired by Crinacle, this format kept things relatively simple up to a certain point, but some constraints kept arising:
- Being overly biased – despite the personal ranking column, the tonality and technicalities grades were not truly objective, but rather reflected how well a set worked with my library
- Being “too simple” – the more gear I heard, the harder it was to distinguish between sets in the same category, which leads us to the following point
- There were way too many inconsistencies – to the point where I either kept de-ranking stuff or had to start a new list from scratch
- It no longer represented my personal taste.
This last point is where pitchforks get airtime and people go mad over their purchase validation from reviewers, but like everyone reading this, reviewers are human. Humans are biased, have preferences and, most importantly, evolve.
So, how to make something biased, well… less biased?
Well, the answer was always right in front of my eyes and it was pretty obvious: I once took two steps forward with my simplification, but now I must take one step back. Back to 2019, I guess – I mean, kind of.
First, we need metrics, and for that I got help from a lot of you, random readers. I went back and traced comments, talked to people, slipped random questions into chats and studied a lot of other reviewers’ tier lists. And this is where it struck me – I needed three parts to make a jazz trio.
First of all, the objective one: tonality. The best tuning for library A might not be the best for library B, but in terms of tonal balance it’s fairly objective to explain why something is well tuned or not – unless you start inserting “rich” buzzwords. Peaks, dips, sudden changes and scoops are pretty easy to spot.
But that got me thinking: what makes a good bass stand out, and how do you discern between good and great mid-ranges (which are arguably easy to get right if the tuning is on point)? For that, things like bass texture, mid-range separation, and treble extension and air need to be factored into this part.
On to more subjective fields: the technicalities. Some people hear them, some people don’t. Some will say everything sounds the same if the tuning is 100% matched; some will say two different dynamic drivers sound different inside the same shell with the same tuning and damping materials. To each their own, but I’m a publicly voiced technicalities-head, and so I decided to split this section into three categories: imaging chops, detail, and what I call “others”.
Detail, or resolving power, is the most obvious one and is also connected to the tuning part. Imaging chops are the ability to render stage and positional cues. The “others” section is the tricky one: I decided to include macro dynamics there, but it can also be used for coherency problems or an overly plastic timbre, for example. This last section is where a set like the Anole VX takes a hit, due to how plastic it sounds with my instrumental library.
Since unbiasedness doesn’t exist in audio, the last thing to factor in is the good old personal bias column, which also serves as a weight in the overall ranking and helps distinguish between sets, without influencing the other two sections.
Now that we have the variables, how does this translate into actual ranks?
Overall Rank = (45% * Tonality Rank) + (45% * Technicalities Rank) + (10% * Personal Bias Rank)

where Tonality Rank = Average(Bass, Mids, Treble)
and Technicalities Rank = Average(Imaging, Detail, Others)
Now that we have a formula, we just need to solve the separation-between-ranks problem, and that is easily done by going back to a 0-to-10 scale rather than letter grading.
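For anyone who wants to play with the weights, the formula above can be sketched in a few lines of Python. The function name and the example scores are my own illustration, not part of the actual list; only the weights (45/45/10) and the 0-to-10 scale come from the text.

```python
def overall_rank(bass, mids, treble, imaging, detail, others, personal_bias):
    """Weighted overall rank on a 0-to-10 scale.

    Tonality and Technicalities are each the plain average of their three
    sub-scores; Personal Bias is a single score. Weights: 45/45/10.
    """
    tonality = (bass + mids + treble) / 3
    technicalities = (imaging + detail + others) / 3
    return 0.45 * tonality + 0.45 * technicalities + 0.10 * personal_bias

# Hypothetical set: 8s across tonality, 7s across technicalities, bias of 9
# 0.45*8 + 0.45*7 + 0.10*9 = 3.6 + 3.15 + 0.9 = 7.65
print(round(overall_rank(8, 8, 8, 7, 7, 7, 9), 2))  # 7.65
```

One consequence of the 45/45/10 split worth noticing: personal bias can nudge two otherwise-tied sets apart, but it can never move a set by more than one point on the scale.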
Now that you’ve made it through this wall of text, what you’re about to see is the product of thousands of hours: getting old products back in hand, scavenging notes and asking for loans once again to make this list as consistent as I can at this moment.
Hope you enjoy it, and let the purchase-validation pitchforks see the light of day. And please keep in mind this is a constant work in progress, especially the comments.