
https://www.economist.com/blogs/graphicdetail/2017/05/daily-...

It's a competition rather than research, but is consistent with both some people being better than others and some wines being more distinctive than others.



The summary sounds interesting, but it's paywalled for me.

The dissenting view, from what I guess is one of the most famous studies:

https://www.cambridge.org/core/journals/journal-of-wine-econ...


Quality distinctions are a bit hard to talk about, being qualitative. How do you distinguish changing tastes/moods from inconsistent wineries?

The competition the Economist article was about was judged purely on correctly identifying the grape and country of origin of each wine.

This is more resistant to noise, but it only supports the idea that people can reliably identify wines — not that they can grade quality consistently. It does challenge the "wine judges can't tell the difference between red and white" narrative, though.


Sure, yeah. I mean naively, one would expect that some grapes or qualitative aspects can be reliably discerned, and others can't, and that research into the subject would give us some idea which is which, so that's the kind of thing I thought someone might know about.

I mean, it may not all be bunk, but at any rate it seems fairly non-obvious which bits aren't. It seems like every study that asks "are wine tasters biased by X?" finds that they are.


Unless I'm reading the abstract incorrectly, 10% of judges were consistent, 10% were consistent within the top 3, and 80% were terrible. Being a home brewer of beer, this matches my experience pretty closely. Still, that does support the idea that some people are more skilled than others.


Per the popular article I read, he found that ~10% of the judges were consistent in a given competition, but it wasn't the same 10% across competitions. To decide what this implies I guess one would have to figure out whether 10% is higher or lower than you'd expect by chance.

(That said, it sounds like he chose his definition of "consistent" after gathering the data, so it doesn't sound like it was a very rigorous process. One could say he found 10% of the judges were consistent, or one could say that he defined "consistent" to mean "within the top 10%".)
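To get a feel for whether 10% is more than chance, here's a toy Monte Carlo sketch. All parameters are illustrative guesses, not the actual competition's setup: each judge scores three pours of the same wine independently and uniformly at random on a 21-point scale, and counts as "consistent" if the three scores fall within 4 points of each other.

```python
import random

def chance_consistent_rate(n_judges=70, n_replicates=3, scale=21,
                           band=4, trials=2000):
    """Estimate the fraction of judges who look 'consistent' purely by
    chance. Toy model: every score is an independent uniform draw on a
    `scale`-point scale; a judge is 'consistent' if all `n_replicates`
    scores for the same wine span at most `band` points. Parameters are
    hypothetical, not taken from the study."""
    consistent = 0
    total = 0
    for _ in range(trials):
        for _ in range(n_judges):
            scores = [random.randint(1, scale) for _ in range(n_replicates)]
            if max(scores) - min(scores) <= band:
                consistent += 1
            total += 1
    return consistent / total

if __name__ == "__main__":
    random.seed(0)
    print(f"chance-level 'consistent' rate: {chance_consistent_rate():.1%}")
```

Under these made-up numbers the purely random rate comes out in the same ballpark as 10%, which is exactly why the definition of "consistent" matters so much: depending on the scale and the band chosen, 10% of judges passing could be indistinguishable from noise.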



