
> in the one-month period covered by this study, the city's police officers did 588 stops of Black drivers and only 262 stops of white drivers.

So keep in mind they had recordings from 262 traffic stops of white drivers.

If they had data showing that white drivers were 80% less likely to hear the escalatory phrases, and so officers didn't escalate - boom! That's an open-and-shut case.

But using the supposed lack of escalations as an excuse to justify the complete absence of this data is just way too problematic.
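To make the arithmetic concrete: with 588 and 262 recorded stops, a rough two-proportion z-test shows that a gap of that size would have been statistically detectable. The counts below are entirely hypothetical (not from the study), chosen only so the white-driver rate is roughly 80% lower:

```python
import math

# Hypothetical counts -- NOT from the study; chosen only to illustrate
# that an ~80% lower rate would be detectable at these sample sizes.
n_black, esc_black = 588, 50   # ~8.5% of stops contain escalatory phrases
n_white, esc_white = 262, 5    # ~1.9%, i.e. roughly 80% lower

p1, p2 = esc_black / n_black, esc_white / n_white
pooled = (esc_black + esc_white) / (n_black + n_white)
se = math.sqrt(pooled * (1 - pooled) * (1 / n_black + 1 / n_white))
z = (p1 - p2) / se  # two-proportion z-statistic

print(f"z = {z:.2f}")  # comfortably above the 1.96 threshold for p < 0.05
```

So the 262 white-driver recordings would not have been too small a sample to report the comparison, which is the commenter's point.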



They're being pretty clear about the limits on the data they had, the decision they made because of it, and why they made that decision.

What exactly are you alleging here? Malpractice is a pretty strong word, but also... vague. PNAS is fairly prestigious, and there would be substantial reputational risk in malfeasance with this data. To me that carries a lot more weight than someone on the internet throwing unclear accusations around, idk.


The actual study in question is pretty limited - I don't have any problem with the data published in the PNAS paper: https://www.pnas.org/doi/10.1073/pnas.2216162120

But the decision to limit the study exclusively by race was a post hoc decision that doesn't make a lot of sense given the context of what they were studying.

Imagine if they published a study called "Escalated police stops of Right Handers are linguistically and psychologically distinct in their earliest moments" and then you found out that they left out the data on left-handers; you would also be very suspicious.

Malpractice might be a bit of a strong word, but I find it distinctly annoying when researchers intentionally play around with their sample sizes.




