Very interesting piece by Douglas L. Campbell on his blog about fixing academic journals. But I think there is another problem as well: negative results have a very low probability of being published.
I know this specific issue is not related to academic journals per se, as it depends more on how researchers perceive what’s scientifically valuable in their field. But negative results should not be forgotten in the race to publish, because a good experimental protocol or a good econometric setting that leads to an “I didn’t find what I was looking for” conclusion is a form of knowledge.
A journal of negative results has already been launched in medicine. When will we see something similar in economics? It could help make economics an even stronger science.
During the campaign, Donald Trump promised to raise tariffs on imports from countries like China or Mexico. Many warned against the risk of retaliation and a trade war. But what’s a trade war?
Austin Frakt for The Upshot (my own highlights):
A great deal of the decrease in deaths from heart attacks over the past two decades can be attributed to specific medical technologies like stents and drugs that break open arterial blood clots. But a study by health economists at Harvard, M.I.T., Columbia and the University of Chicago showed that heart attack survival gains from patients selecting better hospitals were significant, about half as large as those from breakthrough technologies. […]
Rather than clinical quality, which is hard to perceive, patients may be more directly attuned to how satisfied they, or their friends and family, are with care. That’s something they can more immediately experience and is more readily shared.
Fortunately, most studies show that patient satisfaction and clinical measures of quality are aligned. For example, patient satisfaction is associated with lower rates of hospital readmissions, heart attack mortality and other heart attack outcomes, as well as better surgical quality.
This story is a striking illustration that simple heuristics such as “follow the satisfaction of other people” can be surprisingly good (and cheap) at processing information. In other words, processing information effectively does not necessarily require sophisticated individuals with allegedly high cognitive skills, like homo economicus.
It’s also (and to me, very) important to note that these heuristics are collective heuristics: it’s not a single individual who processes information for the whole group. Rather, virtually all the members of the group each do a little bit of information processing, and what’s actually relevant is the aggregated result. A single patient with a bad experience in a given hospital is statistical noise: mistakes can happen; many patients reporting bad experiences with a given hospital probably means that this hospital is actually not that good: a lot of mistakes means something’s not going well.
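The aggregation argument can be sketched in a few lines of code. This is a minimal toy simulation, entirely my own assumption and not from the study quoted above: suppose each hospital has a latent quality score, and each patient’s individual review is that score plus personal noise. A single review can easily rank two hospitals the wrong way, but the average over many reviews almost surely recovers the true ranking.

```python
import random

random.seed(42)

def simulate_reviews(true_quality, n_patients, noise=2.0):
    """Each patient reports the hospital's latent quality plus personal noise."""
    return [true_quality + random.gauss(0, noise) for _ in range(n_patients)]

# Two hypothetical hospitals: one genuinely better than the other.
good_hospital = simulate_reviews(true_quality=8.0, n_patients=200)
bad_hospital = simulate_reviews(true_quality=6.0, n_patients=200)

# Any single review is noisy and may rank the two hospitals incorrectly.
# The aggregate of many small, noisy judgments is far more reliable:
avg_good = sum(good_hospital) / len(good_hospital)
avg_bad = sum(bad_hospital) / len(bad_hospital)

print(avg_good > avg_bad)  # the aggregate recovers the true ranking
```

The point is just the law of large numbers: no individual patient needs to be a sophisticated evaluator for the group’s aggregated signal to be informative.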
Source: The Life-Changing Magic of Choosing the Right Hospital – The New York Times
Saw this on Twitter today:
Before retweeting it, I decided to check the source, mainly to be sure it isn’t a hoax. Well, it is not a hoax, but the author himself wrote: “This no peer review, and I wouldn’t describe this as a “study” in anything other than the most colloquial sense of the word“.
On the source website, the RateMyProfessor.com reviews can be sorted into positive and negative ones. It made me wonder about the context in which the word was written: “boring” in “this class was boring” doesn’t mean the same thing as “boring” in “this class was everything but boring” or “I expected a boring class but in the end it wasn’t”. Can we control for that? Short answer: we can’t. At least, not easily in this graph. Again, the author’s own words.
Reasonably, what this graph shows is that students use “boring” more often when writing reviews of economics classes on RateMyProfessor.com. To say what? Hard to tell. And is RateMyProfessor.com data generalisable? Naturally, I wouldn’t trust this kind of website, where control is very limited, leading to very noisy data. For instance, are the students leaving reviews there representative of all students? We don’t know.
In the end, the most striking thing about this tweet is the kind of debate it ignited, especially on Facebook. I have to admit it is a bit disturbing to see many researchers (i.e. people with a PhD and everything) not checking the source at all, yet still trying to figure out why economics would be boring, with arguments like “economics uses math for ideological reasons, this graph is another proof of that”. I mean, aren’t researchers supposed to be among the best in the world at checking anything before trusting it? For someone claiming that economics is an ideology, that’s not a very scientific way to assess something!
Anyway, this is still a funny tweet, and I guess I’ll eventually retweet it. As an economist I may be “boring”, but at least I display some basic form of self-mockery. Better than nothing, I guess…