Will mainstream science be using primarily Bayesian analysis by 2035?
2036
34% chance

bought Ṁ20 YES

If mainstream science is essentially just a bunch of AIs, with humans feeling useful by being in the loop even though they aren't actually useful, and the AIs are using Bayesian analysis, is that sufficient to resolve YES?

@Bayesian Just want to point out that the majority of AI is frequentist, not Bayesian.

sold Ṁ20 YES

@Shump I would think training AIs to believe true things leads them to exhibit Bayesian approximations, if they are sufficiently powerful? I would expect that AI would end up mostly Bayesian-like, and that if it does not, it'll be because humans gave them frequentist biases or because of the social benefit of everyone agreeing on the frequentist norm. Really not sure about this though, do you disagree?

@Bayesian

Approximations

That's the key word here. Computing Bayesian posteriors for statistical models involves sampling from high-dimensional, complex posterior distributions. Doing that accurately and in a computationally efficient manner is an active area of research.
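To give a flavor of what that sampling step actually looks like, here's a minimal random-walk Metropolis sketch on a toy one-dimensional problem; the data, the Gaussian likelihood, and the N(0, 10) prior are all made up purely for illustration. Real models have thousands of parameters and much nastier posterior geometry, which is exactly where it gets hard.

```python
# Minimal random-walk Metropolis sketch on a toy 1-D problem (illustrative only).
# Assumed model: data ~ N(mu, 1) with a N(0, 10) prior on mu.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=2.0, scale=1.0, size=50)      # simulated observations

def log_posterior(mu):
    log_prior = -0.5 * mu ** 2 / 10.0               # N(0, 10) prior, up to a constant
    log_lik = -0.5 * np.sum((data - mu) ** 2)       # N(mu, 1) likelihood, up to a constant
    return log_prior + log_lik

mu, samples = 0.0, []
for _ in range(5000):
    proposal = mu + rng.normal(scale=0.3)           # random-walk proposal
    # Accept with probability min(1, posterior ratio)
    if np.log(rng.uniform()) < log_posterior(proposal) - log_posterior(mu):
        mu = proposal
    samples.append(mu)

print(np.mean(samples[1000:]))                      # posterior mean, close to the sample mean
```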

Machine learning usually only maximizes likelihoods and disregards priors, which leads to overfitting. However, since using actual priors is very slow, we resort to "tricks" that approximate Bayesian priors. For example, an L2 penalty on a linear regression is analogous to putting a Gaussian prior centered at 0 on each coefficient. But not all regularization methods have a Bayesian interpretation.
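Here's a quick sketch of that correspondence; the toy data, unit noise variance, and prior scale are all assumptions made just for the illustration.

```python
# Sketch: ridge (L2-penalized) least squares matches the posterior mode under
# y ~ N(Xb, I) with independent N(0, 1/lam) priors on the coefficients.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.5, -2.0, 0.5]) + rng.normal(size=100)

lam = 1.0  # penalty strength, i.e. the prior precision under the assumptions above

# Frequentist view: argmin ||y - Xb||^2 + lam * ||b||^2 (closed form)
ridge = np.linalg.solve(X.T @ X + lam * np.eye(3), X.T @ y)

# Bayesian view: the MAP estimate of the Gaussian model above has the same
# closed form, which is the sense in which the L2 penalty "is" a zero-centered prior.
map_estimate = np.linalg.solve(X.T @ X + lam * np.eye(3), X.T @ y)

print(np.allclose(ridge, map_estimate))  # True
```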

People are too scared of the spectre of subjectivity to use Bayesian analysis; they want to leave the application of priors to the reader. Plus, in many cases, frequentist stats are easier. But the biggest reason this will not happen is that people don't have enough of a reason to switch. Try convincing a social scientist to use Bayesian stats. These people are not willing to use any statistical analysis they haven't learned in their Bachelor's. Why would they make that change?

When I was researching causal inference, I learned that the people actually applying these techniques were like 10-20 years behind the research, which is depressing because there have been a lot of advances since then that could have made the social sciences much more rigorous.

predicts NO

@Shump Frequentist analysis also includes a prior! The choice of null hypothesis is arbitrary and subjective; they just pretend it isn't.

predicts NO

@IsaacKing I generally agree. In my ideal world scientists would be using much more Bayesian stats. But ideals and reality can be quite different.

I think the problem is that the "null hypothesis" and "p-value" are just extremely widely misinterpreted. There is a valid frequentist interpretation, but it's actually way less intuitive than the way most people think about those concepts, which goes something like "I should believe the null hypothesis until somebody can 'reject' it, at which point it's pretty much a done deal unless the study is somehow flawed" and "the p-value is the probability that we got this result because of random chance". Or even worse, "the p-value is the chance that the results of this study are incorrect".
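One quick way to see why the "probability this result is due to chance" reading can't be right: when the null hypothesis is exactly true, p-values are uniformly distributed, so p < 0.05 still shows up about 5% of the time by construction. A small simulation (the null model and sample size here are arbitrary choices for illustration):

```python
# When the null is exactly true, p-values are uniform on [0, 1], so
# "significant" results appear at roughly their nominal 5% rate by construction.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
pvals = []
for _ in range(10_000):
    sample = rng.normal(loc=0.0, scale=1.0, size=30)        # null is true: mean is 0
    pvals.append(stats.ttest_1samp(sample, popmean=0.0).pvalue)

print(np.mean(np.array(pvals) < 0.05))                       # roughly 0.05
```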

My dream is that one day scientists will just stop using "alpha" (usually the p < 0.05 threshold) and start reporting raw p-values, treating them as a rough degree of confidence. Even that is wrong, but it's at least better than the mess we have now.

predicts NO

When I was a TA, I used to add a question to every assignment I wrote asking students to interpret p-values. Literally everybody kept getting it wrong, despite having been taught it in the introductory course they took a year before. I started off by grading them down for it, but I couldn't justify doing that for everyone, so I just wrote them stern comments instead.

predicts NO

@Shump Hmm. If they've misunderstood one of the founding pillars of modern science and statistics, I think grading them down is correct; what's the point of the grades if not to reflect knowledge? But if >80% of the whole class is doing it, that implies that the previous class is teaching the concept poorly, so maybe that teacher should be talked to as well.

I find that intuitive examples get the point across better than hammering in the math. To explain the difference between P(A|B) and P(B|A), I ask people "If all you know about Alice is that she went on a skydive yesterday and her parachute didn't open, what's the probability that she's dead?" and then follow it up with "If all you know about Bob is that he's dead, what's the chance that he went on a skydive yesterday and his parachute didn't open?"
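To put numbers on it, every probability below is made up, purely to show how different P(A|B) and P(B|A) can be:

```python
# Toy Bayes' rule calculation; all three input probabilities are invented.
p_failure = 1e-7                 # P(B): skydived yesterday and the chute failed
p_death = 1e-4                   # P(A): died yesterday
p_death_given_failure = 0.95     # P(A|B): death given a failed chute

# Bayes' rule: P(B|A) = P(A|B) * P(B) / P(A)
p_failure_given_death = p_death_given_failure * p_failure / p_death
print(p_failure_given_death)     # about 0.001, even though P(A|B) is near 1
```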

And then for p values specifically, I ask them to think about the hypothesis "wearing a green hat causes any coin I flip to come up heads". If they perform the experiment and get heads 5 times in a row, that's a significant result with p = 0.03125! Should they reject the null hypothesis? Probably not.
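To spell out the arithmetic, and the Bayesian update behind that "probably not" (the prior below is made up, but any remotely sane prior gives the same verdict):

```python
# p-value for the green-hat experiment, plus a rough Bayesian update.
p_value = 0.5 ** 5                    # P(5 heads in a row | fair coin) = 0.03125

prior = 1e-9                          # made-up prior that a hat controls coin flips
likelihood_ratio = 1.0 / p_value      # the hat hypothesis predicts heads every time
posterior_odds = (prior / (1 - prior)) * likelihood_ratio
posterior = posterior_odds / (1 + posterior_odds)

print(p_value)                        # 0.03125
print(posterior)                      # about 3e-08, still essentially zero
```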

predicts NO

@IsaacKing I would do that if I were the professor, but I don't think I had the authority as a TA. I picked my fights; I was already considered aggressive because I accused a few students of ChatGPT plagiarism, and I felt quite comfortable giving students low grades when they deserved it. I don't think giving all students a lower grade for something is very educational.

predicts NO

Anyway, my conclusion is that the scientists of 2035 will use whatever methodology they learned in 2015. As far as I know, that's predominantly frequentist statistics.

predicts NO

@JacyAnthis Similar to your prediction for 2050, just a shorter time frame.

I have lots of words to say here. But I think due to institutional decadence, the answer will be no. Reform will have to come from outside science.

You really need to expand the description. Like, do you mean most clinical trials use a Bayesian design? Because I would bet yes on that.

predicts NO

@BTE I'm open to people presenting specific operationalizations of this. If I think one adequately captures the general idea I had in mind, I'll add it to the description.