>If you just accept that this laundry list of assumptions is true, and take an infinite number of samples, then the chance of being a fucking liar is only 5%!
The justifications for these statistical tests are a series of asymptotic approximations. They also rely heavily on real analysis to construct continuous distributions from discrete ones, so statistics professors just dump the complicated formulas on students as a given, without proof or motivation.
With the advent of AI it's time to start teaching statistics completely differently, from the Bayesian perspective. The focus needs to be less on ensuring you are making accurate causal assumptions and more on building flexible inferential models.
>>16546776
>With the advent of AI
What does AI have to do with "Neyman-Pearson vs. Bayes"?
Frequentist stats is perfectly fine for the pea plant experiments it was designed for. It's also good enough if all the compute you've got is a scientific calculator.
>>16546776
A maximum likelihood estimate is just a MAP (Bayesian) estimate with an uninformative prior. They are basically the same thing as far as "assumptions" go. Bayesianism is the right way to go, but distributional assumptions are present in both approaches.
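A minimal sketch of that equivalence (the flat Beta(1,1) prior is my illustrative choice of "uninformative"; scipy assumed available): with a flat prior the log-posterior is the log-likelihood plus a constant, so the two argmaxes coincide.

```python
# Sketch: MLE vs. MAP with a flat prior on a coin's bias.
# Grid-evaluate the binomial likelihood for 7 heads in 10 flips;
# a uniform Beta(1,1) prior adds a constant, so both argmaxes agree.
import numpy as np
from scipy import stats

heads, flips = 7, 10
p = np.linspace(0.001, 0.999, 999)

log_lik = stats.binom.logpmf(heads, flips, p)   # log L(p)
log_prior = stats.beta.logpdf(p, 1, 1)          # flat prior: identically 0
log_post = log_lik + log_prior                  # unnormalized log posterior

print(p[np.argmax(log_lik)])    # MLE:  0.7
print(p[np.argmax(log_post)])   # MAP:  0.7, identical under the flat prior
```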
I never really understood the big difference between one or the other desu. What statistics are you talking about specifically, what test, what methods? They're kind of like two sides of the same coin the way I see it, but each problem has its own (best practice) model.
>>16547009
Yes, Bayesian statistics includes frequentist statistics as a special case. That is part of the reason why it's important to reshape statistics pedagogy around it. Instead of having the formula for the t-distribution dropped on you, you see it as the posterior predictive distribution for a sample from a normal distribution. Bayesian statistics builds intuition around quantifying uncertainty with samples, something the frequentist perspective generally ignores.
>>16546996
I think an important area of research right now would be scalable inference methods: using sampling methods to approximate a posterior distribution with fewer samples or computation steps.
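To make the t-distribution claim concrete, here's a sketch under one standard setup (the noninformative Jeffreys prior on the normal's mean and variance is my assumption; the post doesn't specify a prior): the posterior predictive for a new observation is a Student-t with n-1 degrees of freedom, which a Monte Carlo check confirms.

```python
# Sketch: under a Jeffreys prior on (mu, sigma^2), the posterior predictive
# for a new draw from a normal is Student-t with n-1 dof, located at the
# sample mean and scaled by s*sqrt(1 + 1/n). Verify by Monte Carlo.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.normal(5.0, 2.0, size=12)                    # some observed data
n, xbar, s2 = len(x), x.mean(), x.var(ddof=1)

draws = 200_000
sigma2 = (n - 1) * s2 / rng.chisquare(n - 1, draws)  # scaled-inverse-chi2 posterior
mu = rng.normal(xbar, np.sqrt(sigma2 / n))           # mu | sigma2, data
x_new = rng.normal(mu, np.sqrt(sigma2))              # posterior predictive draws

q = [0.05, 0.5, 0.95]
t_scale = np.sqrt(s2 * (1 + 1 / n))
print(np.quantile(x_new, q))                              # Monte Carlo quantiles
print(stats.t.ppf(q, df=n - 1, loc=xbar, scale=t_scale))  # matching t quantiles
```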
>>16547710
To find out the probability that a coin will land on heads, frequentists would flip the coin n times, count x successes, and say the probability is x/n.
The Bayesians would say that they already believe the probability is .5, meaning that neither heads nor tails is favored one way or the other. As additional flips are made, this belief is slowly updated one way or the other. The Bayesian method is by nature more conservative for small samples, leading to quantifiable differences between the two methods. It's *not* simply a philosophical debate. Bayesian credible intervals are larger and more conservative than confidence intervals, and they are everything confidence intervals aspire to be.
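Here's a minimal sketch of that quantifiable difference on one dataset (the uniform Beta(1,1) prior and the textbook Wald interval are my illustrative choices; which interval comes out wider depends on the data and on which frequentist interval you pick):

```python
# Sketch: 95% Wald confidence interval vs. 95% Beta-posterior credible
# interval for a coin's bias, small sample. Uniform prior assumed.
import numpy as np
from scipy import stats

heads, flips = 7, 10
phat = heads / flips

# Frequentist: Wald interval phat +/- 1.96*sqrt(phat*(1-phat)/n)
se = np.sqrt(phat * (1 - phat) / flips)
wald = (phat - 1.96 * se, phat + 1.96 * se)

# Bayesian: central 95% interval of the Beta(1+heads, 1+tails) posterior
post = stats.beta(1 + heads, 1 + flips - heads)
cred = post.ppf([0.025, 0.975])

print("Wald CI:           (%.3f, %.3f)" % wald)
print("Credible interval: (%.3f, %.3f)" % tuple(cred))
```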
>>16547760
In this case, are Monte Carlo methods technically frequentist? Since if you are simulating such a process you need to input the parameter of the distribution, which you already assume is 1/2. In any case it sounds like a whole bunch of semantics. I think I was taught the "bayesian" way of thinking in university, but I've never heard anyone talk about it like that.
>>16547760
You'd never use a point estimate as a prior on a coin flip. You'd use something like a uniform distribution or a beta on [0,1]. If you have an "overly informative" prior (like a point mass) you'll just completely negate the evidence from the data.
i do think the same, frequentist statistics is dead. i also have my schizo idea that most statistics is based on retarded assumptions like iid or exchangeability. exchangeability is a bit better, but still fucking bad. what's the point of a prediction if it only works in-distribution? that's fucking retarded. statistics should be prescriptive and account for distribution shifts
>>16546776
Do you even know what you are talking about? Bayesian statistics makes way more unjustified assumptions and is dismissed by anyone who has actually worked on the foundations of statistics. Bayesian "statistics" is garbage cooked up by data scientists.
>>16547721
What the fuck are you even talking about? Bayesian statistics also has the t-distribution. Do you think frequentism = parametric? What the fuck? This is what's wrong with this board. People learn some new terminology and then give it their own meaning and argue in their own goddamn echo chamber.
I just said in my post that Bayesianism explains the t-distribution *better*, you idiot. Also, Pfizer used Bayesian statistics to prove the efficacy of their COVID drug.
>What are you even talking about!?
So you don't know what a posterior predictive distribution is? You are mad that I'm using terminology in a way you don't understand? Go read a textbook, cocksucker.
>>16548489
>Also Pfizer used Bayesian statistics to prove the efficacy of their COVID drug.
lol. Lmao even.
>>16548502
Yeah, and it was effective. Oh, you think Bill Gates' microchips are set to self-destruct any minute now? Yeah, you're the smart one and I'm the idiot. Tell me, what do you do for a living?
>>16548507
>Tell me what do you do for a living?
I don't bring my subjective beliefs into my objective studies, I can tell you that.
>>16548538
this is the lowest IQ post i've ever read. go test the lower bound of your IQ on data you'll never see in your life while making 0 assumptions, retard.
>>16548480
>Got filtered by ridge regression and LASSO
>Nobody who knows anything uses informative priors!!!
Bayesian statistics is often abused, but it's not an accident that "Bayes optimal" is the general standard for testing the optimality of hypothesis tests/point estimators.
Frequentist statistics revolves around maximum likelihood estimation. We assume the data comes from a given distribution. Sample means are often used because for large samples they are approximately normally distributed. A confidence interval is created, centered on the mean of the normal distribution *most likely* to have generated the data. This makes sense when we know the data-generating distribution can only take on one form.
However, say we are attempting to use randomness to prove that the probability of getting heads on a coin flip is .5. If that is what we are trying to test, we are accepting that this probability itself is subject to chance (randomness). In this case our a priori assumption is indeed that there is a uniform chance of p taking on any value from 0 to 1. Given 7/10 coin flips are heads, the maximum likelihood estimate is p=.7, but *any* p could have generated that data. The Bayesian posterior distribution reflects the uncertainty in the true probability: its peak is at .7, but its mean is at 8/12 ≈ .67. We conservatively accept that the distribution over probabilities is maximized at .7 but has its center of mass at about .67.
If you think the probability that a coin will land on heads never changes, the maximum likelihood estimate is sufficient. If you think there are different possible probabilities the coin could assume, the mean of the posterior distribution gives a better idea of the center of mass for that parameter. Just because it peaks at .7 doesn't mean the probability has to be .7.
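A minimal numeric sketch of those numbers, assuming the uniform prior described above (Beta(1,1) plus 7 heads and 3 tails gives a Beta(8,4) posterior):

```python
# Sketch: posterior mode vs. mean for 7 heads in 10 flips, uniform prior.
from scipy import stats

a, b = 1 + 7, 1 + 3             # Beta(8,4) posterior
mode = (a - 1) / (a + b - 2)    # 7/10 = 0.7, matches the MLE
mean = a / (a + b)              # 8/12 ~ 0.667, pulled toward 0.5 by the prior
print(mode, mean)
print(stats.beta.mean(a, b))    # same mean via scipy
```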
>>16548870
>ridge regression and LASSO
They are both techniques used in Frequentist statistics. Thanks for outing yourself as a non-knower.
>general standard
Standard decided by who? Data scientists? lol. Talk to anyone who has worked on foundations.
>>16547760
>The bayesians would say that they already believe the probability is .5 meaning that neither heads nor tails are favored one way or the other. As additional tests are made, this probability is slowly updated one way or the other.
This is why Bayesians are morons. Typical atheists who love to say there's no truth to seize power, but then there's truth as defined by the dogmas of the atheists, but then truth changes over time. This is the birth of "new science just dropped."
World population: 8 billion
China population: 1.4 billion
1 out of 5 children is Chinese. If you are a family of 4, your next child will be Chinese.
>>16549473
>They are both techniques used in Frequentist statistics.
Ridge regression is a Gaussian prior on your parameters. LASSO is a Laplace prior. You missed the point, which is pretty funny for someone claiming I'm the "non-knower." The whole idea of regularization is using an implicit prior to stabilize the inference procedure.
>Standard decided by who?
Look at grad-level stats textbooks. A good one is Theoretical Statistics by Keener. The way they evaluate the optimality of a hypothesis testing procedure (see their discussion of the SPRT, for example) is whether it is Bayes optimal.
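A minimal sketch of the ridge/Gaussian-prior equivalence (the noise and prior variances here are illustrative assumptions; the identity holds whenever lam = sigma2/tau2):

```python
# Sketch: the ridge solution (X'X + lam*I)^-1 X'y equals the posterior mode
# for beta under y ~ N(X beta, sigma2*I) with prior beta ~ N(0, tau2*I),
# when lam = sigma2/tau2.
import numpy as np

rng = np.random.default_rng(1)
n, d = 50, 3
X = rng.normal(size=(n, d))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.5, size=n)

sigma2, tau2 = 0.25, 1.0        # noise and prior variances (assumed)
lam = sigma2 / tau2

ridge = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# MAP mode from the log-posterior: minimize
# ||y - X b||^2 / (2*sigma2) + ||b||^2 / (2*tau2)  -> same normal equations
map_mode = np.linalg.solve(X.T @ X / sigma2 + np.eye(d) / tau2, X.T @ y / sigma2)

print(np.allclose(ridge, map_mode))   # True
```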
>>16547760
>To find out the probability that a coin will land on heads
Stopped reading there.
>>16550134
Having a Chinese child is conditioned on being Chinese, unless you are Angelina Jolie.
>>16550472
This is classic frequentist ignorance. They are like "Oh yeah, we are going to handle randomness," and then they cover their ears and say "la la la" any time they are faced with real randomness. There is nothing subjective about the Bayesian having an initial prior that assigns equal weight to all outcomes. In fact, that is the *most* objective framework. Frequentists are just too dumb to handle real randomness.
>>16550465
And frequentism is Bayesianism with an uninformative prior. Omg, I get the point. I heckin love Bayesian now.
>>16552404
It more or less is, yes. You say this as an ironic joke, but yes. Frequentist/likelihood-based inference is just Bayesian inference with both the normalization and the prior being constants. The difference between a maximum a posteriori estimate and a maximum likelihood estimate is that one of those constants becomes a function.
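And a last sketch of "the constant becomes a function" (normal model with known unit variance; the informative N(0, tau2) prior is my illustrative assumption): with a flat prior the added log-prior term is constant and MAP equals the MLE; with an informative prior it depends on mu and shrinks the estimate.

```python
# Sketch: MLE vs. MAP for a normal mean with known sigma = 1.
# MLE maximizes log L(mu); MAP adds log p(mu), which shrinks toward 0.
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(3.0, 1.0, size=5)      # small sample
n, xbar = len(x), x.mean()

mle = xbar                            # argmax of the likelihood alone
tau2 = 1.0                            # informative N(0, tau2) prior (assumed)
map_est = (n * xbar) / (n + 1 / tau2) # closed-form posterior mode, sigma2 = 1

print(mle, map_est)                   # MAP is pulled toward the prior mean 0
```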