
first commit of score test #105


Open · wants to merge 1 commit into main


Conversation

@amichuda (Collaborator) commented Jun 6, 2024

This is a rough build of the score test from Kline and Santos (2012), which I tested a bit against their code.

Essentially, the score test perturbs the score with random weights and generates Wald statistics for those perturbed scores. You only need to estimate the parameters once, so it's computationally simple, and I use joblib to parallelize the bootstrap draws.

What's nice is that this can be implemented for any statsmodels model, or really any estimation model that implements score and hessian methods.
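
Roughly, the mechanics look like the sketch below. This is only an illustration of the reweighting idea, not the code in the notebook: the function name `score_bootstrap_pvalue`, its arguments, the Rademacher weights, and the exact form of the Wald statistic are assumptions for the example; it only relies on statsmodels-style `score_obs` and `hessian` methods plus joblib for parallelism.

```python
# Hypothetical sketch of the score-perturbation bootstrap (illustrative only).
import numpy as np
from joblib import Parallel, delayed

def score_bootstrap_pvalue(model, params, test_idx, n_boot=999, seed=0):
    """Bootstrap p-value for a single coefficient via perturbed scores.

    `model` is any likelihood model exposing `score_obs` and `hessian`
    (e.g. a statsmodels model instance); `params` are estimated only once.
    """
    rng = np.random.default_rng(seed)

    # Per-observation score contributions and an approximate covariance
    # from the inverse negative Hessian, both evaluated at `params`.
    scores = model.score_obs(params)             # shape (n_obs, n_params)
    cov = np.linalg.inv(-model.hessian(params))

    def wald(score_sum):
        # Wald-type statistic for the one-step update implied by the score.
        delta = cov @ score_sum
        return delta[test_idx] ** 2 / cov[test_idx, test_idx]

    t_obs = wald(scores.sum(axis=0))

    def one_draw(child_seed):
        # Rademacher weights perturb each observation's score contribution.
        w = np.random.default_rng(child_seed).choice([-1.0, 1.0], size=len(scores))
        return wald((w[:, None] * scores).sum(axis=0))

    # Estimation happens only once above; only the reweighting is repeated,
    # and joblib spreads the draws across cores.
    child_seeds = rng.integers(0, 2**32, size=n_boot)
    t_boot = Parallel(n_jobs=-1)(delayed(one_draw)(s) for s in child_seeds)
    return float(np.mean(np.asarray(t_boot) >= t_obs))
```

Whether the scores should be evaluated at the restricted or unrestricted estimates, and the precise statistic, should of course follow Kline and Santos (2012); the point of the sketch is just that one fit plus cheap reweighting is all that's needed.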

I've implemented this so far in a Jupyter notebook, so you can see where I'm going with it and let me know if it passes at least a preliminary smell test. The data I tested it on is the same data used in the do-files of Kline and Santos (2012).

My plan is to test this against their code to make sure that I'm getting the right numbers. But at least preliminarily, the p-values look sane and I even tested it against wildboottest for a linear model.

The files are in the tests/ folder; specifically, the notebook is test_mle.ipynb.

Of course, before merging this I will clean things up, incorporate it into the Python files, and create a test suite. But I wanted to start a PR now so we can iterate.

What do you think?

One thing I was thinking is that perhaps it would be good to refactor some of the WildBootTest classes so that they would be a little easier to extend? Or do you think we should just add this to the codebase as-is? I'm game to work a little more to make all code in wildboottest have a consistent experience.

@amichuda requested a review from s3alfisc on June 6, 2024 23:34
@s3alfisc (Member) commented Jun 8, 2024

Hi @amichuda - super cool! I will try to take a look before the end of the weekend. I'll also have to (finally!) take a closer look at the paper by Kline & Santos... 😄

> One thing I was thinking is that perhaps it would be good to refactor some of the WildBootTest classes so that they would be a little easier to extend? Or do you think we should just add this to the codebase as-is? I'm game to work a little more to make all code in wildboottest have a consistent experience.

What do you have in mind? Generally I am open to everything =)
