first commit of score test #105
This is a rough build of the score test from Kline and Santos (2012), which I've tested a bit against their code.

Essentially, the score test perturbs the score with weights and generates Wald statistics for those perturbed scores. You only need to estimate the parameters once, so it's computationally simple, and I use `joblib` to parallelize the bootstraps. What's nice is that this can be implemented for any `statsmodels` model, or really any estimation model that implements score and Hessian methods.

I've implemented this so far in a Jupyter notebook so you can see where I'm going with it; let me know if it passes at least a preliminary smell test. The data I tested it on is the same data used in the do-files of Kline and Santos (2012).
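For reviewers who don't want to open the notebook first, here's a minimal sketch of the idea. This is my own illustration, not the notebook code: it assumes the model exposes per-observation scores via something like `score_obs` (as many statsmodels MLE models do), and that `params_restricted` is the estimate with the null imposed, so the total score is generally nonzero.

```python
import numpy as np
from joblib import Parallel, delayed

def score_bootstrap_pvalue(model, params_restricted, n_boot=999, seed=0):
    """Sketch of the score bootstrap (illustrative, not the notebook code)."""
    S = model.score_obs(params_restricted)   # (n, k) per-observation scores
    H = model.hessian(params_restricted)     # (k, k) log-likelihood Hessian
    I_inv = np.linalg.inv(-H)                # inverse information (approx.)

    def lm_stat(s_total):
        # Wald/LM-type quadratic form in the (perturbed) total score
        return s_total @ I_inv @ s_total

    stat = lm_stat(S.sum(axis=0))            # statistic on the original data

    rng = np.random.default_rng(seed)
    child_seeds = rng.integers(0, 2**32 - 1, size=n_boot)

    def one_draw(s):
        # Rademacher weights; w @ S = sum_i w_i * s_i is the perturbed score
        w = np.random.default_rng(int(s)).choice([-1.0, 1.0], size=S.shape[0])
        return lm_stat(w @ S)

    boot = Parallel(n_jobs=-1)(delayed(one_draw)(s) for s in child_seeds)
    return (np.sum(np.asarray(boot) >= stat) + 1) / (n_boot + 1)
```

Seeding each bootstrap draw independently keeps the `joblib` runs reproducible regardless of how the workers get scheduled, which is one of the things I'd want the test suite to check.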
My plan is to test this against their code to make sure I'm getting the right numbers, but at least preliminarily the p-values look sane, and I even tested it against `wildboottest` for a linear model. The files are in the `tests/` folder; specifically, the notebook is `test_mle.ipynb`.
Of course, before merging this I will clean things up, incorporate it into the Python files, and create a test suite. But I wanted to start a PR now so we can iterate.
What do you think?
One thing I was thinking is that it might be good to refactor some of the WildBootTest classes so they would be a little easier to extend (rough sketch of what I mean below). Or do you think we should just add this to the codebase as-is? I'm game to do a little more work to make all code in `wildboottest` have a consistent experience.
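To make the refactor question concrete, here's one purely illustrative shape it could take (all names made up): a small base class that owns weight generation and the p-value plumbing, with each test implementing only its statistic.

```python
from abc import ABC, abstractmethod
import numpy as np

class BootstrapTestBase(ABC):
    """Hypothetical shared base: owns weight draws and p-value logic;
    subclasses only define how the statistic is computed."""

    def __init__(self, n_obs, n_boot=999, seed=None):
        self.n_obs = n_obs
        self.n_boot = n_boot
        self.rng = np.random.default_rng(seed)

    def draw_weights(self):
        # Rademacher by default; subclasses could override (Webb, Mammen, ...)
        return self.rng.choice([-1.0, 1.0], size=self.n_obs)

    @abstractmethod
    def observed_statistic(self):
        """Return the statistic on the original data."""

    @abstractmethod
    def statistic(self, weights):
        """Return the bootstrap statistic for one weight draw."""

    def pvalue(self):
        stat = self.observed_statistic()
        boot = np.array([self.statistic(self.draw_weights())
                         for _ in range(self.n_boot)])
        return (np.sum(boot >= stat) + 1) / (self.n_boot + 1)
```

A score-test subclass would then wrap the notebook logic in `statistic`, and the existing wild bootstrap classes would keep their current internals behind the same interface. But I'm happy to go whichever way fits the package better.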