Same Test, Different World: What Standardized Scores Actually Measure in New Hampshire
A note before we begin. This piece does not argue that standardized testing is inherently wrong, or that any school, administrator, or teacher has acted improperly. What it examines is something more specific: what happens when the same test is administered on the same day to students in two towns that share a school district, a superintendent, and a state testing window — but whose educational circumstances are separated by a distance that test scores alone cannot explain. The data cited here is drawn from publicly available sources: the New Hampshire Department of Education, the NH School Funding Fairness Project, the NH Fiscal Policy Institute, and national school performance aggregators drawing on NCES data. The pattern that emerges is documented. The conclusions are ones the reader can reach independently.
Every spring, students across New Hampshire sit down for the NH Statewide Assessment System — the NH SAS. The tests are the same statewide. The standards are the same. The testing windows are the same. And when the results come back, they are treated as a measure of how well students are learning and, by extension, how well their schools are performing.
In most public discussions of those results, that is where the analysis stops.
This backgrounder asks what happens when you do not stop there. When you take two communities — same district, same tests, same dates — and look carefully at what surrounds those scores. What the towns earn. What they spend. What their tax bills look like. What share of their students carry additional learning needs. And what each school does in the weeks before the testing window opens.
What you find is not a story about good schools and struggling ones. It is a story about what standardized test scores are actually measuring — and how much of that measurement has nothing to do with what happens in a classroom.
Two Towns, One District
Charlestown and Walpole are both members of Fall Mountain Regional School Administrative Unit 60 in southwestern New Hampshire. Their students share a superintendent. They will eventually share a building at Fall Mountain Regional High School in Langdon. They take the same state assessments, on the same days, under the same state requirements.
The similarities end approximately there.
Charlestown carries the highest property tax rate in New Hampshire: $36.54 per $1,000 of assessed value. A home valued at $500,000 in Charlestown generates an annual tax bill of $18,270. That rate is not the product of extravagant local spending. It is the product of a relatively modest property tax base — lower overall property values generating insufficient revenue at any reasonable rate — combined with the full weight of school funding, municipal services, and county obligations that New Hampshire places almost entirely on local property taxpayers. Readers of this publication’s earlier piece on New Hampshire’s property tax system will recognize the structure immediately: it is the same architecture that produces $2.62 rates in wealthy resort communities and $36.54 rates in working towns, for the same constitutional obligation.
Walpole sits just across the county line in Cheshire County, immediately south of Charlestown in Sullivan County — yet both towns are served by the same SAU. Its property tax rate is substantially lower. Its residential profile is considerably more affluent. And its students, taking the same tests on the same days, score dramatically differently.
The numbers are not ambiguous.
Charlestown Primary School, serving grades PreK through 4, ranks in the bottom half of all New Hampshire elementary schools for combined test performance. Math proficiency runs 35 to 39 percent — against a state average of 42 percent. Reading proficiency runs 40 to 44 percent — against a state average of 51 percent. North Charlestown Community School — formerly the Farwell School — served the village of North Charlestown and recorded math proficiency between 20 and 29 percent and reading proficiency between 11 and 19 percent in its final years of operation, placing it 425th out of 457 ranked New Hampshire schools.

On March 12, 2024, Charlestown voters approved its permanent closure by a margin of 276 to 269 — seven votes. By that point, declining enrollment had reduced the school’s use to pre-K only, and the property has since reverted to the Farwell Trust. Its remaining students were consolidated into Charlestown’s schools.

That seven-vote margin deserves a moment’s pause. It was not a community eager to shed an institution. It was a community that had run out of options — and the children who attended that school absorbed the disruption of its closure on top of everything else the data already shows surrounding them.
Walpole’s schools tell a different story. North Walpole School on Cray Road, serving grades 2 through 4, achieves 57 percent math proficiency and 62 percent reading proficiency — both well above state averages, ranking it 38th out of 234 New Hampshire elementary schools and earning a U.S. News Best Elementary Schools designation. Walpole Elementary School on Bemis Lane, serving grades 5 through 8, achieves 45 to 49 percent in math and 60 to 64 percent in reading, ranking in the top 30 percent of all New Hampshire schools and earning a U.S. News Best Middle Schools designation.
Same district. Same tests. Same dates. The gap between those numbers is not small. It is the kind of gap that, in a public accountability system, marks one set of schools as succeeding and another as struggling.
What it does not mark — not directly, not automatically, not without further examination — is the reason.
What the Scores Are Sitting On
Before a single student in either town opens a test booklet, the circumstances surrounding that test are already dramatically unequal.
Economic need. Charlestown Primary School enrolls 47 to 52 percent of its students in the free or reduced-price lunch program — the standard federal proxy for economic disadvantage in school populations, and the figure most consistently correlated with academic performance in the research literature. Walpole Elementary enrolls 27 percent. Within the same school district, on the same test, the student population presenting for assessment in Charlestown carries nearly twice the rate of economic disadvantage as the one in Walpole.
This is not a minor variable. It is, in the academic literature, one of the strongest and most consistently documented predictors of standardized test performance — stronger, across most studies, than school quality, teacher experience, or instructional approach considered in isolation. A Harvard-based research team studying SAT and ACT performance found that children of the wealthiest one percent of Americans were thirteen times more likely than children of low-income families to score at the highest levels. Stanford researcher Sean Reardon has documented that economic gaps in academic achievement appear before kindergarten and compound throughout schooling. The connection between economic disadvantage and test scores is not a hypothesis. It is one of the most replicated findings in education research.
Special education concentration. School staff with direct knowledge of both programs report that Charlestown’s schools serve an above-average concentration of students with Individualized Education Programs — a figure verifiable through the NH Department of Education’s public iReport database for any reader who wishes to confirm it. Students with IEPs face a compounding challenge with standardized testing: they are assessed against the same grade-level standards as their peers, on instruments that the test designers themselves caution are not designed for accountability determinations, in a format that may not reflect their actual knowledge and capability.
In New Hampshire, educating one student with an IEP costs an average of $31,093 more per year than educating a student without one. The state and federal governments together cover 16.65 percent of that cost. The remaining 83.35 percent falls to local property taxes. In Charlestown — already carrying the highest property tax rate in the state — every additional IEP student represents a budget pressure that a town like Walpole, with a stronger tax base and lower overall rate, can absorb more readily.
The squeeze is real, documented, and directional. It runs against the towns with the greatest need.
The Preparation Gap
The test scores between Charlestown and Walpole reflect not only what students know, but what they have been given the opportunity to practice before they demonstrate it.
In the weeks before the annual state testing window — according to school personnel with direct knowledge of both programs — Walpole’s students receive structured preparation covering the testing format itself: what the screens look like, how questions are presented, how answers are entered, what to expect from the testing environment. This is not unusual in resource-advantaged schools. Two-week pre-test preparation sessions covering test format and procedure are a widely documented practice in schools serving more affluent populations nationally.
Charlestown’s students receive the same test. The preparation for that test is not equivalent.
This distinction matters more than it might initially appear. Standardized tests measure a compound variable: knowledge and content mastery on one hand, and test-taking fluency — familiarity with format, timing, question conventions, and navigation — on the other. For a student encountering a particular testing interface for the first time, a meaningful portion of cognitive load during the assessment goes toward figuring out how the test works rather than demonstrating what they know. For a student who has spent two weeks practicing in a simulated environment, that cognitive load is already resolved before the testing window opens.
The test does not know the difference. The score does not record it. The accountability framework treats the results as equivalent measures of equivalent preparation.
They are not.
What the Scores Are Not Measuring
A 2022 analysis in The Century Foundation put the core issue directly: schools that narrow their curricula to focus on test preparation do not generally improve students’ test performance. The best way to help students perform well on standardized reading and math assessments may be to spend less time on decontextualized test preparation and more time on rigorous, meaningful content engagement.
The research on standardized testing validity raises a related concern that the accountability framework rarely surfaces. The tests measure a defined and relatively narrow set of skills — primarily mathematics computation and procedural knowledge, and reading comprehension of specific passage types. They do not measure critical thinking capacity, collaborative problem-solving, creative reasoning, scientific curiosity, or the social and emotional foundations that developmental researchers consistently identify as predictive of long-term learning success. What the tests capture is real. What they omit is also real, and the omission is not evenly distributed: the skills that standardized tests measure most reliably tend to be the skills most amenable to direct instruction and rehearsal, which means they are disproportionately accessible to students whose schools have the time, resources, and institutional priority to prepare for them.
There is a deeper validity concern embedded in the test design itself. The correlation between socioeconomic status and standardized test scores is among the most consistently documented relationships in education research. A large multi-dataset study found that while socioeconomic status correlates with test scores at r = .42, controlling statistically for socioeconomic status barely changes the test-to-performance correlation — suggesting the tests capture something real about academic preparation. What that finding does not resolve is whether the academic preparation gap it captures reflects differences in school quality, or differences in everything that surrounds school: stability, nutrition, housing, healthcare access, parental availability, summer learning loss, and the accumulated advantages and disadvantages that compound across a childhood.
When the accountability system presents Walpole’s scores and Charlestown’s scores as a report on school quality, it is presenting the output of that entire system — and attributing it entirely to the schools.
The Funding Architecture Behind the Scores
This publication has examined New Hampshire’s education funding structure in two earlier backgrounders: the adequacy fraud piece, which documented the state’s thirty-three-year pattern of withholding constitutionally owed education funding from public schools, and the property tax piece, which documented that New Hampshire relies on property taxes for 63 percent of all state and local government revenue — the highest proportion of any state in the country, by a margin that is not close.
Those two findings meet in Charlestown in a specific and documentable way.
When the state withholds adequate education funding — a pattern the courts have repeatedly found unconstitutional and which continues regardless — the shortfall lands on local property taxes. It lands hardest on communities with the least capacity to absorb it: towns with lower property values, higher needs, and rates already stretched toward their practical limits. Charlestown, at $36.54 per $1,000, has nowhere left to go. The margin that a wealthier community might use to fund additional staffing, enrichment, or targeted intervention simply does not exist.
The result is a feedback loop with a clear direction. The state underfunds education. The underfunding lands on local property taxes. The local tax burden in property-poor communities is already the highest in the state. The resources available for instruction, support, and preparation are constrained accordingly. The test scores reflect those constrained resources alongside the demographic factors that surround them. The accountability framework then presents the scores as a measure of school quality — and the cycle of disadvantage is confirmed by data that was generated by the disadvantage itself.
Who Designs the Tests — and Who Profits
New Hampshire’s standardized testing landscape operates on two tracks that are worth understanding separately.
The NH SAS — the state-required summative assessment given each spring — is administered through an online portal operated by Cambium Assessment. The state’s assessment lineage runs through Cognia, the entity formed when Dover, New Hampshire-based Measured Progress merged with the Georgia-based accreditation firm AdvancED. There is a certain irony in the geography: the primary instrument used to assess New Hampshire public school students was designed by a company that was, until recently, a New Hampshire company. It has since been absorbed into a much larger national nonprofit enterprise, but the local roots are worth noting. A New Hampshire institution, built on New Hampshire tax dollars, now operates at national scale and answers to a national board.
The interim assessments — the MAP Growth tests used by many New Hampshire districts two to three times per year to track student progress between state testing windows — tell a more complicated story.
MAP Growth was developed by NWEA, the Northwest Evaluation Association, which operated for four decades as a nonprofit research organization with a genuine mission orientation. In January 2023, NWEA was acquired by Houghton Mifflin Harcourt. HMH had itself been taken private the previous year by Veritas Capital, a New York private equity firm, for $2.8 billion. When the HMH acquisition of NWEA closed, NWEA relinquished its 501(c)(3) nonprofit status. A separate foundation was established to receive the sale proceeds. The nonprofit identity that had defined NWEA for forty years was set aside to complete the transaction.
Veritas Capital’s investment philosophy is publicly articulated by its CEO: the firm targets technology companies operating in sectors dominated by the federal government — defense, healthcare, education — which it describes as captive markets paid for by tax dollars. The observation is accurate as a description of the standardized testing market. Federal law requires testing. That requirement does not create an optional marketplace. It creates an obligated customer base — school districts that must comply, funded by taxpayers who have no choice about whether testing occurs.
The entity that emerged from these acquisitions now occupies a structural position that warrants examination. HMH is one of the country’s largest publishers of curriculum: textbooks, lesson plans, and classroom materials. NWEA produces the assessments that many of the districts buying those materials use to measure student progress. When the same company sells both the curriculum and the test, the incentive to align them is financial as well as pedagogical. A district using HMH instructional materials and NWEA assessments is measuring its students on an instrument produced by the company whose textbooks it also purchased. One education market analyst noted the implication directly: districts relying on MAP Growth should expect to be encouraged to purchase HMH curriculum so it can be aligned to the test. The model is not hypothetical — it already existed at smaller scale with Curriculum Associates, which produces both the i-Ready assessment and the curriculum it measures. The HMH/NWEA merger extended that model to a market position of considerably greater reach.
The consolidation extends further still. Renaissance Learning has partnered with curriculum providers Great Minds and Savvas. McGraw-Hill has integrated Pearson’s assessment tool into its instructional materials. The pattern across the industry is consistent: the companies that sell schools what to teach are acquiring or partnering with the companies that measure whether students learned it. Critics of this trend — including the executive director of the Center for Assessment, which advises states on testing — have raised a concern that is structural rather than conspiratorial: where assessment RFPs once attracted five, six, or ten competing bidders, they now attract two or three. Expertise declines, costs rise, and the leverage that public institutions once held in procurement negotiations shifts to the vendors. Fewer options mean less accountability for the companies doing the measuring.
NWEA’s own published guidance for families includes a disclosure worth noting: the organization states explicitly that it does not recommend using MAP Growth for grade-level advancement or accountability purposes. That disclaimer is part of the documentation available to parents. It is considerably less visible in the public accountability frameworks that treat MAP Growth scores as meaningful measures of school performance and district quality.
New Hampshire taxpayers are financing this system — through property taxes, state education funding, and federal pass-through dollars. The communities bearing the heaviest per-capita burden of that financing are, as this piece has documented, the same communities whose students score lowest on the instruments the system produces. The companies that profit from the accountability framework those scores support are not located in Charlestown.
This is not an accusation of fraud or conspiracy. It is a description of a market structure — one that merits the same scrutiny this publication has applied to New Hampshire’s property tax architecture, its education funding adequacy obligations, and its constitutional amendment proposals. Public money flowing to private equity in exchange for mandatory compliance products is a pattern worth understanding, wherever it appears.
What the Scores Are Actually Measuring
The NH SAS tests themselves were built against the New Hampshire College and Career Ready Standards, aligned broadly to the Common Core framework adopted by most states in the early 2010s. The assessments are designed by professionals using established psychometric methods. The scores are not arbitrary. Within their defined scope, they measure something real.
The question this backgrounder has examined is not whether the tests work. It is what they are measuring when they produce the results they produce in communities like Charlestown — and whether a public accountability system that presents those results as a school quality report card is telling an accurate story, or a partial one.
The scores reflect student preparation. That preparation reflects instructional quality — but it also reflects family stability, economic security, housing, health, summer learning access, test familiarity, and the accumulated weight of living in a community where the property tax rate is the highest in the state and resources are stretched past their limit. A test that cannot distinguish between those variables is not measuring what the accountability framework claims it measures.
That is not an argument for abandoning assessment. It is an argument for reading the results honestly — for understanding that when two schools in the same district produce scores thirty percentage points apart on the same test on the same day, the distance between those numbers is not a measure of one school’s success and another’s failure.
It is a measure of everything that surrounds both schools.
And in New Hampshire, most of that is a funding story.
The Quiet Cost of a Score
There is a version of this story that is comforting in its simplicity. Some schools do well. Some schools struggle. The data shows us which is which. The accountability system ensures that struggling schools improve or face consequences.
That version requires believing that standardized test scores in a system like New Hampshire’s are measuring school quality — and only school quality. It requires setting aside what we know about the relationship between economic disadvantage and test performance. It requires ignoring what school personnel with direct knowledge describe about the preparation gap between Walpole and Charlestown. It requires not asking who designed the tests, who profits from them, and whether the instrument was built to answer the question the accountability system is asking it.
It requires, in short, looking at the number and not asking what the number is sitting on.
Charlestown’s students are not failing. Charlestown’s teachers are not failing. What is failing is a funding architecture that extracts the highest property tax rate in the state from one of its least wealthy communities, returns inadequate state support, concentrates students with the greatest learning needs in schools with the most constrained resources, and then measures the result with an instrument that cannot see any of that — and calls the output a school report card.
The test is the same in Walpole and Charlestown. The test is measuring the same thing in both towns.
That is precisely the problem.
Sources
The following sources informed this piece. Readers are encouraged to consult them directly.
NH Department of Education, Office of Assessment, NH SAS overview and testing requirements. education.nh.gov
PublicSchoolReview.com, Charlestown Primary School profile, 2024–2026. publicschoolreview.com
PublicSchoolReview.com, N. Charlestown Community School profile, 2024. publicschoolreview.com
PublicSchoolReview.com, Walpole Elementary School profile, 2025–2026. publicschoolreview.com
SchoolDigger.com, North Walpole School profile and NH ranking, 2024. schooldigger.com
SchoolDigger.com, Walpole Elementary School profile and NH ranking, 2023–2024. schooldigger.com
U.S. News & World Report, North Walpole School ranking, New Hampshire Elementary Schools. usnews.com
U.S. News & World Report, Walpole Elementary School ranking, New Hampshire Middle Schools. usnews.com
NH Fiscal Policy Institute / NH School Funding Fairness Project, “School Funding and Special Education Update with Latest 2023–2024 School Year Data,” February 2025. fairfundingnh.org
NH School Funding Fairness Project, “School Funding and Special Education: It’s Getting Worse,” March 2024. fairfundingnh.org
NH Fiscal Policy Institute and Valley News, “Property Tax Rates Vary Widely Across New Hampshire,” April 2026. vnews.com
Business NH Magazine, “New Analysis Details NH’s Massive Variation in Property Taxes,” April 2026. businessnhmagazine.com
Valley News, “Charlestown Town Meeting Results,” March 14, 2024. vnews.com
Valley News, “Charlestown Considers Shuttering School,” March 5, 2024. vnews.com
NWEA Wikipedia entry, acquisition history and MAP Growth description. en.wikipedia.org
EdWeek Market Brief, “Houghton Mifflin Harcourt Acquires Assessment Provider NWEA, in Education Mega-Deal,” January 2023. marketbrief.edweek.org
EdWeek Market Brief, “HMH and NWEA: An Inside Look at What’s Happened Since the Deal,” November 2024. marketbrief.edweek.org
EdWeek Market Brief, “Vendors Are Pairing Assessment and Curriculum. Is That What K-12 Officials Want?” March 2026. marketbrief.edweek.org
NWEA, Common Questions — Families (MAP Growth guidance including non-recommended uses). nwea.org
Steven Singer / Gadfly on the Wall Blog, “A Private Equity Firm, The Makers of the MAP Test, and an Ed Tech Publisher Join Forces,” January 2023. gadflyonthewallblog.com
K-12 Dive, “HMH Finalizes Acquisition of Assessment Provider NWEA,” May 2023. k12dive.com
NH Business Review, “Cognia’s the New Name of Former Measured Progress,” August 2019. nhbr.com
The Century Foundation, “This School Didn’t Teach to the Test — And Scored Better,” March 2022. tcf.org
Harvard Gazette, “Wide Gap in SAT/ACT Test Scores Between Wealthy, Lower-Income Kids,” November 2023. news.harvard.edu
Opportunity Insights / Harvard, Chetty, Friedman, and Deming, research on SAT/ACT performance by income. opportunityinsights.org
Fiveable / EBSCO Research Starters, “Standardized Testing and IQ Testing Controversies.” ebsco.com
Citizens Count NH, “Standardized Testing in Schools,” NH Issue Brief. citizenscount.org
Steven, “The Invisible Income Tax: What ‘No Income Tax, No Sales Tax’ Actually Costs in New Hampshire,” The Quiet Cost, April 30, 2026. thequietcost.substack.com
Steven, “The Adequacy Fraud: Thirty-Three Years of a Promise New Hampshire Has Never Kept,” The Quiet Cost, April 2026. thequietcost.substack.com
Fall Mountain Regional High School, Wikipedia entry, district history and composition. en.wikipedia.org
This piece reflects independent analysis of publicly available data, reports, and published journalism from the sources cited above. Where specific figures or findings originate in reporting by named organizations or researchers, attribution is made in the text. The NH DOE’s public iReport database contains school-level IEP enrollment data verifiable by any reader.
School personnel with direct knowledge of practices in both Charlestown and Walpole schools informed the description of pre-assessment preparation. Their identities are not disclosed.
About sources and drafting methods →