We almost missed this finding.
When you run a survey study, incomplete responses are typically treated as a methodological inconvenience. You note the completion rate, acknowledge it as a limitation, and proceed with analysis on whoever finished. The incomplete data gets set aside, mentioned in a footnote, forgotten.
But when we looked at who didn't finish the three surveys in the EFEIA Protocol and compared them to those who did, a pattern emerged that changed how we understand not just our own data, but EHS research in general.
Of the 286 people who enrolled in the census, 94 completed all three surveys. The question we should have asked earlier: what happened to the other 192?
The Numbers
The completion pattern broke down like this: 33% of participants completed all three surveys, 7% completed two, and 60% completed only one.
At first glance, this looks like a typical dropout curve. Long surveys lose people. Attention wanders, priorities shift, life intervenes. A 33% completion rate isn’t unusual for a multi-part online assessment.
But something didn’t fit. These weren’t random internet users clicking a link out of mild curiosity. They were people concerned enough about electromagnetic hypersensitivity to seek out a specialized assessment, provide detailed information about their symptoms, and begin a process they knew would take time. They were motivated. They wanted answers.
Why would motivated people stop?
They Weren’t Disinterested. They Were Too Affected.
We compared scores between participants who completed all three surveys and those who completed only one. If dropout were random, or driven by lack of interest, we’d expect similar scores between groups. People who found the survey boring would have roughly the same exposure levels and symptom severity as people who found it engaging.
That’s not what we found.
On Survey A, measuring lifestyle and EMF exposure, the single-survey group scored 10% higher than the complete group. Higher scores mean worse EMF hygiene, greater exposure burden. The people who stopped were more exposed than those who continued.
On Survey B, measuring symptom severity, the gap widened. The single-survey group scored 15% higher. More symptoms. Greater burden. The people who stopped were sicker than those who continued.
On Survey C, measuring sleep dysfunction, the gap was largest of all: 29% higher scores in the single-survey group. Dramatically worse sleep. And here, the pattern made sudden, terrible sense.
The most affected participants couldn’t finish because their condition is what prevented them from finishing.
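For readers who want the mechanics, the comparison is nothing more exotic than a difference of group means expressed as a percentage of the completers' mean. A minimal sketch, using made-up scores rather than our census data:

```python
# Sketch of the group comparison described above. Scores are illustrative
# placeholders; in practice each entry is one participant's total on a survey.

def pct_gap(single_survey_scores, complete_scores):
    """Percent by which the single-survey group's mean exceeds
    the complete group's mean."""
    mean_single = sum(single_survey_scores) / len(single_survey_scores)
    mean_complete = sum(complete_scores) / len(complete_scores)
    return 100 * (mean_single - mean_complete) / mean_complete

# Hypothetical Survey B (symptom severity) totals:
complete_group = [38, 41, 44, 37, 40]  # completed all three surveys
single_group = [46, 44, 48, 45, 47]    # completed only one survey

print(f"Survey B gap: {pct_gap(single_group, complete_group):.0f}%")  # -> 15%
```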
Why Sleep Shows the Largest Gap
Completing a three-survey assessment requires capacities that severe sleep disruption specifically impairs.
You need cognitive function to understand the questions. Sleep deprivation degrades comprehension, working memory, and the ability to hold multiple pieces of information in mind while formulating a response. Questions that seem straightforward when you’re rested become effortful when you’re not.
You need energy for sustained effort. Each survey takes time and attention. Stringing three of them together, even spread across days, requires a reserve that chronic sleep dysfunction depletes. When your baseline energy is already exhausted by daily functioning, an optional assessment becomes one more thing you can’t get to.
You need concentration for accurate responses. The symptom survey asks about 25 different experiences, each rated on a scale. The sleep survey probes multiple dimensions of a complex phenomenon. Responding accurately rather than just clicking through requires focus, and focus is exactly what fragments when sleep architecture collapses.
You need follow-through capacity. Starting something is easier than finishing it. The executive function required to return to a task, pick up where you left off, and push through to completion depends on prefrontal resources that sleep deprivation taxes heavily. People with severe sleep dysfunction often describe a pattern of beginning things they can’t complete, not from lack of will but from lack of capacity.
The 29% gap in sleep scores isn’t coincidental. Sleep dysfunction creates the very impairments that prevent people from documenting their sleep dysfunction. The worse your sleep, the less able you are to complete an assessment that would reveal how bad it is.
What This Means for EHS Research
This isn’t just a limitation of our census. It’s a structural problem in how EHS research is conducted.
Nearly all published studies rely on complete-case analysis. You recruit participants, administer assessments, analyze whoever finishes. The people who drop out become missing data, acknowledged in the methods section, excluded from the findings. This seems methodologically sound: you can only analyze what you have.
But if the completion process itself filters out the most severely affected, then every complete-case analysis is systematically biased toward the healthier end of the spectrum. The "average" EHS patient in published research isn't average at all. They belong to the subset who retained enough function to complete the study requirements.
Our data suggests this bias is substantial. If single-survey participants show 10-29% higher scores across domains, then published findings likely underestimate true population burden by 20-30%. The “typical” symptom profile in the literature is actually the profile of those well enough to fully participate. The sickest patients are statistically invisible.
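To see the mechanics of the bias, a toy calculation helps. The group means below are hypothetical round numbers keyed to the completion breakdown (33% / 7% / 60%) and the observed 15% Survey B gap. The result shows the direction of complete-case bias under these toy numbers; it is not a reproduction of the 20-30% estimate above, in part because dropouts' scores on the surveys they never reached are unknown and, if the widening gaps are any guide, plausibly worse.

```python
# Toy illustration of complete-case bias; all numbers are hypothetical.
# Group shares follow the completion breakdown reported above.
shares = {"complete": 0.33, "two_surveys": 0.07, "one_survey": 0.60}

# Hypothetical mean symptom scores: the one-survey group sits 15% above
# completers (the observed Survey B gap), the two-survey group in between.
means = {"complete": 40.0, "two_surveys": 43.0, "one_survey": 46.0}

population_mean = sum(shares[g] * means[g] for g in shares)  # 43.8
complete_case_mean = means["complete"]                       # 40.0

bias = 100 * (population_mean - complete_case_mean) / population_mean
print(f"complete-case mean understates the population mean by {bias:.0f}%")
```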
This creates a distorted picture of the condition. EHS appears milder than it is. Intervention needs appear lower than they are. The urgency of the problem is systematically understated.
And there’s a secondary effect: when patients read research describing “average” EHS severity and find their own experience is far worse, they may conclude they’re outliers, unusually severe cases, perhaps even that something else must be wrong with them. In reality, they may be more typical than the research suggests. The research just never captured people like them.
A Different Kind of Finding
Traditionally, we think of findings as coming from data we collect. Someone answers a question, we record the answer, we analyze the pattern. Missing data is absence, a gap where information should be.
But the completion paradox suggests something different: the pattern of missing data is itself informative. The 192 people who didn’t finish weren’t silent. Their inability to complete was a communication, one we almost failed to hear.
When someone with suspected EHS cannot sustain a multi-part assessment, that tells us something about their functional status. It’s not separate from their condition. It’s a manifestation of it. The very symptoms we’re trying to measure, including cognitive dysfunction, fatigue, impaired concentration, and reduced follow-through, are what prevent measurement.
In a sense, non-completion is diagnostic. If you can't finish the assessment, you've already told us something important about your severity level. The blank responses where Surveys B and C should be aren't absence of data. They're data of a different kind.
This reframes how we should think about research participation. The goal isn’t just to maximize completion rates so we have more data to analyze. It’s to recognize that who completes and who doesn’t carries information about the population we’re studying. Ignoring that information doesn’t make it go away. It just makes our conclusions wrong.
Implications for Practitioners
For clinicians working with EHS patients, the completion paradox has practical implications.
First: don’t interpret non-completion as non-engagement. When a patient fails to finish intake paperwork, or doesn’t complete the symptom diary you asked them to keep, or can’t sustain the assessment protocol you designed, the instinct may be to see this as lack of motivation or follow-through. The completion paradox suggests another interpretation: they may be too impaired to complete it, and that impairment is part of what you’re assessing.
This doesn’t mean you abandon assessment. It means you adapt it.
Shortened protocols make a difference. If a comprehensive assessment takes cognitive and energetic resources your patient doesn’t have, a briefer version that captures essential information may be more successful, and more representative of their actual status. Getting 70% of the information from 100% of patients may be more valuable than getting 100% of the information from 33%.
Phased approaches help too. Rather than administering everything at once, spreading assessment across multiple shorter sessions gives recovery time between efforts. The patient who can’t sustain 45 minutes of questions may be able to handle three 15-minute sessions across a week.
Proxy measures can supplement self-report. If a patient struggles to complete a detailed sleep diary, a simple “did you wake refreshed?” rating each morning captures something. If a comprehensive symptom inventory is too much, tracking the top three complaints may be feasible. Imperfect data is better than missing data.
And there’s value in noting what patients couldn’t complete, rather than just discarding it. If someone begins an assessment and stops partway through, document where they stopped. Track what was manageable and what wasn’t. This becomes part of their clinical picture.
Implications for Patients
If you’re reading this and recognizing yourself, recognizing the pattern of starting things you can’t finish, of assessments abandoned partway through, of good intentions that didn’t survive contact with your actual capacity, there’s something we want you to understand:
That struggle is information. It’s not separate from your condition. It’s part of it.
The inability to complete things, to sustain cognitive effort, to follow through on tasks that seem straightforward, is one of the most common functional impairments in EHS. Fatigue, concentration problems, and memory issues ranked among the top symptoms in our Survey B data. These directly affect the capacity to do exactly what we asked participants to do.
If you couldn’t finish our survey, or any other assessment, that doesn’t mean you failed. It means the assessment captured something about your state, just not in the way it was designed to. Your non-completion told us you’re likely among the more severely affected, that your sleep is probably significantly disrupted, that your functional capacity is compromised in ways that the assessment itself couldn’t measure because you couldn’t get there.
You’re not an outlier. You’re not uniquely bad at completing things. You’re part of a pattern that affects the majority of people seeking EHS evaluation. Sixty percent of our participants were where you are. The difference is that you now know this is a recognized pattern, not a personal failing.
This also means that if you’ve read EHS research and felt your experience was more severe than the “typical” patient described, you may be right, but not because you’re unusually severe. The research never included people at your severity level. They couldn’t complete the studies either.
What Research Must Do Differently
The completion paradox isn’t just our problem to solve. It’s a challenge for everyone studying EHS and similar conditions.
Traditional methods assume that participation is independent of severity. They assume the people who complete studies are representative of the people who start them. For conditions where functional impairment affects the capacity to participate, this assumption is wrong, and it biases every conclusion that follows.
Addressing this requires methodological innovation.
Adaptive assessment protocols that adjust length and complexity based on participant capacity could capture data from people who would otherwise drop out. If someone is struggling, the protocol shortens. You get less detailed information but from more representative participants.
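As a sketch of what that could look like in practice: here, per-item response time stands in for participant strain, and the threshold, item counts, and the notion of a predefined core item subset are all assumptions for illustration, not a validated design.

```python
import time

CORE_ITEMS = 8       # assumed abbreviated core set within a longer survey
SLOW_SECONDS = 45    # per-item response time treated as a crude strain signal

def administer(items, ask):
    """Present items in order; once the core set is covered, stop early if
    the participant slows down markedly, rather than losing them entirely."""
    answers = {}
    for i, item in enumerate(items):
        start = time.monotonic()
        answers[item] = ask(item)
        if i + 1 >= CORE_ITEMS and time.monotonic() - start > SLOW_SECONDS:
            break  # keep partial data from a representative participant
    return answers
```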
Partial completion analysis should become standard rather than exceptional. Rather than excluding anyone who didn’t finish, weight the available data appropriately and acknowledge what it represents. A participant who completed two of three surveys still contributed information.
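In its simplest form this is available-case analysis: summarize each survey over everyone who completed that survey, and report the per-survey sample sizes alongside the means. A minimal sketch, assuming scores are stored per participant with None marking surveys never reached:

```python
# Available-case means: use every completed survey rather than only the
# records of triple completers. Records and values here are hypothetical.
records = [
    {"A": 52, "B": 46, "C": 60},     # completed all three
    {"A": 49, "B": 44, "C": None},   # completed two
    {"A": 58, "B": None, "C": None}, # completed one
]

for survey in ("A", "B", "C"):
    scores = [r[survey] for r in records if r[survey] is not None]
    mean = sum(scores) / len(scores)
    # Reporting n per survey surfaces the completion pattern as a finding.
    print(f"Survey {survey}: mean {mean:.1f} (n={len(scores)}/{len(records)})")
```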
Severity-stratified recruitment could deliberately oversample from severely affected populations, with accommodations designed specifically for their limitations. This is harder and more expensive, but it’s the only way to ensure research represents the full spectrum.
Caregiver and proxy reports can supplement self-assessment when patients can’t complete detailed questionnaires themselves. A family member’s observations about sleep patterns and daily function aren’t as precise as self-report, but they’re better than nothing.
And published research should routinely report completion patterns as findings, not just limitations. If 60% of your sample couldn’t finish, that should be in the results section, not buried in methods. It’s telling you something about the population you’re studying.
What 192 People Taught Us
The 94 participants who completed all three surveys gave us the data we analyzed. The correlations, the phenotypes, the risk stratification, everything in our findings comes from them.
But the 192 who couldn’t finish taught us something equally important: that our findings are incomplete in a specific, predictable direction. That the picture we’re presenting is the healthier end of the spectrum. That somewhere out there are people too impaired to document their impairment, and their experience is more severe than anything in our reports.
Research has historically treated these people as noise, as dropout statistics, as methodological limitations to acknowledge and move past. But they’re not noise. They’re signal. Their absence from our data is presence of another kind, telling us the condition is worse than our numbers show, the need is greater than our analysis suggests.
Including them, really including them, requires changing how research is designed, conducted, and interpreted. It requires recognizing that the absence of data is itself data, that non-completion carries meaning, that the people we most need to understand are often the ones least able to participate in being understood.
The 192 people who couldn’t finish our census are still part of what it found. We just had to learn how to hear what they were telling us.
This article examines the sleep-symptom relationship in depth. For the complete findings including methodology, phenotype classification, and sensitivity analysis, the full reports are available at 2025 EHSGC Reports.
