benwade/hearing-explorer
format readme-v1
project hearing-explorer
version 0.1.0-prerelease
status active development
audience CI users, clinicians, researchers in speech perception
license AGPL-3.0
platform Windows 11 (primary), Python 3.11+
entry_point hearing_explorer.pyw
one_liner Self-assessment instrument for cochlear implant users to systematically document perceptual artifacts and generate audiologist-ready reports.
not_a_substitute_for clinical audiometric evaluation
maturity_honest Built by one CI user for their own use; shared openly; not yet validated with n>1.

Hearing Explorer

A desktop self-assessment instrument for cochlear implant users. It lets a CI wearer play controlled audio stimuli (pure tones or recorded speech), apply parametric EQ to isolate frequency regions, and rate what they hear against a structured artifact taxonomy (ringing, piercing, chipmunk, growl, temporal artifacts, and others) alongside quality metrics. Every trial is logged to CSV. The data can be compiled into an audiologist-ready report with effect sizes, trial counts, and specific frequency-band recommendations mapped to electrode regions.

The project exists because clinical audiograms and in-booth speech-in-noise tests don't capture the artifacts a CI user actually hears in daily life. A CI wearer who can describe "piercing on fricatives, ringing on nasals, recruitment on plosive onsets" with data to back it up is better positioned to request specific MAP adjustments than one who can only say "it sounds off."

Who this is for

  • CI users who want to systematically document what they hear and bring structured evidence to audiology appointments. You do not need to be technical. The app has a first-run wizard.
  • Clinical audiologists who want to see their patient's self-reported artifact pattern in a format that maps to programmable parameters.
  • Researchers in speech perception, psychoacoustics, or CI processing who may find the collection methodology (controlled stimuli, parametric EQ, paired-comparison protocols, phonetic word-tagging) relevant to their own work. The trial CSVs are designed for reuse.

What it produces

The primary output is a self-assessment report (assessment_reporter.py). A report is plain text, structured for readability, and includes:

  • Header with patient name, device, date, and total trials
  • Methodology section describing stimulus control, EQ parameters (biquad peak, Q=1.41), and rating scales
  • Findings section comparing EQ profiles against a flat baseline, with Cohen's d effect sizes and trial counts
  • Recommendations section with frequency-band actions (e.g., "Reduce gain in the 2 kHz region") supported by the evidence that motivated each one
  • Data confidence tier based on trial volume
  • A supporting data table of means and standard deviations by EQ profile
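The methodology section names a biquad peaking filter at Q = 1.41. For readers unfamiliar with the filter type, a minimal sketch of the standard RBJ Audio EQ Cookbook peaking form follows; this is illustrative, not the project's actual EQ code:

```python
import cmath
import math

def peaking_biquad(fs, f0, gain_db, q=1.41):
    """RBJ Audio EQ Cookbook peaking filter, normalized so a[0] == 1."""
    a_lin = 10.0 ** (gain_db / 40.0)          # square root of the linear gain
    w0 = 2.0 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2.0 * q)
    b = [1 + alpha * a_lin, -2 * math.cos(w0), 1 - alpha * a_lin]
    a = [1 + alpha / a_lin, -2 * math.cos(w0), 1 - alpha / a_lin]
    return [bi / a[0] for bi in b], [ai / a[0] for ai in a]

def magnitude_at(b, a, fs, f):
    """|H(e^{jw})| of a biquad evaluated at frequency f."""
    z = cmath.exp(-2j * math.pi * f / fs)
    num = b[0] + b[1] * z + b[2] * z * z
    den = a[0] + a[1] * z + a[2] * z * z
    return abs(num / den)

# A -6 dB cut centered on the 2 kHz band:
b, a = peaking_biquad(fs=44100, f0=2000, gain_db=-6.0)
print(20 * math.log10(magnitude_at(b, a, 44100, 2000)))  # ≈ -6.0 dB at center
```

A peaking filter of this form reaches exactly the requested gain at the center frequency and returns to unity gain far from it, which is what makes per-band adjustments composable.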

The raw trial CSVs are available in logs/ for independent analysis.

Install and run

Tested on Windows 11 with Python 3.11 and 3.12. Other platforms are untested.

git clone <repo-url>
cd hearing_explorer
pip install -r requirements.txt
python hearing_explorer.pyw

On first launch, a setup wizard guides you through volume safety, audio device selection, and profile setup. You will be asked to point the app at a directory of WAV files; any speech corpus works, but the app's phonetic tagging features are calibrated against the UW/NU speech corpus (see acknowledgements below).
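Conceptually, the library scan is just a recursive walk collecting WAV files under the chosen directory. A sketch of that idea (illustrative only, not the app's actual loader):

```python
from pathlib import Path

def scan_wav_library(root):
    """Recursively collect WAV files under root, case-insensitive, sorted for stable ordering."""
    return sorted(p for p in Path(root).rglob("*") if p.suffix.lower() == ".wav")

# Example: wavs = scan_wav_library("C:/corpora/speech")
```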

Method in brief

Each trial is one listen-and-rate cycle. The user:

  1. Loads an audio source (pure tone at a specified frequency, or a speech WAV from a scanned library).
  2. Optionally applies a parametric EQ profile (9 bands from 125 Hz to 8 kHz, plus preamp).
  3. Plays the processed audio through a selected channel (implanted ear, contralateral, or bilateral).
  4. Rates artifact severity on 0-5 scales and quality metrics on 1-7 scales.
  5. Optionally tags specific words in the transcript with the artifacts they triggered (for speech trials).
  6. Logs the trial. All ratings, the full EQ profile, source metadata, and any notes are written to a session CSV.
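Step 6 amounts to appending one flat record per trial to a session CSV. A sketch of that pattern with Python's csv module; the column names here are hypothetical, not the app's actual schema, which also carries the full EQ profile and source metadata:

```python
import csv
from pathlib import Path

# Hypothetical column set for illustration only.
FIELDS = ["timestamp", "source", "channel", "eq_profile",
          "artifact_ringing", "artifact_piercing", "quality_naturalness", "notes"]

def log_trial(csv_path, trial):
    """Append one trial dict to the session CSV, writing the header on first use."""
    path = Path(csv_path)
    new_file = not path.exists()
    with path.open("a", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow(trial)

log_trial("session.csv", {
    "timestamp": "2024-05-01T10:00:00", "source": "tone_2000Hz",
    "channel": "implanted", "eq_profile": "flat",
    "artifact_ringing": 0, "artifact_piercing": 3,
    "quality_naturalness": 4, "notes": "",
})
```

Keeping every rating, the full EQ state, and the stimulus identity in one row is what makes the later per-profile aggregation a straightforward group-by.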

Over enough trials, patterns emerge: which frequency regions generate which artifacts, which EQ adjustments improve quality without introducing new problems, which phonetic contexts are most affected. The stats module and reporter compile these patterns into the report described above.
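The effect sizes reported against the flat baseline are Cohen's d. A minimal sketch, assuming the conventional pooled-standard-deviation form for two independent samples (the reporter's exact variant is not specified here):

```python
import math
from statistics import mean, variance

def cohens_d(treatment, baseline):
    """Cohen's d with pooled standard deviation (two independent samples)."""
    n1, n2 = len(treatment), len(baseline)
    pooled_var = ((n1 - 1) * variance(treatment)
                  + (n2 - 1) * variance(baseline)) / (n1 + n2 - 2)
    return (mean(treatment) - mean(baseline)) / math.sqrt(pooled_var)

# Quality ratings (1-7) under one EQ profile vs. the flat baseline (made-up numbers):
eq_trials = [5, 6, 5, 6, 7, 5]
flat_trials = [4, 5, 4, 3, 5, 4]
print(round(cohens_d(eq_trials, flat_trials), 2))  # → 1.91
```

Because d is expressed in pooled-standard-deviation units, it lets an audiologist compare the size of an EQ effect across rating scales with different ranges.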

A design document at DESIGN_NEXT_PHASES.md covers planned additions: a noise-floor module (testing whether tonic stimulation improves naturalness), a quick-rate triage mode, VAS sliders for quality metrics, embedded clinical session protocols, and a structured exercise module.

Safety and scope

  • This tool does not replace clinical audiometric evaluation. It is a self-assessment instrument. The report it produces is intended to support a conversation with a qualified audiologist, not substitute for one.
  • The rating scales are subjective. They capture the user's perception, which is the point, but they are not clinical measurements.
  • Volume safety is the user's responsibility. The setup wizard includes guidance; read it. CI users in particular should be cautious with unfamiliar EQ settings.
  • No validation study has been conducted. The methodology is considered sound by the author, but it has not been published or peer-reviewed, and the instrument has only been used by one person (the author) to date.
  • Data stays local. All trial logs, presets, and settings are written to your local disk and never transmitted anywhere. The application makes no network requests. You are responsible for the privacy of your own log files; they may contain device identifiers, audiogram data, or other personal information you've entered, and they are not encrypted at rest.

Status

Version 0.1.0-prerelease. Under active development by a single author who is also the primary user. The code works and has produced ~100 trials of real data, but expect rough edges, Windows-centric assumptions, and features described in DESIGN_NEXT_PHASES.md that are not yet implemented.

If you are a CI user or researcher who wants to try the tool and report back, contact is welcome (see below).

License

AGPL-3.0. The open core of this project will remain AGPL-3.0. Any future premium modules (if any) would be separately licensed and clearly marked; nothing is currently planned on that front.

Acknowledgements

Speech stimuli used during development were drawn from the UW/NU speech corpus, provided by Dr. Richard Wright (University of Washington, Department of Linguistics). The phonetic tagging interface is calibrated against its speakers and transcripts.

Contact

Ben Wade — bestbenwade@gmail.com · ORCID: 0009-0009-5857-7447
