Currently, if a browser cannot load the specific ML model required for the Proofreader, the API is expected to report itself as unavailable. Yet every modern browser already ships a mature spellchecking engine (e.g., Hunspell or OS-level integration). Decoupling the Proofreader API from the underlying implementation mechanics (LLM vs. dictionary) would spare developers from maintaining two separate code paths: one for the Proofreader API and one for legacy spellchecking.
Proposal to be added to the spec
If the User Agent’s primary machine-learning model is unavailable, the User Agent SHOULD attempt to fulfill the request using the platform's built-in spellchecking facilities.
In such fallback scenarios, the User Agent MAY ignore the includeCorrectionExplanations option; when it does, the explanation field in any resulting ProofreadCorrection objects MUST be null or an empty string, since the User Agent is not expected to synthesize explanations for dictionary-based corrections. Likewise, when operating in this fallback state, the User Agent is NOT REQUIRED to provide granular CorrectionType classification: it MAY default all type values to 'spelling' or 'grammar' as appropriate, ignoring more specific enum values (such as 'preposition' or 'capitalization') if the underlying engine does not support semantic categorization.
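To illustrate the single code path this proposal enables, here is a minimal sketch of a consumer that tolerates fallback output. The helper names (`describeCorrection`, `proofreadOnce`) are ours, and the ProofreadCorrection field names (`startIndex`, `endIndex`, `correction`, `type`, `explanation`) are assumed from the current explainer shape:

```javascript
// Render one correction in a way that tolerates the proposed fallback:
// `explanation` may be null/empty, and `type` may be a coarse
// 'spelling' or 'grammar' bucket rather than a fine-grained enum value.
function describeCorrection(c) {
  const type = c.type ?? 'spelling'; // coarse default in fallback mode
  const note = c.explanation ? ` (${c.explanation})` : ''; // explanation is optional
  return `[${type}] "${c.correction}" at ${c.startIndex}-${c.endIndex}${note}`;
}

// Hypothetical usage: under this proposal, the same call works whether the
// backend is an LLM or the platform dictionary engine.
async function proofreadOnce(text) {
  if (typeof Proofreader === 'undefined') return null; // API not exposed at all
  const proofreader = await Proofreader.create({
    includeCorrectionExplanations: true, // MAY be ignored in fallback
  });
  return proofreader.proofread(text);
}

// Simulated fallback-mode correction: coarse type, no explanation.
const fallbackCorrection = {
  startIndex: 4,
  endIndex: 9,
  correction: 'teeth',
  type: 'spelling',
  explanation: null,
};
console.log(describeCorrection(fallbackCorrection));
// → [spelling] "teeth" at 4-9
```

Because the rendering helper never assumes an explanation or a fine-grained type, it needs no branch on which backend produced the corrections.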