Today's #chi2020 virtual reading group paper is by @pokristensson - "A Design Engineering Approach for Quantitatively Exploring Context-Aware Sentence Retrieval for Nonspeaking Individuals with Motor Disabilities" - http://pokristensson.com/pubs/KristenssonEtAlCHI2020.pdf
OK, I have a lot of thoughts about this paper... am also curious what @shaunkane thinks of it...?
Also what does @merylalper think of this?
So, I think the authors make a fair point that having an AAC user switch to a new device for a long time to test a context-dependent system may be impractical and even unethical...
And I do understand that theoretical models have a place as an approach for estimating bounds of what may be possible. For example, I have found @pokristensson's theoretical model of maximum "eye swipe" typing performance helpful as an upper bound in past work...
But I am very concerned that this particular paper makes so many assumptions that the resulting theoretical output may not be very useful in practice.
For instance, the assumption that context tags are always 100% correct! This is unlikely to be true in practice...
Also, the assumptions about how many potential sentences can fit on a display seem to embed a lot of assumptions about sentence length, hardware size, screen resolution, user visual acuity, user cognitive load, UX of a given AAC, etc.
Assumptions about the degree to which users are willing to accept not-quite-right sentence predictions are also potentially a problem. @shaunkane & I found AAC users care quite a bit about self-presentation https://www.microsoft.com/en-us/research/wp-content/uploads/2016/10/aacselfexpression-1.pdf
I am also not convinced the AAC dataset used to model the system is a good choice; that data was gathered from able-bodied crowdworkers imagining what AAC users would say, and may not represent the variety of actual AAC speech.
The idea of the context labels is quite interesting, but of course AI-based context labels may have complex privacy tradeoffs to consider. I don't know the right answer to this challenge; it is something I discuss here: https://www.microsoft.com/en-us/research/uploads/prod/2019/08/ai4a-ethics-CACM-viewpoint-arxiv-updated.pdf
There are other examples of context for AAC prediction in prior work beyond those discussed in the paper. For instance, @shaunkane explored using computer vision to propose sentences based on objects in the immediate environment: https://www.microsoft.com/en-us/research/wp-content/uploads/2017/04/scenetalk.pdf
Another way to generate context-appropriate sentences, rather than an IR-based approach using context tags, is co-construction of text with conversation partners: https://www.microsoft.com/en-us/research/wp-content/uploads/2016/10/AACrobat-1.pdf
Also curious what @BeneteauErin and @svaleval thought of this paper...
Also, thanks so much to @pokristensson for engaging in a lively discussion on this topic on this thread! I think his comments here helped in positioning the paper's contribution!