Applying the Model
Nneka Nnagbo
How might the model contribute to the design of AAC devices?
Modelling dysarthric conversation can help AAC devices provide access to interactional forms of conversation during in-person, situational conversations. Such a model gives the AAC system a structure that guides how it handles interactional forms of conversation for dysarthric speakers. It also helps the system “know where it is in a conversation, how the various stages of a conversation fit together, and the sequence of events that are likely to occur during a conversation,” as well as the type of output items (e.g., digitized or synthetic speech) that will be needed in the upcoming stage of the conversation, so that it can prepare to offer these to the user for selection and use (Arnott & Alm 2013). The AAC system needs this in order to optimize the selections and predictions that it makes as it tries to present its user with appropriate things to say as their next contribution to the conversation (Arnott & Alm 2013).
Once the model of dysarthric conversation is applied to the AAC system, it could be used to address specific interactional conversational barriers that dysarthric speakers face while using AAC devices in conversation, such as when telling stories or jokes, or sharing personal experiences about where they are from and their hobbies. In addition, the model will help me, the researcher, understand how dysarthric speakers converse during in-person, situational conversation (e.g., what are the feelings, thoughts, and emotions they are trying to convey to their conversational partners? What are the types of interactional forms of conversation they participate in? What are the AAC tools/resources needed for better interactional conversation?).
The idea
This idea is inspired by primitive and ancient tools of human communication: from cave paintings, the oldest known method of communicating, to rock carvings (petroglyphs), which used signs and symbols to deliver messages and convey stories. The ancient Egyptians were amongst the first people to use symbols as a form of written communication through their revolutionary design of hieroglyphics, which later developed into the alphabet system that we know today, as well as papyrus, the precursor to modern paper and the earliest paper-like material known to humans. Our modern writing system is fairly recent in comparison to these earlier communication tools and methods from which it evolved. It’s strange to think of writing as a technology because it has been a part of our societies for so long, but it has been instrumental in shaping human history. I would argue that the practice of writing on paper was inspired by ancient Egyptian designs and technologies. In particular, the technological tools of hieroglyphics, papyrus, and the reed pen, together, formed the original system of ‘writing on paper.’
There are many benefits to writing. Writing helps us think through things. It helps us process and get things out of our heads. In addition, there is something about writing with a good pen that feels very natural and fluid; it is enjoyable for its own sake, much like a good conversation. Handwriting could be a viable entry point to providing access to better interactional conversation in AAC devices for individuals who have dysarthria. The integration of handwriting into AAC systems, based on a dysarthric model of conversation, makes way for an additional, more expressive, and personalized mode of communication through AAC devices. People who have acquired dysarthria could make use of these handwriting capabilities in AAC systems during interactional conversations to have more meaningful interactions. This also creates an opportunity for them to make and preserve their own symbols and fonts (i.e., pictorial properties of a symbol) out of their handwriting.
This potential avenue of exploration also deemphasizes the use of natural speech during conversation as it presents alternative “design frames on communication that serve a wider range of functions beyond speech generation” (Ibrahim 2020). In addition, it disrupts the view that communication is necessarily organized around talk (i.e., verbal communication) (Ibrahim 2020).
The idea in conversation
Frame of mind
The following scenario showcases the idea (a low-fidelity prototype) in use. In this scenario, you will assume the role of a conversational partner to an individual who has acquired dysarthria and uses the prototype to engage you in an interactional conversation—telling a personal story. Transcripts and image descriptions are available directly after the interactive modules below.
Scenario
You’re having a conversation with a young woman named Chika, who acquired dysarthria last year as a result of a brain stem injury that put her in a coma for a short time. Since her injury, Chika prefers to use writing as her primary form of communication, as it has always been much easier for her than trying to speak. You ask Chika where she is from.
Listen to Chika’s story
Using her speech-generating AAC device, Chika constructs a message by typing the story of where she is from. Her AAC device then reads out her story in the form of synthetic speech.
Transcript
My name is Chika; it is a Nigerian name which means ‘God is greater’ in Igbo. I am from Nigeria, the Anambra State, located in the southeastern region of the country. I was born in the capital city of Awka.
Read Chika’s story
Using her handwriting AAC device, Chika writes out the story of where she is from.
To interact with the various features of the prototype, click on the interactive prototype (image) below. Click on the green i (info) icons to open a description box about a specific feature.
Image description
My name is Chika; it is a Nigerian name which means ‘God is greater’ in Igbo. I am from Nigeria, the Anambra State, located in the southeastern region of the country. I was born in the capital city of Awka.
Explore the Project