Thursday, May 17, 2012

"Building the Musical Muscle"

A friend recently sent me this TED talk by Charles Limb about cochlear implants and music. At the very least, it's an informative look into the complex acoustics of music and how they might translate into CI electrode-speak.

Now, I won't say too much about what I perceive to be a slightly condescending note in Limb's attitude toward deaf people. (Or perhaps "condescending" isn't the right word, but it's - something. I suspect those who know me well will sense it, too.) I also won't say too much about some disagreements I have with some of the subjective-perception arguments he makes. As a cochlear implant user, the segments Limb describes as sadly, horribly "identical" for anyone with a CI... actually are different for me, too. I can tell a violin from a trumpet. I can identify the flattened state of the altered melodies he plays, even though I can't describe it much further than that. And, although I agree with the assessment that music rehabilitation is something that's left out of CI rehab in general (I myself have hardly done any), remember that this is me speaking as a profoundly prelingually deaf individual. That is, an individual with no auditory memory at all, no real prior experience with music on which to base those auditory judgments. And all CI users are different.

Those contentions aside, Limb's main point is correct: music does not sound the same for CI users as for typically hearing people. (Does anything? No.) It's a fact of where the technology and the field are right now: a cochlear implant, while absolutely amazing in what it is able to accomplish, is nowhere near as sophisticated as natural hearing. And, whatever we do know about the auditory system, it pales in comparison with the complexity of neuroscience. A good point to keep in mind when asking what CI users can and cannot hear! Defining precise "hearing," and attaining it, is such an elusive game. Music, I confess, does not sound amazing to me. It's there, for sure. It's present. It's noise with more shape and function and reason than it used to have, though it's far more nebulous than speech. I frustrate myself trying to articulate, to pinpoint exactly what it "means," in the same way that a word can "mean" something. It's decent, and at the most it's something I can appreciate.

But always the question: What do these insights mean for me? How do I respond to this self-perception of my sensory insufficiency, of the inaccessible fragments of this rich world? While I do hope that researchers progress with both technology and rehabilitative strategies, the fact is I'm not going to stand before the abstract idea of music (for that's what it is to me still, something abstract) like Tantalus ever striving after a sip of that receding water. I'd like to keep listening and learning, but I refuse to dwell on irrevocable circumstance. I'm thankful for the other beauties there are in this world - a world, I assure you, very far from Tartarus, however excited I might be for future technological advancements.

Monday, May 7, 2012

Listening: A Collaborative Effort

So I should be working on my honors thesis right now (a major contributor to my silence as of late), but this was too cool not to share. This morning I went to my auditory therapy session, which as usual helped me feel on track with, you know, actually improving my listening skills. Too often I feel like the complex world of hearing (the world that hearing people live in, with background noise and multiple speakers and fast talkers and just too much speed and noise) leaves me floored and insufficient. Returning to basics shows that I'm doing better than I think.

We started out with a set of exercises we've done before, with completely open-set sentence discrimination. This is a level I've progressed to in the last few months, and considering everything it's an exciting milestone, even if it can be frustrating sometimes. (The sentences aren't complicated, but are more along the lines of "the sun shines during the day and the moon shines at night" or "you floss your teeth to get pieces of food that your toothbrush can't reach." They contain everyday objects and phenomena, but their subject matter could be coming from absolutely anywhere at all.)

First go: CI only, hearing aid turned off. I sat in my chair, admittedly tense as the words flooded past. As we discussed afterwards, the start of each sentence was like bracing myself for a wave. As it rushed by, I tried to stick my fingers down in the murky fluid, to grab onto passing fragments of water, seaweed, shells, sand - anything that would give me a clue to the overall meaning. This extended simile aside, I felt like I usually do during these exercises: like I was trying to glue together shattered pieces of glass. Isolated words jumped out at me, words like "teeth" and "sun," but most times it took many repetitions and much patience to get the whole thing. My analytical mind kicked into gear: noun here, verb there, that was a prepositional phrase. More often than not I'd emerge with the skeleton of the sentence, its overall structure, before I could flesh out what it meant.

Over several taxing minutes, we worked through ten sentences. Overall, not a bad effort, but not great either. I felt drained; there seemed to be no way to prime my mind to sort through this open-set material, time after time. Then my therapist suggested that we try something else: turn my hearing aid on. Listen to ten more sentences with both, and see what happened.

Okay, why not? I've long known that listening with both ears is easier for me than listening with one, even though the sound input that I get from my hearing aid is far less than what I get from my CI. I expected that the results would be marginally better - a little easier, but not dramatically so.

Here's what happened. The sentences, as random as before, flowed past - and time slowed. That's the immediate subjective description that comes to mind. With two ears working together, no longer was I crashing through that very fast-traveling wave, then gasping in retrospect and trying to figure out what I had heard. Time no longer split that way, with present perception floundering and auditory memory going into overdrive, juggling the present and the past in pursuit of meaning. Instead, I had the distinct experience of hearing and understanding the words in real time. It wasn't perfect, but the difference from trial #1 was astonishing. I heard, I understood, I moved on. I waited for the words to come to me instead of bracing myself for their impact. My hearing aid - my slivers of natural hearing, which in all honesty still sound smoother and more acoustically rich than the sometimes-harsh dynamism of the CI - made a world of difference.

Now, how could this be? I walked away thinking. It certainly never happened before with two hearing aids. When I take off my CI and listen with my hearing aid alone, it sounds awful. Squashed and diminished and flat. But when the two work together... wow. I was floored. I'd noticed the effect before, but not like this. What was hard for each ear separately (well, impossible for the hearing aid, doable but challenging for the CI) suddenly became relatively easy. How could my brain be so good at piecing together two very different ears to produce something cohesive and understandable? Where did it learn how to do that? How does something like that even work?

Now I think I can imagine what hearing people hear: that sort of dynamism (CI) combined with the smooth, lovely quality of normal hearing (HA). However my brain is combining those two elements, it's working. SO COOL.