I couldn’t pronounce the Thai word. We were using the classic ‘I say, now you repeat’ pattern where the learner can’t reproduce the sound because it doesn’t exist in any of the languages he knows. Unless one is good at generating novel random sounds by manipulating body and breath in ways that go against all muscle memory and everything one knows about speech, ‘listen and repeat’ is a way to learn slowly, poorly or not at all.
This time was different. The weirdest thing happened. Without knowing why, I formed my tongue into a bowl (low middle, high rims) and drew it down while saying the sound I couldn’t pronounce. I didn’t need her expression to tell me I got it; my ears were enough. That was the second big step in convincing me to learn some Thai. The first (I can’t remember when it happened) was hearing distinct language sounds in what had previously seemed like continuous random noise. I’m going to give it a shot after the Adriatic trip. But I won’t be using ‘listen and repeat’.
Computer animation can show how the tongue, lips, palate, etc. combine to produce a given sound. Check out this site. Click on the American flag. In the new window, click on the blue square labeled ‘place’, then on the blue square in the row below labeled ‘glottal’, then on /h/. You’ll both see and hear how the sound is produced.
Don’t have a computer handy? The International Phonetic Alphabet (IPA) is an alphabetic system of phonetic notation, created to be a standardized representation of the sounds of oral language. All sounds. All languages. The first version was created in 1888. This isn’t rocket science.
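The notation is nothing exotic, either: every IPA symbol is an ordinary Unicode character, so building yourself a little pronunciation cue sheet is trivial. A minimal Python sketch (the sound labels are standard IPA terminology; the example words are my own illustrations):

```python
# IPA symbols are plain Unicode characters, so a cue sheet is just a dict.
# Descriptions use standard IPA labels; the English example words are
# illustrative choices, not drawn from any particular course.
ipa_cues = {
    "h": "voiceless glottal fricative (as in English 'hat')",
    "θ": "voiceless dental fricative (as in English 'thin')",
    "ʔ": "glottal stop (the catch in the middle of 'uh-oh')",
}

for symbol, description in ipa_cues.items():
    print(f"/{symbol}/  {description}")
```

The point isn’t the code; it’s that a 130-year-old notation covering every speech sound fits in a lookup table a learner can carry anywhere.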
Not using such tools to teach non-native sounds is like a skilled harmonica player blowing a single note, then telling the student to play the same note, without mentioning which hole he blew or drew through, or that his tongue covered the holes he wasn’t using. A student could learn to play the right note eventually, but why would anyone teach that way unless they were being paid by the hour?