PostFrontal Forum

T O P I C    R E V I E W
r_milkway Posted - 08/03/2011 : 07:55:27
Here's a curious paradox related to American Sign Language, the system of hand-based gestures used by around 2 million deaf people in the US and elsewhere to communicate.

Almost 40 years ago, researchers discovered that although it takes longer to make signs than to say the equivalent words, on average sentences can be completed in about the same time. How can that be possible?

Today, Andrew Chong and buddies at Princeton University in New Jersey give us the answer. They say that the information content of the 45 handshapes that make up sign language is higher than that of phonemes, the building blocks of the spoken word. In other words, there is greater redundancy in spoken English than in signed English.

In a way, that's a trivial explanation, a mere restatement of the problem. What's impressive about the Princeton contribution is the way the team arrived at this conclusion.

The team determined the entropy of American Sign Language experimentally, by measuring the frequency of handshapes in video logs made by deaf signers and uploaded to youtube.com, deafvideo.tv and deafread.com, as well as in video recordings of signed conversations made on campus.
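
For the curious, here is roughly what that measurement amounts to in code. The following is a minimal sketch, not the team's actual pipeline, and the handshape counts are invented purely for illustration:

[code]
import math
from collections import Counter

def entropy_bits(counts):
    # Plug-in (maximum-likelihood) entropy estimate, in bits per symbol.
    total = sum(counts.values())
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

# Hypothetical transcription data: each key is a handshape label,
# each value the number of times it was observed in the videos.
observed = Counter({"B": 120, "5": 95, "1": 90, "A": 60, "S": 35})
print(f"Estimated entropy: {entropy_bits(observed):.2f} bits per handshape")
[/code]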

It turns out that the information content of handshapes is on average just 0.5 bits per handshape less than the theoretical maximum. By contrast, the information content per phoneme in spoken English is some 3 bits lower than the maximum.
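
To put a number on "theoretical maximum": if all 45 handshapes were equally likely, the entropy would be log2(45) ≈ 5.49 bits, so the measured figure works out at roughly 5 bits per handshape. A back-of-the-envelope comparison (the 40-phoneme inventory for English is my assumption, not a figure from the paper):

[code]
import math

h_max_sign = math.log2(45)        # ~5.49 bits: 45 equally likely handshapes
h_sign = h_max_sign - 0.5         # measured ~0.5 bits below the maximum

h_max_speech = math.log2(40)      # ~5.32 bits, assuming ~40 English phonemes
h_speech = h_max_speech - 3.0     # ~3 bits below the maximum, per the article

print(f"Sign:   {h_sign:.2f} of {h_max_sign:.2f} bits used")
print(f"Speech: {h_speech:.2f} of {h_max_speech:.2f} bits used")
[/code]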

This raises an interesting question. The spoken word has all this redundancy for a reason: it allows us to be understood over a noisy channel. Lessen the redundancy and your capacity to deal with noise is correspondingly reduced.

Why would sign language need less redundancy? "Entropy might be higher for handshapes than English phonemes because the visual channel is less noisy than the auditory channel...so error correction is less necessary," say Chong and co.
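
Here is a toy demonstration of the trade-off they are describing. The 3x repetition code is my choice of scheme, nothing to do with the paper; it lets the receiver out-vote random bit flips, at the price of sending three times as many bits:

[code]
import random

def transmit(bits, flip_prob):
    # Binary symmetric channel: each bit flips with probability flip_prob.
    return [b ^ (random.random() < flip_prob) for b in bits]

def encode_repeat3(bits):
    return [b for b in bits for _ in range(3)]

def decode_repeat3(bits):
    # Majority vote over each group of three received bits.
    return [int(sum(bits[i:i + 3]) >= 2) for i in range(0, len(bits), 3)]

random.seed(1)
msg = [random.randint(0, 1) for _ in range(10000)]

raw = transmit(msg, 0.05)
coded = decode_repeat3(transmit(encode_repeat3(msg), 0.05))

err = lambda a, b: sum(x != y for x, y in zip(a, b)) / len(a)
print(f"Uncoded error rate:      {err(msg, raw):.3f}")    # around 0.05
print(f"With 3x repetition code: {err(msg, coded):.3f}")  # around 0.007
[/code]
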
<a href="http://www.chinafengye.com">ÍÐÅ̲øÈÆ»ú</a>
<a href="http://www.chinafengye.com">¹Ü²Ä²øÈÆ»ú</a>
<a href="http://www.chinafengye.com">À­ÉìĤ²øÈÆ»ú</a>
<a href="http://www.chinafengye.com">±¡Ä¤²øÈÆ»ú</a>
They go on to speculate that signers cope with errors in an entirely different way from speakers. "Difficulties in visual recognition of handshapes could be solved by holding or slowing the transition between those handshapes for longer amounts of time, while difficulties in auditory recognition of spoken phonemes cannot always be easily solved by speaking phonemes for longer amounts of time," they say.

And why is all this useful? Chong and friends say that if sign language is ever to be encoded and transmitted electronically, a better understanding of its information content will be essential for developing encoders and decoders that do the job. A worthy pursuit by any standards.
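
By way of illustration, an entropy coder such as Huffman coding (my example; the authors don't commit to any particular scheme) would exploit exactly these frequency measurements, assigning short bit strings to common handshapes and long ones to rare handshapes:

[code]
import heapq
from collections import Counter

def huffman_code(freqs):
    # Build a Huffman code (symbol -> bit string) from frequency counts.
    heap = [(n, i, {sym: ""}) for i, (sym, n) in enumerate(freqs.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        n1, _, c1 = heapq.heappop(heap)
        n2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + b for s, b in c1.items()}
        merged.update({s: "1" + b for s, b in c2.items()})
        heapq.heappush(heap, (n1 + n2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

# Same hypothetical handshape counts as before.
freqs = Counter({"B": 120, "5": 95, "1": 90, "A": 60, "S": 35})
code = huffman_code(freqs)
avg = sum(freqs[s] * len(code[s]) for s in freqs) / sum(freqs.values())
print(code)
print(f"Average code length: {avg:.2f} bits per handshape")
[/code]

The average code length comes out close to the entropy of the distribution, which is the sense in which knowing the entropy tells an engineer how compact such an encoder can be.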
