View Full Version : Telecommunication (symbol)


gonbe_88
Nov 19, 2009, 10:49 AM
In telecommunication, what exactly does the term "symbol" mean? It has something to do with digital modulation and so on. I have read all the books and still can't picture what it means... Perhaps I need a brief, simpler explanation...

Apart from that, what does the bit rate mean? How is it linked to digital pulses?

ebaines
Nov 19, 2009, 12:58 PM
A "symbol" is a piece of digital information that is transmitted over a transmission line. For example: in the simplest of digital systems, each bit of information (a 1 or a 0) is transmitted by the presence or absence of a voltage on the line. To transmit a "1" the transmitter sends a pulse down the line, and to transmit a zero the transmitter sends nothing. So each symbol is either a 0 or a 1. In more advanced systems the transmitter may use multiple voltage levels to pack more information into a single multi-level pulse. For example, a four-level system can pack the information for two bits into each symbol: it might send a -3 volt pulse to represent the bits 00, a -1 volt pulse to represent 01, a +1 volt pulse for 10, and a +3 volt pulse for 11. Thus with one pulse (one symbol) the system sends information about two bits of data. There are ever-more sophisticated schemes for encoding information into denser and denser symbols - employing not only differing voltage levels but also phase angles in each symbol. This allows more data to be packed into a given digital stream, so it takes fewer symbols to send the same amount of data.
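The four-level mapping described above can be sketched in a few lines of code. This is just an illustrative Python sketch of the idea (the dictionary of levels comes straight from the example; the function names are mine), not any particular standard's encoding:

```python
# Sketch of the 4-level mapping described above: each pair of bits
# becomes one voltage level, so one pulse (symbol) carries two bits.
LEVELS = {"00": -3, "01": -1, "10": +1, "11": +3}  # volts

def encode(bits):
    """Group a bit string into pairs and map each pair to a voltage."""
    return [LEVELS[bits[i:i + 2]] for i in range(0, len(bits), 2)]

def decode(voltages):
    """Invert the mapping to recover the original bit string."""
    inverse = {v: b for b, v in LEVELS.items()}
    return "".join(inverse[v] for v in voltages)

print(encode("0110"))           # four bits become just two symbols: [-1, 1]
print(decode(encode("0110")))   # round-trips back to "0110"
```

Note that four bits of data go down the line as only two pulses, which is the whole point of multi-level signalling.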

The bit rate is a means of saying how much data is transmitted per second. In a simple PCM system as outlined above, it takes one pulse per bit. In the four-level version I described, each pulse carries information for two bits, so the bit rate is actually twice the symbol rate. With more complicated systems it's possible for one pulse to carry 8 or even 16 bits of information.
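The relationship between the two rates is simple arithmetic: a symbol with N distinguishable levels carries log2(N) bits, so bit rate = symbol rate × log2(levels). A quick sketch (the 1 Msymbol/s figure is just an example value of mine):

```python
import math

def bit_rate(symbol_rate, levels):
    """Bit rate = symbol rate x bits per symbol, where a symbol with
    `levels` distinct values carries log2(levels) bits."""
    return symbol_rate * math.log2(levels)

# At 1 million symbols/second:
print(bit_rate(1_000_000, 2))    # 2 levels -> 1 bit/symbol  -> 1 Mbit/s
print(bit_rate(1_000_000, 4))    # 4 levels -> 2 bits/symbol -> 2 Mbit/s
print(bit_rate(1_000_000, 256))  # 256 levels -> 8 bits/symbol -> 8 Mbit/s
```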

Hope this helps.

KISS
Nov 19, 2009, 01:08 PM
A character set used to be composed of letters, numbers and symbols. For example, these are symbols: !@#$%^&*()_+=[{}]"':;

Early teletypes used modems that used frequency shift keying: four frequencies represented 0 and 1 in both directions. A character was composed of a start bit, 7 data bits, an optional parity bit and 1 or 2 stop bits. Thus the transmission rate was about 10 characters/second.
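The 10 characters/second figure falls out of the framing arithmetic. Assuming the classic 110-baud teletype line rate (an assumption on my part, not stated above), each character frame is 1 start + 7 data + 1 parity + 2 stop = 11 bits:

```python
def chars_per_second(baud, start=1, data=7, parity=1, stop=2):
    """Characters per second on an async serial line: the line rate
    divided by the total bits in one character frame."""
    bits_per_char = start + data + parity + stop
    return baud / bits_per_char

# Assumed 110-baud teletype: 11 bits per frame -> 10 characters/second
print(chars_per_second(110))  # 10.0
```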

We then graduated to 56 kbps modems, then 10baseT Ethernet, 100baseT and 1000baseT. 10baseT is 10 megabits per second over twisted pair.

The Nyquist theorem says that you have to sample at 2x the highest frequency of interest. Voice goes from about 20 to 4000 Hz. Each sample has a resolution, say 12 bits including sign. So to digitize voice in this scenario, you would need 2 * 4000 samples/second * 12 bits/sample = 96,000 bits/second.
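That calculation can be written out directly. A minimal sketch using the figures above (4000 Hz voice band, 12-bit samples):

```python
def pcm_bit_rate(max_freq_hz, bits_per_sample):
    """Uncompressed PCM bit rate: Nyquist says sample at 2x the highest
    frequency of interest; each sample then carries a fixed bit count."""
    sample_rate = 2 * max_freq_hz          # samples per second
    return sample_rate * bits_per_sample   # bits per second

# Voice band up to 4000 Hz, 12 bits per sample -> 96,000 bits/second
print(pcm_bit_rate(4000, 12))  # 96000
```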

This is without protocol overhead or compression. Ethernet wastes about 20% of its bandwidth in overhead.

Is this what you are after?

gonbe_88
Nov 19, 2009, 06:16 PM
Erm.. I appreciate both parties (ebaines and KISS). I've got a clearer picture of it now. But what about probabilities? I have studied digital pulses and so on, but how is that connected with probabilities? Why is digital communication so different from the analogue type?

In my subject "Communication system", I have learnt Fourier series and transforms. After that, I dealt with probabilities. Later on, they apply probability to the digital modulation things...

But in the previous subject, "Analogue Communication", I studied AM, FM and PM, which I can understand. It just deals with shifting the signal to a higher frequency for better modulation.

However, I soon found that digital modulation - ASK, FSK, PSK - is a totally different way of doing modulation..

The main problem is that I don't understand what probability has got to do with digital modulation.

KISS
Nov 19, 2009, 06:36 PM
Digital can sometimes be thought of as noise. Take a look at your garage door opener remote. It says something like: the unit may generate interference, and must accept any interference received, even if it causes undesired operation.

Take a look at spread spectrum modulation methods where the frequency of transmission is constantly changing.

What about sun-spot activity?

In other words, you cannot guarantee the transmission is 100% reliable, just like you cannot guarantee that a magnetic tape or disk is written 100% reliably. You have to use CRCs, or cyclic redundancy checks, which are error-detecting codes; and we used to use parity as a way of flagging digital errors which we could not correct for.
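The parity idea mentioned above is tiny enough to sketch. With even parity, one extra bit makes the count of 1s even; a single flipped bit then fails the check, though you can detect the error but not fix it:

```python
def parity_bit(bits):
    """Even parity: choose the extra bit so the total count of 1s is even."""
    return sum(bits) % 2

def passes_check(bits_with_parity):
    """True if the received word still has an even number of 1s."""
    return sum(bits_with_parity) % 2 == 0

word = [1, 0, 1, 1, 0, 0, 1]          # 7 data bits
sent = word + [parity_bit(word)]       # append the parity bit
print(passes_check(sent))              # True: arrived intact

corrupted = sent[:]
corrupted[2] ^= 1                      # noise flips one bit in transit
print(passes_check(corrupted))         # False: error detected, not correctable
```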

ebaines
Nov 20, 2009, 11:01 AM
I'm not sure how probabilities come into play in a discussion of digital transmission, unless you're trying to determine how bit errors can creep in if the signal-to-noise ratio on the transmission line decreases. There are some pretty sophisticated techniques for calculating expected error rates based on noise levels - as noise increases (or signal level drops) there is a chance that the receiver will mis-read a symbol. For example, it may misinterpret a noise spike as a valid pulse. The sources of noise could be external to the channel (electromagnetic interference or alien crosstalk from other digital lines) or internal (crosstalk between pairs of the channel). It's important to keep the signal-to-noise ratio above a certain threshold to minimize the chance of such errors. Engineers therefore design their systems to have a certain signal-to-noise ratio, which then allows them to meet an expected bit error rate quality of service. There are techniques for estimating the bit error rate for a particular encoding technique given the signal-to-noise ratio at the receiver -- is that what you're getting at?
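To make this concrete: the textbook case is binary antipodal signalling (BPSK) in white Gaussian noise, where the expected bit error rate is Pb = Q(sqrt(2 * Eb/N0)), Q being the Gaussian tail probability. A rough sketch (this is the standard textbook result, not something taken from the posts above):

```python
import math

def q_function(x):
    """Gaussian tail probability Q(x) = P(standard normal > x)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def bpsk_ber(ebn0_db):
    """Theoretical bit error rate for binary antipodal signalling in
    additive white Gaussian noise: Pb = Q(sqrt(2 * Eb/N0))."""
    ebn0 = 10 ** (ebn0_db / 10)   # convert dB to a linear ratio
    return q_function(math.sqrt(2 * ebn0))

# BER falls off very steeply as the signal-to-noise ratio improves
for snr_db in (0, 5, 10):
    print(f"Eb/N0 = {snr_db:2d} dB -> BER ~ {bpsk_ber(snr_db):.2e}")
```

This is where the probability course connects to modulation: the noise is a random process, so whether a given symbol is misread is a probabilistic event, and the error rate is its probability.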

One other point - by their very nature digital systems can only provide a close representation of the given input signal, since they must digitize the input signal's amplitude, and as KISS pointed out you lose all information about frequencies that are higher than 1/2 the sampling rate.
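You can see that information loss directly: a tone above half the sampling rate produces exactly the same samples as a lower "alias" tone, so the two are indistinguishable after sampling. A small sketch (the 8 kHz rate and tone frequencies are my own example values):

```python
import math

def sample(freq_hz, rate_hz, n=8):
    """Take n samples of a cosine of the given frequency."""
    return [math.cos(2 * math.pi * freq_hz * k / rate_hz) for k in range(n)]

rate = 8000                 # samples/second: anything above 4000 Hz is lost
low = sample(1000, rate)    # a 1 kHz tone, safely below Nyquist
high = sample(7000, rate)   # 7 kHz = 8000 - 1000: aliases onto 1 kHz

# The two sample streams are identical, so the receiver cannot tell
# which tone was actually present.
print(all(abs(a - b) < 1e-9 for a, b in zip(low, high)))  # True
```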