RF fundamentals, how does bandwidth affect throughput

  • #1
C. Darwin
I'm trying to get a basic understanding of RF DAC.

If I have a DAC that does 100 MS/s with an 8-bit resolution, this translates to an 800 Mbps throughput. Or is this too simple?

Now, how does the bandwidth of a channel affect this? Say I have two 5 MHz channels. With two channels there should be 2x800 = 1600 Mbps. But does this change with two 10 MHz channels instead of two 5 MHz channels?
 
  • #2
C. Darwin said:
I'm trying to get a basic understanding of RF DAC.

If I have a DAC that does 100 MS/s with an 8-bit resolution, this translates to an 800 Mbps throughput. Or is this too simple?
Sounds reasonable.

Now, how does the bandwidth of a channel affect this? Say I have two 5 MHz channels. With two channels there should be 2x800 = 1600 Mbps. But does this change with two 10 MHz channels instead of two 5 MHz channels?

If you have channels that are only 5 MHz BW, you get roughly 5 megabits per second, not more than 10 megabits per second. (I am not sure how to map a bit stream onto frequency, because I am not sure you can count two bits per cycle just because each cycle has a positive and a negative half!) But in any case, you are not going to get more than 10 megabits per second no matter what. With two channels, you can get at most twice the maximum limit of each channel. With the DAC you are talking about apples and oranges.
 
  • #3
Hmm, I don't really follow. Keeping the 100 MS/s and 8-bit resolution, let's say I have a channel from 110 MHz to 120 MHz (centered at 115 MHz). How does increasing the channel width affect the throughput if the resolution is still 8 bits?
 
  • #4
C. Darwin said:
Hmm, I don't really follow. Keeping the 100 MS/s and 8-bit resolution, let's say I have a channel from 110 MHz to 120 MHz (centered at 115 MHz). How does increasing the channel width affect the throughput if the resolution is still 8 bits?

The bandwidth does not imply the bit rate, or vice versa; that also depends on the coding scheme, which you have not specified.

If you have a channel from 110 MHz to 120 MHz, a 100 MS/s DAC may be overkill (assuming, of course, you have a TX mixer). You really only need a 25-40 MS/s DAC, depending on your reconstruction filter.

All things being equal, increasing the channel width will increase the throughput because it will enable more complex modulation. It isn't a simple question: you have to change how you're coding your bits to take advantage of the extra bandwidth. It doesn't happen automatically.
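As a rough sketch of that point, assuming ideal Nyquist signalling (symbol rate equal to the channel width, no coding overhead; the modulation orders here are just examples):

[code]
# Rough upper bound: symbol rate ~ channel width (ideal Nyquist),
# with log2(M) bits per symbol for an M-point constellation.
from math import log2

def throughput_bps(channel_width_hz, constellation_points):
    """Upper-bound bit rate for a channel width and modulation order."""
    return channel_width_hz * log2(constellation_points)

print(throughput_bps(5e6, 4))    # QPSK in 5 MHz    -> 10 Mbps
print(throughput_bps(5e6, 16))   # 16-QAM in 5 MHz  -> 20 Mbps
print(throughput_bps(10e6, 16))  # 16-QAM in 10 MHz -> 40 Mbps
[/code]

A wider channel raises the symbol rate; denser modulation raises the bits per symbol. Either way, the coding has to change to realize the gain.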
 
  • #5
C. Darwin said:
Hmm, I don't really follow. Keeping the 100 MS/s and 8-bit resolution, let's say I have a channel from 110 MHz to 120 MHz (centered at 115 MHz). How does increasing the channel width affect the throughput if the resolution is still 8 bits?

I think you are mixing things up. You have an 8-bit DAC capable of running at 100 MHz, so you are generating 8 × 100 MHz worth of data. This means you can generate 100 mega data points per second, each data point with 8 bits of resolution. Nothing more.

But then you talk about two data channels (and I don't know exactly what you mean by that), each of only 5 MHz BW. With these two channels, you can only send 2 × 5 Mb (or say 2 × 10 Mb maximum) worth of information per second. The channels and the DAC are two different, unrelated things.

Is this something new, or new terminology, since the days when I designed data acquisition systems for LeCroy? What are you trying to do? Give us more specific information first.
 
  • #6
C. Darwin said:
I'm trying to get a basic understanding of RF DAC.

If I have a DAC that does 100 MS/s with an 8-bit resolution, this translates to an 800 Mbps throughput. Or is this too simple?

Now, how does the bandwidth of a channel affect this? Say I have two 5 MHz channels. With two channels there should be 2x800 = 1600 Mbps. But does this change with two 10 MHz channels instead of two 5 MHz channels?

Bandwidth has a number of definitions, and we need to be a little careful how we use it. A signal has a bandwidth, which is the difference between the frequencies of the highest and lowest sine waves that compose that signal. A channel has a channel width, and in order for a signal to remain undistorted, the bandwidth of the signal must fit within the channel width.

In practice, the signal must be filtered so it doesn't intrude on adjacent channels. The filters don't have vertical edges but are sloped, so the attenuation of the edges of the signal must begin inside the channel width. This means that the bandwidth must be somewhat narrower than the channel width.

In order to find the channel width needed, you need to find the bandwidth of your signal. It is not as simple as saying it must be 800 MHz because you're sampling at 100 MS/s at 8-bit resolution. Also, there are many ways of compressing the data so you don't have to send so many bits. A simple one: if the data from one sample to the next varies by less than the full 8 bits, just transmit the difference between the samples.

The bandwidth of a signal is affected by many factors so it is generally measured with a spectrum analyzer rather than calculated.
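A minimal sketch of that delta-compression idea (the signed 4-bit difference field and the escape marker are made up purely for illustration):

[code]
# Toy delta encoder: send a short code for the difference between
# consecutive 8-bit samples when it fits in a signed 4-bit field,
# otherwise send an escape marker plus the full sample.
def delta_encode(samples):
    out, prev = [], 0
    for s in samples:
        d = s - prev
        if -8 <= d <= 7:               # fits in 4 signed bits
            out.append(('delta', d))   # short code on the wire
        else:
            out.append(('full', s))    # escape + full 8-bit sample
        prev = s
    return out

print(delta_encode([100, 102, 101, 200, 198]))
# [('full', 100), ('delta', 2), ('delta', -1), ('full', 200), ('delta', -2)]
[/code]

If the samples usually change slowly, most of the stream goes out as short codes, and the required bit rate (and hence bandwidth) drops.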
 
  • #7
There's a basic fact that needs to be considered here. The form of modulation system that you use will determine the number of b/s for the given analogue (do you mean 3 dB?) bandwidth and a given signal-to-noise ratio. Putting it crudely, if you have loads of signal-to-noise ratio, your analogue signal can carry any number of signal levels, so you are not restricted to binary signalling; four levels will double the bit rate. (In fact, who ever uses simple binary these days?) The choice should be between modulation and coding systems together, not just one of them, and, as mentioned earlier, the filtering can be important if other users are involved.
Whilst it's true that a simple binary system is easiest to design, as soon as you want to squeeze a useful amount of data through any channel, it's worthwhile considering something more sophisticated. It's what digital TV, radio and mobile comms all do.
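As an illustration of the four-level idea, a sketch of Gray-coded 4-PAM (the level values are arbitrary):

[code]
# Two bits per symbol using four amplitude levels (4-PAM),
# Gray-coded so adjacent levels differ in only one bit.
LEVELS = {(0, 0): -3, (0, 1): -1, (1, 1): +1, (1, 0): +3}

def bits_to_symbols(bits):
    pairs = zip(bits[0::2], bits[1::2])
    return [LEVELS[p] for p in pairs]

# 8 bits -> 4 symbols: half the symbol rate (and roughly half the
# occupied bandwidth) of binary signalling at the same bit rate.
print(bits_to_symbols([0, 0, 0, 1, 1, 1, 1, 0]))  # [-3, -1, 1, 3]
[/code]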
 
  • #8
What is a channel? This is new to me! :confused:
 
  • #9
yungman said:
What is a channel? This is new to me! :confused:

Indeed, what is a channel? If a channel is described in terms of Hz, then we are talking about its spectral occupancy (in some way). In conventional terms that means the difference between the frequencies where the mean power is, say, half power. But you'd also need to give some idea of the signal-to-interference/noise ratio before the simple 'bandwidth' figure would be useful.
If a channel is described in terms of b/s, then that's what it is. You don't need to specify any more about it until noise and interference are considered, and even then you would only need to specify the data rate and some measure of error rate/statistics, and so on.

It's a shame, imo, that digital capacity is referred to as 'bandwidth' and not just 'data rate', because the latter would put things better into perspective. I'd bet that the first time the term was misused it wasn't by an engineer! Smacks of a salesperson to me.
But I'm really only repeating what's been written earlier.
 
  • #10
I remember 10 years ago, when I was working on SONET OC48 and OC192, we put a pseudo-random signal through the transceiver/link combo and displayed it on an ultra-fast scope. If you see the "eye" of the eye pattern open, you call it good! Never worry about the BW; it's the bit rate and how clear the eye is! When the eye is drooping, that's the end of the road.
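In the same spirit, a toy version of that test: a PRBS-7 pattern driven through a crude one-pole channel, then folded into one unit interval (the channel model and oversampling factor are purely illustrative):

[code]
# PRBS-7 bit stream (x^7 + x^6 + 1), a standard test polynomial.
def prbs7(n):
    state, bits = 0x7F, []
    for _ in range(n):
        newbit = ((state >> 6) ^ (state >> 5)) & 1
        state = ((state << 1) | newbit) & 0x7F
        bits.append(newbit)
    return bits

OVS = 8                              # samples per unit interval
wave, y = [], 0.0
for b in prbs7(127):
    target = 1.0 if b else -1.0
    for _ in range(OVS):             # crude one-pole (RC-like) channel
        y += 0.5 * (target - y)
        wave.append(y)

# Fold into one UI: column i holds every trace's sample at phase i.
eye = [wave[i::OVS] for i in range(OVS)]
print([(round(min(c), 2), round(max(c), 2)) for c in eye])
[/code]

Plotting all the folded traces on top of one another gives the familiar eye diagram; the printout just shows the signal excursion at each phase.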

So physically, is a channel one single connection? Like a serial data link?
 
  • #11
yungman said:
I remember 10 years ago, when I was working on SONET OC48 and OC192, we put a pseudo-random signal through the transceiver/link combo and displayed it on an ultra-fast scope. If you see the "eye" of the eye pattern open, you call it good! Never worry about the BW; it's the bit rate and how clear the eye is! When the eye is drooping, that's the end of the road.

So physically, is a channel one single connection? Like a serial data link?

You are making this all too simple, I'm afraid. In conditions where there is a significant (but usable) error rate, the eye pattern may be difficult to assess. The eye pattern is all about bandwidth. Clearly, you need your transmit and receive filters to produce a good eye pattern in a simple slicing circuit, but I think you'll find that modern systems have gone beyond that.
Also, when you say "never mind about the BW", you are forgetting the problem of outgoing interference and about being a good neighbour.

If a "channel is one single connection", then one should describe it in terms of the data (information) it carries, which is not 'bandwidth'.

Btw, what modulation system were you using, and what coding? If you look for the equivalent of an eye pattern in PAM systems, it isn't necessarily so clear.
 
  • #12
I mainly dealt with the very front-end physical link; I didn't even work with the modulation scheme. I was not in the field long enough: I got in for a few months and got out, because I found myself not interested in the telecom field. My interest is in RF, and I don't think I got a lot of it there, even though they were dealing with 2.5 to 10 GHz at the time. All I saw was buying the optical transceivers; then the job is to interface to the transceiver. Laying out the differential pairs is of utmost importance. I just talked to an engineer at Cisco. They are doing 15 GHz and higher, and even a via on the PCB is a big deal: they have to cut out part of the copper of the via feedthrough so they can have a perfect impedance match going through the via, checked by TDR. Jobs are very compartmentalized, each of us doing one particular thing. But again, I was barely scratching the surface of that field, and maybe my comments just reflect my limited experience.

The eye pattern was the single most important test for us. All the TDR work boiled down to getting a good eye pattern; that rings out impedance mismatch, bandwidth, and rise time already. Get that done first and then go into more detail. But again, I was in it for only 6 months, so I am no expert on this.
 
  • #13
OK, so you would find bandwidth (transmitter and receiver) very relevant. Those two are measured in Hz, not bits per second. You would also know that the occupied bandwidth needed for a given demodulated signal-to-noise ratio depends a lot on the form of modulation. For instance, wideband FM gives you a massive improvement in SNR but a low carrier-to-noise-ratio threshold. It's horses for courses.
I just want to avoid confusion. (And to inject some of my own haha)
 
  • #14
sophiecentaur said:
OK, so you would find bandwidth (transmitter and receiver) very relevant. Those two are measured in Hz, not bits per second. You would also know that the occupied bandwidth needed for a given demodulated signal-to-noise ratio depends a lot on the form of modulation. For instance, wideband FM gives you a massive improvement in SNR but a low carrier-to-noise-ratio threshold. It's horses for courses.
I just want to avoid confusion. (And to inject some of my own haha)

In a way, yes. We worry a lot about rise time, which shows in the eye, and reflection, which shows in how clean the eye is. Rise time is directly related to the upper frequency limit... BW. We just never talked about BW... at least I didn't! At very high bit rates you don't get a square eye... you don't get square pulses. They look more sine than anything. So we look for the link being able to swing the full amplitude (eye open)... yes, BW! I guess I just don't think of it that way!

Ha ha! This is really the first time I have related rise time to BW in my mind. I designed all sorts of sub-nanosecond rise-time HV pulsers, and I was really looking at rise time and settling time, never really worrying about BW even though they are directly related. We/I just never looked at it that way. You really don't, because you can have ringing even when you have the bandwidth and frequency response, and that will absolutely screw everything up. So BW is really not the first thing that comes to mind. All I worry about is whether I can get there on time (rise time), and then whether I can settle to the right voltage level on time (settling time) so I can start acquiring data.
 
  • #15
sophiecentaur said:
OK, so you would find bandwidth (transmitter and receiver) very relevant. Those two are measured in Hz, not bits per second. You would also know that the occupied bandwidth needed for a given demodulated signal-to-noise ratio depends a lot on the form of modulation. For instance, wideband FM gives you a massive improvement in SNR but a low carrier-to-noise-ratio threshold. It's horses for courses.
I just want to avoid confusion. (And to inject some of my own haha)

Actually, you caught me by surprise; I thought about it in the shower. There is a very good reason we don't talk about BW in a serial data link. As I explained, it's the rise time and settling time that are of utmost importance. You can have a very high-frequency link, but if it is underdamped you get ringing, and that is deadly in a communication link. These are digital pulses, not analog. High-speed interfaces like LVDS or ECL are characterized by rise time and settling time. Impedance matching and disturbances in the middle of the link affect the settling time. For digital data, we only worry about how fast the data settle to the correct level for sampling, not about the frequency response. So specifying the bit rate implies that the data can settle within the bit interval to be sampled.

Even when I was working for LeCroy, all we worried about was how fast the output settled to the required accuracy, not the frequency response. A DAC is kind of like a varying DC source, in the sense that it slews to a new level and settles to the required accuracy within the given specification. Analog bandwidth is secondary.

Also, frequency response and slew rate are totally different things, as is very clear in any op-amp spec. You can have an op-amp with good BW but a slow slew rate, so that amp only has good frequency performance for small signals. If the signal amplitude goes up, you run into the slew-rate limit way before the frequency limit. In this kind of digital link, it's the slew rate and rise time that dominate.

Back to the eye pattern: it can tell a lot at a glance. If the link is frequency-limited (more likely slew-rate limited), the eye closes in a smooth way, like a sine wave getting smaller. But if you have reflection, you see a kink in the eye and can tell right away that you have an impedance disruption. Ringing or reflection will also show up.

In conclusion, rise time and settling time are related to BW in only a limited sense; it's the rise time and settling time that matter, not BW.
 
  • #16
yungman said:
In conclusion, rise time and settling time are related to BW in only a limited sense; it's the rise time and settling time that matter, not BW.

I would say rise time and settling time are intimately related to BW, but you do need to be aware of slew rate.

Were you working on interfacing LVDS ICs and optical transceivers at the system level? I ask because I used to work on front ends for 10 Gb optical Ethernet chips, and let me tell you, we were OBSESSED with bandwidth. The design was all about squeezing every drop of bandwidth we could from the process, with techniques like Cherry-Hooper amplifiers in front and parallel ADCs following. I also have LVDS interfaces on most of the chips I work on, and bandwidth is key, especially in making sure they are stable.

My thinking is you are still concerned with bandwidth, but kind of by proxy.
 
  • #17
Time and frequency domains are appropriate in different contexts. For the best channel performance you need to address both aspects. The effect of channel noise and interference is largely dependent on the receiver BW, and the general effect of transmissions on the surroundings is conveniently characterised by the transmitter bandwidth (which corresponds to the channel bandwidth).
The two BWs together will affect the pulse response, which can be assessed with an eye pattern. But inter-symbol interference (which gives a dreadful eye pattern) can be dealt with by further baseband filtering (i.e. RF and baseband characteristics both count). It's all this, plus the effects of the environment, that gives you your usable data rate.
 
  • #18
C. Darwin said:
I'm trying to get a basic understanding of RF DAC.

If I have a DAC that does 100 MS/s with an 8-bit resolution, this translates to an 800 Mbps throughput. Or is this too simple?

Strictly speaking, it's not too simple: your bandwidth is 50 MHz and your signal-to-noise ratio is 2^8 to 1. But the ideal formula depicts the top theoretical limit to throughput.

The theoretical limit to the channel capacity (in bits per unit time) of a communications system is:

[tex] C = B \ \log_2 \left( 1+\frac{S}{N} \right) [/tex]

or, if the signal-to-noise ratio is not constant with frequency,

[tex] C = \int_0^B \log_2 \left( 1+\frac{S(f)}{N(f)} \right) \, df [/tex]

What this means for your DAC (followed by a matching ADC) is that the noise must be smaller than what would cause the number read by the ADC to differ from what was output by the DAC. If the noise exceeds that, you will have bit errors and your channel capacity is lower.
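Plugging in the numbers above (reading the 2^8:1 signal-to-noise figure as an amplitude ratio, i.e. a power ratio of 2^16; that reading is an assumption, but it recovers the 800 Mbps figure):

[code]
# Shannon capacity check: B = 50 MHz (Nyquist bandwidth of a
# 100 MS/s DAC), S/N treated as a power ratio of (2^8)^2.
from math import log2

B = 50e6
snr_power = (2 ** 8) ** 2

C = B * log2(1 + snr_power)
print(f"{C / 1e6:.0f} Mbps")   # ~800 Mbps, matching 100 MS/s x 8 bits
[/code]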
 
  • #19
yungman said:
These are digital pulses, not analog.
I have to pick you up on this statement (way back) that I just (re-)read. All pulses are analogue signals; they carry digital information. The shape of a pulse is a result of what the filtering has done to the initial digital values, which can be impulses or 'boxcar' (with near-integer but actually analogue values). The eye pattern is useful as it gives a view of how the symbols can interfere with each other, depending on the past and future values of the original samples. This is all analogue and, until your final decision circuit in the receiver, is a time-varying, infinite-level signal. The decision circuit will produce a set of digital, quantised and re-timed values to which your receiver is 'committed' and which it processes digitally.
The distinction between digital and analogue regimes is often very blurred in people's minds. One day, circuitry may operate without those distinctions (as happens in our brains), but these days there is a real difference in electronics thinking. Digital implies quantisation, binary or n-ary.
 
  • #20
C. Darwin said:
I'm trying to get a basic understanding of RF DAC.

If I have a DAC that does 100 MS/s with an 8-bit resolution, this translates to an 800 Mbps throughput. Or is this too simple?
Have we been chasing the wrong hare here?

Are you talking in terms of generating (synthesising) signals to fit into a 5 MHz channel, using the above DAC? What I have been writing is correct (afaik) but is not very relevant if that is the context.
If your DAC produces a set of samples at 100 MS/s, then all you need to do is low-pass filter in order to produce your wanted signal. There will, of course, be quantisation noise (error/distortion), and there are formulae which will tell you the equivalent level of white noise, given the number of levels and the sample rate. But I can't quite see how this interpretation actually ties in with the OP.
Could you enlighten me, please? The thread has been interesting enough, but it needs closure, I think.
 
  • #21
carlgrace said:
I would say rise time and settling time are intimately related to BW, but you do need to be aware of slew rate.

Were you working on interfacing LVDS ICs and optical transceivers at the system level? I ask because I used to work on front ends for 10 Gb optical Ethernet chips, and let me tell you, we were OBSESSED with bandwidth. The design was all about squeezing every drop of bandwidth we could from the process, with techniques like Cherry-Hooper amplifiers in front and parallel ADCs following. I also have LVDS interfaces on most of the chips I work on, and bandwidth is key, especially in making sure they are stable.

My thinking is you are still concerned with bandwidth, but kind of by proxy.

Yes, they are intimately related, but we look at rise and settling time much more than BW. As I explained, BW doesn't imply settling time, but rise time and settling time imply BW.

BW is the minimum requirement: if you don't have the BW, you don't even talk about rise time and settling time. BUT it is the rise time and settling time that matter.

As I said, BW has nothing to do with reflection; you can have high BW and still have reflection that destroys the waveform. You work with 10 Gb bit streams; you should know how important reflection is, how important impedance matching is. You can have the bandwidth, but if you have a discontinuity in the middle of the line, you create a reflection that bounces back and affects the level of the signal. That has nothing to do with BW, but everything to do with settling time. You can pass a signal at 20 GHz, but if you have reflections, you are going to have problems passing good digital data through.

In analog RF, you are not required to have the source and load terminated at all frequencies; we match perfectly at one frequency. BUT for a high-speed digital link, you need to match wideband, unlike analog RF. It requires more, much more. That is because of settling time, not BW.

If you work on those LVDS connections, you should know that if you have any reflection, you are done.
 
  • #22
sophiecentaur said:
I have to pick you up on this statement (way back) that I just (re-)read. All pulses are analogue signals; they carry digital information. The shape of a pulse is a result of what the filtering has done to the initial digital values, which can be impulses or 'boxcar' (with near-integer but actually analogue values). The eye pattern is useful as it gives a view of how the symbols can interfere with each other, depending on the past and future values of the original samples. This is all analogue and, until your final decision circuit in the receiver, is a time-varying, infinite-level signal. The decision circuit will produce a set of digital, quantised and re-timed values to which your receiver is 'committed' and which it processes digitally.
The distinction between digital and analogue regimes is often very blurred in people's minds. One day, circuitry may operate without those distinctions (as happens in our brains), but these days there is a real difference in electronics thinking. Digital implies quantisation, binary or n-ary.

Agreed; nowadays digital info is multi-level and is analog. As I explained in my previous post, rise time and settling time imply BW. I just said that rise time and settling time are more important than BW alone. You can have all the BW, but if you have reflection, the signal will be destroyed, particularly with multi-level digital signals.

BW is the minimum requirement: if you don't have the BW, you don't even need to talk about rise time or settling time. Bandwidth is related to rise time roughly as

[tex] BW \approx \frac{0.35}{t_r} \;\;\;\; \text{where } t_r \text{ is the 10-90\% rise time} [/tex]

(for a single-pole response). So rise time already implies frequencies.

Actually, if you think about it, ultimately it is the settling time that is the most important. Bottom line: can you get there on time? As long as you can reach the required accuracy within a given amount of time, so the level can be sampled accurately, it's all good. Even if you have the BW and frequency response, if the circuit is underdamped, like a lot of reactive circuits, it keeps ringing and never settles, even though you got there really quickly thanks to the excess BW and fast rise time.
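For reference, the 0.35 factor follows from a single-pole (RC) response with time constant τ, whose step response rises as 1 - e^(-t/τ):

[tex] t_r = t_{90\%} - t_{10\%} = \tau \ln 9 \approx 2.2\,\tau, \qquad BW = \frac{1}{2\pi\tau} \quad\Rightarrow\quad BW \cdot t_r = \frac{\ln 9}{2\pi} \approx 0.35 [/tex]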
 
  • #23
yungman said:
Yes, they are intimately related, but we look at rise and settling time much more than BW. As I explained, BW doesn't imply settling time, but rise time and settling time imply BW.

BW is the minimum requirement: if you don't have the BW, you don't even talk about rise time and settling time. BUT it is the rise time and settling time that matter.

As I said, BW has nothing to do with reflection; you can have high BW and still have reflection that destroys the waveform. You work with 10 Gb bit streams; you should know how important reflection is, how important impedance matching is. You can have the bandwidth, but if you have a discontinuity in the middle of the line, you create a reflection that bounces back and affects the level of the signal. That has nothing to do with BW, but everything to do with settling time. You can pass a signal at 20 GHz, but if you have reflections, you are going to have problems passing good digital data through.

Of course impedance matching is important, but I think that's another subject entirely. Your point that in different applications you focus more on either time-domain or frequency-domain measures is taken.

yungman said:
In analog RF, you are not required to have the source and load terminated at all frequencies; we match perfectly at one frequency. BUT for a high-speed digital link, you need to match wideband, unlike analog RF. It requires more, much more. That is because of settling time, not BW.

If you work on those LVDS connections, you should know that if you have any reflection, you are done.

The trend in analog RF is toward wideband systems, so we don't have the luxury of narrowband matching so much anymore.

Wideband matching is pretty easy in LVDS because it is such low frequency that you can use resistors for matching, and they are super wideband.
 
  • #24
carlgrace said:
Of course impedance matching is important, but I think that's another subject entirely. Your point that in different applications you focus more on either time-domain or frequency-domain measures is taken.
It has everything to do with the data link we are talking about. You have the whole transmission-line interface to deal with in a link. Even in the transceiver, the input and output are not ideal at very high frequency; you can't take for granted that the output and input remain ideal impedances. Just look at the S-parameters of any transistor: they are nowhere near ideal, where the output would be low impedance or 50 Ω and the input high impedance or 50 Ω. This is just the nature of the game; that is where the whole world of RF comes into play. If you don't have perfect impedance, you are going to have reflection on the line, and all the bandwidth in the world becomes irrelevant, because you cannot settle in time to be sampled.

Back to the original point about BW versus rise time and settling time: no matter what the modulation scheme, unless something totally new has appeared since I left the electronics field, it is all about settling to a stable level so the receiver comparator can reliably sample that level. That requires a transmission line without reflections and transceiver circuits without ringing or impedance mismatches that can change the level. BW alone doesn't guarantee any of this.
carlgrace said:
The trend in analog RF is toward wideband systems, so we don't have the luxury of narrowband matching so much anymore.
I don't think so; all the microwave in cell phones, 802..., is narrowband. That is totally different from what you are working on.
carlgrace said:
Wideband matching is pretty easy in LVDS because it is such low frequency that you can use resistors for matching, and they are super wideband.
LVDS and ECL are limited in frequency; in my day, I think they ran out of steam at about 2.5 GHz, and matching those is a piece of cake. For short distances, like a few inches, you can even get away with FR4.

In fact, realizing that working on the front end of an ultra-fast link is only a very limited facet of RF is why I decided to take a quick exit from the telecom field. From the little I've seen, unless you design the inside of the transceiver, you pretty much end up dealing with the transmission line, be it PCB or fiber. You buy the modules and specialized chipsets from companies and just work on the interface; it is even more layout than anything else. As I have described many times by now, the transmission lines are much more critical than in true RF radio work, because you cannot have reflections bouncing around. You have to run TDR tests that normal RF doesn't usually require. But I would have missed the whole world of challenging RF design.

As I said, I just talked with an engineer from Cisco. They are working on a differential via pair at 15 GHz, and all the effort goes into shaving off part of the round pads of the via pair, because they have to have perfect, uninterrupted impedance along the line, and then they worry about using very low-loss materials, etc. This is not what I want to do. After that, the digital programming of the physical and link layers is just like any other digital interface! I got out of that fast; I put in my 6 months and got out.

BTW, what is the way to design a device with high input impedance and low output impedance in a GHz circuit? All transistors have impedances that vary with frequency, as indicated in the S-parameters. How do people manage to design wideband inputs and outputs? I can see that you can use very small transistors at the input so the parasitic impedance is high enough that it disappears in parallel with 50 Ω. But how do you make the output behave?

Remember, I am no expert in telecom, so what I said is only my experience and impression. I am mainly speaking as an RF engineer and a data-acquisition engineer who had to acquire reliable data at a given bit rate.
 
  • #25
yungman said:
In fact, realizing that working on the front end of an ultra-fast link is only a very limited facet of RF is why I decided to take a quick exit from the telecom field. From the little I've seen, unless you design the inside of the transceiver, you pretty much end up dealing with the transmission line, be it PCB or fiber. You buy the modules and specialized chipsets from companies and just work on the interface; it is even more layout than anything else. As I have described many times by now, the transmission lines are much more critical than in true RF radio work, because you cannot have reflections bouncing around. You have to run TDR tests that normal RF doesn't usually require. But I would have missed the whole world of challenging RF design.
I worked on the internals of the integrated circuit, and only on the RX side, so our experiences were very different! It doesn't sound like what you were doing was very exciting.


yungman said:
BTW, what is the way to design a device with high input impedance and low output impedance in a GHz circuit? All transistors have impedances that vary with frequency, as indicated in the S-parameters. How do people manage to design wideband inputs and outputs? I can see that you can use very small transistors at the input so the parasitic impedance is high enough that it disappears in parallel with 50 Ω. But how do you make the output behave?

Remember, I am no expert in telecom, so what I said is only my experience and impression. I am mainly speaking as an RF engineer and a data-acquisition engineer who had to acquire reliable data at a given bit rate.

It depends on the protocol. In fiber optics you typically want low input impedance, so you design a transimpedance-based front end. The output is some standard protocol like XFI or XAUI. For super-fast links you generally use equalization. At first, transmit-side EQ schemes such as pre-emphasis were used alone; now the trend is toward more and more adaptive equalization on the RX side. If you know the channel and can keep the link short, you can deal with the expected imperfections through equalization.
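A toy sketch of the transmit pre-emphasis idea (a 2-tap feed-forward equalizer; the tap weight is illustrative, not from any real standard):

[code]
# 2-tap transmit pre-emphasis: subtract a fraction of the previous
# symbol so transitions get full swing while repeated symbols are
# de-emphasized, pre-compensating a channel that rolls off at HF.
ALPHA = 0.25   # post-cursor tap weight (illustrative)

def preemphasize(symbols):
    out, prev = [], 0.0
    for s in symbols:
        out.append(s - ALPHA * prev)
        prev = s
    return out

print(preemphasize([+1, +1, -1, -1, +1]))
# [1.0, 0.75, -1.25, -0.75, 1.25]  -- transitions boosted
[/code]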

About your earlier comment on RF being narrowband: I said the *trend* is to go wideband. This is chiefly to reduce system cost and increase the level of integration. Narrowband systems typically use a physical inductor to get resonance, and those are expensive (in either on-chip real estate or board space). I worked on an 802.11b/g transceiver in 2003 that used standard narrowband techniques (an LNA with inductive source degeneration). A few years later, I was working on a wideband RF IC that was multi-standard. The idea is that if you have a wideband front end, you can implement the different narrowband standards using programmable digital filters and greatly lower the overall cost.
 
  • #26
carlgrace said:
It depends on the protocol. In fiber optics you typically want low input impedance, so you design a transimpedance-based front end. The output is some standard protocol like XFI or XAUI. For super-fast links you generally use equalization. At first, transmit-side EQ schemes such as pre-emphasis were used alone; now the trend is toward more and more adaptive equalization on the RX side. If you know the channel and can keep the link short, you can deal with the expected imperfections through equalization.

About your earlier comment on RF being narrowband: I said the *trend* is to go wideband. This is chiefly to reduce system cost and increase the level of integration. Narrowband systems typically use a physical inductor to get resonance, and those are expensive (in either on-chip real estate or board space). I worked on an 802.11b/g transceiver in 2003 that used standard narrowband techniques (an LNA with inductive source degeneration). A few years later, I was working on a wideband RF IC that was multi-standard. The idea is that if you have a wideband front end, you can implement the different narrowband standards using programmable digital filters and greatly lower the overall cost.

Hi Carl,

Thanks for the detailed response. I just had my stomach scoped and I am still drugged, so I am not going to say too much for now. I left the field in 2006, so I guess I missed the latest wideband stuff. I designed a lot of transimpedance amps at lower speeds. I did look into designing a transimpedance amp close to 500 MHz or above, and I was looking at the Smith chart for how to do wider-band matching to extend the frequency range, without much luck; the matching network always moves the wrong way against the transistor's Zin-versus-frequency plot on the Smith chart. Can you give me some examples of how you do it? I'll read them after I get a good sleep.
 
  • #27
This thread is getting bogged down in details of various instances and it is losing its way.
Matching shouldn't be done with resistors when the RF carrier-to-noise ratio is compromised, or you will just lose valuable received power.
Filtering can be done either at RF (/IF) or at baseband. The RF filtering defines the channel. Inter-symbol interference is more than just "settling time" and can involve the very low-frequency response of a channel; it can't be dismissed by considering just one factor. In the end, it is inter-symbol interference that counts, plus the details of modulation and coding.
The terms 'wide band' and 'narrow band' are relative; mostly, it is fractional bandwidth that counts for matching networks and amplifiers. There is also the question of linearity, which doesn't seem to have been mentioned. But all of the above are analogue considerations. Once your demodulator/decoder has presented its data stream to the following digital circuits, the effects of noise and of linear and non-linear distortion are dealt with by the error-correcting system, which can reclaim some of the damage.
 
  • #28
I don't think we are off topic; in fact, this is very on topic. We are talking about BW and RF, as in the title of the thread. We are not talking about the accuracy of the level after settling down. Yes, if you have an error in the level due to distortion in the transceiver, it is going to fail, but that is not the subject of this thread.

I brought up the reason why BW alone is not the most important thing: it is implied by settling time. Maybe you have a different definition of what I call settling time, just as a lot of RF people call an intermod peak "noise"! I explained my view of why high BW doesn't imply a fast data rate if there are reflections that destroy the amplitude of the data. That has everything to do with the BW and RF of this thread. And I explained that the data link, as asked about in the original post, MUST include the transmission medium, which can cause problems; it is more than the transceiver alone. That's where impedance mismatch and the eye pattern become important.

RF by default entails painstaking detail. If we didn't consider RF, a lot of amplifiers would just be common-emitter/source amplifiers! The devil is in the details; every little point on the S-parameters, every little segment of trace counts and affects the outcome.

Yes, to get low BER you need it all; RF and BW are only a subset of the whole picture.
 

FAQ: RF fundamentals, how does bandwidth affect throughput

1. What is RF and why is it important in wireless communication?

RF (Radio Frequency) is a type of electromagnetic radiation that is used in wireless communication systems. It is important because it allows for wireless devices to send and receive information without the need for physical connections or wires.

2. What are the fundamentals of RF and how does it work?

The fundamentals of RF include frequency, wavelength, amplitude, and phase. RF waves are created when an alternating current is driven through an antenna, which then radiates electromagnetic energy into the air. This energy travels through the air in the form of waves and can be received by another antenna tuned to the same frequency.

3. How does bandwidth affect the throughput of a wireless system?

Bandwidth refers to the range of frequencies that can be used to transmit data within a specific wireless network. The higher the bandwidth, the faster the data can be transmitted, resulting in a higher throughput. In other words, a wider bandwidth allows for more data to be transmitted at once, increasing the overall speed and efficiency of the wireless system.

4. What factors can affect the bandwidth of a wireless system?

There are several factors that can affect the bandwidth of a wireless system, including the physical distance between devices, interference from other electronic devices, and the number of devices connected to the network. Weather conditions, such as heavy rain or snow, can also impact the available bandwidth.

5. How can bandwidth be optimized to increase throughput in a wireless system?

To optimize bandwidth and increase throughput in a wireless system, proper network planning and management is crucial. This includes selecting the right frequency for the network, minimizing interference, and using advanced technologies such as MIMO (Multiple Input Multiple Output) to efficiently utilize the available bandwidth. Regular maintenance and upgrades can also help improve the overall performance of the wireless system.
