Sample configuration files for all networks are available in the Examples Menu on the NetSim Home Screen. These files provide examples of how NetSim can be used: the parameters that can be changed and their typical effect on performance.
Factors affecting WLAN PHY Rate #
The examples explained in this section focus on the factors that affect the PHY rate/link throughput of 802.11 based networks:
- Transmitter power (more Tx power leads to higher throughput)
- Channel path loss (a higher path loss exponent leads to lower throughput)
- Receiver sensitivity (lower Rx sensitivity leads to higher throughput)
- Distance (a greater distance between nodes leads to lower throughput)
Effect of Transmitter power#
Open NetSim and Select Examples->Internetworks->Wi-Fi-> Effect of Transmitter Power then click on the tile in the middle panel to load the example as shown in Figure 4‑1.
Figure 4‑1: List of scenarios for the example of effect of transmitter power
The following network diagram illustrates what the NetSim UI displays when you open the example configuration file; see Figure 4‑2.
Figure 4‑2: Network set up for studying the effect of transmitter power
Increasing the transmitter power increases the received power when all other parameters are held constant. Higher received power leads to a higher SNR and hence higher PHY data rates, fewer errors, and higher throughput.
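The effect can be sketched numerically. The snippet below is a minimal illustration (not NetSim's internal code) of how the received power varies with transmitter power under the Log Distance path loss model used in this example; the reference loss of about 40 dB at 1 m is an assumed value for a 2.4 GHz link.

```python
import math

def rx_power_dbm(tx_power_mw, distance_m, path_loss_exp,
                 ref_dist_m=1.0, pl_ref_db=40.0):
    """Received power under the log-distance path loss model.
    pl_ref_db (~40 dB at 1 m for 2.4 GHz) is an assumed reference loss."""
    tx_dbm = 10 * math.log10(tx_power_mw)  # convert mW to dBm
    path_loss_db = pl_ref_db + 10 * path_loss_exp * math.log10(distance_m / ref_dist_m)
    return tx_dbm - path_loss_db

# Transmitter power values from Table 4-1, at d = 35 m with path loss exponent 2.5
for p_mw in [100, 60, 40, 20, 10]:
    print(f"{p_mw:>4} mW -> Rx power = {rx_power_dbm(p_mw, 35, 2.5):.1f} dBm")
```

Each step down in transmitter power lowers the received power, and hence the SNR, by a few dB, which is why the PHY rate and throughput in Table 4‑1 fall step by step.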
Network Settings
- Environment Grid length: 500m x 500m
- Distance between the Access Point and the Wireless Node is set to 35m
- Set the transmitter power to 100mW under Interface Wireless > Physical Layer properties of the Access Point
- Set DCF as the medium access protocol under the Datalink Layer properties of the Access Point and Wireless Node.
- Channel Characteristics: Path Loss Only, Path Loss Model: Log Distance, Path Loss Exponent: 2.5
- Application Generation Rate: 10Mbps (Packet Size: 1460 Bytes, Inter Arrival Time: 1168µs)
- Click on the Application icon present in the top ribbon/toolbar and set Transport Protocol to UDP
- Enable the Plots and run the simulation for 10s
- Go back to the scenario, decrease the Transmitter Power to 60, 40, 20 and 10mW respectively, and run the simulation for 10s for each setting. Observe that the throughput decreases progressively.
Results and Discussion
Transmitter Power (mW) | Throughput (Mbps) | Phy Rate (Mbps) |
---|---|---|
100 | 5.94 | 11 |
60 | 3.79 | 5 |
40 | 1.67 | 2 |
20 | 0.89 | 1 |
10 | 0.0 | 0 |
Table 4‑1: Result comparison of different transmitter power vs. throughput
Effect of AP-STA Distance on throughput#
Open NetSim and Select Examples > Internetworks > Wi-Fi > Effect of AP STA Distance on throughput then click on the tile in the middle panel to load the example as shown in Figure 4‑3.
Figure 4‑3: List of scenarios for the example effect of AP-STA distance on throughput
The following network diagram illustrates what the NetSim UI displays when you open the example configuration file; see Figure 4‑4.
Figure 4‑4: Network set up for studying the Effect of AP-STA Distance on throughput
As the distance between two devices increases, the received signal power falls because propagation loss grows with distance. As the received power falls, the underlying PHY rate of the channel drops.
Network Settings
- Environment Grid length: 500m x 500m
- Distance between the Access Point and the Wireless Node is set to 5m
- Set DCF as the medium access protocol under the Datalink Layer properties of the Access Point and Wireless Node.
- WLAN Standard is set to 802.11ac. The No. of Tx and Rx Antennas is set to 1 in the Access Point, while the Wireless Node has 1 Tx Antenna and 2 Rx Antennas (Right-Click Access Point or Wireless Node > Properties > Interface Wireless > Transmitting Antennas and Receiving Antennas). Bandwidth is set to 20 MHz and Transmitter Power to 100mW in both the Access Point and the Wireless Node.
- Wired link speed is set to 1Gbps and propagation delay to 10 µs in the wired links.
- Channel Characteristics: Path Loss Only, Path Loss Model: Log Distance, Path Loss Exponent: 3.5.
- Application Generation Rate: 100 Mbps (Packet Size: 1460 Bytes, Inter Arrival Time: 116 µs)
- Click on the Application icon present in the top ribbon/toolbar and set Transport Protocol to UDP
- Enable the Plots and run the simulation for 10s.
- Go back to the scenario, increase the distance as per the result table, and run the simulation for 10s for each distance.
Results and Discussion
Distance (m) | Throughput (Mbps) |
---|---|
5 | 22.81 |
10 | 21.61 |
15 | 17.87 |
20 | 14.70 |
25 | 12.49 |
30 | 9.56 |
35 | 5.63 |
40 | 0 |
Table 4‑2: Result comparison of different distance vs. throughput
Plot
Figure 4‑5: Plot of Throughput vs Distance
Effect of Pathloss Exponent#
Open NetSim and Select Examples > Internetworks > Wi-Fi > Effect of Pathloss Exponent then click on the tile in the middle panel to load the example as shown in Figure 4‑6.
Figure 4‑6: List of scenarios for the example of effect of pathloss exponent
The following network diagram illustrates what the NetSim UI displays when you open the example configuration file, as shown in Figure 4‑7.
Figure 4‑7: Network set up for studying the effect of pathloss exponent
Path Loss or Attenuation of RF signals occurs naturally with distance. Losses can be increased by increasing the path loss exponent (η). This option is available in channel characteristics. Users can compare the results by changing the path loss exponent (η) value.
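As a rough numerical illustration of why a larger η hurts, the sketch below (under the same assumed 40 dB reference loss at 1 m as in the earlier sketch, not NetSim's exact values) shows how quickly the received power at 20 m falls as η grows for a 100 mW transmitter.

```python
import math

PL_REF_DB = 40.0   # assumed reference path loss at d0 = 1 m (illustrative value)
TX_DBM = 20.0      # 100 mW transmitter expressed in dBm
D = 20.0           # AP-STA distance used in this example (m)

def rx_power_dbm(eta):
    """Log-distance model: PL(d) = PL(d0) + 10 * eta * log10(d / d0)."""
    return TX_DBM - (PL_REF_DB + 10 * eta * math.log10(D))

for eta in [2.0, 2.5, 3.0, 3.5, 4.0, 4.5]:
    print(f"eta = {eta}: Rx power at 20 m ~ {rx_power_dbm(eta):.1f} dBm")
```

Every 0.5 increase in η costs roughly 6.5 dB of received power at 20 m, so the achievable MCS, and with it the throughput in Table 4‑3, steps down until the link can no longer be decoded at all.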
Network Settings
- Environment Grid length: 500m x 500m
- Distance between the Access Point and the Wireless Node is set to 20m
- Set DCF as the medium access protocol under the Datalink Layer properties of the Access Point and Wireless Node. WLAN Standard is set to 802.11ac, and the No. of Tx and Rx Antennas is set to 1 in both the Access Point and Wireless Node (Right-Click Access Point or Wireless Node > Properties > Interface Wireless > Transmitting Antennas and Receiving Antennas). Bandwidth is set to 20 MHz and Transmitter Power to 100mW in both the Access Point and the Wireless Node.
- Wired link speed is set to 1Gbps and propagation delay to 10 µs in the wired links.
- Channel Characteristics: Path Loss Only, Path Loss Model: Log Distance, Path Loss Exponent: 2
- Application Generation Rate: 100 Mbps (Packet Size: 1460 Bytes, Inter Arrival Time: 116 µs)
- Click on the Application icon present in the top ribbon/toolbar and set Transport Protocol to UDP
- Enable Plots in the NetSim GUI and run the simulation for 10s.
- Go back to the scenario, increase the Path Loss Exponent from 2 to 2.5, 3, 3.5, 4, and 4.5 respectively, and run the simulation for 10s for each value.
Results and Discussion
Path loss Exponent | Throughput (Mbps) |
---|---|
2.0 | 22.81 |
2.5 | 21.61 |
3.0 | 20.04 |
3.5 | 14.70 |
4.0 | 9.56 |
4.5 | 0 |
Table 4‑3: Result comparison of different pathloss exponent value vs. throughput
Plot
Figure 4‑8: Plot of Throughput vs Path loss Exponent
Queuing and buffer overflow in routers #
Open NetSim and Select Examples > Internetworks >Queuing and buffer overflow in routers then click on the tile in the middle panel to load the example as shown in Figure 4‑9.
Figure 4‑9: List of scenarios for the example of queuing and buffer overflow in routers
The following network diagram illustrates what the NetSim UI displays when you open the example configuration file, as shown in Figure 4‑10.
Figure 4‑10: Network set up for studying the queuing and buffer overflow in routers
Network Settings
- Click on the Application icon present in the top ribbon/toolbar and set Transport Protocol to UDP
- Generation rate = 10Mbps for each application (Packet Size: 1460 Bytes, Inter Arrival Time: 1168µs). Generation Rate (Mbps) = (Packet Size (Bytes) * 8) / Inter Arrival Time (µs)
- The traffic generation rate can be modified by changing the application properties. Note that the generation rate should be less than or equal to the service rate for a steady-state simulation, where the service rate is defined as the data rate supported by the bottleneck link. In this case, there is no bottleneck link since all links support up to 100 Mbps.
- Plots and Packet Trace are enabled.
- Simulate for 100s and note down the throughput.
- Go back to the scenario and change the link speed (both Uplink and Downlink Speed) between Router_5 and Wired_Node_4 from the default 100 Mbps to 25 Mbps. In this case, the link between Router_5 and Wired_Node_4 becomes a bottleneck link, since the link rate (i.e., service rate) is less than the aggregate generation rate of 30 Mbps (10 * 3).
Discussion
Bottleneck link 100Mbps: In this scenario, the router receives packets from three links at 10 Mbps each, a total of 30 Mbps, while the router-node link supports 100 Mbps. Hence there is no queuing or packet drop in the router, and the application throughput is approximately equal to the generation rate.
Figure 4‑11: Application Metrics window for Bottleneck link 100Mbps
Bottleneck link 25Mbps: In this case, the bottleneck link supports only 25 Mbps. Packets therefore accumulate in the router's buffer, which overflows after reaching its limit, and the router starts dropping packets. The application throughput is approximately equal to the bottleneck link capacity.
Figure 4‑12: Application Metrics window for Bottleneck link 25Mbps
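A quick back-of-the-envelope check of both cases, treating the router as a simple work-conserving queue (a sketch of the reasoning above, not NetSim's queueing implementation):

```python
def bottleneck_summary(offered_mbps, service_mbps):
    """Expected steady-state behaviour of a single bottleneck link."""
    carried = min(offered_mbps, service_mbps)        # what the link can drain
    dropped = max(0, offered_mbps - service_mbps)    # excess eventually overflows the buffer
    loss_pct = 100.0 * dropped / offered_mbps
    return carried, loss_pct

for link_mbps in (100, 25):
    carried, loss = bottleneck_summary(offered_mbps=30, service_mbps=link_mbps)
    print(f"{link_mbps} Mbps link: ~{carried} Mbps delivered, ~{loss:.0f}% of packets dropped")
```

With the 100 Mbps link the aggregate throughput tracks the 30 Mbps offered load, while with the 25 Mbps link it saturates near 25 Mbps and roughly one packet in six is eventually dropped at the router buffer.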
Frame aggregation in 802.11n #
Open NetSim and Select Examples > Internetworks > Wi-Fi > 802.11n Frame Aggregation then click on the tile in the middle panel to load the example as shown in Figure 4‑13.
Figure 4‑13: List of scenarios for the example of 802.11n frame aggregation
The following network diagram illustrates what the NetSim UI displays when you open the example configuration file; see Figure 4‑14.
Figure 4‑14: Network set up for studying the 802.11n frame aggregation
Network Settings
- In the Environment Settings, Grid length is set to 50m * 50m
- Distance between the Access Point and the Wireless Node is 20m
- Set DCF as the medium access protocol under the Datalink Layer properties of the Access Point and Wireless Node.
- Click on the Application icon present in the top ribbon/toolbar
- CBR Application with 100 Mbps Generation Rate (Packet Size: 1460 Bytes, Inter Arrival Time: 116µs)
- Set Transport Protocol to UDP
- WLAN Standard is set to 802.11n and No. of Frames to Aggregate is set to 1 in both the Access Point and Wireless Node (Right-Click Access Point or Wireless Node > Properties > Interface Wireless > No. of Frames to Aggregate)
- Channel Characteristics: Path Loss Only, Path Loss Model: Log Distance, Path Loss Exponent: 3 (Wireless Link Properties)
- Enable the Plots and Packet Trace and run the simulation for 10s. Then check the throughput in the results window.
- Go back to the scenario, increase the No. of Frames to Aggregate to 5 and 10 respectively, and check the throughput in the results window.
Results and discussion
No of Frames Aggregated | Application Throughput |
---|---|
1 | 23.97 Mbps |
5 | 44.77 Mbps |
10 | 54.24 Mbps |
Table 4‑4: No of Frames Aggregated vs. Application Throughput
- Frame aggregation joins multiple MSDUs into a single MPDU that is delivered to the physical layer as a single unit for transmission. As the number of frames aggregated increases, fewer acknowledgements are needed; hence more data frames are transmitted per unit time, leading to a higher application throughput (a rough model is sketched after this list).
- When No. of Frames to Aggregate is set to 5, we see five successive frames followed by a WLAN_Block_Ack (which acknowledges that the five frames were received successfully). Users can observe this in the Packet Trace by filtering Packet Status to Successful and Tx_ID to the Access Point and Wireless Node.
- Note that in the early stages of the simulation the AP transmits whatever frames/packets are in its buffer; it does not wait for 5 frames to accumulate even if the number of frames to aggregate is set to 5. Once the Access Point buffer has more than 5 frames, it aggregates 5 frames and sends them, and after sending the 5 frames it receives one WLAN_Block_Ack.
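The gain in Table 4‑4 comes from amortising the fixed per-transmission overhead (DIFS, backoff, preamble, SIFS and the block ACK) over several frames. The sketch below uses illustrative overhead and rate numbers, not NetSim's exact 802.11n parameters, just to show the trend.

```python
def aggregated_throughput_mbps(n_frames, payload_bytes=1460, phy_rate_mbps=65.0,
                               fixed_overhead_us=300.0, per_frame_header_bytes=40):
    """Illustrative aggregation model: one fixed overhead per transmission,
    n_frames payloads sent back-to-back at the PHY rate."""
    airtime_us = fixed_overhead_us + n_frames * (
        (payload_bytes + per_frame_header_bytes) * 8 / phy_rate_mbps)
    return n_frames * payload_bytes * 8 / airtime_us   # bits per microsecond = Mbps

for n in (1, 5, 10):
    print(f"{n:>2} frame(s) aggregated -> ~{aggregated_throughput_mbps(n):.1f} Mbps")
```

With these rough numbers the model gives about 24, 48 and 54 Mbps for 1, 5 and 10 aggregated frames, reproducing the diminishing-returns trend of Table 4‑4 as the fixed overhead becomes a smaller fraction of each transmission.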
Rate Adaptation in 802.11b #
NetSim rate adaptation is explained in Section 3.1.21 of this document. This experiment can be performed only with the Standard and Pro versions of NetSim, since it involves code modification.
Users should uncomment the following line (line #38) in IEEE802_11.h in the 802.11 project, rebuild the code, and then perform this example.
#define _RECALCULATE_RX_SENSITIVITY_BASED_ON_PEP_
Open NetSim, Select Examples > Internetworks > Wi-Fi > 802.11 Rate Adaptation then click on the tile in the middle panel to load the example as shown in Figure 4‑15.
Figure 4‑15: List of scenarios for the example of 802.11 rate adaptation
The following network diagram illustrates what the NetSim UI displays when you open the example configuration file for Rate Adaptation.
Figure 4‑16: Network set up for studying the Wi-Fi Rate Adaptation
Network Settings
- Environment Grid length: 500m * 500m
- Distance between the AP and the Wireless Node is 65.5m
- Enable the Packet Trace and Plots options
- Set Rate Adaptation to Generic in the Datalink Layer properties of the Access Point and Wireless Node
- Set DCF as the medium access protocol under the Datalink Layer properties of the Access Point and Wireless Node.
- Click on the Application icon present in the top ribbon/toolbar and set Transport Protocol to UDP
- Set the WLAN Standard to 802.11b
- Channel Characteristics: Path Loss Only, Path Loss Model: Log Distance, Path Loss Exponent: 3.25 (Wireless Link Properties)
- CBR application with 10Mbps generation rate (Packet Size: 1460 Bytes, Inter Arrival Time: 1168 µs)
- Simulate for 10s.
Results and Discussion
Open the Packet Trace and filter Packet Type to CBR, Transmitter_ID to Access Point 3, and Packet Status to Errored and Successful, then calculate the PHY rate. The PHY rate can be calculated from the packet trace using the formula shown below:
Phy rate (802.11b) = Phy_layer_payload * 8/(phy end time−phy arrival time−192)
192 µs is the PLCP preamble and header duration for 802.11b (long preamble).
Calculate the PHY rate for all the data packets going from the Access Point to the Wireless Node. For details on setting the filters, refer to NetSim User Manual > Section 8.4.2, How to set filters to NetSim Trace file.
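A small helper of the kind below can be used to apply this formula to rows exported from the packet trace; the field names and the example numbers are assumptions for illustration, not the exact trace column headers.

```python
PREAMBLE_US_11B = 192.0  # PLCP preamble + header duration for 802.11b (long preamble)

def phy_rate_mbps(phy_payload_bytes, phy_arrival_us, phy_end_us,
                  preamble_us=PREAMBLE_US_11B):
    """PHY rate = payload bits / (airtime minus the preamble/header time)."""
    return phy_payload_bytes * 8 / (phy_end_us - phy_arrival_us - preamble_us)

# Hypothetical trace row: a 1507-byte PHY payload whose transmission spans 1288 us
print(f"{phy_rate_mbps(1507, 0.0, 1288.0):.1f} Mbps")   # -> 11.0 Mbps
```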
Figure 4‑17: Packet Trace
The ‘Generic’ rate adaptation algorithm is similar to the Auto Rate Fall Back (ARF) algorithm. In this algorithm:
- The rate goes up one step after 20 consecutive packet successes.
- The rate goes down one step after 4 consecutive packet failures.
In the above screenshot, the PHY rate reduces from 11Mbps to 5.5Mbps after 4 consecutive errored data packets. The rate then increases from 5.5Mbps back to 11Mbps once there are 20 consecutive successful data packet transmissions.
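A minimal sketch of this 'Generic'/ARF-style logic (illustrative only, not NetSim's actual implementation):

```python
RATES_11B = [1, 2, 5.5, 11]  # 802.11b PHY rates in Mbps

class ArfRateController:
    """Move up one rate after 20 consecutive successes, down one after 4 failures."""
    def __init__(self, up_after=20, down_after=4):
        self.idx = len(RATES_11B) - 1   # start at the highest rate
        self.successes = self.failures = 0
        self.up_after, self.down_after = up_after, down_after

    def on_result(self, success):
        if success:
            self.successes, self.failures = self.successes + 1, 0
            if self.successes >= self.up_after and self.idx < len(RATES_11B) - 1:
                self.idx, self.successes = self.idx + 1, 0
        else:
            self.failures, self.successes = self.failures + 1, 0
            if self.failures >= self.down_after and self.idx > 0:
                self.idx, self.failures = self.idx - 1, 0
        return RATES_11B[self.idx]

ctrl = ArfRateController()
for ok in [False] * 4 + [True] * 20:    # 4 failures (11 -> 5.5), then 20 successes (back to 11)
    rate = ctrl.on_result(ok)
print(f"current rate: {rate} Mbps")
```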
802.11n MIMO #
Open NetSim, Select Examples -> Internetworks -> Wi-Fi -> 802.11n-MIMO then click on the tile in the middle panel to load the example shown in Figure 4‑18.
Figure 4‑18: List of scenarios for the example of 802.11n-MIMO
The following network diagram illustrates what the NetSim UI displays when you open the example configuration file for 802.11n-MIMO.
Figure 4‑19: Network set up for studying the 802.11n-MIMO
Network Settings
- Environment Grid length: 50m * 50m
- Distance between the AP and the Wireless Node is 20m.
- Set DCF as the medium access protocol under the Datalink Layer properties of the Access Point and Wireless Node
- Click on the Application icon present in the top ribbon/toolbar and set Transport Protocol to UDP
- WLAN Standard is set to 802.11n and the No. of Tx and Rx Antennas is set to 1 in both the Access Point and Wireless Node (Right-Click Access Point or Wireless Node > Properties > Interface Wireless > Transmitting Antennas and Receiving Antennas)
- Channel Characteristics: Path Loss Only, Path Loss Model: Log Distance, Path Loss Exponent: 3 (Wireless Link Properties)
- CBR application with 50Mbps generation rate (Inter Arrival Time: 233 µs)
- Enable Plots.
- Simulate for 10s and check the throughput.
- Go back to the scenario, increase the number of Tx and Rx Antennas from 1x1 to 2x2, 3x3 and 4x4 respectively, and check the throughput in the results window.
Results and Discussion
Number of Tx and Rx Antenna | Throughput |
---|---|
1 x 1 | 23.97 Mbps |
2 x 2 | 31.04 Mbps |
3 x 3 | 33.38 Mbps |
4 x 4 | 35.95 Mbps |
Table 4‑5: Number of Tx and Rx Antenna vs. Throughput
MIMO is a method for multiplying the capacity of a radio link using multiple transmit and receive antennas. Increasing the number of Transmitting and Receiving Antennas increases the PHY data rate (link capacity) and hence the application throughput.
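For 802.11n in a 20 MHz channel the nominal per-stream PHY rate at the highest MCS is 65 Mbps (assuming the long guard interval), so the link capacity scales linearly with the number of spatial streams. A short sketch contrasting that with the measured application throughputs in Table 4‑5:

```python
PER_STREAM_PHY_MBPS = 65.0   # 802.11n, 20 MHz, MCS 7, long guard interval (assumed)
measured_mbps = {1: 23.97, 2: 31.04, 3: 33.38, 4: 35.95}   # Table 4-5

for n_ss, app_tput in measured_mbps.items():
    phy = n_ss * PER_STREAM_PHY_MBPS    # nominal PHY capacity grows with spatial streams
    print(f"{n_ss}x{n_ss}: PHY ~ {phy:5.1f} Mbps, measured app throughput {app_tput} Mbps "
          f"({100 * app_tput / phy:.0f}% MAC efficiency)")
```

The PHY capacity quadruples from 1x1 to 4x4 but the application throughput grows by only about 50%, because the per-packet MAC overhead (DIFS, backoff, preamble, ACK) does not shrink as the data rate rises; frame aggregation, covered in the previous section, is the usual remedy.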
Effect of Bandwidth and Guard Interval in Wi-Fi 802.11ac #
Effect of Bandwidth#
Open NetSim and Select Examples > Internetworks > Wi-Fi > Effect of bandwidth in Wi-Fi 802.11ac then click on the tile in the middle panel to load the example as shown in Figure 4‑20.
Figure 4‑20: List of scenarios for the example of effect of bandwidth in Wi-Fi 802.11ac
The following network diagram illustrates what the NetSim UI displays when you open the example configuration file, as shown in Figure 4‑21.
Figure 4‑21: Network set up for studying the effect of bandwidth in Wi-Fi 802.11ac
Network Settings
- Environment Grid length: 50m * 50m.
- Click on the Application icon present in the top ribbon/toolbar and set Transport Protocol to UDP
- Channel Characteristics: NO PATHLOSS in the wireless link properties.
- Set the Bit Error Rate and Propagation Delay to zero under the wired link properties.
- Set the 802.11ac standard and Bandwidth to 20MHz under Wireless Interface > Physical Layer properties of the Access Point and Wireless Node.
- Set DCF as the medium access protocol under Wireless Interface > Datalink Layer properties of the Access Point and Wireless Node.
- Set the Transmitter Power to 40mW under the Wireless Interface properties of the Access Point and Wireless Node.
- Enable Packet Trace and Plots.
- Set the generation rate to 100 Mbps under Application properties (Packet Size = 1460 Bytes, Inter Arrival Time = 116 µs). The generation rate can be calculated using the formula below:
$$Generation\ Rate\ (Mbps) = \frac{Packet\ Size\ (Bytes) \times 8}{Inter\ Arrival\ Time\ (\mu s)} = \frac{1460 \times 8}{116} \approx 100\ Mbps$$
- Run the simulation for 10s and see the Application Throughput in the Results Window.
- Go back to the scenario, increase the Bandwidth from 20 MHz to 40, 80 and 160 MHz respectively, and check the throughput in the results window.
Analytical Model
The average time to transmit a packet comprises:
- DIFS
- Backoff duration
- Data packet transmission time
- SIFS
- MAC ACK transmission time
The timing diagram is shown in Figure 4‑22 below.
Figure 4‑22: Timing diagram for WLAN
The Average throughput can be calculated by using the formula below:
$$Average\ Throughput\ (Mbps) = \frac{Application\ Payload\ (Bytes) \times 8}{Average\ Time\ per\ Packet\ (\mu s)}$$
Average Time per Packet (µs) = DIFS + Average Backoff Time + Packet Transmission Time + SIFS + ACK Transmission Time
DIFS (µs) = SIFS + 2 * Slot Time
Average Backoff Time (µs) = (CWmin / 2) * Slot Time
Packet Transmission Time (µs) = Preamble Time + (MPDU Size / Data Rate)
ACK Transmission Time (µs) = Preamble Time + (ACK Packet Size / ACK Data Rate)
Where,
Application Payload = 1460 Bytes
SIFS = 16 µs
Slot Time = 9 µs
CWmin = 15 slots for 802.11ac
Preamble Time = 44 µs for the 802.11ac standard
MPDU Size = 1460 + 8 + 20 + 44 = 1532 Bytes
DIFS = SIFS + 2 * Slot Time = 16 µs + 2 * 9 µs = 34 µs
Average Backoff Time = (15 / 2) * 9 = 67.5 µs
Packet Transmission Time = 44 µs + (1532 * 8 / 86.7 Mbps) = 185.36 µs
ACK Transmission Time = 44 µs + (152 Bytes * 8 / 7.2 Mbps) = 212.88 µs
Average Time per Packet = 34 + 67.5 + 185.36 + 16 + 212.88 = 513.74 µs
Average Throughput = 1460 * 8 / 513.74 = 22.7 Mbps
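The same arithmetic can be scripted so that other bandwidths, with their corresponding data rates, can be plugged in quickly. This is a sketch of the analytical model above, not NetSim code; its output differs from the quoted totals only by rounding of the intermediate values.

```python
def wlan_udp_throughput_mbps(data_rate_mbps, ack_rate_mbps=7.2,
                             payload_bytes=1460, mpdu_bytes=1532, ack_bytes=152,
                             sifs_us=16.0, slot_us=9.0, cwmin_slots=15,
                             preamble_us=44.0):
    """Average single-station UDP throughput, per the 802.11ac model above."""
    difs = sifs_us + 2 * slot_us
    backoff = (cwmin_slots / 2) * slot_us
    data_tx = preamble_us + mpdu_bytes * 8 / data_rate_mbps
    ack_tx = preamble_us + ack_bytes * 8 / ack_rate_mbps
    time_per_packet_us = difs + backoff + data_tx + sifs_us + ack_tx
    return payload_bytes * 8 / time_per_packet_us

# 20 MHz case with an 86.7 Mbps PHY rate: ~22.6 Mbps, close to the 22.70 Mbps in Table 4-6
print(f"{wlan_udp_throughput_mbps(86.7):.1f} Mbps")
```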
Similarly, calculate the throughput analytically for the other samples by changing the bandwidth, and compare with the simulation throughput. Users can obtain the data rate from the packet trace using the formula given below:
Phy rate (802.11ac) = Phy_layer_payload * 8 / (Phy end time − Phy arrival time − 44)
Results and Discussion
Bandwidth (MHz) | Analytical Estimate of Throughput (Mbps) | Simulation Throughput (Mbps) |
---|---|---|
20 | 22.70 | 22.80 |
40 | 33.77 | 33.94 |
80 | 43.39 | 43.64 |
160 | 49.35 | 49.75 |
Table 4‑6: Result comparison of different bandwidth vs. Analytical Estimate of Throughput and Simulation Throughput
One can observe that there is an increase in throughput as we increase the bandwidth from 20MHz to 160MHz.
Effect of Guard Interval#
Open NetSim and click on Examples > Internetworks > Wi-Fi > Effect of Guard Interval in Wi-Fi 802.11ac then click on the tile in the middle panel to load the example as shown in Figure 4‑23.
Figure 4‑23: List of scenarios for the example of effect of guard interval in Wi-Fi 802.11ac
The following network diagram illustrates what the NetSim UI displays when you open the example configuration file, as shown in Figure 4‑24.
Figure 4‑24: Network set up for studying the effect of guard interval in Wi-Fi 802.11ac
Network Settings
- Environment Grid length: 50m * 50m.
- Click on the Application icon present in the top ribbon/toolbar and set Transport Protocol to UDP
- Channel Characteristics: NO PATHLOSS in the wireless link properties.
- Set the 802.11ac standard and Bandwidth to 20MHz under Wireless Interface > Physical Layer properties of the Access Point and Wireless Node.
- Set DCF as the medium access protocol under Wireless Interface > Datalink Layer properties of the Access Point and Wireless Node.
- Set the Bit Error Rate and Propagation Delay to zero under the wired link properties.
- Set the Transmitter Power to 40mW under the Wireless Interface properties of the Access Point and Wireless Node.
- Enable Plots.
- Set the Guard Interval to 400ns under Wireless Interface > Physical Layer properties of the Access Point and Wireless Node.
- Set the generation rate to 100 Mbps under Application properties (Packet Size = 1460 Bytes, Inter Arrival Time = 116 µs). The generation rate can be calculated using the formula below:
$$Generation\ Rate\ (Mbps) = \frac{Packet\ Size\ (Bytes) \times 8}{Inter\ Arrival\ Time\ (\mu s)} = \frac{1460 \times 8}{116} \approx 100\ Mbps$$
- Run the simulation for 10s and note down the throughput.
- Go back to the scenario, increase the Guard Interval from 400ns to 800ns, and check the throughput in the results window.
Calculate throughput theoretically as explained above and compare with Simulation throughput.
Results and Discussion
Guard Interval (ns) | Theoretical Throughput (Mbps) | Simulation Throughput (Mbps) |
---|---|---|
400 | 17.76 | 22.80 |
800 | 16.87 | 21.38 |
Table 4‑7: Result comparison of different Guard Interval vs. Theoretical Throughput and Simulation Throughput
Peak UDP and TCP throughput 802.11ac and 802.11n #
Open NetSim, Select Examples -> Internetworks -> Wi-Fi -> Peak UDP and TCP throughput 802.11ac and 802.11n, then click on the tile in the middle panel to load the example as shown in Figure 4‑25.
Figure 4‑25: List of scenarios for the example of Peak UDP and TCP throughput 802.11ac and 802.11n
The following network diagram illustrates what the NetSim UI displays when you open the example configuration file, as shown in Figure 4‑26.
Figure 4‑26: Network set up for studying the Peak UDP and TCP throughput 802.11ac and 802.11n
IEEE802.11n#
Network Settings
- Set the following properties as shown in Table 4‑8 below.
Interface Parameters | |
---|---|
Standard | IEEE802.11n |
No. of Frames to aggregate | 64 |
Standard Channel | 36 (5180MHz) |
Rate Adaptation | False |
Short Retry Limit | 7 |
Long Retry Limit | 4 |
Dott11_RTSThreshold | 3000bytes |
Medium Access Protocol | DCF |
Buffer Size | 100MB |
Guard Interval | 400ns |
Bandwidth | 40 MHz |
Frequency Band | 5 GHz |
Transmitter Power | 100mW |
Antenna Gain | 0 |
Antenna height | 1m |
Reference distance (d0) | 1m |
Transmitting Antennas | 4 |
Receiving Antennas | 4 |
Table 4‑8: Detailed Network Parameters for IEEE802.11n
- Set the wired link properties as shown below:
- Uplink Speed and Downlink Speed (Mbps): 1000
- Uplink BER and Downlink BER: 0
- Uplink and Downlink Propagation Delay (µs): 10
- The Channel Characteristics are set to No Pathloss in the wireless link properties.
- Configure a Downlink application with the Wired Node as the source and the Wireless Node as the destination.
Application Properties | |
---|---|
App1_CBR | |
Packet Size (Byte) | 1450 |
Inter Arrival Time (µs) | 11.6 |
Transport Protocol | UDP |
Table 4‑9: Application Parameters
- Plots are enabled in the NetSim GUI.
- Run the simulation for 5s. After the simulation completes, go to the metrics window and note down the throughput value from the Application Metrics.
Go back to the 802.11n UDP scenario and change the Transport Protocol to TCP, with Window Scaling set to True and Scale Shift Count set to 5 in the Transport Layer of the Wired Node and Wireless Node, for the other sample (i.e., 802.11n TCP). Run the simulation for 5s and note down the throughput value from the Application Metrics.
Results and Discussion
Transport Protocol | Throughput (Mbps) |
---|---|
UDP | 464.41 |
TCP | 386.12 |
Table 4‑10: Results comparison of TCP and UDP throughputs for IEEE802.11n
Plot
Figure 4‑27: Plot of Throughput (Mbps) Vs. Transport Protocol for IEEE802.11n
IEEE802.11ac#
Network Settings
- Set the following properties as shown in Table 4‑11 below:
Interface Parameters | |
---|---|
Standard | IEEE802.11ac |
No. of Frames to aggregate | 1024 |
Standard Channel | 36 (5180MHz) |
Rate Adaptation | False |
Short Retry Limit | 7 |
Long Retry Limit | 4 |
Dott11_RTSThreshold | 3000bytes |
Medium Access Protocol | DCF |
Buffer Size (Access Point) | 100MB |
Guard Interval | 400ns |
Bandwidth | 160 MHz |
Frequency Band | 5 GHz |
Transmitter Power | 100mW |
Antenna Gain | 0 |
Antenna height | 1m |
Reference distance (d0) | 1m |
Transmitting Antennas | 8 |
Receiving Antennas | 8 |
Table 4‑11: Detailed Network Parameters for IEEE802.11ac
- Set the wired link properties as shown below:
- Uplink Speed and Downlink Speed (Mbps): 10000
- Uplink BER and Downlink BER: 0
- Uplink and Downlink Propagation Delay (µs): 10
- The Channel Characteristics are set to No Pathloss in the wireless link properties.
- Configure a Downlink application with the Wired Node as the source and the Wireless Node as the destination.
Application Properties | |
---|---|
App1_CBR | |
Packet Size (Byte) | 1450 |
Inter Arrival Time (µs) | 1.93 |
Transport Protocol | UDP |
Table 4‑12: Application Parameters
- Plots are enabled in the NetSim GUI.
- Run the simulation for 10s. After the simulation completes, go to the metrics window and note down the throughput value from the Application Metrics.
Go back to the 802.11ac UDP scenario and change the Transport Protocol to TCP, with Window Scaling set to True and Scale Shift Count set to 5 in the Transport Layer of the Wired Node and Wireless Node, for the other sample (i.e., 802.11ac TCP). Run the simulation for 10s and note down the throughput value from the Application Metrics.
Results and Discussion
Transport Protocol | Throughput (Mbps) |
---|---|
UDP | 5361.42 |
TCP | 3207.06 |
Table 4‑13: Results comparison of TCP and UDP throughputs for IEEE802.11ac
Plot
Figure 4‑28: Plot of Throughput (Mbps) Vs. Transport Protocol for IEEE802.11ac
TCP Window Scaling #
Open NetSim, Select Examples->Internetworks->TCP Window Scaling then click on the tile in the middle panel to load the example as shown in Figure 4‑29.
Figure 4‑29: List of scenarios for the example of TCP Window Scaling
The following network diagram illustrates what the NetSim UI displays when you open the example configuration file for TCP Window Scaling, as shown in Figure 4‑30.
Figure 4‑30: Network set up for studying the TCP Window Scaling
The TCP throughput of a link is limited by two windows: the congestion window and the receive window. The congestion window tries not to exceed the capacity of the network (congestion control); the receive window tries not to exceed the capacity of the receiver to process data (flow control).
The TCP window scale option is an option to increase the receive window size allowed in Transmission Control Protocol above its former maximum value of 65,535 bytes.
The TCP window scale option is needed for efficient transfer of data when the bandwidth-delay product is greater than 64K. For instance, if a transmission line of 1.5 Mbit/second is used over a satellite link with a 513 millisecond round trip time (RTT), the bandwidth-delay product is 1,500,000 × 0.513 = 769,500 bits, or about 96,187 bytes.
Using a maximum window size of 64 KB only allows the buffer to be filled to $\frac{65535}{96187} = 68\ \%$ of the theoretical maximum speed of 1.5 Mbps, or 1.02 Mbps.
By using the window scale option, the receive window size may be increased up to a maximum value of 1,073,725,440 bytes. This is done by specifying a one-byte shift count in the header options field. The true receive window size is left shifted by the value in shift count. A maximum value of 14 may be used for the shift count value. This would allow a single TCP connection to transfer data over the example satellite link at 1.5 Mbit/second utilizing all of the available bandwidth.
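The arithmetic in the paragraphs above can be reproduced directly; a small sketch using the satellite-link numbers from the text:

```python
import math

link_bps = 1_500_000          # 1.5 Mbit/s line
rtt_s = 0.513                 # 513 ms round trip time
bdp_bytes = link_bps * rtt_s / 8
print(f"Bandwidth-delay product: {bdp_bytes:,.0f} bytes")        # ~96,188 bytes

max_window = 65535            # classic 16-bit receive window limit
utilisation = max_window / bdp_bytes
print(f"Utilisation without window scaling: {utilisation:.0%} "
      f"(~{utilisation * 1.5:.2f} Mbps)")

# Smallest shift count whose scaled window covers the BDP (the maximum allowed is 14)
shift = math.ceil(math.log2(bdp_bytes / max_window))
print(f"Window scale shift count needed: {shift} "
      f"(scaled window = {max_window << shift:,} bytes)")
```

In the NetSim scenario below, the bottleneck is a 10 Mbps link with a 200 ms round trip time, giving a bandwidth-delay product of about 250 KB, well beyond the 64 KB limit; this is why enabling window scaling lifts the throughput from about 2.5 Mbps towards the link rate.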
Network Settings
- Wired_Node_1 Transport Layer: TCP Window Scaling set to FALSE (by default) and Congestion Plot set to TRUE.
- Application Generation Rate: 10Mbps (Inter Arrival Time = 1168µs)
- Bit Error Rate (Uplink and Downlink): 0 in all wired links
- Wireshark Capture enabled in the General Properties of Wired Node 1 (set to Offline)
- Link1 and Link3 Propagation Delay (uplink and downlink): 5µs (by default)
- Change the Link2 speed to 10Mbps and its Propagation Delay (uplink and downlink) to 100000µs
- Simulate for 100s and note down the throughput
- Now set Window Scaling to TRUE (for all wired nodes)
- Simulate for 100s and note down the throughput.
Results and Discussion
Window Scaling | Application Throughput (Mbps) |
---|---|
FALSE | 2.5 |
TRUE | 8.7 |
Table 4‑14: Results comparison for with/without Window Scaling
Throughput calculation (Without Window Scaling)
Theoretical Throughput = Window Size / Round Trip Time = $\frac{65535\ Bytes \times 8}{200\ ms}$ = 2.62 Mbps
Go to the simulation results window -> Plots -> TCP Congestion Plot; see Figure 4‑32/Figure 4‑33.
Figure 4‑31: Result window
Figure 4‑32: TCP Congestion Plot for wired node_1
With Window Scaling set to FALSE, the application throughput is 2.5 Mbps, slightly less than the theoretical throughput, since it initially takes some time for the window to reach 65535 B.
Figure 4‑33: TCP Congestion Plot for wired node_2
With Window Scaling set to TRUE, users can notice from the above plot that the window size grows up to 560192 Bytes because of window scaling. This leads to a higher application throughput compared to the case without window scaling.
Wireshark Capture was enabled in Wired Node 1, so the PCAP file is generated silently at the end of the simulation. Double-click the WIRED NODE1_1.pcap file available in the results window under Packet Captures. In Wireshark, the window scaling graph can be obtained as follows: select any data packet with a left click, then go to Statistics > TCP Stream Graphs > Window Scaling and select Switch Direction.
Figure 4‑34: Wireshark Window when Window Scaling is TRUE
IP Addressing in NetSim#
When you create a network using the GUI, NetSim will automatically configure the IP address to the devices in the scenario. Consider the following scenarios:
If you create a network with two wired nodes and L2_Switch, the IP addresses are assigned as 11.1.1.1 and 11.1.1.2 for the two wired nodes. The default subnet mask is assigned to be 255.255.0.0. It can be edited to 255.0.0.0 (Class A) or 255.255.255.0 (Class C) subnet masks. Both the nodes are in the same network (11.1.0.0).
Similarly, if you create a network with a router and two wired nodes, the IP addresses are assigned as 11.1.1.2 and 11.2.1.2 for the two wired nodes. The subnet mask is default as in above case, i.e., 255.255.0.0. The IP address of the router is 11.1.1.1 and 11.2.1.1 respectively for the two interfaces. Both the nodes are in different networks (11.1.0.0 and 11.2.0.0) in this case.
The same logic is extended as the number of devices is increased.
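The same subnet logic can be checked with Python's standard ipaddress module; a small sketch using the addresses NetSim assigns in the two cases above:

```python
import ipaddress

MASK = "255.255.0.0"   # default subnet mask assigned by NetSim

def same_network(ip_a, ip_b, mask=MASK):
    """True if both hosts fall in the same subnet for the given mask."""
    net_a = ipaddress.ip_network(f"{ip_a}/{mask}", strict=False)
    net_b = ipaddress.ip_network(f"{ip_b}/{mask}", strict=False)
    return net_a == net_b

# Two wired nodes behind an L2 switch: both in the 11.1.0.0 network
print(same_network("11.1.1.1", "11.1.1.2"))   # True

# Two wired nodes on either side of a router: different networks
print(same_network("11.1.1.2", "11.2.1.2"))   # False
```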
Configuring Static Routing in NetSim #
Static Routing
Routers forward packets using either route information from route table entries that are configured manually or route information that is calculated using dynamic routing algorithms. Static routes, which define explicit paths between two routers, cannot be updated automatically; you must manually reconfigure static routes when network changes occur. Static routes use less bandwidth than dynamic routes, and no CPU cycles are used to calculate and analyze routing updates.
Static routes are used in environments where network traffic is predictable and where the network design is simple. You should not use static routes in large, constantly changing networks because static routes cannot react to network changes. Most networks use dynamic routes to communicate between routers but might have one or two static routes configured for special cases. Static routes are also useful for specifying a gateway of last resort (a default router to which all unrouteable packets are sent).
Note that a static route configuration used with the TCP protocol requires the reverse route to be configured as well.
How to Setup Static Routes
In NetSim, static routes can be configured either prior to the simulation or during the simulation.
Static route configuration prior to simulation:
- Via static route GUI configuration
- Via file input (Interactive-Simulation/SDN)
Static route configuration during the simulation:
- Via the device NetSim Console (Interactive-Simulation/SDN)
Static route configuration via GUI
Open NetSim, Select Examples->Internetworks->Configuring Static Route, then click on the tile in the middle panel to load the example as shown in Figure 4‑35.
Figure 4‑35: List of scenarios for the example of Configuring Static Route
The following network diagram illustrates what the NetSim UI displays when you open the example configuration file for Configuring Static Routing in NetSim, as shown in Figure 4‑36.
Without Static Route#
Figure 4‑36: Network set up for studying the Configuring Static Route
Network Settings
- Environment Grid length: 500m * 500m.
- Create a scenario as shown in the above screenshot.
- Generate CBR traffic between Wired Node 6 and Wired Node 7 and set the Transport Layer protocol to UDP.
- The default routing protocol is OSPF in the Application Layer of the routers.
- Wired link properties are left at their defaults.
- Enable Packet Trace and Plots.
- Run the simulation for 10 seconds.
- Observe in the Animation Window that packets flow from Wired Node 6 > Router 1 > Router 5 > Router 4 > Wired Node 7, as shown in Figure 4‑37 below.
Figure 4‑37: Animation Window showing packet flow from Wired Node 6 > Router 1 > Router 5 > Router 4 > Wired Node 7
With Static Route#
Static routing configuration
- Open Router 1 Properties > Network Layer. Set Static IP Route to Enable, click on Configure Static Route IP, set the properties as per the screenshot below, click Add, and then click OK.
Figure 4‑38: Static IP Routing Dialogue window
This creates a text file for every router in the temp path of NetSim which is in the format below:
Router 1:
route ADD 11.7.0.0 MASK 255.255.0.0 11.1.1.2 METRIC 1 IF 1
route ADD destination_ip MASK subnet_mask gateway_ip METRIC metric_value IF Interface_Id
where
route ADD: command to add the static route.
destination_ip: is the Network address for the destination network.
MASK: is the Subnet mask for the destination network.
gateway_ip: is the IP address of the next-hop router/node.
METRIC: is the value used to choose between two routes.
IF: is the Interface to which the gateway_ip is connected. The default value is 1.
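As an illustration of this format (a sketch only; NetSim writes these files itself), the Router 1 entry shown above can be composed programmatically, and entries for the other routers follow the same pattern using the values in Table 4‑15 below:

```python
def route_add(destination_ip, subnet_mask, gateway_ip, metric=1, interface_id=1):
    """Compose a NetSim static-route line in the documented format."""
    return (f"route ADD {destination_ip} MASK {subnet_mask} "
            f"{gateway_ip} METRIC {metric} IF {interface_id}")

# Router 1's entry from the example above
print(route_add("11.7.0.0", "255.255.0.0", "11.1.1.2", metric=1, interface_id=1))
# -> route ADD 11.7.0.0 MASK 255.255.0.0 11.1.1.2 METRIC 1 IF 1
```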
- Similarly, configure static routes for all the routers as given in Table 4‑15 below.
Devices | Network Destination | Gateway | Subnet Mask | Metrics | Interface ID |
---|---|---|---|---|---|
Router 1 | 11.7.0.0 | 11.1.1.2 | 255.255.0.0 | 1 | 1 |
Router 2 | 11.7.0.0 | 11.2.1.2 | 255.255.0.0 | 1 | 2 |
Router 3 | 11.7.0.0 | 11.3.1.2 | 255.255.0.0 | 1 | 2 |
Router 4 | 11.7.0.0 | 11.7.1.2 | 255.255.0.0 | 1 | 3 |
Table 4‑15: Static Route configuration for routers
- After configuring the router properties, run the simulation for 10 seconds and check the packet animation window.
- Observe in the Animation Window that packets flow from Wired Node 6 > Router 1 > Router 2 > Router 3 > Router 4 > Wired Node 7, as shown in Figure 4‑39.
Figure 4‑39: Animation Window showing packet flow from Wired Node 6 > Router 1 > Router 2 > Router 3 > Router 4 > Wired Node 7
Disabling Static Routing
- If static routes were configured via the GUI, they can be removed manually prior to the simulation from the Static IP Routing dialogue or from the file input.
- If static routes were configured during run time, the entries can be deleted using the route delete command during run time.
Different OSPF Control Packets #
There are five distinct OSPF packet types.
Type | Description |
---|---|
1 | Hello |
2 | Database Description |
3 | Link State Request |
4 | Link state Update |
5 | Link State Acknowledgement |
Table 4‑16: Different OSPF Control Packets
- The Hello packet
Hello packets are OSPF packet type 1. These packets are sent periodically on all interfaces in order to establish and maintain neighbor relationships. In addition, Hello Packets are multicast on those physical networks having a multicast or broadcast capability, enabling dynamic discovery of neighboring routers. All routers connected to a common network must agree on certain parameters (Network mask, Hello Interval and Router Dead Interval). These parameters are included in Hello packets, so that differences can inhibit the forming of neighbor relationships.
- The Database Description packet
Database Description packets are OSPF packet type 2. These packets are exchanged when an adjacency is being initialized. They describe the contents of the link-state database. Multiple packets may be used to describe the database. For this purpose, a poll-response procedure is used. One of the routers is designated to be the master, the other the slave. The master sends Database Description packets (polls) which are acknowledged by Database Description packets sent by the slave (responses). The responses are linked to the polls via the packets DD sequence numbers.
- The Link State Request packet
Link State Request packets are OSPF packet type 3. After exchanging Database Description packets with a neighboring router, a router may find that parts of its link-state database are out-of-date. The Link State Request packet is used to request the pieces of the neighbor’s database that are more up to date. Multiple Link State Request packets may need to be used. A router that sends a Link State Request packet has in mind the precise instance of the database pieces it is requesting. Each instance is defined by its LS sequence number, LS checksum, and LS age, although these fields are not specified in the Link State Request Packet itself. The router may receive even more recent instances in response.
- The Link State Update packet
Link State Update packets are OSPF packet type 4. These packets implement the flooding of LSAs. Each Link State Update packet carries a collection of LSAs one hop further from their origin. Several LSAs may be included in a single packet. Link State Update packets are multicast on those physical networks that support multicast/broadcast. In order to make the flooding procedure reliable, flooded LSAs are acknowledged in Link State Acknowledgment packets. If retransmission of certain LSAs is necessary, the retransmitted LSAs are always sent directly to the neighbor.
- The Link State Acknowledgment packet
Link State Acknowledgment Packets are OSPF packet type 5. To make the flooding of LSAs reliable, flooded LSAs are explicitly acknowledged. This acknowledgment is accomplished through the sending and receiving of Link State Acknowledgment packets. Multiple LSAs can be acknowledged in a single Link State Acknowledgment packet.
Open NetSim, Select Examples->Internetworks->Different OSPF Control Packets, then click on the tile in the middle panel to load the example as shown in Figure 4‑40.
Figure 4‑40: List of scenarios for the example of different ospf control packets
The following network diagram illustrates what the NetSim UI displays when you open the example configuration file for Different OSPF Control Packets in NetSim, as shown in Figure 4‑41.
Figure 4‑41: Network set up for studying the different ospf control packets
Network Settings
- Set the OSPF routing protocol under the Application Layer properties of the routers.
- Configure a CBR application with default properties and set the application Start Time(s) to 30s.
- Enable Packet Trace and Plots.
- Simulate for 100s.
Results and Discussion
Open the Packet Animation window and click on the play button. Users can observe all the OSPF packets. OSPF neighbors are dynamically discovered by sending Hello packets out of each OSPF-enabled interface on a router. Database Description packets are then exchanged when an adjacency is being initialized; they describe the contents of the topological database. After exchanging Database Description packets with a neighboring router, a router may find that parts of its topological database are out of date. The Link State Request packet is used to request the pieces of the neighbor's database that are more up to date, and sending Link State Request packets is the last step in bringing up an adjacency. A Link State Update packet contains fully detailed LSAs and is typically sent in response to an LSR message. An LSAck is sent to confirm receipt of an LSU message.
Figure 4‑42: different ospf control packets in the animation window
The same can be observed in the Packet Trace by filtering CONTROL_PACKET_TYPE/APP_NAME to the OSPF_HELLO, OSPF_DD, OSPF_LSACK, OSPF_LSUPDATE and OSPF_LSREQ packets, as shown in Figure 4‑43 below.
Figure 4‑43: different ospf control packets in the packet Trace
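If the packet trace is exported to CSV, the OSPF control traffic can also be tallied with a few lines of pandas; the file name below is an assumption and should be replaced with the trace file NetSim actually generates, while the column name is the one referred to above.

```python
import pandas as pd

trace = pd.read_csv("Packet Trace.csv")   # assumed file name; use the generated trace file
ospf_types = ["OSPF_HELLO", "OSPF_DD", "OSPF_LSREQ", "OSPF_LSUPDATE", "OSPF_LSACK"]

counts = (trace[trace["CONTROL_PACKET_TYPE/APP_NAME"].isin(ospf_types)]
          ["CONTROL_PACKET_TYPE/APP_NAME"].value_counts())
print(counts)   # number of each OSPF control packet seen during the simulation
```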
An enterprise network comprising different subnets and running various applications #
Create a simple enterprise network comprising two branches, a headquarters, and a data center. The branches and headquarters are connected to the data center over the public cloud. In NetSim, users can model the network by just adding the network elements through click and drop, and renaming them suitably, as shown in Figure 4‑45 below.
Open NetSim, Select Examples->Internetworks->Enterprise Network, then click on the tile in the middle panel to load the example as shown in Figure 4‑44.
Figure 4‑44: List of scenarios for the example of enterprise networks
The following network diagram illustrates what the NetSim UI displays when you open the example configuration file for Enterprise Network in NetSim, as shown in Figure 4‑45.
Figure 4‑45: Network set up for studying the enterprise network
Network Settings for Enterprise Network I
- The link rate of the outbound link from Branch 1, i.e., Link 28, is changed to 2Mbps.
- Configure one FTP application from node 14 to the file server 39 (File Size (Bytes): 80000, Inter Arrival Time (s): 1), a database application from node 15 to the database server 41, and eight email applications running between nodes 16, 17, 18, 26, 27, 28, 29, 30 and the email server 40.
- Enable Plots and simulate for 100s.
Enterprise Network II
- In this sample, we have added more nodes via the switch and configured 3 FTP applications from systems 43, 45, 46 to the FTP server 39 (File Size (Bytes): 80000, Inter Arrival Time (s): 1), as shown in Figure 4‑46.
Figure 4‑46: Configuring FTP applications from systems 43,45,46 to FTP server 39
- Simulated for 100 seconds.
Enterprise Network III
- In this sample, we have changed the outbound link speed i.e. Link 28 to 4Mbps and simulated for 100 seconds.
Enterprise Network IV
- In this sample, we have changed the outbound link speed i.e., Link 28 to 2Mbps and configured Voice applications from 14, 15, 46, 45 and 43 to Head office 10 as shown in the below screenshot.
Figure 4‑47: Configuring voice applications from 14, 15, 43, 45, 46 to HO 10
- Also changed the Scheduling Type to Priority under the Network Layer properties of Router 33's Interface_WAN, as shown in Figure 4‑48 below.
Figure 4‑48: WAN Interface – Network layer properties window
- Simulated for 100 seconds.
Enterprise Network V
- In this sample, we have changed the start time for the Voice and FTP applications to 40 seconds, the email applications to 30 seconds, and the database application to 40 seconds.
- Enabled Plots and simulated for 100 seconds.
Results and Discussion
Enterprise Network I: Open the metrics window and calculate the average delay for the e-mail applications from the Application Metrics, shown below in Figure 4‑49.
Figure 4‑49: Application metrics table for Enterprise Network I
The average delay experienced by the e-mail applications is 1.90 s
Enterprise Network II: In this Sample, the average delay for email applications has increased to 11.64 s due to the impact of additional load on the network.
Enterprise Network III: In this Sample, the average delay for e-mail applications has dropped down to 0.89 s due to the increased link speed.
Enterprise Network IV: In this Sample, the average delay for the e-mail application has increased to 6.33 s since voice has a higher priority over data, and the routers will first serve the voice packets in its queue and only then route the data packets.
Enterprise Network V: In this sample, users can notice that the email application sees good throughput initially, after which it flattens. On the other hand, the voice application throughput is NIL till 31 seconds, since it has no traffic, and starts picking up from 31 seconds onwards.
Figure 4‑50: EMAIL Throughput Plot for Application 8
Figure 4‑51: VOICE Throughput Plot for Application 14