The Real-Time Polling Service (rtPS) class provides QoS assurance to real-time network services generating variable size packets on a periodic basis, while requiring strict data rate and delay levels. In rtPS, the WiMAX BS can use unicast polling so that mobile hosts can request bandwidth. Latency requirements are met when the provided unicast polling opportunities are frequent enough. The rtPS service class is more demanding in terms of request overhead when compared to UGS, but is more efficient for variable size packet flows. The Extended Real-Time Polling Service (ertPS) combines the advantages of UGS and rtPS. This QoS service class enables the accommodation of packet flows whose bandwidth requirements vary with time. The ertPS QoS class parameters include maximum latency, tolerated jitter, and minimum and maximum reserved traffic rate.

It is important to keep in mind that these QoS classes can assure the required QoS levels only over the WiMAX link, not end to end. For example, maximum latency here refers to the period from the time a packet is received by the convergence sublayer (WiMAX MAC) until the packet is handed over to the PHY layer for transmission.
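To make the ertPS parameter set concrete, here is a minimal Python sketch that models a service flow by the four QoS parameters named above and applies a toy admission check. The class name, field names, admission rule, and example values are illustrative assumptions for this sketch, not data structures defined by the IEEE 802.16 standard.

```python
from dataclasses import dataclass

@dataclass
class ErtpsServiceFlow:
    """QoS parameters of an ertPS service flow (illustrative field names)."""
    max_latency_ms: float        # time allowed between MAC ingress and PHY handoff
    tolerated_jitter_ms: float   # maximum delay variation tolerated by the flow
    min_reserved_rate_bps: int   # minimum traffic rate the BS must sustain
    max_sustained_rate_bps: int  # upper bound on the granted traffic rate

def admit(flow: ErtpsServiceFlow, spare_capacity_bps: int) -> bool:
    """Toy admission check: admit the flow only if the BS can still
    guarantee its minimum reserved rate over the WiMAX link."""
    return flow.min_reserved_rate_bps <= spare_capacity_bps

voip_flow = ErtpsServiceFlow(max_latency_ms=20.0, tolerated_jitter_ms=5.0,
                             min_reserved_rate_bps=12_200,
                             max_sustained_rate_bps=64_000)
print(admit(voip_flow, spare_capacity_bps=1_000_000))  # True
```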


Performance Evaluation

José Duato, ... Lionel Ni, in Interconnection Networks, 2003

The Effects of Locality

An uncached message transmission was modeled as the sum of the messaging layer overhead (including the path setup time) and the time to transmit the message. The overhead includes both the actual system software overhead and the network interface overhead. For example, this overhead was measured on an Intel Paragon, giving an approximate value of 2,600 simulator cycles. However, the selected value for the overhead was 100 simulator cycles, corresponding to the approximate measured times reported for Active Message implementations [105]. A cached message transmission was modeled as the sum of the time to transmit the message (the actual data transmission time) and a small overhead to model the time spent moving the message from user space to the network interface and from the network interface back into user space. In addition, there is overhead associated with each call to set up a virtual circuit; this overhead is equivalent to that of an uncached transmission. There is also overhead associated with the execution of the directive to release a virtual circuit.
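The cost model described above can be summarized in a few lines of Python. The 100-cycle software overhead is the value quoted in the text; the 5-cycle user-space/network-interface copy overhead and the function names are assumptions made only for this sketch.

```python
def uncached_latency(transmission_cycles: float, sw_overhead: float = 100.0) -> float:
    """One uncached message: software/interface overhead (including path setup)
    plus the data transmission time."""
    return sw_overhead + transmission_cycles

def cached_latency(transmission_cycles: float, interface_overhead: float = 5.0) -> float:
    """One cached (VCC) message: transmission time plus a small
    user-space <-> network-interface copy overhead (assumed value)."""
    return interface_overhead + transmission_cycles

def average_vcc_latency(transmission_cycles: float, messages_per_circuit: int,
                        setup_overhead: float = 100.0,
                        interface_overhead: float = 5.0) -> float:
    """Average latency when the one-time circuit setup (equivalent to an
    uncached transmission overhead) is amortized over all messages that reuse it.
    The circuit-release directive overhead mentioned in the text is ignored here."""
    total = uncached_latency(transmission_cycles, setup_overhead) \
            + (messages_per_circuit - 1) * cached_latency(transmission_cycles, interface_overhead)
    return total / messages_per_circuit

print(average_vcc_latency(transmission_cycles=30.0, messages_per_circuit=50))
```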

The effect of VCC is shown in Table 9.4. The differences are rather substantial, but there are several caveats with respect to these results. The VCC latencies do not include the amortized software overhead, but rather only network latencies. The reason is the following. The full software overhead (approximately 100 cycles) is only experienced by the first message to a processor. Subsequent messages experience the latencies shown in Table 9.4. Moreover, if virtual circuits are established before they are needed, it is possible to overlap path setup with computation. In this case, the VCC latencies shown in Table 9.4 are the latencies actually experienced by messages. Since these traces were simulated and optimized by manual insertion of directives in the trace, almost complete overlap was possible due to the predictable nature of many references. It is not clear that automated compiler techniques could do nearly as well, and the values should be viewed in that light. The VCC entries in the table are the average experienced latencies without including path setup time. The non-VCC latencies show the effect of experiencing overhead with every transmission. Although EP, FFT, and MG do not exhibit a great degree of locality, the results exhibit the benefit of overlapping path setup with computation. Finally, the overhead of 100 cycles is only representative of what is being reported these days with the Active Message [105] and the Fast Message [264] implementations. Anyway, the latency reduction achieved by using VCC in multicomputers is higher than the reduction achieved by other techniques.

Table 9.4. The effect of virtual circuit caching. Latencies are measured in cycles.

Program          EP      FFT     Kalman  MG      MM
VCC Latency      32.27   30.17   47.00   32.27   27.26
Non-VCC Latency  133.26  137.64  147.99  133.42  174.28


Integrating time-sensitive networking

Marian Ulbricht, Javier Acevedo, in Computing in Communication Networks, 2020

25.1 Introduction

Ethernet is the most important technology for wired and wireless networking. Consequently, it is the first choice when data need to be transferred between computer systems. Adapting heterogeneous ecosystems to a common information technology saves resources and reduces costs. Owing to its robust design, Ethernet allows hot plug and play of devices, tolerating dropped packets and transmission delays. Nevertheless, for some applications this nondeterministic behavior is not acceptable. In industrial scenarios, for example, applications based on bus systems such as CAN, EtherCAT, or Profinet must fulfill time-critical constraints to provide real-time communication between devices.

This chapter describes TSN, an extension of the Ethernet standard that enables the deployment of time-critical applications through real-time data transmission. The main motivation for the development of TSN is to adapt Ethernet technology to the latency and redundancy requirements of industrial applications. TSN is a generic term for a family of IEEE standards that describe time-sensitive extensions of Ethernet. Many device vendors use the term TSN-ready to promote their products, yet they support only a subset of the standards described in this chapter.

A mandatory feature for managing time-aware network devices is a common time base. The methods described in the IEEE 802.1AS standard provide the time synchronization required to select the best clock reference and distribute it through a time-sensitive network. The core functionality of TSN is based on the Time-Aware Shaper (TAS), which is fully described in the IEEE 802.1Qbv standard. The TAS supports time-controlled, cyclic opening and closing of device ports. This makes a network deterministic, because it can be programmed when a device opens or closes the gate of a transmit data queue. For a known network routing implementation, the maximum latency of each packet is then bounded if the TAS configuration of each device is known.

The TAS handles only the forwarding of outgoing packets. Consequently, in theory, a TSN network employing only this mechanism can be flooded by any device connected to the input ports of any TSN switch: the flooding packets simply fill the transmit queues, and the deterministic behavior of the network is no longer guaranteed. To tackle this issue, Per-Stream Filtering and Policing (PSFP), defined in the IEEE 802.1Qci standard, is employed. PSFP implements a gatekeeper mechanism that protects TSN network nodes against packets that arrive outside their assigned time slot.

Time slot reservation generates gaps during the transmission of packets, especially if full-sized Ethernet frames are considered: an additional time slot, named the guard band, is needed to separate them from packets belonging to other queues. Frame preemption, defined in the IEEE 802.1Qbu standard, introduces the possibility to pause the transmission of a large Ethernet frame and resume it when the corresponding time slot opens again in the next cycle.

This chapter provides an overview of the main TSN standards and their relationships. At the end of the chapter, a hands-on experiment shows the time-shaping protocols in action within the ComNetsEmu.
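As a small illustration of the 802.1Qbv idea, the following Python sketch models a cyclic gate control list and answers the question "which queue gates are open at time t?". The entry durations, the gate-to-queue assignment, and the guard-band length are arbitrary example values, not taken from the standard; on Linux, comparable schedules are typically configured through the taprio queueing discipline.

```python
# Each entry opens a subset of the eight traffic-class gates for a fixed duration (ns).
GCL = [
    (300_000, 0b00000001),   # 300 us: only gate 0 (time-critical queue) open
    (50_000,  0b00000000),   # 50 us guard band: all gates closed before the next slot
    (650_000, 0b11111110),   # 650 us: best-effort queues open, critical queue closed
]
CYCLE_NS = sum(duration for duration, _ in GCL)   # 1 ms cycle in this example

def open_gates(t_ns: int, base_time_ns: int = 0) -> int:
    """Return the gate bitmask that is active at absolute time t_ns."""
    offset = (t_ns - base_time_ns) % CYCLE_NS
    for duration, gates in GCL:
        if offset < duration:
            return gates
        offset -= duration
    return 0

print(bin(open_gates(200_000)))   # 0b1        -> only the time-critical queue may transmit
print(bin(open_gates(400_000)))   # 0b11111110 -> best-effort queues may transmit
```

Because every switch on a known path runs such a schedule against the same 802.1AS time base, the worst-case residence time of a critical frame at each hop, and hence its end-to-end latency bound, can be derived from the configured gate lists.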


Survey of approaches for wireless communication networks supported by ground robots☆

Hailong Huang, ... Chao Huang, in Wireless Communication Networks Supported by Autonomous UAVs and Mobile Ground Robots, 2022

2.2.3 Open issues and future research directions

This section presents an analysis of the relationship between the approaches in different categories and discusses some open issues.

2.2.3.1 Discussion

We have discussed various available schemes for the different roles that robots play in WSNs. The relationship between the approaches within each category has been presented. In this subsection, we go one step further into these approaches and see their similarities across the categories.

The fundamental goal of a WSN is to gather the information of interest, no matter whether robots are used or whether extra energy is added. In conventional static WSNs, the routing strategy plays a critical role. When mobility is taken into account, there are two approaches for data collection: (1) ignore the routing strategy and use only single-hop communication between sensor nodes and robots or (2) develop a mobility-aware routing strategy and use multihop communication. For simplicity, consider a single-robot case. The data collection delay in the first approach would be the tour time of the robot, while in the second approach it would be the time the robot spends on traveling between two locations. The saved delay comes with the cost of a large amount of communication overhead to announce the position of the robot. When wireless charging is taken into consideration, there are also two approaches for data collection: (3) the data center remains at the static base station and the robot is only used to charge sensor nodes or (4) the charging task and the collection task are combined at the robot. The differences between these approaches are quite clear; now we discuss some similarities between them:

When wireless charging is considered, all the sensor nodes should be visited sooner or later to make the network survive forever. From this viewpoint, both (3) and (4) need to charge sensor nodes in proximity, which is another single-hop communication between sensor nodes and robots. The difference lies in the content direction: in (1) data packets are sent to robots by sensor nodes, while in (3) and (4) energy is sent to sensor nodes by robots.

When sensor nodes have various energy consumption rates, not all sensor nodes are required to be charged in the same robot tours, i.e., robots only need to charge on-demand nodes. In this scenario, one similarity of (2) and (4) is that they both need to consider the trajectory design and the mobility-aware routing strategy.

More specifically, in many realistic applications, the wireless data transmission time should be considered, especially in cases of large data packets, such as video. Besides, the experimental results show that the wireless charging of a sensor node to full battery capacity usually needs hours [62]. Both the wireless data transmission time and the wireless charging time may influence the decision on the sojourn time a robot spends at a sensor node in (1) and (3). In particular, the larger the packet, the more sojourn time is required in (1), while the less the residual energy at a node, the more sojourn time is required in (3).

From the above analysis, it becomes clear that collection and delivery are two opposite functions in some circumstances.
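As a rough illustration of the sojourn-time decision discussed above, the following sketch computes the minimum time a robot must stay at a node in approach (1), driven by the wireless data transmission time, and in approach (3), driven by the charging time. The formulas, rates, and example values are simplifying assumptions made only for illustration.

```python
def sojourn_time_data_collection(packet_bits: float, data_rate_bps: float) -> float:
    """Approach (1): the robot must stay at the node at least as long as the
    wireless data transmission time of the buffered packets."""
    return packet_bits / data_rate_bps

def sojourn_time_charging(battery_capacity_j: float, residual_energy_j: float,
                          charging_power_w: float) -> float:
    """Approach (3): the robot must stay until the node's energy deficit is refilled."""
    return (battery_capacity_j - residual_energy_j) / charging_power_w

# A 100 MB video transfer over a 250 kb/s link vs. recharging a half-empty
# 10 kJ battery at 1 W of received wireless power.
print(sojourn_time_data_collection(100e6 * 8, 250e3))   # 3200 s: larger packets, longer stay
print(sojourn_time_charging(10_000, 5_000, 1.0))        # 5000 s: charging indeed takes hours [62]
```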

2.2.3.2 Open issues

This subsection highlights some open topics in the area of using robots in WSNs.

3D WSNs. Most of the available approaches consider the case where sensor nodes are deployed on a 2D plane, and ground robots or vehicles are usually used as the mobile platforms. There are also some publications about the use of UAVs to execute the data collection task [5,44]. In these works the UAVs do not change altitude, so the problem is in essence still two-dimensional. References [84,85] consider the problem of using UAVs to monitor ground targets, and the UAVs can be regarded as mobile sensor nodes in 3D space. In some applications, such as critical infrastructure monitoring [86], the static sensor nodes are deployed on the surface of or inside the infrastructure. Another scenario is the underground or underwater data collection and/or charging system [87], where the sensor nodes are also deployed in 3D environments. In these cases, the 2D-based approaches are not sufficient.

Thus, it would be necessary to design schemes suitable for 3D WSNs. The approaches for 3D WSNs would be more complex than the existing ones for 2D WSNs, and those for the applications inside buildings would be more challenging than the underwater scenarios, since the environment inside buildings may be more cluttered, resulting in more constraints on robot movement.

Charging mobile sensors. In this survey, we mostly focus on the scenario with static sensor nodes. In some applications, sensor nodes may need to track targets and gather data from them [88]. In these cases, normal sensor nodes can be mobile. This feature also leads to many other problems, such as the coverage of the area of interest, including barrier coverage, blanket coverage, and sweep coverage [89–92]. When the sensor nodes execute the tracking task, their movements must be carefully considered, since movement relates to the energy efficiency of the system and the network lifetime [93]. Aspects such as the connectivity of the sensor nodes also need to be taken into account when they are mobile. Furthermore, to the best of our knowledge, the currently available energy delivery approaches all focus on the scenario with static sensor nodes and cannot be applied directly to cases with mobile sensor nodes. Although energy harvesting by mobile sensor nodes is a promising direction [94], another research direction is to use robots to charge mobile sensor nodes wirelessly so that the network can operate for a long time; such an approach would be less affected by the environment than energy harvesting.

Sensor-actuator networks with robots. Another interesting direction of future research is to study the use of robots in sensor-actuator networks. In such networks, some nodes are sensors and some are actuators (also called actors), whereas other nodes are endowed with both sensing and actuating capacities. Actuating capabilities are utilized to dispense control signals with the goal of achieving certain control objectives. Moreover, some nodes of such wireless networks may be mobile robots. Many modern engineering applications include the use of such networks to provide efficient and effective monitoring and control of industrial and environmental processes. These networks are able to achieve improved performance, along with a reduction in power consumption and production cost. An important open area of research is the study of coverage control problems for wireless sensor/actuator networks. In particular, one open problem is the termination of a moving unknown environmental region by a sensor-actuator network with some mobile robotic nodes, introduced in [95]. In real-world applications, this moving region may represent an oil spill or an area contaminated with a hazardous chemical or biological agent. In this problem, we assume that part of the nodes in the network are mobile autonomous robots. Furthermore, they are equipped not only with sensors but also with actuators that release a neutralizing chemical to control the shape of the polluted region. In other words, some nodes of the network are capable not just of measuring the moving region in their neighborhoods, but also of terminating parts of this region. Moreover, in some problems, the field can terminate the sensors/actuators as well. In such applications, actuation plays a major role. The goal is to achieve the complete termination of moving hazardous fields in realistic situations. In particular, a challenging open problem is to develop termination algorithms with guaranteed termination of the moving region in finite time under certain assumptions on the unknown environmental field. Moreover, the time-optimal termination problem for various classes of time-varying environmental fields should be studied. In other words, the problem here is to find a decentralized strategy for the wireless sensor/actuator network that achieves termination in the minimum possible time. From a control-theoretic viewpoint, some theoretical results on the control of sensor/actuator networks were obtained in [96], but these results are still far from application in real engineering problems.


New applications for submarine cables

Stephen Lentz, in Undersea Fiber Communication Systems (Second Edition), 2016

8.1 Introduction

In the years since the introduction of analog coaxial systems, a wide variety of alternative applications for submarine telecommunications cables has emerged in response to military, scientific and industrial needs. The earliest instance is the creation of hydrophone listening arrays in the 1950s to monitor Soviet submarines. Subsea cables were used to facilitate scientific research as early as the 1960s. Cabled tsunami and earthquake detection networks were installed off the coast of Japan beginning in 1978. In the 2000s, a number of significant scientific projects were undertaken, including neutrino telescopes and regional scale ocean observatories. Beginning in the 1990s, composite cables were installed to support offshore oil and gas platforms; by the 2000s, communication to these platforms via fiber optic cables had become a practical alternative to satellite and microwave links. Optical fiber is now applied in the offshore oil and gas industry to monitor wellheads, pipelines and other infrastructure. Looking forward, it is proposed that environmental sensors be installed in or alongside the repeaters of conventional telecommunications cables, giving rise to the term “green” systems.

Each of these applications is motivated by a need to remotely observe or measure the ocean or to provide communications to a remote facility at sea. Telecommunications cables represent a readily available, robust, and proven technology. The combination of a communications channel, to provide real-time data transmission, and power delivery, which eliminates the need for batteries or other power sources, offers many advantages over other methods of observation or communication.

Alternative applications can be categorized into tsunami and earthquake warning systems, cabled ocean observing systems, communications systems for offshore oil and gas platforms, monitoring and sensor systems for offshore energy production, “green” systems, and systems for military use. There is some overlap between these categories, and “mixed use” systems incorporating two or more applications have been created.

New applications are created using the basic capabilities of a telecommunications system and combining additional elements. Each application thus shares some elements with telecommunications systems; at a minimum the cable itself is used. Elements such as repeaters, branching units, power feed equipment, and line terminating equipment are commonly employed. New elements are incorporated to achieve the desired purpose, many of which have been adapted from other uses. Examples include underwater matable connectors, subsea frames and platforms, pressure housings, and scientific sensors. For some applications, new system elements have been developed; examples include power converters, subsea nodes, cable termination assemblies, and dynamic riser cables.

The system architectures for alternative applications likewise share features with telecommunications systems, but often incorporate additional capabilities. Fiber routing and optical add-drop multiplexing (OADM) branching units (BUs) are used to connect multiple sites or nodes. Underwater matable connectors provide another means of extending systems. Ethernet switches (Open Systems Interconnection (OSI) Model Layer 2) and routers (OSI Model Layer 3) are incorporated into the submerged plant to allow creation of ring, star, and mesh architectures.

Design trade-offs differ from telecommunications systems. Maximizing bandwidth is rarely a concern; features such as flexibility, geographic extension, or electrical power delivery may take precedence. Reliability objectives must be handled on a case-by-case basis. Some alternative systems are intentionally placed in hazardous locations, or deliberately include elements that are known to have lower reliability, often to trial a new technology. Other alternative applications, such as earthquake and tsunami warning systems, must be built to the same or higher reliability requirements as commercial cable systems.

This chapter will present the history and origins of alternative systems, discuss the features and characteristics of the most prevalent types, review applications and design considerations, and identify special deployment or operational methods. New applications and new designs continue to emerge, making it impossible to anticipate every possibility that may arise. By reviewing successful system design methods, a framework for development of future systems can be established.


Internet of wearable low-power wide-area network devices for health self-monitoring

Raluca Maria Aileni, ... Rajagopal Maheswar, in LPWAN Technologies for IoT and M2M Applications, 2020

14.1 Self-monitoring solutions, strategies, and risks

The popularity of the low-power wide-area network (LPWAN) concept is continuously increasing in the IoT world, mainly due to its low operating and development cost, its low power consumption, and its long transmission range. These key advantages are obtained by trading off low bit transmission rates. Applications where the data volume to be transmitted is not large, or where the data transmission time is not critical, can benefit from LPWAN technology. In addition, big data applications such as real-time health monitoring (e.g., fall detection) can be implemented and deployed over LPWAN by combining it with fog/edge computing [1].

Over the last decade, significant improvements in the wireless sensor network (WSN) field were achieved. New wireless communication protocols such as long range (LoRa) and SigFox were designed, implemented, and continuously improved.

One of the most popular LPWAN protocols, with numerous applications deployed worldwide, is LoRa/LoRa for wide-area networks (LoRaWAN). Developed in France from 2010 onward, it is based on a physical layer with a proprietary modulation scheme owned by Semtech, a chip manufacturer. The frequency bands it uses are below 1 GHz, more specifically 868 MHz in Europe and 915 MHz in the United States. In open wide spaces, the protocol’s range is up to 10 km. Frequency-shift keying is one of the possible modulations to use with LoRa, but the defining LoRa modulation is chirp spread spectrum (CSS) [2]. LoRa uses spread-spectrum modulation with coding gain to improve the receiver’s sensitivity, and it uses the full channel bandwidth to transmit a signal. In this way, the channel becomes robust to noise and insensitive to the frequency offsets caused by the use of low-cost crystals [3].
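Because the data transmission time (time on air) is what LoRa trades for range, it is useful to see how it is computed. The sketch below follows the symbol-count formula published in Semtech's transceiver datasheets; the default parameter values (spreading factor 7, 125 kHz bandwidth, coding rate 4/5, 8 preamble symbols) are assumptions chosen only for illustration.

```python
import math

def lora_time_on_air(payload_bytes: int, sf: int = 7, bw_hz: int = 125_000,
                     cr: int = 1, preamble_symbols: int = 8,
                     explicit_header: bool = True, crc_on: bool = True,
                     low_datarate_optimize: bool = False) -> float:
    """Approximate LoRa time on air in seconds, following the symbol-count
    formula from Semtech datasheets. cr = 1..4 corresponds to coding rates 4/5..4/8."""
    t_sym = (2 ** sf) / bw_hz                       # symbol duration
    ih = 0 if explicit_header else 1
    de = 1 if low_datarate_optimize else 0
    crc = 1 if crc_on else 0
    numerator = 8 * payload_bytes - 4 * sf + 28 + 16 * crc - 20 * ih
    n_payload = 8 + max(math.ceil(numerator / (4 * (sf - 2 * de))) * (cr + 4), 0)
    t_preamble = (preamble_symbols + 4.25) * t_sym
    return t_preamble + n_payload * t_sym

# A 20-byte sensor reading takes roughly 57 ms at SF7/125 kHz, but on the order
# of 1.3 s at SF12, illustrating the rate/range trade-off of spreading factors.
print(f"{lora_time_on_air(20, sf=7) * 1e3:.1f} ms")
print(f"{lora_time_on_air(20, sf=12, low_datarate_optimize=True) * 1e3:.1f} ms")
```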

LoRa defines only the lower, physical layer of the network. The LoRa Alliance is an association created specifically to support LoRaWAN. LoRaWAN was developed to define the upper layers of the network, and it is a media access control (MAC) layer protocol. However, its role is close to that of a routing protocol, acting as a network layer protocol for managing communication between LPWAN gateways and end-node devices. The LoRa Alliance maintains the LoRaWAN specification; version 1.0 was released in June 2015 [4]. One of the essential advantages of LoRaWAN is its scalability. Depending on the message size, the number of LoRa channels, and the number of modulation channels used, LoRaWAN has the potential of connecting millions of devices.

Similar to LoRa, the Sigfox technology was developed in France, and it likewise uses sub-GHz narrow bands: 863–870 MHz in the European Telecommunications Standards Institute (ETSI) and Association of Radio Industries and Businesses (ARIB) regions and 902–928 MHz in the Federal Communications Commission (FCC) region [5].

The Sigfox protocol is described as “ultra-narrowband (UNB)” and consists of three layers: frame, medium access control (MAC), and physical. The main advantage of the UNB approach used by Sigfox is its very low noise levels, which lead to minimized power consumption, high receiver sensitivity, and low-cost antenna design [6]. A one-hop star topology is used, and a mobile operator is needed to carry the generated traffic [7].

The signal can quickly cover large areas and can reach underground objects [8]. Differential binary phase-shift keying modulation is used for the uplink, while Gaussian frequency-shift keying modulation is used for the downlink. At first only uplink transmission was supported; bidirectional communication was developed later.

Another narrowband LPWAN protocol that is becoming popular is the narrowband Internet of things (NB-IoT) protocol. It is designed for indoor use and high connection density. Its main advantages are low cost and low energy consumption. Another feature of NB-IoT is its integration into long-term evolution (LTE) or GSM (under licensed frequency bands). For example, in LTE a narrow band of 200 kHz is used. In terms of modulation, orthogonal frequency-division multiplexing (OFDM) is used for the downlink and single-carrier frequency-division multiple access (SC-FDMA) is used for the uplink [9].

NB-Fi is an open protocol that has a range in urban areas similar to LoRa, and it operates in an unlicensed radio band (also similar to LoRa). The NB-Fi protocol was developed by WAVIoT [10]. The company developed a transceiver for the protocol that defines its physical layer. The main advantages of the transceiver are low cost, very low power consumption, and high availability, since it is manufactured from widespread electronic components. The topology used is the one-hop star.

Security is one of the critical issues in the IoT fields. Both low-range protocols and LPWAN technologies are still vulnerable to cyberattacks. For example, the most massive distributed denial-of-service attack ever recorded was launched through an IoT botnet [11]. In addition, several IoT devices (cardiac devices, baby heart monitors) presented huge vulnerabilities that could allow third-party entities to take control of the devices [12].

In terms of security, LoRa uses the AES-128 algorithm for message encryption. The network and application keys secure the data packets. However, a key issue (the length of the message being the same before and after encryption) makes LoRa vulnerable to jamming, wormhole, and replay attacks.
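The length-preservation issue mentioned above is easy to demonstrate with a generic counter-mode AES-128 encryption, sketched below using Python's cryptography package. This is not the actual LoRaWAN payload-encryption procedure (LoRaWAN derives its counter blocks from device addresses and frame counters); it only shows that the ciphertext is exactly as long as the plaintext, so message sizes remain observable to an attacker.

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(16)      # 128-bit key, the same key size as LoRaWAN's AES-128
nonce = os.urandom(16)    # counter block chosen at random for this demonstration

payload = b"temp=21.7;hum=40"          # a 16-byte application payload
encryptor = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
ciphertext = encryptor.update(payload) + encryptor.finalize()

# Counter-mode encryption is length-preserving: the ciphertext leaks the
# payload size, which is the property the text points to.
print(len(payload), len(ciphertext))   # 16 16
```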

NB-IoT consists of three layers: perception, transmission, and application, inheriting LTE’s authentication and encryption [13]. Each of the three layers of the NB-IoT architecture can be exploited in different ways. To prevent this, data should be encrypted with cryptographic algorithms. Sigfox provides additional security through a unique symmetric authentication key and cryptographic tokens [14].


Precision farming and IoT case studies across the world

Guido Fastellini, ... Eiji Morimoto, in Agricultural Internet of Things and Decision Support for Precision Smart Farming, 2020

7.5.1 Introduction

The entry of precision agriculture tools into our country dates back to 1996, with the idea of gathering information through a yield monitor, analyzing it on a computer, and comparing it with the soil characteristics to deliver a variable rate input application. Reality showed that the process was not so simple and that applying the correct seed density and fertilization doses was not something that could be repeated automatically every time, even when seeding the same crop in the same field. After some years of seeding tests in Argentina in crops such as soybean, corn, wheat, sorghum, and peanut, among others, it became possible to understand the critical importance of determining input performance in relation to the weather, which changes every year. It was then that we began to fully understand that prescriptions should vary considerably from year to year, depending on the fertilizer and the crops to which these inputs were applied.

On the other hand, it is always important to understand how the agricultural system works in each country where a technology is being developed. Basically, Argentina has an extensive crop seeding system in which the majority of production is carried out by people who do not own the fields and who can make use of a field only for very short periods of time, mostly just 1 year.

Moreover, there is a job known as farm contracting (custom hiring). These contractors provide the seeding, spraying, and harvesting services for the farmers or tenants who manage the fields of former farmers. Farm contractors carry out almost 90% of the extensive production activities.

Regardless of the overall degree of adoption of technological tools in Argentina, every precision agriculture proposal made by companies to Argentinean farmers, contractors, or custom-hiring operators was always adopted very readily.

7.5.1.1 Market development

The market for PA equipment continued to grow in Argentina from 1998 to 2017, the last year in which sales were measured. There is a large offering of national and international brands, both in well-established segments and in more innovative ones. There are control tools that help reduce ordinary operation failures; there are action tools that ease the operator's tasks, especially when handling large machines; and there are also high-complexity tools that require data analysis in order to apply the necessary inputs to each productive environment within a field.

To back up this claim, we can look at the accumulated sales up to 2017, both in the segments that have been in the market for several years, such as seed monitors, yield monitors, and variable rate input application, and in the tools that came up some years later, such as automatic guidance, section control, real-time data transmission from a machine to a computer or smartphone, drones, real-time weed control, etc. It is worth highlighting the interest shown in the acquisition of real-time weed sensors due to the resistance/tolerance of weeds and also to the increase in input costs in recent years (Chart 7.5.1).

Chart 7.5.1. Evolution of PA technological tools (accumulated sales) in Argentina during the last 18 years.

Source: Technology Module for Precision Agriculture and Livestock Equipment (INTA EEA Manfredi). September 2018.

7.5.1.2 Growth according to tools

Yield monitors are still a necessity to optimize harvest work. In 2017, a survey reported 14,050 units in the Argentinean market. Moreover, it is estimated that there are 26,937 seed monitors, an extremely important asset in Argentinean agricultural production. The most widely adopted components, together with the yield monitor, are light bars and autoguidance. In 2017, 20,307 units for sprayer machinery were counted; if we also consider automatic guidance on tractors for sowing machines and other implements, that adds 12,680 more units.

Variable rate input application accounts for sales of almost 4,000 units. Its growth has stagnated due to the country's production conditions and the lack of precision-related information. Section controllers also evolved thanks to the incorporation of autopilots and satellite signal correction systems. In 2017, 4,668 units were estimated: 263 on sowing machines and approximately 4,405 on sprayers. This segment is very useful for optimizing the use of inputs because it reduces overlaps in both seeding and spraying. Telemetry equipment grew during the last few years, reaching more than 837 installed units, as did real-time selective weed control systems, with more than 230 units in our country. This enables better herbicide application, as it significantly decreases the quantities applied. Handheld nitrogen sensors allow the operator to carry out manual control if he/she wishes to improve the application of nitrogen fertilizers in grass crops, and there are now more than 100 units in farmers' hands. Drones sold by specialized agricultural companies amount to more than 155 units, but there are thousands of commercial drones that producers use to monitor fields with simple cameras. There are also improvements for drones achieved by the company Cicare (www.cicare.com.ar) that enable spraying more than 100 L of chemical products in robotized applications.

7.5.1.3 Real equipment

However, beyond the total amount of PA equipment in Argentina, calculated from the accumulated sales of the last 20 years, it is interesting to carry out a thorough analysis of the equipment really available at present. Observation indicates that we must distinguish equipment that is effectively being used from products that were sold but are currently inactive, whether because of technological obsolescence or simply because they were acquired but never used by the operators. To estimate more accurately the degree of potential coverage that PA tools can provide to national production at present, a more exhaustive analysis should be carried out, taking into account not only sales but also the actual use of equipment and 5-year payback periods.

7.5.1.4 Key items

The study paid special attention to the three most relevant items of PA equipment: yield monitors, seed monitors, and light bars or autoguidance for sprayers. Amortization parameters were considered in every case, as well as estimated rates of nonuse of instruments by operators.

In general, we observed a decrease of more than 20% (because amortization is calculated over 5 years for each PA component) in the total available tools relative to total accumulated sales. Translated into figures, there are currently 11,240 yield monitors at work, in contrast to the 14,050 units sold since 1998. The really available number of seed monitors would amount to 21,550 units, fewer than the accumulated sales of 26,937 units. As regards light bars, we estimated a really available number of 16,245 units, against 20,307 sold.

It is worth underlining that all of these are estimates, because there is equipment more than 13 years old that is still working and, on the other hand, there are 1-year-old units that are not being used because operators lack the knowledge to use them.

7.5.1.5 Evolution of the area with precision agriculture tools

Considering the information provided by the Argentina Federation of Rural Machines Contractors (AFRMC), we can project these final estimations to the potential hectares in which PA available tools can be applied.

For harvesting machines, for example, it is estimated that, to be adequately amortized, each machine must work approximately 3,000 ha a year. If we transfer this potential to the 11,240 yield monitors that are currently operating, we obtain a total of 33.7 million hectares. This figure represents a monitoring potential of almost 99.9% of the country's cultivated area in 2012/2013. This means that if the presently available PA tools were fully applied, yield maps could be produced for nearly all of the fields in our country. And if producers could use the information derived from these maps to gain more and better knowledge of their lots, within a couple of years they would be able to enhance the efficiency of machine use and input application. According to the AFRMC, a seeder covers 1,500 ha a year on average. If we multiply that value by the 21,550 available monitors, we get a coverage potential of 32.3 million hectares. In this case as well, the available technology would potentially guarantee adequate implantation for the whole national agricultural area. In spraying, estimations are not so simple because a machine can make many applications on one lot. Considering an annual capacity estimated by the AFRMC of 18,000 ha for every self-propelled sprayer, the use of the 16,245 available light bars and autoguidance systems would enable coverage of up to 292 million hectares. Considering an average of four applications per lot, the actual estimated area is 73 million hectares. In conclusion, given the total number of satellite guidance systems in sprayers, the application area could easily be doubled in our country. It is worth mentioning that the calculations of hectares covered with a PA machine were based on the information given by the AFRMC, but some machines may cover fewer hectares than those calculated.
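The coverage projections above are straightforward products of fleet size and the AFRMC per-machine annual capacity. The short sketch below reproduces that arithmetic with the figures quoted in the text.

```python
yield_monitors = 11_240      # units currently at work
seed_monitors  = 21_550
light_bars     = 16_245

harvest_capacity_ha  = 3_000     # ha/year per harvester (AFRMC estimate)
seeding_capacity_ha  = 1_500     # ha/year per seeder
spray_capacity_ha    = 18_000    # ha/year per self-propelled sprayer
applications_per_lot = 4

print(yield_monitors * harvest_capacity_ha / 1e6)                   # ~33.7 Mha monitored
print(seed_monitors * seeding_capacity_ha / 1e6)                    # ~32.3 Mha seeded
print(light_bars * spray_capacity_ha / 1e6)                         # ~292 Mha sprayed
print(light_bars * spray_capacity_ha / applications_per_lot / 1e6)  # ~73 Mha of actual area
```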


Cloud Computing

Caesar Wu, Rajkumar Buyya, in Cloud Data Centers and Cost Modeling, 2015

1.5 Parallel Computing

Parallel computing is the opposite of serial computing: it can perform multiple tasks at the same time. From a hardware perspective, hardware parallelism increases processing speed. In contrast to the “one by one” method of serial computing, parallel computing can carry out multiple computational tasks simultaneously or in an overlapping fashion. The hardware-oriented definition is just the intuitive way of describing parallel computing. The comprehensive view [28] of parallel computing actually covers a broad range of topics, including program algorithms, applications, programming languages, operating systems, and computer architecture, which spans multicore, multithreaded, multiprocessor or multisocket (CPU), and multinode hardware. Each of these components underpins parallel computing, and they should work together harmoniously in order to support streaming and highly efficient parallel computational workloads.

During the 1950s, parallel meant sharing common memory among multiple processors. Today, parallel computing means not only hardware parallelism but also software and application parallelism. It has become the mainstream computing architecture from laptops to the top end of computing. What is new in today’s parallel computing is its extension so that it is ubiquitous and available to everyone.

The main driving force behind parallelism is the continuous improvement of computing performance or speed. Over the last 50 years or so, CPU clock speed has doubled almost every 18 months, in line with the famous Moore’s law. As a result of rising CPU speeds, the power density has also increased dramatically (see Figure 1.7). For example, if CPU clock speed is increased by 20%, the power consumption of the CPU would rise almost five times or 100%.

Figure 1.7. The power density trend of CPUs [29].

If we continued to improve performance by increasing CPU clock speed with the serial computation model of the 1990s, the surface heat of a CPU would eventually reach the sun’s temperature [29]; this is called the power wall barrier. Clearly, continuously increasing CPU clock speed is not a sustainable solution. This physical limitation presents three issues for computer hardware engineering to consider:

How to address transistor density in an integrated circuit, which limits CPU clock speed.

How to work around the speed-of-light barrier, which limits data transmission time, or latency.

How to reduce the manufacturing cost of hardware, which limits complexity.

The most effective way to resolve these issues, or to overcome these physical limitations, is to adopt the parallel computation model. From an overall perspective, the use of parallelism can solve the following computing performance problems:

Reducing CPU surface heat: This is an alternative solution to improve CPU performance or CPU clock speed further by controlling CPU surface temperature and managing heat dissipation.

Escaping the serial computation limit: Parallelism can handle large and complex computing problems that serial computation cannot solve.

Saving computing cost: The parallel approach has made it possible for computing resources to become a cheap commodity product. In contrast to traditional serial computation, with parallelism it is possible to use more resource inputs and complete tasks in shorter times, with potential cost savings.

Saving time: A serial computation model can only perform one task at a time. Parallel computing can perform multiple tasks simultaneously.

Using integrated resources: With parallel computation, the computing resource pool has been extended from a local data center to nonlocal computer resources. The concept underpins distributed computing.

In order to solve the above performance problems, parallel computing is classified into two basic categories: hardware parallelism and software parallelism.

1.5.1 Hardware Parallelism

Based on the hardware architecture, we can divide hardware parallelism into two types: processor parallelism and memory parallelism. Again, the main objective of hardware parallelism is to increase processing speed.

1.5.1.1 Processor parallelism

Processor parallelism means that the computer architecture has multiple nodes, N-way configurations, multiple CPUs or sockets, multiple cores, and multiple threads. Today, multiple processors per computer are pervasive, from laptops to mainframe computers, and this has become the mainstream CPU hardware architecture. We will give a detailed explanation of these terms in Chapter 11.

1.5.1.2 Memory parallelism

Memory parallelism means shared memory, symmetric multiprocessors, distributed memory, hybrid distributed shared memory, multilevel pipelines, etc. Sometimes, it is also called a parallel random access machine (PRAM). “It is an abstract model for parallel computation which assumes that all the processors operate synchronously under a single clock and are able to randomly access a large shared memory. In particular, a processor can execute an arithmetic, logic, or memory access operation within a single clock cycle” [33] (see Figure 1.8). This is what we call using overlapping or pipelining instructions to achieve parallelism.

Figure 1.8. Parallel random access machines.

1.5.2 Software Parallelism

Software parallelism can be further classified into algorithm, programming, data size, and architecture balance parallelism. Algorithm parallelism is the process of implementing an algorithm in software or an application. The traditional algorithm is based on the concept of sequential processing: since task execution is linear, the traditional approach becomes very counterproductive on parallel hardware. In comparison with hardware parallelism, the progress of parallel software development has been very slow; it suffers from all the problems inherent in sequential programming.

1.5.2.1 Algorithm parallelism

Algorithm parallelism means the computer implements “a prescribed set of well-defined rules or processes for the solution of a problem in a finite number of steps” at the same time. This means the adopted algorithms must avoid dependence among operations that forces one step to follow another, which is a serial method. An example of algorithm parallelism is an interactive program that has many sequential algorithms, each of which can be executed independently and simultaneously. Fayez Gebali [30] summarized the details of parallel computing in five layers to illustrate the relationship of algorithm, programming, software, and hardware parallelism (see Figure 1.9).

Figure 1.9. Phases or layers of implementing an application in software or hardware using parallel computing [30].

1.5.2.2 Programming parallelism

Programming parallelism is facilitated by what are called concurrency platforms, that is, tools that help the programmer manage the threads and the timing of task execution on the processors (see Figure 1.9). The practical aim of programming parallelism is to decompose a large and complex problem into a number of units for parallel execution, which is referred to as a threading arrangement. It can also be considered one of the six types of parallel models, about which we give further details in the following section.
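A minimal sketch of this idea in Python, using concurrent.futures as the concurrency platform: a large summation is decomposed into independent units (chunks) that the pool schedules onto worker processes. The chunking scheme and the choice of four workers are arbitrary illustrative assumptions.

```python
from concurrent.futures import ProcessPoolExecutor

def partial_sum(chunk):
    """One work unit produced by decomposing the original problem."""
    return sum(x * x for x in chunk)

def parallel_sum_of_squares(data, workers=4):
    # Decompose the large problem into independent units...
    chunks = [data[i::workers] for i in range(workers)]
    # ...and let the concurrency platform schedule them onto worker processes.
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    print(parallel_sum_of_squares(list(range(1_000_000))))
```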

1.5.2.3 Data parallelism

Data parallelism represents the number of independent data structures and the size of each one, which are indicators of the degree of available parallelism in a computation. A successful parallel computation requires data locality in that program references stay relatively confined to the data available in each processor—otherwise too much time may be consumed, ruining parallel performance, which is expressed in Amdahl’s law (see Figure 1.10).

Figure 1.10. Amdahl’s law of parallel computing [29].

1.5.2.4 Architecture balance parallelism

In order to achieve better parallel performance, the architecture of parallel computing must have enough processors, along with adequate global memory access and interprocessor communication of data and control information, to enable parallel scalability. When the parallel system is scaled up, the memory and communication systems should also be scaled up by the design of the architecture. This is what we call Gustafson-Barsis’ law (see Figure 1.11).

Figure 1.11. Amdahl’s law vs. Gustafson-Barsis’ law.

Both Amdahl’s law and Gustafson-Barsis’ law explain the characteristics of parallelism. Amdahl’s law emphasizes a fixed workload, whereas Gustafson-Barsis’ law focuses on a fixed time while the workload is scaled up. IT trends indicate that workloads have become increasingly large and complex. From a long-term perspective, Gustafson-Barsis’ law is aligned with this historic trend. Nevertheless, Amdahl’s law still holds if you would like to speed up a particular fixed workload.

1.5.3 Different Types of Parallel Models

We can achieve parallelism with six different approaches. These approaches give programmers choices about the kind of parallel model to be implemented in their programming code.

Distributed parallelism: This executes application tasks in the boundary of different physical nodes of a cluster of computers. The Message Passing Interface (MPI) is one of the popular parallel environments for creating applications. It supports operations such as “send” and “receive” for messages in order to distribute and manage tasks of the applications.

Virtualization: This method runs a number of operating systems on one machine. This function is closely associated with hardware architecture or multicore CPUs (see Figure 1.12). For example, a four-core socket or CPU can host four virtual machines (VMs). Each core can dedicate resources to one VM and operate one operating system (OS). A hypervisor sits between the OS and hardware and manages VM and hardware resources including I/O, network and switch ports, memory, and storage.

Figure 1.12. Multicore CPU and multisocket motherboard.

Task-level parallelism: This uses tasks rather than software thread execution. For example, an application may have 10 tasks but only have 3 threads. The parallel mechanism can execute tasks with each task being scheduled by a runtime scheduler. A thread is an independent flow of control that operates within the same address space as other independent flows of control within a process. Traditionally, thread and process characteristics are grouped into a single entity called a process.
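The 10-tasks-on-3-threads situation described above can be sketched directly with a thread pool, where the executor plays the role of the runtime scheduler; the task body and timing here are placeholders.

```python
from concurrent.futures import ThreadPoolExecutor
import threading
import time

def task(task_id):
    time.sleep(0.1)   # stand-in for real work
    return f"task {task_id} ran on {threading.current_thread().name}"

# Ten tasks, but only three threads: the executor's runtime scheduler decides
# which thread picks up each task as workers become free.
with ThreadPoolExecutor(max_workers=3) as pool:
    for result in pool.map(task, range(10)):
        print(result)
```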

Thread-level parallelism: In contrast to the hardware parallelism approach, this achieves parallelism within a program, with each of the parallel parts running on a separate thread. In a multicore environment, each thread can run on a separate core. The Oracle/Sun RISC server uses this approach (see Figure 1.13 and Table 1.1). For example, one machine has four cores but runs two logical threads per core. Table 1.2 shows some details of common Oracle/Sun RISC server hardware.

Figure 1.13. SPARC RISC CPU with four cores.

Table 1.1. Configuration Tasks for Oracle SPARC RISC CPU: Two Logical Threads

Physical Core Number  Thread 0               Thread 1
Core 0                CPU 0 (general usage)  CPU 4 (general usage)
Core 1                CPU 1 (general usage)  CPU 5 (general usage)
Core 2                CPU 2 (writer thread)  CPU 6 (reader thread)
Core 3                CPU 3 (engine thread)  CPU 8 (engine thread)

Table 1.2. Oracle/Sun SPARC Server

Hardware Model          Processor (Socket)  Cores       Threads   L2 Cache  Unit Cost*
T-3 (1.65 GHz)          4                   64          512       6         $95,000
T2000 (1.0 or 1.2 GHz)  4                   6 or 8      48 or 64  3         $27,600
T5220 (1.2 or 1.4 GHz)  1                   4, 6, or 8  64 (max)  4         $35,000

Note: The price is given only as an indication; the real price varies over time.

Instruction-level parallelism: This parallel function is executed at the instruction level of a CPU because most CPUs have several execution units. Very often, the CPU does it automatically, but it can be controlled by the layout of a program’s code.

Data-level parallelism: This form of parallelism relies on the CPU supporting single instruction, multiple data (SIMD) operations, such as those that can be found in various streaming SIMD extensions (SSE).

You are not limited to one type of parallel mechanism. Actually, you can use all six of these parallel models together.

However, based on Amdahl’s law, there is a limit to the number of processors that can usefully be parallelized. Beyond 1,024 processors, the performance of a computer improves very little, and beyond 2,048 processors there is essentially no further performance improvement (see Figure 1.10).

The formula for Amdahl’s law is the following:

Sp = P / (f × P + (1 − f)) = 1 / (f + (1 − f) / P)

where Sp is the speedup and f is the sequential fraction of a process; this calculates the magnitude of the theoretical speed improvement that can be achieved for a given process by having more concurrent (parallel) processors available. In other words, f is the percentage of the code that is sequential. P represents the number of processors, cores, or threads.
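A quick numerical check of the formula makes the saturation effect visible; the sequential fraction f = 0.05 below is chosen only for illustration.

```python
def amdahl_speedup(f: float, p: int) -> float:
    """Theoretical speedup for sequential fraction f on p processors."""
    return 1.0 / (f + (1.0 - f) / p)

# With 5% sequential code, the speedup saturates near 1/f = 20:
for p in (16, 256, 1024, 2048):
    print(p, round(amdahl_speedup(0.05, p), 1))
# 16 9.1, 256 18.6, 1024 19.6, 2048 19.8 -- adding processors beyond ~1,024 buys almost nothing.
```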

To further clarify the terms network communication parallelism and architecture balance parallelism, we have to bring another concept into the discussion, namely, distributed computing.
