The webinar introduces the concept of Digital RF Memory Jammers, describes their technology and the respective test and measurement challenges and solutions from Rohde & Schwarz.
The DRFM jammer has become a highly complex key element of the EA suite. It has evolved from a simple repeater with some fading capabilities into a complex electronic attack asset. Some of the most critical tests are verifying proper operation and timing of the deception techniques at the system level, qualifying the individual components, submodules and modules at the RF/IF level, and, last but not least, making sure that clock jitter and power integrity are addressed early in the design stage. For all these requirements, Rohde & Schwarz offers cutting-edge test and measurement solutions.
The USRP N3xx product family supports three different methods of baseband synchronization: external clock and time reference, GPSDO module, and Ethernet-based timing protocol
USRP N3xx Synchronization Options
The USRP N3xx product family supports three different methods of baseband synchronization: external clock and time reference, GPSDO module, and Ethernet-based timing protocol. Using an external clock and time reference source, such as the CDA-2990 accessory, offers a precise and convenient method of baseband synchronization for high channel count systems where devices are located near each other, such as in a rackmount configuration. Using the GPSDO module enables synchronization when the devices are physically separated by large distances, such as in small cell, RF sensor, TDOA, and distributed testbed applications. However, the GPSDO method typically has more skew than the other two methods and requires line of sight to satellites. Therefore, indoor, urban, or hostile environments restrict the use of GPSDO. Ethernet-based synchronization enables precise baseband synchronization over large distances in GPS-denied environments. However, this method consumes one of the SFP+ ports of the USRP N3xx devices and therefore reduces the number of connectors available for IQ streaming. This application note provides instructions for synchronizing multiple USRP N3xx devices using the Ethernet-based method.
Ethernet-Based Synchronization Overview
The USRP N3xx product family supports Ethernet-based synchronization using an open source protocol known as White Rabbit. White Rabbit is a fully deterministic Ethernet-based network protocol for general purpose data transfer and synchronization. This project is supported by a collaboration of academic and industry experts such as CERN and GSI Helmholtz Centre for Heavy Ion Research.
White Rabbit is an extension of the IEEE 1588 Precision Time Protocol (PTP) standard, which distributes time references over Ethernet networks. In addition, White Rabbit uses Synchronous Ethernet (SyncE) to distribute a common clock reference over the network across the Ethernet physical layer to ensure frequency syntonization between all nodes. This combination of SyncE and PTP, together with precise link-delay and phase measurements, provides sub-nanosecond synchronization over distances of up to 10 km. The White Rabbit extension of the IEEE 1588-2008 standard is in the final stages of becoming generalized as the IEEE 1588 High Accuracy profile.
The USRP N3xx product family implements the White Rabbit protocol using a combination of the FPGA and dedicated clocking resources. The USRP N3xx operates as a slave node, so a White Rabbit master node is required in the network. Seven Solutions provides White Rabbit hardware that works with the USRP N3xx devices to create synchronous clock and time references that are precisely aligned across all devices in the network. See the “Required Accessories” section for details on the required external hardware. The USRP N3xx devices do not support IQ sample streaming over this protocol. Therefore, only one of the SFP+ ports is available for streaming when using White Rabbit synchronization.
For more information on the White Rabbit project, visit the links below:
White Rabbit synchronization utilizes specific optical SFP transceivers and single mode fiber optic cables to achieve precise time alignment, as documented on the project website. The USRP N3xx was tested to work as a White Rabbit slave using the AXGE-1254-0531 SFP transceiver marked in blue, the AXGE-3454-0531 SFP transceiver marked in purple, and a G652 type single mode fiber optic cable.
Seven Solutions is a provider of White Rabbit equipment, including the WR-LEN and the White Rabbit Switch (WRS). The USRP N3xx was tested to work with both the WR-LEN and the WRS products. All accessories required for White Rabbit operation can be purchased directly from the Seven Solutions website. The AXGE SFP transceivers and fiber optic cables are only listed on the website as part of the “KIT WR-LEN” product, but they can also be purchased individually by contacting Seven Solutions.
For more information on White Rabbit accessories, visit the links below:
The White Rabbit feature of the USRP N3xx product family is based on standard networking technology, so many system topologies are possible. However, the USRP N3xx device only works as a downstream slave node and must receive its synchronization reference from an upstream master node. This section shows examples of typical configurations used to synchronize a network of multiple USRP N3xx devices.
Figure 1 shows a WRS operating as the master node connected to several USRP N3xx devices. Note that a master SFP port requires the purple SFP transceiver mentioned in the previous section, and a slave SFP port requires the blue SFP transceiver. The USRP N3xx devices use the SFP+ 0 port for White Rabbit and the SFP+ 1 port for IQ streaming. This port configuration requires the White Rabbit “WX” FPGA bitfile.
Download all FPGA images for the version of the USRP Hardware Driver (UHD) installed on the host PC by running the following command in a terminal:
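The literal command is not reproduced here; with a standard UHD installation, the images downloader utility that ships with UHD is typically invoked as follows (run with sudo if UHD is installed system-wide):

```shell
# Downloads the FPGA image set matching the installed UHD version,
# including the N3xx White Rabbit ("WX") bitfiles.
uhd_images_downloader
```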
Using the UHD API, configure the USRP application to use “internal” clock source and “sfp0” time source:
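As a minimal sketch using the Python UHD API (the device address is a placeholder, and running this requires an attached USRP N3xx):

```python
import uhd

# Connect to the device (replace addr with your USRP N3xx's IP address).
usrp = uhd.usrp.MultiUSRP("addr=192.168.10.2")

# White Rabbit disciplines the internal VCXO, so the clock source stays
# "internal" while the time reference arrives over the SFP+ 0 port.
usrp.set_clock_source("internal")
usrp.set_time_source("sfp0")
```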
The White Rabbit IP running on the FPGA disciplines the internal VCXO of the USRP N3xx to the clock reference from the upstream master node in the network. See the USRP N3xx block diagram for reference.
The WRS/WR-LEN device needs to be configured as a master on the ports connected to the USRP N3xx modules. Users can make this configuration with the WR-GUI application provided by Seven Solutions, or with a serial console connection to the WRS/WR-LEN device. See the WRS/WR-LEN manual for detailed instructions. After White Rabbit lock is achieved, the standard USRP N3xx synchronization process completes and the devices are ready for use.
In addition to operating as a master, the WRS and WR-LEN devices can operate as a grandmaster by receiving clock and time references from an external source. This feature is useful for situations where the entire White Rabbit network needs to be disciplined to GPS or other high accuracy synchronization equipment such as a rubidium source. See the WRS/WR-LEN documentation for more information on grandmaster mode.
This section provides an example measurement of the timing alignment between multiple USRP N3xx devices synchronized using White Rabbit, with varying fiber cable lengths. As shown in Figure 3, a White Rabbit Switch in master mode is connected to one USRP N3xx device using a 5 km spool of fiber, and to another USRP N3xx device using 1 m of fiber. The synchronization performance was measured by probing the exported PPS signal, which is in the sample clock domain on both USRP N3xx devices, thereby demonstrating sample clock and timestamp alignment. The time difference between each PPS edge was measured with an oscilloscope at room temperature in a laboratory environment. As shown in Figure 4, the resulting measurement shows about 222 ps of skew between the two USRP N3xx devices, demonstrating the sub-nanosecond synchronization of White Rabbit over long distances.
The frequency accuracy of the internal oscillator of each USRP N3xx slave node is derived from the frequency accuracy of the upstream master node, in a manner similar to disciplining to an external clock reference source connected to the REF IN port. By connecting a high accuracy frequency source such as a rubidium reference to the master White Rabbit device in grandmaster mode, all USRP N3xx devices in the White Rabbit network would inherit this frequency accuracy.
Release 15 compliant 5G New Radio non-standalone system can help customers deliver 5G commercial wireless to market faster
NI (Nasdaq: NATI), the provider of platform-based systems that help engineers and scientists solve the world’s greatest engineering challenges, today announced a real-time 5G New Radio (NR) test UE offering. The NI Test UE offering features a fully 3GPP Release 15 non-standalone (NSA) compliant system capable of emulating the full operation of end-user devices or user equipment (UE).
With the 5G commercial rollout this year, engineers must validate the design and functionality of 5G NR infrastructure equipment before productization and release. Based on the rugged PXI Express platform, the NI Test UE offering helps customers test prototypes in the lab and in the field to evaluate them on service operators’ networks. In addition, customers can perform InterOperability Device Testing (IoDT), which is a critical part of the commercialization process to ensure that network equipment works with UE from any vendor and vice versa. The NI Test UE offering can also be used to perform benchmark testing to evaluate the full capabilities of commercial and precommercial micro-cell, small-cell and macro-cell 5G NR gNodeB equipment.
Spirent has worked with NI to add 5G NR support to its existing portfolio of products. “As 5G was picking up steam, we looked to find a world-class 5G NR platform that would outperform the market today and continue to do so as the 5G market matures,” said Clarke Ryan, senior director of Product Development at Spirent. “As a leader in SDR-based radios since 2011, NI was the natural choice to ensure we have the best radio with the best testing capabilities to stay ahead of the curve for our customers.”
The NI Test UE offering provides a flexible system for evaluating 5G technology. Customers can use the SDR front ends to select the sub-6 GHz frequency of their choice. The system scales up to one 100 MHz bandwidth component carrier and can be configured for up to 4×2 MIMO to achieve a maximum throughput of 2.3 Gb/s. The 5G NR Release 15 software includes complete protocol stack software that can connect with a 5G gNodeB while providing real-time diagnostic information. Customers can log diagnostic information to a disk for post-test analysis and debugging and can view it on the software front panel for a real-time visualization of the link’s performance.
“The industry is on the cusp of 5G commercial deployments and mobile operators need to ensure that their infrastructure is 5G enabled in a virtualized, programmable, open and cost-efficient way,” said Neeraj Patel, Vice President and General Manager, Software and Services, Radisys. “NI is leveraging our first-to-market 5G Software Suite as the engine for its Test UE offering. Our complete 5G source code solution for UE, gNB and 5GCN represents a disruptive end-to-end enabling technology for customers to build 5G NR solutions. By powering such first to market test applications together with NI and Spirent, we are accelerating 5G commercialization that will change how the world connects.”
NI (ni.com) develops high-performance automated test and automated measurement systems to help you solve your engineering challenges now and into the future. Our open, software-defined platform uses modular hardware and an expansive ecosystem to help you turn powerful possibilities into real solutions.
This whitepaper takes you back to basics to look at key factors to be considered when selecting a connector solution
This whitepaper explains the key factors to consider when specifying the best connector solution for the application. Topics include: electrical properties, mechanical and environmental considerations, physical space issues, designing for manufacture and servicing, and standards and certifications. It also includes a checklist to assist in the shortlisting and justification process.
Validation of embedded connectivity in V2X designs and self-driving cars mark an inflection point in automotive wireless designs, opening the door to modular solutions based on off-the-shelf hardware and flexible software
Automotive OEMs and Tier-1 suppliers are now in the thick of the technology world’s two gigantic engineering challenges: connected cars and autonomous vehicles. That inevitably calls for more flexible development, test, validation, and verification programs that can quickly adapt to the changing technologies and standards.
For a start, the vehicle-to-everything (V2X) technology, which embodies the connected car movement, is also the point where the wireless industry most intersects with autonomous vehicles (AVs). Collision detection and avoidance is a classic example of this technology crossover between the AV and connected car technologies.
V2X technology shows how vehicles and infrastructure work in tandem to create a smart motoring network. At this technology crossroads, where V2X converges with AV connectivity, reliability and low latency become even more critical requirements.
It is worth mentioning here that the demand for extreme reliability in stringent environments is already a precondition in automotive designs. When added to the connected car and self-driving vehicle design realms, well-tested connectivity becomes a major stepping stone.
The immensely complex hardware and software in a highly automated vehicle connected to the outside world also opens the door to malicious attacks from hackers and spoofs. And that calls for future-proof design solutions that demonstrate safeguards against hacking and spoofing attacks.
Not surprisingly, the convergence of connected cars and self-driving vehicles significantly expands development, test, validation and verification requirements. And that makes it imperative for engineers to employ highly-integrated development frameworks for multiple system components like RF quality and protocol conformance.
Modular Test Solutions
Take the example of a V2X system that requires certification for radio frequency identification tags and readers in electronic toll collection systems. Here, design engineers must also ensure that this connected car application protects data privacy and prevents unauthorized access.
However, being a new technology, this could entail a higher cost for validation at different development stages. The testing of communication equipment supporting different regional V2X standards could also lead to the purchase of measurement instruments for each standard and design layer.
That is why test and prototype solutions based on modular hardware and software building blocks can prove more efficient and cost-effective: they let engineers explore new technologies, standards and architectural options (see Figure 2). Software defined radio (SDR)-based test solutions, in particular, are flexible and cheaper in the long run.
The following sections will show how modular hardware and flexible software can help create RF calibration and validation solutions for autonomous and connected vehicles. It will explain how these highly customized systems can validate embedded wireless technology in V2X communications designed to save lives on the road.
URLLC Experimental Testbed
The immense amount of compute power involved in autonomous and connected car designs may lead to the expanded use of flexible SDR platforms for tackling the increasing complexity of embedded software and the rising number of usage scenarios, especially when automotive engineers must carefully balance extreme-reliability demands with low-latency requirements.
It is a vital design consideration amid the massive breadth of inputs and outputs for multiple RF streams serving a diverse array of cameras and sensors in autonomous vehicles. Moreover, SDR platforms can efficiently validate connected vehicles’ RF links to each other and to roadside units for information regarding traffic and construction work.
That is why the ultra-reliable low-latency communications (URLLC) mechanism is becoming so critical in both V2X systems and autonomous vehicles. It boosts system capacity and network coverage while targeting 99.999% reliability with a latency of 1 ms.
The URLLC reference design, for instance, enables engineers to create physical layer mechanisms for different driving environments and then compare them via simulation to analyze trade-offs between latency and reliability. So, a real-time experimental testbed like this one can substitute for expensive and cumbersome on-road testing to prove that the vehicle is safe for autonomous driving and V2X communications.
Shanghai University has joined with National Instruments to create a URLLC experimental testbed for advanced V2X services like vehicle platooning. The URLLC reference design and vehicle-to-vehicle communication are built around National Instruments’ SDR-based hardware for rapid prototyping of mobile communication channels.
Vector Signal Transceiver
Another design platform worth mentioning in the context of AV connectivity and connected car technology is the Vector Signal Transceiver (VST). It is a customizable platform that combines an RF and baseband vector signal analyzer and generator with a user-programmable field-programmable gate array (FPGA) and high-speed interfaces for real-time signal processing and control.
That enables comprehensive RF characteristic measurements and features like dynamic obstacle generation in a variety of road conditions. A VST system, for example, can simulate Doppler effect velocity from multiple angles or simulate scenarios such as a pedestrian walking across the street and a vehicle changing lanes.
It is another customized system that combines flexible, off-the-shelf hardware with a software development environment like LabVIEW to create user-defined prototyping and test solutions. That allows design engineers to transform VST into what they need it to be at the firmware level and address the most demanding development, test and validation challenges.
National Instruments introduced the first VST system in 2012 with an FPGA programmable with LabVIEW to accelerate design time and lower validation cost. Fast forward to 2019, the second-generation VST is ready to serve the autonomous and connected car designs where bandwidth and latency are crucial factors.
Automotive Testing’s Inflection Point
Industry observers call 2019 the year of V2X communications, while self-driving cars are still a work in progress. In the connected car realm, engineers are busy testing and validating the dedicated short-range communications-based vehicle-to-vehicle and vehicle-to-infrastructure devices to ensure that the V2X communication will work all the time and in all possible scenarios.
It is clear that both connected cars and self-driving vehicles share similar imperatives: they must be trustworthy and they must be credible. What is also evident by now is that these high-tech vehicles require high-tech prototyping and validation tools.
That marks an inflection point in automotive design and validation where two manifestations of smart mobility are striving to make traffic safer and more efficient. And the industry demands efficient and cost-effective test and verification solutions for these rapidly expanding automotive markets.
If these systems are based on modular hardware and flexible software, they can be efficiently customized for the autonomous and connected car designs. More importantly, these verification arrangements can significantly lower the design cost at different development stages.
University of Maryland engineer wants to equip ambulances with medical robots enhanced by machine learning to help trauma patients
At the moment of traumatic injury, no physician is present. Emergency medical technicians respond first—they stabilize the patient during ambulance transport, while specialized trauma teams prepare to receive the patient at a hospital.
That is, if the patient makes it there.
“The ride to the hospital is the riskiest part for the trauma patient,” says Axel Krieger, assistant professor of mechanical engineering at the University of Maryland, who specializes in medical robotics and computer vision. Krieger says estimates suggest that one-third of patients who die from trauma likely would have survived with earlier access to hospital-level care. He aims to help make that level of care standard on the ambulance ride—a long way from his undergraduate days in Germany, where he studied automotive engineering.
To improve care for trauma patients during the ambulance ride, Krieger wants to equip the ambulance with a medical robot enhanced by machine learning (ML). “One of the biggest dangers during the ambulance ride is undiagnosed, internal hemorrhagic bleeding,” he says. “It’s currently undetectable with methods available on the ambulance ride. You can’t see it.”
But a robot can.
“Imagine you have a patient in the emergency vehicle, and a robot scans the patient and obtains ultrasound images,” says Krieger, who is a member of the Maryland Robotics Center. “This can provide a critical level of life-saving diagnosis and care not yet possible during an emergency ambulance ride.”
The robot scans and visualizes the injury, then compares and analyzes the scans with its ML algorithm—which was trained using data from similar real-life patient images. It focuses on anatomic areas known to be especially vulnerable to hidden injury and bleeding—such as the pelvic area and space between the lungs, spleen, and liver—to determine severity of wounds based on location, depth, and interaction with vital anatomy; compute volume of blood loss; and assess hemorrhagic potential. Analyzing these characteristics en route would help produce an injury profile useful in triaging the patient so he or she can receive appropriate care as soon as possible—perhaps in the ambulance, and most certainly upon arrival at the hospital.
To develop this ML-based intelligent scanning robot, Krieger and several A. James Clark School of Engineering graduate students collaborated with trauma experts at the University of Maryland Medical Center’s R Adams Cowley Shock Trauma Center.
The research is still experimental and not yet approved for clinical use with patients—but Krieger believes it will be soon.
“It’s the translational aspect to patient care that really excites me,” he says. “If we can help more people survive, this is the best use of our work.”
Simulink optimizes system behavior by simulating multiple design options from a single environment.
Learn how to: use simulation to develop a digital controller for a DC-DC power converter; model passive circuit elements, power semiconductors, power sources and loads; simulate continuous and discontinuous conduction modes; simulate power losses and thermal behavior; tune controller gains to meet design requirements; and generate C code for a TI C2000 microcontroller.
At the time, ML was not widely used in materials science. “Now, it’s all the rage,” says Takeuchi, who also holds an appointment with the Maryland Energy Innovation Institute. Its current popularity is due in part to the deep learning revolution of 2012 and related advances in computer chip speed, data storage options, and rapid refinement of the algorithms that drive its predictive analytics.
ML-based discovery in materials science is not just a lab exercise. It can provide production solutions to geopolitical challenges—as in the case of deteriorating trade with China about a decade ago, which prompted a supply-chain crisis for electric vehicle motor development in the U.S. Key materials were no longer available to American producers to make the neodymium rare-earth permanent magnet that helps power the vehicles.
The solution: Takeuchi’s team applied ML to discover and develop new, alternative magnet materials so research for electric vehicle motors could continue.
And they bootstrapped it. In the beginning, Takeuchi and his team didn’t have any curated data to feed their ML algorithm. So they built the database themselves. They taught machines to read troves of scientific papers and parse data in search of patterns and predictions. From those papers, they extracted meaningful chemical details on rare-earth magnet performance, properties, and functions. This became the database they needed to enlist the aid of yet another ML algorithm. This time, the task was to identify alternative candidate materials with the desired traits for fabricating rare-earth permanent magnets.
According to Takeuchi, researchers increasingly search for novel materials with specific attributes. “ML helps us in our searches in a way that is computationally inexpensive and highly efficient, so we can understand composition–structure relationships and functional properties.’’
In Takeuchi’s lab, searches for new materials are done with accelerated synthesis of large numbers of compounds called high-throughput experiments, which produce up to 1,000 materials at a time and generate immense quantities of data. “We were inundated with data,” Takeuchi says. Yet prior to applying ML, they lacked a means of leveraging all that data’s potential.
ML not only makes sense of enormous datasets—it extends discovery by allowing the algorithm to make predictions from “leads” it discovers in the data. The machine automatically discovers hidden relationships between materials and their properties, which is the knowledge Takeuchi and his team are ultimately seeking.
Takeuchi’s lab continues to innovate with ML-based discovery. Their newest development sprang from the question: “In the search to discover new materials with particular attributes, why don’t we let the computer analyze all the attributes and decide how the experiment should run?”
This new model of autonomous active learning is fast, inexpensive, and highly efficient, because the power and predictive ability of ML minimizes the number of experiments required to solve a problem.
“With an autonomous active learning approach, you don’t need to do 1,000 experiments as we did with high-throughput approaches,” Takeuchi says. “We need only to do about one-tenth or one-fifth of all experiments, because we let the algorithm decide where to go next. You see what the machine comes up with—without you. It predicts, and then we test. We think this is the future.”
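The uncertainty-driven loop Takeuchi describes can be illustrated with a minimal Python sketch (this is not his group's code; the experiment function, candidate grid, and ensemble surrogate below are toy stand-ins for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def expensive_experiment(x):
    # Stand-in for a real synthesis/measurement; unknown to the learner.
    return np.sin(3 * x) + 0.1 * rng.normal()

# Candidate "compositions" the algorithm may choose to test.
candidates = np.linspace(0, 2, 200)

# Seed with a few randomly chosen measurements.
tested_x = list(rng.choice(candidates, size=6, replace=False))
tested_y = [expensive_experiment(x) for x in tested_x]

for _ in range(10):
    # Fit a small ensemble of polynomial surrogates on bootstrap resamples.
    preds = []
    for _ in range(8):
        idx = rng.integers(0, len(tested_x), len(tested_x))
        coeffs = np.polyfit(np.array(tested_x)[idx], np.array(tested_y)[idx], 2)
        preds.append(np.polyval(coeffs, candidates))
    # Query where the ensemble disagrees most (highest predictive uncertainty),
    # i.e. let the algorithm decide where the next experiment should run.
    uncertainty = np.std(preds, axis=0)
    x_next = candidates[int(np.argmax(uncertainty))]
    tested_x.append(x_next)
    tested_y.append(expensive_experiment(x_next))

print(f"measured {len(tested_x)} of {len(candidates)} candidates")
```

The point of the sketch is the budget: the loop characterizes the candidate space after measuring only a small fraction of it, mirroring the one-tenth to one-fifth reduction Takeuchi cites.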
A University of Maryland-developed microscopy technique could eliminate the “surgery” aspect of LASIK
Fischell Department of Bioengineering (BIOE) researchers have developed a microscopy technique that could one day be used to improve LASIK and eliminate the “surgery” aspect of the procedure. Their findings were published in March in Physical Review Letters.
In the 20 years since the FDA first approved LASIK surgery, more than 10 million Americans have had the procedure done to correct their vision. When performed on both eyes, the entire procedure takes about 20 minutes and can rid patients of the need to wear glasses or contact lenses.
While LASIK has a very high success rate, virtually every procedure involves an element of guesswork. This is because doctors have no way to precisely measure the refractive properties of the eye. Instead, they rely heavily on approximations that correlate with the patient’s vision acuity—how close to 20/20 he or she can see without the aid of glasses or contacts.
In search of a solution, BIOE Assistant Professor Giuliano Scarcelli and members of his Optics Biotech Laboratory have developed a microscopy technique that could allow doctors to perform LASIK using precise measurements of how the eye focuses light, instead of approximations.
“This could represent a tremendous first for LASIK and other refractive procedures,” Scarcelli said. “Light is focused by the eye’s cornea because of its shape and what is known as its refractive index. But until now, we could only measure its shape. Thus, today’s refractive procedures rely solely on observed changes to the cornea, and they are not always accurate.”
The cornea—the outermost layer of the eye—functions like a window that controls and focuses light that enters the eye. When light strikes the cornea, it is bent—or refracted. The lens then fine-tunes the light’s path to produce a sharp image onto the retina, which converts the light into electrical impulses that are interpreted by the brain as images. Common vision problems, such as nearsightedness or farsightedness, are caused by the eye’s inability to sharply focus an image onto the retina.
To fix this, LASIK surgeons use lasers to alter the shape of the cornea and change its focal point. But, they do this without any ability to precisely measure how much the path of light is bent when it enters the cornea.
To measure the path light takes, one needs to measure a quantity known as the refractive index; it represents the ratio of the velocity of light in a vacuum to its velocity in a particular material.
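As an illustration (not part of the paper), Snell's law ties the refractive index to how much the light bends at an interface; the corneal index used below is a textbook approximation, not a measured value:

```python
import math

def refraction_angle(theta_incident_deg, n1, n2):
    """Refracted-ray angle via Snell's law: n1*sin(t1) = n2*sin(t2)."""
    s = n1 * math.sin(math.radians(theta_incident_deg)) / n2
    if abs(s) > 1.0:
        raise ValueError("total internal reflection")
    return math.degrees(math.asin(s))

# Light entering the cornea from air (n ~ 1.000) at 30 degrees incidence;
# n ~ 1.376 is a commonly quoted textbook value for the human cornea.
print(round(refraction_angle(30.0, 1.000, 1.376), 1))
```

A small error in the assumed index shifts the computed bending, which is why measuring the local index directly, rather than assuming it, matters for refractive procedures.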
By mapping the distribution and variations of the local refractive index within the eye, doctors would know the precise degree of corneal refraction. Equipped with this information, they could better tailor the LASIK procedure such that, rather than improved vision, patients could expect to walk away with perfect vision—or vision that tops 20/20.
Even more, doctors might no longer need to cut into the cornea.
“Non-ablative technologies are already being developed to change the refractive index of the cornea, locally, using a laser,” Scarcelli said. “Providing local refractive index measurements will be critical for their success.”
Knowing this, Scarcelli and his team developed a microscopy technique that can measure the local refractive index using Brillouin spectroscopy—a light-scattering technology that was previously used to sense the mechanical properties of tissue and cells without disrupting or destroying either.
“We experimentally demonstrated that, by using a dual Brillouin scattering technology, we could determine the refractive index directly, while achieving three-dimensional spatial resolution,” Scarcelli said. “This means that we could measure the refractive index of cells and tissue at locations in the body—such as the eyes—that can only be accessed from one side.”
In addition to measuring corneal or lens refraction, the group is working on improving its resolution to analyze mass density behavior in cell biology or even cancer pathogenesis, Scarcelli said.
Along with Scarcelli, BIOE Ph.D. student Antonio Fiore (first author) and Carlo Bevilacqua, a visiting student from the University of Bari Aldo Moro in Bari, Italy, contributed to the paper.