Tag Archives: Computing/Networks

SANS and AWS Marketplace Webinar: Learn to improve your Cloud Threat Intelligence program through cloud-specific data sources

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/webinar/sans-and-aws-marketplace-webinar-learn-to-improve-your-cloud-threat-intelligence-program-through-cloudspecific-data-sources


You’re Invited!

SANS and AWS Marketplace will discuss CTI detection and prevention metrics, finding effective intelligence data feeds and sources, and determining how best to integrate them into security operations functions.

Attendees of this webinar will learn how to:

  • Understand cloud-specific data sources for threat intelligence, such as static indicators and TTPs.
  • Efficiently search for compromised assets based on indicators provided, events generated on workloads and within the cloud infrastructure, or communications with known malicious IP addresses and domains.
  • Place intelligence and automation at the core of security workflows and decision making to create a comprehensive security program.

Presenters:

Dave Shackleford

Dave Shackleford, SANS Analyst, Senior Instructor

Dave Shackleford, a SANS analyst, senior instructor, course author, GIAC technical director and member of the board of directors for the SANS Technology Institute, is the founder and principal consultant with Voodoo Security. He has consulted with hundreds of organizations in the areas of security, regulatory compliance, and network architecture and engineering. A VMware vExpert, Dave has extensive experience designing and configuring secure virtualized infrastructures. He previously worked as chief security officer for Configuresoft and CTO for the Center for Internet Security. Dave currently helps lead the Atlanta chapter of the Cloud Security Alliance.

Nam Le

Nam Le, Specialist Solutions Architect, AWS

Nam Le is a Specialist Solutions Architect at AWS covering AWS Marketplace, Service Catalog, Migration Services, and Control Tower. He helps customers implement security and governance best practices using native AWS Services and Partner products. He is an AWS Certified Solutions Architect, and his skills include security, compliance, cloud computing, enterprise architecture, and software development. Nam has also worked as a consulting services manager, cloud architect, and as a technical marketing manager.

New Electronic Warfare Signal Generation Solutions Reduce Costs

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/whitepaper/new-electronic-warfare-signal-generation-solutions-reduce-costs

Learn about the available technological approaches for electronic warfare signal and environment simulation, and the latest progress in flexible, high-fidelity solutions. Innovations in digital-to-analog converters (DACs) bring direct digital synthesis (DDS) signal generation into EW applications through advances in bandwidth and signal quality. DDS solutions and other innovations in agile frequency and power control allow you to improve your design phase EW engineering accuracy and productivity. Get the EW Signal Generation application note today.

Discover how AWS Marketplace seller solutions can help you scale and bring productivity to your SOC

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/webinar/discover-how-aws-marketplace-seller-solutions-can-help-you-scale-and-bring-productivity-to-your-soc

You’re Invited!

Join this webinar to learn how AWS customers are using automation and integrated threat intelligence to increase efficiency and scale their cloud security operations center (SOC).

In this webinar:

SANS and AWS Marketplace will explore real-world examples and offer practical guidance to help equip you with the needed visibility and efficiencies to scale. You will learn how to limit alert fatigue while enhancing SOC productivity through automating actionable insights and removing repetitive manual tasks. 

Attendees of this webinar will learn how to:

  • Structure a cloud SOC to scale through technology
  • Integrate threat intelligence into security workflows
  • Utilize automated triaging and action playbooks
  • Leverage AWS services and seller solutions in AWS Marketplace to help achieve these goals

Register Now

5G, Robotics, AVs, and the Eternal Problem of Latency

Post Syndicated from Steven Cherry original https://spectrum.ieee.org/podcast/computing/networks/5g-robotics-avs-and-the-eternal-problem-of-latency

Steven Cherry Hi, this is Steven Cherry for Radio Spectrum.

In the winter of 2006 I was in Utah reporting on a high-speed broadband network, fiber-optic all the way to the home. Initial speeds were 100 megabits per second, to rise tenfold within a year.

I remember asking one of the engineers, “That’s a billion bits per second—who needs that?” He told me that some studies had been done in southern California, showing that for an orchestra to rehearse remotely, it would need at least 500 megabits per second to avoid any latency that would throw off the synchronicity of a concert performance. This was fourteen years before the coronavirus would make remote rehearsals a necessity.

You know who else needs ultra-low latency? Autonomous vehicles. Factory robots. Multiplayer games. And that’s today. What about virtual reality, piloting drones, or robotic surgery?

What’s interesting in hindsight about my Utah experience is what I didn’t ask and should have, which was, “So what you really need is low latency, and you’re using high bandwidth as a proxy for that?” We’re so used to adding bandwidth when what we really need is to reduce latency that we don’t even notice we’re doing it. But what if enterprising engineers got to work on latency itself? That’s what today’s episode is all about.

It turns out to be surprisingly hard. In fact, if we want to engineer our networks for low latency, we have to reengineer them entirely, developing new methods for encoding, transmitting, and routing. So says the author of an article in November’s IEEE Spectrum magazine, “Breaking the Latency Barrier.”

Shivendra Panwar is a Professor in the Electrical and Computer Engineering Department at New York University’s Tandon School of Engineering. He is also the Director of the New York State Center for Advanced Technology in Telecommunications and the Faculty Director of its New York City Media Lab. He is also an IEEE Fellow, quote, “For contributions to design and analysis of communication networks.” He joins us by a communications network, specifically Skype.

Steven Cherry Shiv. Welcome to the podcast.

Shivendra Panwar Thank you. Great to be here.

Steven Cherry Shiv, in the interests of disclosure, let me first rather grandly claim to be your colleague, in that I’m an adjunct professor at NYU Tandon, and let me quickly add that I teach journalism and creative writing, not engineering.

You have a striking graph in your article. It suggests that VoIP, FaceTime, Zoom, they all can tolerate up to 150 milliseconds of latency, while for virtual reality it’s about 10 milliseconds and for autonomous vehicles it’s just two. What makes some applications so much more demanding of low latency than others?

Shivendra Panwar So it turns out, and this was actually news to me, that we think the human being can react on the order of 100 or 150 milliseconds.

We hear about fighter pilots in the Air Force who react within 100 ms or the enemy gets ahead of them in a dogfight. But it turns out human beings can actually react at even a lower threshold when they are doing other actions, like trying to touch or feel or balance something. And that can get you down to tens of milliseconds. What has happened is in the 1980s, for example, people were concerned about applications like the ones you mentioned, which required 100, 150 ms, like a phone call or a teleconference. And we gradually figured out how to do that over a packet-switched network like the Internet. But it is only recently that we became aware of these other sets of applications, which require an even lower threshold in terms of delay or latency. And this is not even considering machines. So there are certain mechanical operations which require feedback loops of the order of milliseconds or tens of milliseconds.

Steven Cherry Am I right in thinking that we keep throwing bandwidth at the latency problem? And if so, what’s wrong with that strategy?

Shivendra Panwar So that’s a very interesting question. If you think of bandwidth in terms of a pipe. Okay, so this is going back to George W. Bush. If you don’t remember this famous interview or debate he had and he likened the Internet to be a set of pipes and everyone made fun of him. Actually, he was not far off. You can make the analogy that the Internet is a set of pipes. But coming back to your question, if you view the Internet as a pipe, there are two dimensions to a pipe, it’s the diameter of the pipe, how wide it is, how fat it is, and then there’s the length of the pipe. So if you’re trying to pour … if you think of bits as a liquid and you’re trying to pour something through that pipe, the rate at which you’d be able to get it out at the other end or how fast you get it out of the other end depends on two things—the width of the pipe and the length of the pipe. So if you have a very wide pipe, you’ll drain the liquid really fast. So that’s the bandwidth question. And if you shorten the length of the pipe, then it’ll come out faster because it has less length of pipe to traverse. So both are important. So bandwidth certainly helps in reducing latency if you’re trying to download a file, for example, because the pipe width will essentially make sure you can download the file faster.

But it also matters how long the pipe is. What are the fixed delays? What are the variable delays going through the Internet? So both are important.
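
To put rough numbers on the pipe analogy, here is a minimal sketch in Python of a single-link transfer; the figures are illustrative and not from the interview. The transmission term depends on the pipe’s width, the propagation term on its length.

```python
# A minimal sketch of the "pipe" analogy: the time a file takes to arrive
# depends on both the pipe's width (bandwidth) and its length (propagation delay).
# All numbers are illustrative, not from the interview.

def transfer_time(file_bits, bandwidth_bps, distance_km, speed_km_per_s=200_000.0):
    """Simple one-link model: time to push the bits in, plus time to traverse the path."""
    transmission_delay = file_bits / bandwidth_bps    # set by the pipe's width
    propagation_delay = distance_km / speed_km_per_s  # set by the pipe's length
    return transmission_delay + propagation_delay

# A 1-megabyte file over 100 Mb/s versus 1 Gb/s, across ~4,000 km of fiber:
for bw in (100e6, 1e9):
    t = transfer_time(8e6, bw, 4_000)
    print(f"{bw / 1e6:6.0f} Mb/s -> {t * 1000:.0f} ms")
# More bandwidth shrinks the first term, but the ~20 ms propagation term remains.
```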

Steven Cherry You say one big creator of latency is congestion delays. To use the specific metaphor of the article, you describe pouring water into a bucket that has a hole in it. If the flow is too strong, water rises in the bucket and that’s congestion delay, water droplets—the packets, in effect—waiting to get out of the hole. And if the water overflows the bucket, if I understand the metaphor, those packets are just plain lost. So how do we keep the water flowing at one millisecond of latency or less?

Shivendra Panwar So that’s a great question. So if you’re pouring water into this bucket with a hole and you want to keep—first of all, you want to keep it from overflowing. So that was: Don’t pour too much water, because even with the hole, the bucket will gradually fill up and overflow from the top. But the other and equally important issue is you want to fill the bucket, maybe just the bottom of the bucket. You know, just maybe a little bit over that hole, so that the time it takes for the water that you are pouring to get out is minimized.

And that’s minimizing the queuing delay, minimizing the congestion, and ultimately minimizing the delay through the network. And so if you want it to be less than a millisecond, you want to be very careful pouring water into that bucket, so that it just feeds the capacity of that hole but does not start filling up the bucket.
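
The bucket picture can be sketched the same way. The toy fluid model below (illustrative rates and buffer size, not anything from the interview) shows queuing delay staying near zero while the pour rate stays below the drain rate, then jumping once the bucket starts to fill:

```python
# A minimal fluid sketch of the bucket picture: keep the pour (arrival) rate at
# or below the hole's drain rate and queuing delay stays near zero; exceed it
# and the bucket fills, adding delay, until it overflows and "packets" are lost.
# Rates and bucket size are illustrative.

def queue_delay_after(arrival_bps, drain_bps, bucket_bits, seconds):
    level = 0.0  # bits currently waiting in the bucket
    for _ in range(seconds):
        level = max(level + arrival_bps - drain_bps, 0.0)
        level = min(level, bucket_bits)   # anything above the rim is lost
    return level / drain_bps              # waiting bits / drain rate = queuing delay

drain = 100e6      # the hole drains 100 Mb/s
bucket = 10e6      # 10 Mb of buffer before overflow
for arrival in (90e6, 99e6, 110e6):
    d = queue_delay_after(arrival, drain, bucket, seconds=5)
    print(f"pour {arrival / 1e6:.0f} Mb/s -> queuing delay after 5 s: {d * 1000:.0f} ms")
```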

Steven Cherry Phone calls used to run on a dedicated circuit between the caller and the receiver. Everything runs on TCP now, I guess in hindsight, it’s remarkable that we can even have phone calls and Zoom sessions at all with our voices and video chopped up into packets and sent from hop to hop and reassembled at a destination. At a certain point, the retuning that you’re doing of TCP starts to look more and more like a dedicated circuit, doesn’t it? And how do you balance that against the fundamental point of TCP, which is to keep routers and other midpoints available to packets from other transmissions as well as your own?

Shivendra Panwar So that is the key point that you have mentioned here. And this was, in fact, a hugely controversial point back in the ’80s and ’90s, when the first experiments to switch voice from circuit-switched networks to packet-switched networks were considered. And there were many diehards who said you cannot equal the latency and reliability of a circuit-switched network. And to some extent, actually, that’s still right. The quality on a circuit-switched line, by the time of the 1970s and 1980s, when it had reached its peak of development, was excellent.

And sometimes we struggle to get to that quality today. However, the cost issue overrode it. And the fact that you are able to share the network infrastructure with millions and now billions of other people made the change inevitable. Now, having said that, this seems to be a tradeoff between quality and cost. And to some extent it is. But there is, of course, a ceaseless effort to try and improve the quality without giving up anything on the cost. And that’s where the engineering comes in. And that means monitoring what’s happening to your connection on a continuous basis, so that whenever you sense that congestion is building up, what TCP does in particular is to back off or reduce the rate so that it does not contribute to the congestion. And the vital bits get through in time.

Steven Cherry You make a comparison to shoppers at a grocery store picking which checkout lane has the shorter line or is moving faster. Maybe as a network engineer, you always get it right, but I often pick the wrong lane.

Shivendra Panwar That is true in terms of networking as well, because there are certain things that you cannot predict. You might look at two lines in a router, for example, and one may look a lot shorter than the other. But there is some holdup, right? A packet may need extra processing or some other issue may crop up. And so you may end up spending more time waiting on a line which initially appeared short to you. So there is actually a lot of randomness in networking. In fact, a knowledge of probability theory, queuing theory, and all of this probabilistic math is the basis of engineering networks.

Steven Cherry Let’s talk about cellular for a minute. The move to 5G will apparently help us reduce latency by reducing frame durations, but it also potentially opens us up to more latency because of its use of millimeter waves?

Shivendra Panwar That is indeed a tradeoff. The engineers who work at the physical layer have been working very hard to increase the bandwidth to get us into gigabits per second at this point in 5G and reduce the frame lengths—the time you spend waiting to put your bits onto the channel is reduced. But in this quest for more bandwidth, they moved up the electromagnetic spectrum to millimeter waves, which have a lot more capacity but have poorer propagation characteristics. With millimeter waves, what happens is the signal can no longer go through the wall of a building, for example, or even the human body or a tree. If you can imagine yourself, let’s say you’re in Times Square before COVID and you’re walking with your 5G phone, every passerby, or every truck rolling by, would potentially block the connection between your cell phone and the cell tower. Those interruptions are driven by the physical world. In fact, I joke this is the case of Sir Isaac Newton meeting Maxwell, the electromagnetic guru. Because what happens is those interruptions, since they are essentially uncontrollable, you can get typical interruptions of the order of half a second or a second before you switch to another base station, which is the current technology, and find another way to get your bits through. So those blockages, unfortunately, can easily add a couple of hundred milliseconds of delay because you may not have an alternate way to get your bits through to the cell tower.

Steven Cherry I guess that’s especially important, not so much for phone conversations and Web use or whatever we’re using our phones for, where, as we said before, a certain amount of latency is not a big problem. But 5G is going to be used for the Internet of Things. And there, there will be applications that require very low latency.

Shivendra Panwar Okay, so there are some relatively straightforward solutions. If your application needs relatively low bandwidth, and many of the IoT applications need kilobits per second, which is a very low rate, what you could do is assign those applications to what is called sub-six gigahertz. Those are the frequencies that we currently use. They are more reliable in the sense that they penetrate buildings, they penetrate the human body.

And as long as your base station has decent coverage, you can have more predictable performance. It is only as we move up the frequency spectrum and try to send broadband applications—applications that use a gigabit per second or more—while also wanting the reliability and the low latency that we start running into problems.

Steven Cherry I noticed that, as you alluded to earlier, there are all sorts of applications where we would benefit from very low latency or maybe can’t even tolerate anything but very low latency. So to take just one example, another of our colleagues, a young robotics professor at NYU Tandon, is working on exoskeletons and rehabilitation robots for Parkinson’s patients to help them control hand tremors. And he and his fellow researchers say, and I’m quoting here, “a lag of nearly 10 or 20 milliseconds can affect effective compensation by the machine and in some cases may even jeopardize safety.”

So are there latency issues even within the bus of an exoskeleton or a prosthetic device that they need to get down to single-digit millisecond latency?

Shivendra Panwar That sounds about right in terms of the 10 to 20 milliseconds or perhaps even less. One solution to that, of course, is to make sure that all of the computational power—all of the data that you need to transmit—stays on that human subject (the person who’s using the exoskeleton) and then you do not depend on the networking infrastructure. So that will work. The problem with that is the compute power and communications will, first of all, be heavy, even if we can keep reducing that thanks to Moore’s Law, and they also drain a lot of battery power. Another approach, if we can get the latency and reliability right, is to offload all of that computation to, let’s say, the nearest base station or a Wi-Fi access point. This will reduce the amount of weight that you’re carrying around in your exoskeleton and reduce the amount of battery power that you need to be able to do this for long periods of time.

Steven Cherry Yeah, something I hadn’t appreciated until your article was, you say that ordinary robots as well could be lighter and have greater uptime and might even be cheaper with ultra low latency.

Shivendra Panwar That’s right. Especially if you think of flying robots. Right? You have UAVs. And there, weight is paramount to keep them up in the air.

Steven Cherry As I understand it, Shiv, there’s a final obstacle, or limit at least, to reducing latency. And that’s the speed of light.

Shivendra Panwar That’s correct. So most of us are aware of the magic number, which is like 300 000 km/s. But that’s in a vacuum or through free space. If you use a fiber-optic cable, which is very common these days, that goes down to 200 000 km/s. So you always took that into account, but it was not a big issue.

But now, if you think about it, if you are trying to aim for a millisecond delay, let’s say, that is a distance of, quote unquote, only 300 km that light covers in free space, or even less on a fiber-optic cable—down to 200 kilometers. That means you cannot do some of the things we’ve been talking about sitting in New York, if it happens to be something that we are controlling in Seoul, South Korea, right? The speed of light takes a perceptible amount of time to get there. Similarly, what has been happening is the service providers who want to host all these new applications now have to be physically close to you to meet those delay requirements. Earlier, we didn’t consider them very seriously because there were so many other sources of delay, and delays were of the order of a hundred milliseconds—up to a second, even, if you think further back—so a few extra milliseconds didn’t matter. And so you could have a server farm in Utah dealing with the entire continental U.S., and that would be sufficient. But that is no longer possible.

And so a new field has come up—edge computing, which takes applications closer to the edge in order to support more of these applications. The other reason to consider edge computing is you can keep the traffic off the Internet core if you push it closer to the edge. For both those reasons, computation may be coming closer and closer to you in order to keep the latency down and to reduce costs.
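
For reference, the distance budget Panwar describes works out as simple arithmetic; the short sketch below just restates the speed-of-light figures he quotes:

```python
# The round-number arithmetic behind the "only 300 km" remark: how far a signal
# can physically travel, one way, within a given latency budget.
C_FREE_SPACE_KM_S = 300_000  # roughly the speed of light in vacuum
C_FIBER_KM_S = 200_000       # roughly the speed of light in optical fiber

for budget_ms in (1, 10, 150):
    d_free = C_FREE_SPACE_KM_S * budget_ms / 1000
    d_fiber = C_FIBER_KM_S * budget_ms / 1000
    print(f"{budget_ms:3d} ms budget: {d_free:6.0f} km in free space, {d_fiber:6.0f} km in fiber")
# A 1 ms budget buys at most ~300 km (free space) or ~200 km (fiber) one way,
# before counting any queuing, processing, or radio-access delay.
```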

Steven Cherry Well, Shiv, it seems we have a never-ending ebb and flow between putting more of the computing at the endpoint, and then more of the computing at the center, from the mainframes and dumb terminals of the 1950s, to the networked workstations of the ’80s, to cloud computing today, to putting AI inside of IoT nodes tomorrow, but all through it, we always need the network itself to be faster and more reliable. Thanks for the thankless task of worrying about the network in the middle of it all, and for being my guest today.

Shivendra Panwar Thank you. It’s been a pleasure talking to you, Steve.

Steven Cherry We’ve been speaking with IEEE Fellow Shivendra Panwar about his research into ultra-low-latency networking at NYU’s Tandon School of Engineering.

This interview was recorded October 27, 2020. Our thanks to Mike of Gotham Podcast Studio for our audio engineering; our music is by Chad Crouch.

Radio Spectrum is brought to you by IEEE Spectrum, the member magazine of the Institute of Electrical and Electronics Engineers, a professional organization dedicated to advancing technology for the benefit of humanity.

For Radio Spectrum, I’m Steven Cherry.

Make IoT Devices Certifiably Safe—and Secure

Post Syndicated from Mark Pesce original https://spectrum.ieee.org/computing/networks/make-iot-devices-certifiably-safeand-secure

After unboxing a new gadget, few people stop to consider how things could go horribly wrong when it’s plugged into the wall: A shorted wire could, for example, quickly produce a fire. We trust that our machines will not fail so catastrophically—a trust developed through more than a century of certification processes.

To display the coveted “approved” logo from a certification agency—for example, UL (from a U.S. organization formerly known as Underwriters Laboratories), CE (Conformité Européenne) in Europe, or Australia’s Regulatory Compliance Mark—the maker of the device has to pull a production unit off the manufacturing line and send it to the testing laboratory to be poked and prodded. As someone who’s been through that process, I can attest that it’s slow, detailed, expensive—and entirely necessary. Many retailers won’t sell uncertified devices to the public. And for good reason: They could be dangerous.

Sure, certification carries certain costs for both the manufacturer and consumer, but it prevents much larger expenses. It’s now considered so essential that the biggest question these days isn’t whether an electrical product is certified; it’s whether the certification mark is authentic.

Certification assures us we can plug something in without worry that it will electrocute somebody or burn down the house. That’s necessary but, in today’s thoroughly connected era, insufficient. The consequences of plugging a compromised device into a home network are not as catastrophic as shock or fire, but they are still bad—and they’ve gone largely unappreciated.

We need to change our thinking. We need to become far more circumspect when we plug a new device into our networks, asking ourselves if its maker has given as much thought to cybersecurity as to basic electrical safety.

The answer to that question will almost invariably be no. A recent report detailing a security test of home Wi-Fi routers by Germany’s Fraunhofer Institute FKIE showed every unit tested to have substantial security flaws, even when upgraded to the latest firmware.

Although security researchers plead with the public to keep the software on their connected devices up-to-date, it appears even that sort of digital hypervigilance isn’t enough. Nor should this burden rest on the consumer’s shoulders. After all, manufacturers don’t expect consumers to do periodic maintenance on their blenders and electric toothbrushes to prevent them from catching fire or causing an electric shock.

The number of connected devices within our homes has grown by an order of magnitude over the last decade, enlarging the attack surfaces available to cyber-miscreants. At some point in the not-too-distant future, the risks will outweigh the benefits. Consumers will then lose their appetites for using such devices at all.

How could we prevent this impending security catastrophe? We can copy what worked once before, crafting a certification process for connected devices, one that tests and prods them and certifies only those that can resist—and stay ahead of—the black hats. A manufacturer does that by designing a device that can be easily and quickly updated—so easily that it can perform important updates unattended. Success here will mean that connected devices will cost more to design, and prices will rise for consumers. But security is never cheap. And the costs of poor security are so much higher.

This article appears in the xDATEx print issue as “Certifiably Secure IoT.”

SANS and AWS Marketplace Webinar: How to Protect Your AWS Environment

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/webinar/sans_and_aws_marketplace_webinar_how_to_protect_your_aws_environment

With the expanding scale of modern networks, security teams often face challenges around maintaining control and visibility across multiple virtual private clouds (VPCs) and network segments. Software-defined networks (SDNs) provide centralized management of your cloud fabric, enabling higher granularity of control over north-south and east-west traffic flows between VPCs. This allows for the selective blocking of potentially malicious inbound and outbound traffic while continuing the flow of normal traffic. Leveraging SDN fabrics alongside solutions such as cloud-based firewalls and tools such as VPC Flow Logs can enhance traffic visibility and control while upholding your security posture.

In this webinar, SANS and AWS Marketplace provide guidance on creating and implementing a policy-driven SDN architecture in the cloud. Additionally, they will present real-world use cases of successful implementations that have been deployed in Amazon Web Services (AWS) environments.

Crowdsourcing Package Deliveries Using Taxis

Post Syndicated from Michelle Hampson original https://spectrum.ieee.org/tech-talk/computing/networks/proposed-crowdsourcing-platform-uses-taxis-to-deliver-packages


For urban areas where the demand for package delivery is only increasing, a group of researchers is proposing an intriguing solution: a crowdsourcing platform that uses taxis to deliver packages. In a study published 29 April in IEEE Transactions on Big Data, they show that such an approach could be used to deliver 9,500 packages on time per day in a city the size of New York.

Their platform, called CrowdExpress, allows a customer to set a deadline by which they want a package delivered. Taxis, which are already traveling in diverse trajectories across urban areas with passengers, are then crowdsourced to deliver the packages. Taxi drivers would be responsible for picking up and dropping off the packages, but only before and after dropping off their passengers, which avoids any inconvenience to passengers.

Chao Chen, a researcher at Chongqing University who was involved in the study, says this proposed approach offers several advantages. “For customers, packages can be delivered more economically, but [also] more speedily,” he says. “For taxi drivers, more money can be earned when taking passengers to their destinations, with only a small additional effort.”

Since the average taxi ride tends to be only a few kilometers, Chen’s group acknowledges that more than one taxi may be required to deliver a package. Therefore, with their approach, they envision a network of relay stations, where packages are collected, temporarily stored, and transferred.

In developing CrowdExpress, they first used historical taxi GPS trajectory data to figure out where to place the nodes on the package transport network. Next, they developed an online scheduling algorithm to coordinate package deliveries using real-time requests for taxis. The algorithm calculates the probability that a package will be delivered on time using a taxi that is currently available, or taxis that are likely to be available in the near future. In this way, the algorithm prioritizes human pick-ups and drop-offs while still meeting package delivery deadlines set by customers.
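
The paper’s scheduling algorithm is not reproduced here, but the sketch below illustrates the kind of decision rule described, with hypothetical station names, probabilities, and threshold standing in for the authors’ actual model:

```python
# A hedged sketch of the kind of decision the online scheduler is described as
# making: relay a package toward the next station only if the estimated
# probability of meeting the customer's deadline is high enough. The values,
# names, and threshold are hypothetical placeholders, not the authors' model.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class RelayOption:
    next_station: str
    p_on_time: float  # estimated P(package reaches its destination by the deadline)

def choose_relay(options: List[RelayOption], threshold: float = 0.9) -> Optional[RelayOption]:
    """Pick the relay leg with the best on-time probability, or hold the package
    at its current station if no option clears the threshold."""
    best = max(options, key=lambda o: o.p_on_time, default=None)
    return best if best and best.p_on_time >= threshold else None

options = [RelayOption("Midtown hub", 0.84), RelayOption("Downtown hub", 0.93)]
picked = choose_relay(options)
print("dispatch via", picked.next_station if picked else "hold and wait for a better taxi")
```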

In their study, Chen and his colleagues evaluated the platform using the real-world taxi data generated during a single month by over 19,000 taxis in New York City. The results show that this technique could be used to ferry up to 20,000 packages a day—but with a deadline success rate of 40 percent at that volume. If CrowdExpress were used to transport 9,500 packages a day across the Big Apple, the success rate would reach 94 percent.

“We plan to commercialize CrowdExpress, but there are still many practical issues to be addressed before truly realizing the system,” Chen admits. “The maintenance cost and the potential package loss or damage at the package stations is one example,” Chen says.

“One promising solution to address the issue is to install unmanned and automatic smart boxes at the stations. In this way, packages can be safely stored and drivers are required to enter a one-time password to get them.”

How Network Science Surfaced 81 Potential COVID-19 Therapies

Post Syndicated from Mark Anderson original https://spectrum.ieee.org/the-human-os/computing/networks/network-science-potential-covid19-therapies


Researchers have harnessed the computational tools of network science to generate a list of 81 drugs used for other diseases that show promise in treating COVID-19. Some are already familiar—including the malaria and lupus treatments chloroquine and hydroxychloroquine—while many others are new, with no known clinical trials underway.

Since the concept was first proposed in 2007, network medicine has applied the science of interconnected relationships among large groups to networks of genes, proteins, interactions, and other biomedical factors. Both Harvard and MIT OpenCourseWare today offer classes in network medicine, while cancer research in particular has experienced a proliferation of network medicine studies and experimental treatments.

Albert-László Barabási, distinguished university professor at Northeastern University in Boston, is generally considered the founder of both network medicine and modern network science. In a recent interview via email, Barabási said COVID-19 represents a tremendous opportunity for a still fledgling science.

“In many ways, the COVID offers a great test for us to marshal the set of highly predictive tools that we as a community [have developed] in the past two decades,” Barabási said.

Last month, Barabási and ten co-authors from Northeastern, Harvard and Brigham and Women’s Hospital in Boston published a pre-print paper proposing a network medicine-based framework for repurposing drugs as COVID-19 therapies. The paper has not been submitted for peer-review yet, says Deisy Morselli Gysi, a postdoctoral researcher at Northeastern’s Network Science Institute.

“The paper is not under review anywhere,” she said. “But we are planning of course to submit it once we have [laboratory] results.”

The 81 potential COVID-19 drugs their computational pipeline discovered, that is, are now being investigated in wet-lab studies.

The number-one COVID-19 drug their network-based models predicted was the AIDS-related protease inhibitor ritonavir. The U.S. National Institutes of Health’s ClinicalTrials.gov website lists 108 active or recruiting trials (as of May 6) involving ritonavir, with a number of the current trials being for COVID-19 or related conditions.

However, the second-ranked potential COVID-19 drug their models surfaced was the antibacterial and anti-tuberculosis drug isoniazid. ClinicalTrials.gov, again as of May 6, listed 65 active or recruiting studies for this drug—none of which, however, were for coronavirus. The third- and fourth-ranked drugs (the antibiotic troleandomycin and cilostazol, a drug for strokes and heart conditions) also have no current coronavirus-related clinical trials, according to ClinicalTrials.gov.

Barabási said the group’s study took its lead from a massively collaborative paper from March 27, which identified 26 of the 29 proteins that make up the SARS-CoV-2 coronavirus particle. The study then identified 332 human proteins that bind to those 26 coronavirus proteins.

Barabási, Gysi and co-researchers then mapped those 332 proteins to the larger map of all human proteins and their interactions. This “interactome” (a molecular biology concept first proposed in 1999) tracks all possible interactions between proteins.

Of those 332 proteins that interact with the 26 known and studied coronavirus proteins, then, Barabási’s group found that 208 of them interact with one another. These 208 proteins form an interactive network, or what the group calls a “large connected component” (LCC). And a vast majority of these LCC proteins are expressed in the lung, which would explain why coronavirus manifests so frequently in the respiratory system: Coronavirus is made up of building blocks that each can chemically latch onto a network of interacting proteins, most of which are found in lung tissue.
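
The largest-connected-component step can be illustrated with a few lines of network code. The sketch below uses a toy edge list rather than the real interactome, but the operation is the same: restrict the graph to the virus-binding proteins and take the biggest connected piece.

```python
# A hedged sketch of the network step described above: restrict the human
# interactome to the proteins that bind coronavirus proteins, then find the
# largest connected component (LCC). The edge list is a toy example, not the
# actual interactome data.
import networkx as nx

interactome = nx.Graph()
interactome.add_edges_from([
    ("P1", "P2"), ("P2", "P3"), ("P3", "P4"),  # hypothetical protein IDs
    ("P5", "P6"),                              # a separate, smaller component
    ("P7", "P2"),
    ("P8", "P9"),                              # proteins outside the binding set
])

virus_binding_proteins = {"P1", "P2", "P3", "P4", "P5", "P6", "P7"}

subgraph = interactome.subgraph(virus_binding_proteins)
lcc = max(nx.connected_components(subgraph), key=len)
print(f"LCC contains {len(lcc)} of {subgraph.number_of_nodes()} binding proteins")
```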

However, the lung was not the only site in the body where Barabási and co-authors discovered coronavirus network-based activity. They also discovered several brain regions whose expressed proteins interact in large connected networks with coronavirus proteins. Meaning their model predicts coronavirus could manifest in brain tissue as well for some patients.

That’s important, Gysi said, because when their models made this prediction, no substantial reporting had yet emerged about neurological COVID-19 comorbidities. Today, though, it’s well-known that some patients experience a neurological-based loss of taste and smell, while others experience strokes at higher rates.

Brains and lungs aren’t the only possible hosts for the novel coronavirus. The group’s findings also indicate that coronavirus may manifest in some patients in reproductive organs, in the digestive system (colon, esophagus, pancreas), kidney, skin and the spleen (which could relate to immune system dysfunction seen in some patients).

Of course, the first drug the FDA approved for emergency use specifically for COVID-19 is the antiviral remdesivir. However, Barabási and Gysi’s group did not surface that drug at all in their study.

This is for a good reason, Gysi explained. Remdesivir targets the SARS-CoV-2 virus specifically and not any interactions between the virus and the human body. So remdesivir would not have shown up on the map of their network science-based analysis, she said.

Barabási said his team is also investigating how network science can assist medical teams conducting contact tracing for COVID-19 patients.

“There is no question that the contact tracing algorithms will be network science based,” Barabási said.

How to Implement a Software-Defined Network (SDN) Security Fabric in AWS

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/webinar/how_to_implement_a_software_defined_network_sdn_security_fabric_in_aws


You’re Invited! Join SANS and AWS Marketplace to learn how implementing an SDN can enhance visibility and control across multiple virtual private clouds (VPCs) in your network. With an SDN fabric, you can enable higher granularity of control over lateral traffic between VPCs, blocking malicious traffic while maintaining normal traffic flow.

Rapid Matchmaking for Terahertz Network Transmitters and Receivers

Post Syndicated from Charles Q. Choi original https://spectrum.ieee.org/tech-talk/computing/networks/terahertz-linkdiscovery

Scientists may have solved a fundamental problem that would have put a snag in efforts to build wireless terahertz links for networks beyond 5G, a new study finds.

Terahertz waves lie between optical waves and microwaves on the electromagnetic spectrum. Ranging in frequency from 0.1 to 10 terahertz, they could be key to future 6G wireless networks that will transmit data at terabits (trillions of bits) per second.

But whereas radio waves can transmit data via omnidirectional broadcasts, higher frequency waves diffract less, so communications links involving them employ narrow beams. This makes it more challenging to quickly set up links between transmitters and receivers.

Optical Labs Set Terabit Transmission Records

Post Syndicated from Jeff Hecht original https://spectrum.ieee.org/tech-talk/computing/networks/optical-labs-set-terabit-transmission-records

Press releases about fiber optics, like baseball stories, report many types of records. The two new fiber-optics records reported last month at the Conference on Optical Fiber Communications (OFC 2020) are definitely major league. A team from Japan’s National Institute of Information and Communications Technology (NICT) has sent a staggering 172 terabits per second through a single multicore fiber—more than the combined throughput of all the fibers in the world’s highest-capacity submarine cable. Not to be outdone, Nokia Bell Labs reported a record single-stream data rate of 1.52 terabits per second, close to four times the 400 gigabits per second achieved by the fastest links now used in data centers.

The IEEE-cosponsored OFC 2020 could have used some of that capacity during the 9–12 March meeting in San Diego. Although many exhibitors, speakers, and would-be attendees dropped plans for the show as the COVID-19 virus spread rapidly around the world, the organizers elected to go ahead with the show. In the end, a large fraction of the talks were streamed from homes and offices to the conference, then streamed to remote viewers around the world.

Telecommunications traffic has increased relentlessly over recent decades, made possible by tremendous increases in the data rates that fiber-optic transmitters can push through the standard single-mode fibers that have been used since the 1990s. But fiber throughputs are now approaching the nonlinear Shannon limit on information transfer, so developers are exploring ways to expand the number of parallel optical paths via spatial-division multiplexing.

Spatial-division multiplexing is an optical counterpart of MIMO, which uses multiple input and output antennas for high-capacity microwave transmission. The leading approaches: packing many light-guiding cores into optical fibers or redesigning fiber cores to transmit light along multiple parallel paths through the core that can be isolated at the end of the fiber.

Yet multiplying the number of cores has limits. They must be separated by at least 40 micrometers to prevent noise-inducing crosstalk between them. As a result, no more than five cores can fit into fibers based on the 125-micrometer diameter standard for long-haul and submarine networks. Adding more cores can allow higher data rates, but that leads to fiber diameters up to 300 µm, which are stiff and would require expensive designs for cables meant for submarine applications.

In a late paper, Georg Rademacher of NICT described a new design called close-coupled multi-core fibers. He explains that the key difference in that design is that “the cores are close to each other so that the signals intentionally couple with each other. Ideally, the light that is coupled into one core should spread to all other cores after only a few meters.” The signals resemble those from fibers in which individual cores carry multiple modes, and require MIMO processing to extract the output signal. However, because signals couple between cores over a much shorter distance in the new fibers than in earlier few-mode fibers, the processing required is much simpler.

Earlier demonstrations of close-coupling were limited to narrow wavelength ranges of less than 5 nanometers. In San Diego, Rademacher reported testing an 80-km length of three-core 125-µm fiber with signals from a frequency comb light source. The team transmitted 24.5 gigabaud 16-quadrature amplitude modulated (16-QAM) signals to sample performance on 359 optical channels in the C and L fiber bands spanning a 75-nm bandwidth. Signals were looped repeatedly through the test fiber and an optical amplifier to simulate a total distance of 2040 kilometers.

The total data rate measured over that distance was 172 terabits per second, a record for 125-µm fibers. Fibers with much larger diameter and more cores have transmitted over 700 terabits over 2000 km, he says, but they remain in the laboratory. The world’s largest capacity commercial system, the Pacific Light Cable Network, will require six fibers in order to send 144 terabits when and if it comes into full operation. The close-coupled fibers “are far from the required maturity for subsea use,” says Rademacher. His group also is studying other ways that multicore fibers might improve on today’s single-mode fibers.
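
As a rough sanity check, the raw line rate implied by the quoted parameters can be multiplied out; attributing the gap between that product and the reported rate to coding and pilot overhead is an assumption rather than a figure from the talk:

```python
# Back-of-the-envelope check of the NICT figure from the parameters quoted above.
# The overhead attribution is an assumption, not a detail reported in the paper.
BAUD = 24.5e9        # symbols per second per carrier
BITS_PER_SYMBOL = 4  # 16-QAM
POLARIZATIONS = 2
CORES = 3
CHANNELS = 359       # optical channels across the C and L bands

raw = BAUD * BITS_PER_SYMBOL * POLARIZATIONS * CORES * CHANNELS
print(f"raw line rate ~ {raw / 1e12:.0f} Tb/s")                      # about 211 Tb/s
print(f"reported net 172 Tb/s -> implied overhead ~ {(1 - 172e12 / raw) * 100:.0f}%")
```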

The Nokia record addresses the huge market for high-speed connections in massive Internet data centers. At last year’s OFC, the industry showed commercial 400-gigabit links for data centers, and announced a push for 800 gigabit rates. This year, Nokia reported a laboratory demonstration that came close to doubling the 800-Gig rate.

Successfully achieving single-channel data rates exceeding 100 gigabits per second depends on transmitting signals coherently rather than via the simple on-off keying used for rates up to 10 gigabits per second. Coherent systems convert input digital signals to analog format at the transmitter, then the receiver converts the analog signals back to digital form for distribution. This makes the analog-to-digital converter a crucial choke point for achieving the highest possible data rates.

Speaking via the Internet from Stuttgart, Fred Buchali of Nokia Bell Labs explained how his group had used a new silicon-germanium chip to achieve a record single-carrier transmission rate of 1.52 terabits per second through 80 km of standard single-mode fiber. Their digital-to-analog converter generated 128 gigabaud at information rates of 6.2 bits per symbol for each of two polarizations. It broke a previous record of 1.3 terabits per second that Nokia reported in September 2019.
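
The quoted parameters multiply out close to the headline figure; the remaining gap is presumably framing and pilot overhead, which is an assumption rather than a detail from the presentation:

```python
# Quick cross-check of the single-carrier figure from the numbers quoted above.
# Attributing the small gap to framing/pilot overhead is an assumption.
BAUD = 128e9               # symbols per second
INFO_BITS_PER_SYMBOL = 6.2
POLARIZATIONS = 2

rate = BAUD * INFO_BITS_PER_SYMBOL * POLARIZATIONS
print(f"~{rate / 1e12:.2f} Tb/s from the quoted parameters (reported: 1.52 Tb/s)")
```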

Micram Microelectronic GmbH of Bochum, Germany, designed and made the prototype for Nokia using high-speed bipolar transistors and 55-nanometer CMOS technology. Buchali said that looping the fiber and adding erbium amplifiers allowed them to reach 240 km at a data rate of 1.46 Tbit/s. The apparent goal is to reach 1.6 Tbit/s, four times the current-best 400 gigabits per second, at the typical data center distance of 80 km. 

“If we are able to meet that target with a single carrier as is demonstrated here rather than a number of carriers—say, 4 at 400 Gbps—then it is quite likely that the solution would be both more efficient in spectrum usage and lower cost,” says Theodore Sizer, Smart Optical Fabric & Devices Research Lab leader at Nokia Bell Labs. That could be an important step in letting the fast-growing data center world handle the world’s insatiable demand for data.

5G Network Slicing: Ensuring Successful Implementation and Validation

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/webinar/5g_network_slicing_ensuring_successful_implementation_and_validation

Network slicing is a virtual networking architecture essential for meeting 5G’s diverse requirements for future-proof scalability and flexibility. Its implementation is in the form of an end-to-end feature that spans the core and radio access networks. Adequately defining and implementing the slices, as well as properly validating service isolation, elasticity, and security assurance, are critical for deployment success. This webinar explains how to implement network slicing in 5G networks and provides best practices for validation. 

Overview of learnings

  • Learn to identify the nodes and interfaces involved in network slicing. 
  • Find out about signaling procedures for network slicing implementation. 
  • Understand upcoming test challenges.

Terabits-Per-Second Data Rates Achieved at Short Range

Post Syndicated from Charles Q. Choi original https://spectrum.ieee.org/tech-talk/computing/networks/terabit-second

Using the same kind of techniques that allow DSL to transmit high-speed Internet over regular phone lines, scientists have transmitted signals at 10 terabits per second or more over short distances, significantly faster than other telecommunications technologies, a new study finds.

Digital subscriber line (DSL) modems delivered the first taste of high-speed Internet access to many users. They make use of the fact that existing regular telephone lines are capable of handling a much greater bandwidth than is needed just for voice. DSL systems leverage that extra bandwidth to send multiple signals in parallel across many frequencies.

Using megahertz frequencies, current DSL technologies can achieve downstream transmission rates of up to 100 megabits per second at a range of 500 meters, and more than 1 gigabit per second at shorter distances. (DSL signal quality often decreases over distance because of the limitations of phone lines; telephone companies can improve voice signals with small inductors called loading coils, but these do not work for DSL signals.)
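
The multicarrier idea can be sketched as a sum of per-tone capacities. The toy model below uses the standard 4.3125-kHz DSL tone spacing but an entirely illustrative signal-to-noise profile:

```python
# A hedged sketch of the multicarrier idea behind DSL: total throughput is the
# sum of what each frequency slice can carry, and higher-frequency slices carry
# less as line attenuation grows. Tone count and SNR profile are illustrative;
# only the 4.3125 kHz tone spacing is a standard DSL figure.
import math

def aggregate_rate(n_tones, tone_bw_hz, snr_of_tone):
    """Sum of per-tone Shannon capacities: sum_k B * log2(1 + SNR_k)."""
    return sum(tone_bw_hz * math.log2(1 + snr_of_tone(k)) for k in range(n_tones))

snr = lambda k: 1000 * math.exp(-k / 400)  # toy profile: SNR decays with frequency
rate = aggregate_rate(n_tones=2000, tone_bw_hz=4312.5, snr_of_tone=snr)
print(f"~{rate / 1e6:.0f} Mb/s aggregate over 2000 tones")
```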

New Photonics Engine Promises Low-Loss, Energy-Efficient Data Capacity for Hyperscale Data Centers

Post Syndicated from Lynne Peskoe-Yang original https://spectrum.ieee.org/tech-talk/computing/networks/new-photonics-engine-promises-lowloss-energyefficient-data-capacity-for-hyperscale-data-centers

At the Optical Networking and Communication Conference in San Francisco, which wrapped up last Thursday, a team of researchers from Intel described a possible solution to a computing problem that keeps data server engineers awake at night: how networks can keep up with our growing demand for data.

The amount of data used globally is growing exponentially. Reports from last year suggest that something like 2.5 quintillion bytes of data are produced each day. All that data has to be routed from its origins—in consumer hard drives, mobile phones, IoT devices, and other processors—through multiple servers as it finds its way to other machines.

“The challenge is to get data in and out of the chip,” without losing information or slowing down processing, said Robert Blum, the Director of Marketing and New Business at Intel. Optical systems, like fiber-optic cables, have been in widespread use as an alternative computing medium for decades, but loss still occurs at the inevitable boundaries between materials in a hybrid optoelectronic system.

The team at Intel has developed a photonic engine with the equivalent processing power of sixteen 100-Gb transceivers, or four of the latest 400-Gb generation. The standout feature of the new chip is its co-packaging, a method of close physical integration of the necessary electrical components with faster, lossless optical ones.

The close integration of the optical components allows Intel’s engine to “break the wall” of the maximum density of pluggable port transceivers on a switch ASIC, according to Blum. More ports on a switch—the specialized processor that routes data traffic—mean higher processing power, but only so many connectors can fit together before overheating becomes a threat.

The photonic engine brings the optical elements right up to the switch. Optical fibers require less space to connect and improve air flow throughout the server without adding to its heat waste. “With this [co-packaging] innovation, higher levels of bit rates are possible because you are no longer limited by electrical data transfer,” said Blum. “Once you get to optical computing, distance is free—2 meters, 200 meters, it doesn’t matter.”

Driving huge amounts of high-speed data over the foot-long copper traces, as is necessary in standard server architectures, is also expensive—especially in terms of energy consumption. “With electrical [computation], as speed goes higher, you need more power; with optical, it is literally lossless at any speed,” said lead device integration engineer Saeed Fathololoumi.

“Power is really the currency on which data centers operate,” added Blum. “They are limited by the amount of power you can supply to them, and you want to use as much of that power as possible to compute.”

The co-packaged photonic engine currently exists as a functional demo back at Intel’s lab. The demonstration at the conference used a P4-programmable Barefoot Tofino 2 switch ASIC capable of speeds reaching 12.8 terabits per second, in combination with Intel’s 1.6-Tbps silicon photonics engines. “The optical interface is already the standard industry interface, but in the lab we’re using a switch that can talk to any other switch using optical protocols,” said Blum.

It’s the first step toward an all-optical input-output scheme, which may offer future data centers a way to cope with the rapidly expanding data demands of the Internet-connected public. For the Intel team, that means working with the rest of the computing industry to define the initial deployments of the new engines. “We’ve proven out the main technical building blocks, the technical hurdles,” said Fathololoumi. “The risk is low now to develop this into a product.”

Cloud Services Tool Lets You Pay for Data You Use—Not Data You Store

Post Syndicated from Charles Q. Choi original https://spectrum.ieee.org/tech-talk/computing/networks/pay-cloud-services-data-tool-news

Cloud storage services usually charge clients for how much data they wish to store. But charging users only when they actually use that data may be a more cost-effective approach, a new study finds.

Internet-scale web applications—the kind that run on servers across the globe and may handle millions of users—are increasingly relying on services that store data in the cloud. This helps applications deal with huge amounts of data. Facebook, for example, generates 4 petabytes (4 million gigabytes) of data every day.

Extending Quantum Entanglement Across Town

Post Syndicated from Mark Anderson original https://spectrum.ieee.org/nanoclast/computing/networks/taking-quantum-entanglement-across-town

A team of German researchers has stretched the distance quantum information can travel after being transferred from a stationary quantum memory to an optical telecom pulse. The group’s new experiment transfers the information contained in a single quantum bit from an atomic state to a single photon, then sends it through some 20 kilometers of fiber-optic cable.

This finding begins to extend the distance over which quantum systems (including quantum computers and quantum communications hubs) can be physically separated while still remaining connected. It also serves as a milestone on the road toward a so-called quantum repeater, which would broadly expand the footprint of quantum technologies toward regional, national, or even international connectivity.

“One of the grand goals is a quantum network, which then would link together different quantum computers,” says Harald Weinfurter, professor of physics at Ludwig Maximilian University of Munich, Germany. “And if we can establish entanglement between many such segments, then we can link all the segments together. And then link, in an efficient manner, two atoms over a really long distance.”

The researchers, who reported their findings in a recent issue of the journal Physical Review Letters, have only one-half of a full-fledged communications system from one stationary qubit to another.

Weinfurter noted that, to complete such a quantum communications channel, the team would have to also complete the process in reverse. So, the data in an individual qubit would be transferred to a photon, travel some distance, and then be transferred back to a single atom at the other end of the chain.

“At the end of the day it will work out; we are very positive,” says Weinfurter.

In the experiments reported in the paper, rubidium atoms were captured in a tabletop laser atom trap and cooled down to millionths of a degree above absolute zero. The researchers then picked out an individual atom from the rubidium atom cloud using optical tweezers (a focused laser beam that nudges atoms as if they’re being manipulated by physical tweezers).

They zapped the single atom, pushing it up to an excited state (actually two nearby energy states, separated by the spin state of the electron). The atom, in other words, exists in a quantum state that is both spin-up and spin-down versions of that excited state. When the atom decays, the polarization of the ensuing photon depends on the spin-up or -down nature of the excited state that produced it.

The next step involved trapping the polarized photon and converting it to a fiber optic S-band photon, which can travel through 20 km of fiber on average before being absorbed or attenuated. The researchers found they can preserve on average some 78 percent of the entanglement between the rubidium atom and the fiber optic photon.
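
The role of fiber attenuation can be sketched with the usual exponential-loss assumption; the 20-kilometer attenuation length comes from the article, while the longer distances are simply extrapolations:

```python
# A sketch of why the telecom-band conversion matters: fiber loss is exponential
# in distance, so the attenuation length sets how far a single photon is likely
# to survive. The 20 km attenuation length is taken from the article; treating
# loss as purely exponential is the usual simplifying assumption.
import math

def survival_probability(distance_km, attenuation_length_km=20.0):
    return math.exp(-distance_km / attenuation_length_km)

for d in (20, 50, 100):
    print(f"P(photon survives {d:3d} km) ~ {survival_probability(d):.3f}")
# At 20 km the photon survives with probability ~1/e (about 37 percent);
# pushing to 100 km without quantum repeaters drops that below 1 percent.
```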

The next challenge, says Weinfurter, is to build out the full atom-to-photon-to-atom quantum communication system within their lab. And then, from there, to actually physically separate one apparatus from another by roughly 20 km and try to make it all work.

As it happens, Weinfurter notes, the Max Planck Institute for Quantum Optics in Munich is about 20 km from the team’s lab. “It’s good to have them that close.”

Then, as quantum computer makers today know all too well, the challenge of error correction will rear its head. As with quantum computer error correction, entanglement purification for a quantum communication system like this is not an easy challenge to overcome. But it’s a necessary step if present-day technologies are to be scaled up into a system that can transmit quantum entanglement from one stationary qubit to another at dozens or hundreds of kilometers distance or more.

“In the whole process, we lose the quality of the entangled states,” Weinfurter said. “Then we have to recover this. There are lots of proposals how to do this. But one has to implement it first.”

Gate Drive Measurement Considerations

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/whitepaper/gate-drive-measurement-considerations

One of the primary purposes of a gate driver is to enable power switches to turn on and off faster, improving rise and fall times. Faster switching enables higher efficiency and higher power density by reducing switching losses in the power stage. However, as slew rates increase, so does measurement and characterization uncertainty.

Effective measurement and characterization considerations must account for:

  • Proper gate driver design
    – Accurate timing (propagation delay in regard to skew, PWD, jitter)
    – Controllable gate rise and fall times
    – Robustness against noise sources (input glitches and CMTI)
  • Minimized noise coupling
  • Minimized parasitic inductance

The trend toward wide-bandgap power designs over silicon-based power designs makes measurement and characterization a greater challenge. High slew rates in SiC and GaN devices present designers with hazards such as large overshoots and ringing, and potentially large unwanted voltage transients that can cause spurious switching of the MOSFETs.

How to Improve Security Visibility and Detection-Response Operations in AWS

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/webinar/how-to-improve-security-visibility-and-detectionresponse-operations-in-aws

Security teams often handle a large stream of alerts, creating noise and impairing their ability to determine which incidents to prioritize. By aggregating security information from various sources and automating incident response, organizations can increase visibility into their environment and focus on the most important potential threats. In this webinar, SANS and AWS Marketplace explore how organizations can leverage solutions to create more signal and less noise for actionable responses, enhancing and accelerating security operations.

Register today to be among the first to receive the associated whitepaper written by SANS Analyst and Senior Instructor Dave Shackleford.

Attendees will learn to:

  • Use continuous monitoring to gain insight into events and behaviors that move into and through your cloud environment
  • Integrate security incident and event management (SIEM) solutions to enhance detection and investigation of potential threats
  • Leverage security orchestration, automation, and response (SOAR) technologies to auto-remediate events and reduce noise in your environment

Breaking Down Barriers in FPGA Engineering Speeds up Development

Post Syndicated from Digilent original https://spectrum.ieee.org/computing/networks/breaking-down-barriers-in-fpga-engineering-speeds-up-development

It’s hard to reinvent the wheel—they’re round and they spin. But you can make them more efficient, faster, and easier for anyone to use. That is essentially what Digilent Inc. has done with its new Eclypse Z7 Field-Programmable Gate Array (FPGA) board. The Eclypse Z7 is the first host board of Digilent’s new Eclypse platform, which aims to increase productivity and accelerate FPGA system design.

To accomplish this, Digilent has taken FPGA design and development out of a silo restricted to highly specialized digital design and embedded systems engineers and opened it up to a much broader group of people who know common programming languages such as C and C++. Additional languages, like Python and LabVIEW, are expected to be supported in future updates.

FPGAs have long been a key tool for engineers who need to tailor a circuit to the exact requirements of a particular application. Programming them, however, requires specialized development tools. For Xilinx FPGAs, the typical tool chain is a programming environment known as Vivado, provided by Xilinx, one of the original developers of FPGAs.

“FPGA development environments like Vivado really require a very niche understanding and knowledge,” said Steve Johnson, president of Digilent. “As a result, they are relegated to a pretty small cadre of engineers.”

Johnson added, “Our intent with the Eclypse Z7 is to empower a much larger number of engineers and even scientists so that they can harness the power of these FPGAs and Systems on a Chip (SoCs), which typically would be out of their reach. We want to broaden the customer base and empower a much larger group of people.”

Digilent didn’t target a relatively simple chip, either. The company jumped into the deep end of the FPGA pool and built the Eclypse Z7 around the Xilinx Zynq-7020, an SoC that pairs a dual-core ARM processor with an FPGA fabric. That combination presents even more of a challenge for most engineers.

To overcome this complexity, Johnson explains, Digilent abstracted away the system-level development of input/output (I/O) modules by adding a software layer and FPGA “blocks” that serve as a kind of driver.

“You can almost think of it as when you plug a printer into a computer, you don’t need to know all of the details of how that printer works,” explained Johnson. “We’re essentially providing a low-level driver for each of these I/O modules so that someone can just plug it in.”

With this capability, a user can configure an I/O module they’ve just plugged in and immediately start acquiring data from it, according to Johnson. Ordinarily, that would take weeks of work: poring over data sheets, understanding the device’s registers, and learning to communicate with it at a very low level so that it is properly configured to move data back and forth. With the new Eclypse Z7, all of that trouble has been taken off the table.
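
To make the driver analogy concrete, here is a rough C sketch of what such a plug-and-play I/O abstraction might look like from the user’s side. The function names, parameters, and simulated behavior are invented for illustration; they are not Digilent’s actual API, and real hardware would be accessed through the vendor’s own libraries.

    #include <stdio.h>
    #include <stdint.h>
    #include <stddef.h>
    #include <stdlib.h>

    /* Hypothetical driver layer for a plug-in analog-input module. The
     * "hardware" is simulated here so the sketch compiles and runs on its own. */
    typedef struct {
        uint32_t sample_rate_hz;
        float    range_volts;
    } adc_module_t;

    static adc_module_t *adc_open(int port)
    {
        (void)port;                         /* which connector the module sits on */
        return calloc(1, sizeof(adc_module_t));
    }

    static int adc_configure(adc_module_t *m, uint32_t rate_hz, float range_v)
    {
        m->sample_rate_hz = rate_hz;        /* in real hardware: program the module */
        m->range_volts    = range_v;        /* via the FPGA "driver block"          */
        return 0;
    }

    static int adc_read(adc_module_t *m, int16_t *buf, size_t count)
    {
        (void)m;
        for (size_t i = 0; i < count; i++)  /* stand-in for a DMA transfer */
            buf[i] = (int16_t)(i % 256);
        return 0;
    }

    static void adc_close(adc_module_t *m) { free(m); }

    int main(void)
    {
        int16_t samples[16];
        adc_module_t *m = adc_open(0);
        if (m == NULL)
            return 1;

        adc_configure(m, 1000000u, 5.0f);   /* 1 MS/s, +/-5 V range */
        adc_read(m, samples, 16);
        adc_close(m);

        printf("first sample: %d\n", samples[0]);
        return 0;
    }

The point is simply that configuration and acquisition become a handful of calls, with the register-level and FPGA-fabric details hidden behind the driver layer.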

Beyond the software element of the new platform, there’s an emphasis on high-speed analog and digital I/O. That emphasis stems partly from Digilent’s alignment with its parent company, National Instruments, and its focus on automated measurement. High-speed analog and digital I/O is expected to be a key feature for the applications where FPGAs and SoCs are really powerful: Edge Computing.

In these Edge Computing environments, such as predictive maintenance, vibration- and signal-monitoring applications call for high-speed analog inputs and outputs along with plenty of processing power close to the sensor.

The capabilities of these FPGA and SoC devices in Edge Computing could lead to machine learning or artificial intelligence running on them, ushering in a convergence of two important trends, Artificial Intelligence (AI) and the Internet of Things (IoT), that is coming to be known as the Artificial Intelligence of Things (AIoT), according to Johnson.

Currently, the FPGA and SoC platforms used in these devices can take advantage of 4G networks to enable Edge devices like those envisioned in AIoT scenarios. But this capability will be greatly enhanced when 5G networks are mature. At that time, Johnson envisions you’ll just have a 5G module that you can plug into a USB or miniPCIe port on an Edge device.

“These SoCs—these ARM processors with the FPGAs attached to them—are exactly the right kind of architecture to do this low-power, small form factor, Edge Computing,” said Johnson. “The analog input that we’re focusing on is intended to both sense the real world and then process and deliver that information. So they’re meant exactly for that kind of application.”

This move by Digilent to empower a greater spectrum of engineers and scientists is in line with its overall aim of helping customers create, prototype, and develop small embedded systems, whether they are medical devices or edge computing devices.

Improving Codec Execution With ARM Cortex-M Processors

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/whitepaper/improving-codec-execution-with-arm-cortexm-processors

Digital signal processing (DSP) has traditionally required an expensive, dedicated DSP processor. DSP routines have been implemented on general-purpose microcontrollers using fixed-point math libraries for decades, but those libraries can consume far more processing cycles than a processor capable of executing DSP instructions natively.

In this paper, we will explore how to speed up DSP codecs using the DSP extensions built into Arm Cortex-M processors.

You will learn:

  • The technology trends moving data processing to the edge of the network to enable more compute performance
  • What the DSP extensions on Arm Cortex-M processors are and the benefits they bring, including cost savings and decreased system-level complexity
  • How to convert analog circuits to software using modeling software such as MathWorks MATLAB or Advanced Solutions Nederlands (ASN) filter designer
  • How to utilize the floating-point unit (FPU) with Cortex-M to improve performance
  • How to use the open-source CMSIS-DSP software library to create IIR and FIR filters in addition to calculating a Fast Fourier Transform (FFT)
  • How to implement an IIR filter that utilizes CMSIS-DSP using the Advanced Solutions Nederlands (ASN) filter designer (a minimal code sketch follows this list)
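
As a flavor of the CMSIS-DSP material, below is a minimal sketch of a single-stage biquad (IIR) filter run through the library’s floating-point API. The coefficient values are arbitrary placeholders rather than a designed filter (in practice they would come from a tool such as the ASN filter designer or MATLAB), and the sketch assumes a Cortex-M toolchain with CMSIS-DSP available.

    #include "arm_math.h"   /* CMSIS-DSP */

    #define NUM_STAGES  1
    #define BLOCK_SIZE  32

    /* One biquad stage uses 5 coefficients: {b0, b1, b2, a1, a2}. Note that
     * CMSIS-DSP applies a1 and a2 with a positive sign, so coefficients from
     * most design tools must have their a1/a2 signs flipped. The values below
     * are placeholders, not a designed filter. */
    static float32_t coeffs[5 * NUM_STAGES] = {
        0.2f, 0.4f, 0.2f, 0.5f, -0.1f
    };

    static float32_t state[4 * NUM_STAGES];   /* 4 state variables per DF1 stage */
    static float32_t input[BLOCK_SIZE];
    static float32_t output[BLOCK_SIZE];

    void filter_block(void)
    {
        arm_biquad_casd_df1_inst_f32 iir;

        /* Bind the coefficient and state buffers to the filter instance. */
        arm_biquad_cascade_df1_init_f32(&iir, NUM_STAGES, coeffs, state);

        /* Filter one block of samples. On a Cortex-M core with the DSP
         * extensions and FPU enabled, the library exploits them automatically. */
        arm_biquad_cascade_df1_f32(&iir, input, output, BLOCK_SIZE);
    }

An FIR filter or an FFT follows the same pattern using the library’s arm_fir_* and arm_rfft_fast_* functions, respectively.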