Bogus Security Technology: An Anti-5G USB Stick

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/05/bogus_security_.html

The 5GBioShield sells for £339.60, and the description sounds like snake oil:

…its website, which describes it as a USB key that “provides protection for your home and family, thanks to the wearable holographic nano-layer catalyser, which can be worn or placed near to a smartphone or any other electrical, radiation or EMF [electromagnetic field] emitting device”.

“Through a process of quantum oscillation, the 5GBioShield USB key balances and re-harmonises the disturbing frequencies arising from the electric fog induced by devices, such as laptops, cordless phones, wi-fi, tablets, et cetera,” it adds.

Turns out that it’s just a regular USB stick.

Lasers Write Data Into Glass

Post Syndicated from Amy Nordrum original https://spectrum.ieee.org/computing/hardware/lasers-write-data-into-glass

Magnetic tape and hard disk drives hold much of the world’s archival data. Compared with other memory and storage technologies, tape and disk drives cost less and are more reliable. They’re also nonvolatile, meaning they don’t require a constant power supply to preserve data. Cultural institutions, financial firms, government agencies, and film companies have relied on these technologies for decades, and will continue to do so far into the future.

But archivists may soon have another option—using an extremely fast laser to write data into a 2-millimeter-thick piece of glass, roughly the size of a Post-it note, where that information can remain essentially forever.

This experimental form of optical data storage was demonstrated in 2013 by researchers at the University of Southampton in England. Soon after, that group began working with engineers at Microsoft Research in an effort called Project Silica. Last November, Microsoft completed its first proof of concept by writing the 1978 film Superman on a single small piece of glass and retrieving it.

With this method, researchers could theoretically store up to 360 terabytes of data on a disc the size of a DVD. For comparison, Panasonic aims to someday fit 1 TB on conventional optical discs, while Seagate and Western Digital are shooting for 50- to 60-TB hard disk drives by 2026.

International Data Corp. expects the world to produce 175 zettabytes of data by 2025—up from 33 ZB in 2018. Though only a fraction of that data will be stored, today’s methods may no longer suffice. “We believe people’s appetite for storage will force scientists to look into other kinds of materials,” says Waguih Ishak, chief technologist at Corning Research and Development Corp.

Microsoft’s work is part of a broader company initiative to improve cloud storage through optics. “I think they see it as potentially a distinguishing technology from something like [Amazon Web Services] and other cloud providers,” says James Byron, a Ph.D. candidate in computer science at the University of California, Santa Cruz, who studies storage methods.

Microsoft isn’t alone—John Morris, chief technology officer at Seagate, says researchers there are also focused on understanding the potential of optical data storage in glass. “The challenge is to develop systems that can read and write with reasonable throughput,” he says.

Writing data to glass involves focusing a femtosecond laser, which pulses very quickly, on a point within the glass. The glass itself is a sort known as fused silica. It’s the same type of extremely pure glass used for the Hubble Space Telescope’s mirror as well as the windows on the International Space Station.

The laser’s pulse deforms the glass at its focal point, forming a tiny 3D structure called a voxel. Two properties that measure how the voxel interacts with polarized light—retardance and change in the light’s polarization angle—can together represent several bits of data per voxel.

Microsoft can currently write hundreds of layers of voxels into each piece of glass. The glass can be written to once and read back many times. “This is data in glass, not on glass,” says Ant Rowstron, a principal researcher and deputy lab director at Microsoft Research Lab in Cambridge, England.

Reading data from the glass requires an entirely different setup, which is one potential drawback of this method. Researchers shine different kinds of polarized light—in which light waves all oscillate in the same direction, rather than every which way—onto specific voxels. They capture the results with a camera. Then, machine-learning algorithms analyze those images and translate their measurements into data.

Ishak, who is also an adjunct professor of electrical engineering at Stanford University, is optimistic about the approach. “I’m sure that in the matter of a decade, we’ll see a whole new kind of storage that eclipses and dwarfs everything that we have today,” he says. “And I firmly believe that those pure materials like fused silica will definitely play a major role there.”

But many scientific and engineering challenges remain. “The writing process is hard to make reliable and repeatable, and [it’s hard] to minimize the time it takes to create a voxel,” says Rowstron. “The read process has been a challenge in figuring out how to read the data from the glass using the minimum signal possible from the glass.”

The Microsoft group has added error-correcting codes to improve the system’s accuracy and continues to refine its machine-learning algorithms to automate the read-back process. Already, the team has improved writing speeds by several orders of magnitude from when they began, though Rowstron declined to share absolute speeds.

The team is also considering what it means to store data for such a long time. “We are working on thinking what a Rosetta Stone for glass could look like to help people decode it in the future,” Rowstron says.

This article appears in the June 2020 print issue as “Storing Data in Glass.”

Security updates for Friday

Post Syndicated from jake original https://lwn.net/Articles/821794/rss

Security updates have been issued by Debian (libexif and tomcat8), Fedora (python38), openSUSE (libxslt), Oracle (git), Red Hat (bind, freerdp, and git), Scientific Linux (git), SUSE (qemu and tomcat), and Ubuntu (apt, json-c, kernel, linux, linux-raspi2, linux-raspi2-5.3, and openssl).

Go Modules- A guide for monorepos (Part 1)

Post Syndicated from Grab Tech original https://engineering.grab.com/go-module-a-guide-for-monorepos-part-1

Go modules are a new feature in Go for versioning packages and managing dependencies. They have been almost two years in the making and are finally production-ready as of the Go 1.14 release earlier this year. Go recommends using single-module repositories by default, and warns that multi-module repositories require great care.

At Grab, we have a large monorepo and changing from our existing monorepo structure has been an interesting and humbling adventure. We faced serious obstacles to fully adopting Go modules. This series of articles describes Grab’s experience working with Go modules in a multi-module monorepo, the challenges we faced along the way, and the solutions we came up with.

To fully appreciate Grab’s journey in using Go Modules, it’s important to learn about the beginning of our vendoring process.

Native support for vendoring using the vendor folder

With Go 1.5 came the concept of the vendor folder, a new package discovery method, providing native support for vendoring in Go for the first time.

With the vendor folder, projects influenced the lookup path simply by copying packages into a vendor folder nested at the project root. Go uses these packages before traversing the GOPATH root, which allows a monorepo structure to vendor packages within the same repo as if they were 3rd-party libraries. This enabled go build to work consistently without any need for extra scripts or env var modifications.
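As a rough illustration (the directory and package names here are hypothetical, not Grab's actual layout), a monorepo using the vendor folder might look like this:

monorepo/
├── service-a/
│   └── main.go              # imports github.com/some/lib
└── vendor/
    └── github.com/
        └── some/
            └── lib/         # resolved from here before the GOPATH is consulted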

Initial obstacles

There was no official command for managing the vendor folder, and manually copying files into the vendor folder was common.

At Grab, different teams took different approaches. This meant that we had multiple version manifests and lock files for our monorepo’s vendor folder. It worked fine as long as there were no conflicts. At the time, very few 3rd-party libraries used proper tagging and semantic versioning, which made things worse: the lock files were largely a jumble of commit hashes and timestamps.

Jumbled commit hashes and timestamps

As a result of the multiple versions and lock files, the vendor directory was not reproducible, and we couldn’t be sure what versions we had in there.

Temporary relief

We eventually settled on using Glide, and standardized our vendoring process. Glide gave us a reproducible, verifiable vendor folder for our dependencies, which worked up until we switched to Go modules.

Vendoring using Go modules

I first heard about Go modules from Russ Cox’s talk at GopherCon Singapore in 2018, and soon after started working on adopting them at Grab, initially just to manage our existing vendor folder.

This allowed us to align with the official Go toolchain and familiarise ourselves with Go modules while the feature matured.

Switching to go mod

Go modules introduced a go mod vendor command for exporting all dependencies from go.mod into vendor. We didn’t plan to enable Go modules for builds at this point, so our builds continued to run exactly as before, indifferent to the fact that the vendor directory was created using go mod.

The initial switch to go mod vendor was relatively straightforward, as listed here (a sketch of the equivalent commands follows the list):

  1. Generated a go.mod file from our glide.yaml dependencies. This was scripted so it could be kept up to date without manual effort.
  2. Replaced the vendor directory.
  3. Committed the changes.
  4. Used go mod instead of glide to manage the vendor folder.
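As a sketch, the steps above map onto commands roughly like the following; the module path is made up, and the go.mod generation was a Grab-internal script that isn't shown in this post:

# 1. Generate go.mod from the glide.yaml dependencies (scripted at Grab),
#    e.g. go mod init grab.com/monorepo, then emit a require line per glide dependency
# 2. Replace the vendor directory
rm -rf vendor
go mod vendor
# 3. Commit the changes
git add go.mod go.sum vendor
git commit -m "Manage vendor with go mod instead of glide"
# 4. From here on, use go mod (go get, go mod vendor) rather than glide to manage the vendor folder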

The change was extremely large (due to differences in how glide and go mod handled the pruning of unused code), but equivalent in terms of Go code. However, there were some additional changes needed besides porting the version file.

Addressing incompatible dependencies

Some of our dependencies were not yet compatible with Go modules, so we had to use Go module’s replace directive to substitute them with a working version.
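For illustration, replace directives in go.mod look roughly like this (the module paths and versions below are hypothetical):

// In go.mod: force a specific working version of a dependency
replace github.com/broken/dependency => github.com/broken/dependency v1.2.3
// ...or substitute a fork that carries the fix
replace github.com/broken/dependency => github.com/ourfork/dependency v1.2.4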

A more complex issue was that parts of our codebase relied on nested vendor directories, and had dependencies that were incompatible with the top level. The go mod vendor command attempts to include all code nested under the root path, whether or not it uses a sub-vendor directory, so this led to conflicts.

Problematic paths

Rather than resolving all the incompatibilities, which would’ve been a major undertaking in the monorepo, we decided to exclude these paths from Go modules instead. This was accomplished by placing an empty go.mod file in the problematic paths.
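A sketch of that exclusion, using a hypothetical path:

# An empty go.mod makes this subtree a separate (nested) module, so
# go mod vendor run at the monorepo root no longer tries to include it.
touch legacy/old-service/go.mod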

Nested modules

The empty go.mod file worked. This brought us to an important rule of Go modules, which is central to understanding many of the issues we encountered:

A module cannot contain other modules

This means that although the modules are within the same repository, Go modules treat them as though they are completely independent. When running go mod commands in the root of the monorepo, Go doesn’t even ‘see’ the other modules nested within.

Tackling maintenance issues

Completing the initial migration of our vendor directory to go mod vendor, however, opened up a different set of problems related to maintenance.

With Glide, we could guarantee that the Glide files and vendor directory would not change unless we deliberately changed them. This was not the case after switching to Go modules; we found that the go.mod file frequently required unexpected changes to keep our vendor directory reproducible.

There are two frequent cases that cause the go.mod file to need updates: dependency inheritance and implicit updates.

Dependency inheritance

Dependency inheritance is a consequence of Go modules version selection. If one of the monorepo’s dependencies uses Go modules, then the monorepo inherits those version requirements as well.

When starting a new module, the default is to use the latest version of dependencies. This was an issue for us as some of our monorepo dependencies had not been updated for some time. As engineers wanted to import their module from the monorepo, it caused go mod vendor to pull in a huge amount of updates.

To solve this issue, we wrote a quick script to copy the dependency versions from one module to another.
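That script isn't reproduced in this post, but a minimal sketch of the idea using only standard go commands (with hypothetical paths) could look like this:

# Record the versions the monorepo already uses...
cd monorepo
go list -m all > /tmp/monorepo-versions.txt

# ...then pin the other module's requirements to those same versions.
cd ../other-module
while read -r mod ver _; do
  # skip the main-module line, which has no version
  if [ -n "$ver" ]; then
    go mod edit -require="${mod}@${ver}"
  fi
done < /tmp/monorepo-versions.txt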

One key learning here is to have other modules use the monorepo’s versions, and if any updates are needed then the monorepo should be updated first.

Implicit updates

Implicit updates are a more subtle problem. The typical Go modules workflow is to use standard Go commands: go build, go test, and so on, and they will automatically update the go.mod file as needed. However, this was sometimes surprising, and it wasn’t always clear why the go.mod file was being updated. Some of the reasons we found were:

  • A new import was added by mistake, causing the dependency to be added to the go.mod file
  • There is a local replace for some module B, and B changes its own go.mod. When there’s a local replace, it bypasses versioning, so the changes to B’s go.mod are immediately inherited.
  • The build imports a package from a dependency that can’t be satisfied with the current version, so Go attempts to update it.

This means that simply creating a tag in an external repository is sometimes enough to affect the go.mod file, if you already have a broken import in the codebase.

Resolving unexpected dependencies using graphs

To investigate the unexpected dependencies, the command go mod graph proved the most useful.

Running graph with good old grep was good enough, but its output is also compatible with the digraph tool for more sophisticated queries. For example, we could use the following command to trace the source of a dependency on cloud.google.com/go:

$ go mod graph | digraph somepath grab.com/example cloud.google.com/go@<version>
github.com/hashicorp/vault/<pkg>@<version> github.com/hashicorp/vault/<pkg>@<version>
github.com/hashicorp/vault/<pkg>@<version> google.golang.org/<pkg>@<version>
google.golang.org/<pkg>@<version> google.golang.org/<pkg>@<version>
google.golang.org/<pkg>@<version> cloud.google.com/go@<version>
Diagram generated using modgraphviz
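If you want to try this locally, both helpers are published in the Go project's tool repositories and, at the time of writing, could be installed with go get:

go get golang.org/x/tools/cmd/digraph
go get golang.org/x/exp/cmd/modgraphviz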

Stay tuned for more

I hope you have enjoyed this article. In our next post, we’ll cover the other solutions we have for catching unexpected changes to the go.mod file and addressing dependency issues.

Join us

Grab is more than just the leading ride-hailing and mobile payments platform in Southeast Asia. We use data and technology to improve everything from transportation to payments and financial services across a region of more than 620 million people. We aspire to unlock the true potential of Southeast Asia and look for like-minded individuals to join us on this ride.

If you share our vision of driving South East Asia forward, apply to join our team today.

Credits

The cute Go gopher logo for this blog’s cover image was inspired by Renee French’s original work.

VMRO Bills: One Rejected, Another Filed

Post Syndicated from nellyo original https://nellyo.wordpress.com/2020/05/29/vmro_zzld/

Yesterday the parliamentary media committee rejected, at first reading, the bill by Sidi and others on fake news. Of course, this does not block the text for good, but it is indicative. The transcript will be posted HERE once it is published. This is the bill aimed against the "charlatans and marauders in the internet environment" (according to its explanatory memorandum).

On the same day, according to the National Assembly's website, another bill was filed by the same team, Sidi and others, this time a bill amending the Personal Data Protection Act.

Unfortunately, the text has been uploaded in a form that cannot be copied, and retyping it by hand is not a justified effort. Could the National Assembly not introduce requirements for the format of submitted bills (machine-readable, open, platform-independent)?

The bill's explanatory memorandum discusses the fate of the fake-news bill and states that

 "Presented in this way, the reading of our proposed amendment to the Radio and Television Act is, in itself, misreported information."

For that reason, under the new amending bill, all owners of websites, online platforms, social media profiles and blogs must publish, in a visible place, information identifying themselves as the personal data controller (a natural or legal person).


Export logs from Cloudflare Gateway with Logpush

Post Syndicated from Sam Rhea original https://blog.cloudflare.com/export-logs-from-cloudflare-gateway-with-logpush/


Like many people, I have spent a lot more time at home in the last several weeks. I use the free version of Cloudflare Gateway, part of Cloudflare for Teams, to secure the Internet-connected devices on my WiFi network. In the last week, Gateway has processed about 114,000 DNS queries from those devices and blocked nearly 100 as potential security risks.

I can search those requests in the Cloudflare for Teams UI. The logs capture the hostname requested, the time of the request, and Gateway’s decision to allow or block. This works fine for one-off investigations into a block, but does not help if I want to analyze the data more thoroughly. The last thing I want to do is click through hundreds or thousands of pages.

That problem is even more difficult for organizations attempting to keep hundreds or thousands of users and their devices secure. Whether they secure roaming devices with DoH or a static IP address, or keep users safe as they return to offices, deployments at that scale need a better option for auditing tens or hundreds of millions of queries each week.

Starting today, you can configure the automatic export of logs from Cloudflare Gateway to third-party storage destinations or security information and event management (SIEM) tools. Once exported, your team can analyze and audit the data as needed. The feature builds on the same robust Cloudflare Logpush Service that powers data export from Cloudflare’s infrastructure products.

Cloudflare Gateway

Cloudflare Gateway is one-half of Cloudflare for Teams, Cloudflare’s platform for securing users, devices, and data. With Cloudflare for Teams, our global network becomes your team’s network, replacing on-premise appliances and security subscriptions with a single solution delivered closer to your users – wherever they work.

Export logs from Cloudflare Gateway with Logpush

As part of that platform, Cloudflare Gateway blocks threats on the public Internet from becoming incidents inside your organization. Gateway’s first release added DNS security filtering and content blocking to the world’s fastest DNS resolver, Cloudflare’s 1.1.1.1.

Deployment takes less than 5 minutes. Teams can secure entire office networks and segment traffic reports by location. For distributed organizations, Gateway can be deployed via MDM on networks that support IPv6 or using a dedicated IPv4 as part of a Cloudflare Enterprise account.

With secure DNS filtering, administrators can click a single button to block known threats, like sources of malware or phishing sites. Policies can be extended to block specific categories, like gambling sites or social media. When users request a filtered site, Gateway stops the DNS query from resolving and prevents the device from connecting to a malicious destination or hostname with blocked material.

Cloudflare Logpush

The average user makes about 5,000 DNS queries each day. For an organization with 1,000 employees, that produces 5M rows of data daily. That data includes regular Internet traffic, but also potential trends like targeted phishing campaigns or the use of cloud storage tools that are not approved by your IT organization.

The Cloudflare for Teams UI presents some summary views of that data, but each organization has different needs for audit, retention, or analysis. The best way to let you investigate the data in any way you need is to give you all of it. However, the volume of data and how often you might need to review it mean that API calls or CSV downloads are not suitable. A real logging pipeline is required.

Cloudflare Logpush solves that challenge. Cloudflare’s Logpush Service exports the data captured by Cloudflare’s network to storage destinations that you control. Rather than requiring your team to build a system to call Cloudflare APIs and pull data, Logpush routinely exports data with fields that you configure.

Cloudflare’s data team built the Logpush pipeline to make it easy to integrate with popular storage providers. Logpush supports AWS S3, Google Cloud Storage, Sumo Logic, and Microsoft Azure out of the box. Administrators can choose a storage provider, validate they own the destination, and configure exports of logs that will send deltas every five minutes from that point onward.

How it works

When enabled, you can navigate to a new section of the Logs component in the Cloudflare for Teams UI, titled “Logpush”. Once there, you’ll be able to choose which fields you want to export from Cloudflare Gateway and the storage destination.

Export logs from Cloudflare Gateway with Logpush

The Logpush wizard will walk you through validating that you own the destination and configuring how you want folders to be structured. When saved, Logpush will send updated logs every five minutes to that destination. You can configure multiple destinations and monitor for any issues by returning to this section of the Cloudflare for Teams UI.

Export logs from Cloudflare Gateway with Logpush

What’s next?

Cloudflare’s Logpush Service is only available to customers on a contract plan. If you are interested in upgrading, please let us know. All Cloudflare for Teams plans include 30 days of data that can be searched in the UI.

Cloudflare Access, the other half of Cloudflare for Teams, also supports granular log export. You can configure Logpush for Access in the Cloudflare dashboard that houses Infrastructure features like the WAF and CDN. We plan to migrate that configuration to this UI in the near future.

US Copyright Office’s DMCA Tweaks Trigger ‘Internet Disconnection’ Concerns

Post Syndicated from Ernesto original https://torrentfreak.com/us-copyright-offices-dmca-tweaks-trigger-internet-disconnection-concerns-200529/

After several years of public consultations and stakeholder meetings, the US Copyright Office issued its review of the DMCA’s safe harbor provisions.

The report doesn’t propose any major overhauls of the DMCA. Instead, it aims to fine-tune some parts, to better balance the interests of copyright holders and online service providers (OSPs).

More drastic suggestions were put on the backburner. Those include pirate site blocking and a ‘takedown and staydown’ requirement for online services, which would require mandatory upload filtering.

Not Everyone Is Happy with the Report

The Copyright Office’s attempt to create more balance is well-intended but not everyone is pleased with it. For example, a statement released by several prominent music industry groups, including the RIAA, shows that they wanted and expected more.

The music groups provide a list of things big technology platforms could do to address the concerns raised in the report. However, their first suggestion is to ensure that ‘takedown’ means ‘staydown.’ That’s one of the things the Copyright Office explicitly did not recommend, as there may be a negative impact.

On the other side, there was a lot of criticism of the apparent disregard for a key party in the DMCA debate: the public at large. The Copyright Office frames DMCA issues as a ‘copyright holders’ vs. ‘online service providers’ debate, but the voice of the public is glossed over at times.

Looking at the suggestions in the report, however, it’s clear the public will be heavily impacted by the proposed changes. This is also a problem signaled by some digital rights groups.

Disconnecting Alleged Copyright Infringers

According to Public Knowledge, the Copyright Office’s recommendations are ill-considered. The digital rights group believes that the report heavily favors copyright holders while totally overlooking the interests of millions of regular Internet users.

The proposals don’t just harm the general public; they also fail to recognize copyright abuses, including false DMCA notices, the group adds.

“In a contentious debate, it comes down on the same side (copyright holders) in nearly every instance, and disregards ample evidence that the DMCA is often abused by people looking to censor content they have no rights over,” the group notes.

Public Knowledge takes particular offense at the Office’s comments regarding repeat infringers. These suggest that people’s Internet access can be disconnected based solely on copyright holders’ allegations of infringement.

“Astonishingly, the Copyright Office buys into the idea that users should be subject to being cut off from internet access entirely on the basis of allegations of copyright infringement,” Public Knowledge writes.

The DMCA text is currently not clear on whether allegations are good enough, but the Office’s recommendation is backed by an Appeals Court order. Nonetheless, Public Knowledge doesn’t believe it should be law.

“Congress should not be making it easier for private actors to completely and unilaterally remove a person’s ability to access the internet,” the group writes. “The internet is not just a giant copyrighted-content delivery mechanism; it is the fundamental backbone of modern life.”

Public Knowledge is not alone in its criticism. The Electronic Frontier Foundation (EFF) also stresses that the interests of the public are largely ignored, tilting the “balance” towards copyright holders.

“For example, the proposal to terminate someone’s Internet access—at any time, but especially now—is a hugely disproportionate response to unproven allegations of copyright infringement,” EFF wrote on Twitter.

Copyright Office Cherry-Picking?

It’s worth noting that the Copyright Office doesn’t always follow existing court decisions. The Appeals Court previously ruled that a repeat infringer policy doesn’t have to be written down, for example, but the Office now suggests updating the DMCA to change this.

Requiring a written repeat infringer policy, contrary to the Appeals Court ruling, would favor copyright holders. The same is true for confirming the other Appeals Court ruling, which concluded that ISPs must deal with repeat infringers based on allegations alone.

This doesn’t mean that the Office is always taking one side, however. As mentioned earlier, the report also denied the top demands from copyright holders by not recommending site blocking and ‘takedown – staydown’ policies.

Disagreement Remains

By highlighting these positions from two opposing sides of the debate, it is clear that the Copyright Office report includes positive and negative elements for all stakeholders. While it attempts to create more balance, disagreement remains.

This has also been the general theme of the DMCA revision debate over the past several years. The demands from one side usually hurt the other, and vice versa. It took a long time before the Office finalized its views, and given what’s at stake, pushing any changes through Congress is not going to be easy.

From: TF, for the latest news on copyright battles, piracy and more.

Latest Raspberry Pi OS update – May 2020

Post Syndicated from Simon Long original https://www.raspberrypi.org/blog/latest-raspberry-pi-os-update-may-2020/

Along with yesterday’s launch of the new 8GB Raspberry Pi 4, we launched a beta 64-bit ARM version of Debian with the Raspberry Pi Desktop, so you could use all those extra gigabytes. We also updated the 32-bit version of Raspberry Pi OS (the new name for Raspbian), so here’s a quick run-through of what has changed.

NEW Raspberry Pi OS update (May 2020)

Bookshelf

As many of you know, we have our own publishing company, Raspberry Pi Press, who publish a variety of magazines each month, including The MagPi, HackSpace magazine, and Wireframe. They also publish a wide range of other books and magazines, which are released either to purchase as a physical product (from their website) or as free PDF downloads.

To make all this content more visible and easy to access, we’ve added a new Bookshelf application – you’ll find it in the Help section of the main menu.

Bookshelf shows the entire current catalogue of free magazines – The MagPi, HackSpace magazine and Wireframe, all with a complete set of back issues – and also all the free books from Raspberry Pi Press. When you run the application, it automatically updates the catalogue and shows any new titles which have been released since you last ran it with a little “new” flash in the corner of the cover.

To read any title, just double-click on it – if it is already on your Raspberry Pi, it will open in Chromium (which, it turns out, is quite a good PDF viewer); if it isn’t, it will download and then open automatically when the download completes. You can see at a glance which titles are downloaded and which are not by the “cloud” icon on the cover of any file which has not been downloaded.

All the PDF files you download are saved in the “Bookshelf” directory in your home directory, so you can also access the files directly from there.

There’s a lot of excellent content produced by Raspberry Pi Press – we hope this makes it easier to find and read.

Edit – some people have reported that Bookshelf incorrectly gives a “disk full” error when running on a system in which the language is not English; a fix for that is being uploaded to apt at the moment, so updating from apt (“sudo apt update” followed by “sudo apt upgrade”) should get the fixed version.

Magnifier

As mentioned in my last blog post (here), one of the areas we are currently trying to improve is accessibility to the Desktop for people with visual impairments. We’ve already added the Orca screen reader (which has had a few bug fixes since the last release which should make it work more reliably in this image), and the second recommendation we had from AbilityNet was to add a screen magnifier.

This proved to be harder than it should have been! I tried a lot of the existing screen magnifier programs that were available for Debian desktops, but none of them really worked that well; I couldn’t find one that worked the way the magnifiers in the likes of MacOS and Ubuntu did, so I ended up writing one (almost) from scratch.

To install it, launch Recommended Software in the new image and select Magnifier under Universal Access. Once it has installed, reboot.

You’ll see a magnifying glass icon at the right-hand end of the taskbar – to enable the magnifier, click this icon, or use the keyboard shortcut Ctrl-Alt-M. (To turn the magnifier off, just click the icon again or use the same keyboard shortcut.)

Right-clicking the magnifier icon brings up the magnifier options. You can choose a circular or rectangular window of whatever size you want, and choose by how much you want to zoom the image. The magnifier window can either follow the mouse pointer, or be a static window on the screen. (To move the static window, just drag it with the mouse.)

Also, in some applications, you can have the magnifier automatically follow the text cursor, or the button focus. Unfortunately, this depends on the application supporting the required accessibility toolkit, which not all applications do, but it works reasonably well in most included applications. One notable exception is Chromium, which is adding accessibility toolkit support in a future release; for now, if you want a web browser which supports the accessibility features, we recommend Firefox, which can be installed by entering the following into a terminal window:

sudo apt install firefox-esr

(Please note that we do not recommend using Firefox on Raspberry Pi OS unless you need accessibility features, as, unlike Chromium, it is not able to use the Raspberry Pi’s hardware to accelerate video playback.)

I don’t have a visual impairment, but I find the magnifier pretty useful in general for looking at the finer details of icons and the like, so I recommend installing it and having a go yourself.

User research

We already know a lot of the things that people are using Raspberry Pi for, but we’ve recently been wondering if we’re missing anything… So we’re now including a short optional questionnaire to ask you, the users, for feedback on what you are doing with your Raspberry Pi in order to make sure we are providing the right support for what people are actually doing.

This questionnaire will automatically be shown the first time you launch the Chromium browser on a new image. There are only four questions, so it won’t take long to complete, and the results are sent to a Google Form which collates the results.

You’ll notice at the bottom of the questionnaire there is a field which is automatically filled in with a long string of letters and numbers. This is a serial number which is generated from the hardware in your particular Raspberry Pi which means we can filter out multiple responses from the same device (if you install a new image at some point in future, for example). It does not allow us to identify anything about you or your Raspberry Pi, but if you are concerned, you can delete the string before submitting the form.

As above, this questionnaire is entirely optional – if you don’t want to fill it in, just close Chromium and re-open it and you won’t see it again – but it would be very helpful for future product development if we can get this information, so we’d really appreciate it if as many people as possible would fill it in.

Other changes

There is also the usual set of bug fixes and small tweaks included in the image, full details of which can be found in the release notes on the download page.

One particular change which it is worth pointing out is that we have made a small change to audio. Raspberry Pi OS uses what is known as ALSA (Advanced Linux Sound Architecture) to control audio devices. Up until now, both the internal audio outputs on Raspberry Pi – the HDMI socket and the headphone jack – have been treated as a single ALSA device, with a Raspberry Pi-specific command used to choose which is active. Going forward, we are treating each output as a separate ALSA device; this makes managing audio from the two HDMI sockets on Raspberry Pi 4 easier and should be more compatible with third-party software. What this means is that after installing the updated image, you may need to use the audio output selector (right-click the volume icon on the taskbar) to re-select your audio output. (There is a known issue with Sonic Pi, which will only use the HDMI output however the selector is set – we’re looking at getting this fixed in a future release.)

Some people have asked how they can switch the audio output from the command line without using the desktop. To do this, you will need to create a file called .asoundrc in your home directory; ALSA looks for this file to determine which audio device it should use by default. If the file does not exist, ALSA uses “card 0” – which is HDMI – as the output device. If you want to set the headphone jack as the default output, create the .asoundrc file with the following contents:

defaults.pcm.card 1
defaults.ctl.card 1

This tells ALSA that “card 1” – the headphone jack – is the default device. To switch back to the HDMI output, either change the ‘1’s in the file to ‘0’s, or just delete the file.

How do I get it?

The new image is available for download from the usual place: our Downloads page.

To update an existing image, use the usual terminal command:

sudo apt update
sudo apt full-upgrade

To just install the bookshelf app:

sudo apt update
sudo apt install rp-bookshelf

To just install the magnifier, either find it under Universal Access in Recommended Software, or:

sudo apt update
sudo apt install mage

You’ll need to add the magnifier plugin to the taskbar after installing the program itself. Once you’ve installed the program and rebooted, right-click the taskbar and choose Add/Remove Panel Items; click Add, and select the Magnifier option.

We hope you like the changes — as ever, all feedback is welcome, so please leave a comment below!

The post Latest Raspberry Pi OS update – May 2020 appeared first on Raspberry Pi.

Looking for C-to-anything transpilers

Post Syndicated from esr original http://esr.ibiblio.org/?p=8705

I’m looking for languages that have three properties:

(1) Must have weak memory safety. The language is permitted to crash on an out-of-bounds array reference or null pointer, but may not corrupt or overwrite memory as a result.

(2) Must have a transpiler from C that produces human-readable, maintainable code that preserves (non-perverse) comments. The transpiler is allowed to not do a 100% job, but it must be the case that (a) the parts it does translate are correct, and (b) the amount of hand-fixup required to get to complete translation is small.

(3) Must not be Go, Rust, Ada, or Nim. I already know about these languages and their transpilers.

New – SaaS Contract Upgrades and Renewals for AWS Marketplace

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-saas-contract-upgrades-and-renewals-for-aws-marketplace/

AWS Marketplace currently contains over 7,500 listings from 1,500 independent software vendors (ISVs). You can browse the digital catalog to find, test, buy, and deploy software that runs on AWS:

Each ISV sets the pricing model and prices for their software. There are a variety of options available, including free trials, hourly or usage-based pricing, monthly, annual AMI pricing, and up-front pricing for 1-, 2-, and 3-year contracts. These options give each ISV the flexibility to define the models that work best for their customers. If their offering is delivered via a Software as a Service (SaaS) contract model, the seller can define the usage categories, dimensions, and contract length.

Upgrades & Renewals
AWS customers that make use of the SaaS and usage-based products that they find in AWS Marketplace generally start with a small commitment and then want to upgrade or renew them early as their workloads expand.

Today we are making the process of upgrading and renewing these contracts easier than ever before. While the initial contract is still in effect, buyers can communicate with sellers to negotiate a new Private Offer that best meets their needs. The offer can include additional entitlements to use the product, pricing discounts, a payment schedule, a revised contract end-date, and changes to the end-user license agreement (EULA), all in accord with the needs of a specific buyer.

Once the buyer accepts the offer, the new terms go into effect immediately. This new, streamlined process means that sellers no longer need to track parallel (paper and digital) contracts, and also ensures that buyers receive continuous service.

Let’s say I am already using a product from AWS Marketplace and negotiate an extended contract end-date with the seller. The seller creates a Private Offer for me and sends me a link that I follow in order to find & review it:

I select the Upgrade offer, and I can see I have a new contract end date, the number of dimensions on my upgrade contract, and the payment schedule. I click Upgrade current contract to proceed:

I confirm my intent:

And I am good to go:

This feature is available to all buyers & SaaS sellers, and applies to SaaS contracts and contracts with consumption pricing.

Jeff;

The Biggest Robotics Research Conference Is Now More Accessible Than Ever

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/robotics-hardware/virtual-icra-robotics-research-conference

If it wasn’t for COVID-19, we’d probably be in Paris right now, enjoying the beautiful weather, stuffing ourselves with pastries, and getting ready for another amazing edition of the International Conference on Robotics and Automation (ICRA), the world’s largest robotics research gathering. We’re not doing any of that, of course. Personally, I’ve barely left my house since March, and the in-person ICRA conference in Paris was quite sensibly cancelled a while ago.

The good news, however, is that ICRA is now a virtual conference instead, and the reason that it’s good news (and not just some sad pandemic-y compromise) is that the IEEE Robotics and Automation Society (RAS) and the ICRA conference committees have put in an astonishing amount of work in a very short period of time to bring the entire conference online in a way that actually seems like it might work out pretty well for everyone.

Theft of a Bivol Investigation Gets "Legitimized" Through a Journalism Award

Post Syndicated from Екип на Биволъ original https://bivol.bg/shekerova-staikov-trifonov.html

Friday, 29 May 2020


Genka Shikerova has made a programme* about the awards of the Radostina Konstantinova Foundation. She invited the laureates Teodora Trifonova of bTV and Nikolay Staykov of the Anti-Corruption Fund (ACF). Genka herself has also been awarded by the same foundation.

Teodora Trifonova, who used Bivol's investigations without crediting them in any way, reckoned that our criticism was aimed not at her but at bTV. It is convenient to have a big media outlet at your back and to hide behind it. You feel somehow more comfortable and calm, you shed responsibility and parry dangers... You know whom and what you work for, you know you will be well paid for the work, and if you slip up, the outlet will cover you with its imposing financial, public and legal resources.

Working as an investigative journalist at an outlet outside the system, like Bivol, with no salaries, fees or privileges, but with generous threats, attacks of every kind and personal sacrifice: that is a luxury of freedom unknown to colleagues at the big "national" and establishment media!

Staykov, for his part, in order to play down his fellow guest's lack of journalistic ethics, wisely noted that there is no monopoly on information and registers, i.e. anyone may use investigations and publications without crediting the original source, with the simple alibi that they could have found the information themselves. Could have, sure, but he didn't: he first saw and read it in Bivol, and only then "remembered"...

This is not the first time others have been rewarded for our achievements. For the "KTB transcript", over which we sued the presidential institution and won, the award from the Access to Information Programme went to... a journalist from BiT.

The Code of Ethics, the Radio and Television Act that makes compliance with that Code mandatory: all empty talk from some marginals at Bivol, who deserve to be plagiarized, because in a mafia-captured state they are the main target of the swinish propaganda and of all the institutions anyway. It does not matter that Bivol is correctly mentioned and cited in the international media as the original source of #КъщиЗаТъщи (the "houses for mothers-in-law" investigation); what matters is that the local PR keeps rolling.

And the fact that Bivol's portfolio holds major awards these colleagues can only dream of, awards that represent Bulgarian journalism at a world level, is always marginalized by our complex-ridden domestic colleagues!

We are not tempted by awards in a Bulgaria eaten away by dependencies and corruption. That very same Radostina Konstantinova Foundation brutally and unprincipledly took the Grand Prize away from us some time ago, because our investigations directly implicated Peevski and Dogan! In Bulgaria, journalism awards must be handed out "properly" and "correctly", and no exceptions are allowed. Even if we were offered such awards now, we would return them if we judged them to be tied to backroom dealing or unclean actors! But here the facts are plain, and the rules and norms of journalistic ethics are eternal!

Within this "factory get-together", all of that is fine. They gave each other awards, the laureates made themselves a programme, got together, tossed a bit of sarcasm at Bivol; it is no skin off our back. But how can we not point out that the producer of Genka's programme is, at the same time, the... chair of the jury that rewarded its own host and her two guests. A little advertising never hurts: you make a programme, so it is only logical to promote it through one of the countless home-grown little foundations and journalism awards that have sprung up "on every corner".

Lyuba Rizova, the long-time head of news at bTV, is now the producer of Shikerova's programme "Alternativata" ("The Alternative") on TV1. What is wrong with giving an award to the host of your own programme and doing a bit of self-promotion? She reportedly even phoned jury members to vote for Genka; oh, come on! And the fact that Staykov's Anti-Corruption Fund appears in the sponsor credits of that very same programme is, of course, just a coincidence...

Back in the day, Lyuba Rizova also headed the team of reporters at "Gospodari na Efira". The programme did not get a new contract after the staged beating of its reporter Dimitar Varbanov.

In other circles and circumstances, entanglements like these are called a "conflict of interest". In the Bulgarian media, however, they pass for "friendly whispering". Yet enormous public influence is also enormous responsibility, and it demands high professional and moral standards! That we must stand on one side of the barricade is beyond dispute. But that barricade is called Truth, and we are always and invariably on its side!

* "Alternativata" on TV1, 28/06/2020.

Photo: screenshot from the programme.

Fine-grained Continuous Delivery With CodePipeline and AWS Step Functions

Post Syndicated from Richard H Boyd original https://aws.amazon.com/blogs/devops/new-fine-grained-continuous-delivery-with-codepipeline-and-aws-stepfunctions/

Automating your software release process is an important step in adopting DevOps best practices. AWS CodePipeline is a fully managed continuous delivery service that helps you automate your release pipelines for fast and reliable application and infrastructure updates. CodePipeline was modeled after the way that the retail website Amazon.com automated software releases, and many early decisions for CodePipeline were based on the lessons learned from operating a web application at that scale.

However, while most cross-cutting best practices apply to most releases, there are also business-specific requirements driven by domain or regulatory needs. CodePipeline attempts to strike a balance between enforcing best practices out of the box and offering enough flexibility to cover as many use cases as possible.

To support use cases requiring fine-grained customization, today we are launching a new AWS CodePipeline action type for starting an AWS Step Functions state machine execution. Previously, accomplishing such a workflow required you to create custom integrations that marshaled data between CodePipeline and Step Functions. Now, you can start either a Standard or Express Step Functions state machine during the execution of a pipeline.

With this integration, you can do the following:

  • Conditionally run an Amazon SageMaker hyper-parameter tuning job
  • Write and read values from Amazon DynamoDB, as an atomic transaction, to use in later stages of the pipeline
  • Run an Amazon Elastic Container Service (Amazon ECS) task until some arbitrary condition is satisfied, such as performing integration or load testing

Example Application Overview

In the following use case, you’re working on a machine learning application. This application contains both a machine learning model that your research team maintains and an inference engine that your engineering team maintains. When a new version of either the model or the engine is released, you want to release it as quickly as possible if the latency is reduced and the accuracy improves. If the latency becomes too high, you want the engineering team to review the results and decide on the approval status. If the accuracy drops below some threshold, you want the research team to review the results and decide on the approval status.

This example assumes that a pipeline already exists, is configured to use a CodeCommit repository as its source, and runs an AWS CodeBuild project in the build stage.

The following diagram illustrates the components built in this post and how they connect to existing infrastructure.

Architecture Diagram for CodePipeline Step Functions integration

First, create a Lambda function that uses Amazon Simple Email Service (Amazon SES) to email either the research or engineering team with the results and the opportunity for them to review it. See the following code:

import json
import os
import boto3
import base64

def lambda_handler(event, context):
    email_contents = """
    <html>
    <body>
    <p><a href="{url_base}/{token}/success">PASS</a></p>
    <p><a href="{url_base}/{token}/fail">FAIL</a></p>
    </body>
    </html>
"""
    callback_base = os.environ['URL']
    token = base64.b64encode(bytes(event["token"], "utf-8")).decode("utf-8")

    formatted_email = email_contents.format(url_base=callback_base, token=token)
    ses_client = boto3.client('ses')
    ses_client.send_email(
        Source='[email protected]',
        Destination={
            'ToAddresses': [event["team_alias"]]
        },
        Message={
            'Subject': {
                'Data': 'PLEASE REVIEW',
                'Charset': 'UTF-8'
            },
            'Body': {
                'Text': {
                    'Data': formatted_email,
                    'Charset': 'UTF-8'
                },
                'Html': {
                    'Data': formatted_email,
                    'Charset': 'UTF-8'
                }
            }
        },
        ReplyToAddresses=[
            '[email protected]',
        ]
    )
    return {}
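For local testing of this function, a minimal event matching the Payload the state machine passes in (see the template below) might look like the following; the token value is a made-up placeholder, and the latency field is simply passed along without being used by the function:

{
    "token": "AQB8ExampleTaskTokenOnly",
    "team_alias": "[email protected]",
    "latency": 92
}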

To set up the Step Functions state machine that orchestrates the approval, use AWS CloudFormation with the following template. The Lambda function you just created is stored as app.py in the email_sender/ directory, matching the CodeUri and Handler values below. See the following code:

AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31

Resources:
  NotifierFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: email_sender/
      Handler: app.lambda_handler
      Runtime: python3.7
      Timeout: 30
      Environment:
        Variables:
          URL: !Sub "https://${TaskTokenApi}.execute-api.${AWS::Region}.amazonaws.com/Prod"
      Policies:
      - Statement:
        - Sid: SendEmail
          Effect: Allow
          Action:
          - ses:SendEmail
          Resource: '*'

  MyStepFunctionsStateMachine:
    Type: AWS::StepFunctions::StateMachine
    Properties:
      RoleArn: !GetAtt SFnRole.Arn
      DefinitionString: !Sub |
        {
          "Comment": "A Hello World example of the Amazon States Language using Pass states",
          "StartAt": "ChoiceState",
          "States": {
            "ChoiceState": {
              "Type": "Choice",
              "Choices": [
                {
                  "Variable": "$.accuracypct",
                  "NumericLessThan": 96,
                  "Next": "ResearchApproval"
                },
                {
                  "Variable": "$.latencyMs",
                  "NumericGreaterThan": 80,
                  "Next": "EngineeringApproval"
                }
              ],
              "Default": "SuccessState"
            },
            "EngineeringApproval": {
                 "Type":"Task",
                 "Resource":"arn:aws:states:::lambda:invoke.waitForTaskToken",
                 "Parameters":{  
                    "FunctionName":"${NotifierFunction.Arn}",
                    "Payload":{
                      "latency.$":"$.latencyMs",
                      "team_alias":"[email protected]",
                      "token.$":"$$.Task.Token"
                    }
                 },
                 "Catch": [ {
                    "ErrorEquals": ["HandledError"],
                    "Next": "FailState"
                 } ],
              "Next": "SuccessState"
            },
            "ResearchApproval": {
                 "Type":"Task",
                 "Resource":"arn:aws:states:::lambda:invoke.waitForTaskToken",
                 "Parameters":{  
                    "FunctionName":"${NotifierFunction.Arn}",
                    "Payload":{  
                       "accuracy.$":"$.accuracypct",
                       "team_alias":"[email protected]",
                       "token.$":"$$.Task.Token"
                    }
                 },
                 "Catch": [ {
                    "ErrorEquals": ["HandledError"],
                    "Next": "FailState"
                 } ],
              "Next": "SuccessState"
            },
            "FailState": {
              "Type": "Fail",
              "Cause": "Invalid response.",
              "Error": "Failed Approval"
            },
            "SuccessState": {
              "Type": "Succeed"
            }
          }
        }

  TaskTokenApi:
    Type: AWS::ApiGateway::RestApi
    Properties: 
      Description: REST API that returns manual-approval results to Step Functions
      Name: TokenHandler
  SuccessResource:
    Type: AWS::ApiGateway::Resource
    Properties:
      ParentId: !Ref TokenResource
      PathPart: "success"
      RestApiId: !Ref TaskTokenApi
  FailResource:
    Type: AWS::ApiGateway::Resource
    Properties:
      ParentId: !Ref TokenResource
      PathPart: "fail"
      RestApiId: !Ref TaskTokenApi
  TokenResource:
    Type: AWS::ApiGateway::Resource
    Properties:
      ParentId: !GetAtt TaskTokenApi.RootResourceId
      PathPart: "{token}"
      RestApiId: !Ref TaskTokenApi
  SuccessMethod:
    Type: AWS::ApiGateway::Method
    Properties:
      HttpMethod: GET
      ResourceId: !Ref SuccessResource
      RestApiId: !Ref TaskTokenApi
      AuthorizationType: NONE
      MethodResponses:
        - ResponseParameters:
            method.response.header.Access-Control-Allow-Origin: true
          StatusCode: 200
      Integration:
        IntegrationHttpMethod: POST
        Type: AWS
        Credentials: !GetAtt APIGWRole.Arn
        Uri: !Sub "arn:aws:apigateway:${AWS::Region}:states:action/SendTaskSuccess"
        IntegrationResponses:
          - StatusCode: 200
            ResponseTemplates:
              application/json: |
                {}
          - StatusCode: 400
            ResponseTemplates:
              application/json: |
                {"uhoh": "Spaghetti O's"}
        RequestTemplates:
          application/json: |
              #set($token=$input.params('token'))
              {
                "taskToken": "$util.base64Decode($token)",
                "output": "{}"
              }
        PassthroughBehavior: NEVER
      OperationName: "TokenResponseSuccess"
  FailMethod:
    Type: AWS::ApiGateway::Method
    Properties:
      HttpMethod: GET
      ResourceId: !Ref FailResource
      RestApiId: !Ref TaskTokenApi
      AuthorizationType: NONE
      MethodResponses:
        - ResponseParameters:
            method.response.header.Access-Control-Allow-Origin: true
          StatusCode: 200
      Integration:
        IntegrationHttpMethod: POST
        Type: AWS
        Credentials: !GetAtt APIGWRole.Arn
        Uri: !Sub "arn:aws:apigateway:${AWS::Region}:states:action/SendTaskFailure"
        IntegrationResponses:
          - StatusCode: 200
            ResponseTemplates:
              application/json: |
                {}
          - StatusCode: 400
            ResponseTemplates:
              application/json: |
                {"uhoh": "Spaghetti O's"}
        RequestTemplates:
          application/json: |
              #set($token=$input.params('token'))
              {
                 "cause": "Failed Manual Approval",
                 "error": "HandledError",
                 "output": "{}",
                 "taskToken": "$util.base64Decode($token)"
              }
        PassthroughBehavior: NEVER
      OperationName: "TokenResponseFail"

  APIDeployment:
    Type: AWS::ApiGateway::Deployment
    DependsOn:
      - FailMethod
      - SuccessMethod
    Properties:
      Description: "Prod Stage"
      RestApiId:
        Ref: TaskTokenApi
      StageName: Prod

  APIGWRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: "Allow"
            Principal:
              Service:
                - "apigateway.amazonaws.com"
            Action:
              - "sts:AssumeRole"
      Path: "/"
      Policies:
        - PolicyName: root
          PolicyDocument:
            Version: 2012-10-17
            Statement:
              - Effect: Allow
                Action: 
                 - 'states:SendTaskSuccess'
                 - 'states:SendTaskFailure'
                Resource: '*'
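  # Execution role for the state machine; it only needs permission to invoke
  # the notifier Lambda function.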
  SFnRole:
    Type: "AWS::IAM::Role"
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: "Allow"
            Principal:
              Service:
                - "states.amazonaws.com"
            Action:
              - "sts:AssumeRole"
      Path: "/"
      Policies:
        - PolicyName: root
          PolicyDocument:
            Version: 2012-10-17
            Statement:
              - Effect: Allow
                Action: 
                 - 'lambda:InvokeFunction'
                Resource: !GetAtt NotifierFunction.Arn

After you create the CloudFormation stack, you have a state machine, an Amazon API Gateway REST API, a Lambda function, and the roles each resource needs.

Your pipeline invokes the state machine with the load test results, which contain the accuracy and latency statistics, and the state machine decides which team, if either, to notify. If the results are within the thresholds, it returns a success status without notifying anyone. If a team does need to be notified, Step Functions asynchronously invokes the Lambda function, passing in the relevant metric and the team's email address. The Lambda function renders an email containing Pass and Fail links so the team can respond to the review with a single click. The REST API captures that response and relays it to Step Functions so the state machine execution can continue.
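
As a rough illustration of what the notifier function does, the following Python sketch builds the two callback links and emails them with Amazon SES. The event field names, environment variables, and the choice of SES are assumptions for illustration, not the exact function deployed by the template.

import base64
import os

import boto3

ses = boto3.client("ses")

def handler(event, context):
    # Field names below are assumed; the state machine passes the task token,
    # the metric that breached its threshold, and the reviewing team's address.
    raw_token = event["taskToken"]
    recipient = event["email"]
    metric = event.get("metric", "")

    # The REST API expects the token base64-encoded in the {token} path segment,
    # because the mapping template calls $util.base64Decode on it.
    # (In practice the encoded token may also need URL-encoding.)
    token = base64.b64encode(raw_token.encode()).decode()
    api_base = os.environ["API_BASE_URL"]  # assumed env var: the deployed Prod stage URL

    pass_link = f"{api_base}/{token}/success"
    fail_link = f"{api_base}/{token}/fail"

    body = (
        f"A release needs your review. Offending metric: {metric}\n\n"
        f"PASS: {pass_link}\nFAIL: {fail_link}\n"
    )
    ses.send_email(
        Source=os.environ["FROM_ADDRESS"],  # assumed env var
        Destination={"ToAddresses": [recipient]},
        Message={
            "Subject": {"Data": "Release approval required"},
            "Body": {"Text": {"Data": body}},
        },
    )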

The following diagram illustrates the visual workflow of the approval process within the Step Functions state machine.

StepFunctions StateMachine for approving code changes

After you create your state machine, Lambda function, and REST API, return to the CodePipeline console and add the Step Functions integration to your existing release pipeline. Complete the following steps:

  1. On the CodePipeline console, choose Pipelines.
  2. Choose your release pipeline.
  3. Choose Edit.
  4. Under the Edit:Build section, choose Add stage.
  5. Name your stage Release-Approval.
  6. Choose Save.
    You return to the edit view and can see the new stage at the end of your pipeline.
  7. In the Edit:Release-Approval section, choose Add action group.
  8. Add the Step Functions StateMachine invocation Action to the action group. Use the following settings:
    1. For Action name, enter CheckForRequiredApprovals.
    2. For Action provider, choose AWS Step Functions.
    3. For Region, choose the Region where your state machine is located (this post uses US West (Oregon)).
    4. For Input artifacts, enter BuildOutput (the name you gave the output artifacts in the build stage).
    5. For State machine ARN, choose the state machine you just created.
    6. For Input type, choose File path. (This parameter tells CodePipeline to take the contents of a file and use it as the input for the state machine execution.)
    7. For Input, enter results.json (where you store the results of your load test in the build stage of the pipeline).
    8. For Variable namespace, enter StepFunctions. (This parameter tells CodePipeline to store the state machine ARN and execution ARN for this event in a variable namespace named StepFunctions.)
    9. For Output artifacts, enter ApprovalArtifacts. (This parameter tells CodePipeline to store the results of this execution in an artifact called ApprovalArtifacts.)
  9. Choose Done.
    You return to the edit view of the pipeline.
    CodePipeline Edit Configuration
  10. Choose Save.
  11. Choose Release change.

When the pipeline execution reaches the approval stage, it invokes the Step Functions state machine with the results emitted from your build stage. This post hard-codes the load-test results to force an engineering approval by increasing the latency (latencyMs) above the threshold defined in the CloudFormation template (80ms). See the following code:

{
  "accuracypct": 100,
  "latencyMs": 225
}
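
Conceptually, the Choice states in the state machine branch along the lines of the following Python sketch. The 80 ms latency threshold comes from the CloudFormation template; the accuracy threshold and the metric-to-team mapping are assumptions for illustration.

# Rough Python equivalent of the state machine's branching; the real logic
# lives in Choice states inside the template.
LATENCY_THRESHOLD_MS = 80      # defined in the CloudFormation template
ACCURACY_THRESHOLD_PCT = 100   # assumed value for illustration

def team_to_notify(results):
    if results["latencyMs"] > LATENCY_THRESHOLD_MS:
        return "engineering"   # performance regression -> engineering review
    if results["accuracypct"] < ACCURACY_THRESHOLD_PCT:
        return "research"      # accuracy drop -> research review (assumed mapping)
    return None                # both metrics pass -> succeed without review

# With the hard-coded results above, latencyMs is 225 > 80, so engineering is notified.
print(team_to_notify({"accuracypct": 100, "latencyMs": 225}))  # -> engineering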

When the state machine checks the latency and sees that it’s above 80 milliseconds, it invokes the Lambda function with the engineering email address. The engineering team receives a review request email similar to the following screenshot.

review email

When you choose PASS, the link sends a request to the API Gateway REST API with the Step Functions task token for the current execution, and the API relays that token to Step Functions through the SendTaskSuccess action. When you return to your pipeline, you can see that the approval was processed and your change is ready for production.

Approved code change with stepfunction integration
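
For reference, the two API Gateway integrations are equivalent to making the following AWS SDK calls yourself with the raw (decoded) task token. This is a minimal sketch for illustration, not part of the deployed stack.

import boto3

sfn = boto3.client("stepfunctions")

def approve(task_token):
    # Mirrors GET /{token}/success: resume the execution with an empty output.
    sfn.send_task_success(taskToken=task_token, output="{}")

def reject(task_token):
    # Mirrors GET /{token}/fail: report a handled failure so the state machine
    # can take its failure path.
    sfn.send_task_failure(
        taskToken=task_token,
        error="HandledError",
        cause="Failed Manual Approval",
    )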

Cleaning Up

When the engineering and research teams devise a solution that no longer mixes performance information from both teams into a single application, you can remove this integration by deleting the CloudFormation stack that you created and deleting the new CodePipeline stage that you added.

Conclusion

For more information about CodePipeline Actions and the Step Functions integration, see Working with Actions in CodePipeline.

Using Weak Electric Fields to Make Virus-Killing Face Masks

Post Syndicated from Megan Scudellari original https://spectrum.ieee.org/the-human-os/biomedical/devices/using-weak-electric-fields-to-make-viruskilling-face-masks

Face masks help limit the spread of COVID-19 and are currently recommended by governments worldwide. 

Now, engineers at Indiana University demonstrate for the first time that a fabric generating a weak electric field can inactivate coronaviruses. The electroceutical fabric, described in a ChemRxiv preprint that has not yet been peer-reviewed, could be used to make face masks and other personal protective equipment (PPE), the authors say.

The fabric was tested against a pig respiratory coronavirus and a human coronavirus that causes the common cold. It has not yet been tested against SARS-CoV-2, the virus that causes COVID-19.

“The work is of interest for the scientific community; it will open new [areas to] search to provide smart solutions to overcome the COVID-19 pandemic,” says Mahmoud Al Ahmad, an electrical engineer at United Arab Emirates University, who was not involved in the research. While the concept will require more development before being applied to PPE, he says, “it is an excellent start in this direction.”

Beyond masks, the findings raise the possibility of using weak electrical fields to curb the spread of viruses in many ways, such as purifying air in common spaces or disinfecting operating room surfaces, says study author Chandan Sen, director of the Indiana Center for Regenerative Medicine and Engineering at Indiana University School of Medicine. “Coronavirus is not the first or last virus that is going to disrupt our lives,” he says. “We’re thinking about bigger and broader approaches to utilize weak electric fields against virus infectivity.”

Sen’s lab has been co-developing the electroceutical fabric technology, under the proprietary name V.Dox Technology, with Arizona-based company Vomaris for the past six years. Sen retains a financial stake in the company.

The technology consists of a matrix pattern of silver and zinc dots printed onto a material, such as polyester or cotton. The dots form a battery that generates a weak electric field: when exposed to a conductive medium, like gel or sweat, electrons transfer from the zinc to the silver in a redox reaction, generating a potential difference of 0.5 volts. The technology is FDA-cleared and commercialized for wound care, where it has been shown to treat bacterial biofilm infections.

For the fabric to be used in masks, moisture will need to be applied in some fashion. According to Sen, approaches could include embedding a hydrogel that activates the dots or inserting liquid-filled piping around the periphery of the mask. Moisture from exhaled air would then keep the fabric moist.

When the COVID-19 pandemic began, Sen and his team began to wonder if the technology might affect viruses as well as bacteria. Past work in the literature suggested coronaviruses rely on electrostatic forces for attachment and genome assembly, and Sen hoped an electric field would disrupt those forces and therefore kill the virus.

In collaboration with IU geneticist Kenneth Cornetta, who performed some of the initial virus experiments in his laboratory, the team exposed a pig respiratory coronavirus to the electroceutical fabric for 1 or 5 minutes. After one minute, they found evidence that the virus particles had begun to destabilize and aggregate, becoming larger than before exposure. That suggests the weak electric field was causing “damaging structural alterations to the virions,” the authors write.

Next, the team tested the virus particles exposed to the fabric against cells in a dish. “The infectivity was gone,” says Sen.

The results indicate “promise for this strategy,” says Murugappan Muthukumar, a professor of polymer science and engineering at the University of Massachusetts, Amherst, who was not involved in the study. “The authors’ hypothesis that the electrostatic forces within the virus particles and between the virus particles and the fabric are important is correct and is a very good idea.”

Still, Muthukumar notes, it is difficult to extrapolate how the electric field affects the viral genome, and more work needs to be done to investigate the effects observed in the paper.

Since publishing the preprint, the team has also tested the fabric against human coronavirus 229E, a cause of upper respiratory tract infections, and gotten similar results, Sen adds.

The team has submitted the data to the FDA in the hopes of receiving Emergency Use Authorization to use the fabric in face masks. The technology could even be incorporated into the manufacturing of N95 masks or as an insert, says Sen.

Vomaris currently sells its wound-dressing kits for between $38 and $69 online. Sen says the technology is inexpensive to manufacture and could be used in PPE at a modest cost.

Independent of Vomaris, Sen’s laboratory is developing a tunable electroceutical called a patterned electroceutical dressing, in which the field strength can be altered depending on need. The dressing has been shown to be safe for patients with wounds, says Sen, and is currently in clinical testing.
