Tag Archives: KAT

Kata Containers 1.0

Post Syndicated from ris original https://lwn.net/Articles/755230/rss

Kata Containers 1.0 has been released. “This first release of Kata Containers completes the merger of Intel’s Clear Containers and Hyper’s runV technologies, and delivers an OCI compatible runtime with seamless integration for container ecosystem technologies like Docker and Kubernetes.”

AWS IoT 1-Click – Use Simple Devices to Trigger Lambda Functions

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-iot-1-click-use-simple-devices-to-trigger-lambda-functions/

We announced a preview of AWS IoT 1-Click at AWS re:Invent 2017 and have been refining it ever since, focusing on simplicity and a clean out-of-box experience. Designed to make IoT available and accessible to a broad audience, AWS IoT 1-Click is now generally available, along with new IoT buttons from AWS and AT&T.

I sat down with the dev team a month or two ago to learn about the service so that I could start thinking about my blog post. During the meeting they gave me a pair of IoT buttons and I started to think about some creative ways to put them to use. Here are a few that I came up with:

Help Request – Earlier this month I spent a very pleasant weekend at the HackTillDawn hackathon in Los Angeles. As the participants were hacking away, they occasionally had questions about AWS, machine learning, Amazon SageMaker, and AWS DeepLens. While we had plenty of AWS Solution Architects on hand (decked out in fashionable & distinctive AWS shirts for easy identification), I imagined an IoT button for each team. Pressing the button would alert the SA crew via SMS and direct them to the proper table.

Camera Control – Tim Bray and I were in the AWS video studio, prepping for the first episode of Tim’s series on AWS Messaging. Minutes before we opened the Twitch stream I realized that we did not have a clean, unobtrusive way to ask the camera operator to switch to a closeup view. Again, I imagined that a couple of IoT buttons would allow us to make the request.

Remote Dog Treat Dispenser – My dog barks every time a stranger opens the gate in front of our house. While it is great to have confirmation that my Ring doorbell is working, I would like to be able to press a button and dispense a treat so that Luna stops barking!

Homes, offices, factories, schools, vehicles, and health care facilities can all benefit from IoT buttons and other simple IoT devices, all managed using AWS IoT 1-Click.

All About AWS IoT 1-Click
As I said earlier, we have been focusing on simplicity and a clean out-of-box experience. Here’s what that means:

Architects can dream up applications for inexpensive, low-powered devices.

Developers don’t need to write any device-level code. They can make use of pre-built actions, which send email or SMS messages, or write their own custom actions using AWS Lambda functions.

Installers don’t have to install certificates or configure cloud endpoints on newly acquired devices, and don’t have to worry about firmware updates.

Administrators can monitor the overall status and health of each device, and can arrange to receive alerts when a device nears the end of its useful life and needs to be replaced, using a single interface that spans device types and manufacturers.

I’ll show you how easy this is in just a moment. But first, let’s talk about the current set of devices that are supported by AWS IoT 1-Click.

Who’s Got the Button?
We’re launching with support for two types of buttons (both pictured above). Both types of buttons are pre-configured with X.509 certificates, communicate to the cloud over secure connections, and are ready to use.

The AWS IoT Enterprise Button communicates via Wi-Fi. It has a 2000-click lifetime, encrypts outbound data using TLS, and can be configured using BLE and our mobile app. It retails for $19.99 (shipping and handling not included) and can be used in the United States, Europe, and Japan.

The AT&T LTE-M Button communicates via the LTE-M cellular network. It has a 1500-click lifetime, and also encrypts outbound data using TLS. The device and the bundled data plan are available at an introductory price of $29.99 (shipping and handling not included), and can be used in the United States.

We are very interested in working with device manufacturers in order to make even more shapes, sizes, and types of devices (badge readers, asset trackers, motion detectors, and industrial sensors, to name a few) available to our customers. Our team will be happy to tell you about our provisioning tools and our facility for pushing OTA (over the air) updates to large fleets of devices; you can contact them at [email protected].

AWS IoT 1-Click Concepts
I’m eager to show you how to use AWS IoT 1-Click and the buttons, but need to introduce a few concepts first.

Device – A button or other item that can send messages. Each device is uniquely identified by a serial number.

Placement Template – Describes a like-minded collection of devices to be deployed. Specifies the action to be performed and lists the names of custom attributes for each device.

Placement – A device that has been deployed. Referring to placements instead of devices gives you the freedom to replace and upgrade devices with minimal disruption. Each placement can include values for custom attributes such as a location (“Building 8, 3rd Floor, Room 1337”) or a purpose (“Coffee Request Button”).

Action – The AWS Lambda function to invoke when the button is pressed. You can write a function from scratch, or you can make use of a pair of predefined functions that send an email or an SMS message. The actions have access to the attributes; you can, for example, send an SMS message with the text “Urgent need for coffee in Building 8, 3rd Floor, Room 1337.”

Getting Started with AWS IoT 1-Click
Let’s set up an IoT button using the AWS IoT 1-Click Console:

If I didn’t have any buttons I could click Buy devices to get some. But, I do have some, so I click Claim devices to move ahead. I enter the device ID or claim code for my AT&T button and click Claim (I can enter multiple claim codes or device IDs if I want):

The AWS buttons can be claimed using the console or the mobile app; the first step is to use the mobile app to configure the button to use my Wi-Fi:

Then I scan the barcode on the box and click the button to complete the process of claiming the device. Both of my buttons are now visible in the console:

I am now ready to put them to use. I click on Projects, and then Create a project:

I name and describe my project, and click Next to proceed:

Now I define a device template, along with names and default values for the placement attributes. Here’s how I set up a device template (projects can contain several, but I just need one):

The action has two mandatory parameters (phone number and SMS message) built in; I add three more (Building, Room, and Floor) and click Create project:

I’m almost ready to ask for some coffee! The next step is to associate my buttons with this project by creating a placement for each one. I click Create placements to proceed. I name each placement, select the device to associate with it, and then enter values for the attributes that I established for the project. I can also add additional attributes that are peculiar to this placement:

I can inspect my project and see that everything looks good:

I click on the buttons and the SMS messages appear:

I can monitor device activity in the AWS IoT 1-Click Console:

And also in the Lambda Console:

The Lambda function itself is also accessible, and can be used as-is or customized:

As you can see, this is the code that lets me use {{*}} to include all of the placement attributes in the message and {{Building}} (for example) to include a specific placement attribute.
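
The function body itself isn’t reproduced here, but a minimal sketch of a custom action along these lines might look like the following. This is an illustration rather than the predefined AWS function: it assumes the invocation event exposes the placement attributes under placementInfo.attributes (worth confirming against a test event from your own button) and that the placement defines phoneNumber, message, Building, Floor, and Room attributes.

import boto3

sns = boto3.client("sns")

def lambda_handler(event, context):
    # Placement attributes: template defaults merged with per-placement values.
    attrs = event.get("placementInfo", {}).get("attributes", {})

    message = attrs.get("message", "Button pressed")
    location = ", ".join(attrs[key] for key in ("Building", "Floor", "Room") if key in attrs)
    if location:
        message = "{} ({})".format(message, location)

    # Send the SMS via Amazon SNS; the phone number must be in E.164 format.
    sns.publish(PhoneNumber=attrs["phoneNumber"], Message=message)
    return {"statusCode": 200, "body": message}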

Now Available
I’ve barely scratched the surface of this cool new service and I encourage you to give it a try (or a click) yourself. Buy a button or two, build something cool, and let me know all about it!

Pricing is based on the number of enabled devices in your account, measured monthly and pro-rated for partial months. Devices can be enabled or disabled at any time. See the AWS IoT 1-Click Pricing page for more info.

To learn more, visit the AWS IoT 1-Click home page or read the AWS IoT 1-Click documentation.

Jeff;

 

[$] Updates in container isolation

Post Syndicated from corbet original https://lwn.net/Articles/754433/rss

At KubeCon + CloudNativeCon Europe 2018, several talks explored the topic of container isolation and security. The last year saw the release of Kata Containers which, combined with the CRI-O project, provided strong isolation guarantees for containers using a hypervisor. During the conference, Google released its own hypervisor called gVisor, adding yet another possible solution for this problem. Those new developments prompted the community to work on integrating the concept of “secure containers” (or “sandboxed containers”) deeper into Kubernetes. This work is now coming to fruition; it prompts us to look again at how Kubernetes tries to keep the bad guys from wreaking havoc once they break into a container.

Contests… and almanacs :)

Post Syndicated from Григор original http://www.gatchev.info/blog/?p=2131

Two announcements aimed at all fans of science fiction:

1

FOR YOUR ATTENTION – „ФАНТАSTIKA 2017“

The eighth annual almanac „ФантАstika“ is now out in print. Its editor, as always, is Атанас П. Славов, chairman of the Bulgarian science fiction society „Тера Фантазия“.
The almanac will interest not only readers familiar with the previous annuals, but also connoisseurs of the super-genre (in all its forms) who are picking up this edition for the first time.

The translated authors are represented by an original novella by the Argentine writer Тереса Мира де Ечеверия, a classic story by the American Томас Шеред, and a work by the Macedonian SF writer Никола Суботич, recently honored in the „Агоп Мелконян“ contest.

In the large section devoted to Bulgarian authors you will meet both the doyen Христо Пощаков, presented as a master of science fiction, fantasy and humor, and new works by Ценка Бакърджиева, Валентин Д. Иванов, Мартин Петков and Янчо Чолаков, as well as a fairy tale from the debut book of Мел.

Once again the „Фантастология“ section is devoted to surveys of, and trends in, the development of Bulgarian and world science fiction, plus long-distance encounters with classics such as Светослав Минков and Елин Пелин, seen through the eyes of Боряна Владимирова and Александър Карапанчев. Several articles look at Spanish-language women writers, Russian thematic currents in modern SF, Bulgarian science fiction in a new audio form, and the latest issue of the magazine „Тера фантастика“.

In the „Съзвездие Кинотавър“ section you will find some of the current screen adaptations of science fiction novels, the Englishman who wrote the screenplay for „Изкуствен интелект“ (A.I.), and a playful comic strip (about how the future of 2019 looked to Kubrick).

The issue announces a contest unique in its theme, „Изгревът на следващото“ (“The Sunrise of the Next”) – for stories devoted to a desirable future. The „Футурум“ section includes articles on the new information religions, ends of the world that never came to pass, and a particularly curious piece of “faKtastika”.

Also in the pages of this almanac: selected paintings by the artist Андриан Бекяров… a partisan report on Eurocon 2017 in Dortmund… poetry… and many other events from the inexhaustible realm of imagination.

For more information: http://choveshkata.net/blog/?p=6617.

2

The Bulgarian science fiction society „Тера Фантазия“ and the „Човешката библиотека“ foundation invite all authors to take part in the first „Изгревът на следващото“ (“The Sunrise of the Next”) contest.

More than one contest for Bulgarian fiction is running at the moment, but this is the only one whose theme is a possible movement toward a positive future. Today, in an era of rampant dystopias and uncritical catastrophic thinking, it takes real intellectual courage to look for the forms of the Way Out. Courage to assume that the Human spirit is capable of finding its way to a higher level, intellect to imagine it, and talent to defend it artistically.

What is the solution to the problem called “a present in crisis”?

What is the solution that leads to a higher state of Humaneness and Humanity, toward a future in which Humane Reason has outgrown inhuman ignorance?

What is the solution that will create a world in which the sciences and technologies develop so that the quality of the Human being grows, rather than the riches of a few?

What is the solution that will avoid the frozen utopias in which poseurs in white chitons recite pompous speeches to one another?

The „Изгревът на следващото“ contest will be the place where stories devoted to this search are published – works that, with artistic talent and modeling power, defend new worlds of this kind in one of the following two ways:

  • Along the spiral toward the next: the fates of individuals and societies seeking a way out of the current crisis state of our world; portraits of scientists, thinkers and ordinary people groping in the dark of the unknown for paths toward that goal; adventures of people drawn into such a spiral process and gradually realizing its meaning.
  • Visions of the next: characters who arise in our present day yet bear the marks of the new, possessing inner freedom even while locked in the cage of today’s social unfreedom; portraits of groups and societies that have attained features of the next without escapism, fanaticism or asceticism; humanitarian technologies that lead to liberation from objectification and reveal the ethical and intellectual resources of the Humane; consistent and realistically drawn societies of the future in which every person is fully developed and realized without depending on, or being owned by, another.

All genres are acceptable – it is enough for the stories to touch on at least one of the two themes above.

The deadline for entries is June 1, 2018.

The three highest-ranked stories will each receive a prize of 200 leva and, together with other selected entries from the contest, will be published in the next editions of the almanac „ФантАstika“.

The full rules are described on the Човешката библиотека site: http://choveshkata.net/blog/?p=6668

There you will also find the most up-to-date information in case anything changes.

SoFi, the underwater robotic fish

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/robotic-fish/

With the Greenland shark finally caught on video for the very first time, scientists and engineers are discussing the limitations of current marine monitoring technology. One significant advance comes from the CSAIL team at Massachusetts Institute of Technology (MIT): SoFi, the robotic fish.

A Robotic Fish Swims in the Ocean

More info: http://bit.ly/SoFiRobot Paper: http://robert.katzschmann.eu/wp-content/uploads/2018/03/katzschmann2018exploration.pdf

The untethered SoFi robot

Last week, the Computer Science and Artificial Intelligence Laboratory (CSAIL) team at MIT unveiled SoFi, “a soft robotic fish that can independently swim alongside real fish in the ocean.”

MIT CSAIL underwater fish SoFi using Raspberry Pi

Directed by a Super Nintendo controller and acoustic signals, SoFi can dive untethered to a maximum of 18 feet for a total of 40 minutes. A Raspberry Pi receives input from the controller and amplifies the ultrasound signals for SoFi via a HiFiBerry. The controller, Raspberry Pi, and HiFiBerry are sealed within a waterproof, cast-moulded silicone membrane filled with non-conductive mineral oil, allowing for underwater equalisation.

The ultrasound signals, received by a modem within SoFi’s head, control everything from direction, tail oscillation, pitch, and depth to the onboard camera.

As explained on MIT’s news blog, “to make the robot swim, the motor pumps water into two balloon-like chambers in the fish’s tail that operate like a set of pistons in an engine. As one chamber expands, it bends and flexes to one side; when the actuators push water to the other channel, that one bends and flexes in the other direction.”

Ocean exploration

While we’ve seen many autonomous underwater vehicles (AUVs) using onboard Raspberry Pis, SoFi’s ability to roam untethered with a wireless waterproof controller is an exciting achievement.

“To our knowledge, this is the first robotic fish that can swim untethered in three dimensions for extended periods of time. We are excited about the possibility of being able to use a system like this to get closer to marine life than humans can get on their own.” – CSAIL PhD candidate Robert Katzschmann

As the MIT news post notes, SoFi’s simple, lightweight setup of a single camera, a motor, and a smartphone lithium polymer battery sets it apart from existing bulky AUVs that require large motors or support from boats.

For more in-depth information on SoFi and the onboard tech that controls it, find the CSAIL team’s paper here.

The post SoFi, the underwater robotic fish appeared first on Raspberry Pi.

Owner of ShareBeast and AlbumJams Sentenced To Five Years in Prison

Post Syndicated from Andy original https://torrentfreak.com/owner-of-sharebeast-and-albumjams-sentenced-to-five-years-in-prison-180323/

According to the RIAA, ShareBeast.com and AlbumJams.com were responsible for the illegal distribution of “a massive library” of popular albums and tracks.

With a nod to the sensitivity of pre-release piracy, the sites were blamed for offering “thousands of songs” that hadn’t yet reached their official release dates. In September 2015, U.S. authorities shut them down, placing seizure notices on both domains.

The RIAA claimed that ShareBeast was the largest illegal file-sharing site operating in the United States, noting that the site’s IP addresses at the time indicated that at least some hosting had taken place in Illinois.

“Millions of users accessed songs from ShareBeast each month without one penny of compensation going to countless artists, songwriters, labels and others who created the music,” RIAA Chairman & CEO Cary Sherman commented at the time.

Two years later in September 2017, then 29-year-old former ShareBeast operator Artur Sargsyan pleaded guilty to one felony count of criminal copyright infringement, admitting to the unauthorized distribution and reproduction of over one billion copies of copyrighted works.

“Through Sharebeast and other related sites, this defendant profited by illegally distributing copyrighted music and albums on a massive scale,” said U.S. Attorney John Horn.

“The collective work of the FBI and our international law enforcement partners have shut down the Sharebeast websites and prevented further economic losses by scores of musicians and artists.”

The Department of Justice reported that from 2012 to 2015, Sargsyan used ShareBeast as a pirate music repository, illegally hosting music by Ariana Grande, Katy Perry, Beyonce, Kanye West, and Justin Bieber, among others. Sargsyan linked to that content from Newjams.net and Albumjams.com, and granted access to the public.

If Sargsyan had responded to takedown notices more positively, it’s possible that things may have progressed in a different direction. The RIAA sent the site more than 100 copyright-infringement emails over a three-year period but to no effect.

This led the music industry group to get out its calculator and inform the DoJ that the total monetary loss to its member companies was “a conservative” $6.3 billion “gut-punch” to music creators who were paid nothing by the service.

Given the huge numbers involved, it’s likely that Sargsyan hoped his 2017 guilty plea would result in a more forgiving sentence. Yesterday, however, the full weight of the law came crashing down.

California resident Artur Sargsyan was sentenced by U.S. District Judge Timothy C. Batten, Sr., to five years in prison, followed by three years of supervised release. The now 30-year-old was also ordered to pay $458,200 restitution and ordered to forfeit $184,768.87.

“Sargsyan operated one of the most successful illegal music sharing websites on the Internet,” said U.S. Attorney Byung J. “BJay” Pak.

“His reproduction of copyrighted musical works were made available only to generate undeserved profits for himself. The incredible work done by our law enforcement partners and prosecutors in light of the complexity of Sargsyan’s operation demonstrates that we will employ all of our resources to stop this kind of theft.”

David J. LaValley, Special Agent in Charge of FBI Atlanta, said that Sargsyan was warned several times that he was violating the law by illegally sharing copyrighted works, but chose to ignore the warnings.

“His sentence sends a message that no matter how complex the operation, the FBI, its federal partners and law enforcement partners around the globe will go to every length to protect the property of hard working artists and the companies that produce their art,” LaValley said.

Given the music group’s lengthy statements on the ShareBeast topic in the past, the RIAA has thus far been relatively brief. Welcoming news of the sentencing via Twitter, the major labels’ figurehead congratulated the law enforcement bodies behind the successful prosecution.

“Congrats to U.S. Attorney BJay Pak + his team along with @TheJusticeDept CCIPS Division and @FBIAtlanta for their leadership on this important case,” the RIAA wrote.


Auto Scaling is now available for Amazon SageMaker

Post Syndicated from Ana Visneski original https://aws.amazon.com/blogs/aws/auto-scaling-is-now-available-for-amazon-sagemaker/

Kumar Venkateswar, Product Manager on the AWS ML Platforms Team, shares details on the announcement of Auto Scaling with Amazon SageMaker.


With Amazon SageMaker, thousands of customers have been able to easily build, train and deploy their machine learning (ML) models. Today, we’re making it even easier to manage production ML models, with Auto Scaling for Amazon SageMaker. Instead of having to manually manage the number of instances to match the scale that you need for your inferences, you can now have SageMaker automatically scale the number of instances based on an AWS Auto Scaling Policy.

SageMaker has made managing the ML process easier for many customers. We’ve seen customers take advantage of managed Jupyter notebooks and managed distributed training. We’ve seen customers deploying their models to SageMaker hosting for inferences, as they integrate machine learning with their applications. SageMaker makes this easy –  you don’t have to think about patching the operating system (OS) or frameworks on your inference hosts, and you don’t have to configure inference hosts across Availability Zones. You just deploy your models to SageMaker, and it handles the rest.

Until now, you have needed to specify the number and type of instances per endpoint (or production variant) to provide the scale that you need for your inferences. If your inference volume changes, you can change the number and/or type of instances that back each endpoint to accommodate that change, without incurring any downtime. In addition to making it easy to change provisioning, customers have asked us how we can make managing capacity for SageMaker even easier.

With Auto Scaling for Amazon SageMaker, available through the SageMaker console, the AWS Auto Scaling API, and the AWS SDK, this becomes much easier. Now, instead of having to closely monitor inference volume and change the endpoint configuration in response, customers can configure a scaling policy to be used by AWS Auto Scaling. Auto Scaling adjusts the number of instances up or down in response to actual workloads, determined by using Amazon CloudWatch metrics and target values defined in the policy. In this way, customers can automatically adjust their inference capacity to maintain predictable performance at a low cost. You simply specify the target inference throughput per instance and provide upper and lower bounds for the number of instances for each production variant. SageMaker will then monitor throughput per instance using Amazon CloudWatch alarms, and then it will adjust provisioned capacity up or down as needed.

After you configure the endpoint with Auto Scaling, SageMaker will continue to monitor your deployed models to automatically adjust the instance count. SageMaker will keep throughput within desired levels, in response to changes in application traffic. This makes it easier to manage models in production, and it can help reduce the cost of deployed models, as you no longer have to provision sufficient capacity in order to manage your peak load. Instead, you configure the limits to accommodate your minimum expected traffic and the maximum peak, and Amazon SageMaker will work within those limits to minimize cost.

How do you get started? Open the SageMaker console. For existing endpoints, you first access the endpoint to modify the settings.


Then, scroll to the Endpoint runtime settings section, select the variant, and choose Configure auto scaling.


First, configure the minimum and maximum number of instances.

Next, choose the throughput per instance at which you want to add an additional instance, given previous load testing.

You can optionally set cool down periods for scaling in or out, to avoid oscillation during periods of wide fluctuation in workload. If not, SageMaker will assume default values.

And that’s it! You now have an endpoint that will automatically scale with increasing inferences.
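
If you would rather script the same configuration than click through the console, the settings map onto the Application Auto Scaling API. Here is a minimal boto3 sketch; the endpoint name my-endpoint, the variant name AllTraffic, and the target value are placeholders to replace with your own, ideally informed by load testing:

import boto3

autoscaling = boto3.client("application-autoscaling")

# The scalable resource is the variant's desired instance count.
resource_id = "endpoint/my-endpoint/variant/AllTraffic"

# Register the variant with minimum and maximum instance counts.
autoscaling.register_scalable_target(
    ServiceNamespace="sagemaker",
    ResourceId=resource_id,
    ScalableDimension="sagemaker:variant:DesiredInstanceCount",
    MinCapacity=1,
    MaxCapacity=4,
)

# Target-tracking policy: scale so that invocations per instance stay near
# the target value; cooldowns damp oscillation during bursty traffic.
autoscaling.put_scaling_policy(
    PolicyName="invocations-per-instance-target",
    ServiceNamespace="sagemaker",
    ResourceId=resource_id,
    ScalableDimension="sagemaker:variant:DesiredInstanceCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 1000.0,  # placeholder target from load testing
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "SageMakerVariantInvocationsPerInstance"
        },
        "ScaleInCooldown": 300,
        "ScaleOutCooldown": 300,
    },
)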

You pay for the capacity used at regular SageMaker pay-as-you-go pricing, so you no longer have to pay for unused capacity during relative idle periods!

Auto Scaling in Amazon SageMaker is available today in the US East (N. Virginia & Ohio), EU (Ireland), and US West (Oregon) AWS Regions. To learn more, see the Amazon SageMaker Auto Scaling documentation.


Kumar Venkateswar is a Product Manager in the AWS ML Platforms team, which includes Amazon SageMaker, Amazon Machine Learning, and the AWS Deep Learning AMIs. When not working, Kumar plays the violin and Magic: The Gathering.

Facebook Will Verify the Physical Location of Ad Buyers with Paper Postcards

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/02/facebook_will_v.html

It’s not a great solution, but it’s something:

The process of using postcards containing a specific code will be required for advertising that mentions a specific candidate running for a federal office, Katie Harbath, Facebook’s global director of policy programs, said. The requirement will not apply to issue-based political ads, she said.

“If you run an ad mentioning a candidate, we are going to mail you a postcard and you will have to use that code to prove you are in the United States,” Harbath said at a weekend conference of the National Association of Secretaries of State, where executives from Twitter Inc and Alphabet Inc’s Google also spoke.

“It won’t solve everything,” Harbath said in a brief interview with Reuters following her remarks.

But sending codes through old-fashioned mail was the most effective method the tech company could come up with to prevent Russians and other bad actors from purchasing ads while posing as someone else, Harbath said.

It does mean a several-days delay between purchasing an ad and seeing it run.

MPA Met With Russian Site-Blocking Body to Discuss Piracy

Post Syndicated from Andy original https://torrentfreak.com/mpa-met-with-russian-site-blocking-body-to-discuss-piracy-180209/

Given Russia’s historical reputation for having a weak approach to online piracy, the last few years stand in stark contrast to those that went before.

Overseen by telecoms watchdog Rozcomnadzor, Russia now has one of the toughest site-blocking regimes in the whole world. It’s possible to have entire sites blocked in a matter of days, potentially over a single piece of infringing content. For persistent offenders, permanent blocking is now a reality.

While that process requires the involvement of the courts, the subsequent blocking of mirror sites does not, with Russia blocking more than 500 since a new law was passed in October 2017.

With anti-piracy measures now a force to be reckoned with in Russia, it’s emerged that last week Stan McCoy, president of the Motion Picture Association’s EMEA division, met with telecoms watchdog Roskomnadzor in Moscow.

McCoy met with Rozcomnadzor chief Alexander Zharov last Friday, in a meeting that was also attended by Ekaterina Mironova, head of the anti-piracy committee of the Media Communication Union (ISS).

According to Rozcomnadzor, issues discussed included copyright-related legislation and regulation. Also on the agenda was the strengthening of international cooperation, including between public organizations representing the interests of rightholders.

“In particular, an agreement was reached to expand contacts between the MPAA and the ISS,” Rozcomnadzor notes.

The ISS (known locally as Media-Communication Union MKC) was founded by the largest Russian media companies and telecom operators in February 2014. It differentiates itself from other organizations with the claim that it’s the first group of its type to represent the interests of communications companies, rights holders, broadcasters and large distributors.

During the meeting, McCoy was given an update on Russia’s implementation of the various anti-piracy laws introduced and developed since May 2015.

“Since the introduction of the anti-piracy laws, Roskomnadzor has received more than 2,800 rulings from the Moscow City Court on the adoption of preliminary provisional [blocking] measures to protect copyright on the Internet, including 1,630 for movies,” the watchdog reveals.

“In connection with the deletion of pirated content, access to the territory of Russia was restricted for 1,547 Internet resources. Based on the decisions of the Moscow City Court, 752 pirated sites are now permanently blocked, and according to the decisions of the Ministry of Communications, more than 600 ‘mirrors’ of these resources are blocked too.”

While it’s normally the position of the US to criticize Russia for not doing enough to tackle piracy, it must’ve been interesting to participate in a meeting where for once the Russians had the upper hand. Even though the MPAA previously campaigned for one, there is no site-blocking mechanism in the United States.

“The fight against piracy stimulates the growth of the legal online video market in Russia. Attendance of legal online sites is constantly growing. Users are attracted to high-quality content for an affordable fee,” Rozcomnadzor concludes.

The meeting’s participants will join up again during the St. Petersburg International Economic Forum scheduled to take place May 24-26.


timeShift(GrafanaBuzz, 1w) Issue 29

Post Syndicated from Blogs on Grafana Labs Blog original https://grafana.com/blog/2018/01/12/timeshiftgrafanabuzz-1w-issue-29/

Welcome to TimeShift

intro paragraph


Latest Stable Release

Grafana 4.6.3 is now available. Latest bugfixes include:

  • Gzip: Fixes bug with Gravatar images when gzip was enabled #5952
  • Alert list: Now shows alert state changes even after adding manual annotations on dashboard #99513
  • Alerting: Fixes bug where rules evaluated as firing when all conditions were false and using the OR operator. #93183
  • Cloudwatch: CloudWatch no longer displays metrics’ default alias #101514, thx @mtanda

Download Grafana 4.6.3 Now


From the Blogosphere

Graphite 1.1: Teaching an Old Dog New Tricks: Grafana Labs’ own Dan Cech is a contributor to the Graphite project, and has been instrumental in the addition of some of the newest features. This article discusses five of the biggest additions, how they work, and what you can expect for the future of the project.

Instrument an Application Using Prometheus and Grafana: Chris walks us through how easy it is to get useful metrics from an application to understand bottlenecks and performance. In this article, he shares an application he built that indexes your Gmail account into Elasticsearch, and sends the metrics to Prometheus. Then, he shows you how to set up Grafana to get meaningful graphs and dashboards.

Visualising Serverless Metrics With Grafana Dashboards: Part 3 in this series of blog posts on “Monitoring Serverless Applications Metrics” starts with an overview of Grafana and the UI, covers queries and templating, then dives into creating some great looking dashboards. The series plans to conclude with a post about setting up alerting.

Huawei FAT WLAN Access Points in Grafana: Huawei’s FAT firmware for their WLAN Access Points lacks a central management overview. To get a sense of the performance of your APs, why not quickly create a templated dashboard in Grafana? This article quickly steps you through the process, and includes a sample dashboard.


Grafana Plugins

Lots of updated plugins this week. Plugin authors add new features and fix bugs often, to make your plugin perform better – so it’s important to keep your plugins up to date. We’ve made updating easy; for on-prem Grafana, use the Grafana-cli tool, or update with 1 click if you’re using Hosted Grafana.

UPDATED PLUGIN

Clickhouse Data Source – The Clickhouse Data Source plugin has been updated a few times with small fixes during the last few weeks.

  • Fix for quantile functions
  • Allow rounding with round option for both time filters: $from and $to

Update

UPDATED PLUGIN

Zabbix App – The Zabbix App had a release with a redesign of the Triggers panel as well as support for Multiple data sources for the triggers panel

Update

UPDATED PLUGIN

OpenHistorian Data Source – this data source plugin received some new query builder screens and improved documentation.

Update

UPDATED PLUGIN

BT Status Dot Panel – This panel received a small bug fix.

Update

UPDATED PLUGIN

Carpet Plot Panel – A recent update for this panel fixes a D3 import bug.

Update


Upcoming Events

In between code pushes we like to speak at, sponsor and attend all kinds of conferences and meetups. We also like to make sure we mention other Grafana-related events happening all over the world. If you’re putting on just such an event, let us know and we’ll list it here.

Women Who Go Berlin: Go Workshop – Monitoring and Troubleshooting using Prometheus and Grafana | Berlin, Germany – Jan 31, 2018: In this workshop we will learn about one of the most important topics in making apps production ready: Monitoring. We will learn how to use tools you’ve probably heard a lot about – Prometheus and Grafana, and using what we learn we will troubleshoot a particularly buggy Go app.

Register Now

FOSDEM | Brussels, Belgium – Feb 3-4, 2018: FOSDEM is a free developer conference where thousands of developers of free and open source software gather to share ideas and technology. There is no need to register; all are welcome.

Jfokus | Stockholm, Sweden – Feb 5-7, 2018:
Carl Bergquist – Quickie: Monitoring? Not OPS Problem

Why should we monitor our system? Why can’t we just rely on the operations team anymore? They used to be able to do that. What’s currently changing? Presentation content: – Why do we monitor our system – How did it use to work? – What’s changing – Why do we need to shift focus – Everyone should be on call. – Resilience is the goal (Best way of having someone care about quality is to make them responsible).

Register Now

Jfokus | Stockholm, Sweden – Feb 5-7, 2018:
Leonard Gram – Presentation: DevOps Deconstructed

What’s a Site Reliability Engineer and how’s that role different from the DevOps engineer my boss wants to hire? I really don’t want to be on call, should I? Is Docker the right place for my code or am I better off just going straight to Serverless? And why should I care about any of it? I’ll try to answer some of these questions while looking at what DevOps really is about and how commoditisation of servers through “the cloud” ties into it all. This session will be an opinionated piece from a developer who’s been on-call for the past 6 years and would like to convince you to do the same, at least once.

Register Now

Stockholm Metrics and Monitoring | Stockholm, Sweden – Feb 7, 2018:
Observability 3 ways – Logging, Metrics and Distributed Tracing

Let’s talk about often confused telemetry tools: Logging, Metrics and Distributed Tracing. We’ll show how you capture latency using each of the tools and how they work differently. Through examples and discussion, we’ll note edge cases where certain tools have advantages over others. By the end of this talk, we’ll better understand how each of Logging, Metrics and Distributed Tracing aids us in different ways to understand our applications.

Register Now

OpenNMS – Introduction to “Grafana” | Webinar – Feb 21, 2018:
IT monitoring helps detect emerging hardware damage and performance bottlenecks in the enterprise network before any consequential damage or disruption to business processes occurs. The powerful open-source OpenNMS software monitors a network, including all connected devices, and provides logging of a variety of data that can be used for analysis and planning purposes. In our next OpenNMS webinar on February 21, 2018, we introduce “Grafana” – a web-based tool for creating and displaying dashboards from various data sources, which can be perfectly combined with OpenNMS.

Register Now

GrafanaCon EU | Amsterdam, Netherlands – March 1-2, 2018:
Lock in your seat for GrafanaCon EU while there are still tickets available! Join us March 1-2, 2018 in Amsterdam for 2 days of talks centered around Grafana and the surrounding monitoring ecosystem including Graphite, Prometheus, InfluxData, Elasticsearch, Kubernetes, and more.

We have some exciting talks lined up from Google, CERN, Bloomberg, eBay, Red Hat, Tinder, Automattic, Prometheus, InfluxData, Percona and more! Be sure to get your ticket before they’re sold out.

Learn More


Tweet of the Week

We scour Twitter each week to find an interesting/beautiful dashboard and show it off! #monitoringLove

Nice hack! I know I like to keep one eye on server requests when I’m dropping beats. 😉


Grafana Labs is Hiring!

We are passionate about open source software and thrive on tackling complex challenges to build the future. We ship code from every corner of the globe and love working with the community. If this sounds exciting, you’re in luck – WE’RE HIRING!

Check out our Open Positions


How are we doing?

Thanks for reading another issue of timeShift. Let us know what you think! Submit a comment on this article below, or post something at our community forum.

Follow us on Twitter, like us on Facebook, and join the Grafana Labs community.

timeShift(GrafanaBuzz, 1w) Issue 27

Post Syndicated from Blogs on Grafana Labs Blog original https://grafana.com/blog/2017/12/22/timeshiftgrafanabuzz-1w-issue-27/

As we wrap up 2017, I wanted to kick off my last timeShift of the year to thank you, the Grafana community, for all your input, feedback, and involvement that’s made Grafana better with every release. While code contributions are extremely important, they’re not the only way to participate in the open source software community. Feature requests, bug reports, writing documentation, testing new features, participating in hackathons and meetups – all contribute to making open source projects better.

Yet More Copyright Trolls Invade Sweden Demanding Much More Money

Post Syndicated from Andy original https://torrentfreak.com/yet-more-copyright-trolls-invade-sweden-demanding-much-more-money-171221/

Back in 2016, so-called copyright-trolling landed in Sweden for the first time via an organization calling itself Spridningskollen (Distribution Check). Within months, however, it was all over, with the operation heading for the hills after much negative publicity.

In February this year, another wave of trolling hit the country, with Danish law firm Njord Law targeting the subscribers of several ISPs, including Telia, Tele2 and Bredbandsbolaget. Thousands of IP addresses had been harvested by its media company partners, potentially linking to thousands of subscribers.

“We have sent out a few thousand letters, but we have been given the right to obtain information behind many more IP addresses that we are waiting to receive from the telecom operators. So there are more,” lawyer Jeppe Brogaard Clausen said in October.

But while Internet users in Sweden wait for news of how this campaign is progressing, multiple new threats are appearing on the horizon. Swedish publication Breakit reports that several additional law firms in Sweden are also getting in on the action with one, Innerstans Advokatbyrå, already sending out demands to alleged file-sharers.

“By downloading and uploading the movie without permission from the copyright holder, you have committed a copyright infringement,” its letter warns.

“However, the rightsholder wishes to propose a conciliation solution consisting of paying a flat rate of 7,000 kronor [$831] in one payment for all of the copyright infringements in question.”

The demand for 7,000 kronor is significantly more than the 4,500 kronor ($535) demanded by Njord Law, but Innerstans Advokatbyrå warns that this amount will only be the beginning should an alleged pirate fail to pay up and the case go to court.

“If this happens, the amount will not be limited to 7,000 kronor but will compensate for the damage suffered and will include compensation for investigative costs, application fees and attorney fees,” the company warns.

Breakit spoke with Alex Block at Innerstans Advokatbyrå who wouldn’t reveal how many letters had been sent out. However, he did indicate that while damages amounts will be decided by the court, a license for a shared film can cost 80,000 kronor ($12,800).

“This will last for a long time, and to a large extent,” he said.

Of course, we’ve reported on plenty of these campaigns before and their representatives all state that people will be taken to court if they don’t pay. This one is no different, with Block assuring the public that if they don’t pay, court will follow. The credibility of the campaign is at stake, he notes.

“It’s our intention [to go to court], even if we prefer to avoid it. We must make reality of our requirements, otherwise it will not work,” he says.

Breakit says it has seen a copy of one letter from the lawfirm, which reveals a collaboration between US film company Mile High Distribution Inc. and Mircom International Content Management & Consulting Ltd.

Mircom is extremely well known in trolling circles having conducted campaigns in several areas of the EU. German outfit Media Protector is also involved, having tracked the IP addresses of the alleged pirates. This company also has years of experience working with copyright trolls.

With several other law firms apparently getting in on the action, Swedish authorities need to ensure that the country doesn’t become another Germany where trolls have run rampant for a number of years, causing misery for thousands.

While that help may not necessarily be forthcoming, it’s perhaps a little surprising that given Sweden’s proud and recent history of piracy activism, there appear to be very few signs of a visible and organized pushback from the masses. That will certainly please the trolls, who tend to thrive when unchallenged.


Managing AWS Lambda Function Concurrency

Post Syndicated from Chris Munns original https://aws.amazon.com/blogs/compute/managing-aws-lambda-function-concurrency/

One of the key benefits of serverless applications is the ease in which they can scale to meet traffic demands or requests, with little to no need for capacity planning. In AWS Lambda, which is the core of the serverless platform at AWS, the unit of scale is a concurrent execution. This refers to the number of executions of your function code that are happening at any given time.

Thinking about concurrent executions as a unit of scale is a fairly unique concept. In this post, I dive deeper into this and talk about how you can make use of per function concurrency limits in Lambda.

Understanding concurrency in Lambda

Instead of diving right into the guts of how Lambda works, here’s an appetizing analogy: a magical pizza.
Yes, a magical pizza!

This magical pizza has some unique properties:

  • It has a fixed maximum number of slices, such as 8.
  • Slices automatically re-appear after they are consumed.
  • When you take a slice from the pizza, it does not re-appear until it has been completely consumed.
  • One person can take multiple slices at a time.
  • You can easily ask to have the number of slices increased, but they remain fixed at any point in time otherwise.

Now that the magical pizza’s properties are defined, here’s a hypothetical situation of some friends sharing this pizza.

Shawn, Kate, Daniela, Chuck, Ian and Avleen get together every Friday to share a pizza and catch up on their week. As there are just six of them, they can easily all enjoy a slice of pizza at a time. As they finish each slice, it re-appears in the pizza pan and they can take another slice again. Given the magical properties of their pizza, they can continue to eat all they want, but with two very important constraints:

  • If any of them take too many slices at once, the others may not get as much as they want.
  • If they take too many slices, they might also eat too much and get sick.

One particular week, some of the friends are hungrier than the rest, taking two slices at a time instead of just one. If more than two of them try to take two pieces at a time, this can cause contention for pizza slices. Some of them would wait hungry for the slices to re-appear. They could ask for a pizza with more slices, but then run the same risk again later if more hungry friends join than planned for.

What can they do?

If the friends agreed to a limit on the maximum number of slices each of them eats concurrently, both of these issues are avoided. One friend might be limited to 2 of the 8 slices, others to more or fewer, just so long as the limits together stay at or under the eight slices that can be eaten at one time. This would keep anyone from going hungry or eating too much. The six friends can happily enjoy their magical pizza without worry!

Concurrency in Lambda

Concurrency in Lambda actually works similarly to the magical pizza model. Each AWS Account has an overall AccountLimit value that is fixed at any point in time, but can be easily increased as needed, just like the count of slices in the pizza. As of May 2017, the default limit is 1000 “slices” of concurrency per AWS Region.
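
To see the size of your own “pizza”, the Lambda GetAccountSettings API reports the account-level limit and how much of it is currently unreserved. A quick boto3 sketch (field names as documented for the API; adjust if your SDK version differs):

import boto3

lambda_client = boto3.client("lambda")

settings = lambda_client.get_account_settings()
limits = settings["AccountLimit"]

# Total concurrent executions available in this account and Region,
# and the portion not reserved by individual functions.
print("Concurrent executions:", limits["ConcurrentExecutions"])
print("Unreserved concurrency:", limits["UnreservedConcurrentExecutions"])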

Also like the magical pizza, each concurrency “slice” can only be consumed individually one at a time. After consumption, it becomes available to be consumed again. Services invoking Lambda functions can consume multiple slices of concurrency at the same time, just like the group of friends can take multiple slices of the pizza.

Let’s take our example of the six friends and bring it back to AWS services that commonly invoke Lambda:

  • Amazon S3
  • Amazon Kinesis
  • Amazon DynamoDB
  • Amazon Cognito

In a single account with the default concurrency limit of 1000 concurrent executions, any of these four services could invoke enough functions to consume the entire limit or some part of it. Just like with the pizza example, there is the possibility for two issues to pop up:

  • One or more of these services could invoke enough functions to consume a majority of the available concurrency capacity. This could cause others to be starved for it, causing failed invocations.
  • A service could consume too much concurrent capacity and cause a downstream service or database to be overwhelmed, which could cause failed executions.

For Lambda functions that are launched in a VPC, you have the potential to consume the available IP addresses in a subnet or the maximum number of elastic network interfaces to which your account has access. For more information, see Configuring a Lambda Function to Access Resources in an Amazon VPC. For information about elastic network interface limits, see Network Interfaces section in the Amazon VPC Limits topic.

One way to solve both of these problems is applying a concurrency limit to the Lambda functions in an account.

Configuring per function concurrency limits

You can now set a concurrency limit on individual Lambda functions in an account. The concurrency limit that you set reserves a portion of your account level concurrency for a given function. All of your functions’ concurrent executions count against this account-level limit by default.

If you set a concurrency limit for a specific function, then that function’s concurrency limit allocation is deducted from the shared pool and assigned to that specific function. AWS also reserves 100 units of concurrency for all functions that don’t have a specified concurrency limit set. This helps to make sure that future functions have capacity to be consumed.

Going back to the example of the consuming services, you could set throttles for the functions as follows:

Amazon S3 function = 350
Amazon Kinesis function = 200
Amazon DynamoDB function = 200
Amazon Cognito function = 150
Total = 900

With the 100 reserved for all non-concurrency reserved functions, this totals the account limit of 1000.
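
Each of those throttles is a per-function reserved concurrency setting. As a sketch, the allocation above could be applied with boto3’s PutFunctionConcurrency call; the function names here are hypothetical stand-ins for your own:

import boto3

lambda_client = boto3.client("lambda")

# Hypothetical function names; each value both reserves and caps that
# function's share of the account-level concurrency pool.
throttles = {
    "s3-processor": 350,
    "kinesis-processor": 200,
    "dynamodb-processor": 200,
    "cognito-processor": 150,
}

for name, limit in throttles.items():
    lambda_client.put_function_concurrency(
        FunctionName=name,
        ReservedConcurrentExecutions=limit,
    )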

Here’s how this works. To start, create a basic Lambda function that is invoked via Amazon API Gateway. This Lambda function returns a single “Hello World” statement with an added sleep time between 2 and 5 seconds. The sleep time simulates an API providing some sort of capability that can take a varied amount of time. The goal here is to show how an API that is under load can reach its concurrency limit, and what happens when it does.
To create the example function

  1. Open the Lambda console.
  2. Choose Create Function.
  3. For Author from scratch, enter the following values:
    1. For Name, enter a value (such as concurrencyBlog01).
    2. For Runtime, choose Python 3.6.
    3. For Role, choose Create new role from template and enter a name aligned with this function, such as concurrencyBlogRole.
  4. Choose Create function.
  5. The function is created with some basic example code. Replace that code with the following:

import time
from random import randint

# The sleep duration is chosen once per container, between 2 and 5 seconds,
# to simulate an API whose calls take a varied amount of time.
seconds = randint(2, 5)

def lambda_handler(event, context):
    time.sleep(seconds)
    return {"statusCode": 200,
            "body": ("Hello world, slept " + str(seconds) + " seconds"),
            "headers": {
                "Access-Control-Allow-Headers": "Content-Type,X-Amz-Date,Authorization,X-Api-Key,X-Amz-Security-Token",
                "Access-Control-Allow-Methods": "GET,OPTIONS",
            }}

  6. Under Basic settings, set Timeout to 10 seconds. While this function should only ever take up to 5-6 seconds (with the 5-second max sleep), this gives you a little bit of room if it takes longer.

  7. Choose Save at the top right.

At this point, your function is configured for this example. Test it and confirm this in the console:

  1. Choose Test.
  2. Enter a name (it doesn’t matter for this example).
  3. Choose Create.
  4. In the console, choose Test again.
  5. You should see output similar to the following:

Now configure API Gateway so that you have an HTTPS endpoint to test against.

  1. In the Lambda console, choose Configuration.
  2. Under Triggers, choose API Gateway.
  3. Open the API Gateway icon now shown as attached to your Lambda function:

  4. Under Configure triggers, leave the default values for API Name and Deployment stage. For Security, choose Open.
  5. Choose Add, Save.

API Gateway is now configured to invoke Lambda at the Invoke URL shown under its configuration. You can take this URL and test it in any browser or command line, using tools such as “curl”:


$ curl https://ofixul557l.execute-api.us-east-1.amazonaws.com/prod/concurrencyBlog01
Hello world, slept 2 seconds

Throwing load at the function

Now start throwing some load against your API Gateway + Lambda function combo. Right now, your function is only limited by the total amount of concurrency available in an account. For this example account, you might have 850 unreserved concurrency out of a full account limit of 1000 due to having configured a few concurrency limits already (also the 100 concurrency saved for all functions without configured limits). You can find all of this information on the main Dashboard page of the Lambda console:

For generating load in this example, use an open source tool called “hey” (https://github.com/rakyll/hey), which works similarly to ApacheBench (ab). You test from an Amazon EC2 instance running the default Amazon Linux AMI from the EC2 console. For more help with configuring an EC2 instance, follow the steps in the Launch Instance Wizard.

After the EC2 instance is running, SSH into the host and run the following:


sudo yum install go
go get -u github.com/rakyll/hey

“hey” is easy to use. For these tests, specify a total number of tests (5,000) and a concurrency of 50 against the API Gateway URL as follows (replace the URL here with your own):


$ ./go/bin/hey -n 5000 -c 50 https://ofixul557l.execute-api.us-east-1.amazonaws.com/prod/concurrencyBlog01

The output from “hey” tells you interesting bits of information:


$ ./go/bin/hey -n 5000 -c 50 https://ofixul557l.execute-api.us-east-1.amazonaws.com/prod/concurrencyBlog01

Summary:
Total: 381.9978 secs
Slowest: 9.4765 secs
Fastest: 0.0438 secs
Average: 3.2153 secs
Requests/sec: 13.0891
Total data: 140024 bytes
Size/request: 28 bytes

Response time histogram:
0.044 [1] |
0.987 [2] |
1.930 [0] |
2.874 [1803] |∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎
3.817 [1518] |∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎
4.760 [719] |∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎
5.703 [917] |∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎
6.647 [13] |
7.590 [14] |
8.533 [9] |
9.477 [4] |

Latency distribution:
10% in 2.0224 secs
25% in 2.0267 secs
50% in 3.0251 secs
75% in 4.0269 secs
90% in 5.0279 secs
95% in 5.0414 secs
99% in 5.1871 secs

Details (average, fastest, slowest):
DNS+dialup: 0.0003 secs, 0.0000 secs, 0.0332 secs
DNS-lookup: 0.0000 secs, 0.0000 secs, 0.0046 secs
req write: 0.0000 secs, 0.0000 secs, 0.0005 secs
resp wait: 3.2149 secs, 0.0438 secs, 9.4472 secs
resp read: 0.0000 secs, 0.0000 secs, 0.0004 secs

Status code distribution:
[200] 4997 responses
[502] 3 responses

You can see a helpful histogram and latency distribution. Remember that this Lambda function has a random sleep period in it and so isn’t entirely representational of a real-life workload. Those three 502s warrant digging deeper, but could be due to Lambda cold-start timing and the “seconds” variable being the maximum of 5, causing the Lambda functions to time out. AWS X-Ray and the Amazon CloudWatch logs generated by both API Gateway and Lambda could help you troubleshoot this.

Configuring a concurrency reservation

Now that you’ve established that you can generate this load against the function, I show you how to limit it and protect a backend resource from being overloaded by all of these requests.

  1. In the console, choose Configure.
  2. Under Concurrency, for Reserve concurrency, enter 25.

  3. Click on Save in the top right corner.

You could also set this with the AWS CLI using the Lambda put-function-concurrency command or see your current concurrency configuration via Lambda get-function. Here’s an example command:


$ aws lambda get-function --function-name concurrencyBlog01 --output json --query Concurrency
{
"ReservedConcurrentExecutions": 25
}

Either way, you’ve set the Concurrency Reservation to 25 for this function. This acts as both a limit and a reservation in terms of making sure that you can execute 25 concurrent functions at all times. Going above this results in the throttling of the Lambda function. Depending on the invoking service, throttling can result in a number of different outcomes, as shown in the documentation on Throttling Behavior. This change has also reduced your unreserved account concurrency for other functions by 25.

Rerun the same load generation as before and see what happens. Previously, you tested at 50 concurrency, which worked just fine. By limiting the Lambda functions to 25 concurrency, you should see rate limiting kick in. Run the same test again:


$ ./go/bin/hey -n 5000 -c 50 https://ofixul557l.execute-api.us-east-1.amazonaws.com/prod/concurrencyBlog01

While this test runs, refresh the Monitoring tab on your function detail page. You see the following warning message:

This is great! It means that your throttle is working as configured and you are now protecting your downstream resources from too much load from your Lambda function.

Here is the output from a new “hey” command:


$ ./go/bin/hey -n 5000 -c 50 https://ofixul557l.execute-api.us-east-1.amazonaws.com/prod/concurrencyBlog01
Summary:
Total: 379.9922 secs
Slowest: 7.1486 secs
Fastest: 0.0102 secs
Average: 1.1897 secs
Requests/sec: 13.1582
Total data: 164608 bytes
Size/request: 32 bytes

Response time histogram:
0.010 [1] |
0.724 [3075] |∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎
1.438 [0] |
2.152 [811] |∎∎∎∎∎∎∎∎∎∎∎
2.866 [11] |
3.579 [566] |∎∎∎∎∎∎∎
4.293 [214] |∎∎∎
5.007 [1] |
5.721 [315] |∎∎∎∎
6.435 [4] |
7.149 [2] |

Latency distribution:
10% in 0.0130 secs
25% in 0.0147 secs
50% in 0.0205 secs
75% in 2.0344 secs
90% in 4.0229 secs
95% in 5.0248 secs
99% in 5.0629 secs

Details (average, fastest, slowest):
DNS+dialup: 0.0004 secs, 0.0000 secs, 0.0537 secs
DNS-lookup: 0.0002 secs, 0.0000 secs, 0.0184 secs
req write: 0.0000 secs, 0.0000 secs, 0.0016 secs
resp wait: 1.1892 secs, 0.0101 secs, 7.1038 secs
resp read: 0.0000 secs, 0.0000 secs, 0.0005 secs

Status code distribution:
[502] 3076 responses
[200] 1924 responses

This looks quite different from the last load test run. A large percentage of these requests failed fast because the concurrency throttle rejected them (the requests in the 0.724-second bucket). The times in the histogram represent the full round trip from the EC2 instance through API Gateway to Lambda, including the rejected invocations. It’s also important to note that this example was configured with an edge-optimized endpoint in API Gateway. Under Status code distribution, you can see that 3,076 of the 5,000 requests failed with a 502, showing that the backend Lambda function behind API Gateway rejected those requests.

Other uses

Managing function concurrency can be useful in a few other ways beyond just limiting the impact on downstream services and providing a reservation of concurrency capacity. Here are two other uses:

  • Emergency kill switch
  • Cost controls

Emergency kill switch

On occasion, issues with applications I’ve managed in the past have made it necessary to disable a certain function or capability of an application. By setting the concurrency reservation and limit of a Lambda function to zero, you can do just that.

With the reservation set to zero, every invocation of the Lambda function is throttled. You could then work on the related parts of the infrastructure or application that aren’t working, and reconfigure the concurrency limit to allow invocations again.
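As a rough sketch of what that kill switch could look like in boto3 (again assuming the example’s function name), you set the reservation to zero and later delete it to return the function to the unreserved account pool:

import boto3

lambda_client = boto3.client("lambda")

# Kill switch: throttle every invocation by reserving zero concurrency
lambda_client.put_function_concurrency(
    FunctionName="concurrencyBlog01",
    ReservedConcurrentExecutions=0,
)

# Later, remove the reservation so the function can scale normally again
lambda_client.delete_function_concurrency(FunctionName="concurrencyBlog01")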

Cost controls

I’ve already mentioned using concurrency limits to control the downstream impact on services or databases that your Lambda function might call; another resource you might be cautious about is money. Setting the concurrency throttle is another way to help control costs during development and testing of your application.

You might want to prevent a function from performing a recursive action too quickly, or keep a development workload from generating too much concurrency. You might also want to protect development resources connected to this function, such as APIs that your Lambda function calls, from generating too much cost.

Conclusion

Concurrent executions as a unit of scale are a distinctive characteristic of Lambda functions. Placing limits on how many concurrency “slices” your function can consume can prevent a single function from consuming all of the available concurrency in an account. Limits can also prevent a function from overwhelming a backend resource that isn’t as scalable.

Unlike monolithic applications, or even microservices that mix several capabilities in a single service, Lambda functions encourage a sort of “nano-service”: a small piece of business logic tied directly to the integration that invokes the function. I hope you’ve enjoyed this post; go configure your concurrency limits today!

What do you want your button to do?

Post Syndicated from Carrie Anne Philbin original https://www.raspberrypi.org/blog/button/

Here at Raspberry Pi, we know that getting physical with computing is often a catalyst for creativity. Building a simple circuit can open up a world of making possibilities! This ethos of tinkering and invention is also being used in the classroom to inspire a whole new generation of makers, and here is why.

The all-important question

Physical computing provides a great opportunity for creative expression: the button press! By explaining how a button works, how to build one with a breadboard attached to a computer, and how to program the button to work when it’s pressed, you can give learners young and old all the conceptual skills they need to build a thing that does something. But what do they want their button to do? Have you ever asked your students or children at home? I promise it will be one of the most mind-blowing experiences you’ll have if you do.
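To give a sense of how little code the “program the button” step needs on a Raspberry Pi, here is a minimal sketch using the gpiozero library; it assumes the button is wired between GPIO 2 and a ground pin, and the print call stands in for whatever the learner dreams up:

from gpiozero import Button
from signal import pause

button = Button(2)  # assumes the button sits between GPIO 2 and GND

def pressed():
    # Replace this with a photo, a sound, Minecraft TNT, confetti...
    print("Button pressed!")

button.when_pressed = pressed
pause()  # keep the program running, waiting for presses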

A button. A harmless, little arcade button.

Looks harmless now, but put it into the hands of a child and see what happens!

Amy will want her button to take a photo, Charlie will want his button to play a sound, Tumi will want her button to explode TNT in Minecraft, Jack will want their button to fire confetti out of a cannon, and James Robinson will want his to trigger silly noises (doesn’t he always?)! Idea generation is the inherent gift that every child has in abundance. As educators and parents, we’re always looking to deeply engage our young people in the subject matter we’re teaching, and they are never more engaged than when they have an idea and want to implement it. Way back in 2012, I wanted my button to print geeky sayings:

Geek Gurl Diaries Raspberry Pi Thermal Printer Project Sneak Peek!

A sneak peek at the finished Geek Gurl Diaries ‘Box of Geek’. I’ve been busy making this for a few weeks with some help from friends. Tutorial to make your own box coming soon, so keep checking the Geek Gurl Diaries Twitter, facebook page and channel.

What are the challenges for this approach in education?

Allowing this kind of free-form creativity and tinkering in the classroom obviously has its challenges for teachers, especially those confined to rigid lesson structures, timings, and small classrooms. The most common worry I hear from teachers is “what if they ask a question I can’t answer?” Encouraging this sort of creative thinking makes that almost an inevitability. How can you facilitate roughly 30 different projects simultaneously? The answer is by using those other computational and transferable thinking skills:

  • Problem-solving
  • Iteration
  • Collaboration
  • Evaluation

Clearly specifying a problem, surveying the tools available to solve it (including online references and external advice), and then applying them to solve the problem is a hugely important skill, and this is a great opportunity to teach it.

A girl plays a button reaction game at a Raspberry Pi event

Press ALL the buttons!

Hands-off guidance

When we train teachers at Picademy, we group attendees around themes that have come out of the idea generation session. Together they collaborate on an achievable shared goal. One will often sketch something on a whiteboard, decomposing the problem into smaller parts; then the group will divide up the tasks. Each will look online or in books for tutorials to help them with their step. I’ve seen this behaviour in student groups too, and it’s very easy to facilitate. You don’t need to be the resident expert on every project that students want to work on.

The key is knowing where to guide students to find the answers they need. Curating online videos, blogs, tutorials, and articles in advance gives you the freedom and confidence to concentrate on what matters: the learning. We have a number of physical computing projects that use buttons, linked to our curriculum for learners to combine inputs and outputs to solve a problem. The WhooPi cushion and GPIO music box are two of my favourites.

A Raspberry Pi and button attached to a computer display

Outside of formal education, events such as Raspberry Jams, CoderDojos, CAS Hubs, and hackathons are ideal venues for seeking and receiving support and advice.

Cross-curricular participation

The rise of the global maker movement, I think, is in response to abstract concepts and disciplines. Children are taught lots of concepts in isolation that aren’t always relevant to their lives or immediate environment. Digital making provides a unique and exciting way of bridging different subject areas, allowing for cross-curricular participation. I’m not suggesting that educators should throw away all their schemes of work and leave the full direction of the computing curriculum to students. However, there’s huge value in exposing learners to the possibilities for creativity in computing. Creative freedom and expression guide learning, better preparing young people for the workplace of tomorrow.

So…what do you want your button to do?

Hello World

Learn more about today’s subject, and read further articles regarding computer science in education, in Hello World magazine issue 1.

Read Hello World issue 1 for more…

UK-based educators can subscribe to Hello World to receive a hard copy delivered for free to their doorstep, while the PDF is available for free to everyone via the Hello World website.

The post What do you want your button to do? appeared first on Raspberry Pi.

The Pirate Bay & 1337x Must Be Blocked, Austrian Supreme Court Rules

Post Syndicated from Andy original https://torrentfreak.com/the-pirate-bay-1337x-must-be-blocked-austrian-supreme-court-rules-171014/

Following a long-running case, in 2015 Austrian ISPs were ordered by the Commercial Court to block The Pirate Bay and other “structurally-infringing” sites including 1337x.to, isohunt.to, and h33t.to.

The decision was welcomed by the music industry, which looked forward to having more sites blocked in due course.

Soon after, local music rights group LSG sent its lawyers after several other large ISPs urging them to follow suit, or else. However, the ISPs dug in and a year later, in May 2016, things began to unravel. The Vienna Higher Regional Court overruled the earlier decision of the Commercial Court, meaning that local ISPs were free to unblock the previously blocked sites.

The Court concluded that ISP blocks are only warranted if copyright holders have exhausted all their options to take action against those actually carrying out the infringement. This decision was welcomed by the Internet Service Providers Austria (ISPA), which described the decision as an important milestone.

The ISPs argued that only torrent files, not the content itself, were available on the portals. They also had a problem with the restriction of access to legitimate content.

“A problem in this context is that the offending pages also have legal content and it is no longer possible to access that if barriers are put in place,” said ISPA Secretary General Maximilian Schubert.

Taking the case to its ultimate conclusion, the music companies appealed to the Supreme Court. Another year on and its decision has just been published and for the rightsholders, who represent 3,000 artists including The Beatles, Justin Bieber, Eric Clapton, Coldplay, David Guetta, Iggy Azalea, Michael Jackson, Lady Gaga, Metallica, George Michael, One Direction, Katy Perry, and Queen, to name a few, it was worth the effort.

The Court looked at whether “the provision and operation of a BitTorrent platform with the purpose of online file sharing [of non-public domain works]” represents a “communication to the public” under the EU Copyright Directive. Citing the now-familiar BREIN v Filmspeler and BREIN v Ziggo and XS4All cases that both received European Court of Justice rulings earlier this year, the Supreme Court concluded it was.

Citing another Dutch case, in which Playboy publisher Sanoma took on the blog GeenStijl.nl, the Court noted that linking to copyrighted content hosted elsewhere also amounted to a “communication to the public”, a situation mirrored on torrent sites like The Pirate Bay.

“The similarity of the technical procedure in this case when compared to BitTorrent platforms lies in the fact that in both cases the operators of the website did not provide any copyrighted works themselves, but merely provided further information on sites where the protected works were available,” the Court notes in its ruling.

In respect of the potential for blocking legitimate content as well as that infringing copyright, the Court turned the ISPs’ own arguments against them somewhat.

The ISPs had previously argued that blocking The Pirate Bay and other sites was pointless since the torrents they host would still be available elsewhere. The Court noted that point and also found that people can easily upload their torrents to sites that aren’t blocked, since there’s plenty of choice.

The ISPA criticized the Supreme Court’s ruling, noting that in future ISPs will still find themselves being held responsible for decisions concerning blocking.

“We do not support illegal content on the Internet in any way, but consider it extremely questionable that the decision on what is illegal and what is not falls to ISPs, instead of a court,” said ISPA Secretary General Maximilian Schubert.

“Although we find it positive that a court of last resort has taken the decision, the assessment of the website in the first instance continues to be left to the Internet provider. The Supreme Court’s expansion of the circle of sites that can potentially be blocked further complicates this task for the operator and furthers the privatization of law enforcement.

“It is extremely unpleasant that even after more than 10 years of fierce discussion, there is still no compelling legal basis for a court decision on Internet blocking, which puts providers in the role of both judge and hangman.”

Also of interest is ISPA’s stance on how blocking of content fails to solve the underlying issue. When content is blocked, rather than removed, it simply displaces the problem, leaving others to pick up the pieces, the Internet body argues.

“Illegal content is permanently removed from the network by deletion. Everything else is a placebo with extremely dangerous side effects, which can easily be bypassed by both providers and consumers. The only thing that remains is a blocking infrastructure that can be misused for many purposes and, unfortunately, will be used in many places,” Schubert says.

“The current situation, where providers have to block the rightsholders quasi on the spot, if they do not want to engage in a time-consuming and cost-intensive litigation, is really not sustainable so we issue a call to action to the legislature.”

The domains that were listed in the case, many of which are already defunct, are: thepiratebay.se, thepiratebay.gd, thepiratebay.la, thepiratebay.mn, thepiratebay.mu, thepiratebay.sh, thepiratebay.tw, thepiratebay.fm, thepiratebay.ms, thepiratebay.vg, isohunt.to, 1337x.to and h33t.to.

The only domain currently used by The Pirate Bay (thepiratebay.org) is not included in the list; whether it will be added later is unclear.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Community Profile: Matthew Timmons-Brown

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/community-profile-matthew-timmons-brown/

This column is from The MagPi issue 57. You can download a PDF of the full issue for free, or subscribe to receive the print edition in your mailbox or the digital edition on your tablet. All proceeds from the print and digital editions help the Raspberry Pi Foundation achieve its charitable goals.

“I first set up my YouTube channel because I noticed a massive lack of video tutorials for the Raspberry Pi,” explains Matthew Timmons-Brown, known to many as The Raspberry Pi Guy. At 18 years old, the Cambridge-based student has more than 60 000 subscribers to his channel, making his account the most successful Raspberry Pi–specific YouTube account to date.

Matthew Timmons-Brown

Matt gives a talk at the Raspberry Pi 5th Birthday weekend event

The Raspberry Pi Guy

If you’ve attended a Raspberry Pi event, there’s a good chance you’ve already met Matt. And if not, you’ll have no doubt come across one or more of his tutorials and builds online. On more than one occasion, his work has featured on the Raspberry Pi blog, with his yearly Raspberry Pi roundup videos being a staple of the birthday celebrations.

Matthew Timmons-Brown

With his website, Matt aimed to collect together “the many strands of The Raspberry Pi Guy” into one, neat, cohesive resource — and it works. From newcomers to the credit card-sized computer to hardened Pi veterans, The Raspberry Pi Guy offers aid and inspiration for many. Looking for a review of the Raspberry Pi Zero W? He’s filmed one. Looking for a step-by-step guide to building a Pi-powered Amazon Alexa? No problem, there’s one of those too.

Make your Raspberry Pi artificially intelligent! – Amazon Alexa Personal Assistant Tutorial

Artificial Intelligence. A hefty topic that has dominated the field since computers were first conceived. What if I told you that you could put an artificial intelligence service on your own $30 computer?! That’s right! In this tutorial I will show you how to create your own artificially intelligent personal assistant, using Amazon’s Alexa voice recognition and information service!

Raspberry Pi electric skateboard

Last summer, Matt introduced the world to his Raspberry Pi-controlled electric skateboard, soon finding himself plastered over local press as well as the BBC and tech sites like Adafruit and geek.com. And there’s no question as to why the build was so popular. With YouTubers such as Casey Neistat increasing the demand for electric skateboards on a near-daily basis, the call for a cheaper, home-brew version has quickly grown.

DIY 30KM/H ELECTRIC SKATEBOARD – RASPBERRY PI/WIIMOTE POWERED

Over the summer, I made my own electric skateboard using a £4 Raspberry Pi Zero. Controlled with a Nintendo Wiimote, capable of going 30km/h, and with a range of over 10km, this project has been pretty darn fun. In this video, you see me racing around Cambridge and I explain the ins and outs of this project.

Using a Raspberry Pi Zero, a Nintendo Wii Remote, and a little help from members of the Cambridge Makespace community, Matt built a board capable of reaching 30km/h, with a battery range of 10km per charge. Alongside Neistat, Matt attributes the project inspiration to Australian student Tim Maier, whose build we previously covered in The MagPi.

Matthew Timmons-Brown and Eben Upton standing in a car park looking at a smartphone

LiDAR

Despite the success and the fun of the electric skateboard (including convincing Raspberry Pi Trading CEO Eben Upton to have a go for local television news coverage), the project Matt is most proud of is his wireless LiDAR system for theoretical use on the Mars rovers.

Matthew Timmons-Brown's LiDAR project for scanning terrains with lasers

Using a tablet app to define the angles, Matt’s A Level coursework LiDAR build scans the surrounding area, returning the results to the touchscreen, where they can be manipulated by the user. With his passion for the cosmos and the International Space Station, it’s no wonder that this is Matt’s proudest build.

Built for his A Level Computer Science coursework, the build demonstrates Matt’s passion for space and physics. Used as a means of surveying terrain, LiDAR uses laser light to measure distance, allowing users to create 3D-scanned, high-resolution maps of a specific area. It is a perfect technology for exploring unknown worlds.

Matthew Timmons-Brown and two other young people at a reception in the Houses of Parliament

Matt was invited to St James’s Palace and the Houses of Parliament as part of the Raspberry Pi community celebrations in 2016

Joining the community

In a recent interview at Hills Road Sixth Form College, where he is studying mathematics, further mathematics, physics, and computer science, Matt revealed where his love of electronics and computer science started. “I originally became interested in computer science in 2012, when I read a tiny magazine article about a computer that I would be able to buy with pocket money. This was a pretty exciting thing for a 12-year-old! Your own computer… for less than £30?!” He went on to explain how it became his mission to learn all he could on the subject and how, months later, his YouTube channel came to life, cementing him firmly into the Raspberry Pi community.

The post Community Profile: Matthew Timmons-Brown appeared first on Raspberry Pi.

[$] Using eBPF and XDP in Suricata

Post Syndicated from jake original https://lwn.net/Articles/737771/rss

Much software that uses the Linux kernel does so at comparative arms-length: when it needs the kernel, perhaps for a read or write, it performs a system call, then (at least from its point of view) continues operation later, with whatever the kernel chooses to give it in reply. Some software, however, gets pretty intimately involved with the kernel as part of its normal operation, for example by using eBPF for low-level packet processing. Suricata is such a program; Eric Leblond spoke about it at Kernel Recipes 2017 in a talk entitled “eBPF and XDP seen from the eyes of a meerkat”.

timeShift(GrafanaBuzz, 1w) Issue 18

Post Syndicated from Blogs on Grafana Labs Blog original https://grafana.com/blog/2017/10/20/timeshiftgrafanabuzz-1w-issue-18/

Welcome to another issue of timeShift. This week we released Grafana 4.6.0-beta2, which includes some fixes for alerts, annotations, the Cloudwatch data source, and a few panel updates. We’re also gearing up for Oredev, one of the biggest tech conferences in Scandinavia, November 7-10. In addition to sponsoring, our very own Carl Bergquist will be presenting “Monitoring for everyone.” Hope to see you there – swing by our booth and say hi!


Latest Release

Grafana 4.6.0-beta2 is now available! This release adds fixes for:

  • ColorPicker display
  • Alerting test
  • Cloudwatch improvements
  • CSV export
  • Text panel enhancements
  • Annotation fix for MySQL

To see more details on what’s in the newest version, please see the release notes.

Download Grafana 4.6.0-beta2 Now


From the Blogosphere

Screeps and Grafana: Graphing your AI: If you’re unfamiliar with Screeps, it’s an MMO RTS game for programmers, where the objective is to grow your colony through programming your units’ AI. You control your colony by writing JavaScript, which operates 24/7 in the single persistent real-time world filled with other players. This article walks you through graphing all your game stats with Grafana.

ntopng Grafana Integration: The Beauty of Data Visualization: Our friends at ntop created a tutorial so that you can graph ntop monitoring data in Grafana. It goes through the metrics exposed, configuring the ntopng Data Source plugin, and building your first dashboard. They’ve also created a nice video tutorial of the process.

Installing Graphite and Grafana to Display the Graphs of Centreon: This article provides a step-by-step guide to getting your Centreon data into Graphite and visualizing the data in Grafana.

Bit v. Byte Episode 3 – Metrics for the Win: Bit v. Byte is a new weekly Podcast about the web industry, tools and techniques upcoming and in use today. This episode dives into metrics, and discusses Grafana, Prometheus and NGINX Amplify.

Code-Quickie: Visualize heating with Grafana: With the winter weather coming, Reinhard wanted to monitor the stats in his boiler room. This article covers not only the visualization of the data, but also the different devices and sensors you can use in your own home.

RuuviTag with C.H.I.P – BLE – Node-RED: Following the temperature-monitoring theme from the last article, Tobias writes about his journey of hooking up his new RuuviTag to Grafana to measure temperature, relative humidity, air pressure and more.


Early Bird will be Ending Soon

Early bird discounts will be ending soon, but you still have a few days to lock in the lower price. We will be closing early bird on October 31, so don’t wait until the last minute to take advantage of the discounted tickets!

Also, there’s still time to submit your talk. We’ll accept submissions through the end of October. We’re looking for technical and non-technical talks of all sizes. Submit a CFP now.

Get Your Early Bird Ticket Now


Grafana Plugins

This week we have updates to two panels and a brand new panel that can add some animation to your dashboards. Installing plugins in Grafana is easy: for on-prem Grafana, use the grafana-cli tool, or install with one click if you are using Hosted Grafana.

NEW PLUGIN

Geoloop Panel – The Geoloop panel is a simple visualizer for joining GeoJSON to Time Series data, and animating the geo features in a loop. An example of using the panel would be showing the rate of rainfall during a 5-hour storm.

Install Now

UPDATED PLUGIN

Breadcrumb Panel – This plugin keeps track of dashboards you have visited within one session and displays them as a breadcrumb. The latest update fixes some issues with back navigation and url query params.

Update

UPDATED PLUGIN

Influx Admin Panel – The Influx Admin panel duplicates features from the now deprecated Web Admin Interface for InfluxDB and has lots of features like letting you see the currently running queries, which can also be easily killed.

Changes in the latest release:

  • Converted to typescript project based on typescript-template-datasource
  • Select Databases. This only works with PR#8096
  • Added time format options
  • Show tags from response
  • Support template variables in the query

Update


Contribution of the week:

Each week we highlight some of the important contributions from our amazing open source community. Thank you for helping make Grafana better!

The Stockholm Go Meetup had a hackathon this week and sent a PR for letting whitelisted cookies pass through the Grafana proxy. Thanks to everyone who worked on this PR!


Tweet of the Week

We scour Twitter each week to find an interesting/beautiful dashboard and show it off! #monitoringLove

This is awesome – we can’t get enough of these public dashboards!

We Need Your Help!

Do you have a graph that you love because the data is beautiful or because the graph provides interesting information? Please get in touch. Tweet or send us an email with a screenshot, and we’ll tell you about this fun experiment.

Tell Me More


Grafana Labs is Hiring!

We are passionate about open source software and thrive on tackling complex challenges to build the future. We ship code from every corner of the globe and love working with the community. If this sounds exciting, you’re in luck – WE’RE HIRING!

Check out our Open Positions


How are we doing?

Please tell us how we’re doing. Submit a comment on this article below, or post something at our community forum. Help us make these weekly roundups better!

Follow us on Twitter, like us on Facebook, and join the Grafana Labs community.

Getting Ready for AWS re:Invent 2017

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/getting-ready-for-aws-reinvent-2017/

With just 40 days remaining before AWS re:Invent begins, my colleagues and I want to share some tips that will help you to make the most of your time in Las Vegas. As always, our focus is on training and education, mixed in with some after-hours fun and recreation for balance.

Locations, Locations, Locations
The re:Invent Campus will span the length of the Las Vegas strip, with events taking place at the MGM Grand, Aria, Mirage, Venetian, Palazzo, the Sands Expo Hall, the Linq Lot, and the Encore. Each venue will host tracks devoted to specific topics:

MGM Grand – Business Apps, Enterprise, Security, Compliance, Identity, Windows.

Aria – Analytics & Big Data, Alexa, Container, IoT, AI & Machine Learning, and Serverless.

Mirage – Bootcamps, Certifications & Certification Exams.

Venetian / Palazzo / Sands Expo Hall – Architecture, AWS Marketplace & Service Catalog, Compute, Content Delivery, Database, DevOps, Mobile, Networking, and Storage.

Linq Lot – Alexa Hackathons, Gameday, Jam Sessions, re:Play Party, Speaker Meet & Greets.

Encore – Bookable meeting space.

If your interests span more than one topic, plan to take advantage of the re:Invent shuttles that will be making the rounds between the venues.

Lots of Content
The re:Invent Session Catalog is now live and you should start to choose the sessions of interest to you now.

With more than 1100 sessions on the agenda, planning is essential! Some of the most popular “deep dive” sessions will be run more than once and others will be streamed to overflow rooms at other venues. We’ve analyzed a lot of data, run some simulations, and are doing our best to provide you with multiple opportunities to build an action-packed schedule.

We’re just about ready to let you reserve seats for your sessions (follow me and/or @awscloud on Twitter for a heads-up). Based on feedback from earlier years, we have fine-tuned our seat reservation model. This year, 75% of the seats for each session will be reserved and the other 25% are for walk-up attendees. We’ll start to admit walk-in attendees 10 minutes before the start of the session.

Las Vegas never sleeps and neither should you! This year we have a host of late-night sessions, workshops, chalk talks, and hands-on labs to keep you busy after dark.

To learn more about our plans for sessions and content, watch the Get Ready for re:Invent 2017 Content Overview video.

Have Fun
After you’ve had enough training and learning for the day, plan to attend the Pub Crawl, the re:Play party, the Tatonka Challenge (two locations this year), our Hands-On LEGO Activities, and the Harley Ride. Stay fit with our 4K Run, Spinning Challenge, Fitness Bootcamps, and Broomball (a longstanding Amazon tradition).

See You in Vegas
As always, I am looking forward to meeting as many AWS users and blog readers as possible. Never hesitate to stop me and to say hello!

Jeff;

“KRACK”: a severe WiFi protocol flaw

Post Syndicated from corbet original https://lwn.net/Articles/736486/rss

The “krackattacks” web site discloses a set of WiFi protocol flaws that defeat most of the protection that WPA2 encryption is supposed to provide. “In a key reinstallation attack, the adversary tricks a victim into reinstalling an already-in-use key. This is achieved by manipulating and replaying cryptographic handshake messages. When the victim reinstalls the key, associated parameters such as the incremental transmit packet number (i.e. nonce) and receive packet number (i.e. replay counter) are reset to their initial value. Essentially, to guarantee security, a key should only be installed and used once. Unfortunately, we found this is not guaranteed by the WPA2 protocol.”