Tag Archives: IP

AWS Bill Simplification – Consolidated CloudWatch Charges

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-bill-simplification-consolidated-cloudwatch-charges/

The bill that you receive for your use of AWS in July will include a change in the way that Amazon CloudWatch charges are presented. The CloudWatch team made this change in order to make your bill simpler and easier to understand.

Consolidating Charges
In the past, charges for your usage of CloudWatch were split between two sections of your bill. For historical reasons, the charges for CloudWatch Alarms, CloudWatch Metrics, and calls to the CloudWatch API were reported in the Elastic Compute Cloud (EC2) detail section, while charges for CloudWatch Logs and CloudWatch Dashboards were reported in the CloudWatch detail section, like this:

We have received feedback that splitting the charges across two sections of the bill made it difficult to locate and understand the entire set of monitoring charges. In order to address this issue, we are moving the charges that were formerly listed in the Elastic Compute Cloud (EC2) detail section to the CloudWatch detail section. We are making the same change to the detailed billing report, moving the affected charges from the AmazonEC2 product code to the AmazonCloudWatch product code and changing to the AmazonCloudWatch product name. This change does not affect your overall bill; it simply consolidates all of the charges for the use of CloudWatch in one section.

Billing Metric
The CloudWatch billing metric named Estimated Charges can be viewed as a Total Estimated Charge, or broken down By Service:

The total will not change. However, as noted above, the charges that formerly had AmazonEC2 as the ServiceName dimension will now have it set to AmazonCloudWatch:

You may need to adjust thresholds on your billing alarms as a result:
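If you track these charges programmatically, the only change needed is the ServiceName value you filter on. Here is a minimal sketch in Python with boto3; the alarm name and threshold are made-up examples, not values from this post:

import boto3
from datetime import datetime, timedelta

# Billing metrics are only published in US East (Northern Virginia).
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Estimated CloudWatch charges now carry ServiceName=AmazonCloudWatch
# (they previously appeared under AmazonEC2).
stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/Billing",
    MetricName="EstimatedCharges",
    Dimensions=[
        {"Name": "ServiceName", "Value": "AmazonCloudWatch"},
        {"Name": "Currency", "Value": "USD"},
    ],
    StartTime=datetime.utcnow() - timedelta(days=1),
    EndTime=datetime.utcnow(),
    Period=86400,
    Statistics=["Maximum"],
)
print(stats["Datapoints"])

# Hypothetical example: retune a per-service billing alarm that used to watch
# the AmazonEC2 dimension; the alarm name and threshold are placeholders.
cloudwatch.put_metric_alarm(
    AlarmName="cloudwatch-charges-alarm",
    Namespace="AWS/Billing",
    MetricName="EstimatedCharges",
    Dimensions=[
        {"Name": "ServiceName", "Value": "AmazonCloudWatch"},
        {"Name": "Currency", "Value": "USD"},
    ],
    Statistic="Maximum",
    Period=86400,
    EvaluationPeriods=1,
    Threshold=50.0,
    ComparisonOperator="GreaterThanThreshold",
)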

Once again, your total AWS bill will not change. You will begin to see the consolidated charges for CloudWatch in your AWS bill for July 2017.

Jeff;

 

The Barbarians of Plovdiv

Post Syndicated from Йовко Ламбрев original https://yovko.net/barbarians-of-plovdiv/

Manol is a friend of mine. I say this up front so that no one can accuse me of posing as a neutral observer. Not only do I make no claim to be one, I don't even want to be. The scandals in Plovdiv have become so numerous that being neutral can mean only one of two things – either you are part of the outrages that are ruining our city, or you are an accomplice in the mass catatonic stupor in which indifference, not death, is burying the future of Plovdiv's citizens.

Three years ago Plovdiv won the chance to be European Capital of Culture in 2019. But the real contest was won by a group of idealists, through a great deal of work, emotion, effort, and personal sacrifice. They are among those fellow citizens of mine who make dreams come true… and it is inspiring to know some of them.

These people will remain like that, today and tomorrow. No matter what. No matter that they won the most important battle but lost the war – the seemingly eternal war against the mediocrity of the politicians in power, against the petty souls in the municipal council, and against the passivity of their fellow citizens.

The municipal foundation "Plovdiv 2019" has been taken over – slowly, methodically, and by now definitively. Most of the key figures and sensible people with a reputation, who fought for Plovdiv's chance to be a European capital of culture, were thrown out or driven to such despair that they left. The local politicians used the citizens to win them the title (for them it is obviously a trophy, not an opportunity), and then threw them out so they wouldn't get in the way of their efforts to squander everything that had been achieved.

In recent days Plovdiv slept through yet another huge scandal. One of the proofs that our city has eight thousand years of history was dug up with excavators and has now been covered with concrete, while the city mayor, the district mayor, and all the municipal councillors kept their eyes firmly shut to what was happening… A titanic disgrace! We don't need ISIL/Daesh to destroy our past – we manage brilliantly on our own!

And all of us who put up with it are guilty! Because at the last march for European justice in Plovdiv we were a hundred-odd people, while the rest watched us with dull astonishment from the cafés. Because until you are ready to part with your chilled afternoon beer and sacrifice a little time to watch every corrupt politician and hold them in a death grip, they will carry on with their outrages.

I was recently at a particularly telling session of the Plovdiv municipal council. Yes – anyone can go and at least watch them, and if the people who go were to become more numerous, or were to wait outside after a session to ask what exactly had been accomplished, I could foresee a whole different level of regional politics, but… alas – the two rows for the public in the hall are usually nearly empty. It was the same session at which the councillors reversed their decision from last autumn to refuse the shares in the Fair and resolved to accept them. By the way, for the irony of history it is worth recalling that at the very next session they gave them up again. It is hard to find a more eloquent example that so sadly illustrates the principled unprincipledness of the dregs we have for municipal councillors. A double about-face of 180 degrees…

Brazen-facedness is evidently a key talent of mayor Ivan Totev, with which he compensates for his lack of eloquence, charisma, and other more important qualities. Not only because he did not defend Manol Peykov as a member of the Board of Directors, of which he himself is the chairman, or at least give him the chance to defend himself; not only because he spouted a heap of nonsense about a misunderstanding or a mistake that was supposedly going to be corrected on June 8 at an extraordinary session which never took place; but also because at yesterday's regular session, before the eyes of other members of the Foundation's Board, he proposed new people – and not a word about Manol Peykov. And there is no way that is an accident… It is a demonstration of sheer effrontery!

Since you have started, I would very much like you to also read the piece by Ivo Dernev at "Pod Tepeto" – it highlights important details of the story.

Incidentally, at that session – the one about the Fair – mayor Totev ended one of his statements like this:

And when some people here speculate about where history will assign me a place, I want to say that my task is to serve out my second term with dignity, and in 20 years for someone to say: I want to be a mayor like Ivan Totev.

I am not sure the quote is exact, but that is how the Maritsa newspaper reports it. Either way, that was the gist. And it is a wondrous symptom of losing all sense of proportion, of an inferiority complex, and of the absence of even a sanitary minimum of modesty.

Hearing this, I nearly burst out laughing, and I wondered whether anyone advises the Mayor to read more history and to remind himself more often that in societies where the institutions do not work for the citizens, some drop eventually makes the cup of ordinary mortals' patience run over – so that 20 years later people still remember how engineer Ivan Totev was thrown out of the municipality like a lice-ridden mattress, together with most of the municipal councillors. After demolished and burned-down tobacco warehouses, failed projects like "Plovdiv 2019", the transport project, or the one for Central Square, concreted-over burial mounds, and prosecutorial exercises, I increasingly think it would be entirely fair for Plovdiv, for my fellow citizens, and above all for Mr. Totev personally to have such a fond memory in due time.

After Vyara Ankova

Post Syndicated from nellyo original https://nellyo.wordpress.com/2017/06/22/bnt-14/

"A man enters BNT after Vyara Ankova" is a headline in 24 Chasa.

24 Chasa is an outlet that does not speculate – it informs you of what has already been decided: "Why it is dangerous for this CEM to choose the heads of BNT and BNR", and "the election was postponed".

Now they say: a man enters BNT. Which man exactly? That, too, has been stated.

Let us recall how the news gets fabricated – by declaring it to be public opinion: "The media community wants a merger of BNT and BNR." Nothing of the sort – but 24 Chasa hands down the will. Vladimira Yaneva later revealed the mechanics.

Let's see what the media community supposedly wants now – here is a headline: "Kamenarov's colleagues believe he has the best chances." And a big photo.

Paul the oracle octopus has died, but we still have 24 Chasa.

 

Filed under: BG Media, BG Regulator

From Idea to Launch: Getting Your First Customers

Post Syndicated from Gleb Budman original https://www.backblaze.com/blog/how-to-get-your-first-customers/

After deciding to build an unlimited backup service and developing our own storage platform, the next step was to get customers and feedback. Not all customers are created equal. Let’s talk about the types, and when and how to attract them.

How to Get Your First Customers

First Step – Don’t Launch Publicly
Launch when you’re ready for the judgments of people who don’t know you at all. Until then, don’t launch. Sign up users and customers that you know, that you can trust to cut you some slack (while providing you feedback), or at minimum those for whom you can set expectations. For months the Backblaze website was a single page with no ability to get the product and minimal info on what it would be. This is not to counter the Lean Startup ‘iterate quickly with customer feedback’ advice. Rather, this is an acknowledgement that there are different types of feedback required based on your development stage.

Sign Up Your Friends
We knew all of our first customers; they were friends, family, and previous co-workers. Many knew what we were up to and were excited to help us. No magic marketing or tech savviness was required to reach them – we just asked that they try the service. We asked them to provide us feedback on their experience and collected it through email and conversations. While the feedback wasn’t unbiased, it was nonetheless wide-ranging, real, and often insightful. These people were willing to spend time carefully thinking about their feedback and delving deeper into the conversations.

Broaden to Beta
Unless you’re famous or your service costs $1 million per customer, you’ll probably need to expand quickly beyond your friends to build a business – and to get broader feedback. Our next step was to broaden the customer base to beta users.

Opening up the service in beta provides three benefits:

  1. Air cover for the early warts. There are going to be issues, bugs, unnecessarily complicated user flows, and poorly worded text. Beta tells people, “We don’t consider the product ‘done’ and you should expect some of these issues. Please be patient with us.”
  2. A request for feedback. Some people always provide feedback, but beta communicates that you want it.
  3. An awareness opportunity. Opening up in beta provides an early (but not only) opportunity to have an announcement and build awareness.

Pitching Beta to Press
Not all press cares about, or is even willing to cover, beta products. Much of the mainstream press wants to write about services that are fully live, have scale, and are important in the marketplace. However, there are a number of sites that like to cover the leading edge – and that means covering betas. Techcrunch, Ars Technica, and SimpleHelp covered our initial private beta launch. I’ll go into the details of how to work with the press to cover your announcements in a post next month.

Private vs. Public Beta
Both private and public beta provide all three of the benefits above. The difference between the two is that private betas are much more controlled, whereas public ones bring in more users. But this isn’t an either/or – I recommend doing both.

Private Beta
For our original beta in 2008, we decided that we were comfortable with about 1,000 users subscribing to our service. That would provide us with a healthy amount of feedback and get some early adoption, while not overwhelming us or our server capacity, and equally important not causing cash flow issues from having to buy more equipment. So we decided to limit sign-ups to the first 1,000 people; after that we would shut off sign-ups for a while.

But how do you even get 1,000 people to sign up for your service? In our case, get some major publications to write about our beta. (Note: In a future post I’ll explain exactly how to find and reach out to writers. Sign up to receive all of the entrepreneurial posts in this series.)

Public Beta
For our original service (computer backup), we did not have a public beta; but when we launched Backblaze B2, we had a private and then a public beta. The private beta allowed us to work out early kinks, while the public beta brought us a more varied set of use cases. In public beta, there is no cap on the number of users that may try the service.

While this is a first-class problem to have, if your service is flooded and stops working, it’s still a problem. Think through what you will do if that happens. In our early days, when our system could get overwhelmed by volume, we had a static web page hosted with a different registrar that wouldn’t let customers sign up but would tell them when our service would be open again. When we reached a critical volume level we would redirect to it in order to at least provide status for when we could accept more customers.

Collect Feedback
Since one of the goals of betas is to get feedback, we made sure that we had our email addresses clearly presented on the site so users could send us thoughts. We were most interested in broad qualitative feedback on users’ experience, so all emails went to an internal mailing list that would be read by everyone at Backblaze.

For our B2 public and private betas, we also added an optional short survey to the sign-up process. In order to be considered for the private beta you had to fill the survey out, though we found that 80% of users continued to fill out the survey even when it was not required. This survey had both closed-ended questions (“how much data do you have?”) and open-ended ones (“what do you want to use cloud storage for?”).

BTW, despite us getting a lot of feedback now via our support team, Twitter, and marketing surveys, we are always open to more – you can email me directly at gleb.budman {at} backblaze.com.

Don’t Throw Away Users
Initially our backup service was available only on Windows, but we had an email sign-up list for people who wanted it for their Mac. This provided us with a sense of market demand and a ready list of folks who could be beta users and early adopters when we had a Mac version. Have a service targeted at doctors but lawyers are expressing interest? Capture that.

Product Launch

When
The first question is “when” to launch. Presuming your service is in ‘public beta’, what is the advantage of moving out of beta and into a “version 1.0”, “gold”, or “public availability”? That depends on your service and customer base. Some services fly through public beta. Gmail, on the other hand, was (in)famous for being in beta for 5 years, despite having over 100 million users.

The term beta says to users, “give us some leeway, but feel free to use the service”. That’s fine for many consumer apps and will have near zero impact on them. However, services aimed at businesses and government will often not be adopted with a beta label as the enterprise customers want to know the company feels the service is ‘ready’. While Backblaze started out as a purely consumer service, because it was a data backup service, it was important for customers to trust that the service was ready.

No product is bug-free. But from a product readiness perspective, the nomenclature should also be a reflection of the quality of the product. You can launch a product with one feature that works well out of beta. But a product with fifty features on which half the users will bump into problems should likely stay in beta. The customer feedback, surveys, and your own internal testing should guide you in determining this quality during the beta. Be careful about “we’ve only seen that one time” or “I haven’t been able to reproduce that on my machine”; those issues are likely to scale with customers when you launch.

How
Launching out of beta can be as simple as removing the beta label from the website/product. However, this can be a great time to reach out to press, write a blog post, and send an email announcement to your customers.

Consider thanking your beta testers somehow; can they get some feature turned on for free, an extension of their trial, or premium support? If nothing else, remember to thank them for their feedback. Users that signed up during your beta are likely the ones who will propel your service. They had the need and interest to both be early adopters and deal with bugs. They are likely the key to getting 1,000 true fans.

The Beginning
The title of this post was “Getting your first customers”, because getting to launch may feel like the peak of your journey when you’re pre-launch, but it really is just the beginning. It’s a step along the journey of building your business. If your launch is wildly successful, enjoy it, work to build on the momentum, but don’t lose track of building your business. If your launch is a dud, go out for a coffee with your team, say “well that sucks”, and then get back to building your business. You can learn a tremendous amount from your early customers, and they can become your biggest fans, but the success of your business will depend on what you continue to do in the months and years after your launch.

The post From Idea to Launch: Getting Your First Customers appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Kim Dotcom Opposes US’s “Fugitive” Claims at Supreme Court

Post Syndicated from Ernesto original https://torrentfreak.com/kim-dotcom-opposes-uss-fugitive-claims-supreme-court-170622/

When Megaupload and Kim Dotcom were raided five years ago, the authorities seized millions of dollars in cash and other property.

The US government claimed the assets were obtained through copyright crimes and went after the bank accounts, cars, and other seized possessions of the Megaupload defendants.

Kim Dotcom and his colleagues were branded as “fugitives” and the Government won its case. Dotcom’s legal team quickly appealed this verdict, but lost once more at the Fourth Circuit appeals court.

A few weeks ago Dotcom and his former colleagues petitioned the Supreme Court to take on the case.

They don’t see themselves as “fugitives” and want the assets returned. The US Government opposed the request, but according to a new reply filed by Megaupload’s legal team, the US Government ignores critical questions.

The Government has a “vested financial stake” in maintaining the current situation, they write, which allows the authorities to use their “fugitive” claims as an offensive weapon.

“Far from being directed towards persons who have fled or avoided our country while claiming assets in it, fugitive disentitlement is being used offensively to strip foreigners of their assets abroad,” the reply brief (pdf) reads.

According to Dotcom’s lawyers there are several conflicting opinions from lower courts, which should be clarified by the Supreme Court. The fact that Dotcom and his colleagues have decided to fight their extradition in New Zealand doesn’t warrant the seizure of their assets, they argue.

“Absent review, forfeiture of tens of millions of dollars will be a fait accompli without the merits being reached,” they write, adding that this is all the more concerning because the US Government’s criminal case may not be as strong as claimed.

“This is especially disconcerting because the Government’s criminal case is so dubious. When the Government characterizes Petitioners as ‘designing and profiting from a system that facilitated wide-scale copyright infringement,’ it continues to paint a portrait of secondary copyright infringement, which is not a crime.”

The defense team cites several issues that warrant review and urges the Supreme Court to hear the case. If not, the Government will effectively be able to use asset seizures as a pressure tool to urge foreign defendants to come to the US.

“If this stands, the Government can weaponize fugitive disentitlement in order to claim assets abroad,” the reply brief reads.

“It is time for the Court to speak to the Questions Presented. Over the past two decades it has never had a better vehicle to do so, nor is any such vehicle elsewhere in sight,” Dotcom’s lawyers add.

Whether the Supreme Court accepts or denies the case will likely be decided in the weeks to come.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Security updates for Thursday

Post Syndicated from jake original https://lwn.net/Articles/726234/rss

Security updates have been issued by Arch Linux (lxterminal, lxterminal-gtk3, openvpn, and pcmanfm), CentOS (thunderbird), Debian (jython, spip, tomcat7, and tomcat8), openSUSE (openvpn), Oracle (thunderbird), Slackware (openvpn), SUSE (openvpn), and Ubuntu (kernel, linux-lts-trusty, nss, and valgrind).

HiveMQ 3.2.5 released

Post Syndicated from The HiveMQ Team original http://www.hivemq.com/blog/hivemq-3-2-5-released/

The HiveMQ team is pleased to announce the availability of HiveMQ 3.2.5. This is a maintenance release for the 3.2 series and brings the following improvements:

  • Fixed an issue that caused cluster nodes to remain non-operational for a long time after startup
  • Fixed an issue that could cause wildcard (+ operator) subscriptions to get lost
  • Fixed an issue that could cause QoS=1 messages to get lost when using cleanSession=false and shared subscription groups
  • Fixed an issue that could cause the current session count metric to be incorrect
  • Fixed an issue that could cause QoS=0 messages to be resent incorrectly when using shared subscriptions
  • Fixed various issues that could cause spurious exceptions to be logged
  • Fixed an issue that could lead to increased memory usage when using retained messages
  • Improved documentation
  • Improved logging
  • Performance improvements

You can download the new HiveMQ version here.

We strongly recommend upgrading if you are a HiveMQ 3.2.x user.

Have a great day,
The HiveMQ Team

CoderDojo Coolest Projects 2017

Post Syndicated from Ben Nuttall original https://www.raspberrypi.org/blog/coderdojo-coolest-projects-2017/

When I heard we were merging with CoderDojo, I was delighted. CoderDojo is a wonderful organisation with a spectacular community, and it’s going to be great to join forces with the team and work towards our common goal: making a difference to the lives of young people by making technology accessible to them.

You may remember that last year Philip and I went along to Coolest Projects, CoderDojo’s annual event at which their global community showcase their best makes. It was awesome! This year a whole bunch of us from the Raspberry Pi Foundation attended Coolest Projects with our new Irish colleagues, and as expected, the projects on show were as cool as can be.

Coolest Projects 2017 attendee

Crowd at Coolest Projects 2017

This year’s coolest projects!

Young maker Benjamin demoed his brilliant RGB LED table tennis ball display for us, and showed off his project tutorial website codemakerbuddy.com, which he built with Python and Flask.

Coolest Projects 2017: the LED ping-pong ball display; Benjamin and Oly

Next up, Aimee showed us a recipes app she’d made with the MIT App Inventor. It was a really impressive and well thought-out project.

Coolest Projects 2017: Aimee's cook book app and her setup

This very successful OpenCV face detection program with hardware installed in a teddy bear was great as well:

Coolest Projects 2017: the face detection bear, its interface, and its database

Helen’s and Oly’s favourite project involved…live bees!

Coolest Projects 2017 live bees

BEEEEEEEEEEES!

Its creator, 12-year-old Amy, said she wanted to do something to help the Earth. Her project uses various sensors to record data on the bee population in the hive. An adjacent monitor displays the data in a web interface:

Coolest Projects 2017 Aimee's bees

Coolest robots

I enjoyed seeing lots of GPIO Zero projects out in the wild, including this robotic lawnmower made by Kevin and Zach:

Raspberry Pi Lawnmower

Kevin and Zach’s Raspberry Pi lawnmower project with Python and GPIO Zero, shown at CoderDojo Coolest Projects 2017

Philip’s favourite make was a Pi-powered robot you can control with your mind! According to the maker, Laura, it worked really well with Philip because he has no hair.

Philip Colligan on Twitter: “This is extraordinary. Laura from @CoderDojo Romania has programmed a mind controlled robot using @Raspberry_Pi @coolestprojects”

And here are some pictures of even more cool robots we saw:

Coolest Projects 2017: three more cool robots

Games, toys, activities

Oly and I were massively impressed with the work of Mogamad, Daniel, and Basheerah, who programmed a (borrowed) Amazon Echo to make a voice-controlled text-adventure game using Java and the Alexa API. They’ve inspired me to try something similar using the AIY projects kit and adventurelib!

Coolest Projects 2017: Mogamad, Daniel, Basheerah, and Oly; the Alexa text-based game

Christopher Hill did a brilliant job with his Home Alone LEGO house. He used sensors to trigger lights and sounds to make it look like someone’s at home, like in the film. I should have taken a video – seeing it in action was great!

Coolest Projects 2017: the LEGO Home Alone house, its innards, and a close-up

Meanwhile, the Northern Ireland Raspberry Jam group ran a DOTS board activity, which turned their area into a conductive paint hazard zone.

Coolest Projects 2017: the NI Jam DOTS board activity

Creativity and ingenuity

We really enjoyed seeing so many young people collaborating, experimenting, and taking full advantage of the opportunity to make real projects. And we loved how huge the range of technologies in use was: people employed all manner of hardware and software to bring their ideas to life.

Philip Colligan on Twitter: “Wow! Look at that room full of awesome young people. @coolestprojects #coolestprojects @CoderDojo”

Congratulations to the Coolest Projects 2017 prize winners, and to all participants. Here are some of the teams that won in the different categories:

Coolest Projects 2017: three of the winning teams

Take a look at the gallery of all winners over on Flickr.

The wow factor

Raspberry Pi co-founder and Foundation trustee Pete Lomas came along to the event as well. Here’s what he had to say:

It’s hard to describe the scale of the event, and photos just don’t do it justice. The first thing that hit me was the sheer excitement of the CoderDojo ninjas [the children attending Dojos]. Everyone was setting up for their time with the project judges, and their pure delight at being able to show off their creations was evident in both halls. Time and time again I saw the ninjas apply their creativity to help save the planet or make someone’s life better, and it’s truly exciting that we are going to help that continue and expand.

Even after 8 hours, enthusiasm wasn’t flagging – the awards ceremony was just brilliant, with ninjas high-fiving the winners on the way to the stage. This speaks volumes about the ethos and vision of the CoderDojo founders, where everyone is a winner just by being part of a community of worldwide friends. It was a brilliant introduction, and if this weekend was anything to go by, our merger certainly is a marriage made in Heaven.

Join this awesome community!

If all this inspires you as much as it did us, consider looking for a CoderDojo near you – and sign up as a volunteer! There’s plenty of time for young people to build up skills and start working on a project for next year’s event. Check out coolestprojects.com for more information.

The post CoderDojo Coolest Projects 2017 appeared first on Raspberry Pi.

How to Create an AMI Builder with AWS CodeBuild and HashiCorp Packer – Part 2

Post Syndicated from Heitor Lessa original https://aws.amazon.com/blogs/devops/how-to-create-an-ami-builder-with-aws-codebuild-and-hashicorp-packer-part-2/

Written by AWS Solutions Architects Jason Barto and Heitor Lessa

 
In Part 1 of this post, we described how AWS CodeBuild, AWS CodeCommit, and HashiCorp Packer can be used to build an Amazon Machine Image (AMI) from the latest version of Amazon Linux. In this post, we show how to use AWS CodePipeline, AWS CloudFormation, and Amazon CloudWatch Events to continuously ship new AMIs. We use Ansible by Red Hat to harden the OS on the AMIs through a well-known set of security controls outlined by the Center for Internet Security in its CIS Amazon Linux Benchmark.

You’ll find the source code for this post in our GitHub repo.

At the end of this post, we will have the following architecture:

Requirements

 
To follow along, you will need Git and a text editor. Make sure Git is configured to work with AWS CodeCommit, as described in Part 1.

Technologies

 
In addition to the services and products used in Part 1 of this post, we also use these AWS services and third-party software:

AWS CloudFormation gives developers and systems administrators an easy way to create and manage a collection of related AWS resources, provisioning and updating them in an orderly and predictable fashion.

Amazon CloudWatch Events enables you to react selectively to events in the cloud and in your applications. Specifically, you can create CloudWatch Events rules that match event patterns, and take actions in response to those patterns.

AWS CodePipeline is a continuous integration and continuous delivery service for fast and reliable application and infrastructure updates. AWS CodePipeline builds, tests, and deploys your code every time there is a code change, based on release process models you define.

Amazon SNS is a fast, flexible, fully managed push notification service that lets you send individual messages or fan out messages to large numbers of recipients. Amazon SNS makes it simple and cost-effective to send push notifications to mobile device users or email recipients. The service can even send messages to other distributed services.

Ansible is a simple IT automation system that handles configuration management, application deployment, cloud provisioning, ad-hoc task-execution, and multinode orchestration.

Getting Started

 
We use CloudFormation to bootstrap the following infrastructure:

  • AWS CodeCommit repository – Git repository where the AMI builder code is stored.
  • S3 bucket – Build artifact repository used by AWS CodePipeline and AWS CodeBuild.
  • AWS CodeBuild project – Executes the AWS CodeBuild instructions contained in the build specification file.
  • AWS CodePipeline pipeline – Orchestrates the AMI build process, triggered by new changes in the AWS CodeCommit repository.
  • SNS topic – Notifies subscribed email addresses when an AMI build is complete.
  • CloudWatch Events rule – Defines how the AMI builder should send a custom event to notify an SNS topic.
AMI Builder launch templates are available for the N. Virginia (us-east-1) and Ireland (eu-west-1) Regions.

After launching the CloudFormation template linked here, we will have a pipeline in the AWS CodePipeline console. (A Failed status at this stage simply means we don’t have any data in our newly created AWS CodeCommit Git repository yet.)

Next, we will clone the newly created AWS CodeCommit repository.

If this is your first time connecting to an AWS CodeCommit repository, please see the instructions in our documentation on Setup steps for HTTPS Connections to AWS CodeCommit Repositories.

To clone the AWS CodeCommit repository (console)

  1. From the AWS Management Console, open the AWS CloudFormation console.
  2. Choose the AMI-Builder-Blogpost stack, and then choose Outputs.
  3. Make a note of the Git repository URL.
  4. Use git to clone the repository.

For example: git clone https://git-codecommit.eu-west-1.amazonaws.com/v1/repos/AMI-Builder_repo

To clone the AWS CodeCommit repository (CLI)

# Retrieve CodeCommit repo URL
git_repo=$(aws cloudformation describe-stacks --query 'Stacks[0].Outputs[?OutputKey==`GitRepository`].OutputValue' --output text --stack-name "AMI-Builder-Blogpost")

# Clone repository locally
git clone ${git_repo}

Bootstrap the Repo with the AMI Builder Structure

 
Now that our infrastructure is ready, download all the files and templates required to build the AMI.

Your local Git repo should have the following structure:

.
├── ami_builder_event.json
├── ansible
├── buildspec.yml
├── cloudformation
├── packer_cis.json

Next, push these changes to AWS CodeCommit, and then let AWS CodePipeline orchestrate the creation of the AMI:

git add .
git commit -m "My first AMI"
git push origin master

AWS CodeBuild Implementation Details

 
While we wait for the AMI to be created, let’s see what’s changed in our AWS CodeBuild buildspec.yml file:

...
phases:
  ...
  build:
    commands:
      ...
      - ./packer build -color=false packer_cis.json | tee build.log
  post_build:
    commands:
      - egrep "${AWS_REGION}\:\sami\-" build.log | cut -d' ' -f2 > ami_id.txt
      # Packer doesn't return non-zero status; we must do that if Packer build failed
      - test -s ami_id.txt || exit 1
      - sed -i.bak "s/<<AMI-ID>>/$(cat ami_id.txt)/g" ami_builder_event.json
      - aws events put-events --entries file://ami_builder_event.json
      ...
artifacts:
  files:
    - ami_builder_event.json
    - build.log
  discard-paths: yes

In the build phase, we capture Packer output into a file named build.log. In the post_build phase, we take the following actions:

  1. Look up the AMI ID created by Packer and save it to a temporary file (ami_id.txt).
  2. Force AWS CodeBuild to fail if the AMI ID (ami_id.txt) is not found. This is required because Packer doesn’t fail if something goes wrong during the AMI creation process. We have to tell AWS CodeBuild to stop by informing it that an error occurred.
  3. If an AMI ID is found, we update the ami_builder_event.json file and then notify CloudWatch Events that the AMI creation process is complete.
  4. CloudWatch Events publishes a message to an SNS topic. Anyone subscribed to the topic will be notified in email that an AMI has been created.

Lastly, the new artifacts section instructs AWS CodeBuild to upload files produced during the build process (ami_builder_event.json and build.log) to the S3 bucket specified in the Outputs section of the CloudFormation template. These artifacts can then be used as an input artifact in any later stage in AWS CodePipeline.

For information about customizing the artifacts sequence of the buildspec.yml, see the Build Specification Reference for AWS CodeBuild.

CloudWatch Events Implementation Details

 
CloudWatch Events allows you to extend the AMI builder so that it not only sends email after the AMI has been created, but can also hook up any of the supported targets to react to the AMI builder event. Publishing an event means you can decouple whatever actions you take after AMI completion from Packer itself and plug in other actions as you see fit.

For more information about targets in CloudWatch Events, see the CloudWatch Events API Reference.

In this case, CloudWatch Events should receive the following event, match it with a rule we created through CloudFormation, and publish a message to SNS so that you can receive an email.

Example CloudWatch custom event

[
        {
            "Source": "com.ami.builder",
            "DetailType": "AmiBuilder",
            "Detail": "{ \"AmiStatus\": \"Created\"}",
            "Resources": [ "ami-12cd5guf" ]
        }
]

Cloudwatch Events rule

{
  "detail-type": [
    "AmiBuilder"
  ],
  "source": [
    "com.ami.builder"
  ],
  "detail": {
    "AmiStatus": [
      "Created"
    ]
  }
}
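
The buildspec publishes this event with the AWS CLI (aws events put-events). As a sketch of the same call made from code, assuming boto3 and reusing the placeholder AMI ID from the example event above:

import json
import boto3

events = boto3.client("events")

# Publish the custom AMI builder event; the rule shown above matches on
# source, detail-type, and detail.AmiStatus.
events.put_events(
    Entries=[
        {
            "Source": "com.ami.builder",
            "DetailType": "AmiBuilder",
            "Detail": json.dumps({"AmiStatus": "Created"}),
            "Resources": ["ami-12cd5guf"],  # placeholder from the example event
        }
    ]
)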

Example SNS message sent in email

{
    "version": "0",
    "id": "f8bdede0-b9d7...",
    "detail-type": "AmiBuilder",
    "source": "com.ami.builder",
    "account": "<<aws_account_number>>",
    "time": "2017-04-28T17:56:40Z",
    "region": "eu-west-1",
    "resources": ["ami-112cd5guf "],
    "detail": {
        "AmiStatus": "Created"
    }
}

Packer Implementation Details

 
In addition to the build specification file, there are differences between the current version of the HashiCorp Packer template (packer_cis.json) and the one used in Part 1.

Variables

  "variables": {
    "vpc": "{{env `BUILD_VPC_ID`}}",
    "subnet": "{{env `BUILD_SUBNET_ID`}}",
    "ami_name": "Prod-CIS-Latest-AMZN-{{isotime \"02-Jan-06 03_04_05\"}}"
  },
  • ami_name: Prefixes a name used by Packer to tag resources during the Builders sequence.
  • vpc and subnet: Environment variables defined by the CloudFormation stack parameters.

We no longer assume a default VPC is present and instead use the VPC and subnet specified in the CloudFormation parameters. CloudFormation configures the AWS CodeBuild project to use these values as environment variables. They are made available throughout the build process.

That allows for more flexibility should you need to change which VPC and subnet will be used by Packer to launch temporary resources.

Builders

  "builders": [{
    ...
    "ami_name": "{{user `ami_name` | clean_ami_name}}",
    "tags": {
      "Name": "{{user `ami_name`}}"
    },
    "run_tags": {
      "Name": "{{user `ami_name`}}"
    },
    "run_volume_tags": {
      "Name": "{{user `ami_name`}}"
    },
    "snapshot_tags": {
      "Name": "{{user `ami_name`}}"
    },
    ...
    "vpc_id": "{{user `vpc`}}",
    "subnet_id": "{{user `subnet`}}"
  }],

We now have new tag properties (tags, run_tags, run_volume_tags, snapshot_tags) and a new function (clean_ami_name), and we launch temporary resources in the VPC and subnet specified in the environment variables. AMI names can only contain a certain set of ASCII characters. If the input deviates from the expected characters (for example, it includes whitespace or slashes), Packer’s clean_ami_name function will fix it.

For more information, see functions on the HashiCorp Packer website.

Provisioners

  "provisioners": [
    {
        "type": "shell",
        "inline": [
            "sudo pip install ansible"
        ]
    }, 
    {
        "type": "ansible-local",
        "playbook_file": "ansible/playbook.yaml",
        "role_paths": [
            "ansible/roles/common"
        ],
        "playbook_dir": "ansible",
        "galaxy_file": "ansible/requirements.yaml"
    },
    {
      "type": "shell",
      "inline": [
        "rm .ssh/authorized_keys ; sudo rm /root/.ssh/authorized_keys"
      ]
    }
  ]

We used the shell provisioner to apply OS patches in Part 1. Now, we use shell to install Ansible on the target machine and ansible-local to import, install, and execute Ansible roles to make our target machine conform to our standards.

Finally, Packer uses the shell provisioner again to remove temporary keys before it creates an AMI from the temporary target EC2 instance.

Ansible Implementation Details

 
Ansible provides OS patching through a custom Common role that can be easily customized for other tasks.

The CIS Benchmark and CloudWatch Logs are implemented through two third-party Ansible roles that are defined in ansible/requirements.yaml, as seen in the Packer template.

The Ansible provisioner uses Ansible Galaxy to download these roles onto the target machine and execute them as instructed by ansible/playbook.yaml.

For information about how these components are organized, see the Playbook Roles and Include Statements in the Ansible documentation.

The following Ansible playbook (ansible/playbook.yaml) controls the execution order and custom properties:

---
- hosts: localhost
  connection: local
  gather_facts: true    # gather OS info that is made available for tasks/roles
  become: yes           # majority of CIS tasks require root
  vars:
    # CIS Controls whitepaper:  http://bit.ly/2mGAmUc
    # AWS CIS Whitepaper:       http://bit.ly/2m2Ovrh
    cis_level_1_exclusions:
    # 3.4.2 and 3.4.3 effectively blocks access to all ports to the machine
    ## This can break automation; ignoring it as there are stronger mechanisms than that
      - 3.4.2 
      - 3.4.3
    # CloudWatch Logs will be used instead of Rsyslog/Syslog-ng
    ## Same would be true if any other software doesn't support Rsyslog/Syslog-ng mechanisms
      - 4.2.1.4
      - 4.2.2.4
      - 4.2.2.5
    # Autofs is not installed in newer versions, let's ignore
      - 1.1.19
    # Cloudwatch Logs role configuration
    logs:
      - file: /var/log/messages
        group_name: "system_logs"
  roles:
    - common
    - anthcourtney.cis-amazon-linux
    - dharrisio.aws-cloudwatch-logs-agent

Both third-party Ansible roles can be easily configured through variables (vars). We use Ansible playbook variables to exclude CIS controls that don’t apply to our case and to instruct the CloudWatch Logs agent to stream the /var/log/messages log file to CloudWatch Logs.

If you need to add more OS or application logs, you can easily duplicate the playbook and make changes. The CloudWatch Logs agent will ship configured log messages to CloudWatch Logs.

For more information about parameters you can use to further customize third-party roles, download Ansible roles for the Cloudwatch Logs Agent and CIS Amazon Linux from the Galaxy website.

Committing Changes

 
Now that Ansible and CloudWatch Events are configured as a part of the build process, committing any changes to the AWS CodeCommit Git repository will trigger a new AMI build process that can be followed through the AWS CodePipeline console.

When the build is complete, an email will be sent to the email address you provided as a part of the CloudFormation stack deployment. The email serves as notification that an AMI has been built and is ready for use.

Summary

 
We used AWS CodeCommit, AWS CodePipeline, AWS CodeBuild, Packer, and Ansible to build a pipeline that continuously builds new, hardened CIS AMIs. We used Amazon SNS so that email addresses subscribed to an SNS topic are notified upon completion of the AMI build.

By treating our AMI creation process as code, we can iterate and track changes over time. In this way, it’s no different from a software development workflow. With that in mind, software patches, OS configuration, and logs that need to be shipped to a central location are only a git commit away.

Next Steps

 
Here are some ideas to extend this AMI builder:

  • Hook up a Lambda function in Cloudwatch Events to update EC2 Auto Scaling configuration upon completion of the AMI build.
  • Use AWS CodePipeline parallel steps to build multiple Packer images.
  • Add a commit ID as a tag for the AMI you created.
  • Create a scheduled Lambda function through Cloudwatch Events to clean up old AMIs based on timestamp (name or additional tag).
  • Implement Windows support for the AMI builder.
  • Create a cross-account or cross-region AMI build.

CloudWatch Events allows the AMI builder to decouple AMI creation from the actions that follow it, so you can easily add your own logic using targets (AWS Lambda, Amazon SQS, Amazon SNS) to react to the event or to recycle EC2 instances with the new AMI.
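
For instance, the first idea above (a Lambda function that updates an EC2 Auto Scaling configuration) could subscribe to the AmiBuilder rule. The following is only an illustrative sketch; the Auto Scaling group name, instance type, and environment variables are assumptions, not part of this post:

import os
import boto3

autoscaling = boto3.client("autoscaling")

# Hypothetical configuration; in a real setup these would come from your stack.
ASG_NAME = os.environ.get("ASG_NAME", "my-asg")
INSTANCE_TYPE = os.environ.get("INSTANCE_TYPE", "t2.micro")

def handler(event, context):
    # The buildspec places the new AMI ID in the event's resources list.
    ami_id = event["resources"][0]

    # Create a launch configuration pointing at the freshly built AMI...
    lc_name = "ami-builder-" + ami_id
    autoscaling.create_launch_configuration(
        LaunchConfigurationName=lc_name,
        ImageId=ami_id,
        InstanceType=INSTANCE_TYPE,
    )

    # ...then switch the Auto Scaling group over to it.
    autoscaling.update_auto_scaling_group(
        AutoScalingGroupName=ASG_NAME,
        LaunchConfigurationName=lc_name,
    )
    return {"launch_configuration": lc_name}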

If you have questions or other feedback, feel free to leave it in the comments or contribute to the AMI Builder repo on GitHub.

Three Men Sentenced Following £2.5m Internet Piracy Case

Post Syndicated from Andy original https://torrentfreak.com/three-men-sentenced-following-2-5m-internet-piracy-case-170622/

While legal action against low-level individual file-sharers is extremely rare in the UK, the country continues to pose a risk for those engaged in larger-scale infringement.

That is largely due to the activities of the Police Intellectual Property Crime Unit and private anti-piracy outfits such as the Federation Against Copyright Theft (FACT). Investigations are often a joint effort which can take many years to complete, but the outcomes can often involve criminal sentences.

That was the profile of another Internet piracy case that concluded in London this week. It involved three men from the UK: Eric Brooks, 43, from Bolton; Mark Valentine, 44, from Manchester; and Craig Lloyd, 33, from Wolverhampton.

The case began when FACT became aware of potentially infringing activity back in February 2011. The anti-piracy group then investigated for more than a year before handing the case to police in March 2012.

On July 4, 2012, officers from City of London Police arrested Eric Brooks at his home in Bolton following a joint raid with FACT. Computer equipment was seized containing evidence that Brooks had been running a Netherlands-based server hosting more than £100,000 worth of pirated films, music, games, software and ebooks.

According to police, a spreadsheet on Brooks’ computer revealed he had hundreds of paying customers, all recruited from online forums. Using PayPal or bank transfers, each paid money to access the server. Police mentioned no group or site names in information released this week.

“Enquiries with PayPal later revealed that [Brooks] had made in excess of £500,000 in the last eight years from his criminal business and had in turn defrauded the film and TV industry alone of more than £2.5 million,” police said.

“As his criminal enterprise affected not only the film and TV but the wider entertainment industry including music, games, books and software it is thought that he cost the wider industry an amount much higher than £2.5 million.”

On the same day police arrested Brooks, Mark Valentine’s home in Manchester had a similar unwelcome visit. A day later, Craig Lloyd’s home in Wolverhampton became the third target for police.

Computer equipment was seized from both addresses which revealed that the pair had been paying for access to Brooks’ servers in order to service their own customers.

“They too had used PayPal as a means of taking payment and had earned thousands of pounds from their criminal actions; Valentine gaining £34,000 and Lloyd making over £70,000,” police revealed.

But after raiding the trio in 2012, it took more than four years to charge the men. In a feature common to many FACT cases, all three were charged with Conspiracy to Defraud rather than copyright infringement offenses. All three men pleaded guilty before trial.

On Monday, the men were sentenced at Inner London Crown Court. Brooks was sentenced to 24 months in prison, suspended for 12 months and ordered to complete 140 hours of unpaid work.

Valentine and Lloyd were each given 18 months in prison, suspended for 12 months. Each was ordered to complete 80 hours of unpaid work.

Detective Constable Chris Glover, who led the investigation for the City of London Police, welcomed the sentencing.

“The success of this investigation is a result of co-ordinated joint working between the City of London Police and FACT. Brooks, Valentine and Lloyd all thought that they were operating under the radar and doing something which they thought was beyond the controls of law enforcement,” Glover said.

“Brooks, Valentine and Lloyd will now have time in prison to reflect on their actions and the result should act as deterrent for anyone else who is enticed by abusing the internet to the detriment of the entertainment industry.”

While even suspended sentences are a serious matter, none of the men will see the inside of a cell if they meet the conditions of their sentence for the next 12 months. For a case lasting four years involving such large sums of money, that is probably a disappointing result for FACT and the police.

Nevertheless, the men won’t be allowed to enjoy the financial proceeds of their piracy, if indeed any money is left. City of London Police say the trio will be subject to a future confiscation hearing to seize any proceeds of crime.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Patients, Patients – 4

Post Syndicated from Григор original http://www.gatchev.info/blog/?p=2029

More from the series here, here, and here.

—-

– Hey, do you happen to know some folk healer who can chant away syphilis? A friend of mine is searching high and low…

—-

– My son dragged a Coca-Cola into the house. And he tried to hide it so I wouldn't find out, since he knows how much I hate everything Yankee. Trying to pull a fast one on me! I sent him to the shop and poured concentrated vinegar into the bottle. Three days in hospital with first-degree burns of the esophagus. He swears he will never drink that filth again for anything in the world! See how simple it is to knock that Yankee nonsense out of his head!

—-

Every emergency crew had had it up to here with this old lady. Around 11 at night she would invariably call us out – for more than a year now, absolutely every day. There is nothing wrong with her; she is just bored. But the young guy we hired a month ago figured out how to handle her.

His crew goes out to her. They walk in wearing monster masks pulled down over their faces and long pink scarves. They sing and dance for fifteen minutes, then leave… The very next day the old lady writes a complaint to the chief and describes the performance in detail. The chief checks: a standard call-out is on record, the paperwork impeccable – blood pressure measured, breathing auscultated, everything within norms. He calls the crew in for a talking-to. The crew:

– Oh, that old lady? The one who calls us every night to protect her from the goblins?

Problem solved.

—-

A mother around forty arrives with a five-month-old baby. There was supposedly a rash between the baby's legs – could I prescribe an ointment.

– Undress her and put her on the table so I can examine her.

She looks at me suspiciously, without moving:

– Is it really necessary for you to look at my daughter THERE? Can't you just write me a prescription?

I explain that there are all kinds of rashes, all kinds of causes, and accordingly all kinds of medications… She grabs her child, flies out the door, and on her way out turns around and snaps:

– You pedophile!

—-

– Hello? Excuse me, I just wanted to ask how my wife is doing. She was admitted yesterday…

– But sir, it's two o'clock in the morning! Why are you calling now?

– I was told to call after nine!

—-

I wonder whether some alternative healer has announced that any illnesses you hide from the doctor while your history is being taken will all clear up on their own? I am sick of asking a patient three times whether they have an allergy or diabetes, having them swear they don't, and then finding out they have been taking daily pills for both for twenty years!

Or has it somehow bred on the Internet? The Internet is the greatest evil for human health. It gave the lunatics the means to group themselves by diagnosis and help each other resist treatment. And then we wonder where all these home-birthers, sun-eaters, chemtrail-dodgers, and people treating themselves with piss and shit came from…

—-

Today exactly 20 people came through my office (I am a district GP). The calmest and most reasonable of them was a man who had come for a medical certification of disability. Schizophrenia with aggressive episodes…

—-

The next patient walks in – a middle-aged man in an overcoat, with a moustache and a heavy stare:

– Good afternoon. The name is Petrov, I worked for twenty years at the Ministry of Health. So think carefully about how you take care of my tooth!

Worn out as I was after a full day's work, all the blood rushed straight to my head:

– Ohhh, so that's where you're from? Then it's you I have to thank for working in this unbearable madhouse, with no medications, no instruments, broken equipment, useless bosses, a rotten schedule, and a low salary? Please, have a seat in the chair! Make yourself comfortable! I'll take care of your tooth right away! I'll take such good care of it that you'll remember me for the rest of your life!…

—-

These kids today are constantly sick. How much healthier we were back in the day! I wonder what did it. Was it the cherry-tree resin that was so medicinal, or the unripe plums, or the smoke from burning tyres, or the piles of scrapes, cuts, and bruises?…

—-

Six months ago – a call from a worried elderly lady. Her nephew in the flat next door had lost a terrible amount of weight lately and had strange ideas about food. Couldn't we take a look at him anyway, even though we are the emergency service? A humane request. She treats us to white fruit preserve and good wine.

We sigh, but we go in to see the man in question. A short conversation. For two months now he had been feeding only on sunlight. He felt wonderful. He had lost weight because he had expelled all the toxins accumulated in his body, and any moment now he would start gaining weight from the sun.

We go back to the aunt. A referral for psychiatric care. We explain what procedure she has to follow. A week later she comes by with flowers and chocolates. They had admitted him just in time; he had been on the verge of no longer being able to take food at all. The retinas of his eyes were somewhat scorched too, but not fatally. The aunt had even thought to advise him: sun-eating was invented by the yogis, who are used to very little food and to cold and darkness, while our sun here is strong, Bulgarian sun. Put on dark glasses when you look at the sun, because overeating is harmful – it says so everywhere…

At the moment the man is suing us at the European Court. We are supposedly agents of some conspiracy or other and have politically repressed him. I have no idea how that one will end.

—-

Today in my office a woman nearly tore me apart. Wailing to high heaven. Why is the queue so long and why are there so few doctors!

I remark that the only way to get more of them is to cut the ones we have into pieces. The reply:

– I'll complain about you to the ministry right now and they'll fire you!

Presumably she thinks that this will make the doctors more numerous.

—-

Sunday, 5:30 in the morning. A call-out: "80 years old, her face is swollen". Findings: the grandmother had scratched a pimple on her forehead, it got infected, and the area around it swelled up. Next to her stands her son, who is about fifty.

– Didn't you do anything in the past five days, for it to get this far?

– Well, we went to the pharmacy.

– And what did you buy?

– Well, nothing.

– Then what did you go there for?

– Well, I don't know.

– Fine, so what do you want from the emergency service now?!

– Well, for you to tell us what to buy from the pharmacy…

You don't believe aliens exist? There they are, living among us!

—-

A call-out at three in the morning. Fifth floor, no elevator, no lighting in the stairwell either… An excited old man opens the door:

– Doctor, I have diabetes!

– And how did you determine that?

– I have sugar in my urine!

– How was that established?

– I established it myself! When I pee on the floor, my slippers stick to it!

—-

A call-out: a man, 46 years old, "badly swollen balls"… We ring the doorbell, it opens, and there stands a man with his trousers down and, indeed, rather swollen testicles. Orchiepididymitis. We offer him hospitalization, and he exclaims, offended:

– What, aren't you at least going to feel them?

Seeing that we are women…

—-

While I am examining her child, the mother starts quizzing me:

– And how many children do you have?

– I don't have children.

– What?!… Then how are you allowed to treat other people's children without a shred of experience? Have you no shame?!

For a few seconds the gift of speech deserts me. Then I blurt out:

– And pathologists should presumably be forbidden to perform autopsies while they are still alive?

—-

They send us this patient from the drunk tank. After a week-long delirium. I explain to him that he has to take medication. I am met with an iron-clad argument:

– But don't those damage the liver?

—-

– What do you mean you won't give out your phone number, doctor? Aren't you a GP? If my child gets sick at three in the morning, who am I supposed to call?

—-

– Could a different nurse give me the injection?… Well, you look so very young, you're probably still learning how it's done…

—-

The ministry is after me. Refusal to provide medical assistance. Half my neighbours have signed the complaint.

The woman next door suddenly had a heart attack. Her neighbour ran over and rang my doorbell – no result. Other neighbours came, rang, pounded on the door – no result. So they wrote a complaint to the ministry.

And all the while, I had been at the hospital, on duty…

—-

Yesterday a young couple comes to see me. No child, despite two years of trying. I start taking their history and, oops, a small detail. They only ever have sex with a condom. They didn't quite trust each other and were afraid of infections.

I explain to them that this way the sperm has no way of reaching its target, and they look at me like I'm an idiot. Everywhere on the Internet (where exactly is this "everywhere"?) it supposedly says that latex lets sperm through but not infections. I gently try to convince them that this is not quite how it works. Outraged, they leave the office and go looking for the head of the polyclinic to complain about me. I am supposedly totally incompetent and a danger to patients…

—-

That's right. I'm talking you into vaccinating your child because the government has ordered that all children be fitted with chips. The GMO soy is already entering your genome, and soon you too will sprout little roots and turn into soy.

Actually no, before that the little green men will fly in, or the guys in the black suits, your choice, and take you away. And your neighbour too. They won't take me; I'm part of a global conspiracy together with all the other doctors. We experiment on humanity on the orders of the little green men. Yesterday our boss at the all-planetary headquarters in Honduras called me and said you hadn't had your blood work done, so we hadn't managed to steal your DNA. That's why they telepathically made you come here, thinking you were protecting your child from vaccination.

Oh, I forgot. While I vaccinate your child and draw your blood for the test, I will infect you with AIDS, syphilis, hepatitis, and smallpox. Because I feel sorry for the disposable syringes, so I secretly fish them out of the bin and glue them back into their packaging. And anyway, I bought my diploma, so what would I know about sterility?…

Dear people, if you have no brain, please don't watch cable television. And don't read things on the Internet. Go and read the seventh-grade biology textbook instead. Actually, that one is still too advanced for you. Start with the fourth-grade natural science textbook. Or go straight for chlorazine; it's good against psychoses…

The most sensible thing would be not to tell you any of this at all. Evolution should weed out idiocy, otherwise our descendants are in for a bad time. But then again, I am a doctor, after all…

DynamoDB Accelerator (DAX) Now Generally Available

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/dynamodb-accelerator-dax-now-generally-available/

Earlier this year I told you about Amazon DynamoDB Accelerator (DAX), a fully-managed caching service that sits in front of (logically speaking) your Amazon DynamoDB tables. DAX returns cached responses in microseconds, making it a great fit for eventually-consistent read-intensive workloads. DAX supports the DynamoDB API, and is seamless and easy to use. As a managed service, you simply create your DAX cluster and use it as the target for your existing reads and writes. You don’t have to worry about patching, cluster maintenance, replication, or fault management.

Now Generally Available
Today I am pleased to announce that DAX is now generally available. We have expanded DAX into additional AWS Regions and used the preview time to fine-tune performance and availability:

Now in Five Regions – DAX is now available in the US East (Northern Virginia), EU (Ireland), US West (Oregon), Asia Pacific (Tokyo), and US West (Northern California) Regions.

In Production – Our preview customers report that they are using DAX in production, that they loved how easy it was to add DAX to their applications, and that their apps are now running 10x faster.

Getting Started with DAX
As I outlined in my earlier post, it is easy to use DAX to accelerate your existing DynamoDB applications. You simply create a DAX cluster in the desired region, update your application to reference the DAX SDK for Java (the calls are the same; this is a drop-in replacement), and configure the SDK to use your cluster's endpoint. As a read-through/write-through cache, DAX seamlessly handles all of the DynamoDB read/write APIs.
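If you prefer to script the cluster side rather than click through the console, here is a minimal sketch in Python with boto3, assuming the boto3 "dax" control-plane client is available in your SDK version. The cluster name, node type, and IAM role ARN are hypothetical placeholders, and reads and writes still flow through the DAX SDK described above:

import boto3

# Control-plane client for DAX: it manages clusters; it does not serve reads or writes.
dax = boto3.client("dax", region_name="us-east-1")

# Hypothetical names and ARN, shown only to illustrate the shape of the call.
dax.create_cluster(
    ClusterName="my-dax-cluster",
    NodeType="dax.r3.large",
    ReplicationFactor=3,  # one primary node plus two read replicas
    IamRoleArn="arn:aws:iam::123456789012:role/DAXServiceRole",
    Description="Cache in front of an existing DynamoDB table",
)

# Once the cluster is available, its discovery endpoint is what you hand to the
# DAX SDK in place of the regular DynamoDB endpoint.
cluster = dax.describe_clusters(ClusterNames=["my-dax-cluster"])["Clusters"][0]
endpoint = cluster["ClusterDiscoveryEndpoint"]
print(endpoint["Address"], endpoint["Port"])

The sketch deliberately stops at cluster creation and endpoint discovery: as noted above, the data-plane drop-in client is currently the Java SDK.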

We are working on SDK support for other languages, and I will share additional information as it becomes available.

DAX Pricing
You pay for each node in the cluster (see the DynamoDB Pricing page for more information) on a per-hour basis, with prices starting at $0.269 per hour in the US East (Northern Virginia) and US West (Oregon) regions. With DAX, each of the nodes in your cluster serves as a read target and as a failover target for high availability. The DAX SDK is cluster aware and will issue round-robin requests to all nodes in the cluster so that you get to make full use of the cluster’s cache resources.

Because DAX can easily handle sudden spikes in read traffic, you may be able to reduce the amount of provisioned throughput for your tables, resulting in an overall cost savings while still returning results in microseconds.
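As a rough illustration of that cost lever, lowering a table's provisioned read capacity once the cache absorbs most reads is a single call; the table name and capacity numbers below are placeholders, not recommendations:

import boto3

dynamodb = boto3.client("dynamodb")

# Hypothetical values: dial read capacity down once DAX is serving the hot reads.
dynamodb.update_table(
    TableName="my-table",
    ProvisionedThroughput={"ReadCapacityUnits": 100, "WriteCapacityUnits": 50},
)

Watch your throttling and cache-hit metrics in CloudWatch for a while before and after making a change like this.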

Jeff;

 

A Stack Clash disclosure post-mortem

Post Syndicated from corbet original https://lwn.net/Articles/726137/rss

For those who are curious about how the community deals with a serious
vulnerability, Solar Designer’s description of the embargo process around
the “Stack Clash” issue (and his unhappiness with it) is worth
a read. “Qualys first informed the distros list about this upcoming set of issues
on May 3. This initial notification didn’t say Stack Clash nor anything
like that, but merely expressed intent to disclose the issues and
concern that the list’s maximum embargo duration of 14 to 19 days might
not be sufficient in this case. In the resulting discussion, I agreed
to consider extending the embargo beyond list policy should there be
convincing reasons for that. In retrospect, I think I shouldn’t have
agreed to that.”

Protect Web Sites & Services Using Rate-Based Rules for AWS WAF

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/protect-web-sites-services-using-rate-based-rules-for-aws-waf/

AWS WAF (Web Application Firewall) helps to protect your application from many different types of application-layer attacks that involve requests that are malicious or malformed. As I showed you when I first wrote about this service (New – AWS WAF), you can define rules that match cross-site scripting, IP address, SQL injection, size, or content constraints:

When incoming requests match rules, actions are invoked. Actions can allow, block, or simply count matches.

The existing rule model is powerful and gives you the ability to detect and respond to many different types of attacks. It does not, however, allow you to respond to attacks that simply consist of a large number of otherwise valid requests from a particular IP address. These requests might be a web-layer DDoS attack, a brute-force login attempt, or even a partner integration gone awry.

New Rate-Based Rules
Today we are adding Rate-based Rules to WAF, giving you control of when IP addresses are added to and removed from a blacklist, along with the flexibility to handle exceptions and special cases:

Blacklisting IP Addresses – You can blacklist IP addresses that make requests at a rate that exceeds a configured threshold rate.

IP Address Tracking – You can see which IP addresses are currently blacklisted.

IP Address Removal – IP addresses that have been blacklisted are automatically removed when they no longer make requests at a rate above the configured threshold.

IP Address Exemption – You can exempt certain IP addresses from blacklisting by using an IP address whitelist inside of a rate-based rule. For example, you might want to allow trusted partners to access your site at a higher rate.

Monitoring & Alarming – You can watch and alarm on CloudWatch metrics that are published for each rule.

You can combine new Rate-based Rules with WAF Conditions to implement sophisticated rate-limiting strategies. For example, you could use a Rate-based Rule and a WAF Condition that matches your login pages. This would allow you to impose a modest threshold on your login pages (to avoid brute-force password attacks) and allow a more generous one on your marketing or system status pages.

Thresholds are defined in terms of the number of incoming requests from a single IP address within a 5 minute period. Once this threshold is breached, additional requests from the IP address are blocked until the request rate falls below the threshold.

Using Rate-Based Rules
Here’s how you would define a Rate-based Rule that protects the /login portion of your site. Start by defining a WAF condition that matches the desired string in the URI of the page:

Then use this condition to define a Rate-based Rule (the rate limit is expressed in terms of requests within a 5 minute interval, but the blacklisting goes into effect as soon as the limit is breached):

With the condition and the rule in place, create a Web ACL (ProtectLoginACL) to bring it all together and to attach it to the AWS resource (a CloudFront distribution in this case):

Then attach the rule (ProtectLogin) to the Web ACL:

The resource is now protected in accord with the rule and the web ACL. You can monitor the associated CloudWatch metrics (ProtectLogin and ProtectLoginACL in this case). You could even create CloudWatch Alarms and use them to fire Lambda functions when a protection threshold is breached. The code could examine the offending IP address and make a complex, business-driven decision, perhaps adding a whitelisting rule that gives an extra-generous allowance to a trusted partner or to a user with a special payment plan.
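If you would rather drive the same walkthrough from code than from the console, here is a rough sketch against the classic WAF API using Python and boto3. The names (ProtectLogin, ProtectLoginACL), the 2,000-request threshold, and the placeholder values are illustrative only, every mutating call needs a fresh change token, and associating the finished Web ACL with your CloudFront distribution is then done through the distribution's own configuration:

import boto3

waf = boto3.client("waf")  # classic WAF, used with CloudFront distributions

# 1. Condition: match requests whose URI starts with "/login".
token = waf.get_change_token()["ChangeToken"]
bm = waf.create_byte_match_set(Name="LoginPage", ChangeToken=token)["ByteMatchSet"]

token = waf.get_change_token()["ChangeToken"]
waf.update_byte_match_set(
    ByteMatchSetId=bm["ByteMatchSetId"],
    ChangeToken=token,
    Updates=[{
        "Action": "INSERT",
        "ByteMatchTuple": {
            "FieldToMatch": {"Type": "URI"},
            "TargetString": b"/login",
            "TextTransformation": "LOWERCASE",
            "PositionalConstraint": "STARTS_WITH",
        },
    }],
)

# 2. Rate-based rule: track IPs that exceed 2,000 requests per 5 minutes on /login.
token = waf.get_change_token()["ChangeToken"]
rule = waf.create_rate_based_rule(
    Name="ProtectLogin", MetricName="ProtectLogin",
    RateKey="IP", RateLimit=2000, ChangeToken=token,
)["Rule"]

token = waf.get_change_token()["ChangeToken"]
waf.update_rate_based_rule(
    RuleId=rule["RuleId"], ChangeToken=token, RateLimit=2000,
    Updates=[{
        "Action": "INSERT",
        "Predicate": {"Negated": False, "Type": "ByteMatch",
                      "DataId": bm["ByteMatchSetId"]},
    }],
)

# 3. Web ACL that blocks requests matching the rate-based rule.
token = waf.get_change_token()["ChangeToken"]
acl = waf.create_web_acl(
    Name="ProtectLoginACL", MetricName="ProtectLoginACL",
    DefaultAction={"Type": "ALLOW"}, ChangeToken=token,
)["WebACL"]

token = waf.get_change_token()["ChangeToken"]
waf.update_web_acl(
    WebACLId=acl["WebACLId"], ChangeToken=token,
    Updates=[{
        "Action": "INSERT",
        "ActivatedRule": {"Priority": 1, "RuleId": rule["RuleId"],
                          "Action": {"Type": "BLOCK"},
                          "Type": "RATE_BASED"},
    }],
)

The per-rule CloudWatch metrics mentioned above will then show up under the MetricName values chosen here.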

Available Now
The new Rate-based Rules are available now and you can start using them today! Rate-based rules are priced the same as Regular rules; see the WAF Pricing page for more info.

Jeff;

MPAA & RIAA Demand Tough Copyright Standards in NAFTA Negotiations

Post Syndicated from Andy original https://torrentfreak.com/mpaa-riaa-demand-tough-copyright-standards-in-nafta-negotiations-170621/

The North American Free Trade Agreement (NAFTA) between the United States, Canada, and Mexico was negotiated more than 25 years ago. With a quarter of a century of developments to contend with, the United States wants to modernize.

“While our economy and U.S. businesses have changed considerably over that period, NAFTA has not,” the government says.

With this in mind, the US requested comments from interested parties seeking direction for negotiation points. With those comments now in, groups like the MPAA and RIAA have been making their positions known. It’s no surprise that intellectual property enforcement is high on the agenda.

“Copyright is the lifeblood of the U.S. motion picture and television industry. As such, MPAA places high priority on securing strong protection and enforcement disciplines in the intellectual property chapters of trade agreements,” the MPAA writes in its submission.

“Strong IPR protection and enforcement are critical trade priorities for the music industry. With IPR, we can create good jobs, make significant contributions to U.S. economic growth and security, invest in artists and their creativity, and drive technological innovation,” the RIAA notes.

While both groups have numerous demands, it’s clear that each seeks an environment where not only infringers can be held liable, but also Internet platforms and services.

For the RIAA, there is a big focus on the so-called ‘Value Gap’, a phenomenon found on user-uploaded content sites like YouTube that are able to offer infringing content while avoiding liability due to Section 512 of the DMCA.

“Today, user-uploaded content services, which have developed sophisticated on-demand music platforms, use this as a shield to avoid licensing music on fair terms like other digital services, claiming they are not legally responsible for the music they distribute on their site,” the RIAA writes.

“Services such as Apple Music, TIDAL, Amazon, and Spotify are forced to compete with services that claim they are not liable for the music they distribute.”

But if sites like YouTube are exercising their rights while acting legally under current US law, how can partners Canada and Mexico do any better? For the RIAA, that can be achieved by holding them to standards envisioned by the group when the DMCA was passed, not how things have panned out since.

Demanding that negotiators “protect the original intent” of safe harbor, the RIAA asks that a “high-level and high-standard service provider liability provision” is pursued. This, the music group says, should only be available to “passive intermediaries without requisite knowledge of the infringement on their platforms, and inapplicable to services actively engaged in communicating to the public.”

In other words, make sure that YouTube and similar sites won’t enjoy the same level of safe harbor protection as they do today.

The RIAA also requires any negotiated safe harbor provisions in NAFTA to be flexible in the event that the DMCA is tightened up in response to the ongoing safe harbor rules study.

In any event, NAFTA should not “support interpretations that no longer reflect today’s digital economy and threaten the future of legitimate and sustainable digital trade,” the RIAA states.

For the MPAA, Section 512 is also perceived as a problem. While noting that the original intent was to foster a system of shared responsibility between copyright owners and service providers, the MPAA says courts have subsequently let copyright holders down. Like the RIAA, the MPAA also suggests that Canada and Mexico can be held to higher standards.

“We recommend a new approach to this important trade policy provision by moving to high-level language that establishes intermediary liability and appropriate limitations on liability. This would be fully consistent with U.S. law and avoid the same misinterpretations by policymakers and courts overseas,” the MPAA writes.

“In so doing, a modernized NAFTA would be consistent with Trade Promotion Authority’s negotiating objective of ‘ensuring that standards of protection and enforcement keep pace with technological developments’.”

The MPAA also has some specific problems with Mexico, including unauthorized camcording. The Hollywood group says that 85 illicit audio and video recordings of films were linked to Mexican theaters in 2016. However, recording is not currently a criminal offense in Mexico.

Another issue for the MPAA is that criminal sanctions for commercial scale infringement are only available if the infringement is for profit.

“This has hampered enforcement against the above-discussed camcording problem but also against online infringement, such as peer-to-peer piracy, that may be on a scale that is immensely harmful to U.S. rightsholders but nonetheless occur without profit by the infringer,” the MPAA writes.

“The modernized NAFTA like other U.S. bilateral free trade agreements must provide for criminal sanctions against commercial scale infringements without proof of profit motive.”

Also of interest are the MPAA’s complaints against Mexico’s telecoms laws. Unlike in the US and many countries in Europe, Mexico’s ISPs are forbidden to hand out their customers’ personal details to rights holders looking to sue. This, the MPAA says, needs to change.

The submissions from the RIAA and MPAA can be found here and here (pdf).


Is your product “Powered by Raspberry Pi”?

Post Syndicated from Mike Buffham original https://www.raspberrypi.org/blog/powered-by-raspberry-pi/

One of the most exciting things for us about the growth of the Raspberry Pi community has been the number of companies that have grown up around the platform, and who have chosen to embed our products into their own. While many of these design-ins have been “silent”, a number of people have asked us for a standardised way to indicate that a product contains a Raspberry Pi or a Raspberry Pi Compute Module.

Powered by Raspberry Pi Logo

At the end of last year, we introduced a “Powered by Raspberry Pi” logo to meet this need. It is now included in our trademark rules and brand guidelines, which you can find on our website. Below we’re showing an early example of a “Powered by Raspberry Pi”-branded device, the KUNBUS Revolution Pi industrial PC. It has already made it onto the market, and we think it will inspire you to include our logo on the packaging of your own product.

KUNBUS RevPi
Powered by Raspberry Pi logo on RevPi

Using the “Powered by Raspberry Pi” brand

Adding the “Powered by Raspberry Pi” logo to your packaging design is a great way to remind your customers that a portion of the sale price of your product goes to the Raspberry Pi Foundation and supports our educational work.

As with all things Raspberry Pi, our rules for using this brand are fairly straightforward: the only thing you need to do is to fill out this simple application form. Once you have submitted it, we will review your details and get back to you as soon as possible.

When we approve your application, we will require that you use one of the official “Powered by Raspberry Pi” logos and that you ensure it is at least 30 mm wide. We are more than happy to help you if you have any design queries related to this – just contact us at info@raspberrypi.org.

The post Is your product “Powered by Raspberry Pi”? appeared first on Raspberry Pi.

South Korean Webhost Nayana Pays USD1 Million Ransom

Post Syndicated from Darknet original http://feedproxy.google.com/~r/darknethackers/~3/jTy5T4S7TZQ/

So far this Nayana payout is the biggest ransomware payment I’ve seen reported; there have probably been bigger ones paid but kept under wraps. It’s certainly a good deal for the bad actors in this play, and given that Nayana was running an outdated kernel along with PHP and Apache versions from 2006, you can’t feel too sorry for them. […]

The post South Korean…

Read the full post at darknet.org.uk

Court Grants Subpoenas to Unmask ‘TVAddons’ and ‘ZemTV’ Operators

Post Syndicated from Ernesto original https://torrentfreak.com/court-grants-subpoenas-to-unmask-tvaddons-and-zemtv-operators-170621/

Earlier this month we broke the news that third-party Kodi add-on ZemTV and the TVAddons library were being sued in a federal court in Texas.

In a complaint filed by American satellite and broadcast provider Dish Network, both stand accused of copyright infringement, facing up to $150,000 for each offense.

While the allegations are serious, Dish doesn’t know the full identities of the defendants.

To find out more, the company requested a broad range of subpoenas from the court, targeting Amazon, Github, Google, Twitter, Facebook, PayPal, and several hosting providers.

From Dish’s request

This week the court granted the subpoenas, which means that they can be forwarded to the companies in question. Whether that will be enough to identify the people behind ‘TVAddons’ and ‘ZemTV’ remains to be seen, but Dish has cast its net wide.

For example, the subpoena directed at Google covers any type of information that can be used to identify the account holder of taacc14@gmail.com, which is believed to be tied to ZemTV.

The information requested from Google includes IP address logs with session date and timestamps, but also covers “all communications,” including GChat messages from 2014 onwards.

Similarly, Twitter is required to hand over information tied to the accounts of the users “TV Addons” and “shani_08_kodi” as well as other accounts linked to tvaddons.ag and streamingboxes.com. This also applies to the various tweets that were sent through these accounts.

The subpoena specifically mentions “all communications, including ‘tweets’, Twitter sent to or received from each Twitter Account during the time period of February 1, 2014 to present.”

From the Twitter subpoena

Similar subpoenas were granted for the other services, tailored towards the information Dish hopes to find there. For example, the broadcast provider also requests details of each transaction from PayPal, as well as all debits and credits to the accounts.

In some parts, the subpoenas appear to be quite broad. PayPal is asked to reveal information on any account with the credit card statement “Shani,” for example. Similarly, Github is required to hand over information on accounts that are ‘associated’ with the tvaddons.ag domain, which is referenced by many people who are not directly connected to the site.

The service providers in question still have the option to challenge the subpoenas or ask the court for further clarification. A full overview of all the subpoena requests is available here (Exhibit 2 and onwards), including all the relevant details. This also includes several letters to foreign hosting providers.

While Dish still appears to be keen to find out who is behind ‘TVAddons’ and ‘ZemTV,’ not much has been heard from the defendants in question.

ZemTV developer “Shani” shut down his addon soon after the lawsuit was announced, without mentioning it specifically. TVAddons, meanwhile, has been offline for well over a week, without any public notice about the reason for the prolonged downtime.

The court’s order granting the subpoenas and letters of request is available here (pdf).


In the Works – AWS Region in Hong Kong

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/in-the-works-aws-region-in-hong-kong/

Last year we launched new AWS Regions in Canada, India, Korea, the UK (London), and the United States (Ohio), and announced that new regions are coming to France (Paris), China (Ningxia), and Sweden (Stockholm).

Coming to Hong Kong in 2018
Today, I am happy to be able to tell you that we are planning to open up an AWS Region in Hong Kong, in 2018. Hong Kong is a leading international financial center, well known for its service oriented economy. It is rated highly on innovation and for ease of doing business. As an evangelist, I get to visit many great cities in the world, and was lucky to have spent some time in Hong Kong back in 2014 and met a number of awesome customers there. Many of these customers have given us feedback that they wanted a local AWS Region.

This will be the eighth AWS Region in Asia Pacific joining six other Regions there — Singapore, Tokyo, Sydney, Beijing, Seoul, and Mumbai, and an additional Region in China (Ningxia) expected to launch in the coming months. Together, these Regions will provide our customers with a total of 19 Availability Zones (AZs) and allow them to architect highly fault tolerant applications.

Today, our infrastructure comprises 43 Availability Zones across 16 geographic regions worldwide, with another three AWS Regions (and eight Availability Zones) in France, China, and Sweden coming online throughout 2017 and 2018 (see the AWS Global Infrastructure page for more info).

We are looking forward to serving new and existing customers in Hong Kong and working with partners across Asia-Pacific. Of course, the new region will also be open to existing AWS customers who would like to process and store data in Hong Kong. Public sector organizations such as government agencies, educational institutions, and nonprofits in Hong Kong will be able to use this region to store sensitive data locally (the AWS in the Public Sector page has plenty of success stories drawn from our worldwide customer base).

If you are a customer or a partner and have specific questions about this Region, you can contact our Hong Kong team.

Help Wanted
If you are interested in learning more about AWS positions in Hong Kong, please visit the Amazon Jobs site and set the location to Hong Kong.

Jeff;