Singapore ISPs Block 53 Pirate Sites Following MPAA Legal Action

Post Syndicated from Andy original https://torrentfreak.com/singapore-isps-block-53-pirate-sites-following-mpaa-legal-action-180521/

Under increasing pressure from copyright holders, in 2014 Singapore passed amendments to copyright law that allow ISPs to block ‘pirate’ sites.

“The prevalence of online piracy in Singapore turns customers away from legitimate content and adversely affects Singapore’s creative sector,” said then Senior Minister of State for Law Indranee Rajah.

“It can also undermine our reputation as a society that respects the protection of intellectual property.”

After the amendments took effect in December 2014, there was a considerable pause before any websites were targeted. However, in September 2016, at the request of the MPA(A), Solarmovie.ph became the first website ordered to be blocked under Singapore’s amended Copyright Act. The High Court subsequently ordered several major ISPs to disable access to the site.

A new wave of blocks announced this morning is the country’s most significant so far, with dozens of ‘pirate’ sites targeted following a successful application by the MPAA earlier this year.

In total, 53 sites across 154 domains – including those operated by The Pirate Bay plus KickassTorrents and Solarmovie variants – have been rendered inaccessible by ISPs including Singtel, StarHub, M1, MyRepublic and ViewQwest.

“In Singapore, these sites are responsible for a major portion of copyright infringement of films and television shows,” an MPAA spokesman told The Straits Times (paywall).

“This action by rights owners is necessary to protect the creative industry, enabling creators to create and keep their jobs, protect their works, and ensure the continued provision of high-quality content to audiences.”

Before granting a blocking injunction, the High Court must satisfy itself that the proposed online locations meet the threshold of being “flagrantly infringing”. This means that a site like YouTube, which carries a lot of infringing content but is not dedicated to infringement, would not ordinarily get caught up in the dragnet.

Sites considered for blocking must have a primary purpose to infringe, a threshold that is tipped in copyright holders’ favor when the sites’ operators display a lack of respect for copyright law and have already had their domains blocked in other jurisdictions.

The Court also weighs a number of additional factors including whether blocking would place an unacceptable burden on the shoulders of ISPs, whether the blocking demand is technically possible, and whether it will be effective.

In common with other jurisdictions such as the UK and Australia, sites targeted for blocking must be informed of the applications made against them, to ensure they’re given a chance to defend themselves in court. No fully-fledged ‘pirate’ site has ever defended a blocking application in Singapore, or indeed any other jurisdiction in the world.

Finally, should any measures be taken by ‘pirate’ sites to evade an ISP blockade, copyright holders can apply to the Singapore High Court to amend the blocking order. This is similar to the Australian model where each application must be heard on its merits, rather than the UK model where a more streamlined approach is taken.

According to a recent report by Motion Picture Association Canada, at least 42 countries are now obligated to block infringing sites. In Europe alone, 1,800 sites and 5,300 domains have been rendered inaccessible, with Portugal, Italy, the UK, and Denmark leading the way.

In Canada, where copyright holders are lobbying hard for a site-blocking regime of their own, there’s pressure to avoid the “uncertain, slow and expensive” route of going through the courts.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

masscan, macOS, and firewall

Post Syndicated from Robert Graham original https://blog.erratasec.com/2018/05/masscan-macos-and-firewall.html

One of the more useful features of masscan is the “--banners” check, which connects to the TCP port, sends a request, and gets a basic response back. However, since masscan has its own TCP stack, it’ll interfere with the operating system’s TCP stack if they are sharing the same IPv4 address. The operating system will reply with a RST packet before the TCP connection can be established.

The way to fix this is to use the built-in packet-filtering firewall to block those packets in the operating-system TCP/IP stack. The masscan program, which captures traffic before the packet-filter is applied, still sees everything, while the operating system never sees the filtered packets.

Note that we are talking about the “packet-filter” firewall feature here. Remember that macOS, like most operating systems these days, has two separate firewalls: an application firewall and a packet-filter firewall. The application firewall is the one you see in System Preferences labeled “Firewall”, and it controls things based upon the application’s identity rather than by which ports it uses. This is normally “on” by default. The packet-filter is normally “off” by default and is of little use to normal users.

Also note that macOS changed packet-filters around version 10.10.5 (“Yosemite”, October 2014). The older one is known as “ipfw”, which was the default firewall for FreeBSD (much of macOS is based on FreeBSD). The replacement is known as PF, which comes from OpenBSD. Whereas you used to use the old “ipfw” command on the command line, you now use the “pfctl” command, as well as the “/etc/pf.conf” configuration file.

What we need to filter is the source port of the packets that masscan will send, so that when replies are received, they won’t reach the operating-system stack and will go to masscan instead. To do this, we need to find a range of ports that won’t conflict with the operating system. Namely, when the operating system creates outgoing connections, it randomly chooses a source port within a certain range. We want masscan to use source ports in a different range.

To figure out the range macOS uses, we run the following command:

sysctl net.inet.ip.portrange.first net.inet.ip.portrange.last

On my laptop, which probably uses the macOS defaults, I get the following range. Sniffing with Wireshark confirms this is the range used for source ports for outgoing connections.

net.inet.ip.portrange.first: 49152
net.inet.ip.portrange.last: 65535

So this means I shouldn’t use source ports anywhere in the range 49152 to 65535. On my laptop, I’ve decided to use ports 40000 to 41023 for masscan. The size of the range masscan uses must be a power of 2, so here I’m using 1024 (two to the tenth power).
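As a quick sanity check, the arithmetic above can be verified in the shell (the numbers here are this article’s example values; adjust them if your sysctl reports a different ephemeral range):

```shell
# Candidate masscan source-port range (example values from above).
first=40000
last=41023
os_first=49152   # start of the OS ephemeral range reported by sysctl

size=$(( last - first + 1 ))
# A power of two has exactly one bit set, so size & (size - 1) must be 0.
if [ $(( size & (size - 1) )) -eq 0 ] && [ "$last" -lt "$os_first" ]; then
    echo "ok: $size ports, no overlap with the OS range"
else
    echo "bad range" >&2
fi
# prints "ok: 1024 ports, no overlap with the OS range"
```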

To configure masscan, I can either type the parameter “--source-port 40000-41023” every time I run the program, or I can add the following line to /etc/masscan/masscan.conf. Remember that by default, masscan will look in that configuration file for any configuration parameters, so you don’t have to keep retyping them on the command line.

source-port = 40000-41023

Next, I need to add the following firewall rule to the bottom of /etc/pf.conf. Note that PF’s “><” operator matches a range excluding both endpoints, so the inclusive “:” operator is used here to cover all of 40000-41023:

block in proto tcp from any to any port 40000:41023

However, we aren’t done yet. By default, the packet-filter firewall is off on some versions of macOS. Therefore, every time you reboot your computer, you need to enable it. The simple way to do this is to run the following on the command line:

pfctl -e

Or, if that doesn’t work, try:

pfctl -E

If the firewall is already running, then you’ll need to load the file explicitly (or reboot):

pfctl -f /etc/pf.conf

You can check to see if the rule is active:

pfctl -s rules

ISP Telenor Will Block The Pirate Bay in Sweden Without a Shot Fired

Post Syndicated from Andy original https://torrentfreak.com/isp-telenor-will-block-the-pirate-bay-in-sweden-without-a-shot-fired-180520/

Back in 2014, Universal Music, Sony Music, Warner Music, Nordisk Film and the Swedish Film Industry filed a lawsuit against Bredbandsbolaget, one of Sweden’s largest ISPs.

The copyright holders asked the Stockholm District Court to order the ISP to block The Pirate Bay and streaming site Swefilmer, claiming that the provider knowingly facilitated access to the pirate platforms and assisted their pirating users.

Soon after, the ISP fought back, refusing to block the sites in a determined response to the Court.

“Bredbandsbolaget’s role is to provide its subscribers with access to the Internet, thereby contributing to the free flow of information and the ability for people to reach each other and communicate,” the company said in a statement.

“Bredbandsbolaget does not block content or services based on individual organizations’ requests. There is no legal obligation for operators to block either The Pirate Bay or Swefilmer.”

In February 2015 the parties met in court, with Bredbandsbolaget arguing in favor of the “important principle” that ISPs should not be held responsible for content exchanged over the Internet, in the same way the postal service isn’t responsible for the contents of an envelope.

But with TV companies SVT, TV4 Group, MTG TV, SBS Discovery and C More teaming up with the IFPI alongside Paramount, Disney, Warner and Sony in the case, Bredbandsbolaget would need to pull out all the stops to obtain victory. The company worked hard and initially the news was good.

In November 2015, the Stockholm District Court decided that the copyright holders could not force Bredbandsbolaget to block the pirate sites, ruling that the ISP’s operations did not amount to participation in the copyright infringement offenses carried out by some of its ‘pirate’ subscribers.

However, the case subsequently went to appeal, with the brand new Patent and Market Court of Appeal hearing arguments. In February 2017 it handed down its decision, which overruled the earlier ruling of the District Court and ordered Bredbandsbolaget to implement “technical measures” to prevent its customers accessing the ‘pirate’ sites through a number of domain names and URLs.

With nowhere left to go, Bredbandsbolaget and owner Telenor were left hanging onto their original statement which vehemently opposed site-blocking.

“It is a dangerous path to go down, which forces Internet providers to monitor and evaluate content on the Internet and block websites with illegal content in order to avoid becoming accomplices,” they said.

In March 2017, Bredbandsbolaget blocked The Pirate Bay but said it would not give up the fight.

“We are now forced to contest any future blocking demands. It is the only way for us and other Internet operators to ensure that private players should not have the last word regarding the content that should be accessible on the Internet,” Bredbandsbolaget said.

While it’s not clear whether any additional blocking demands have been filed with the ISP, this week an announcement by Bredbandsbolaget parent company Telenor revealed an unexpected knock-on effect. Seemingly without a single shot being fired, The Pirate Bay will now be blocked by Telenor too.

The background lies in Telenor’s acquisition of Bredbandsbolaget back in 2005. Until this week the companies operated under separate brands but will now merge into one entity.

“Telenor Sweden and Bredbandsbolaget today take the final step on their joint trip and become the same company with the same name. As a result, Telenor becomes a comprehensive provider of broadband, TV and mobile communications,” the company said in a statement this week.

“Telenor Sweden and Bredbandsbolaget have shared both logo and organization for the last 13 years. Today, we take the last step in the relationship and consolidate the companies under the same name.”

Up until this final merger, 600,000 Bredbandsbolaget broadband customers were denied access to The Pirate Bay. Now it appears that Telenor’s 700,000 fiber and broadband customers will be affected too. The new single-brand company says it has decided to block the notorious torrent site across its entire network.

“We have not discontinued Bredbandsbolaget, but we have merged Telenor and Bredbandsbolaget and become one,” the company said.

“When we share the same network, The Pirate Bay is blocked by both Telenor and Bredbandsbolaget and there is nothing we plan to change in the future.”

TorrentFreak contacted the PR departments of both Telenor and Bredbandsbolaget requesting information on why a court order aimed only at the latter’s customers would now affect those of the former too, more than doubling the blockade’s reach. Neither company responded, which leaves only speculation as to their motives.

On the one hand, the decision to voluntarily implement an expanded blockade could perhaps be viewed as a little unusual given how much time, effort and money has been invested in fighting web-blockades in Sweden.

On the other, the merger of the companies may present legal difficulties as far as the court order goes and it could certainly cause friction among the customer base of Telenor if some customers could access TPB, and others could not.

In any event, the legal basis for web-blocking on copyright infringement grounds was firmly established last year at the EU level, which means that Telenor would lose any future legal battle, should it decide to dig in its heels. On that basis alone, the decision to block all customers probably makes perfect commercial sense.


Supply-Chain Security

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/05/supply-chain_se.html

Earlier this month, the Pentagon stopped selling phones made by the Chinese companies ZTE and Huawei on military bases because they might be used to spy on their users.

It’s a legitimate fear, and perhaps a prudent action. But it’s just one instance of the much larger issue of securing our supply chains.

All of our computerized systems are deeply international, and we have no choice but to trust the companies and governments that touch those systems. And while we can ban a few specific products, services or companies, no country can isolate itself from potential foreign interference.

In this specific case, the Pentagon is concerned that the Chinese government demanded that ZTE and Huawei add “backdoors” to their phones that could be surreptitiously turned on by government spies or cause them to fail during some future political conflict. This tampering is possible because the software in these phones is incredibly complex. It’s relatively easy for programmers to hide these capabilities, and correspondingly difficult to detect them.

This isn’t the first time the United States has taken action against foreign software suspected to contain hidden features that can be used against us. Last December, President Trump signed into law a bill banning software from the Russian company Kaspersky from being used within the US government. In 2012, the focus was on Chinese-made Internet routers. Then, the House Intelligence Committee concluded: “Based on available classified and unclassified information, Huawei and ZTE cannot be trusted to be free of foreign state influence and thus pose a security threat to the United States and to our systems.”

Nor is the United States the only country worried about these threats. In 2014, China reportedly banned antivirus products from both Kaspersky and the US company Symantec, based on similar fears. In 2017, the Indian government identified 42 smartphone apps that China subverted. Back in 1997, the Israeli company Check Point was dogged by rumors that its government added backdoors into its products; other of that country’s tech companies have been suspected of the same thing. Even al-Qaeda was concerned; ten years ago, a sympathizer released the encryption software Mujahedeen Secrets, claimed to be free of Western influence and backdoors. If a country doesn’t trust another country, then it can’t trust that country’s computer products.

But this trust isn’t limited to the country where the company is based. We have to trust the country where the software is written — and the countries where all the components are manufactured. In 2016, researchers discovered that many different models of cheap Android phones were sending information back to China. The phones might be American-made, but the software was from China. In 2016, researchers demonstrated an even more devious technique, where a backdoor could be added at the computer chip level in the factory that made the chips, without the knowledge of, and undetectable by, the engineers who designed the chips in the first place. Pretty much every US technology company manufactures its hardware in countries such as Malaysia, Indonesia, China and Taiwan.

We also have to trust the programmers. Today’s large software programs are written by teams of hundreds of programmers scattered around the globe. Backdoors, put there by we-have-no-idea-who, have been discovered in Juniper firewalls and D-Link routers, both of which are US companies. In 2003, someone almost slipped a very clever backdoor into Linux. Think of how many countries’ citizens are writing software for Apple or Microsoft or Google.

We can go even farther down the rabbit hole. We have to trust the distribution systems for our hardware and software. Documents disclosed by Edward Snowden showed the National Security Agency installing backdoors into Cisco routers being shipped to the Syrian telephone company. There are fake apps in the Google Play store that eavesdrop on you. Russian hackers subverted the update mechanism of a popular brand of Ukrainian accounting software to spread the NotPetya malware.

In 2017, researchers demonstrated that a smartphone can be subverted by installing a malicious replacement screen.

I could go on. Supply-chain security is an incredibly complex problem. US-only design and manufacturing isn’t an option; the tech world is far too internationally interdependent for that. We can’t trust anyone, yet we have no choice but to trust everyone. Our phones, computers, software and cloud systems are touched by citizens of dozens of different countries, any one of whom could subvert them at the demand of their government. And just as Russia is penetrating the US power grid so they have that capability in the event of hostilities, many countries are almost certainly doing the same thing at the consumer level.

We don’t know whether the risk of Huawei and ZTE equipment is great enough to warrant the ban. We don’t know what classified intelligence the United States has, and what it implies. But we do know that this is just a minor fix for a much larger problem. It’s doubtful that this ban will have any real effect. Members of the military, and everyone else, can still buy the phones. They just can’t buy them on US military bases. And while the US might block the occasional merger or acquisition, or ban the occasional hardware or software product, we’re largely ignoring that larger issue. Solving it borders on somewhere between incredibly expensive and realistically impossible.

Perhaps someday, global norms and international treaties will render this sort of device-level tampering off-limits. But until then, all we can do is hope that this particular arms race doesn’t get too far out of control.

This essay previously appeared in the Washington Post.

Former Judge Accuses IP Court of Using ‘Pirate’ Microsoft Software

Post Syndicated from Andy original https://torrentfreak.com/former-judge-accuses-ip-court-of-using-pirate-microsoft-software-180429/

While piracy of movies, TV shows, and music grabs most of the headlines, software piracy is a huge issue, from both consumer and commercial perspectives.

For many years, software such as Photoshop has been pirated on a grand scale and around the world, millions of computers rely on cracked and unlicensed copies of Microsoft’s Windows software.

One of the key drivers of this kind of piracy is the relative expense of software. Open source variants are nearly always available but big brand names always seem more popular due to their market penetration and perceived ease of use.

While using pirated software very rarely gets individuals into trouble, the same cannot be said of unlicensed commercial operators. That appears to be the case in Russia where somewhat ironically the Court for Intellectual Property Rights stands accused of copyright infringement.

A complaint filed by the Paragon law firm at the Prosecutor General’s Office of the Court for Intellectual Property Rights (CIP) alleges that the Court is illegally using Microsoft software, something which has the potential to affect the outcome of court cases involving the US-based software giant.

Paragon is representing Alexander Shmuratov, who is a former Assistant Judge at the Court for Intellectual Property Rights. Shmuratov worked at the Court for several years and claims that the computers there were being operated with expired licenses.

Shmuratov himself told Kommersant that he “saw the notice of an activation failure every day when using MS Office products” in intellectual property court.

A representative of the Prosecutor General’s Office confirmed that a complaint had been received but said it had been forwarded to the Ministry of Internal Affairs.

In respect of the counterfeit software claims, CIP categorically denies the allegations. CIP says that licenses for all Russian courts were purchased back in 2008 and remained in force until 2011. In 2013, Microsoft agreed to an extension.

Only adding more intrigue to the story, CIP Assistant Chairman Catherine Ulyanova said that the initiator of the complaint, former judge Alexander Shmuratov, was dismissed from the CIP because he provided false information about income. He later mounted a challenge against his dismissal but was unsuccessful.

Ulyanova said that Microsoft licensed all courts from 2006 for use of Windows and MS Office. The licenses were acquired through a third-party company and more licenses than necessary were purchased, with some licenses being redistributed for use by CIP in later years with the consent of Microsoft.

Kommersant was unable to confirm how licenses were paid for beyond December 2011 but apparently an “official confirmation letter from the Irish headquarters of Microsoft, which does not object to the transfer of CIP licenses” had been sent to the Court.

Responding to Shmuratov’s allegations that software he used hadn’t been activated, Ulyanova said that technical problems had no relationship with the existence of software licenses.

The question of whether the Court is properly licensed will be determined at a later date but observers are already raising questions concerning CIP’s historical dealings with Microsoft not only in terms of licensing, but in cases it handled.

In the period 2014-2017, the Court for Intellectual Property Rights handled around 80 cases involving Microsoft, with claims ranging from 50,000 rubles ($800) to several million rubles.


Registrars Suspend 11 Pirate Site Domains, 89 More in the Crosshairs

Post Syndicated from Andy original https://torrentfreak.com/registrars-suspend-11-pirate-site-domains-89-more-in-the-crosshairs-180423/

In addition to website blocking which is running rampant across dozens of countries right now, targeting the domains of pirate sites is considered to be a somewhat effective anti-piracy tool.

The vast majority of websites are found using a recognizable name so when they become inaccessible, site operators have to work quickly to get the message out to fans. That can mean losing visitors, at least in the short term, and also contributes to the rise of copy-cat sites that may not have users’ best interests at heart.

Nevertheless, crime-fighting has always been about disrupting the ability of the enemy to do business so with this in mind, authorities in India began taking advice from the UK’s Police Intellectual Property Crime Unit (PIPCU) a couple of years ago.

After studying the model developed by PIPCU, India formed its Digital Crime Unit (DCU), which follows a multi-stage plan.

Initially, pirate sites and their partners are told to cease-and-desist. Next, complaints are filed with advertisers, who are asked to stop funding site activities. Service providers and domain registrars also receive a written complaint from the DCU, asking them to suspend services to the sites in question.

Last July, the DCU earmarked around 9,000 sites where pirated content was being made available. From there, 1,300 were placed on a shortlist for targeted action. Precisely how many have been contacted thus far is unclear but authorities are now reporting success.

According to local reports, the Maharashtra government’s Digital Crime Unit has managed to have 11 pirate site domains suspended following complaints from players in the entertainment industry.

As is often the case (and to avoid them receiving even more attention) the sites in question aren’t being named but according to Brijesh Singh, special Inspector General of Police in Maharashtra, the sites had a significant number of visitors.

Their domain registrars were sent a notice under Section 149 of the Code Of Criminal Procedure, which grants police the power to take preventative action when a crime is suspected. It’s yet to be confirmed officially but it seems likely that pirate sites utilizing local registrars were targeted by the authorities.

“Responding to our notice, the domain names of all these websites, that had a collective viewership of over 80 million, were suspended,” Singh said.

Laxman Kamble, a police inspector attached to the state government’s Cyber Cell, said the pilot project was launched after the government received complaints from Viacom and Star but back in January there were reports that the MPAA had also become involved.

Using the model pioneered by London’s PIPCU, 19 parameters were applied to a list of pirate sites in order to place them on the shortlist. These are reported to include the type of content being uploaded and downloaded, and the overall number of downloads.

Kamble reports that a further 89 websites, which have domains registered abroad but are very popular in India, are now being targeted. Whether overseas registrars will prove as compliant remains to be seen. After booking initial success, even PIPCU itself experienced problems keeping up the momentum with registrars.

In 2014, information obtained by TorrentFreak following a Freedom of Information request revealed that only five out of 70 domain registrars had complied with police requests to suspend domains.

A year later, PIPCU confirmed that suspending pirate domain names was no longer a priority for them after ICANN ruled that registrars don’t have to suspend domain names without a valid court order.


Substantial Amendments to the Copyright and Related Rights Act Have Been Promulgated

Post Syndicated from nellyo original https://nellyo.wordpress.com/2018/04/23/zid-zapsp-2/

On 29 March 2018, amendments to the Copyright and Related Rights Act were promulgated in an extraordinary issue of the State Gazette. The adopted amendments transpose the provisions of Directive 2014/26/EU, which was due to be transposed into Bulgarian law by April 2016. In January 2018, the European Commission referred the matter to the Court of Justice of the EU and asked that Bulgaria be fined around EUR 19,000 per day for failing to comply with EU law.

The Directive aims to coordinate national rules concerning access to the activity of managing copyright and related rights by collective management organisations, the modalities of their governance, and the supervisory framework (recital 8); to lay down requirements applicable to collective management organisations in order to ensure a high standard of governance, financial management, transparency and reporting (recital 9); and to ensure the necessary minimum quality of the cross-border services provided by collective management organisations, in particular with regard to the transparency of the repertoire represented and the accuracy of the financial flows related to the use of the rights, as well as to facilitate the provision of multi-territorial, multi-repertoire services (recital 40).

The amendments are substantial and extensive; detailed analyses will likely follow soon.

And an excerpt from the transitional and final provisions of the amending act:
§ 27. Within six months of the entry into force of this Act, the Minister of Culture shall submit to the European Commission a report on the state and development of multi-territorial licensing of online rights in the territory of the Republic of Bulgaria. The report shall contain information on the availability of multi-territorial licences, on compliance with Chapter Eleven “i” by the collective management organisations, and an assessment of the development of multi-territorial licensing of the online use of musical works by users, rightholders and other interested parties.
§ 28. The Minister of Culture shall provide the European Commission with a list of the registered collective management organisations and shall notify it of any changes to the list within three months of their occurrence.
§ 29. The following additions are made to the Electronic Communications Act:
1. In Art. 73, para. 3, item 15 is created:
“15. obligations to provide true and accurate information about the number of users and subscribers of undertakings providing electronic communications networks and/or services.”
2. In Art. 231, para. 2, the following is added at the end: “the contracts with content providers, as well as an up-to-date database of subscribers, in compliance with the requirements for the protection of personal data”.
§ 30. In the Culture Protection and Development Act, in Art. 31, para. 1, item 1, after the words “para. 2” the words “and Art. 98c1, para. 6” are added.
§ 31. This Act enters into force on the day of its promulgation in the State Gazette, with the exception of §§ 18 and 30, which enter into force nine months after its promulgation.

Implementing safe AWS Lambda deployments with AWS CodeDeploy

Post Syndicated from Chris Munns original https://aws.amazon.com/blogs/compute/implementing-safe-aws-lambda-deployments-with-aws-codedeploy/

This post courtesy of George Mao, AWS Senior Serverless Specialist – Solutions Architect

AWS Lambda and AWS CodeDeploy recently made it possible to automatically shift incoming traffic between two function versions based on a preconfigured rollout strategy. This new feature allows you to gradually shift traffic to the new function. If there are any issues with the new code, you can quickly roll back and limit the impact on your application.

Previously, you had to manually move 100% of traffic from the old version to the new version. Now, you can have CodeDeploy automatically execute pre- or post-deployment tests and automate a gradual rollout strategy. Traffic shifting is built right into the AWS Serverless Application Model (SAM), making it easy to define and deploy your traffic shifting capabilities. SAM is an extension of AWS CloudFormation that provides a simplified way of defining serverless applications.

In this post, I show you how to use SAM, CloudFormation, and CodeDeploy to accomplish an automated rollout strategy for safe Lambda deployments.

Scenario

For this walkthrough, you write a Lambda application that returns a count of the S3 buckets that you own. You deploy it and use it in production. Later on, you receive requirements that tell you that you need to change your Lambda application to count only buckets that begin with the letter “a”.
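As a rough illustration, the Lambda application might look something like the following sketch. The function and handler names here are illustrative assumptions, not taken from the post’s sample repository; the bucket-counting logic is kept separate from the AWS call so it can be exercised without credentials:

```python
import json


def count_buckets(bucket_names, prefix=""):
    """Count the bucket names that start with the given prefix."""
    return sum(1 for name in bucket_names if name.startswith(prefix))


def lambda_handler(event, context):
    # boto3 is imported lazily so count_buckets() stays testable offline.
    import boto3
    buckets = boto3.client("s3").list_buckets()["Buckets"]
    names = [b["Name"] for b in buckets]
    # The "new" version described above would pass prefix="a" instead.
    return {"statusCode": 200,
            "body": json.dumps({"count": count_buckets(names)})}
```

The later requirement, counting only buckets that begin with the letter “a”, then becomes a one-argument change, which is exactly the kind of small diff you want to roll out gradually.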

Before you make the change, you need to be sure that your new Lambda application works as expected. If it does have issues, you want to minimize the number of impacted users and roll back easily. To accomplish this, you create a deployment process that publishes the new Lambda function, but does not send any traffic to it. You use CodeDeploy to execute a PreTraffic test to ensure that your new function works as expected. After the test succeeds, CodeDeploy automatically shifts traffic gradually to the new version of the Lambda function.

Your Lambda function is exposed as a REST service via an Amazon API Gateway deployment. This makes it easy to test and integrate.

Prerequisites

To execute the SAM and CloudFormation deployment, you must have the following IAM permissions:

  • cloudformation:*
  • lambda:*
  • codedeploy:*
  • iam:create*

You may use the AWS SAM Local CLI or the AWS CLI to package and deploy your Lambda application. If you choose to use SAM Local, be sure to install it onto your system. For more information, see AWS SAM Local Installation.

All of the code used in this post can be found in this GitHub repository: https://github.com/aws-samples/aws-safe-lambda-deployments.

Walkthrough

For this post, use SAM to define your resources because it comes with built-in CodeDeploy support for safe Lambda deployments.  The deployment is handled and automated by CloudFormation.

SAM allows you to define your Serverless applications in a simple and concise fashion, because it automatically creates all necessary resources behind the scenes. For example, if you do not define an execution role for a Lambda function, SAM automatically creates one. SAM also creates the CodeDeploy application necessary to drive the traffic shifting, as well as the IAM service role that CodeDeploy uses to execute all actions.

Create a SAM template

To get started, write your SAM template and call it template.yaml.

AWSTemplateFormatVersion : '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: An example SAM template for Lambda Safe Deployments.

Resources:

  returnS3Buckets:
    Type: AWS::Serverless::Function
    Properties:
      Handler: returnS3Buckets.handler
      Runtime: nodejs6.10
      AutoPublishAlias: live
      Policies:
        - Version: "2012-10-17"
          Statement: 
          - Effect: "Allow"
            Action: 
              - "s3:ListAllMyBuckets"
            Resource: '*'
      DeploymentPreference:
          Type: Linear10PercentEvery1Minute
          Hooks:
            PreTraffic: !Ref preTrafficHook
      Events:
        Api:
          Type: Api
          Properties:
            Path: /test
            Method: get

  preTrafficHook:
    Type: AWS::Serverless::Function
    Properties:
      Handler: preTrafficHook.handler
      Policies:
        - Version: "2012-10-17"
          Statement: 
          - Effect: "Allow"
            Action: 
              - "codedeploy:PutLifecycleEventHookExecutionStatus"
            Resource:
              !Sub 'arn:aws:codedeploy:${AWS::Region}:${AWS::AccountId}:deploymentgroup:${ServerlessDeploymentApplication}/*'
        - Version: "2012-10-17"
          Statement: 
          - Effect: "Allow"
            Action: 
              - "lambda:InvokeFunction"
            Resource: !Ref returnS3Buckets.Version
      Runtime: nodejs6.10
      FunctionName: 'CodeDeployHook_preTrafficHook'
      DeploymentPreference:
        Enabled: false
      Timeout: 5
      Environment:
        Variables:
          NewVersion: !Ref returnS3Buckets.Version

This template creates two functions:

  • returnS3Buckets
  • preTrafficHook

The returnS3Buckets function is where your application logic lives. It’s a simple piece of code that uses the AWS SDK for JavaScript in Node.js to call the Amazon S3 listBuckets API action and return the number of buckets.

'use strict';

var AWS = require('aws-sdk');
var s3 = new AWS.S3();

exports.handler = (event, context, callback) => {
	console.log("I am here! " + context.functionName  +  ":"  +  context.functionVersion);

	s3.listBuckets(function (err, data){
		if(err){
			console.log(err, err.stack);
			callback(null, {
				statusCode: 500,
				body: "Failed!"
			});
		}
		else{
			var allBuckets = data.Buckets;

			console.log("Total buckets: " + allBuckets.length);
			callback(null, {
				statusCode: 200,
				body: allBuckets.length
			});
		}
	});	
}

Review the key parts of the SAM template that defines returnS3Buckets:

  • The AutoPublishAlias attribute instructs SAM to automatically publish a new version of the Lambda function for each new deployment and link it to the live alias.
  • The Policies attribute specifies additional policy statements that SAM adds onto the automatically generated IAM role for this function. The first statement provides the function with permission to call listBuckets.
  • The DeploymentPreference attribute configures the type of rollout pattern to use. In this case, you are shifting traffic in a linear fashion, moving 10% of traffic every minute to the new version. For more information about supported patterns, see Serverless Application Model: Traffic Shifting Configurations.
  • The Hooks attribute specifies that the preTrafficHook Lambda function should run before CodeDeploy begins shifting traffic. This function performs validation testing on the newly deployed version: it invokes the new Lambda function, checks the results, and, if they are satisfactory, tells CodeDeploy to proceed with the rollout via a call to the codedeploy.putLifecycleEventHookExecutionStatus API action.
  • The Events attribute defines an API-based event source that can trigger this function. It accepts requests on the /test path using an HTTP GET method.
The preTrafficHook function implements that validation logic:

'use strict';

const AWS = require('aws-sdk');
const codedeploy = new AWS.CodeDeploy({apiVersion: '2014-10-06'});
var lambda = new AWS.Lambda();

exports.handler = (event, context, callback) => {

	console.log("Entering PreTraffic Hook!");
	
	// Read the DeploymentId & LifecycleEventHookExecutionId from the event payload
    var deploymentId = event.DeploymentId;
	var lifecycleEventHookExecutionId = event.LifecycleEventHookExecutionId;

	var functionToTest = process.env.NewVersion;
	console.log("Testing new function version: " + functionToTest);

	// Perform validation of the newly deployed Lambda version
	var lambdaParams = {
		FunctionName: functionToTest,
		InvocationType: "RequestResponse"
	};

	var lambdaResult = "Failed";
	lambda.invoke(lambdaParams, function(err, data) {
		if (err){	// an error occurred
			console.log(err, err.stack);
			lambdaResult = "Failed";
		}
		else{	// successful response
			var result = JSON.parse(data.Payload);
			console.log("Result: " +  JSON.stringify(result));

			// Check the response for valid results
			// The response will be a JSON payload with statusCode and body properties. ie:
			// {
			//		"statusCode": 200,
			//		"body": 51
			// }
			if(result.body == 9){	
				lambdaResult = "Succeeded";
				console.log ("Validation testing succeeded!");
			}
			else{
				lambdaResult = "Failed";
				console.log ("Validation testing failed!");
			}

			// Complete the PreTraffic Hook by sending CodeDeploy the validation status
			var params = {
				deploymentId: deploymentId,
				lifecycleEventHookExecutionId: lifecycleEventHookExecutionId,
				status: lambdaResult // status can be 'Succeeded' or 'Failed'
			};
			
			// Pass AWS CodeDeploy the prepared validation test results.
			codedeploy.putLifecycleEventHookExecutionStatus(params, function(err, data) {
				if (err) {
					// Validation failed.
					console.log('CodeDeploy Status update failed');
					console.log(err, err.stack);
					callback("CodeDeploy Status update failed");
				} else {
					// Validation succeeded.
					console.log('Codedeploy status updated successfully');
					callback(null, 'Codedeploy status updated successfully');
				}
			});
		}  
	});
}

The hook is hardcoded to check that the number of S3 buckets returned is 9.

Review the key parts of the SAM template that defines preTrafficHook:

  • The Policies attribute specifies additional policy statements that SAM adds to the automatically generated IAM role for this function. The first statement grants permission to call the CodeDeploy PutLifecycleEventHookExecutionStatus API action. The second grants permission to invoke the specific version of the returnS3Buckets function under test.
  • This function has traffic shifting disabled by setting DeploymentPreference.Enabled to false.
  • The FunctionName attribute explicitly tells CloudFormation what to name the function. Otherwise, CloudFormation uses the default naming convention: [stackName]-[FunctionName]-[uniqueID]. The function is named with the “CodeDeployHook_” prefix because the CodeDeployServiceRole role only allows InvokeFunction on functions whose names begin with that prefix.
  • Set the Timeout attribute to allow enough time to complete your validation tests.
  • Use an environment variable to inject the ARN of the newest deployed version of the returnS3Buckets function. The ARN allows the function to know the specific version to invoke and perform validation testing on.

Deploy the function

Your SAM template is all set and the code is written—you’re ready to deploy the function for the first time. Here’s how to do it via the SAM CLI. Replace “sam” with “cloudformation” to use CloudFormation instead.

First, package the function. This command returns a CloudFormation importable file, packaged.yaml.

sam package --template-file template.yaml --s3-bucket mybucket --output-template-file packaged.yaml

Now deploy everything:

sam deploy --template-file packaged.yaml --stack-name mySafeDeployStack --capabilities CAPABILITY_IAM

At this point, both Lambda functions have been deployed within the CloudFormation stack mySafeDeployStack. The returnS3Buckets function has been deployed as version 1:

SAM automatically created a few things, including the CodeDeploy application, with the deployment pattern that you specified (Linear10PercentEvery1Minute). There is currently one deployment group, with no action, because no deployments have occurred. SAM also created the IAM service role that this CodeDeploy application uses:

There is a single managed policy attached to this role, which allows CodeDeploy to invoke any Lambda function that begins with “CodeDeployHook_”.
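The shape of that policy can be sketched as a Python dict (an illustration only; the exact document AWS generates may differ):

```python
# Sketch of the managed policy attached to the CodeDeploy service role.
# The key point is the resource pattern restricting invocation to
# functions whose names begin with the "CodeDeployHook_" prefix.
codedeploy_hook_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "lambda:InvokeFunction",
            "Resource": "arn:aws:lambda:*:*:function:CodeDeployHook_*",
        }
    ],
}
```

This is why the preTrafficHook function must carry the prefix: without it, CodeDeploy would lack permission to invoke the hook.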

An API named safeDeployStack has been set up. It targets your Lambda function at the /test resource using the GET method. When you test the endpoint, API Gateway invokes the returnS3Buckets function, which returns the number of S3 buckets that you own. In this case, it’s 51.

Publish a new Lambda function version

Now implement the requirements change, which is to make returnS3Buckets count only buckets that begin with the letter “a”. The code now looks like the following (see returnS3BucketsNew.js in GitHub):

'use strict';

var AWS = require('aws-sdk');
var s3 = new AWS.S3();

exports.handler = (event, context, callback) => {
	console.log("I am here! " + context.functionName  +  ":"  +  context.functionVersion);

	s3.listBuckets(function (err, data){
		if(err){
			console.log(err, err.stack);
			callback(null, {
				statusCode: 500,
				body: "Failed!"
			});
		}
		else{
			var allBuckets = data.Buckets;

			console.log("Total buckets: " + allBuckets.length);
			//callback(null, allBuckets.length);

			//  New Code begins here
			var counter=0;
			for(var i  in allBuckets){
				if(allBuckets[i].Name[0] === "a")
					counter++;
			}
			console.log("Total buckets starting with a: " + counter);

			callback(null, {
				statusCode: 200,
				body: counter
			});
			
		}
	});	
}

Repackage and redeploy with the same two commands as earlier:

sam package --template-file template.yaml --s3-bucket mybucket --output-template-file packaged.yaml
	
sam deploy --template-file packaged.yaml --stack-name mySafeDeployStack --capabilities CAPABILITY_IAM

CloudFormation understands that this is a stack update instead of an entirely new stack. You can see that reflected in the CloudFormation console:

During the update, CloudFormation deploys the new Lambda function as version 2 and adds it to the “live” alias. There is no traffic routing there yet. CodeDeploy now takes over to begin the safe deployment process.

The first thing CodeDeploy does is invoke the preTrafficHook function. Verify that this happened by reviewing the Lambda logs and metrics:

The function should progress successfully, invoke Version 2 of returnS3Buckets, and finally invoke the CodeDeploy API with a success code. After this occurs, CodeDeploy begins the predefined rollout strategy. Open the CodeDeploy console to review the deployment progress (Linear10PercentEvery1Minute):

Verify the traffic shift

During the deployment, verify that the traffic shift has started by running the test periodically. As the deployment shifts toward the new version, a growing percentage of responses return 9 (buckets beginning with “a”) instead of 51 (all buckets).

A minute later, you see 10% more traffic shifting to the new version. The whole process takes 10 minutes to complete. After completion, open the Lambda console and verify that the “live” alias now points to version 2:

After 10 minutes, the deployment is complete and CodeDeploy signals success to CloudFormation and completes the stack update.
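The Linear10PercentEvery1Minute schedule can be sketched as a simple simulation (for intuition only; CodeDeploy manages the actual alias weights):

```python
def linear_shift_schedule(step_percent=10, interval_minutes=1):
    """Yield (minute, new_version_weight) pairs for a linear rollout."""
    weight = 0
    minute = 0
    while weight < 100:
        weight = min(weight + step_percent, 100)
        minute += interval_minutes
        yield minute, weight

# Linear10PercentEvery1Minute: 10% more traffic each minute, done in 10 minutes.
schedule = list(linear_shift_schedule())
print(schedule[-1])  # -> (10, 100): all traffic on the new version after 10 minutes
```

Other supported patterns (Canary, AllAtOnce) only change the shape of this schedule; the hook-then-shift mechanics stay the same.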

Check the results

If you invoke the function alias manually, you see the results of the new implementation.

aws lambda invoke --function-name [lambda arn to live alias] out.txt

You can also execute the prod stage of your API and verify the results by issuing an HTTP GET to the invoke URL:

Summary

This post has shown you how you can safely automate your Lambda deployments using the Lambda traffic shifting feature. You used the Serverless Application Model (SAM) to define your Lambda functions and configured CodeDeploy to manage your deployment patterns. Finally, you used CloudFormation to automate the deployment and updates to your function and PreTraffic hook.

Now that you know all about this new feature, you’re ready to begin automating Lambda deployments with confidence that things will work as designed. I look forward to hearing about what you’ve built with the AWS Serverless Platform.

[$] The rhashtable documentation I wanted to read

Post Syndicated from corbet original https://lwn.net/Articles/751374/rss

The rhashtable data structure is a generic resizable hash-table
implementation in the Linux kernel, which LWN first introduced as “relativistic
hash tables” back in 2014. I thought at the time that it might be fun to make
use of rhashtables, but didn’t, until an opportunity arose through my work on
the Lustre filesystem. Lustre is a cluster filesystem that is currently in
drivers/staging while the code is revised to meet upstream
requirements. One of those requirements is to avoid duplicating
similar functionality where possible. As Lustre contains a resizable
hash table, it really needs to be converted to use rhashtables instead — at
last I have my opportunity.

Subscribers can read on for a look at the rhashtable API by guest author
Neil Brown.

Are they all scoundrels?

Post Syndicated from Bozho original https://blog.bozho.net/blog/3078

Yesterday the political alliance Демократична България (Democratic Bulgaria) was announced, comprising Да, България, ДСБ, and the Greens. As part of the team presented as an alternative to the current government, I will allow myself to write my first explicitly party-focused blog post since last year’s elections.

I agreed to become part of the governance-alternative team because I don’t like to shirk. Yes, I could invent plenty of excuses not to participate, and public exposure carries risks, but everyone has excuses. Digital transformation, and e-government as part of it, is something I believe I can help with. It is also something urgently needed if we don’t want to fall behind as a country in the long run.

But I won’t focus on my own role, nor launch into grand speeches about the bright future, the warm relations within the new alliance, our spectacular results at the next elections, and so on. Instead, I will summarize the comments people have made (across social networks and news sites) and try to offer another perspective.

Among quite a few people there is palpable negativity toward this alliance. Each has their own reason, which I cannot argue with. But this negativity culminates in a set of categorical claims about the alliance and the parties in it, claims I can try to dispute. The goal is not to say “aha! you’re wrong not to like us” (that is subjective, and everyone is entitled to dislike whatever they wish), but rather to complete the picture each person has of the political landscape. I will go through 10 claims/comments that come up most often. Not to go into “explaining mode,” but to enter a dialogue of sorts with the more skeptical.

You’re uniting only to clear the 4% threshold

One consequence of such an alliance will indeed be entering parliament. But that is not its goal. The goal is for people with democratic views of Bulgaria (and they are actually not few) to have someone to vote for without wondering whether they are helping БСП, ДПС, the Kremlin, or anyone else. The goal is long-term. And given the announcement of a concrete governing team, the goal is government. Competent, modern, expert government, not parochial squabbling.

There is another aspect: since Да, България was founded, we have had a “glass ceiling” of “well, we like you, but will you get in?” (I’ve heard it at several meetings with potential voters). So we hope this alliance will break that glass ceiling, i.e., that those who would support us will actually do so. The pollsters will tell.

It’s too late, love, for a posy / why only now

…or the year-long accusation that Да, България “split the right.” As I had occasion to mention today, the last elections were not a success, but the decision to run as Да, България was nonetheless correct. For many reasons, which I will only sketch here. After the elections I did an analysis of the exit polls, and it showed that although it did not achieve the desired result, Да, България attracted the largest share of new voters and first-time voters. It took more from ГЕРБ than from РБ (2014). In the “long field” we have to plow, all of that matters. Whether it matters more than a hypothetical entry into parliament, I don’t know. But today we have a Да, България with its own face and its own messages, one that enters productive dialogue and alliances, and that attracts new people to local structures around the country. Whether a two-week-old Да, България entering “yet another right-wing coalition” (as any coalition back then would undoubtedly have been labeled) would have had the same fate is unclear. Recently someone told me they were pleasantly surprised we hadn’t died after the elections. And no, we haven’t.

Leave the yellow cobblestones and go out into the country

Quite right. That is what we are doing. In one year we founded dozens of local structures, in regional cities but also in smaller towns. We are well aware that elections are not won on Facebook or with strolls between Kurnigradska and Dragan Tsankov (the addresses of the ДСБ and Да, България headquarters).

You’re left-wing, not right-wing!

This “accusation” against Да, България started at the very beginning, before we even had a platform. It continued despite the platform and despite the dozens of positions we published. Perhaps it stems from the initial self-description as “neither left nor right,” I don’t know. And although I consider “left” and “right” drained of meaning in Bulgaria, I must still stress that Да, България is a centrist party standing right of center. For many reasons: the political positioning of the members themselves, the election platform, and the dozens of positions on various topics that invariably include protection of private property, private initiative, entrepreneurship, non-interference of the state in people’s private lives, and other right-of-center principles and values. Yes, some positions could be called more left-leaning (I can think of one or two examples), but that is exactly why we are in the center. Perhaps the confusion about our place in the spectrum comes from conflating liberal-versus-conservative with left-versus-right. Because in the US the left is liberal and the right conservative, someone might conclude it is always so. Well, it isn’t. No, we are not left-wing. And yes, we are more liberal than conservative (which doesn’t mean we have no conservative-minded people).

But the Greens are left-wing!

Historically, green parties have indeed sat on the left of the spectrum. Protecting nature from human activity inevitably conflicts with unlimited business freedom. But our particular Greens lean toward the center. One of their current co-chairs is Vladislav Panev, a financier with fairly right-of-center views. The Greens’ governing bodies include both more left- and more right-leaning people. And these days “green” need not mean “left”: sustainable development is good for both business and nature; it is simply a different lens. One example from today: minister Neno Dimov, considered very right-wing (coming from the Institute for Right-Wing Policy), is proposing green measures concerning cars. And even if the Greens are somewhat left of center, we have agreed to disagree on some topics. In Bulgaria three people cannot agree on everything, let alone three parties. We have common goals, and each party keeps its own identity.

Yet another mechanical, pointless coalition

…or “how is this different from the Blue Coalition and РБ.” Without treating this as criticism of the previous right-wing coalitions: this one does not come on the eve of an election. We are not sitting down at one table because elections are imminent, but with a longer-term perspective. Moreover, it comes after a year of joint positions and actions on a range of issues, both nationally and locally. So the alliance is not mechanical; with both ДСБ and the Greens we certainly share common values about what a state governed by the rule of law should be.

Soros, America for Bulgaria, and Prokopiev are financing you!

Here I can give the entirely true answer that this is complete nonsense. The party has not received a single lev from Soros, from America for Bulgaria, or from Prokopiev. There is a list of donors, the Audit Office has it, and our financial reports are public. In fact, we have almost no money. If Soros meant to wire us something, he got the IBAN wrong. I can also go into detail. Yes, some people in the leadership have been part of NGOs that received grant funding. But NGOs are not evil, and grant funding is not cash handed out per head. As far as I know, a few members of Да, България currently participate in an NGO that is a beneficiary of America for Bulgaria. But that is precisely why they chose not to sit in the party’s governing bodies, to avoid misinterpretation. Not that this stops Blitz and PIK from going on about the grants.

Why don’t you talk about (problem X, which matters to me)

We do talk about many problems and have many positions. But most likely they don’t reach you, partly through our own mistakes, partly because outside a few online outlets our media coverage is severely limited (all sorts of drunkards get invited onto national media, yet an appearance by Hristo Ivanov gets canceled, for example). Clearly this is the environment we work in and must adapt to. That means focusing on a few key topics rather than scattering ourselves across many in the hope that everyone recognizes their own problem among them.

You only talk, you never do anything

Actually we do, but again much of the activity does not reach a wide audience. Most concrete actions are at the local level, but nationally too it is not all talk: we have organized campaigns and sent opinions and proposed amendments to bills to the National Assembly. And yes, we have spoken about the topics that matter and pointed out problems. Because political speech is, after all, “doing something.”

Old wh*res, new brothel!

Who are you calling an old wh*re?? 🙂 Seriously though: among the people announced today there are some familiar faces (with one or two parliamentary terms behind them, for example) and many unknown to the wider public. So this is a cliché repeated by inertia rather than something based on objective facts. Of course there should be people with experience, and of course there are many new faces.

In summary: it is normal and right for voters to be demanding. And given our experience with disintegrating right-wing coalitions, it is normal for them to be skeptical. I hope I have dispelled at least a little of that skepticism. But skepticism is also useful. We claim neither infallibility nor sainthood, and any criticism short of “just get lost” will not be ignored.

And otherwise, they’re all scoundrels, that much is clear.

Innovation Flywheels and the AWS Serverless Application Repository

Post Syndicated from Tim Wagner original https://aws.amazon.com/blogs/compute/innovation-flywheels-and-the-aws-serverless-application-repository/

At AWS, our customers have always been the motivation for our innovation. In turn, we’re committed to helping them accelerate the pace of their own innovation. It was in the spirit of helping our customers achieve their objectives faster that we launched AWS Lambda in 2014, eliminating the burden of server management and enabling AWS developers to focus on business logic instead of the challenges of provisioning and managing infrastructure.

 

In the years since, our customers have built amazing things using Lambda and other serverless offerings, such as Amazon API Gateway, Amazon Cognito, and Amazon DynamoDB. Together, these services make it easy to build entire applications without the need to provision, manage, monitor, or patch servers. By removing much of the operational drudgery of infrastructure management, we’ve helped our customers become more agile and achieve faster time-to-market for their applications and services. By eliminating cold servers and cold containers with request-based pricing, we’ve also eliminated the high cost of idle capacity and helped our customers achieve dramatically higher utilization and better economics.

After we launched Lambda, though, we quickly learned an important lesson: A single Lambda function rarely exists in isolation. Rather, many functions are part of serverless applications that collectively deliver customer value. Whether it’s the combination of event sources and event handlers, as serverless web apps that combine APIs with functions for dynamic content with static content repositories, or collections of functions that together provide a microservice architecture, our customers were building and delivering serverless architectures for every conceivable problem. Despite the economic and agility benefits that hundreds of thousands of AWS customers were enjoying with Lambda, we realized there was still more we could do.

How Customer Feedback Inspired Us to Innovate

We heard from our customers that getting started—either from scratch or when augmenting their implementation with new techniques or technologies—remained a challenge. When we looked for serverless assets to share, we found stellar examples built by serverless pioneers that represented a multitude of solutions across industries.

There were apps to facilitate monitoring and logging, to process image and audio files, to create Alexa skills, and to integrate with notification and location services. These apps ranged from “getting started” examples to complete, ready-to-run assets. What was missing, however, was a unified place for customers to discover this diversity of serverless applications and a step-by-step interface to help them configure and deploy them.

We also heard from customers and partners that building their own ecosystems—ecosystems increasingly composed of functions, APIs, and serverless applications—remained a challenge. They wanted a simple way to share samples, create extensibility, and grow consumer relationships on top of serverless approaches.

 

We built the AWS Serverless Application Repository to help solve both of these challenges by offering publishers and consumers of serverless apps a simple, fast, and effective way to share applications and grow user communities around them. Now, developers can easily learn how to apply serverless approaches to their implementation and business challenges by discovering, customizing, and deploying serverless applications directly from the Serverless Application Repository. They can also find libraries, components, patterns, and best practices that augment their existing knowledge, helping them bring services and applications to market faster than ever before.

How the AWS Serverless Application Repository Inspires Innovation for All Customers

Companies that want to create ecosystems, share samples, deliver extensibility and customization options, and complement their existing SaaS services use the Serverless Application Repository as a distribution channel, producing apps that can be easily discovered and consumed by their customers. AWS partners like HERE have introduced their location and transit services to thousands of companies and developers. Partners like Datadog, Splunk, and TensorIoT have showcased monitoring, logging, and IoT applications to the serverless community.

Individual developers are also publishing serverless applications that push the boundaries of innovation—some have published applications that leverage machine learning to predict the quality of wine while others have published applications that monitor crypto-currencies, instantly build beautiful image galleries, or create fast and simple surveys. All of these publishers are using serverless apps, and the Serverless Application Repository, as the easiest way to share what they’ve built. Best of all, their customers and fellow community members can find and deploy these applications with just a few clicks in the Lambda console. Apps in the Serverless Application Repository are free of charge, making it easy to explore new solutions or learn new technologies.

Finally, we at AWS continue to publish apps for the community to use. From apps that leverage Amazon Cognito to sync user data across applications to our latest collection of serverless apps that enable users to quickly execute common financial calculations, we’re constantly looking for opportunities to contribute to community growth and innovation.

At AWS, we’re more excited than ever by the growing adoption of serverless architectures and the innovation that services like AWS Lambda make possible. Helping our customers create and deliver new ideas drives us to keep inventing ways to make building and sharing serverless apps even easier. As the number of applications in the Serverless Application Repository grows, so too will the innovation that it fuels for both the owners and the consumers of those apps. With the general availability of the Serverless Application Repository, our customers become more than the engine of our innovation—they become the engine of innovation for one another.

To browse, discover, deploy, and publish serverless apps in minutes, visit the Serverless Application Repository. Go serverless—and go innovate!

Dr. Tim Wagner is the General Manager of AWS Lambda and Amazon API Gateway.

New – Amazon DynamoDB Continuous Backups and Point-In-Time Recovery (PITR)

Post Syndicated from Randall Hunt original https://aws.amazon.com/blogs/aws/new-amazon-dynamodb-continuous-backups-and-point-in-time-recovery-pitr/

The Amazon DynamoDB team is back with another useful feature hot on the heels of encryption at rest. At AWS re:Invent 2017 we launched global tables and on-demand backup and restore of your DynamoDB tables and today we’re launching continuous backups with point-in-time recovery (PITR).

You can enable continuous backups with a single click in the AWS Management Console, a simple API call, or with the AWS Command Line Interface (CLI). DynamoDB can back up your data with per-second granularity and restore to any single second from the time PITR was enabled up to the prior 35 days. We built this feature to protect against accidental writes or deletes. If a developer runs a script against production instead of staging or if someone fat-fingers a DeleteItem call, PITR has you covered. We also built it for the scenarios you can’t normally predict. You can still keep your on-demand backups for as long as needed for archival purposes but PITR works as additional insurance against accidental loss of data. Let’s see how this works.

Continuous Backup

To enable this feature in the console, I navigate to my table and select the Backups tab. From there, I simply click Enable to turn on the feature. I could also turn on continuous backups via the UpdateContinuousBackups API call.
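For scripted setups, the same UpdateContinuousBackups call can be made from Python with boto3. A minimal sketch follows; the `pitr_request` helper is my own illustrative wrapper (not part of the SDK), and actually invoking `enable_pitr` requires AWS credentials and an existing table:

```python
def pitr_request(table_name):
    # Request body for the UpdateContinuousBackups API call.
    return {
        "TableName": table_name,
        "PointInTimeRecoverySpecification": {"PointInTimeRecoveryEnabled": True},
    }

def enable_pitr(table_name):
    # boto3 is imported lazily so the request builder above stays dependency-free.
    import boto3
    client = boto3.client("dynamodb")
    return client.update_continuous_backups(**pitr_request(table_name))

# enable_pitr("VerySuperImportantTable")  # needs AWS credentials and an existing table
```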

After continuous backup is enabled, we should be able to see an Earliest restore date and a Latest restore date.

Let’s imagine a scenario where I have a lot of old user profiles that I want to delete.

I really only want to send service updates to our active users based on their last_update date. I decided to write a quick Python script to delete all the users that haven’t used my service in a while.

import boto3
table = boto3.resource("dynamodb").Table("VerySuperImportantTable")
items = table.scan(
    FilterExpression="last_update >= :date",
    ExpressionAttributeValues={":date": "2014-01-01T00:00:00"},
    ProjectionExpression="ImportantId"
)['Items']
print("Deleting {} Items! Dangerous.".format(len(items)))
with table.batch_writer() as batch:
    for item in items:
        batch.delete_item(Key=item)

Great! This should delete all those pesky non-users of my service that haven’t logged in since 2013. So I’ll just run it and… CTRL+C CTRL+C CTRL+C CTRL+C (interrupt the currently executing command).

Yikes! Do you see where I went wrong? I’ve just deleted my most important users! Oh, no! Where I had a greater-than sign, I meant to put a less-than! Quick, before Jeff Barr can see, I’m going to restore the table. (I probably could have prevented that typo with Boto 3’s handy DynamoDB conditions: Attr("last_update").lt("2014-01-01T00:00:00"))
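For the record, the corrected filter should select users whose last_update is *before* the cutoff, not after it. The sketch below keeps the comparison as a plain-Python predicate (my own helper, for easy testing) and notes the equivalent boto3 condition in a comment:

```python
CUTOFF = "2014-01-01T00:00:00"

def is_stale(item, cutoff=CUTOFF):
    # ISO-8601 timestamps sort correctly as strings, so a plain string
    # comparison works here. The buggy scan used last_update >= :date,
    # which matched the *active* users; the intended condition is <.
    return item["last_update"] < cutoff

# The equivalent scan filter using boto3's condition helpers:
#   from boto3.dynamodb.conditions import Attr
#   FilterExpression=Attr("last_update").lt(CUTOFF)
```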

Restoring

Luckily for me, restoring a table is easy. In the console I’ll navigate to the Backups tab for my table and click Restore to point-in-time.

I’ll specify the time (a few seconds before I started my deleting spree) and a name for the table I’m restoring to.

For a relatively small and evenly distributed table like mine, the restore is quite fast.

The time it takes to restore a table varies based on multiple factors, and restore times are not necessarily correlated with the size of the table. If your dataset is evenly distributed across your primary keys, you’ll be able to take advantage of parallelization, which will speed up your restores.
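The restore can also be driven programmatically through the RestoreTableToPointInTime API. A boto3 sketch follows; the request-builder helper, the target table name, and the timestamp are all illustrative:

```python
from datetime import datetime, timezone

# Illustrative restore point: a few seconds before the deleting spree.
RESTORE_POINT = datetime(2018, 3, 26, 12, 0, 0, tzinfo=timezone.utc)

def restore_request(source, target, restore_time):
    # Request body for the RestoreTableToPointInTime API call.
    return {
        "SourceTableName": source,
        "TargetTableName": target,
        "RestoreDateTime": restore_time,
    }

def restore_table(source, target, restore_time):
    # boto3 is imported lazily so the request builder stays dependency-free.
    import boto3
    client = boto3.client("dynamodb")
    return client.restore_table_to_point_in_time(
        **restore_request(source, target, restore_time)
    )

# restore_table("VerySuperImportantTable",
#               "VerySuperImportantTable-restored", RESTORE_POINT)  # needs AWS credentials
```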

Learn More & Try It Yourself
There’s plenty more to learn about this new feature in the documentation here.

Pricing for continuous backups varies by region and is based on the current size of the table and all indexes.

A few things to note:

  • PITR works with encrypted tables.
  • If you disable PITR and later reenable it, you reset the start time from which you can recover.
  • Just like on-demand backups, there are no performance or availability impacts to enabling this feature.
  • Stream settings, Time To Live settings, PITR settings, tags, Amazon CloudWatch alarms, and auto scaling policies are not copied to the restored table.
  • Jeff, it turns out, knew I restored the table all along because every PITR API call is recorded in AWS CloudTrail.

Let us know how you’re going to use continuous backups and PITR on Twitter and in the comments.
Randall

EU Court: access to documents – trilogues

Post Syndicated from nellyo original https://nellyo.wordpress.com/2018/03/23/ecj-17/

The judgment of 22 March 2018 in Case T-540/15 De Capitani v European Parliament has now been published. The case concerns an application under Article 263 TFEU for annulment of European Parliament decision A(2015) 4931, by which the applicant was refused full access to documents – in particular, fast-track agreements on the ordinary legislative procedures then under way, proposed in all committees, and multi-column tables (describing the proposals of the European Commission, the orientation of the parliamentary committee, the amendments proposed by the internal bodies of the Council of the European Union and, where they existed, the proposed compromise drafts) distributed to the participants in the trilogues on ongoing ordinary legislative procedures.
Parliament replied to the applicant that, because of the very large number of documents covered by the initial application, the request had to be refused. De Capitani then narrowed the request to the seven four-column tables drawn up in the trilogues that were ongoing at the date of the initial application.
The case therefore concerns public access to the four-column tables used for the purposes of trilogues – long regarded, incidentally, as one of the least transparent procedures in EU law-making.

Parliament refused access on the following grounds:

– disclosure of the fourth column of the documents at issue would actually, specifically and seriously undermine the decision-making process;

– the field of police cooperation, to which the documents at issue belong, is highly sensitive, and disclosure of their fourth column would damage trust between the Member States and between the institutions of the European Union, and hence their good cooperation, as well as Parliament’s internal decision-making process;

– disclosure while negotiations were still under way would likely expose the rapporteur, the shadow rapporteurs and the political groups to public pressure, since the negotiations concerned the highly sensitive questions of data protection and of the management board of the European Union Agency for Law Enforcement Cooperation and Training (Europol);

– granting access to the fourth column of the documents at issue would make the Council Presidency reluctant to share information and cooperate with Parliament’s negotiating team, in particular the rapporteur; moreover, under increased pressure from national authorities and interest groups, the team would be forced to make strategic choices prematurely – deciding when to yield to the Council and when to demand more from its Presidency – which “would dreadfully complicate the possibilities of reaching agreement on a common basis”;

– the principle that “nothing is agreed until everything is agreed” has proved particularly important for the proper functioning of the legislative procedure, so disclosing an element before the end of negotiations, even one not sensitive in itself, could have negative consequences for all the other aspects of a file; worse still, disclosure of positions that were not yet final risked giving an inaccurate picture of the institutions’ true positions;

– access to the entire fourth column therefore had to be refused until the agreed text had been approved by the co-legislators.

“As regards the existence of a possible overriding public interest, Parliament states that the principle of transparency and the heightened demands of democracy do not, in themselves, constitute and could not constitute an overriding public interest.”[8]

The General Court considers it necessary to recall the case-law on the interpretation of Regulation No 1049/2001, then the main characteristics of trilogues and, thirdly, whether it should recognise a general presumption under which the institution concerned may refuse access to the fourth column of the tables of ongoing trilogues. Finally, should the Court conclude that no such presumption exists, it will examine whether full disclosure of the documents at issue would seriously undermine the decision-making process in question within the meaning of the first subparagraph of Article 4(3) of Regulation No 1049/2001.

The Court notes that, in its resolution of 11 March 2014 on public access to documents, Parliament called on the Commission, the Council and itself “to ensure, as a rule, greater transparency of informal trilogues, by holding open meetings and publishing their documents, including calendars, agendas, minutes, documents examined, decisions taken, information about Member State delegations as well as their positions and minutes, in a standardised and easily accessible online format, subject to the exceptions listed in Article 4(1) of Regulation No 1049/2001”.

In the light of all the foregoing, none of the grounds put forward by Parliament, taken individually or together, shows that full access to the documents at issue was liable to undermine, specifically and actually, in a reasonably foreseeable and not purely hypothetical manner, the decision-making process in question within the meaning of the first subparagraph of Article 4(3) of Regulation No 1049/2001.

By refusing, in the contested decision, to disclose the fourth column of the documents at issue during the procedure, on the ground that disclosure would seriously undermine its decision-making process, Parliament infringed the first subparagraph of Article 4(3) of Regulation No 1049/2001.

The contested decision must therefore be annulled.

 

Simplicity is a Feature for Cloud Backup

Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/distributed-cloud-backup-for-businesses/

For Joel Wagener, Director of IT at AIBS, simplicity is an important feature he looks for in software applications to use in his organization. So maybe it’s not unexpected that Joel chose Backblaze for Business to back up AIBS’s staff computers. According to Joel, “It just works.”

AIBS (The American Institute of Biological Sciences) is a non-profit scientific association dedicated to advancing biological research and education. Founded in 1947 as part of the National Academy of Sciences, AIBS later became independent and now has over 100 member organizations. AIBS works to ensure that the public, legislators, funders, and the community of biologists have access to and use information that will guide them in making informed decisions about matters that require biological knowledge.

AIBS started using Backblaze for Business Cloud Backup several years ago to make sure that the organization’s data was backed up and protected from accidental loss or computer failure. AIBS is based in Washington, D.C., but is a virtual organization, with staff dispersed around the United States. AIBS needed a backup solution that worked anywhere a staff member was located, and was easy to use, as well. Joel has made Backblaze a default part of the configuration management for all the AIBS endpoints, which in their case are exclusively Macintosh.


“We started using Backblaze on a single computer in 2014, then not too long after that decided to deploy it to all our endpoints,” explains Joel. “We use Groups to oversee backups and for central billing, but we let each user manage their own computer and restore files on their own if they need to.”

“Backblaze stays out of the way until we need it. It’s fairly lightweight, and I appreciate that it’s simple,” says Joel. “It doesn’t throttle backups and the price point is good. I have family members who use Backblaze, as well.”

Backblaze’s Groups feature permits an organization to oversee and manage the user accounts, including restores, or let users handle that themselves. This flexibility fits a variety of organizations, where various degrees of oversight or independence are desirable. The finance and HR departments could manage their own data, for example, while the rest of the organization could be managed by IT. All groups can be billed centrally no matter how other functionality is set up.

“If we have a computer that needs repair, we can put a loaner computer in that person’s hands and they can immediately get the data they need directly from the Backblaze cloud backup, which is really helpful. When we get the original computer back from repair we can do a complete restore and return it to the user all ready to go again. When we’ve needed restores, Backblaze has been reliable.”

Joel also likes that the memory footprint of Backblaze is light — the clients for both Macintosh and Windows are native, and designed to use minimum system resources and not impact any applications used on the computer. He also likes that updates to the client software are pushed out when necessary.

Backblaze for Business

Backblaze for Business also helps IT maintain archives of users’ computers after they leave the organization.

“We like that we have a ready-made archive of a computer when someone leaves,” said Joel. “The Backblaze backup is there if we need to retrieve anything that person was working on.”

There are other capabilities in Backblaze that Joel likes, but hasn’t had a chance to use yet.

“We’ve used Casper (Jamf) to deploy and manage software on endpoints without needing any interaction from the user. We haven’t used it yet for Backblaze, but we know that Backblaze supports it. It’s a handy feature to have.”

“It just works.”
— Joel Wagener, AIBS Director of IT

Perhaps the best thing about Backblaze for Business isn’t a specific feature that can be found on a product data sheet.

“When files have been lost, Backblaze has provided us access to multiple prior versions, and this feature has been important and worked well several times,” says Joel.

“That provides needed peace of mind to our users, and our IT department, as well.”

The post Simplicity is a Feature for Cloud Backup appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Our Newest AWS Community Heroes (Spring 2018 Edition)

Post Syndicated from Betsy Chernoff original https://aws.amazon.com/blogs/aws/our-newest-aws-community-heroes-spring-2018-edition/

The AWS Community Heroes program helps shine a spotlight on some of the innovative work being done by rockstar AWS developers around the globe. Marrying cloud expertise with a passion for community building and education, these Heroes share their time and knowledge across social media and in-person events. Heroes also actively help drive content at Meetups, workshops, and conferences.

This March, we have five Heroes that we’re happy to welcome to our network of cloud innovators:

Peter Sbarski

Peter Sbarski is VP of Engineering at A Cloud Guru and the organizer of Serverlessconf, the world’s first conference dedicated entirely to serverless architectures and technologies. His work at A Cloud Guru allows him to work with, talk and write about serverless architectures, cloud computing, and AWS. He has written a book called Serverless Architectures on AWS and is currently collaborating on another book called Serverless Design Patterns with Tim Wagner and Yochay Kiriaty.

Peter is always happy to talk about cloud computing and AWS, and can be found at conferences and meetups throughout the year. He helps to organize Serverless Meetups in Melbourne and Sydney in Australia, and is always keen to share his experience working on interesting and innovative cloud projects.

Peter’s passions include serverless technologies, event-driven programming, back end architecture, microservices, and orchestration of systems. Peter holds a PhD in Computer Science from Monash University, Australia and can be followed on Twitter, LinkedIn, Medium, and GitHub.


Michael Wittig

Michael Wittig is co-founder of widdix, a consulting company focused on cloud architecture, DevOps, and software development on AWS. widdix maintains several AWS-related open source projects, most notably a collection of production-ready CloudFormation templates. In 2016, widdix released marbot: a Slack bot that helps DevOps teams detect and resolve incidents on AWS.

In close collaboration with his brother Andreas Wittig, the Wittig brothers are actively creating AWS related content. Their book Amazon Web Services in Action (Manning) introduces AWS with a strong focus on automation. Andreas and Michael run the blog cloudonaut.io where they share their knowledge about AWS with the community. The Wittig brothers also published a bunch of video courses with O’Reilly, Manning, Pluralsight, and A Cloud Guru. You can also find them speaking at conferences and user groups in Europe. Both brothers are co-organizing the AWS user group in Stuttgart.


Fernando Hönig

Fernando is an experienced Infrastructure Solutions Leader, holding five AWS certifications, with extensive IT architecture and management experience in a variety of market sectors. Working as a Cloud Architect Consultant in the United Kingdom since 2014, Fernando has built an online community for Hispanic speakers worldwide.

Fernando founded a LinkedIn Group, a Slack Community and a YouTube channel all of them named “AWS en Español”, and started to run a monthly webinar via YouTube streaming where different leaders discuss aspects and challenges around AWS Cloud.

During the last 18 months he’s been helping to run and coach AWS User Group leaders across LATAM and Spain, and 10 new User Groups were founded during this time.

Feel free to follow Fernando on Twitter, connect with him on LinkedIn, or join the ever-growing Hispanic Community via Slack, LinkedIn or YouTube.


Anders Bjørnestad

Anders is a consultant and cloud evangelist at Webstep AS in Norway. He finished his degree in Computer Science at the Norwegian Institute of Technology at about the same time the Internet emerged as a public service. Since then he has been an IT consultant and a passionate advocate of knowledge-sharing.

He architected and implemented his first customer solution on AWS back in 2010, and has been essential in building Webstep’s core cloud team. Anders applies his broad expert knowledge across all layers of the organizational stack. He engages with developers on technology and architectures, and with top management, where he advises on cloud strategies and new business models.

Anders enjoys helping people increase their understanding of AWS and cloud in general, and holds several AWS certifications. He co-founded and co-organizes the AWS User Groups in the largest cities in Norway (Oslo, Bergen, Trondheim and Stavanger), and also uses any opportunity to engage in events related to AWS and cloud wherever he is.

You can follow him on Twitter or connect with him on LinkedIn.

To learn more about the AWS Community Heroes Program and how to get involved with your local AWS community, click here.


Dolby Labs Sues Adobe For Copyright Infringement

Post Syndicated from Andy original https://torrentfreak.com/dolby-labs-sues-adobe-for-copyright-infringement-180314/

Adobe has some of the most recognized software products on the market today, including Photoshop which has become a household name.

While the company has been subjected to more than its fair share of piracy over the years, a new lawsuit accuses the software giant itself of infringement.

Dolby Laboratories is best known as a company specializing in noise reduction and audio encoding and compression technologies. Its reversed double ‘D’ logo is widely recognized after appearing on millions of home hi-fi systems and film end credits.

In a complaint filed this week at a federal court in California, Dolby Labs alleges that after supplying its products to Adobe for 15 years, the latter has failed to live up to its licensing obligations and is guilty of copyright infringement and breach of contract.

“Between 2002 and 2017, Adobe designed and sold its audio-video content creation and editing software with Dolby’s industry-leading audio processing technologies,” Dolby’s complaint reads.

“The basic terms of Adobe’s licenses for products containing Dolby technologies are clear; when Adobe granted its customer a license to any Adobe product that contained Dolby technology, Adobe was contractually obligated to report the sale to Dolby and pay the agreed-upon royalty.”

Dolby says that Adobe promised it wouldn’t sell any of its products (such as Audition, After Effects, Encore, Lightroom, and Premiere Pro) outside the scope of its licenses with Dolby. Those licenses included clauses granting Dolby the right to inspect Adobe’s records through a third-party audit, in order to verify the accuracy of Adobe’s sales reporting and associated payment of royalties.

Over the past several years, however, things didn’t go to plan. The lawsuit claims that when Dolby tried to audit Adobe’s books, Adobe refused to “engage in even basic auditing and information sharing practices,” a rather ironic situation given the demands that Adobe places on its own licensees.

Dolby’s assessment is that Adobe spent years withholding this information in an effort to hide the full scale of its non-compliance.

“The limited information that Dolby has reviewed to-date demonstrates that Adobe included Dolby technologies in numerous Adobe software products and collections of products, but refused to report each sale or pay the agreed-upon royalties owed to Dolby,” the lawsuit claims.

Due to the lack of information in Dolby’s possession, the company says it cannot determine the full scope of Adobe’s infringement. However, Dolby accuses Adobe of multiple breaches including bundling licensed products together but only reporting one sale, selling multiple products to one customer but only paying a single license, failing to pay licenses on product upgrades, and even selling products containing Dolby technology without paying a license at all.

Dolby entered into licensing agreements with Adobe in 2003, 2012 and 2013, with each agreement detailing payment of royalties by Adobe to Dolby for each product licensed to Adobe’s customers containing Dolby technology. In the early days when the relationship between the companies first began, Adobe sold either a physical product in “shrink-wrap” form or downloads from its website, a position which made reporting very easy.

In late 2011, however, Adobe began its transition to offering its Creative Cloud (SaaS model) under which customers purchase a subscription to access Adobe software, some of which contains Dolby technology. Depending on how much the customer pays, users can select up to thirty Adobe products. At this point, things appear to have become much more complex.

On January 15, 2015, Dolby tried to inspect Adobe’s books for the period 2012-2014 via a third-party auditing firm. But, according to Dolby, over the next three years “Adobe employed various tactics to frustrate Dolby’s right to audit Adobe’s inclusion of Dolby Technologies in Adobe’s products.”

Dolby points out that under Adobe’s own licensing conditions, businesses must allow Adobe’s auditors to inspect their records on seven days’ notice, to confirm they are not in breach of Adobe’s licensing terms. Any discovered shortfalls in licensing must then be paid for, at a rate higher than the original license. This, Dolby says, shows that Adobe is clearly aware of why and how auditing takes place.

“After more than three years of attempting to audit Adobe’s Sales of products containing Dolby Technologies, Dolby still has not received the information required to complete an audit for the full time period,” Dolby explains.

But during this period, Adobe didn’t stand still. According to Dolby, Adobe tried to obtain new licensing from Dolby at a lower price. Dolby stood its ground and insisted on an audit first but despite an official demand, Adobe didn’t provide the complete set of books and records requested.

Eventually, Dolby concluded that Adobe had “no intention to fully comply with its audit obligations” so called in its lawyers to deal with the matter.

“Adobe’s direct and induced infringements of Dolby Licensing’s copyrights in the Asserted Dolby Works are and have been knowing, deliberate, and willful. By its unauthorized copying, use, and distribution of the Asserted Dolby Works and the Adobe Infringing Products, Adobe has violated Dolby Licensing’s exclusive rights..,” the lawsuit reads.

Noting that Adobe has profited and gained a commercial advantage as a result of its alleged infringement, Dolby demands injunctive relief restraining the company from any further breaches in violation of US copyright law.

“Dolby now brings this action to protect its intellectual property, maintain fairness across its licensing partnerships, and to fund the next generations of technology that empower the creative community which Dolby serves,” the company concludes.

Dolby’s full complaint can be found here (pdf).

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Coding is for girls

Post Syndicated from magda original https://www.raspberrypi.org/blog/coding-is-for-girls/

Less than four years ago, Magda Jadach was convinced that programming wasn’t for girls. On International Women’s Day, she tells us how she discovered that it definitely is, and how she embarked on the new career that has brought her to Raspberry Pi as a software developer.

“Coding is for boys”, “in order to be a developer you have to be some kind of super-human”, and “it’s too late to learn how to code” – none of these three things is true, and I am going to prove that to you in this post. By doing this I hope to help some people to get involved in the tech industry and digital making. Programming is for anyone who loves to create and loves to improve themselves.

In the summer of 2014, I started the journey towards learning how to code. I attended my first coding workshop at the recommendation of my boyfriend, who had constantly told me about the skill and how great it was to learn. I was convinced that, at 28 years old, I was already too old to learn. I didn’t have a technical background, I was under the impression that “coding is for boys”, and I lacked the superpowers I was sure I needed. I decided to go to the workshop only to prove him wrong.

Later on, I realised that coding is a skill like any other. You can compare it to learning any language: there’s grammar, vocabulary, and other rules to acquire.


Alien message in console

To my surprise, the workshop was completely inspiring. Within six hours I was able to create my first web page. It was a really simple page with a few cats, some colours, and ‘Hello world’ text. This was a few years ago, but I still remember when I first clicked “view source” to inspect the page. It looked like some strange alien message, as if I’d somehow broken the computer.

I wanted to learn more, but with so many options, I found myself a little overwhelmed. I’d never taught myself any technical skill before, and there was a lot of confusing jargon and new terms to get used to. What was HTML? CSS and JavaScript? What were databases, and how could I connect together all the dots and choose what I wanted to learn? Luckily I had support and was able to keep going.

At times, I felt very isolated. Was I the only girl learning to code? I wasn’t aware of many female role models until I started going to more workshops. I met a lot of great female developers, and thanks to their support and help, I kept coding.

Another struggle I faced was the language barrier. I am not a native speaker of English, and diving into English technical documentation wasn’t easy. The learning curve is daunting in the beginning, but it’s completely normal to feel uncomfortable and to think that you’re really bad at coding. Don’t let this bring you down. Everyone thinks this from time to time.

Play with Raspberry Pi and quit your job

I kept on improving my skills, and my interest in developing grew. However, I had no idea that I could do this for a living; I simply enjoyed coding. Since I had a day job as a journalist, I was learning in the evenings and during the weekends.

I spent long hours playing with a Raspberry Pi and setting up so many different projects to help me understand how the internet and computers work, and get to grips with the basics of electronics. I built my first ever robot buggy, retro game console, and light switch. For the first time in my life, I had a soldering iron in my hand. Day after day I became more obsessed with digital making.

Magdalena Jadach on Twitter:

“solderingiron Where have you been all my life? Weekend with #raspberrypi + @pimoroni + @Pololu + #solder = best time! #electricity”

One day I realised that I couldn’t wait to finish my job and go home to finish some project that I was working on at the time. It was then that I decided to hand over my resignation letter and dive deep into coding.

For the next few months I completely devoted my time to learning new skills and preparing myself for my new career path.

I went for an interview and got my first ever coding internship. Two years, hundreds of lines of code, and thousands of hours spent in front of my computer later, I have landed my dream job at the Raspberry Pi Foundation as a software developer, which proves that dreams come true.


Where to start?

I recommend starting with HTML & CSS – the same path that I chose. It is a relatively straightforward introduction to web development. You can follow my advice or choose a different approach. There is no “right” or “best” way to learn.

Below is a collection of free coding resources, both from Raspberry Pi and from elsewhere, that I think are useful for beginners to know about. There are other tools that you are going to want in your developer toolbox aside from HTML.

  • HTML and CSS are languages for describing, structuring, and styling web pages
  • You can learn JavaScript here and here
  • Raspberry Pi (obviously!) and our online learning projects
  • Scratch is a graphical programming language that lets you drag and combine code blocks to make a range of programs. It’s a good starting point
  • Git is version control software that helps you to work on your own projects and collaborate with other developers
  • Once you’ve got started, you will need a code editor. Sublime Text or Atom are great options for starting out

Coding gives you so much new inspiration, you learn new things constantly, and you meet so many amazing people who are willing to help you develop your skills. You can volunteer to help at a Code Club or CoderDojo to increase your exposure to code, or attend a Raspberry Jam to meet other like-minded makers and start your own journey towards becoming a developer.

The post Coding is for girls appeared first on Raspberry Pi.

[$] The true costs of hosting in the cloud

Post Syndicated from jake original https://lwn.net/Articles/748106/rss

Should we host in the cloud or on our own servers? This question was at the center of Dmytro Dyachuk’s talk, given during KubeCon + CloudNativeCon last November. While many services simply launch in the cloud without the organizations behind them considering other options, large content-hosting services have actually moved back to their own data centers: Dropbox migrated in 2016 and Instagram in 2014. Because such transitions can be expensive and risky, understanding the economics of hosting is a critical part of launching a new service. Actual hosting costs are often misunderstood, or secret, so it is sometimes difficult to get the numbers right. In this article, we’ll use Dyachuk’s talk to try to answer the “million dollar question”: “buy or rent?”

Happy birthday to us!

Post Syndicated from Eben Upton original https://www.raspberrypi.org/blog/happy-birthday-2018/

The eagle-eyed among you may have noticed that today is 28 February, which is as close as you’re going to get to our sixth birthday, given that we launched on a leap day. For the last three years, we’ve launched products on or around our birthday: Raspberry Pi 2 in 2015; Raspberry Pi 3 in 2016; and Raspberry Pi Zero W in 2017. But today is a snow day here at Pi Towers, so rather than launching something, we’re taking a photo tour of the last six years of Raspberry Pi products before we don our party hats for the Raspberry Jam Big Birthday Weekend this Saturday and Sunday.

Prehistory

Before there was Raspberry Pi, there was the Broadcom BCM2763 ‘micro DB’, designed, as it happens, by our very own Roger Thornton. This was the first thing we demoed as a Raspberry Pi in May 2011, shown here running an ARMv6 build of Ubuntu 9.04.

BCM2763 micro DB

Ubuntu on Raspberry Pi, 2011-style

A few months later, along came the first batch of 50 “alpha boards”, designed for us by Broadcom. I used to have a spreadsheet that told me where in the world each one of these lived. These are the first “real” Raspberry Pis, built around the BCM2835 application processor and LAN9512 USB hub and Ethernet adapter; remarkably, a software image taken from the download page today will still run on them.

Raspberry Pi alpha board, top view

Raspberry Pi alpha board

We shot some great demos with this board, including this video of Quake III:

Raspberry Pi – Quake 3 demo

A little something for the weekend: here’s Eben showing the Raspberry Pi running Quake 3, and chatting a bit about the performance of the board. Thanks to Rob Bishop and Dave Emett for getting the demo running.

Pete spent the second half of 2011 turning the alpha board into a shippable product, and just before Christmas we produced the first 20 “beta boards”, 10 of which were sold at auction, raising over £10,000 for the Foundation.

The beginnings of a Bramble

Beta boards on parade

Here’s Dom, demoing both the board and his excellent taste in movie trailers:

Raspberry Pi Beta Board Bring up

See http://www.raspberrypi.org/ for more details, FAQ and forum.

Launch

Rather to Pete’s surprise, I took his beta board design (with a manually-added polygon in the Gerbers taking the place of Paul Grant’s infamous red wire), and ordered 2000 units from Egoman in China. After a few hiccups, units started to arrive in Cambridge, and on 29 February 2012, Raspberry Pi went on sale for the first time via our partners element14 and RS Components.

Pallet of pis

The first 2000 Raspberry Pis

Unboxing continues

The first Raspberry Pi from the first box from the first pallet

We took over 100,000 orders on the first day: something of a shock for an organisation that had imagined in its wildest dreams that it might see lifetime sales of 10,000 units. Some people who ordered that day had to wait until the summer to finally receive their units.

Evolution

Even as we struggled to catch up with demand, we were working on ways to improve the design. We quickly replaced the USB polyfuses in the top right-hand corner of the board with zero-ohm links to reduce IR drop. If you have a board with polyfuses, it’s a real limited edition; even more so if it also has Hynix memory. Pete’s “rev 2” design made this change permanent, tweaked the GPIO pin-out, and added one much-requested feature: mounting holes.

Revision 1 versus revision 2

If you look carefully, you’ll notice something else about the revision 2 board: it’s made in the UK. 2012 marked the start of our relationship with the Sony UK Technology Centre in Pencoed, South Wales. In the five years since, they’ve built every product we offer, including more than 12 million “big” Raspberry Pis and more than one million Zeros.

Celebrating 500,000 Welsh units, back when that seemed like a lot

Economies of scale, and the decline in the price of SDRAM, allowed us to double the memory capacity of the Model B to 512MB in the autumn of 2012. And as supply of Model B finally caught up with demand, we were able to launch the Model A, delivering on our original promise of a $25 computer.

A UK-built Raspberry Pi Model A

In 2014, James took all the lessons we’d learned from two-and-a-bit years in the market, and designed the Model B+, and its baby brother the Model A+. The Model B+ established the form factor for all our future products, with a 40-pin extended GPIO connector, four USB ports, and four mounting holes.

The Raspberry Pi 1 Model B+ — entering the era of proper product photography with a bang.

New toys

While James was working on the Model B+, Broadcom was busy behind the scenes developing a follow-on to the BCM2835 application processor. BCM2836 samples arrived in Cambridge at 18:00 one evening in April 2014 (chips never arrive at 09:00 — it’s always early evening, usually just before a public holiday), and within a few hours Dom had Raspbian, and the usual set of VideoCore multimedia demos, up and running.

We launched Raspberry Pi 2 at the start of 2015, pairing BCM2836 with 1GB of memory. With a quad-core Arm Cortex-A7 clocked at 900MHz, we’d increased performance sixfold, and memory fourfold, in just three years.

Nobody mention the xenon death flash.

And of course, while James was working on Raspberry Pi 2, Broadcom was developing BCM2837, with a quad-core 64-bit Arm Cortex-A53 clocked at 1.2GHz. Raspberry Pi 3 launched barely a year after Raspberry Pi 2, providing a further doubling of performance and, for the first time, wireless LAN and Bluetooth.

All our recent products are just the same board shot from different angles

Zero to hero

Where the PC industry has historically used Moore’s law to “fill up” a given price point with more performance each year, the original Raspberry Pi used Moore’s law to deliver early-2000s PC performance at a lower price. But with Raspberry Pi 2 and 3, we’d gone back to filling up our original $35 price point. After the launch of Raspberry Pi 2, we started to wonder whether we could pull the same trick again, taking the original Raspberry Pi platform to a radically lower price point.

The result was Raspberry Pi Zero. Priced at just $5, with a 1GHz BCM2835 and 512MB of RAM, it was cheap enough to bundle on the front of The MagPi, making us the first computer magazine to give away a computer as a cover gift.

Cheap thrills

MagPi issue 40 in all its glory

We followed up with the $10 Raspberry Pi Zero W, launched exactly a year ago. This adds the wireless LAN and Bluetooth functionality from Raspberry Pi 3, using a rather improbable-looking PCB antenna designed by our buddies at Proant in Sweden.

Up to our old tricks again

Other things

Of course, this isn’t all. There has been a veritable blizzard of point releases; RAM changes; Chinese red units; promotional blue units; Brazilian blue-ish units; not to mention two Camera Modules, in two flavours each; a touchscreen; the Sense HAT (now aboard the ISS); three compute modules; and cases for the Raspberry Pi 3 and the Zero (the former just won a Design Effectiveness Award from the DBA). And on top of that, we publish three magazines (The MagPi, Hello World, and HackSpace magazine) and a whole host of Project Books and Essentials Guides.

Chinese Raspberry Pi 1 Model B

RS Components limited-edition blue Raspberry Pi 1 Model B

Brazilian-market Raspberry Pi 3 Model B

Visible-light Camera Module v2

Learning about injection moulding the hard way

250 pages of content each month, every month

Essential reading

Forward the Foundation

Why does all this matter? Because we’re providing everyone, everywhere, with the chance to own a general-purpose programmable computer for the price of a cup of coffee; because we’re giving people access to tools to let them learn new skills, build businesses, and bring their ideas to life; and because when you buy a Raspberry Pi product, every penny of profit goes to support the Raspberry Pi Foundation in its mission to change the face of computing education.

We’ve had an amazing six years, and they’ve been amazing in large part because of the community that’s grown up alongside us. This weekend, more than 150 Raspberry Jams will take place around the world, comprising the Raspberry Jam Big Birthday Weekend.

Raspberry Pi Big Birthday Weekend 2018. GIF with confetti and bopping JAM balloons

If you want to know more about the Raspberry Pi community, go ahead and find your nearest Jam on our interactive map — maybe we’ll see you there.

The post Happy birthday to us! appeared first on Raspberry Pi.